Archive for the ‘www’ Category

DNS takeover, 1998

December 18, 2009

Jon Postel, researcher and original DNS registrar, temporarily takes back two-thirds of the internet's naming authority from the U.S. government with a single email:

Date: Wed, 28 Jan 1998 17:04:11 -0800
From: postel@ISI.EDU
Subject: root zone secondary service
Cc: postel@ISI.EDU, iana@ISI.EDU

The following messages is pgp signed by "iana ".

-----BEGIN PGP SIGNED MESSAGE-----

====================================
====================================

Hello.

As the Internet develops there are transitions in the management
arrangements.
The time has come to take a small step in one of those
transitions. At some point on down the road it will be appropriate for
the root domain to be edited and published directly by the IANA.

As a small step in this direction we would like to have the
secondaries for the root domain pull the root zone (by zone transfer)
directly from IANA’s own name server.

This is "DNSROOT.IANA.ORG" with address 198.32.1.98.

The data in this root zone will be an exact copy of the root zone
currently available on the A.ROOT-SERVERS.NET machine. There is no
change being made at this time in the policies or procedures for
making changes to the root zone.

This applies to the root zone only. If you provide secomdary service
for any other zones, including TLD zones, you should continue to
obtain those zones in the way and from the sources you have been.

- --jon.

Jon Postel
Internet Assigned Numbers Authority
c/o USC – ISI, Suite 1001
4676 Admiralty Way
Marina del Rey, CA 90292-6695

Talk: +1-310-822-1511
Fax: +1-310-823-6714
EMail: IANA@ISI.EDU

====================================
====================================

-----BEGIN PGP SIGNATURE-----
Version: 2.6.2

iQCVAwUBNM/OggXEg/2i5jY1AQFOSgQAmFKo34Ytxi+8R78qG7/2BUP3KdWqH2Aj
zufrv5sYkfQDNeW+02JA5LZT6ZW5AgRgTDJpQkZlKKvBfzD52GCsDpgt1yUdxxUJ
3VfmK48AIEV9LVKAwlDmOqia++cp1nA8Jd7en35HnKAuFVFEKN0fYEq8FHXEAuOJ
TXXrSiVyCHE=
=qZXq
-----END PGP SIGNATURE-----

SOURCE: http://www.rfc-editor.org/pipermail/internet-history/2002-November/000376.html

All but the U.S. government secondary name servers agreed to switch.

Is the web sticky enough?

August 21, 2009

Domain names are, by policy, transferable

We're using the internet for a worldwide conversation, and lately we've been wrapping a lot of communications in atoms of information we sometimes call "posts". People pass around links to posts and sometimes copies of posts, and we have feeds full of posts, tweets and things included. It's all good. Hopefully the posts have unique IDs so we don't confuse them with one another and they can be cross-referenced. But what is that ID? How do we choose it? Usually today we use URLs. URLs are rooted in DNS names. Full rights to DNS names are reassignable according to ICANN policy. There's currently no policy saying the future owners of a domain can't reuse its URLs for almost anything they want. Whoops?

If we want our decentralized worldwide conversation to work for a very long time, we need to consider strategies for keeping our post IDs unique. If we use URLs, do we care that a malicious future owner may in some ways "overwrite" our posts in history by issuing their own posts with the same ID? For that matter, are some gTLDs more stable roots than others for long-term ownership?

Would we be better off working with an ID namespace for posts that had an explicit policy for the persistence of identity uniqueness at least written down, if not enforceable by contract law? A new gTLD?

Post archives

Here's one idea that might get around all that. What if there were a giant P2P historical web that could be queried with a date and a URL (or any URI) and that did its best to return whatever post or resource was published there at that time? Peers could make sure that posts they felt were important stayed in the archive by mirroring them, regardless of where on the web the information was originally posted. A distributed file system of sorts.

Atomism and Holism

Of course, once we stop writing atomic posts and start communicating in high-definition streams, the persistence of ideas seems to get a lot harder, as one of the inspirations [scripting.com] for this #blogpostfriday post starts to get at.

What would Heraclitus say?

Further reading

Drummond Reed. “The Persistence of Persistence” [www.equalsdrummond.name]

Bob Wyman. “The Persistence of Identity” [www.wyman.us]

Thoughts on .Tel and WebFinger

August 20, 2009

Which makes a better architecture for service discovery, XRD lookups on email addresses (a la WebFinger) or DNS NAPTR records (a la Telnic’s .tel domains)? Hot topic. The WebFinger project has Google and Yahoo folks behind it and uses an XML format compatible with OpenID’s work. The .tel domains forego HTTP and take advantage of DNS’s ready-made caching system.

The main functional difference is the caching

Telnic recently posted a graphic showing how much simpler using DNS directly is than finding XRD. In the graphic – called an "Idiot's Guide", tongue-in-cheek I assume – the best argument for using DNS is almost entirely unrepresented. Future library functions could ultimately make both lookup procedures trivial (ignoring for the moment that you can't do DNS lookups from javascript), so the graphical complexity does not represent a practical issue. Using DNS to make an HTTP request and parsing the response is pretty much what the entire web is made of. How often would we be doing these lookups that the overhead would become a problem?
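To make the claim about "future library functions" concrete, here's a minimal sketch of how a WebFinger-style client might begin its lookup from an email address. The /.well-known/host-meta path and the acct: scheme follow the early WebFinger drafts; treat the exact paths and URL shapes as assumptions, not a finalized protocol.

```python
def host_meta_url(email):
    """Where an XRD client would begin: the host's host-meta document.
    The well-known path follows the early WebFinger drafts (an assumption)."""
    _user, _, host = email.partition("@")
    return f"http://{host}/.well-known/host-meta"

def acct_uri(email):
    """The account URI the client substitutes into the XRD link template
    to fetch the user's own service descriptor."""
    return f"acct:{email}"

# From here a real client would fetch host_meta_url(...), find the LRDD
# link template in the returned XRD, and plug acct_uri(...) into it.
```

Two HTTP round-trips and a little XML parsing – from a caller's point of view, hardly more involved than a DNS query hidden behind a resolver library.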

The one advantage of using DNS to identify service endpoints is that DNS gives you a ready and reliable cache. We don't see this in the graphic, but the authoritative source in the DNS lookup is shielded from repeat requests by DNS's tiered cache. Better still, simple client apps take advantage of DNS's cache automatically; it's been the fabric of the internet since 1983.

There are drawbacks. UPDATE: As Blaine Cook points out in the comments below, perhaps the biggest is that DNS has security issues until DNSSEC is widely implemented. Without that, one can’t trust that the distributed cache is providing authoritative information. For storing public keys for digital signatures, that’s important.

The caching also means DNS records can take some time to propagate under normal circumstances. While a DNS record can specify its preferred Time-To-Live (TTL) in the cache, it's ultimately up to each caching resolver to decide what minimum value it will honor. Though Telnic seems to be specifying one-minute TTLs, we can't be sure such a low value will be observed by every ISP's resolver.
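The resolver-enforced TTL floor can be sketched as a toy cache. Everything below is illustrative – NaptrCache is a made-up class, not a real resolver – but it shows why a publisher's one-minute TTL may not be what clients actually experience.

```python
import time

class NaptrCache:
    """Toy resolver cache that honors a record's TTL but applies its own
    minimum TTL, the way some ISP resolvers do. (Illustrative only.)"""

    def __init__(self, min_ttl=0):
        self.min_ttl = min_ttl
        self.store = {}  # name -> (record, expiry timestamp)

    def put(self, name, record, ttl, now=None):
        now = time.time() if now is None else now
        # The publisher asked for `ttl` seconds; the resolver may stretch it.
        self.store[name] = (record, now + max(ttl, self.min_ttl))

    def get(self, name, now=None):
        now = time.time() if now is None else now
        record, expiry = self.store.get(name, (None, 0))
        return record if expiry > now else None
```

A resolver built with min_ttl=300 keeps serving the old NAPTR for five minutes even if the publisher asked for 60-second TTLs – exactly the propagation uncertainty described above.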

Unpredictable latency could be an issue for applications that depend on service endpoint registration for a quick setup-and-run operation, like downloading and installing a new chat app and immediately trying to start a chat using the endpoint the app just registered on your DNS-based id.

WebFinger servers, on the other hand, will (I assume) work like the rest of the web and rely on a combination of client-side caching according to HTTP headers and in-memory caches at the authoritative server (or servers, as DNS may specify alternates). At worst, these servers see every request, and of course, no cache can be shared across individual clients without some additional mechanism being built.

So that’s what I take away from the Idiot’s Guide to WebFinger and .tel. DNS comes with free caching to lighten the server load, while HTTPS+XRD uses the web’s caching mechanisms.

Also – and I mentioned this earlier – something that’s not obvious from the diagram is that DNS can’t be directly accessed by javascript web apps. Flash apps can open a socket, but javascript cannot. Presumably this is one reason WebFinger is not using the DNS solution.

Domains can be free

I've noticed that critics of DNS-based service lookup often point out that domains are not free, while email addresses are. I want to take a second to point out that this is not true. A name server is considerably less expensive to run than a mail system, and if you have a domain, subdomains such as myname.example.com can be given away to end users "free". It's even conceivable that an email provider could create subdomains for every email address and enable WebFinger-like lookups for email addresses through DNS NAPTR records (e.g. masonlee.webfinger.gmail.com.), should that be desirable. (This is not WebFinger's plan, though. WebFinger passes the email address as a parameter to an HTTP method on the mail host.)
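The per-address subdomain idea above is just a naming convention, and easy to sketch. Note that the webfinger label and the mapping itself are this post's own invention – nothing WebFinger or Telnic actually specifies:

```python
def email_to_dns_name(email, label="webfinger"):
    """Map user@host to a hypothetical per-address subdomain where NAPTR
    records could live, e.g. masonlee@gmail.com -> masonlee.webfinger.gmail.com.
    A real deployment would also have to escape dots and other characters
    that are legal in the local part of an email address."""
    user, _, host = email.partition("@")
    return f"{user}.{label}.{host}."
```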

.Tel is not the only DNS solution

There are three arguments for registering a .tel domain that resonate with me. None seems compelling enough to justify $12/year, especially for people who already have domains.

Argument 1. You get to use Telnic's nameservers and APIs for privacy controls and editing NAPTR records.

If DNS is the ideal decentralized service discovery system, and we aren’t tied to Telnic Limited, then other name servers should also be implementing these same management APIs, bringing us back to the question, what’s really special about the .tel domain?

Argument 2. If we assume third-party apps will use the Telnic APIs, then [domain].tel is the shortest possible way to express “Hey, I have Telnic conforming NAPTRs and NAPTR-editing and privacy APIs!”.

Might one just as well express the same by adding a tel. subdomain to an existing domain, e.g. tel.masonlee.org.?

Argument 3. If everyone had a .tel domain, it would be an awesome namespace, where we might silently drop the “.tel” and just have global “usernames” for finding service endpoints.

This would be ideal in a lot of ways, but the current pricing is a barrier such that this is unlikely to happen. The same problem exists for top-level i-names. Email addresses (and phone numbers) are what everybody already has.

One argument against adopting .tel is that Telnic overrides all .tel DNS A records to point to their Telnic webservers. I’ve heard their reasons for doing so – guaranteed usability in old-generation mobile web browsers and to provide a consistent looking directory – but I think this is a mistake. It’s not clear that all service endpoints, especially highly technical ones, should need to be shown on a human-friendly page (example), and as is, only Telnic has the authority to decide how the records will be presented (icons, ordering, etc.) for .tel domains on the web.

Are NAPTR and XRD mappable?

Suppose you think both DNS and HTTP+XRD service listings are cool, .tel completely aside. Could we create a unified service discovery library that lets clients look up a service for both email and domain ids? There would be some ambiguity as to whether a domain-looking string provided by a user should be resolved via DNS NAPTR or via OpenID's XRD over HTTP, but we could check both and prefer one.

Secondly, if we want to use DNS sometimes, but think that XRD has the momentum right now, can we transform XRD to a set of NAPTR records, and can we agree to use the same service URIs in both systems? We could get some API compatibility this way.
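A unified library along these lines might dispatch something like the sketch below. The two resolver callables are injected so the dispatch logic stays visible and testable; both are assumptions about how such a library would be wired together, not existing APIs:

```python
def discover(identifier, naptr_lookup, xrd_lookup):
    """Resolve an identifier to a list of service endpoints.

    Email-looking ids go straight to HTTP+XRD; bare domains try DNS
    NAPTR first and fall back to XRD – the "check both and prefer one"
    idea. Both lookup functions are caller-supplied resolvers."""
    if "@" in identifier:
        return xrd_lookup(identifier)
    return naptr_lookup(identifier) or xrd_lookup(identifier)

# Stub resolvers for illustration; real ones would hit DNS and HTTP.
stub_naptr = lambda d: ["sip:example"] if d.endswith(".tel") else []
stub_xrd = lambda i: ["http:endpoint"]
```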

That said, once the net supports email addresses as ids, how many people are going to get themselves a domain name to do the same thing? Frankly, WebFinger seems likely to obsolete OpenID URLs as well.

APIs needed

For both XRD and NAPTR registries to be useful beyond OAuth and single sign-on, there's a lot of work still to be done. XRD providers need a standard API to allow authorized third-party applications to update a user's service listing. When I download the Skype app, Skype should be able to add its endpoint to my service registry. (I'm hearing that XDI might have a solution.) And while Telnic does have APIs for this that other name servers could mimic (way to go!), cursory inspection indicates they still need improvement: for example, an authorization scheme other than username/password.
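As a strawman for what such a third-party update API might accept, here's a sketch of the request an app like Skype could assemble. The host, URL path, payload shape, and token-based authorization are all invented for illustration; as noted, Telnic's real API currently uses username/password instead:

```python
def build_service_update_request(domain, service, endpoint, token):
    """Assemble (but don't send) a hypothetical 'add service endpoint'
    request for a DNS-hosted service registry. Every field name and the
    ns.example.net host are assumptions, not a documented API."""
    return {
        "method": "POST",
        "url": f"https://ns.example.net/v1/zones/{domain}/services",
        "headers": {"Authorization": f"Bearer {token}"},
        "body": {"service": service, "uri": endpoint},
    }
```

Anything in this shape would let a user grant an app a revocable token instead of handing over account credentials.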

All good stuff to watch for.