Internet co-creator Vint Cerf welcomes IPv6 elbow room (Q&A)

Google's Internet evangelist, responsible for the address shortage on today's Internet, is eager for the elbow room of IPv6. Also: his views on U.N. regulation, censorship, bandwidth caps, and .google.

Vint Cerf, a father of the Internet and Google's chief Internet evangelist. (Google)

"Predicting is hard, especially about the future," quips Vint Cerf -- and he should know.

That's because about 30 years ago, when the now-famous engineer was helping to design the technology that powers the Internet, Cerf decided just how many devices could connect to the network. His answer -- 2 to the 32nd power, or 4.3 billion -- looked awfully big at the time. A few decades later, we now know it's far short.

Accordingly, Cerf, Google's chief Internet evangelist and one of the few people at the company who look natural in a suit and tie, is eager for tomorrow's high-profile World IPv6 Launch. The event will usher in a vastly larger Internet as many major powers move permanently to the next-generation Internet Protocol version 6 technology. IPv6 is big enough to give a network address to 340 undecillion devices -- that's 2 to the 128th power, or 340,282,366,920,938,463,463,374,607,431,768,211,456 if you're keeping score.
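That scale is easier to appreciate with the arithmetic written out. A quick sketch in Python, just reproducing the numbers above:

```python
# Address-space arithmetic: 32-bit IPv4 vs. 128-bit IPv6.
ipv4 = 2 ** 32
ipv6 = 2 ** 128

print(f"IPv4: {ipv4:,}")  # 4,294,967,296 -- about 4.3 billion
print(f"IPv6: {ipv6:,}")  # 340,282,366,920,938,463,463,374,607,431,768,211,456
print(f"IPv6 is 2**96, or {ipv6 // ipv4:,}, times larger")
```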

The change actually began years ago: IPv6 was finished in 1996, it has been possible to build IPv6 networks since 1999, and any personal computer bought in the last few years can handle IPv6 if configured properly. But because IPv4 was spacious enough for a long time, moving to IPv6 was a potentially expensive hassle that didn't have much immediate payoff. It was only last year, when the pipeline of unused IPv4 addresses started emptying out, that a sense of real urgency gripped the computing industry.

The IPv6 transition will take years as the Internet's plumbing is gradually updated with the ability to transfer packets of IPv6 data from point A to point B. That transfer uses technology that Cerf and colleague Bob Kahn invented in the 1970s. It's called TCP/IP, and it's what wires together the Net's nervous system.

When you download that cat photo from a server, it's the job of the Internet Protocol (IP) to deliver it, broken down into a collection of individual data packets, to your computer. Countless network devices in between examine the IP address of each packet to send it hop by hop toward your machine.
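To make that hop-by-hop forwarding concrete, here's a minimal Python sketch of the fields a router inspects in a raw IPv4 header, laid out per RFC 791. The sample packet is hand-built purely for illustration:

```python
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Unpack the 20-byte fixed IPv4 header (RFC 791) a router inspects."""
    (version_ihl, _tos, _total_len, _ident, _flags_frag,
     ttl, _proto, _checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": version_ihl >> 4,              # 4 for IPv4
        "ttl": ttl,                               # decremented at every hop
        "src": ".".join(str(b) for b in src),     # the addresses routers use
        "dst": ".".join(str(b) for b in dst),     # to forward packets
    }

# A hand-built sample header: 192.0.2.1 -> 198.51.100.7, TTL 64, protocol TCP.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 1, 0, 64, 6, 0,
                     bytes([192, 0, 2, 1]), bytes([198, 51, 100, 7]))
print(parse_ipv4_header(sample))
```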

Closely paired is Transmission Control Protocol, which takes care of ensuring the packets are successfully delivered over this packet-switching network, requesting missing packets be retransmitted if necessary, and reassembling them into the proper order to reconstitute the original photo. Curious people can read the original paper, A Protocol for Packet Network Intercommunication (PDF), written before TCP and IP were split into separate technology layers.
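A toy illustration of that reordering job, with the caveat that real TCP tracks sequence numbers, acknowledgments, and retransmission timers far more carefully than this sketch does:

```python
def reassemble(segments: dict[int, bytes]) -> bytes:
    """Rebuild a byte stream from segments keyed by their starting offset.
    Real TCP would acknowledge what arrived and await retransmission of gaps."""
    data, expected = b"", 0
    for offset in sorted(segments):
        if offset != expected:
            raise ValueError(f"gap at offset {expected}: request retransmission")
        data += segments[offset]
        expected += len(segments[offset])
    return data

# Three packets of a "cat photo" arriving out of order:
arrived = {4: b"pho", 7: b"to", 0: b"cat "}
print(reassemble(arrived))  # b'cat photo'
```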

Cerf is a somewhat unusual figure in today's Internet development realm. Hotshot young programmers are pushing the limits of Web programming and other novelties, but Cerf, born in 1943, has a much longer history watching the cutting edge advance. He witnessed the arrival of e-mail, e-commerce, and emoticons.

He even looks the part of a father of the Internet, balding but with a neatly trimmed white beard. And he remains an active parent: in the last two weeks, he warned the U.S. House of Representatives about the perils of U.N. regulation of the Internet, revealed Google's plans for new Internet domain names such as .google, was named president of the prestigious Association for Computing Machinery, and in a speech at the Freedom to Connect conference warned that blocking legislative successors to SOPA and PIPA might not be so easy.

Cerf spoke to CNET's Stephen Shankland in recent days in an e-mail conversation -- though for those who appreciate the Internet's newer communication mechanisms, Cerf will hold a Google+ hangout about IPv6 at noon PT today.

What do you think of the World IPv6 Launch event? Where on the spectrum of PR puffery and real engineering work does it lie?
This is not puffery. It is incredibly hard, painstaking work by engineers looking to make sure that every line of code that "knows" an IP address is 32 bits long in a certain format also "knows" that it could be in IPv6 format, 128 bits long. This is a major accomplishment for ISPs and application providers around the world. The router and edge-device providers mostly did their homework years ago, but the ISPs and app providers are largely just getting there.
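In practice, that dual-stack homework often comes down to resolving and connecting without hard-coding an address size. A minimal Python sketch; the standard library's socket.create_connection performs essentially this loop for you:

```python
import socket

def connect_any(host: str, port: int) -> socket.socket:
    """Try each address DNS returns, whether 32-bit IPv4 or 128-bit IPv6."""
    last_err = None
    for family, socktype, proto, _name, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        try:
            sock.connect(addr)
            return sock
        except OSError as err:
            sock.close()
            last_err = err
    raise last_err or OSError("no addresses returned")

sock = connect_any("www.google.com", 80)
print("connected over", sock.family.name)  # AF_INET6 on an IPv6-enabled path
sock.close()
```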

World IPv6 Launch graphic (Internet Society)

Are you surprised that it took as long as it did for people to start moving to IPv6?
Yes. We hoped for much earlier implementation. It would have been so much easier. But people had not run out of IPv4 and NAT boxes [network address translation lets multiple devices share a single IP address] were around (ugh), so the delay is understandable but inexcusable. It is still going to take time to get everyone on board.

Why did you settle on 2^32 for the IPv4 address space? And when exactly did that happen?
Bob Kahn and I estimated that there might be two national-scale packet networks per country and perhaps 128 countries able to build them, so 8 bits sufficed for 256 network identifiers. Twenty-four bits allowed for up to 16 million hosts. At that time, hosts were big, expensive time-sharing systems, so 16 million seemed like a lot. We did consider variable length and 128-bit addressing in 1977 but decided that this would be too much overhead for the relatively low-speed lines (50 kilobits per second). I thought this was still an experiment and that if it worked we would then design a production version. The experiment lasted until 2011, and now we are launching the production IPv6 on June 6.
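The budget Cerf describes is quick to sanity-check:

```python
# The original TCP address split: 8 bits of network, 24 bits of host.
networks = 2 ** 8   # 256 -- roughly 2 national networks in each of 128 countries
hosts = 2 ** 24     # 16,777,216 -- "16 million seemed like a lot" in the 1970s
print(networks, f"{hosts:,}")
```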

Ha! So if this is Internet 1.0, when will we have to move to version 2.0? Is there anything else disruptive at the level of IPv6 we'll have to endure, or have we laid a foundation for incremental improvements now?
New GTLDs [generic top-level domains such as .hotel], internationalized domain names, new mobile applications, delay- and disruption-tolerant networking, the interplanetary Internet, the Interstellar mission -- there is still a lot that can happen.

We're outgrowing its limits today, but what would the consequences have been if you'd picked an address space bigger than 2^32 back in the 1970s?
I think this would not have passed the "red face" test -- too much overhead, and what argument in 1973 or 1977 would have led to agreement that we needed 340 trillion trillion trillion addresses?

Might it have been possible to engineer some better forwards compatibility into IPv4 or better backwards compatibility into IPv6 to make this transition easier?
We might have used an option field in IPv4 to achieve the desired effect, but at the time options were slow to process, and in any case we would have had to touch the code in every host to get the option processed... Every IPv4 and IPv6 packet can have fields that are optional but carry additional information (e.g. for security)... We concluded (perhaps wrongly) that if we were going to touch every host anyway we should design an efficient new protocol that could be executed as the mainline code rather than as options. IPng (next generation) was debated for a couple of years, I think, then Bob Hinden's proposal became IPv6.
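One payoff of that mainline-protocol choice: the first four bits of every IP packet carry the version number, so a dual-stack implementation can branch once, up front, instead of wading through options. A minimal sketch (the sample bytes are typical opening header bytes, 0x45 for IPv4 and 0x60 for IPv6):

```python
def ip_version(packet: bytes) -> int:
    """The high nibble of byte 0 is the version field in both IPv4 and IPv6."""
    return packet[0] >> 4

print(ip_version(b"\x45"))  # 4 -> parse the rest as an IPv4 header
print(ip_version(b"\x60"))  # 6 -> parse the rest as an IPv6 header
```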

How accurately have you been able to forecast the growth and development of the Internet over the years? What did you think its future looked like during, say, those early days working on TCP/IP, or when you were working on MCI Mail, or when you co-founded the Internet Society, and how closely did reality match your predictions?
I would say that starting about 1988 we could see 100 percent per year growth in the number of hosts on the Internet and nearly that for users. In the last 12 years, the compounded growth rate for users has been 15.5 percent per year. During the "dot-boom" period between 1995 and 2000 we saw extremely high growth in capacity (not necessarily in demand). At the point that Netscape Communications launched its IPO there was dramatic growth in Web page creation and use. MCI Mail was launched in September 1983, just nine months after the Internet was launched on the ARPANET and associated networks. However, it was probably about ten years too early to catch the wave because not too many people had terminals, modems, or desktop computers at that point. The Internet Society was started in January 1992, almost 10 years after the Internet was launched, and this coincided with the early beginnings of the WWW. Generally, though, it has been hard to make predictions. The growth of mobiles has been dramatic, as has the growth of smartphones connected to and making use of the Internet. Predicting is hard, especially about the future :-)

Google Chief Internet Evangelist Vint Cerf testifying before the House Energy and Commerce Communications and Technology Subcommittee on May 31, 2012. (Screenshot by Stephen Shankland/CNET)

This ITU-T question over Internet governance is interesting. At the House hearing, there wasn't much disagreement that the status quo is preferable for Internet governance. Are there any good technical or governmental reasons that the ITU fans can point to? I recognize you prefer today's mechanism, but is there something technical and not just political at work here?
This is mostly political: countries that want the Internet to be more controllable as to content and application see ITU as a better venue than the multi-stakeholder institutions like ICANN, ISOC, IETF, IGF, etc.

What's your bigger worry right now regarding a hobbled Internet: state censorship or ITU-T oversight?
All of the above. State censorship is the more direct threat.

Is there an idealistic or moral element to the work you do, or is it purely technical? Someone in your position could get very excited about the opportunities to bridge cultural divides, link economies, and build something of a global community.
I have long believed that sharing of information is extremely powerful and that it is a human right to do so. The Internet facilitates that and I am committed to its use for that purpose.

Are there moral reasons to avoid ITU control over the Internet?
Yes, if you don't believe that every government is concerned about human rights and some wish to use the Internet to suppress those rights. ITU itself is not "evil" but it can be used to achieve ends that are inimical to human rights. So can other institutions, but the multi-stakeholder approach makes it harder, in my opinion.

If the ITU members decide they want more control, is that it? Fait accompli, no appeal process, game over?
No. Internet implementation will find its way around censorship, but it may take time and new developments and some brave citizens to achieve the objective.

It seems to me Google is betting in the long run on abundant broadband (and helping to make the vision a reality with Google Fiber). But right now, there's a big trend toward tiered or metered pricing. Do you think today's data-transfer caps are a short-term blip or the new status quo? The ISPs and carriers have to make money somehow, and they don't generally seem to be finding ways to build higher-level services the way companies like Google or Facebook do.
I think caps are detrimental in the long run. However, I understand a process that might provide for tiered pricing for bandwidth (not total bytes sent but the rate at which they are sent). I think higher-level services and alternative revenue models are the best approach. I wish there had been more facilities-based competition. That is what we are doing in Kansas City -- vertical, facilities-based competition.

Data-transfer speeds for Net access have generally been improving, but has latency been dropping too? Why is low latency important? It looks pretty good for fiber, but fiber is a relative rarity.
Low latency facilitates interactive applications: games, videoconferencing, shared document access, collaboration, etc.
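For those curious how that latency is felt in practice, a rough probe is to time a TCP handshake, which costs about one network round trip plus a little setup overhead. A hedged sketch; the host and port here are arbitrary examples:

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443) -> float:
    """Approximate round-trip time by timing a TCP connect (one round trip)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

print(f"{tcp_rtt_ms('www.example.com'):.1f} ms")  # tens of ms on typical broadband
```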

Most of us these days have very asymmetric bandwidth -- much faster download speeds compared to upload speeds. Why does Google think fast upload speeds are important, too? Personally, I see the constraints with videoconferencing, uploading videos and photos, and online backup, but I'd like to hear the broader view.
We create and move increasingly large amounts of data. The Internet of Things is coming with lots of two-way interaction. Shared databases and instrument data-gathering will produce information that needs to be pushed into the Net.

Hurricane Electric has seen steadily increasing IPv6 traffic well before the official World IPv6 Launch event. (Hurricane Electric)

One of the benefits of IPv6 is a more direct architecture that's not obfuscated by the address-sharing of network address translation (NAT). How will that change the Internet? And how seriously should we take security concerns of those who like to have that NAT as a layer of defense?
Machine to machine [communication] will be facilitated by IPv6. Security is important; NAT is not a security measure in any real sense. Strong, end-to-end authentication and encryption are needed. Two-factor passwords also ([which use] one-time passwords).
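On those one-time passwords: the widely deployed scheme is TOTP (RFC 6238), which derives a short-lived code from a shared secret and the current time. A self-contained sketch, using a throwaway demo secret:

```python
import base64
import hmac
import struct
import time
from hashlib import sha1

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)  # 30-second window
    mac = hmac.new(key, counter, sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret only -- never hard-code a real one
```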

Namespaces are a notorious problem in computer science. Is the GTLD expansion actually adding value, or is it just making things more complicated and adding new trademark headaches for brand owners? I know Google is getting involved, but is that because Google is excited by the possibilities or worried about the consequences of not defending its brand?
We are interested in new ways to use Google brands and new ideas for using GTLDs. The trademark/domain name interaction has always been problematic because trademarks are not unique. I think ICANN has provided strong guidance on sunrise processes to help protect trademark owners and faster dispute resolution mechanisms. I think only a few new TLDs will be notably successful, but I could be wrong.

I find it hard to grasp how long you've been involved -- you've seen the arrival of e-mail, emoticons, BSD Unix, the Web, e-commerce, and now so much streaming media. Do you suffer from future shock, or does it all just look like packets being routed appropriately?
No future shock. It is all unfolding as planned. (:-) From the network level, it's packets all the way down. The innovations are at the edges although the infrastructure is undergoing major change (IPv6, new TLDs [top-level domains such as .nyc or .google], Unicode domain names [that support non-Roman alphabets such as Chinese], OpenFlow [which enables network technology experimentation], mobile application platforms, cloud computing -- time-sharing on steroids....). But it comes in daily spoonfuls, not an avalanche and that makes it fairly absorbable.

This diagram of a packet-switching network appears in the 1974 paper by Vint Cerf and Bob Kahn describing what became the TCP/IP technology for transferring data reliably across such a network. (IEEE/University of Massachusetts Amherst)

What originally got you started in development of the Internet, and when did that happen?
After working at UCLA on the ARPANET project (specifically the Network Measurement Center and the Host-Host Protocol, AKA Network Control Program, under the leadership of Steve Crocker, now ICANN [Internet Corporation for Assigned Names and Numbers] chairman), I joined the Stanford faculty in October 1972. Bob Kahn and I had met while I was at UCLA and he was at Bolt Beranek and Newman. He was a key architect of the ARPANET Interface Message Processors (IMPs) but joined ARPA about the time I joined Stanford. He had been thinking about open networking and multiple kinds of packet-switched networks. By the time we rendezvoused in the spring of 1973, he was talking about building and connecting mobile packet radio and packet satellite networks with the ARPANET. We had in mind mobile packet voice, packet video, and all the existing ARPANET applications (mostly remote time-sharing, e-mail, and file transfers). We worked through the spring and summer of 1973 and wrote a draft paper that we briefed to the International Network Working Group (INWG, which became IFIP WG 6.1), which I chaired, at a meeting at the University of Sussex in September 1973. A revised version of that paper was published in May 1974 in the IEEE Transactions on Communications ("A Protocol for Packet Network Intercommunication"). During the calendar year 1974, I led a seminar and working group to refine the Transmission Control Protocol, ending up with a complete (but buggy) specification published as RFC 675: Internet Transmission Control Protocol.

But what piqued your interest? Was there some juicy combination of challenging but attainable, useful but experimental, technical but elegant? I've met a lot of engineers who are motivated by a lot more than just creating something to meet a spec.
ARPANET was a bold experiment in computer communication and I was fascinated by the possibilities. The Internet was even more challenging because of the diversity of networks and computers and operating systems that had to be made interoperable. We knew we were building the basis for a very, very powerful infrastructure.

What do you think about being called a "father of the Internet"? Who else in your mind merits this parenthood title?
There are many. Bob Kahn started the ARPA Internetting program, so if anyone deserves the title, he does. We worked very closely together and put the first paper together as if there were two hands on one pen (or keyboard). The leaders of the ARPANET project, at DARPA: Larry Roberts; Bob Taylor; Frank Heart, Bob Kahn, Dave Walden, Severo Ornstein, Willy Crowther, Virginia Strazisar, Daniel Burchfiel, Ray Tomlinson, Bill Plummer and a bunch more at BBN; Len Kleinrock, Steve Crocker, Bob Braden, Jon Postel at UCLA; at MIT, David Clark, David Reed, Noel Chiappa; at USC-ISI, Dan Lynch (later Postel and Braden moved there from UCLA); there are many others in other countries who pioneered implementations of TCP/IP especially at University College London led by Peter Kirstein and at the Norwegian Defense Research Establishment led by Paal Spilling and Yngvar Lund. In addition, there were my graduate students, especially Carl Sunshine, Yogen Dalal (whose names are on RFC 675), Richard Karp, Judy Estrin, Ron Crane, James Mathis, Darryl Rubin; visitors: Kuninobu Tanno, Gerard LeLann, Dag Belsnes; Xerox PARC, Bob Metcalfe, John Shoch among others. I am sure this list is incomplete.

Yes, there's an endless number of giants on whose shoulders one must stand. You don't seem a particularly ego-mad person trying to claim undue influence, but where would you rank yourself in that list?
Well, Bob Kahn and I really did design TCP (later TCP/IP), and both of us ran the Internet research program while at ARPA [the U.S. Defense Department's Advanced Research Projects Agency, later the Defense Advanced Research Projects Agency, or DARPA]. I am a founder of the Internet Society and was its first president; I was chairman of ICANN for seven years and on the board for eight; I was on the IAB [Internet Architecture Board] for years and its chair for a time; I have continued to support the growth of the Internet while at CNRI [Corporation for National Research Initiatives], MCI [a telecommunications company where he led development of the MCI Mail Internet e-mail service], at ARPA, and now at Google. So I guess Bob and I belong fairly close to the origins, from the ARPANET to the present.

Roughly how many of those folks are still active?
All who have not passed on are still very active. Pretty amazing, huh?

 
