Interview with Jon Postel
Interview conducted January 29, 1996
at Information Sciences Institute
Marina del Rey, California

This interview appeared in the September 1996 issue of OnTheInternet, the publication of the Internet Society, of which Jon was a founding member. The Internet Assigned Numbers Authority maintains a Web page devoted to Remembering Jonathan B. Postel.
 

OTI: Jon, how long have you been working on internet technology?

Postel: Forever. I got involved when I was a graduate student at UCLA when UCLA was the first site on the net. [1968 - ed.]

OTI: How did you get started?

Postel: In a chemistry class there was a guy sitting in front of me doing what looked like a jigsaw puzzle or some really weird kind of thing. He told me he was writing a computer program. I wanted to learn how to do that so I signed up for a class in computers the next term. They had a computer club so you could submit a program that would run for about a minute at midnight on an IBM 7094 and you'd get your output the next day.

Then I started graduate school at UCLA. I got a part time research assistant job as a programmer on a project involving the use of one computer to measure the performance of another computer. After about a year, ARPA (The Advanced Research Projects Agency of the U.S. Department of Defense -- ed.) decided they were going to build the ARPANET. Since UCLA did all this stuff about measurements and performance analysis, they were to be the first site on the ARPA net. Our project got transformed into developing stuff for the ARPANET and measuring its performance.

OTI: I recall there were four initial nodes: UCLA, SRI, ...

Postel: ...UC Santa Barbara was third and the University of Utah was the fourth.

OTI: When was the first link to the east coast?

Postel: It was the fifth link.

OTI: Fifth link, excuse me.

Postel: It was from UCLA to BBN.

OTI: Do you like your work?

Postel: Well, I like thinking about the problems. I don't like the administrative stuff, and writing proposals is not so much fun. Working on them is fun.

OTI: Are you married?

Postel: I have a significant other.

OTI: Do you keep all kinds of hours?

Postel: Well not completely crazy hours. I'm usually here between 9 and 6, and sometimes later. Not very often earlier.

OTI: In this day and age of telecommuting do you ever work at home?

Postel: Even with all the teleconferencing we do, we still have many face-to-face meetings. So being here trying to help manage is important. But I do have a computer at home and a pretty good ISDN connection. I have a clunky old Sun workstation that I use mostly as an X Windows terminal, but I actually run some programs on it.

========== RESEARCH WORK ==========

OTI: Could you outline what your work consists of, both in the area of research and in the area of work on standards and the IANA (Internet Assigned Numbers Authority)?

Postel: I work here for ISI (http://www.isi.edu) and ISI's view is that we're doing computer science research. I'm in a group here of about 50 people doing work on distributed systems. We write proposals to government agencies, mostly ARPA, to do new things in computer networking. We work on problems like very high speed local nets. One project is a gigabit speed local network -- a hundred times faster than an ethernet or ten times faster than an FDDI network. We have a prototype network installed that we're beginning to play with. We're trying to figure out how to change operating systems in the workstations and applications so we can actually make use of it. One of the problems is that all the workstations are built to work on ethernets. When you take ethernet away and plug in a much faster network, it turns out that even though you have this really fast network connected to the workstation, you can only go a little bit faster because the workstations are optimized for ethernets and are not designed to work on faster networks! So we're working on how you change the protocol processing in the workstation and come up with applications which make use of these much higher speed networks.

Another part of our group, with Cliff Neuman in charge, is doing distributed computing for enabling coalitions of workstations to work together on a common problem.

Another aspect of our work is multimedia teleconferencing. We're very involved in having people work both in the local network and in the long haul network. We want to use the same sort of software techniques and philosophy for teleconferencing as we did in developing TCP/IP, which leads us into working on the Reservation Protocol, RSVP (http://www.isi.edu/div7/rsvp/rsvp.html). If you want to do good quality teleconferencing across wide area networks then you need to reserve bandwidth or somehow have capacity set aside for your teleconference.

OTI: Is the Reservation Protocol separate from TCP? Does it run on top of IP?

Postel: It runs on top of IP. But the teleconferencing stuff right now runs on UDP. TCP works very hard to get the data delivered in order without errors and does retransmissions and recoveries and all that kind of stuff, which is exactly what you want in a file transfer because you don't want any errors in your file. But if you're just sending a video stream or voice it's OK to let a few errors occur. The loss has got to be very small for it to be useful at all, but a little bit of loss is okay, rather than stopping the whole thing and going back and recovering those few scrambled bits. RSVP is in the context of running on top of UDP where a little bit of loss is okay. It's also aimed at working in a context of multicast.
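[The trade-off Postel describes can be sketched in a few lines of Python. The one-byte sequence-number framing here is an illustrative assumption by the editor, not anything from the interview; the point is that a UDP-style receiver keeps going when a datagram is lost instead of stalling to retransmit it:

```python
import socket

# A toy sender/receiver pair on the loopback interface. Each datagram
# carries a one-byte sequence number followed by the payload; the
# receiver keeps playing even if a sequence number never shows up.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # let the OS pick a free port
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq, frame in enumerate([b"frame0", b"frame1", b"frame2"]):
    if seq == 1:
        continue                          # pretend the network dropped frame 1
    send_sock.sendto(bytes([seq]) + frame, addr)

recv_sock.settimeout(0.5)
played = []
try:
    while True:
        pkt, _ = recv_sock.recvfrom(1500)
        played.append((pkt[0], pkt[1:]))  # (sequence, payload); gaps are fine
except socket.timeout:
    pass                                  # no retransmission -- just move on
```

A TCP connection would instead block until frame 1 was retransmitted, which is the behavior you want for files but not for live audio or video. -- Ed.]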

OTI: Does RSVP go right down to the routing level to reserve bandwidth between two sites for periods of time by preallocating it?

Postel: That's the idea. The routers get involved in this and they know that on the path between this router and that router a certain percentage of the bandwidth is reserved to these things and a certain percentage of it is allowed on a first come first served basis.
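[A toy model of the idea -- this is the editor's sketch of admission control on a single link, not the actual RSVP mechanism, and the class and numbers are illustrative assumptions:

```python
# A link sets aside a fraction of its capacity for reserved flows and
# leaves the rest to first-come, first-served (best-effort) traffic.
class Link:
    def __init__(self, capacity_mbps, reserved_fraction):
        self.reserved_free = capacity_mbps * reserved_fraction
        self.best_effort = capacity_mbps * (1 - reserved_fraction)

    def reserve(self, mbps):
        """Admission control: grant a reservation only while the
        set-aside pool still has room; best-effort is unaffected."""
        if mbps > self.reserved_free:
            return False
        self.reserved_free -= mbps
        return True

# A 45 Mbps (T3-like) link with 40% of its capacity reservable.
link = Link(45.0, 0.4)
granted = [link.reserve(10.0), link.reserve(10.0), link.reserve(10.0)]
```

The third teleconference is refused because only 18 Mbps was reservable, which is exactly the open question Postel raises next: what keeps people from reserving capacity they don't need? -- Ed.]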

OTI: That will put an interesting dent in how ISPs will charge people for bandwidth!

Postel: That's certainly an issue. What will keep people from reserving it when they don't need it? That's an open question. That's a part of the research.

========== WORK ON STANDARDS ==========

OTI: Turning to your work on standards -- you're the chairman of the IANA.

Postel: There's a variety of things that I'm involved in. One is the IANA (http://www.iana.org), another is RFC editor (http://www.rfc-editor.org), another is the Internet Architecture Board (http://www.iab.org/iab/), and then the Internet Society (http://www.isoc.org/) board of trustees. So those are activities which take some time. In addition, ISI is involved in running a regional network called Los Nettos (http://www.isi.edu/div7/ln/). I'm sort of the manager of that although most of the day to day stuff is handled by other people. It's a regional network for Los Angeles which has six hub sites and about sixty associate members connected in a ring of T3 network lines and then a T3 connection to the MCI internet.

OTI: I understand you are making available some even higher than T3 speed links using fiber.

Postel: We are looking at that but it's difficult to get a higher speed connection to a backbone network. Within the Los Nettos community we are looking at higher speed connections, driven by our own need for good connectivity and sharing costs. Also, it gives us a first hand sort of truth about what's interesting and what the problems are in networking and gives our research a little more context.

Another thing we're involved in is the US domain. The top level domains like COM, NET and ORG are basically all run by the INTERNIC. Then there are country code domains. FR domain is run by somebody in France, DE domain is run by somebody in Germany, etc. The U.S. domain is managed by people here at ISI and I oversee that a little bit (http://www.isi.edu/in-notes/usdnr/).

OTI: Is there a superstructure that each location throughout the world adheres to in order to manage a domain?

Postel: No. It's pretty much up to the guy in the country that's doing it. The overriding rule, if you want to run a domain, is to be fair. If you're in charge of managing domain name space you should treat everybody who asks for a registration the same. Whatever that is - whether it's nice or ugly or whatever - just be fair, treat them all the same.

OTI: Who invented the term 'RFC'?

Postel: Steve Crocker. Steve wrote the first RFCs (http://www.rfc-editor.org/overview.html) and then somebody figured we better keep a list of the RFCs, especially if somebody not from UCLA wanted to write one.

OTI: About what year?

Postel: 1969. I got the notebook and got the list of RFCs. That's how I got to be RFC editor -- by keeping the list of who is writing which one. I also got to maintain the distribution list because in the early days when you wrote an RFC you had to make fifteen copies and mail them to fifteen people via U.S. mail. Nowadays it's all e-mail and on- line and so distribution's not a problem. But looking at documents and trying to assure that they have some slightly coherent style is still a problem.

OTI: So you do literally function as an editor?

Postel: In the early days the RFC editor had to make a lot of judgements about whether the proposed RFC was an adequate description of a standard or whether it needed more work. That's been largely taken over by the IETF working group structure (http://www.ietf.cnri.reston.va.us/) and the IESG, the Internet Engineering Steering Group (http://www.ietf.cnri.reston.va.us/iesg.html). The leadership of the IETF takes care of documents that come up from the working groups to the IESG, which decides if they are ready for prime time. The RFC editor no longer has to do much technical review.

OTI: Once an RFC is published, is it harder than it used to be for it to become a standard?

Postel: The process has evolved a lot. It's much more formal now. There are rules. It's easier now to understand how things become standards rather than being left by the wayside, because you know what the process is. There are certainly a lot more RFCs in the proposed status than there are in the final standards status, and most of them won't go anywhere. But it's always been that way. I think it's even more that way now.

OTI: The interesting question is where does the final arbitration really come from? You mentioned the working groups put this stuff together but then there's the IESG?

Postel: They're pretty much the arbiter of what's a standard and what's not a standard. They're largely influenced by community input. They're not a bunch of guys sitting around throwing the dice to decide these things. They take a lot of polls and each of the people on the steering group is involved in some area and in charge of five to ten working groups. So they go wandering around through their working groups at meetings and find out what the sense of the community is.

OTI: What's an example of a working group?

Postel: There's a whole variety of things. There's one for each question of how to do the internet protocol over a particular kind of network. For example, IP over ATM is a working group. IP over FDDI was a working group. In the routing area there might be somebody who comes up with a new routing protocol like BGP. For a while there was a BGP working group to develop that routing protocol. In an application area there might be a working group to deal with e-mail extensions to the MIME standard. There's about seventy working groups at any given time. Something like ten areas and seventy working groups.

OTI: Let's turn to the IANA?

Postel: The Internet Assigned Numbers Authority came about the same way as the RFC editor function. For a huge variety of protocols there's a lot of numbers or parameters or key words, and they need to be recorded in one place along with a little bit of identification about the people involved. Then, if somebody needs, say, number five and can't find out who's using it, you can track back and find out who is actually using that number, or whether we can recycle it and use it for something else.
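[The bookkeeping Postel describes amounts to a simple keyed registry. This sketch is the editor's illustration, not IANA's actual system; the names and the example entry are made up:

```python
# A toy assigned-numbers registry: every assignment records who took
# the number, so you can track back to the user later or decide that
# an unclaimed number is free to recycle.
registry = {}

def assign(number, protocol, contact):
    """Record an assignment; refuse to hand out a number twice."""
    if number in registry:
        raise ValueError(f"number {number} is already assigned")
    registry[number] = {"protocol": protocol, "contact": contact}

def who_is_using(number):
    """Return the recorded assignment, or None if the number is free."""
    return registry.get(number)

assign(5, "example-protocol", "someone@example.com")
entry = who_is_using(5)
```

-- Ed.]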

OTI: Does the structure of the allocation technique evolve as different protocol techniques come into existence which need different kinds of numbers?

Postel: Well, the technique for allocating doesn't change very much. Send an e-mail message, ask for what you want, with enough description that it's clear that you know what you're talking about.

OTI: It's self-defining? In other words, say someone has defined this new protocol over here called the Web and needs some new feature for it...

Postel: It's not really very different. What does evolve is keeping these numbers in a way that people can find out about them. We used to just sort of write them on a piece of paper and when people would call up we would tell them. That got real boring so we put them on line and published a memo every couple of years about what all the latest numbers were. Now it's all available via the web.

OTI: So the web has certainly been part of the evolution of how you deal with the Internet.

Postel: It's been a really tremendous change in how information is accessible. All this stuff was done via FTP but the web has put a really nice user interface on it.

OTI: Has the increased commercial use of the net impacted the standards process?

Postel: Definitely, in that commercial people are involved in the process. Years ago when you'd go to a working group most of the people in the working group would be from universities. Now most of the people are from companies who are building internet products and care what the standards turn out to be.

OTI: So they've adapted to it quite naturally.

Postel: The mechanism is pretty much the same but the people involved are from the commercial product development community.

OTI: The FNC, what is it?

Postel: It's the Federal Networking Council -- representatives from the federal agencies and departments that care about the internet in one way or another. Somebody from ARPA, the National Science Foundation, the more military part of DOD, the Department of Energy, the National Institutes of Health, and NASA. Those are the key people. The FNC has something on the order of twenty different representatives across the whole government.

OTI: How much actual funding comes from these different departments?

Postel: Some funding for the critical infrastructure is coming from these agencies but in terms of the overall operation of the internet it's a very small percentage of the operational cost, considering that all the backbones are now run by commercial companies that are profit making, maybe, profit seeking at least (laughter). Some government funds for university research helps pay for that university's connection to the internet but it's not a direct subsidy, it's more like funds used to cover telephones or postage needed in the course of research projects.

OTI: Which of your accomplishments are you the most proud of?

Postel: I think having been involved in the basic protocols of the internet. TCP and IP and the key applications. It's fairly satisfying to see those get such widespread use. It's really very unusual in the research world that something done as a research project actually gets out there into the world to that extent.

========== DISCUSSION OF TECHNOLOGY AND BANDWIDTH ==========

OTI: Have you been at all surprised that the fundamental technology has lasted this long and been as stable as it has or robust?

Postel: We shouldn't be surprised. We were trying to design it to be robust and general purpose and be able to grow and evolve. One of the things that is not so good is that a decision was made long ago about the size of an IP address -- 32 bits. At the time that seemed far larger than the number of computers anyone could imagine ever having, but it turned out to be too small.

OTI: IPng is going to deal with that?

Postel: IPng is going to deal with that by choosing another very large number, much more than anyone can imagine ever needing this time, so it seems very unlikely we'll ever fill it up. One of the arguments years ago was that the address should be of variable length, which could grow in time. There's tremendous resistance to that because of the few extra instructions you need to process the header. Somehow I'm not really convinced.
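[The arithmetic behind the two address sizes, for scale -- IPng became IPv6, which did adopt 128-bit addresses:

```python
# The 32-bit space that filled up, and the 128-bit space IPng chose.
ipv4_space = 2 ** 32     # 4,294,967,296 -- about 4.3 billion addresses
ipv6_space = 2 ** 128    # vastly larger than any imaginable need

growth = ipv6_space // ipv4_space   # 2**96 times more addresses
```

-- Ed.]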

OTI: Has the evolution of the net in recent years taken you by surprise as far as the extent to which it's been commercialized and in use by the general public?

Postel: Milestones for me are when you see things like a couple of years ago when the word 'internet' started to appear in newspaper and magazine articles. That was clearly surprising, interesting -- a very interesting milestone was when you can pick up a magazine and read an article about some sort of computer related thing and they mention the word internet without explaining it. Then there was the New Yorker cartoon of the dog at the computer looking at the other dog saying "On the internet no one knows you're a dog." That was a nice milestone. The world wide web has really been quite spectacular and not something I would have predicted. It's the class of thing that I expect in the sense that as we improve internet technology and especially as we raise the data rates available to the average person I expect there to be new applications and maybe ones that we hadn't ever thought of until suddenly something that just seemed really clunky before or wasn't even attempted before will become a killer application. I think that's what happened with the web. Sitting at home on your 28.8 modem downloading web pages is a real pain, but you do it because everybody's talking about it. But it's absolutely terrific here in the office where the slowest link is ten megabits per second. Everyone should have ten megabits and then the web will be a wonderful thing.

OTI: Why did the Web happen when it did?

Postel: My reason for why it happened is that the backbones had the capacity to make it possible.

OTI: It occurs to me that this is almost juxtaposed with when the backbone became commercial.

Postel: NSF had gotten the NSF net up to 45 megabits before the transition to commercial networks and essentially forced the first commercial networks to be at least that speed of 45 megabits, which is called T3. If they were going to replace the NSF net backbone how could they offer a service that was lower quality? So they had to do at least that. And now you see the commercial guys exploring the next step. The speeds of backbone lines are sort of based on telephone company technology because they're all rented from telephone companies. So whatever the next step the telephone companies eventually make available at reasonable tariffs will be the next generation of backbones. I'm very concerned that people aren't thinking far enough ahead. Five or six years ago backbones were T1 and a couple of years ago they all became T3. That's a factor of 30 in speed. We need to be planning the next step of 30 in speed now. The optical technology equivalent of T3 is called OC-1. OC-24 is not really a speed they talk about so the next one is OC-48. So I think we ought to be planning OC-48 backbones now and I hear nobody talking about it. People think it's completely impossible. But I think we'll need it.

[OC-1 =    51.84Mbps 
 OC-3 =   155.52Mbps 
 OC-24 = 1244.16Mbps 
 OC-48 = 2488.32Mbps
 -- Ed.]
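[The "factor of 30" arithmetic behind Postel's argument, worked out from the table above and the standard telephone-hierarchy line rates (in Mbps):

```python
# Standard line rates in megabits per second.
T1, T3 = 1.544, 44.736
OC1, OC24, OC48 = 51.84, 1244.16, 2488.32

t1_to_t3 = T3 / T1       # roughly 29: the last backbone speed jump
t3_to_oc48 = OC48 / T3   # roughly 56: the jump an OC-48 backbone would give
oc_ratio = OC48 / OC1    # 48: OC-n runs at n times the OC-1 rate
```

So an OC-48 backbone would be the same order of jump over T3 that T3 was over T1, which is why it is the step Postel says should be planned now. -- Ed.]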

OTI: What do you think about the proposals to supply internet access via cable TV lines?

Postel: One way to get high speed to the home is over cable systems. There's a critical decision about to be cast in stone by the cable companies when they rewire everything in order to provide ten megabits per second down to your house. What data rate will they provide from your house out? If it doesn't provide ten megabits from your house out then it's really a disaster! You want to be able to buy a fairly powerful workstation and run it as a server in your house. And have your web page always available to the outside world at a reasonable speed. Therefore you need high speed from your house out, and the cable companies and the TV guys are just not thinking about it at all. They are basing their decisions on the wrong idea. It's one of those things that when you do it wrong once, it's really hard to overcome because you get installed base and the next guy comes along and says "Well, I'll do it the same way those other guys did it." You get more installed base that's wrong and it's really hard to throw away. We still have 525 line television.

OTI: Who should we be talking to about this problem?

Postel: The head of TCI. John Malone. William Randolph Hearst, III claims to be doing it. These guys should get the message.

OTI: Any thoughts on other applications that may arise?

Postel: I think audio and video over the internet, in the sense of teleconferencing and telephone calls. Maybe we'll actually have picture phone through your workstation.

OTI: I use WebTalk and Netscape CoolTalk right now to talk to Albuquerque over regular TCP/IP.

Postel: I think that such separate applications are going to become much more popular. We're at this threshold now of audio and video. The web stuff sort of works, but it's not really great because of bandwidth limitations, so there's tremendous pressure for higher speed access to the home. But as soon as we get that higher speed access to the home there's going to be a tremendous crunch on the backbones for much higher bandwidth. People really ought to be planning for that.

========== CONCLUDING REMARKS ==========

OTI: You've been working forever in the back rooms. Recently you were recognized by Newsweek in the NET 50.

Postel: Was it the same issue with the year in cartoons and then they had this little small section? I don't know how they picked them -- 50 people who did some technology or internet or something. It was very strange, I knew about a third of the people out of the 50. Two thirds of the people I never heard of before. That was very strange. You know, in about one column inch or something.

OTI: Do you see yourself as being in the limelight? Do you like it?

Postel: No, luckily not. Being in the limelight has its minuses. I'm not pushing to have that happen.

========== end ==========

interviewer: Dennis G. Allard <allard@oceanpark.com>