A visit to Stanford's Internet2 GigaPoP
November 3, 1999
by Rich Morin
(IDG) -- The infrastructure of the Internet has proven remarkably stable in the face of geometric growth rates and even outright abuse. Nonetheless, the system has some built-in deficiencies that need to be addressed if it is to carry us forward into the age of ubiquitous, realtime, multimedia networking.
And, despite its overall stability, the Internet is an unwieldy and expensive place to perform basic networking experiments. There are far too many players to coordinate, and the noise generated by random users (who tend to scream when things go awry) disturbs the researchers' concentration.
So, a collection of some 150 universities has joined with assorted high-tech corporations and government agencies to build Internet2, a prototype for the next generation of the Internet.
Lest you get the wrong impression, however, Internet2 is not so much a thing as an activity. The Internet2 researchers are performing assorted experiments, using both existing and newly created resources. The information produced by these experiments is the real product; the resources themselves may be borrowed or even temporary in nature.
The Internet2 mission statement is short and clear, if a bit nationalistic in tone:
Facilitate and coordinate the development, deployment, operation, and technology transfer of advanced, network-based applications and network services to further US leadership in research and higher education and accelerate the availability of new services and applications on the Internet.
Thus, the toys will belong to (and directly benefit) only domestic players, but we can expect interesting results to trickle out to the rest of the Internet over time.
Actually, I would expect the results to be pretty freely available: many of the researchers will be from other countries and some of the partner corporations (e.g., IBM and MCI) are distinctly multinational in scope.
Support for realtime and multimedia applications is crucial. The current Internet works fine for e-mail and FTP, not so well for complex Web pages, and rather poorly for streaming audio and video, time-critical scientific experiments, etc. The reason, in brief, is that all packets are treated in the same manner.
Even though I might not care about the exact delivery time of a particular e-mail message or FTP packet, such "bulk mail" gets the same handling as my time-critical audio stream. Worse, your Web page downloads can get in my way (and vice versa).
By establishing mechanisms for guaranteed quality of service (QoS), Internet2 can keep bulk traffic from interfering with time-critical data.
By doing this research in the cloistered halls of academia, the participants may be able to investigate the thorny allocation issues (whose data gets dropped in a pinch?) in a relatively civilized setting. In any case, Internet2 is committed to providing QoS support, so the technical aspects of these issues (at least) will need to be worked out.
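The core QoS idea described above can be sketched in a few lines of Python. This is a hypothetical illustration, not an actual Internet2 mechanism: the traffic classes, priorities, and class names are invented for the example. Instead of one first-come, first-served queue, each packet carries a priority class, and time-critical traffic is always dequeued ahead of bulk traffic.

```python
import heapq

# Invented priority classes for illustration; lower number = sent first.
PRIORITY = {"audio": 0, "web": 1, "bulk": 2}

class QosQueue:
    """A toy priority scheduler: time-critical packets jump the queue."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves FIFO order within a class

    def enqueue(self, traffic_class, payload):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, payload))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = QosQueue()
q.enqueue("bulk", "ftp chunk 1")
q.enqueue("audio", "stream frame 1")
q.enqueue("web", "page fragment")
q.enqueue("audio", "stream frame 2")

# Both audio frames are delivered before the earlier-queued bulk and web
# packets -- the "bulk mail" no longer delays the time-critical stream.
order = [q.dequeue() for _ in range(4)]
```

A plain FIFO queue, by contrast, would emit the FTP chunk first; that difference is exactly why all-packets-equal handling hurts streaming traffic.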
Once QoS is in place, a variety of advanced applications can be investigated. Teleconferencing and shared whiteboards are obvious starting points, as is realtime data analysis and experiment control. Demonstrations of network-based health care and environmental monitoring are also being planned, showing that the QoS support is quite serious.
There are some other issues, such as next-generation IP routing, which also need to be examined in a high-bandwidth setting. The Internet's 32-bit IP addressing scheme won't last forever, but the current Internet isn't the best place to try out changes.
A typical experiment
A recent high-profile experiment (or demonstration, if you prefer) by Stanford University and the University of Washington was fairly typical of the kind of work being performed. "HDTV Over Internet2 Networks" tested the ability of an active network link to carry multiple streams of high-definition television.
HDTV provides a very high-resolution image (1,920 by 1,080 pixels), more than five times as detailed as the best standard (NTSC) TV picture. Not surprisingly, HDTV requires a lot of bandwidth. The HD video feed starts out as a 1.5 Gbps data stream (about 1,000 T-1 links).
Fortunately, HDTV compressors can reduce this total substantially. For this experiment, both a broadcast studio quality version (140 Mbps, embedded in a 270 Mbps data stream) and a lower quality version (40 Mbps) were sent. Thus, some 310 Mbps of HD video were added to the normal traffic on a 622 Mbps (OC-12) network link.
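The figures above check out with a bit of arithmetic; here is the math as a short Python sketch (rates in Mbps, with a T-1 link at its standard 1.544 Mbps):

```python
# Verifying the bandwidth figures quoted in the article (rates in Mbps).
T1_MBPS = 1.544      # capacity of one T-1 link
RAW_HD = 1500        # uncompressed HD video feed: 1.5 Gbps

t1_equivalent = RAW_HD / T1_MBPS   # roughly 971 links -- "about 1,000"

studio_stream = 270  # 140 Mbps of studio-quality video inside a 270 Mbps stream
lower_quality = 40   # lower-quality version
oc12 = 622           # OC-12 link capacity

total_hd = studio_stream + lower_quality   # 310 Mbps of HD traffic
share_of_link = total_hd / oc12            # just under half the OC-12 link
```

So the two compressed streams consumed about half the link, leaving the other half for the normal traffic mentioned above.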
Because the experiment was designed to test the ability of current networks to carry high-bandwidth data, no explicit QoS support was employed. Consequently, one purpose of the experiment was to provide a baseline against which future QoS enhancements can be measured.
IP-based networks cannot guarantee instantaneous communication, so a substantial amount of buffering (several seconds) was used to smooth out any hiccups. This is quite acceptable for broadcasting television programs, but it would wreak havoc in any interactive application.
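The playout-buffering idea can be sketched as follows. The frame rate, buffer size, and per-frame network delays here are invented for illustration: the receiver delays playback by a fixed interval behind real time, so a frame is only "late" if its network delay exceeds that interval.

```python
# A minimal sketch of playout buffering: frames arrive with variable
# network delay (jitter), and the receiver plays each frame a fixed
# interval after it was sent, absorbing the variation.

FRAME_INTERVAL = 1 / 30   # seconds between frames (30 fps assumed)
BUFFER_DELAY = 2.0        # play each frame 2 s after it was sent

# Invented per-frame network delays, in seconds.
arrival_jitter = [0.0, 0.8, 0.1, 1.5, 0.3]

late_frames = 0
for i, jitter in enumerate(arrival_jitter):
    send_time = i * FRAME_INTERVAL
    arrival_time = send_time + jitter
    playout_time = send_time + BUFFER_DELAY
    if arrival_time > playout_time:   # frame missed its playout deadline
        late_frames += 1
```

With a two-second buffer, none of these frames miss their deadline, because every delay is under two seconds; but two seconds of added latency is exactly what an interactive application cannot tolerate.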
Because my brother Jerry was involved in setting up the experiment, I was able to go to the Stanford facility and look things over. I even got to help uncrate and set up a $250,000 HDTV compressor!
All told, perhaps $500,000 worth of equipment had been brought in specifically for the experiment: an HDTV camera, monitor, and recorder, a couple of video compressors, and some special-purpose interface gadgetry.
Despite the cost of the equipment, the setting was modest: a 20-by-20 workroom containing a communications relay rack and several mismatched tables full of electronic equipment. Most of the equipment, in fact, consisted of test systems for the Y2K-compliance testing project that normally occupies the room.
On the other hand, the room was located next door to Stanford's network operations center (NOC). Consequently, it was well served by high-bandwidth network connections, knowledgeable staff, and other critical resources.
In short, it was just about the perfect environment to conduct a small, high-speed networking experiment.
The receiving end of the demonstration, in contrast, was set in an auditorium at the University of Washington. A Sony HDTV Videowall was used to display the results. This kind of publicity effort, although useful in promoting particular applications, is very unlike the quiet research that characterized the early Internet (i.e., ARPAnet).
Unlike the ARPAnet, Internet2 isn't operating in a vacuum; the players know all too well that the world is watching them and waiting (a tad impatiently) for the results.
I'm happy to report that these players seem to be taking their responsibility seriously. Consequently, we should have some delightful technology coming our way over the next few years.
Rich Morin operates Prime Time Freeware (www.ptf.com), a publisher of books about open source software. He lives in San Bruno, CA, on the San Francisco Peninsula.