Net's backbone tries to keep up with traffic explosion
October 20, 1998
by Jason K. Krause
(IDG) -- Hidden away in a steel cage in a building in San Jose are two refrigerator-size metal boxes that are the vestigial guts of the Internet. This is the home of MAE West - Metropolitan Area Ethernet West - one of the original hubs for Internet traffic. "When places like this run out of capacity," says Dan Lasater, VP with MCI WorldCom and director of MAE West, "things get pretty ugly for everyone on the Web."
It may seem fantastic to anyone who's been taught that the Internet is a living network, intelligently routing around obstacles and outages, but if a network access point like MAE West or another backbone goes down, huge geographic regions can go without access. In 1995, Minnesota lost access for half a day because a small fire melted fiber optic cables that served as the state's only Internet link. In April of this year, a bug in the router software on AT&T's frame relay network caused Internet outages for more than a day for millions of customers. Last August, Network Solutions accidentally erased the domain names for hundreds of customers, effectively obliterating their Internet identities. And a satellite failure this summer disrupted pager and Internet service throughout the country for days.
The problem, of course, is traffic. To understand how much it has increased, consider the original MAE (now MAE East), an Ethernet network that connects Internet service providers in the Washington, D.C., area. It originally ran at 10 Mbps (10 megabits per second). Then came a 100 Mbps network switch. Even the new ATM switches, which can route 10 Gbps (10 gigabits per second or 10,000 megabits per second), won't be able to handle the load.
No one predicted such traffic. The precursor of the Internet, Arpanet (Advanced Research Projects Agency Network), the nuke-proof network used by the U.S. military, was first demonstrated to the public in 1972. Other networks existed, mostly private academic networks, but they all used proprietary protocols and couldn't communicate with each other. Then Vint Cerf and Bob Kahn developed TCP/IP, an open, flexible communications architecture.
In 1985, the National Science Foundation Network made TCP/IP mandatory. The NSF then sponsored the creation of network access points, including MAE East and West. Soon after came the sale of university and government infrastructures to companies like UUNet, MCI and Sprint, and the Internet as we know it was born.
In the early days, MAE East and four other network access points were peering partners (peering is the direct exchange of traffic between two networks), providing all the connectivity needed for the entire country. The number of network access points keeps growing, but they are being made less relevant by direct connections, called private peering agreements, among nearly 5,000 ISPs. Unfortunately, ISPs have made so many private peering agreements they've created new bottlenecks.
The problems multiply with the traffic. In the next few years, all of the physical infrastructure of the Internet (the cables that carry traffic, the routers that direct it, the protocols that guide it, the network access points that move data, and even the lines that pump data into your home and business) will be due for a serious overhaul.
The first new research network to be launched in recent years, Abilene, is accessible only to scientists and academic researchers. Named after an ending point for the American frontier railroads, it is funded by the University Corporation for Advanced Internet Development and supported by member universities and corporations. Unlike the builders of the original Internet, the Abilene project managers plan to keep the network a private affair.
"The original [National Science Foundation] and government-funded Internet was handed over to commercial interests. Not that that was a bad thing; in fact, it worked too well," says Terry Rogers, director of the Abilene project. "But universities no longer had uncongested high bandwidth networks for research. We won't give the network away this time."
Developers unveiled Abilene three weeks ago and demonstrated what the next-generation Net might be able to do. Ohio State broadcast a live gall bladder surgery over the network in gory, full-screen, high-resolution clarity. During the operation, the surgeon casually chatted with people in San Francisco. But the show also demonstrated some of the limitations of high-bandwidth technology: Many of the demos featuring multicast video were jerky or dropped so many frames they were unwatchable.
"We don't have a last-mile problem, we have a 50-foot problem," says Rogers. "The problem is getting the data into the hotel."
The 50-foot problem is ubiquitous. Although the trunk lines crisscrossing the country are made of high-capacity fiber, the local loops into businesses and homes are typically two- or four-wire unshielded copper with limited bandwidth. It doesn't matter if data is transferred at 10 Gbps on an ISP backbone if it gets choked by a 28.8 Kbps modem at customers' homes. Very few ISPs address this problem. The few that can provide high-speed access directly to consumers (such as regional telephone companies that can provide DSL service, or AT&T with its acquisition of TCI cable) will have the advantage in the coming years.
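The mismatch is easy to put in numbers. A back-of-the-envelope sketch (the file size here is illustrative) shows how long a 10-megabyte transfer takes on a consumer modem versus a backbone link:

```python
# Idealized transfer-time comparison for a 10 MB file (illustrative figures).
FILE_BITS = 10 * 8 * 10**6          # 10 megabytes expressed in bits

def transfer_seconds(bits, rate_bps):
    """Ideal transfer time: payload size divided by line rate, no overhead."""
    return bits / rate_bps

modem = transfer_seconds(FILE_BITS, 28_800)          # 28.8 Kbps modem
backbone = transfer_seconds(FILE_BITS, 10 * 10**9)   # 10 Gbps ATM backbone

print(f"modem:    {modem:.0f} seconds")     # roughly 46 minutes
print(f"backbone: {backbone:.4f} seconds")  # a fraction of a second
```

However fast the backbone gets, the last stretch of copper sets the pace the customer actually sees.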
The next important innovation will be an overhaul of the Internet protocol itself. We are about to get IP version 6, which will reportedly address "quality of service." To most service providers, this means they will be able to offer different types of service. Users will be able to reserve bandwidth or guarantee that there will be space on lines to carry their data. For example, an Internet phone call demands a higher quality of transmission and cannot be interrupted midstream, while e-mail can be sent at a lower priority.
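The idea behind differentiated service can be sketched as a priority queue: voice packets drain before e-mail regardless of arrival order. This is a toy model of the scheduling concept, not how any particular router or the IPv6 standard implements it:

```python
import heapq

# Lower number = higher priority. These traffic classes are illustrative.
PRIORITY = {"voice": 0, "email": 2}

queue = []
arrivals = [("email", "msg-1"), ("voice", "v-1"),
            ("email", "msg-2"), ("voice", "v-2")]
for seq, (kind, payload) in enumerate(arrivals):
    # seq breaks ties so equal-priority packets keep their arrival order
    heapq.heappush(queue, (PRIORITY[kind], seq, payload))

sent = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(sent)  # voice drains first: ['v-1', 'v-2', 'msg-1', 'msg-2']
```

The voice packets jump ahead even though they arrived later, which is exactly the behavior a phone call needs and an e-mail can tolerate.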
The next-generation protocol will also include multicasting, which lets a broadcaster reach a number of recipients by sending out only one copy of data to a network, rather than sending a copy to every Web user that requests the data.
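The bandwidth argument for multicast is simple arithmetic: a unicast broadcaster pushes one copy of the stream per viewer, while a multicast broadcaster pushes one copy into the network. A simplified sketch, ignoring the replication that happens downstream at routers:

```python
# Simplified source-side load for one video stream (illustrative figures).
STREAM_BPS = 1_500_000    # a hypothetical 1.5 Mbps video stream
viewers = 1000

unicast_load = STREAM_BPS * viewers   # one copy per recipient
multicast_load = STREAM_BPS           # one copy into the network

print(unicast_load // multicast_load)  # 1000x less load at the source
```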
Available Internet addresses (the strings of numbers known as IP addresses that identify users and hosts) are quickly being used up. The good news is that the next-generation protocols will support longer IP addresses. The bad news is that ISPs and router manufacturers (routers are the boxes that direct all Internet traffic) have been slow to support the longer addresses.
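The scale of the fix is dramatic: today's addresses are 32 bits, while the next-generation addresses are 128 bits. Modern Python's standard `ipaddress` module (which postdates this article) makes the address-space math easy to check:

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")     # a 32-bit IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")   # a 128-bit IPv6 address

print(v4.version, 2 ** 32)    # about 4.3 billion possible addresses
print(v6.version, 2 ** 128)   # about 3.4e38 possible addresses
```

Going from 32 to 128 bits doesn't quadruple the address space; it raises it to the fourth power.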
Now that voice is increasingly being transmitted over the Internet, routers must be developed with this capability. Router technology, as it stands now, cannot guarantee the same sound quality or reliable connections as the old, switched telephone networks. Before IP networks can guarantee voice service, routers will need to handle more traffic more reliably and integrate the outmoded traffic types that run on old public switched telephone networks.
"It's not effective to keep throwing up new equipment every time we hit a bottleneck," says Lasater. "I'd really like, for once, to have a long-term answer to the problem."
But the Web may always be its own worst enemy. It was born of competing networks cobbled together by competitors with a mutual interest in its success, and this inherited infrastructure will not go away. It will continue to be upgraded as growth demands, but those upgrades have rarely been more than a step ahead of the Web's growth.
ISPs for Miles and Miles
[Accompanying map not reproduced.]
* Provide local Internet access from the main Internet backbone. The major exchange points not listed here are owned by Ameritech (Chicago) and US West (San Francisco).
** Connecting points between separate networks.
© 2000 Cable News Network. All Rights Reserved.