Internet Exchanges: Policy-Driven Evolution

Bilal Chinoy (bac@sdsc.edu) and Timothy Salo (salo@msc.edu)

Introduction and Motivation

Internet exchanges are systems within the Internet [1] which enable networks to meet and exchange data and control information. To do so, Internet exchanges (IXs) must do much more than merely forward packets. They must provide a robust environment in which differences between the attached client networks, such as the technologies they use or their administrative and operational policies and procedures, do not become barriers to interconnection. Additionally, they must have policies that do not hinder competition between classes of attached networks (such as the often conflicting business interests of large, nationwide networks versus smaller, regional networks).

Internet exchanges exist in many different forms, often because of differences in the technology which was available or because they were created with different objectives. Some of the characteristics which can be useful in classifying different types of Internet exchanges are explored below. For example, some exchanges are collocated in a single room, while others are physically distributed. Access to some exchanges is relatively free, while other exchanges are open only to certain networks. Likewise, the focus of some exchanges is interconnecting regional networks, while other exchanges provide interconnections for nationwide networks.

Major policy decisions also had a strong effect on the structure of many Internet exchanges. Conversely, some exchanges enabled some policies or made other policies difficult to implement or enforce. Two policies which had a tremendous effect on the configuration of today's Internet are commercialization, the use of the Internet for commercial as well as research and educational purposes, and privatization, the implementation and operation of the Internet networks by the private sector rather than the government sector. The interaction between the policies of commercialization and privatization and the evolution of Internet exchanges is examined here in detail.

The Internet has been growing at an explosive rate for several years, with no end to its growth in sight. This growth has placed considerable stress on the Internet infrastructure. Many of these strains are visible at the Internet exchanges. For example, some Internet exchanges are experiencing sustained aggregate traffic loads of 200-400 Mbps. At some point, existing products and technologies will be inadequate. Several of the significant issues facing Internet exchanges are summarized in this chapter.

Historically, the U.S. federal government has played a significant role in funding the development of much of the key technology used in the Internet, funding the early deployment of some of the networks which comprise the Internet, particularly the NSFNET[2] backbone and regional networks, and coordinating much of the operations of the Internet, again largely through the NSFNET program. With the privatization of the Internet, exemplified by the decommissioning of the NSFNET backbone network in April 1995 and the dramatic rise in commercial network service providers, the role of the U.S. federal government has been considerably diminished. These changing roles raise the question of what the role of the federal government ought to be in today's Internet. The authors do not claim to know the answer to this extremely complex question, but they do believe that there are a few specific aspects of Internet exchanges which would effectively leverage federal funding.

It should be noted that most of the examples cited in this chapter are from the experiences of the U.S. portion of the Internet and the NSFNET in particular. Nonetheless, the authors believe that most of the lessons learned in this portion of the Internet are applicable to a wide variety of environments beyond those in which they were first learned.

A final note on terminology: we use the term Network Service Provider (NSP) to denote an organization that provides Internet connectivity services. Network generally refers to an NSP's network. We occasionally use the term national NSP when referring to those NSPs which provide service nationwide and operate their own nationwide network, and regional NSP to identify NSPs which provide Internet services only in a limited geographical region and depend upon a national NSP for interregional transport.

The Purpose Of Internet Exchanges

Internet exchanges have been created to allow independently administered networks to connect with each other and exchange data and routing information in a controlled manner. Figure 1 represents a typical Internet Exchange. The interconnect allows the attached networks to exchange data and routing information. Note that the IX exists apart from the networks that connect to it. In many cases, the IX is administered independently of the attached networks. In a similar fashion, the NSP networks are not connected directly to each other. Rather, each NSP is connected to the interconnect, which in turn enables the networks to communicate. This structure provides a useful isolation between, for example, NSPs that use different technology internally or compete with each other, or have other reasons to prefer not to be directly attached.

NSPs establish bilateral connectivity by exchanging routing information using a routing protocol, a process termed route peering. NSPs may choose to advertise reachable destinations to peer NSPs or decide to filter announcements (i.e., not announce destinations even though they are reachable via the NSP) based on some technical or policy criterion. Similarly, an NSP may choose to propagate a route internally or to ignore the announcement, again to implement a technical or administrative policy. Having established mutual route advertisements and acceptance lists, NSPs can then exchange user traffic with each other across an IX.
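The acceptance side of this process can be sketched as a simple filter applied to the routes a peer announces. The sketch below is purely illustrative; the prefixes, next-hop names, and data structures are invented and do not correspond to any real NSP policy or routing implementation.

```python
# Illustrative sketch of policy-based route filtering at an IX peering
# session. All prefixes and names are hypothetical.

def filter_routes(announcements, accept_prefixes):
    """Apply an acceptance policy: keep only routes whose destination
    prefix appears on this NSP's accept list; silently ignore the rest."""
    return [route for route in announcements
            if route["prefix"] in accept_prefixes]

# Routes that a peer NSP announces across the exchange.
announced = [
    {"prefix": "192.0.2.0/24", "next_hop": "peer-router"},
    {"prefix": "198.51.100.0/24", "next_hop": "peer-router"},
]

# Local policy: accept the first prefix, filter the second.
accepted = filter_routes(announced, {"192.0.2.0/24"})
print([route["prefix"] for route in accepted])  # ['192.0.2.0/24']
```

An analogous filter on the announcement side determines which of an NSP's own reachable destinations are advertised to each peer.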

Figure 1. A Typical Internet Exchange

Attributes of Internet Exchanges

Internet Exchanges have been created in response to a variety of demands and their structure and management policies reflect these diverse requirements. Categorizing IXs by both their technology and policy architectures leads us to four classes of attributes: network structure (collocated or distributed), mission objectives (public or private), route peering policies (multilateral or bilateral), and geographic scope (regional or national).

We examine each category in detail below.

IX Network Structure: Collocated and Distributed IXs

Most IXs have been built with the traditional assumption that IX clients will attach their routers at a common geographical point, called the collocation point. For example, one of the earliest collocated IXs was an Ethernet network at the Pittsburgh Supercomputer Center (PSC), which housed routers from the ARPANET and MILNET, the NSFNET, and other wide area networks such as SATNET. The shared Ethernet allowed these networks to communicate routing information, often with protocol translators, and to exchange traffic. LAN technologies such as Ethernet and Fiber Distributed Data Interface (FDDI) continue to provide a reliable and robust substrate upon which interconnection networks can be built.

The advent of wide area link-layer network technologies such as Frame Relay, Switched Multi-Megabit Data Service (SMDS) and Asynchronous Transfer Mode (ATM) makes possible a second form of IX, namely the distributed IX. In this model, rather than NSPs purchasing leased lines from their closest Point Of Presence (POP) to the collocated site, the IX network attempts to span a geographic area that includes its clients' POPs. A good example of this type of IX is the Ameritech Network Access Point (NAP), which is based on a wide-area ATM network. Here, NSPs in the Chicago area can connect to the NAP by simply connecting to Ameritech's ATM service at their own POP.

For an IX manager, a collocated architecture is easier to manage. Upgrading the IX network to keep ahead of traffic and client demand is also easier and cheaper. However, the cost to an NSP to connect to a collocated IX is higher because the burden of providing a high-speed network to the IX falls upon the NSP.

A distributed IX enables an easier and cheaper connection for NSPs. The IX manager, however, is now faced with scaling a Metropolitan Area Network or a Wide Area Network to stay ahead of demand. Local Area Networks are much more cost-effective to scale with both traffic and port access demands.

Mission Objectives: Public and Private IXs

A public IX places no restrictions on which NSPs are permitted to connect. In practice, such IXs do have some basic requirements such as a minimum NSP connection bandwidth (such as DS-3, or 45 megabits per second), access to relevant management information and, of course, payment of IX connection fees, if any. Examples of public IXs are the Network Access Points funded by the National Science Foundation (NSF)[3].

A private IX allows only NSPs that meet some policy criteria established by the IX managers to connect. These criteria could relate to the type of NSP or the size of the NSP in terms of traffic carried or clients attached. An example of a private IX is the Federal Internet Exchange Point (FIX), where only U.S. federal agency networks are allowed to interconnect. The FIX-West network, located at NASA's Ames Research Center in California, enabled the Department of Energy's ESnet, NASA's NSInet, the National Science Foundation's NSFNET, and other agency-sponsored networks to exchange traffic and routing information.

A special case of a private IX is a pairwise IX, where only two NSPs interconnect. A pairwise IX is typically created between two NSPs that exchange a large amount of traffic at topological points in the Internet that are not served well by other IXs. Pairwise private IXs between high-volume NSPs off-load traffic from the public IXs as well as ensure better service for participants of the private IX. As the Internet connectivity market continues to evolve, a small number of large NSPs have begun to emerge and establishing pairwise IXs between them is becoming common. Examples include numerous pairwise IXs between SprintLink and InternetMCI.

IX Route Peering Policies: Multilateral and Bilateral Peering

Merely having a presence at an IX does not guarantee an NSP connectivity with other attached NSPs; NSPs can exchange traffic only if they peer with each other. Additionally, IX managers may have a peering policy affecting all attached NSPs, or may let NSPs decide on peering policies themselves, typically on a bilateral basis.

A few IXs have a multilateral peering policy, which implies that a client NSP is expected to carry traffic from all the other NSPs attached to the IX. Conversely, by attaching to such an IX, client networks are assured that they will receive all routes that all the other NSPs carry in their networks. Most IX managers have no peering policies, allowing client NSPs to set up bilateral peering with other NSPs of their choosing.

IXs may have clients that are dissimilar in terms of the volume of traffic they exchange with each other. For example, consider the case in which NSP A has a small number of customers and NSP B has a relatively larger number of customers. A thus advertises a small number of reachable destinations to B, which typically results in a relatively smaller amount of traffic flow from B to A. However, NSP B advertises a larger number of destinations to A, which results in a larger traffic flow from A to B. Thus, with an enforced multilateral peering arrangement, B would carry more traffic from A than it offloads to A. This is why most larger, well-established NSPs prefer bilateral peering arrangements with other NSPs that have similar customer bases.

Traffic-volume-based settlements have been proposed as a means of charging based on the ratio of traffic sent to another NSP to traffic received from that NSP. An NSP with a disproportionate balance of traffic would either pay or receive settlements to compensate for the resources expended in carrying its peers' traffic.
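As a rough illustration, such a settlement might be computed from measured byte counts in each direction. The per-gigabyte rate and balance tolerance below are invented for illustration and were not part of any actual proposal.

```python
# Sketch of a traffic-volume-based settlement between two peered NSPs.
# The rate and the balance tolerance are hypothetical values.

def settlement(sent_gb, received_gb, rate_per_gb=1.0, tolerance=0.1):
    """Positive result: this NSP pays its peer for carrying its excess
    traffic; negative: it receives payment; zero: traffic is balanced."""
    imbalance = sent_gb - received_gb
    total = sent_gb + received_gb
    if total == 0 or abs(imbalance) / total <= tolerance:
        return 0.0  # roughly balanced exchange: no settlement due
    return imbalance * rate_per_gb

# An NSP that sends 300 GB to its peer but receives only 100 GB back
# compensates the peer for the asymmetry.
print(settlement(300, 100))   # 200.0
print(settlement(100, 300))   # -200.0
print(settlement(105, 100))   # 0.0 (within tolerance)
```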

IX Geographic Scope: Regional and National IXs

Internet Interconnection Exchanges have typically been categorized as national IXs because of the scope of their client NSP backbone networks. These IXs serve to interconnect nationwide and international Network Service Providers. However, the increase in the number of NSPs serving local and regional geographic areas has motivated the creation of regional IXs. These exchanges typically aggregate traffic to and from a smaller geographic area, such as a metropolis or a state. National-scope NSPs then carry the IX traffic to and from a national IX. The traffic aggregation hierarchy thus created is a very important architectural requirement for scaling the number of Internet IXs. Traffic between local and regional NSPs that have different national network service providers does not need to traverse a national IX that may be topologically distant. Rather, local traffic is restricted to the regional IX and network resources are more efficiently utilized.

Policy Driven Evolution of Internet Exchanges

Policy affected the evolution of Internet exchanges, just as technology, competitive forces, and the other factors examined above did. Internet exchanges also affected policy, making the implementation of some policies straightforward and other policies difficult. The relationship between Internet exchanges and policy is perhaps best illustrated by their interaction with commercialization and privatization policies.

During the 1990s, the evolution of the U.S. portion of the Internet has, to a very large extent, been driven by two related policies: commercialization and privatization. Under commercialization, the mission of the Internet was broadened from its initial focus on supporting research, education, and defense to include commercial (as well as nearly any imaginable) activity. At the same time, privatization shifted responsibility for the design, implementation, operation, and funding of the Internet from the federal government to the private sector.

In this section, we examine how Internet Exchanges evolved hand-in-glove with the commercialization and privatization of the Internet; how exchanges were both driven by a desire to commercialize the Internet and how interconnects enabled a smooth transition from a government funded to a privatized Internet. While policy, and particularly the policy shift toward a commercial, privatized Internet, was significant in the development of Internet exchanges, economic, competitive, and technical factors also played important roles.

For our purposes here, we have divided this evolution of the Internet from a federal initiative to a privatized, commercialized service into three phases. In this section we examine the role of Internet Exchanges during each of these three phases and relate those roles to the changing policies, technologies, economics, and competitive forces.

This very brief history is, by necessity, only a terse summary of one facet of a very large project. During this period, many interconnects existed in many forms. We have chosen to focus on interconnects that we consider important either because they enabled a particular policy, because they were created in response to a policy, or because they were critical to the operation of the Internet.

Structured Exchanges in a Federally Supported Internet

The early exchanges within the NSFNET were created primarily to assist the administration and operation of the NSFNET by providing well-defined interfaces between independently administered and operated portions of the NSFNET.

The original NSFNET architecture was a three-tier hierarchy. The NSFNET backbone was at the top of the hierarchy. It was funded by the National Science Foundation (with considerable cost-sharing by industry), and designed, implemented, and operated by a consortium led by MERIT, Inc. At the second tier were the NSFNET regional networks (also called mid-level networks), which were administered and operated independently of the NSFNET backbone, but were heavily dependent upon and closely coordinated with it. At the bottom of the hierarchy were the organizations receiving Internet connectivity, which in this era were typically called "campuses."[4]

The NSFNET backbone provided interregional transit services to the regional networks. This considerably simplified the routing challenges faced by the regional networks because the regionals needed only to provide routing between the campus networks that they served. Traffic destined for campuses attached to other regional networks was forwarded by the regional networks to the NSFNET backbone, which was responsible for the correct routing of traffic between regions.

Figure 2. Early NSFNET Architecture

The NSFNET backbone was implemented as a few dozen nodes interconnected with a partial mesh of point-to-point links. The NSFNET nodes were generally hosted by academic institutions or NSF-funded supercomputer centers. The regional networks connected to the NSFNET backbone by extending their networks to a convenient NSFNET node. A LAN-based exchange, often called a DMZ (referring to the isolation aspect of demilitarized zones), enabled communications between regional networks and the NSFNET backbone, while at the same time providing a degree of isolation between these independently administered networks.

Figure 3. A Typical NSFNET DMZ

The NSFNET experience demonstrated that well-structured Internet Exchanges contributed to the smooth interaction between independently administered parts of the Internet. A clear demarcation of responsibilities, supported by the Internet Exchange architecture, contributed to the success of the exchanges. The regional network was responsible for transporting its traffic to the DMZ LAN, the site hosting the NSFNET node was responsible for ensuring that the DMZ LAN operated smoothly, and the MERIT consortium was responsible for transporting traffic between the DMZ LAN and the rest of the Internet.

In retrospect, the demands placed upon these structured exchanges were simplified by the relatively homogeneous policy environment within which they existed. The attached networks, the NSFNET backbone and the regionals, were completely dependent upon each other and upon the NSF. Without a connection to the NSFNET backbone, which was awarded by the NSF, it was nearly impossible for the regionals to provide Internet connectivity. Conversely, the NSFNET backbone was dependent upon the regional networks to provide Internet connectivity to the campuses. In a similar fashion, both the regionals and the NSFNET backbone were heavily dependent upon the NSF for funding (although some of the regionals' funding came indirectly through grants to connect academic institutions to the regional networks).

Alternative Exchanges in an Emerging Commercial Internet

Undoubtedly the most significant policy of the NSFNET backbone was its acceptable use policy (AUP)[5], which specified that only traffic supporting research and education was permitted on the NSFNET backbone. Inasmuch as the NSFNET backbone was the principal mechanism for interregional transport, it was very difficult for regional networks to exchange traffic that did not conform to the NSFNET AUP (traffic that was usually called "commercial" traffic). Alternative Exchanges were developed to bypass the NSFNET AUP, creating the "commercial" Internet.

Commercial organizations have been connected to the Internet nearly since its inception. Within the NSFNET community, there was a general feeling that commercial organizations should be permitted to connect to the NSFNET, particularly if they communicated primarily with educational and research organizations. The connection of commercial organizations supported the research and education mission of the NSFNET in several ways. It aided and sometimes even enabled educational and research collaboration between industry and academia. Some vendors, particularly those of computer or networking products, used the Internet to provide better service to their research and education customers. Commercial organizations often subsidized academic connections by spreading some of the relatively fixed costs of the regional networks over a larger number of customers, by helping the regional networks attain some economies of scale and, in many cases, through a rate structure which favored academic institutions over commercial organizations.

A number of the regional networks saw commercial organizations as a tremendous potential market segment. Many of them allowed commercial traffic within their own networks, even though commercial traffic was not permitted on the NSFNET backbone. Commercial traffic could be exchanged with "nearby" sites attached to the same regional network, but only research and education traffic could be exchanged with "distant" sites which involved transit across the NSFNET backbone. The complete lack of tools for determining whether commercial traffic was permitted between a pair of sites caused no small amount of confusion; users simply could not tell when commercial traffic was prohibited and when it was permitted.

In 1990, three network service providers, CERFNET, Alternet and PSI, formed the Commercial Internet Exchange (CIX)[6] to allow them to exchange commercial traffic among themselves. The CIX created an exchange point in Santa Clara, California, that enabled the three networks to exchange commercial traffic without using the NSFNET backbone. The CIX quickly expanded its membership beyond the three founding members. Approximately a year later, another alternative Internet interconnect, MAE-East, was created in the Washington, D.C., area.

The CIX was perhaps the first Internet exchange created in response to policy concerns, in this case the NSFNET AUP.

Figure 4. Alternative Exchanges and the NSFNET

The CIX was successful, in a policy sense, in that it achieved its mission of providing an AUP-free method of exchanging commercial traffic between regional networks. On the other hand, it created a number of difficulties. The CIX created a single point of failure and congestion, because most commercial traffic between regional networks was transported through the CIX. It also created some very long paths, for example when traffic between two East Coast sites was transported through the CIX on the West Coast.

Perhaps the most serious difficulty which resulted from the creation of the CIX was that in many cases two paths existed between a pair of sites: an AUP path through the NSFNET backbone and a non-AUP path through the CIX. The choice between these two potential paths should have been, in theory, based solely on the content of the traffic, namely whether the traffic conformed to the NSFNET AUP. Unfortunately, there was no mechanism that could mark traffic as AUP or non-AUP. To the network, packets were simply data; there was no difference between AUP and non-AUP traffic. Of course, this situation only compounded the difficulties users faced in trying to determine whether non-AUP traffic was permitted between a pair of sites.

The strong desire to select appropriately between the two alternative paths, the NSFNET or the CIX, was also important because the two paths were very different. The NSFNET backbone had evolved to T3 (45 Mbps) speeds while the CIX remained a T1 (1.5 Mbps) interconnect. Therefore, regional networks wanted to use the CIX only to exchange non-AUP traffic while at the same time using the higher-performance NSFNET backbone to exchange AUP traffic. However, because the available routing technology was completely incapable of routing AUP traffic across one path and non-AUP traffic across another path, regional networks used one or more approximations, none of which were particularly good.

There was an effort to simplify this confusion by classifying end sites as either "research and education" or "commercial." This attempt did not get very far because the path through an IP network is chosen based on the destination of the traffic, not on the source, and not on whether the source and destination are in the same class (in this case, whether both were commercial sites). The scheme also ignored the fact that, for example, some traffic originating from a nominally commercial site would conform to the NSFNET AUP while other traffic would not.

At about this same time, there was debate about the proper architecture for an Internet which could easily support both AUP and non-AUP traffic. One model proposed that a single backbone network transport both AUP and non-AUP traffic, and that mechanisms be developed which would identify non-AUP traffic and charge different rates for transporting AUP and non-AUP traffic. Opponents of this plan viewed it as creating a monopoly for inter-regional Internet transport. They advocated a number of competing, nationwide networks which exchanged traffic among themselves at Internet exchanges throughout the country. The success of the CIX and other alternative exchanges largely made this debate moot, because they allowed a monopolistic backbone to be bypassed in the same way that they had allowed the AUP-constrained NSFNET backbone to be bypassed.

Peer Exchanges in a Privatized Commercialized Internet

Peer interconnects have made possible today's privatized Internet, facilitating the relatively smooth transition away from federal support. In today's Internet, interregional data transport is provided by a collection of interconnected, nationwide network service providers. A single, centralized core network analogous to the NSFNET backbone no longer exists. Internet interconnects are a fundamental component of this new architecture.

Today's Internet architecture is the product of two complementary policy trends. Commercialization was furthered by the alternative exchanges described above. Meanwhile, the NSF found that a substantial portion of its funds available for networking activities was being consumed by the support of operational networks. This operational network funding comprised three components: support for the NSFNET backbone, direct support for regional networks, and grants to educational institutions for connection to the Internet. The heavy demand for funds for operational networking displaced funding for network research projects, a source of some concern on the part of the NSF. The desire to shift funds from operational networking led to a policy of privatization, the migration of responsibility for interregional transport away from the NSF-funded NSFNET backbone to private, nationwide network service providers. A key part of this strategy was the creation of Internet interconnects that ensured continued universal connectivity.

In 1993, the NSF issued solicitation NSF 93-52, which specified a new NSFNET architecture in which interregional transport was provided by several nationwide NSPs connected by Internet interconnects. The solicitation called these interconnects "network access points" or "NAPs." A NAP was described as a "conceptual evolution of the FIX and the Commercial Information eXchange (CIX)." NAPs were AUP-free, so the transit of commercial traffic was not to be impeded by policy. Proposals to host and manage NAPs were solicited.

The NSF sponsored four NAPs in response to the solicitation: New York (Sprint), Chicago (Ameritech), California (Pacific Bell) and Washington, DC (MFS). These NAPs enabled the transition from a federally funded NSFNET backbone to an architecture in which interregional transport is provided by interconnected nationwide NSPs. This project enabled the NSF to eliminate the NSFNET backbone and shift funds to other projects.

The peer interconnect-based architecture has been effective in mitigating the effects of other policy-based impediments to traffic. In 1994, the CIX decided that it would not transport data for regional NSPs that were connected to CIX members but were not CIX members themselves. The announcement of this policy generated quite a bit of concern about a balkanized Internet. However, by the time this policy was finally implemented, it affected only a small number of small NSPs. There were enough Internet interconnects that only those NSPs connected to the CIX, and to no other Internet interconnect, were potentially affected. Most Internet connectivity either occurred or could occur at other Internet interconnects, so the CIX had only a very limited ability to impose its policy on the Internet as a whole. The parallels between the CIX as a method of circumventing the restrictive effects of the NSFNET AUP and multiple Internet interconnects circumventing the CIX's later efforts to impose policy-based restrictions on traffic flows seem rather ironic.

The CIX's effort at policy-based filtering did, however, highlight a continuing conflict in the privatized Internet. Regional NSPs depend on national NSPs for interregional data transport. The CIX, by its very nature, allowed both regional and nationwide NSPs to connect and specified that all networks should exchange traffic with each other. The nationwide NSPs viewed this as requiring them to provide the smaller NSPs attached to the CIX with free transit services. There was also a feeling, particularly on the part of the small NSPs attached to the CIX, that the CIX filtering proposal was being used by some of the nationwide NSPs as an unfair competitive tool against the small NSPs.

Scaling The Internet: Interconnect Exchange Issues

As traffic on the Internet continues to grow and connectivity becomes more ubiquitous, designers and policy makers are faced with a number of system scaling issues. The increasing number of NSPs and the need for globally efficient routing require establishing a greater number of IXs, while increasing traffic requirements force existing IX managers to ensure that IX capacity stays ahead of demand. National IXs are typically located at points of high traffic and physical bandwidth (trunk lines) aggregation, which suggests collocating other services such as large Web caches to gain economies of scale. Another key component of Internet system scaling thus far has been the technical evolution of protocols and architecture to accommodate new services. The availability of traffic and performance data has helped researchers suggest protocol improvements. As the number, size and technical complexity of IXs increase, traffic statistics and other sources of data required for network analysis are increasingly difficult to obtain, with potentially severe consequences for scaling the Internet.

Scaling the Number of Interconnect Exchanges

A key advantage of more interconnection points is potentially shorter traffic paths between sources and destinations that are not attached to the same NSP. Routing policy control tools allow, albeit with hand-crafted configurations, NSPs to optimize traffic flows on their backbone networks. A system architecture in which regional IXs use one or more national NSPs for connectivity to the national and international IXs provides a hierarchical means of scaling the number of Internet exchanges.

However, in order to take advantage of these multiple paths, NSP routers at the interconnect exchanges must carry a complete set of Internet routes. Moreover, every routing fluctuation, or flap, must be processed by these routers. IP routing protocols do not currently have efficient means of damping the propagation of route flaps, and as the number of IXs increases, so does the amount of information that each NSP router must process. This results in degraded packet forwarding rates and lost packets. Current research into damping mechanisms shows some promise of alleviating this problem.
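One family of damping schemes under discussion assigns each unstable route a penalty that grows with every flap and decays exponentially over time, suppressing the route while the penalty is high. The following is a hedged sketch of that idea; all parameter values are invented for illustration, not recommended settings.

```python
import math

# Sketch of penalty-based route flap damping. The penalty, suppress,
# reuse, and half-life values below are hypothetical.

FLAP_PENALTY = 1000      # penalty added for each flap
SUPPRESS_LIMIT = 2000    # suppress the route above this penalty
REUSE_LIMIT = 750        # re-advertise once penalty decays below this
HALF_LIFE = 15 * 60      # penalty halves every 15 minutes (seconds)

class DampedRoute:
    def __init__(self):
        self.penalty = 0.0
        self.suppressed = False

    def flap(self):
        """Record one withdrawal/re-announcement cycle."""
        self.penalty += FLAP_PENALTY
        if self.penalty > SUPPRESS_LIMIT:
            self.suppressed = True   # stop propagating this route

    def decay(self, elapsed_seconds):
        """Exponentially decay the penalty with the configured half-life."""
        self.penalty *= math.exp(-math.log(2) * elapsed_seconds / HALF_LIFE)
        if self.suppressed and self.penalty < REUSE_LIMIT:
            self.suppressed = False  # route may be re-advertised

route = DampedRoute()
for _ in range(3):        # three flaps in quick succession
    route.flap()
print(route.suppressed)   # True: the unstable route is suppressed
route.decay(45 * 60)      # 45 minutes later the penalty has decayed
print(route.suppressed)   # False: the route is usable again
```

A router applying such a scheme shields its peers from a rapidly flapping route at the cost of delaying legitimate reachability changes for that route.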

Along with the ability to deal gracefully with route flapping, Internet routing protocols are still evolving to best accommodate load balancing and path optimization across multiple IXs. The closest IX exit point for a packet is typically determined only by the destination address, not by the combination of the source and destination addresses. The result is an asymmetric path, with traffic in one direction using a different IX than traffic in the opposite direction. While this is usually not a serious operational problem, end-to-end path optimization is still a manual process fraught with potential for operator error.
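The asymmetry follows directly from each NSP independently choosing its nearest exit toward the destination. A toy illustration, with entirely invented IX names and topology distances:

```python
# Toy illustration of "closest exit" IX selection and the path
# asymmetry it produces. All IX names and distances are invented.

def choose_exit(ix_distances):
    """Each NSP hands traffic off at the IX closest to its own backbone
    for the destination in question, ignoring the traffic's source."""
    return min(ix_distances, key=ix_distances.get)

# Forward traffic, NSP A -> NSP B: from A's backbone, IX-West is closer.
forward_ix = choose_exit({"IX-East": 5, "IX-West": 2})

# Reply traffic, NSP B -> NSP A: from B's backbone, IX-East is closer,
# so the return path crosses a different exchange.
reverse_ix = choose_exit({"IX-East": 1, "IX-West": 4})

print(forward_ix, reverse_ix)   # IX-West IX-East
```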

Scaling the Performance of IXs

A packet between two end-sites, A and B, traverses an interconnection exchange if A and B are customers of two different NSPs. This implies that as the number of NSPs sharing the marketplace continues to grow, the volume of interconnection traffic will continue to increase. Currently, we see approximately 200-400 megabits per second of sustained traffic across the busiest interconnects (MAE-East and the New York NAP). Additionally, many larger NSPs and some research networks are moving towards OC-3 (155 megabits per second) connections to IXs. IXs are high-traffic aggregation networks and, if not designed to sustain their offered load, can become choke points in the Internet infrastructure. Thus IX capacity must scale with demand.

Current switching technology dictates the offered capacity at IXs. Some IXs use shared and switched FDDI as the switching substrate, which limits inter-NSP bandwidth to 100 megabits per second (or 200 megabits per second with full-duplex router interfaces). Other IXs use ATM switches, which appear to have the potential to scale to higher speeds, but have not yet been proven to operate robustly under high production traffic loads.
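The capacity pressure described above can be illustrated with a back-of-envelope calculation; the number of attached NSPs and the average utilization figure below are illustrative assumptions, not measurements.

```python
# Back-of-envelope sketch: aggregate offered load vs. IX fabric capacity.
# The NSP count and utilization figure are illustrative assumptions.

OC3_MBPS = 155                 # OC-3 access link rate, megabits per second
attached_nsps = 6              # assumed number of NSPs at the IX
avg_utilization = 0.3          # assumed fraction of each access link in use

offered = attached_nsps * OC3_MBPS * avg_utilization
print(offered)                  # aggregate offered load, Mbps

shared_fddi = 100               # Mbps, shared among all attachments
switched_fddi_fd = 200          # Mbps per full-duplex attachment pair
print(offered > shared_fddi)    # shared FDDI is already a choke point
```

Even at modest per-link utilization, a handful of OC-3-attached NSPs can offer more traffic than a shared FDDI fabric can carry, which is why higher-speed switching substrates are being pursued.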

System-wide Measurements and Capacity Planning

With new services and applications constantly being prototyped on the Internet, it is crucial that we continue to understand the nature of Internet traffic. Protocol researchers and developers need to adapt existing protocols or create new ones to deal with these new services. Access to traffic statistics of various types and at various time granularities is essential for this effort. In the past, backbone and regional network operators exchanged relevant statistics to optimize traffic flows between their networks. Currently, most commercial NSPs treat their traffic statistics as confidential information.

However, the challenge for policy makers is to create a forum where traffic information provided by the NSPs is used only for research and analysis, and NSPs can continue to compete on the basis of well-defined service metrics. To this end, the Internet Engineering Task Force (IETF) IP Provider Metrics (IPPM) working group is attempting to define a set of service quality metrics that could be used to objectively characterize NSP service quality. We would then have effectively removed any advantage (or disadvantage) for NSPs in making traffic data available, while still allowing service differentiation on the basis of standard metrics.
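The kind of standard metrics such an effort might produce can be illustrated with a small sketch. The probe data, the encoding of a lost probe as `None`, and the percentile choice below are illustrative assumptions, not IPPM definitions.

```python
# Sketch of service-quality metrics computed from one-way delay probes.
# Sample data and metric definitions are illustrative assumptions.

def loss_rate(samples):
    """Fraction of probes that got no response (None = lost)."""
    return sum(1 for s in samples if s is None) / len(samples)

def delay_percentile(samples, p):
    """p-th percentile of one-way delay over received probes (ms)."""
    received = sorted(s for s in samples if s is not None)
    idx = min(len(received) - 1, int(p / 100 * len(received)))
    return received[idx]

# Ten hypothetical probe results in milliseconds; None marks a lost probe.
probes = [12.1, 11.8, None, 13.0, 12.4, 55.2, 12.2, None, 11.9, 12.6]
print(round(loss_rate(probes), 2))   # fraction of probes lost
print(delay_percentile(probes, 50))  # median delay of received probes
```

Metrics of this kind can be published and compared across NSPs without exposing the underlying per-customer traffic data that providers treat as confidential.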

Summary: Implications For Policy Makers

Oversight

The Internet model of an interconnection of independently administered networks implies that Internet exchanges will continue to play an important role in the global information infrastructure. The analogy with the U.S. system of airports seems appropriate. Each IX, like each airport, is independently owned and administered. However, because of the interconnectedness of the airline system, and because the safety and convenience of passengers are of primary concern, oversight in the form of the Federal Aviation Administration (FAA) exists to protect the integrity of the system. The question for policy makers is: What oversight of the system of independently owned and administered IXs, if any, is necessary to protect Internet end-user service and performance? And who ought to provide that oversight?

Scaling

With the National Science Foundation sponsoring the NSFNET project, the federal government played the role of both network service provider, through its cooperative agreement with Merit, and client for network service, through regional network support programs such as the Connections program. A consequence of this dual role was that considerable attention was given to the stability of the system of networks, through applied research in traffic analysis, network management tools, and protocol engineering. While most larger NSPs do have in-house network engineering efforts, system-wide research must continue. Historically, federal funding of Internet-related research has been fundamental to the success of today's Internet. Operation of the Internet has been successfully privatized, an evolution in which IXs played a key role. However, it is not clear that the funding of research necessary to scale the Internet to meet projected demands has been as successfully assumed by the private sector. It is the authors' opinion that there remain important roles for the federal government to play in supporting critical aspects of Internet research.

References

[1] Cerf, V., "The Catenet Model For Internetworking", IEN 48, Information Processing Techniques Office, Defense Advanced Research Projects Agency, July 1978.

[2] Mills, D.L. and Braun, H., "The NSFNET Backbone Network", Proceedings of ACM SIGCOMM 1987, Stowe, VT.

[3] NSF 93-52, "Network Access Point Manager, Routing Arbiter, Regional Network Providers, And Very High Speed Backbone Network Services Provider For NSFNET And The NREN(SM) Program", May 1993.

[4] Chinoy, B. and Braun, H., "The National Science Foundation Network", SDSC Technical Report GA-A21029, Sep. 1992.

[5] "The NSFNET Backbone Acceptable Use Policy", June 1992, available from Merit, Inc., as http://www.merit.edu/nsfnet/acceptable.use.policy

[6] The Commercial Internet eXchange, http://www.cix.org