A Survey of Internet Statistics / Metrics Activities

by Tracie Monk (DynCorp) and K. Claffy (NLANR)

1. Overview

Introduction:

In this paper, we present a survey of current activities in Internet performance and workload measurement. The environment is dynamic enough that we can provide only an indicator of the growing activity and interest surrounding this important topic; we apologize to anyone omitted. We hope that, through the hyperlinks and other references, readers will be able to follow up directly with the various researchers and organizations who have (or plan) initiatives relating to the collection, analysis, and/or presentation of Internet statistics.

We also outline in this paper a recommendation for the establishment of a `provider consortium' to facilitate cooperation. The National Laboratory for Applied Networking Research (NLANR) is working to develop a framework for such a collaboration, and believes that it could serve as an appropriate forum for...

  • facilitating the identification, development and deployment of measurement tools across the Internet;
  • providing commercial providers with a neutral, confidential vehicle for data sharing and analysis;
  • providing networking researchers and the general Internet community with additional real-time data on Internet traffic flow patterns; and
  • enhancing communications among commercial Internet service providers, exchange/peering point providers and the broader Internet community.

Transition from the NSFnet Backbone Service:

Throughout the lifetime of the NSFnet Backbone Service, Federal agencies and various user communities were engaged in the development, operation and ultimate transition of the Internet to the commercial sector. The involvement of many of these groups was channeled through the Federal Networking Council (FNC) and its Advisory Committee (FNCAC), which consists of representatives from the higher education and research communities, service providers and industry. Following the April meeting of the FNCAC, its members distributed a set of recommendations calling upon government to...

  • Initiate efforts to expand the collaborative development of performance measurements and trouble ticket tracking on the Federally-sponsored segments of the Internet, with specific attention to factors relating to the privacy of Internet users and providers and security of Internet facilities and usage.
  • Increase funding / prioritization of research on measurements and measurement techniques that can be employed by ISPs and users (or their representatives) to quantify Internet quality of service (packet loss, packet delay, route availability, etc.).
  • Identify critical networking metrics and tools which could be run over Federal R&E networks, including defining the characteristics of an "ideal" measurement tool which could gather data on both end-to-end performance and workload characterization.

These recommendations resulted from concerns expressed by the members' communities (such as the High Energy Physics community) about the current state of the Internet, as well as briefings from researchers such as Hans-Werner Braun (Teledesic - previously NLANR), K Claffy (NLANR), and Mark Garrett (Bellcore). The most recent presentation before this body concerned results from statistics gathering efforts over the FIX-West facility and the conclusions of the Internet Statistics and Metrics Analysis workshop. The results of this workshop and other statistics / metrics activities are discussed below.

2. The Demand for Metrics

ISMA Workshop:

In February 1996, NLANR and Bellcore hosted an NSF-supported workshop entitled Internet Statistics and Metrics Analysis (ISMA). One outcome of this workshop was the articulated need for smoother coordination and information exchange among Internet service providers -- both providers of Internet access and traffic exchange services.

Although most participants at the workshop felt that the Internet development and provider community should seek out measurement infrastructure and sources of statistics in the commercially decentralized Internet, there was definite dissonance as to which measurements would help and who should have access to them. While a public infrastructure for end-to-end performance measurements could help researchers and end users study the infrastructure, participants concurred that Internet service providers (ISPs) themselves would be the greatest beneficiaries of an enhanced statistics capability. Opinions varied as to the sensitivity of some data and how much could be released publicly, but there appeared to be consensus supporting the need for desensitized versions of such statistics to be made available in a neutral forum. Such a neutral forum could also facilitate more comprehensive collaboration, consensus-building, and the development of tools to measure new metrics that are important to stability, service, efficient resource allocation, and more economically viable network usage pricing policies.

Financial Settlements:

The Internet is still relatively devoid of pricing models or other mechanisms to allocate and prioritize scarce resources -- particularly bandwidth. Enhanced network functionalities such as multiple qualities of service and bandwidth reservation systems such as Resource Reservation Protocol (RSVP) will require some mechanisms for accounting for network usage at a finer granularity than is generally deployed, or even possible, within many operational Internet components today. They will also require basic statistics to facilitate accounting and the development of economic models.
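
To make the granularity issue concrete, the sketch below shows the kind of per-flow byte accounting that finer-grained usage metering implies. It is only an illustration: the record format (source, destination, packet length tuples) is an assumption for this example, not the format of any deployed meter.

    from collections import defaultdict

    def account_flows(packet_records):
        # packet_records: iterable of (src_addr, dst_addr, length_bytes) tuples,
        # e.g. produced by post-processing a passive packet capture.
        totals = defaultdict(int)
        for src, dst, length in packet_records:
            totals[(src, dst)] += length   # accumulate bytes per source/destination pair
        return totals

    # Hypothetical records illustrating the aggregation:
    sample = [
        ("192.0.2.1", "198.51.100.7", 1500),
        ("192.0.2.1", "198.51.100.7", 576),
        ("203.0.113.9", "192.0.2.1", 40),
    ]
    for (src, dst), nbytes in sorted(account_flows(sample).items()):
        print("%s -> %s: %d bytes" % (src, dst, nbytes))

Even this trivial aggregation is well beyond what many operational routers can report per flow today, which is the point of the paragraph above.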

In his recent Internet Draft on Metrics for Internet Settlements, Brian Carpenter (CERN) asserts that financial settlements are a critical mechanism for exerting pressure on providers to strengthen their infrastructures. He writes:

    "...the current crisis in Internet performance, although superficially caused by very high growth rates, is to a significant extent due to the lack of efficient financial pressure on service providers to strengthen their infrastructure. One way to fix this is to institute financial settlements between the providers."

He suggests that metrics used in Internet settlements

    ...should not rely on expensive instrumentation such as detailed packet or flow analysis -- simple bulk measurements on a wire, or off-line measurements, are preferable.

    ...should preferably be estimated, if necessary, by statistical sampling.

    ...should be symmetric in nature so that the resulting settlements can be associative and commutative, thereby allowing settlements to be aggregated by upstream service providers, and redistributed by transit carriers, in an equitable manner.

Carpenter further suggests that each metric used in the settlement agreement should define or refer to a measurement method and should specify a settlement rate and currency.
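
As a rough illustration of how such a metric might feed a settlement, the sketch below nets bulk byte counts measured in each direction on an interconnect and applies an agreed rate. The netting rule and the rate are assumptions made for this example, not values taken from Carpenter's draft.

    RATE_PER_GBYTE = 0.10   # hypothetical settlement rate, in an agreed currency

    def net_settlement(bytes_a_to_b, bytes_b_to_a, rate=RATE_PER_GBYTE):
        # Amount provider A owes provider B for the settlement period
        # (negative if B owes A), based on bulk byte counts measured on
        # the wire in each direction.
        net_gbytes = (bytes_a_to_b - bytes_b_to_a) / 1e9
        return net_gbytes * rate

    # Example: A sent 40 GB toward B and received 25 GB over the period.
    print("A owes B: %.2f" % net_settlement(40e9, 25e9))

Because the underlying measurement (bytes on a wire) is the same regardless of which party reads it, settlements of this form can be aggregated up a chain of providers, which is the property Carpenter emphasizes.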

Factors which continue to inhibit implementing settlements include the lack of a common understanding of the business mechanics of inter-ISP relations. Some suggest that the ITU / telco settlements model may have relevance to ISP settlements. Others suggest that the connectionless nature of the IP protocol demands that entirely new pricing models be developed. For a good introduction to discussions on Internet economics, see Hal Varian's web site on The Information Economy.

User Communities:

The higher education and research communities are among the more vocal Internet users. Having been among the first to find the Internet mission critical, they are chagrined by the current state of congestion -- and by the fact that the Internet (given its design) offers only best-effort service. They claim that service quality over recent months has been significantly degraded by the growth in web activity, the proliferation of users (see Mark Kosters' May 1996 presentation), and the lack of cooperation among commercial providers and users that exemplifies the current Internet environment.

In a series of meetings over the last year, representatives from EDUCOM, FARNET, and related institutions have discussed the communications requirements of their communities and shared their concerns about the ability of the Internet to meet their future needs. A paper by Doug Gale and the Monterey Futures Group concludes "that by the year 2000, higher education will require an advanced internetworking fabric with the capacity to:

  • support desktop and room-based video teleconferencing;
  • support high-volume video streaming from distant video servers;
  • incorporate and integrate voice traffic;
  • insert large-capacity inter-institutional projects into the fabric at will in response to the needs of research projects and network experiments;
  • interconnect enterprise networks that are in various stages of migration to higher-performance technologies; and
  • control costs and pricing/allocation models from within the enterprise."

Similar demands are beginning to be made by other user groups who now view the Internet as the appropriate medium for their future communication needs. The automotive industry, for example, adopted TCP/IP in 1995 as a standard for data communications among its thousands of trading partners. Through the Automotive Industry Action Group (AIAG) and its Telecommunications Working Group, the major auto manufacturers are working to set forth the quality-of-service requirements necessary to satisfy their industry's needs. Specific areas being examined include specifications related to...

  • certifying a small number of highly competent Internet service providers to interconnect automotive trading partners' private networks;
  • monitoring providers' ongoing compliance with performance standards that support business requirements;
  • enforcing strict security mechanisms to authenticate users and protect data, thereby creating a virtual private network for the auto industry.

The AIAG has identified several metrics which it views as critical to this initiative and to future monitoring efforts. These include Performance Metrics such as latency, packet/cell loss, link utilization, throughput, and BGP4 configuration and peering arrangements, as well as Reliability Metrics such as physical route diversity; routing protocol convergence times; disaster recovery plans; backbone, exchange point, and access circuit availability; and speed of failed customer premises equipment replacement.

As more user groups (e.g., the financial sector, energy industry, and others) move toward the Internet as their preferred communications vehicle, we are likely to witness increasing pressure on providers and others to collect, collate, analyze, and share data related to Internet (and provider) performance.

Instrumenting Networks:

Many Internet service providers currently collect basic statistics on their own network's performance and traffic flows, typically including measurements of throughput, delay, and availability. In the post-NSFnet Backbone Service era, the only baseline against which a network can evaluate its performance is its own past performance metrics; no data are available against which national-level comparisons, or comparisons with other networks' performance, can be made. Increasingly, what both users and providers need is information on end-to-end performance -- information which is beyond the realm of what is controllable by individual networks.

Another example of statistics maintained in isolation within an individual ISP is trouble ticket tracking of problems that originate and are resolved within the context of that ISP itself. Throughout most of the lifetime of the NSFnet backbone, resolving route instabilities and other trouble tickets was the responsibility of Merit (under its cooperative agreement with NSF). In the current environment there is no entity that claims or shares responsibility for national-level management of the Internet. As a result, there are no scalable mechanisms available for resolving or tracking problems originating or extending beyond the control of an individual network.

Route instability is another area which can have a direct, sometimes profound, effect upon the performance of individual networks. Some networks are seeking to improve the stability of their routing by peering directly with the routing arbiter (RA) at network access points (e.g., SprintNAP and FIX-West/MAE-West). In the context of the routing arbiter project, Merit/ISI have also developed statistics, including those on route flapping and inappropriate announcements, that represent a macroscopic characterization of routing stability and identify trouble areas for the networks with which they peer. However, these efforts are still in their nascent stages and do not yet have sufficient buy-in or support from commercial players to make them a fundamental component of the Internet architecture.
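
As a simple illustration of the kind of statistic involved, the sketch below counts flaps per prefix from a log of route announce/withdraw events. The event format here is an assumption for illustration only; it is not the routing arbiter's actual data format.

    from collections import defaultdict

    def count_flaps(events):
        # events: iterable of (timestamp, prefix, action) tuples with action in
        # {"announce", "withdraw"}.  A flap is counted each time a prefix is
        # withdrawn and subsequently re-announced.
        last_action = {}
        flaps = defaultdict(int)
        for _, prefix, action in sorted(events):
            if action == "announce" and last_action.get(prefix) == "withdraw":
                flaps[prefix] += 1
            last_action[prefix] = action
        return flaps

    # Hypothetical event log: one flap for 192.0.2.0/24.
    log = [
        (1, "192.0.2.0/24", "announce"),
        (2, "192.0.2.0/24", "withdraw"),
        (3, "192.0.2.0/24", "announce"),
    ]
    print(dict(count_flaps(log)))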

The vacuum created in national-level statistics/metrics collection which followed the transition to the commercial architecture has also complicated planning by national service providers and others. While detailed traffic and performance measurements are essential to identifying the causes of network problems and formulating corrective actions, it is trend analysis and accurate network/systems monitoring which permit network managers to identify "hot spots" (overloaded paths), predict problems before they occur and identify ways to avoid them by efficient deployment of resources and optimization of network configuration. As the nation and world become increasingly dependent on the National and Global Information Infrastructures (NII/GII), it is critical that mechanisms be established to re-enable infrastructure-wide planning and analysis.

3. Current Activities:

The importance of measuring Internet metrics, particularly those related to performance, has been discussed most recently at the ISMA workshop (Feb. 1996); meetings of the FNCAC and Educom/Farnet (both in April 1996); and at a BOF following the May 1996 NANOG meeting. Future events, government-related activities, and various QoS-related efforts are discussed below.

IETF / INet & Related Sessions:

    IEPG (June 23, 1996): At its June 1996 meeting, the Internet Engineering Planning Group (IEPG) will discuss the need for greater collaboration in collection, analysis, and tool development related to Internet statistics and metrics.

    IPPM BOF (June 25, 1996) - The IP Provider Metrics (IPPM) group, derived from the Benchmark Methodology Working Group (BMWG) of the Operational Requirements (OR) Area of the IETF, comprises researchers and service providers interested in defining basic metric terms and a formal structure for defining new metrics and measurement methodologies, with the goal of developing standardized performance evaluations across different Internet components, particularly "IP clouds". Such evaluations, according to Vern Paxson, can assist by:

    • aiding trouble-shooting and capacity planning in a complex world of tens of thousands of networks and interconnecting links;

    • providing market incentives for network service providers to optimize their networks, by giving Internet customers sound techniques for evaluating the service they are receiving and comparing the performance of different providers;

    • enabling Internet research geared towards a better understanding of the behavior of network traffic and how the Internet evolves.

    For information on the IPPM BOF, contact Guy Almes (Advanced Network Services) at almes@advanced.org.

    PIARA: There will also be a BOF at the Montreal IETF on Pricing for Internet Addresses and Route Assignments, which will discuss Brian Carpenter's Internet settlements draft (discussed above), as well as Rekhter, Resnick, and Bellovin's draft on charging for routing and addressing. Contact Allison Mankin (ISI) for details at mankin@isi.edu.

    During INet '96's Network Measurement and Metrics Session on June 26, 1996, papers will be presented by Steve Corbato on Backbone Performance Analysis Techniques; Matt Mathis on Diagnosing Internet Congestion with a Transport Layer Performance Tool; and Vern Paxson on Towards a Framework for Defining Internet Performance Metrics. Guy Almes will chair this session.

Government-Related Activities:

Internet statistics / metrics are of great importance to FNC agencies, as well as to the broader Internet provider / user communities (see the FNC paper presented at the ISMA workshop). In May 1996, the FNC chartered an ad hoc working group with the explicit purpose of sharing information on existing and planned measurement activities and enhancing collaboration among various Federal / Federally-sponsored statistics and metrics initiatives. These initiatives include projects to develop and deploy statistics / metrics tools and analysis techniques, as well as workshops and other efforts to improve the community's understanding of emerging technologies/applications and to disseminate relevant results. Current participants include: ARL, DARPA, DOE/SLAC, Kansas University, NCCOSC (Naval Command, Control and Ocean Surveillance Center), NLANR, and NSF. Immediate collaborations include:

  • DOD's High Performance Computing Modernization Program (HPCMP) is making its Sparc 20 Mbone servers available to NLANR and Kansas University for flow measurement. As part of this effort, Phil Dykstra (ARL) has modified NLANR's statistics code and is using the results of pings to measure available bandwidth on the IDREN (Expired Link).

  • DOE's ESnet established a "State of the Internet" working group in May 1996. The working group and ESnet's Network Monitoring Task Force (NMTF) are working with other organizations to implement enhanced WAN statistics collection / analysis throughout its network and the global high energy physics community. DOE/SLAC is working with NLANR on possible application of NLANR's mapping tools as a format for end-to-end reporting.
  • NLANR is developing end-to-end Internet performance assessment information using a modified ping (using microsecond timestamps); a minimal sketch of this style of measurement appears after this list. NLANR is also depicting the Mbone tunnel structure using a VRML 3D visualization system, drawing the tunnels as arcs on a globe. A VRML browser then supports access to hypertext information about tunnels by interactively clicking on them.
  • Kansas University is supporting ATM performance analysis across the ACTS ATM Internetwork (AAI). Their web site includes a Traffic Data Plotter for the AAI; info/code for the end-to-end performance measurement tool known as NetSpec; and results of pings.
  • Kansas University / NLANR are coordinating on statistics acquisition / analysis across the CAIRN fabric -- and expansion of existing / planned efforts across the vBNS and AAI networks.
  • NLANR is working with MCI on MCI's new monitoring platform, which will be capable of tapping into optical fiber to gather data on workload characterization. This tool will be deployed across the vBNS (and possibly CAIRN) once it is available.
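
The following sketch illustrates the style of high-resolution round-trip time sampling referred to in the NLANR item above. It is not NLANR's modified ping: to stay self-contained it times a TCP connection setup rather than an ICMP echo (which would require raw sockets and privileges), and the target host is a placeholder.

    import socket
    import time

    def rtt_sample_usec(host, port=80, timeout=2.0):
        # One round-trip estimate in microseconds (connection setup time),
        # or None if the host cannot be reached within the timeout.
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass
        except OSError:
            return None
        return (time.perf_counter() - start) * 1e6

    samples = [rtt_sample_usec("example.net") for _ in range(5)]
    good = [s for s in samples if s is not None]
    if good:
        print("min/avg rtt: %.0f / %.0f usec" % (min(good), sum(good) / len(good)))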

International collaborations related to Internet statistics and metrics will also be discussed at the annual meeting of the Coordinating Committee for Intercontinental Research Networking (CCIRN) in Montreal, Canada, on June 29, 1996.

Quality of Service (QoS) Efforts:

Demands for implementing QoS levels for Internet offerings are increasing. From the providers' standpoint, such offerings will enable increased revenue by allowing them to offer business-quality services to users. From the users' perspective, the ability to contract for higher QoS levels will enable many industries to switch from intranets and private networks to the Internet, or some variation thereof. The emerging QoS requirements of users -- most notably the higher education and research communities and the automotive industry -- are addressed above. In addition to these user-driven demands for QoS, similar needs are being expressed by providers and (internationally) by regulators.

The Commercial Internet eXchange (CIX) has recently formed a QoS Working Group co-chaired by Bob Collet (Teleglobe & Chairman of CIX) and Barry Raveendran Greene (Singapore Telecom). The group is sponsoring a workshop entitled Quality of Service Metrics: Remaining Competitive on the afternoon of June 24, 1996, in conjunction with the CIX's annual meeting. In addition, the CIX is considering plans to initiate a survey of business-related metrics among its members this summer.

On the international front, user groups are demanding that current U.S.-centric pricing models be made more equitable. Current policies tend to saddle foreign Internet users with a disproportionate share of the cost of intercontinental circuitry. Recent meetings of Europe's DANTE organization have discussed this issue in some detail. Others involved in related discussions suggest that revision of intercontinental pricing models may occur concurrently with deployment of advanced services, such as RSVP, and improvements in traffic monitoring.

Foreign regulatory organizations are also increasingly interested in metrics and related measurements. Singapore Telecom, for example, requires Singapore's three ISPs to report quarterly QoS statistics. Primary business metrics include network availability and system accessibility (dial-up access, leased-line access, international connectivity). Secondary indicators associated with service activation time include dial-up access and leased-line access. Singapore Telecom also monitors customer support metrics such as the number of telephone inquiries, inquiries via Internet e-mail, and the number of customer complaints per 1,000 subscribers. As measurement tools become more widely available, Singapore Telecom anticipates monitoring the actual performance of these ISPs.

In a related area, MCI announced two new alliances in June 1996 which it says are aimed (in part) at improving its ability to offer QoS over the net. The Concert alliance with British Telecom is proclaimed to offer the first-ever Internet service performance guarantees on a global scale. Through a separate alliance with Intel entitled WebMaker, MCI plans to utilize emerging Internet standards such as IP Multicast, RSVP, and Real-Time Transport Protocol (RTP) to develop new QoS levels that support multicasting and bandwidth reservation across the Internet.

New IETF RFCs are also emerging suggesting alternatives to improve QoS. B. Rajagopalan and R. Nair's RFC on issues related to Quality of Service (QoS)-Based Routing in the Internet presents some potential requirements on path computation, efficiency, robustness, and scalability, and describes some issues in realizing a QoS-based routing architecture. Other recently released QoS RFCs include one by Fred Baker (Cisco), Roch Guerin (IBM), and Dilip Kandlur (IBM) entitled Specification of Committed Rate Quality of Service, and one by Peter Kim (Hewlett Packard Laboratories Bristol) on the Link Level Resource Management Protocol (LLRMP) Protocol Specification. These RFCs and related QoS issues will be discussed at the IETF / INet '96 meetings.

4. Tools / Data Collection / Analysis

IP & Routing:

The table below provides an overview of the types of metrics currently desired for IP traffic and routing. The relevance of these metrics to future financial settlements and to analyzing network performance is included, ranging from low (minimal) relevance to high relevance. The table also indicates the tools currently available -- or yet to be developed -- for gathering statistics on each metric.

NLANR maintains a repository of links to operational statistics data from research sites, ISPs and the NAPs, at http://oceana.nlanr.net/INFO/oldindex.html (Expired Link). NLANR will continue to support workload characterization at the FIX-West exchange point ( http://www.nlanr.net/NA/) until the gigaswitch is installed this summer.

Excellent sources of information on IP tools are also available from ESnet's Network Monitoring Task Force and from Merit. Increasingly, tools like the Network Time Protocol (NTP) daemon are being deployed by NSPs and NAPs, which should facilitate the collection of certain statistics. We discuss emerging tools below.
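
Because delay measurements are only as trustworthy as the clocks behind them, a collector might first sanity-check its clock against an NTP source. The sketch below issues a minimal SNTP query; it is only an illustration (the server name is a placeholder and path delay is ignored), and an operational measurement host would instead run a disciplined ntpd.

    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

    def sntp_offset(server="pool.ntp.org", port=123, timeout=2.0):
        # Approximate (server_time - local_time) in seconds, ignoring path delay.
        packet = b"\x1b" + 47 * b"\0"          # LI=0, VN=3, Mode=3 (client request)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, port))
            data, _ = sock.recvfrom(48)
        transmit_secs = struct.unpack("!12I", data)[10]   # transmit timestamp, seconds field
        return (transmit_secs - NTP_EPOCH_OFFSET) - time.time()

    print("approximate clock offset: %.3f s" % sntp_offset())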

Table 1: Internet Metrics and Tools
(Columns: metric; applicable where; relevance to Internet settlements; relevance to analysis of network performance; measurement tools. See also: some initial NLANR visualization projects.)

General Metrics:
  • Access Capacity (bit/sec); where: CC charges for bit rate, and equipment cost depends on bit rate; settlements: high; performance analysis: low; tools: known a priori
  • Connect Time; where: CC charges for connect time; settlements: high; performance analysis: low; tools: CC metering
  • Total Traffic (bytes); where: transit traffic settlement between ISPs; settlements: high; performance analysis: low; tools: router or access server stats, tcpdump sampling, RTFM meters, etc.
  • Peak Traffic (bit/sec sustained for n sec.); where: ISP/NSP overbooking of trunks; settlements: moderate; performance analysis: moderate; tools: router or access server stats, tcpdump sampling, RTFM meters, etc.

Routing:
  • Announced Routes (#); where: peering/exchange points and connection of subscribers to multiple subnets; settlements: moderate; performance analysis: moderate; tools: TBD, analysis of routing tables (e.g., netaxs)
  • Route Flaps (#); where: peering/exchange points and connection of multihomed networks; settlements: moderate; performance analysis: high; tools: TBD (currently available if peering through the RA)
  • Stability (e.g., route uptime/downtime, route transitions); where: peering/exchange points and connection of multihomed networks; settlements: low; performance analysis: moderate; tools: currently available if peering through the RA
  • Presence of more specific routes alongside less specific routes; where: peering/exchange points and connection of multihomed networks; settlements: moderate; performance analysis: moderate; tools: TBD
  • Number of reachable destinations (not just IP addresses) covered by a route; where: peering/exchange points and connection of multihomed networks; settlements: moderate; performance analysis: moderate; tools: TBD

Path Metrics:
  • Delay (milliseconds); where: individual networks; settlements: moderate; performance analysis: high; tools: TBD, ping
  • Flow Capacity (bits/sec); where: everywhere (networks, routers, exchange points); settlements: moderate; performance analysis: high; tools: TBD, treno
  • Mean Packet Loss Rate (%); where: everywhere; settlements: moderate; performance analysis: high; tools: TBD, ping, mapper
  • Mean RTT (sec); where: everywhere; settlements: moderate; performance analysis: high; tools: TBD, ping, mapper
  • Hop Counts / Congestion; where: everywhere; settlements: low; performance analysis: high; tools: TBD, traceroutes

Other:
  • Flow characteristics; where: exchange points and multihomed networks; settlements: low; performance analysis: moderate; tools: reporting by ISPs
  • Network outage information (remote host unreachable); where: individual networks; settlements: low; performance analysis: moderate; tools: reporting by ISPs
  • AS x AS matrices; where: individual networks; settlements: moderate; performance analysis: low; tools: reporting by ISPs
  • Information Source; where: connection of service providers (DNS or RR servers), content providers (web servers), and information replicators (MBONE routers and caches); settlements: high; performance analysis: high; tools: router or access server stats, tcpdump sampling, RTFM meters, etc.

Topology Visualization:
  • MBONE; where: Internet infrastructure; settlements: moderate; performance analysis: moderate; tools: TBD, some public tools
  • Information caching hierarchy; where: Internet infrastructure / individual caches; settlements: moderate; performance analysis: moderate; tools: TBD, public tools

    Sources: "Metrics for Internet Settlements", B. Carpenter (CERN), Internet Draft, May 1996; "A Common Format for the Recording and Interchange of Peering Point Utilization Statistics", K. Claffy (NLANR), D. Siegel (Tucson NAP), and B. Woodcock (PCH), presented at NANOG, May 30, 1996.

    Notes: CC - common carrier; ISP - Internet service provider; NSP - national service provider; RA - routing arbiter; TBD - to be determined.

    Privacy and security considerations must be addressed during the measurement of any metrics related to Internet traffic flows. tcpdpriv is one program which addresses these issues; additional programs may need to be developed/deployed as measurements become more commonplace.
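
To illustrate one desensitization approach, the sketch below maps each address in a trace to a stable opaque label using a keyed hash. This is in the spirit of tcpdpriv but is not its actual algorithm, and the key shown is a placeholder that a data owner would keep private.

    import hashlib
    import hmac

    SECRET_KEY = b"per-dataset-secret-goes-here"   # placeholder; never publish the real key

    def anonymize_addr(addr, key=SECRET_KEY):
        # Map an IP address string to a stable, effectively non-reversible label,
        # so flows remain correlatable within a data set without exposing hosts.
        digest = hmac.new(key, addr.encode("ascii"), hashlib.sha256).hexdigest()
        return "host-" + digest[:8]

    print(anonymize_addr("192.0.2.1"))     # the same address always maps to the same label
    print(anonymize_addr("198.51.100.7"))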

Many of the metrics above are inherently problematic for the Internet infrastructure, and still require critical research. At INet '96, Vern Paxson's paper on Towards a Framework for Defining Internet Performance Metrics (compressed postscript here) will provide a cogent discussion of the need for new tools for passive and active measurement of network traffic. As these tools are developed, particular attention needs to be devoted to privacy considerations (particularly with passive measurement tools) and to designing tools which are minimally invasive (particularly with active measurements, where the tools themselves can potentially disrupt or distort traffic patterns). Additional attention should also be devoted to the problem of routers dropping packets and how to measure this phenomenon.

A list of some emerging tools / initiatives which should facilitate IP analysis is provided below:

  • Network Probe Daemon (NPD) (Vern Paxson, LBL). For end-to-end measurements across a representative cross-section of the Internet. Examines route stability, packet loss, queueing levels, buffer sizes, and available bandwidth. Must be installed at remote nodes. Utilizes traceroute, TCP probes, and tcpdump.
  • Hop Congestion Tool (E. Hoffman, CAIDA) is a tool under development that incorporates traceroute and ping functionality to build a database of delay data along paths to a specified set of destinations, and visualizes the resulting data.
  • Treno (Matt Mathis, PSC) Uses ICMP and TCP packets to measure throughput and congestion. INet '96 paper at http://www.psc.edu/~mathis/papers/inet96.treno.html

  • NetSCARF (Bill Norton, Merit). New project to develop a public domain network statistics collection and reporting facility that will evolve to meet the needs of ISPs. Funded by the Resource Allocation Committee, the first software release is expected in July 1996. The software will collect the statistics necessary for individual networks to display graphs of total bits, system uptime, and interface links; a minimal sketch of this style of interval-based counter collection appears after this list. Current plans are to collect at 15-minute intervals and display results by individual network. No near-term aggregation of data from multiple networks is expected. For information, contact Bill Norton (wbn@merit.edu) at Merit.

  • Mix (Dave Siegel, RTD Systems/Tucson NAP; Bill Woodcock, PCH; and K Claffy and Duane Wessels, NLANR). A common format for the recording and interchange of peering point utilization statistics. The current efforts are focused on inbound traffic collected at each router. However, there are also opportunities (and a potential demand) for statistics related to: packets and bytes in/out per participating organization; collisions on any shared-access media segment; discards on any packet-switched segment; cell loss on any cell-switched segment; and mbone, ftp, telnet, http, nntp, dns, BGP, icmp, and snmp as percentages of total throughput. For more information, contact either Siegel (dave@rtd.net) or Bill Woodcock (woody@zocalo.net).
  • tcpdpriv (Greg Minshall, Ipsilon) Takes packets captured by tcpdump and removes or scrambles data within the packet to protect privacy (e.g., source and destination hosts and ports). For more information, contact Minshall at (minshall@ipsilon.com).
  • Future data collection/analysis activities will likely stretch beyond the realms of a plain packet switching infrastructure, toward optimizing overall service quality via mechanisms such as information caching and multicast. Visualization is important for making sense of all the data sets described above, and is especially critical for developing and maintaining the efficiency of logical overlay architectures, such as caching, multicast, mobile, IPsec, and IPv6 tunnel infrastructures. As examples, NLANR has created visualization prototypes for the mbone tunnel logical topology and the NLANR web caching hierarchy topology (Expired Link). For the latter they have also built a tool for automatically generating a nightly update of the caching hierarchy topology map. Such visualization could be of great assistance in improving the global efficiency of traffic flows.

  • At the May 1996 NANOG meeting and during the BOF on May 30th, several related topics were discussed, including presentations by Alan Hannan (Global Internet) on Inter-Provider Outage Notification and by Craig Labovitz (Merit) on the ASExplorer and Other New RA Tools, as well as a call by Steve Crocker (CyberCash) for a consortium of users who would define requisite metrics and QoS and ensure collection of appropriate performance statistics.
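
The NetSCARF-style collection mentioned above reduces, at its core, to periodic reads of interface octet counters. The sketch below shows that core computation with made-up counter values; a real collector would obtain the counters via SNMP (for example, ifInOctets) every 15 minutes and must allow for counter wrap, as shown.

    INTERVAL_SECONDS = 15 * 60
    COUNTER_MODULUS = 2 ** 32          # 32-bit SNMP counters wrap around

    def bits_per_second(prev_octets, curr_octets, interval=INTERVAL_SECONDS):
        # Average bit rate over one polling interval, tolerating a single counter wrap.
        delta_octets = (curr_octets - prev_octets) % COUNTER_MODULUS
        return delta_octets * 8 / interval

    # Two successive 15-minute samples of a hypothetical interface counter:
    print("%.0f bit/s" % bits_per_second(1200000000, 1430000000))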

ATM:

ATM performance is receiving increasing attention. Kansas University is planning a workshop on this topic for June 19-20, 1996. The event is hosted by Kansas University (contact Victor Frost at frost@tisl.ukans.edu), with sponsorship by DARPA and NCCOSC. For an excellent overview of the subject, see Don Endicott's slides (Expired Link) from this meeting.

Kansas University has also worked with the Department of Defense and Sprint to develop the NetSpec tool for characterizing a range of interacting and independent loads on the ACTS ATM Internetwork (AAI) project's network. The NetSpec web site also discusses the performance of this tool compared to other commonly used network performance testing tools, such as NetPerf, nettest, and ttcp.

A new RFC by Steven Berson (ISI) has just been released which provides guidelines for using ATM virtual circuits (VCs) with QoS as part of an Integrated Services Internet. Other groups working on ATM tools include the ATM Forum (www.atmforum.com) and Bellcore.

5. The Need for Cooperation

Provider Consortium:

A limited exchange of statistics should have direct payback for ISPs in achieving their goals such as configuring and managing their networks, and even greater payback to the Internet community by enabling analyses which can strengthen the overall information infrastructure.

Market Pressures upon ISPs to participate in such a consortium include:

  • Providers' customers are increasingly relying on the Internet for "mission critical" applications and demanding that the Internet's infrastructure be strengthened
  • QoS / RSVP / Internet Settlements will require availability of authenticated and possibly confidential provider statistics across the Internet
  • No single company can do it all -- the nature of the Internet as a web of interconnected networks suggests that organizations should collaborate and cooperate if the underlying structure is to be improved. An excellent model for such collaboration is provided by the Multimedia Services Affiliate Forum (MSAF) which was formed in February 1996. MSAF provides a global forum for advanced inter-networked business services and consists of the world's largest telecommunications providers and multimedia service firms.

Business Constraints hindering such cooperation include:

  • the competitive Internet business environment
  • data privacy considerations
  • concerns about the appearance of industry collusion by major providers
  • the continuing perceived gap between the needs of providers' marketers/managers and network engineers
  • the lack of adequate pricing models and other mechanisms which could insert a sense of economic rationality into the process

Technology Constraints hindering the collection and analysis of Internet metrics include the facts that:

  • Measurement tools are in a nascent stage of development
  • New and emerging technologies, e.g. gigaswitches and ATM, significantly complicate data acquisition/analysis

Despite these challenges, the requirements for data collection, analysis, and distribution exist and will have to be addressed at some point in the near future. Collaboration toward this end is critical.

Next Steps:

Developing an effective provider consortium will require (minimally):

  • Participation by 3 or more of the major service providers, e.g., ANS, AT&T, BBN Planet, MCI, Netcom, PSI, Sprint, or UUNet.
  • Establishment of a neutral third party with sufficient technical skills to provide the core data collection and analysis capabilities required by the consortium.
  • Development of appropriate privacy agreements to protect the interests of members.
  • Agreement on which basic metrics to collect, collate/analyze, and present.
  • Agreement on which new tools need to be developed -- particularly those related to emerging infrastructures using ATM, gigaswitches, and other new technologies.

18 June 1996. Questions/comments: Tracie Monk.

Related Objects

See https://catalog.caida.org/paper/1996_metricsurvey/ to explore related objects to this document in the CAIDA Resource Catalog.