Internet Data Acquisition & Analysis

Status & Next Steps

Tracie Monk (tmonk@nlanr.net)
k claffy (kc@caida.org)
University of California, San Diego /
National Laboratory for Applied Network Research (NLANR)

www.nlanr.net/               www.caida.org/


Abstract

Most large providers currently collect basic statistics on the performance of their own infrastructure, typically including measurements of utilization, availability, and possibly rudimentary assessments of delay and throughput. In today's commercial Internet, the only baseline against which these networks can evaluate performance is their own past performance metrics. No data, or even standard formats, are available for comparing performance with other networks or against a common baseline. Nor are there reliable performance data with which users can assess the performance of providers. Data characterization and traffic flow analysis are also virtually non-existent at this time, yet they remain essential for understanding the internal dynamics of the Internet infrastructure.

Increasingly, both users and providers need information on end-to-end performance and traffic flows, beyond the realm of what is realistically controllable by individual networks or users. Path performance measurement tools enable users and providers to better evaluate and compare providers and to monitor service quality. Many of these tools treat the Internet as a black box, measuring end-to-end characteristics, e.g., response time and packet loss (ping) and reachability (traceroute), from points originating and terminating outside individual networks. Traffic flow characterization tools focus on the internal dynamics of individual networks and cross-provider traffic flows, enabling network architects to: better engineer and operate networks, better understand global traffic trends and behavior, and better adopt / respond to new technologies and protocols as they are introduced into the infrastructure.

This paper has three goals. We first provide background on the current Internet architecture and describe why measurements are a key element in the development of a robust and financially successful commercial Internet. We then discuss the current state of Internet metrics analysis and steps underway within various forums to encourage the development and deployment of Internet performance monitoring and workload characterization tools. Finally, we describe the rationale and near-term plans for the Cooperative Association for Internet Data Analysis (CAIDA).


Key Words:

measurement, statistics, metrics, performance, flow, tools, cooperation, ISP, CAIDA


Current Internet

The Internet architecture is in a perpetual state of transition. A decentralized, global mesh of several thousand autonomous systems (ASes), its providers are highly competitive, facing relatively low profit margins and few economic or business models by which they can differentiate themselves or their services.

The challenges inherent in Internet operational support, particularly given its underlying best effort protocol, fully consume the attention of these Internet Service Providers (ISPs). Given its absence from the list of critical ISP priorities, data collection across individual backbones and at peering points continues to languish, both for end-to-end data (which require measurement across IP clouds) and characterization of the actual data flows, e.g., by application (web, e-mail, real-audio, FTP...); origin/destination; packet size; and duration of flows.

Yet it is detailed traffic and performance measurement and analysis that has heretofore been essential to identifying and ameliorating network problems. Trend analysis and accurate network system monitoring permit network managers to identify hot spots (overloaded paths), predict problems before they occur, and avoid congestion and outages by efficient deployment of resources and optimization of network configurations. As the nation and world become increasingly dependent on the Internet, it is critical that we develop and make available mechanisms to enable infrastructure-wide planning and analysis and to promote continued efficient scaling of the Internet. User communities are also exerting pressure on providers for verifiable service guarantees that are not readily available under the current Internet. This is particularly true for users that view the Internet as mission critical. Most notable recently are the higher education/research community (via Internet-2) and the Automotive Industry Action Group (AIAG).1/

In October 1996, members of the higher education community announced Internet-2. 2/ With more than 70 of the larger U.S. universities participating, Internet-2 aims to:

  • create and sustain a leading edge network capability for the national research community;
  • enable a new generation of applications that fully exploit the capabilities of broadband networks, e.g., media integration, interactivity, real time collaboration; and
  • improve production Internet services for all members of the academic community through technology transfer and other means.

AIAG is taking action to address similar service requirements of the major automobile manufacturers and their suppliers. In January 1997, it tasked Bellcore to develop a strategy for:

  • certifying a small number of highly competent ISPs to interconnect the private networks of automotive trading partners;
  • monitoring providers' ongoing compliance with performance standards; and
  • enforcing strict security mechanisms to authenticate users and protect data, essentially offering a virtual private network to the auto industry.

Another important initiative, the U.S. government's Next Generation Internet (NGI), will similarly have very specific metric goals against which to measure and evaluate services. The NGI aims to connect research institutions with high-speed networks that are 100 to 1,000 times faster than today's Internet, promote experimentation with the next generation of networking technologies, and demonstrate new applications. Details on the NGI are available on the National Coordination Office (NCO) for Computing, Information, and Communications (CIC) home page at www.hpcc.gov/.

In order to achieve measurements that are readily comparable and relevant, a fundamental first step is the development of common definitions of IP metrics. The Internet Engineering Task Force (IETF) IPPM working group is working to provide a more rigorous theoretical framework and guidelines for designing measurement tools robust to the wide variety of disparate signal sources in the frictionful Internet. 3/ In late 1996, draft requests for comments (RFCs) were issued delineating metrics for connectivity [Mahdavi and Paxson], one-way delay [Almes and Kalidindi], and empirical bulk transfer capacity [Mathis]. A charter for continued development of the community's understanding of performance metrics is also now available at the IPPM home page www.advanced.org/IPPM.

However, although several efforts are underway, the community is still at a very rudimentary stage with respect to actual tools that can isolate traffic bottlenecks, routing anomalies, and congestion points, and that can visualize traffic flows. With support from the National Science Foundation (NSF), 4/ for example, Merit [Labovitz] has taken extensive measurements of routing instabilities and path performance problems at and between the original NSF-chartered network access points (NAPs). That these measurements are still in their early stages of development and not yet well understood by the community has hindered widespread acceptance.

Also NSF-supported, the National Laboratory for Applied Network Research (NLANR) [Mathis/Mahdavi, PSC] is collaborating with the Department of Energy's Lawrence Berkeley Laboratory (DOE/LBL) [Paxson] to define a scalable tool set for IPPM-style metrics that could be run on a national (or international) Internet measurement infrastructure (NIMI).

Another group interested in deploying an operational measurement infrastructure is the Common Solutions Group (CSG), a consortium of 23 universities that began stationing probes at major universities and exchange points in mid-1997. This infrastructure will utilize both active and passive tests to collect data on path-specific metrics such as one-way delay, packet loss, and throughput.

Finally, in April 1997, the Trans-European Research and Education Networking Association (TERENA) Performance Working Group kicked off a two-year measurement initiative, supporting cross-European measurements of total traffic through specific links, mapping of the reachable destinations covered by a route, delay measurements, flow capacity, and continental averages of hop counts and packet loss rates.

Internet Traffic Performance Measurement

Well defined metrics of delay, packet loss, flow capacity, and availability are fundamental to measurement and comparison of path and network performance. Tools that measure in a statistically realistic way are disturbingly slow to emerge, but we are starting to see reasonable prototypes for measuring: TCP throughput (treno, Mathis&Mahdavi/PSC), dynamics indicative of misbehaving TCP implementations (tcpanaly, Paxson/LBL), and end-to-end delay distributions, such as NetNow (Labovitz/Merit), the Imeter (Intel), and detailed ping and traceroute analysis (Cottrell/SLAC).5/ Tools to isolate traffic bottlenecks and congestion points are still generally not available. Merit is developing prototype tools for measuring routing instabilities, but they are deployed only at select exchange points. The network time protocol (NTP) addresses many of the timing issues associated with measurement tools, but some metrics (e.g., one-way delay) will require synchronized infrastructure measurement probes and beacons deploying global positioning system (GPS) and similar timing devices.
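
As an illustration of what a one-way delay measurement involves, the sketch below (Python; the port number and probe count are hypothetical) shows the core mechanism: the sender stamps each UDP probe with its transmit time, and the receiver subtracts that stamp from its own clock on arrival. The numbers are meaningful only if the two clocks share a common time base, which is exactly why GPS-synchronized probes matter.

    # One-way delay probe: a minimal sketch, assuming synchronized clocks.
    # The port and probe count are illustrative, not from any deployed tool.
    import socket, struct, time

    PORT = 9999
    NUM_PROBES = 100

    def sender(receiver_host):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for seq in range(NUM_PROBES):
            # '!Id' = sequence number (unsigned int) + transmit time (double)
            s.sendto(struct.pack("!Id", seq, time.time()),
                     (receiver_host, PORT))
            time.sleep(0.1)

    def receiver():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", PORT))
        delays = []
        while len(delays) < NUM_PROBES:        # note: lost probes would stall;
            data, _ = s.recvfrom(512)          # a real tool needs a timeout
            seq, sent = struct.unpack("!Id", data)
            delays.append(time.time() - sent)  # valid only with a shared clock
        return delays

Without synchronization, only round-trip times can be measured, and asymmetric paths make halving the round trip an unreliable estimate of the one-way figure.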

Hopefully many of the emerging path performance tools will serve users quite well, both for self-diagnosis of problems they are experiencing and by letting users benefit from lessons learned by others conducting measurements over the shared infrastructure. Further, potential customers would ideally be able to evaluate and compare alternative providers and monitor service quality. Most of these path performance tools measure end-to-end characteristics from points originating and terminating outside individual networks, e.g., response time and packet loss (using ping) and reachability (using traceroute).
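
A rough sketch of this black-box style of measurement, driving the stock ping utility from Python and summarizing its output (the -c flag and output format assume a Unix-style ping; the parsing is best-effort, since output formats vary across systems):

    # End-to-end RTT and loss via the standard ping utility: a rough sketch.
    import re
    import subprocess

    def probe(host, count=10):
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True).stdout
        # pull "time=12.3" style RTT figures out of the per-packet lines
        rtts = [float(m) for m in re.findall(r"time[=<]([\d.]+)", out)]
        loss = 1.0 - len(rtts) / count
        return rtts, loss

    rtts, loss = probe("www.nlanr.net")
    if rtts:
        print("min/avg/max = %.1f/%.1f/%.1f ms, loss = %.0f%%"
              % (min(rtts), sum(rtts) / len(rtts), max(rtts), loss * 100))

Repeated over hours or days, even a simple loop like this yields the latency and loss distributions that several of the efforts in Table 1 collect systematically.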

In general, these users are most interested in metrics that indicate the likelihood that their packets will get to the destination in a timely manner. Therefore, estimates of past and expected performance for traffic across specific Internet paths, not simply measures of current performance, are important. Users are also increasingly concerned about path availability information, particularly as it affects the quality of Internet applications requiring higher bandwidth and lower latency/jitter, e.g., Internet Phone and videoconferencing. Availability of such data could assist in scheduling online events, such as Internet-based distance education seminars, and also influence user willingness to purchase higher service quality and associated service guarantees.

The NLANR web site www.caida.org/outreach/info/ maintains a repository of links to key sites containing information on performance tools. NLANR, through the Cooperative Association for Internet Data Analysis (CAIDA, described below), is working on a taxonomy of measurement tools. This analysis will be complete in mid-1997 and will be available on both the NLANR and CAIDA web sites. Tools being reviewed in this taxonomy are listed in Table 1 below.

Table 1. Performance Tools

[Link to CAIDA Tool Taxonomy
developed by Jon Kay, NLANR/CAIDA]

Internet Measurement

Name (Contact): Object Measured / Summary

TReno (Matt Mathis, Jamshid Mahdavi): TCP bandwidth / user-level TCP implementation
bing (Pierre Beyssac): bottleneck bandwidth / measures without filling the link
{b|c}probe (Bob Carter): bottleneck bandwidth / measures without filling the link

Internet Availability and Latency

ping (BRL, now ARL): availability, latency, packet loss / the original ping
Nikhef ping (Eric Wassenaar): availability, latency, packet loss / many minor differences from the original ping
traceroute (LBL): routes / each hop in the path, with per-hop latency
Nikhef traceroute (Eric Wassenaar): routes / many minor differences from the original traceroute
MTrace (Bill Fenner): multicast routes / does for multicast what traceroute does for unicast
traceroute web servers: traceroutes from odd places / reverse traceroute from all over the globe
wwping (Jonathon Fletcher): web server availability / tries a single HTML query, returns server info

User-Oriented Internet Measurement Efforts

timeit (Jeff Sedayao, Kotaro Akita, Cindy Bickerstaff): web performance / benchmark of HTTP query performance across the Internet
Montreal Internet service providers (Peter Burke Consulting): latency and packet loss to many Montreal ISPs / attempt to systematically rate Montreal ISPs

Internet Measurement Efforts


MIDS Internet Weather Report (John S. Quarterman, Smoot Carl-Mitchell, Gretchen Phillips): global end-to-end latency / latency distributions from Austin, TX to all over the world
Internet Weather Report (Clear Ink): large-ISP latency and packet loss / latency and loss from the Bay Area to large ISPs
Network Probe Daemon (NPD) (Vern Paxson): route behavior / traceroute data from various end hosts
NetNow (Craig Labovitz): ISP backbone delay and packet loss / taken from each NAP
IPMA (Merit IPMA Project): backbone routing behavior / a study of backbone routing behavior and problems
Routing Arbiter stats tracking: defunct / replaced by IPMA
SLAC WAN Monitoring (Les Cottrell, Connie Logg): latency and availability to assorted sites / systematic pinging, sophisticated recording of results
MFS MAE Information (MFS): link utilization and current connections / MAE status reports
looking glass (Ed Kern): router stats / web interface for querying router debugging stats

High Performance Measurement Tools

netperf (Rick Jones): minimum latency and maximum throughput / very thorough high-performance benchmark, includes a results database
ttcp (BRL, now ARL): maximum throughput / this archive includes most versions
nettest: maximum throughput / Cray throughput measurement program
netspec (Roel Jonkman): maximum throughput / throughput test scripting language

High Performance Measurement Efforts

vBNS Perf Sampling (Von Welch): vBNS maximum throughput / measured between sample host pairs

Packet Trace Collectors

tcpdump (LBL): Unix / the most common portable packet dump program
snoop (Sun, SGI): bundled with Solaris and Irix
etherfind (Sun): SunOS / packet dumper bundled with SunOS
Packetman (Netman Group): LAN packet dumper
CflowD (Daniel W. McRobb, John Hawkinson): analyzes Cisco flow-export packet dumps
OC3mon (MCI/NLANR): PC with 155 Mbit/s ATM card / fast ATM flow dumper
fs2flows (NLANR): Unix / extracts flows from packet dumps
NeTraMet (Nevil Brownlee): DOS, Unix / flow monitoring and analysis for accounting

Statistics Collection

NetSCARF (Merit NetSCARF Team): collects, manages, and displays SNMP stats

Internet Traffic Flow Measurement

Traffic flow characterization measurements focus on the internal dynamics of individual networks and cross-provider traffic flows. They enable network architects to better:

  • engineer and operate networks,
  • understand global traffic trends and behavior, and
  • adopt / respond to new technologies and protocols as they are introduced into the infrastructure.

Data available from traffic flow tools include flow type (e.g., web, e-mail, FTP, real-audio, and CUSeeMe); sources/destinations of traffic; and distributions of packet sizes and flow durations. These measurement tools must be deployed within networks, particularly at border routers and peering points. Traffic flow characterization therefore requires a higher degree of cooperation and involvement by service providers than do end-to-end performance oriented measurements. End users and large institutional sites can also use these tools to monitor traffic; e.g., MCI has placed OC3mon flowmeters at vBNS nodes at each of the NSF-supported supercomputing centers. These devices provide detailed information on traffic flows and assist in analyzing usage and flagging anomalies.
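
The notion of a flow underlying such tools is typically a stream of packets sharing a key (source/destination addresses and ports plus protocol) with no gap between packets exceeding some timeout. A minimal sketch of that aggregation logic in Python (the 64-second timeout is an assumption here; real tools make it configurable):

    # Aggregating a packet stream into flows: a minimal sketch.
    TIMEOUT = 64.0   # seconds of silence after which a flow is considered over

    def aggregate(packets):
        """packets: time-ordered iterable of (timestamp, key, nbytes),
        where key = (src, dst, sport, dport, proto)."""
        active, finished = {}, []
        for ts, key, nbytes in packets:
            flow = active.get(key)
            if flow and ts - flow["last"] > TIMEOUT:
                finished.append(flow)              # expired: close it out
                flow = None
            if flow is None:
                flow = {"key": key, "first": ts, "pkts": 0, "bytes": 0}
                active[key] = flow
            flow["last"] = ts
            flow["pkts"] += 1
            flow["bytes"] += nbytes
        return finished + list(active.values())

The resulting records carry exactly the quantities discussed above: per-flow packet and byte counts, start time, and duration (last minus first timestamp).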

Today's infrastructure is unprepared to deal with large aggregations of flows, particularly flows that are several orders of magnitude higher in volume than the rest, e.g., videoconferencing. Providers and users need mechanisms and tools to support more accurate accounting for the resources and bandwidth consumed.

Flow characterization tools include the OC3mon traffic monitor (providing realtime monitoring of traffic at 155 Mbps speeds), developed by Joel Apisdorf and others within MCI's vBNS research team. MCI makes detailed flow data graphics publicly available through the vBNS web site www.vbns.net. Figure 1 below represents a time series plot of flows across the vBNS node at the National Center for Supercomputing Applications (NCSA) from January 24-28, 1997. Other data on autonomous systems; country-specific flows; and distributions of packet sizes, flow volumes, and flow durations are also available according to user-defined flow characteristics. 6/

[img]
Figure 1. Time series plot of packets across vBNS at NCSA

Nevil Brownlee of the University of Auckland, New Zealand, has also been working with the IETF Realtime Traffic Flow Measurement (RTFM) working group to develop tools for accounting and related flow measurement.7/ He has developed the NeTraMet and Nifty tools, most notably to support resource accounting in New Zealand. John Hawkinson (BBN Planet) and Daniel McRobb (ANS) are developing the Cflowd tool to augment and further analyze the data provided by the NetFlow switching capability of Cisco routers. They presented preliminary results of their analyses at the February 1997 meeting of the North American Network Operators Group (NANOG).
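
Whatever the collector (NetFlow export, OC3mon, or NeTraMet), much of the downstream analysis reduces to ranking and aggregation over flow records. A hedged sketch of one such summary, assuming records have already been decoded into simple tuples (real NetFlow export requires binary decoding first, which is the part Cflowd handles):

    # Ranking traffic sources from decoded flow records: a sketch, not Cflowd.
    from collections import Counter

    def top_talkers(records, n=10):
        """records: iterable of (src, dst, packets, bytes) tuples."""
        by_src = Counter()
        for src, dst, pkts, nbytes in records:
            by_src[src] += nbytes                  # rank sources by byte count
        return by_src.most_common(n)

    records = [("10.0.0.1", "10.0.0.2", 12, 9000),
               ("10.0.0.3", "10.0.0.2", 3, 400)]
    for src, total in top_talkers(records):
        print(src, total, "bytes")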

Traffic Analysis / Visualization / Simulation / Modeling

Given the dynamic nature of the Internet environment, collected traffic data will be primarily of historical interest unless tangible improvements occur in our ability to analyze and predict network behavior. Without the fundamental understanding that internetwork traffic modeling and simulation offer, practitioners will remain skeptical about the utility of empirical measurement studies.

Yet there is currently little consensus on how to accomplish IP traffic modeling or how to incorporate real time statistics into such analyses. Telephony models developed at Bell Labs and elsewhere rely on queuing theories and other techniques that are not readily applicable to Internet-style packet-switched networks. In particular, Erlang distributions, Poisson arrivals, and other tools for predicting call-blocking probabilities and other vital telephony service characteristics typically do not apply to wide area internetworking technologies.
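
The contrast is easy to make concrete. Erlang B, the classic telephony result, gives the probability that a call is blocked when E erlangs of load are offered to m circuits, and it is computable in a few lines (sketch below, using the standard recurrence). No comparably simple closed form predicts loss or delay for bursty packet traffic, which is precisely the modeling gap at issue.

    # Erlang B blocking probability:
    #   B(E, m) = (E^m / m!) / sum_{k=0..m} E^k / k!
    # computed via the standard recurrence B_m = E*B_{m-1} / (m + E*B_{m-1}).
    def erlang_b(erlangs, circuits):
        b = 1.0                                    # B_0 = 1
        for m in range(1, circuits + 1):
            b = (erlangs * b) / (m + erlangs * b)
        return b

    # e.g., 10 erlangs offered to 15 circuits:
    print("blocking probability = %.4f" % erlang_b(10.0, 15))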

Internet measurement researchers face growing skepticism from practitioners (ISP engineers) who question the utility and relevance of traffic studies vis-a-vis the realities of instrumenting large Internet backbones. While gaps persist, the mutual inter-dependence of these communities and the growing requirements to assess Internet dynamics suggest a strong need for identifying common ground.

Visually depicting Internet traffic dynamics is the goal of collaborations among NLANR researchers at Stanford University, the University of California, San Diego, and Xerox PARC, described below.

Traffic Visualizations - In late 1995, k claffy and Eric Hoffman (Ipsilon; now NLANR/UCSD); Tamara Munzner (Stanford University); and Bill Fenner (Xerox PARC) began work on visually depicting Internet traffic components. Rather than tackle the Internet topology as a whole, they chose to experiment with visualization techniques using the smaller Mbone infrastructure.8/

[img]
Figure 2. European Mbone Traffic - illustrates the European Mbone topology, which exhibits a relatively more efficient star structure than the United States Mbone, largely because bandwidth scarcity provides a stronger incentive for efficient configurations. Data from March 17, 1997.


    To depict this traffic, Munzner et al. used the mrwatch utility, developed by Atanu Ghosh at University College London, to collect Mbone data. They then translated this data into a geographic representation of the tunnel structure, drawn as arcs on a globe after resolving the latitude and longitude of the Mbone routers. The resulting visualizations permit a level of understanding of the global Mbone structure that is unavailable from the data in its original form, i.e., lines of text containing only hostnames and IP addresses. The representations are interactive and three dimensional, and permit analysts to define groupings and thresholds in order to isolate aspects of the Mbone topology.
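
The geometry involved is straightforward: each router's latitude and longitude map onto a sphere, and a tunnel becomes an arc lifted above the surface between its two endpoints. A sketch of just the coordinate math (the actual Munzner et al. rendering and VRML generation are not reproduced here):

    # Mapping (latitude, longitude) onto a unit sphere for globe rendering.
    from math import radians, sin, cos

    def to_xyz(lat_deg, lon_deg):
        lat, lon = radians(lat_deg), radians(lon_deg)
        return (cos(lat) * cos(lon), cos(lat) * sin(lon), sin(lat))

    def arc(p, q, steps=16, lift=0.2):
        """Interpolate between two surface points, scaling each intermediate
        point outward so the arc bows above the globe, peaking mid-arc."""
        pts = []
        for i in range(steps + 1):
            t = i / steps
            x, y, z = (p[j] + t * (q[j] - p[j]) for j in range(3))
            norm = (x * x + y * y + z * z) ** 0.5
            scale = (1.0 + lift * 4 * t * (1 - t)) / norm
            pts.append((x * scale, y * scale, z * scale))
        return pts

    san_diego, london = to_xyz(32.7, -117.2), to_xyz(51.5, -0.1)
    print(arc(san_diego, london)[8])   # apex of the arc between the two sites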

NLANR makes these maps publicly available as both still images and VRML objects, the latter for use with a VRML (virtual reality modeling language) browser.

[img]
Figure 3. Global Mbone traffic - illustrates the concentration of Mbone traffic in the Northern Hemisphere (US & Europe) - data from March 17, 1997


    NLANR staff are developing a network visualization tool (Anemone) that they have already used for tasks such as delineating relationships among Autonomous Systems (ASes). Figure 4 below depicts AS peering adjacencies, sampled from a BGP session on May 1, 1996. Node sizes are proportional to the total number of BGP peering relationships in which an Autonomous System participates; line sizes are proportional to the number of routes advertised across the corresponding adjacency.

[img]
Figure 4. AS Peering Relationships
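
The sizing rules behind Figure 4 are simple to derive once the BGP data has been reduced to adjacency triples. A sketch (Python; the triples below are invented for illustration, and extracting them from a live BGP session is a separate task):

    # Node and edge weights for an AS adjacency graph: a sketch.
    from collections import Counter

    def graph_weights(adjacencies):
        """adjacencies: iterable of (as1, as2, routes_advertised)."""
        degree = Counter()                     # node size ~ number of peerings
        edges = {}
        for as1, as2, routes in adjacencies:
            degree[as1] += 1
            degree[as2] += 1
            edges[(as1, as2)] = routes         # line size ~ routes advertised
        return degree, edges

    degree, edges = graph_weights([(701, 1239, 4200), (701, 3561, 2800)])
    print(degree[701], edges[(701, 1239)])     # 2 peerings; 4200 routes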


    Figure 5 provides a 3D VRML view of AS peering, in contrast to the earlier planar view. This image depicts BGP peering relationships for all ASes that peer with at least seven other ASes.

[img]
Figure 5. 3D View of AS Peering Relationships


    Development of a prototype global web caching hierarchy is another focus area of NLANR. Under the direction of k claffy and Duane Wessels of UCSD/NCAR, NSF and Digital are sponsoring the deployment of root web caches at each of the NSF-supported supercomputing centers (SCCs). The SCCs, and hundreds of the caches that tie into these root caches, run the NLANR-developed Squid caching software, a publicly available package supported by community volunteers led by Duane Wessels. Details on this project are available at

www.nlanr.net/Cache.
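
The behavior that makes such a hierarchy work is easy to state: a cache that misses locally may ask siblings whether they hold the object (Squid uses ICP for this) and otherwise forwards the request to a parent, recursing up toward a root cache. A toy model of that resolution order, not Squid's implementation:

    # Resolution order in a web-cache hierarchy: a toy model, not Squid.
    class Cache:
        def __init__(self, name, parent=None, siblings=()):
            self.name, self.parent, self.siblings = name, parent, siblings
            self.store = {}

        def get(self, url):
            if url in self.store:                  # local hit
                return self.name, self.store[url]
            for sib in self.siblings:              # ICP-style sibling query
                if url in sib.store:
                    return sib.name, sib.store[url]
            if self.parent:                        # miss: forward to parent
                where, obj = self.parent.get(url)
            else:
                where, obj = "origin", "<body of %s>" % url  # stand-in fetch
            self.store[url] = obj                  # cache on the way back down
            return where, obj

    root = Cache("root")
    campus = Cache("campus", parent=root)
    print(campus.get("http://www.nlanr.net/")[0])  # 'origin' on first request
    print(campus.get("http://www.nlanr.net/")[0])  # 'campus' thereafter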

[img]
Figure 6. Cache traffic in Asia

As part of this Global Caching project, NLANR has developed a tool to visualize global caching traffic flows. Figure 6 shows a snapshot of Asian caching traffic patterns on January 19, 1997. The red colored flows indicate a high volume of traffic between the caches. NLANR software automatically updates these images daily, and they have already proven useful in optimizing various caching topologies. In particular, mid-1996 analysis and visualization of caching logs helped support the NLANR decision to implement access controls on the root caches in the United States, forcing coherence with a sounder hierarchical global structure.

Various tools used to depict and visualize Internet traffic flows are identified in Table 2 below. Those used to develop NLANR/CAIDA visualizations are described at https://catalog.caida.org/search?query=types=software.

Table 2. Internet Measurement Visualization

from the CAIDA Tool Taxonomy by
Jon Kay, NLANR/CAIDA
Name (Contact): Summary

Link congestion visualization (NLANR): plot of latency variance on routes to various hosts
Mbone visualization (NLANR): Mbone geographic visualization, updated daily
Web cache visualization (NLANR): Squid cache hierarchy geographic visualization, updated daily
A Map of the Mbone (Elan Amir): a map of the Mbone
ASExplorer (Merit IPMA Project): NAP route map
pubnetmap (Dave Jevans): visualization of all Internet links and latencies
Etherman (Netman Group): LAN traffic monitor
Interman (Netman Group): LAN connectivity monitor
Geotraceman (Netman Group): geographical traceroute
Hostname to Lat/Long: useful subroutine for Internet mappers
xplot (Tim Shepard): makes TCP plots
MIDS (John Quarterman & Co.): Internet cartographers extraordinaire
mview (Thaler): Mbone status visualization

Simulation Tools/Models - In addition to the lack of data on Internet traffic flows and performance, a similar dearth exists in quality analysis, modeling, and simulation tools, particularly ones capable of addressing Internet scale. The absence of such tools hinders the ability of network engineers and architects to plan and execute the introduction of new technologies and protocols. Developing and improving these tools requires forums that raise the level of cooperation between researchers and commercial firms. Illustrative current tools include (a minimal simulation sketch follows the list):
  • NETSYS Advisor (tool focused on managing the network change life cycle, from planning to problem identification and repair) - Netsys/Cisco
  • REAL (network simulator that tests congestion and flow control) - by S. Keshav, Cornell
  • VINT (Virtual InterNetwork Testbed) - a collaboration between USC/ISI, Xerox PARC, and LBL that builds on ns v.1, an event-driven simulator developed by S. McCanne, S. Floyd, and K. Fall at Lawrence Berkeley Labs (LBL)
  • Commercial off-the-shelf software such as OPNET (an environment for modeling, simulating, and analyzing network performance) by MIL 3 and BONeS (facilitating the design and analysis of system architectures, networks, protocols, and traffic) by the Alta Group.
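
All of these simulators share one core idea: an event queue processed in time order. The skeleton below (Python) illustrates it with a single FIFO bottleneck link fed by Poisson arrivals; it is a teaching sketch under those assumptions, not a stand-in for ns or OPNET.

    # Event-driven simulation of one FIFO bottleneck link: a teaching sketch.
    import heapq
    import random

    def simulate(rate_pps, service_s, duration_s, seed=1):
        """Mean per-packet delay through one link with fixed service time."""
        random.seed(seed)
        events, t = [], 0.0
        while True:
            t += random.expovariate(rate_pps)   # exponential interarrivals
            if t >= duration_s:
                break
            heapq.heappush(events, t)           # the event queue
        link_free_at, delays = 0.0, []
        while events:
            now = heapq.heappop(events)         # earliest event first
            start = max(now, link_free_at)      # wait if the link is busy
            link_free_at = start + service_s
            delays.append(link_free_at - now)   # queueing + transmission
        return sum(delays) / len(delays)

    # 800 packets/s into a link serving one packet per millisecond (~80% load):
    print("mean delay: %.4f s" % simulate(800, 0.001, 10))

Real simulators interleave many event types (arrivals, departures, timer expirations, route changes) on the same queue; scaling that machinery to Internet-sized topologies is the open problem noted above.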


CAIDA

ISPs and users alike need improved end-to-end traffic monitoring and analysis capabilities. The development of business and economic models and of differentiated qualities of service, not to mention continued scaling of the global Internet infrastructure, requires more accurate data and analyses than are currently available. The meshed nature of the global Internet, however, dictates that no single entity can do it alone.

Toward this end, NLANR/UCSD is creating the Cooperative Association for Internet Data Analysis (CAIDA). CAIDA is meant to be inclusive -- building upon existing NLANR measurement collaborations with supercomputing centers, ISPs, universities, vendors, and government. CAIDA is also responsive: it is designed to meet the evolving and future needs of the Internet by encouraging continued innovation by the R&E community, tempered by the realities of the commercial Internet infrastructure.

CAIDA is a collaborative undertaking to promote greater cooperation in the engineering and maintenance of a robust, scalable global Internet infrastructure. It will address problems of Internet traffic measurement and performance and of inter-provider communication and cooperation within the Internet service industry.   It will provide a neutral framework to support these cooperative endeavors.  Tasks are defined in conjunction with participating CAIDA organizations, and are either...

  • solicited by one or more sponsoring CAIDA members;
  • jointly proposed by CAIDA member(s) and collaborating CAIDA researchers; or
  • proposed by CAIDA researchers themselves to one or more CAIDA sponsoring organizations.

CAIDA's initial goals include:

  • creating a set of Internet performance metrics (in collaboration with IETF/IPPM and other organizations); and working with industry, consumer, regulatory, and other representatives to assure their utility and universal acceptance;
  • creating a collaborative research environment in which commercial providers can share performance and engineering data confidentially, or in desensitized form; and
  • fostering the development of advanced networking technologies such as...
    - Multicast and the Mbone
    - Web Caching Protocols/Hierarchies
    - Traffic Performance and Flow Characterization Tools
    - Bandwidth Reservation and Related QoS
    - Traffic Visualizations, Simulations, and Analyses
    - BGP Instability Diagnosis
    - "Next Generation" Protocols / Technologies, e.g., IPv6

The goal is to have both government and industry participate in CAIDA, for the benefit of the larger Internet environment.

Inherent in CAIDA's creation are fundamental precepts covering the acquisition and use of data and the public availability of CAIDA's tools. Specifically, CAIDA member participants will determine...

  • what data to contribute - CAIDA members will set targets specifying the type and format of data to make available to participants
  • what data can be released - CAIDA receives data on specific ISP networks under non-disclosure agreements (NDAs), for use in research. Over time, CAIDA members themselves will set targets specifying the type and format of data to make available internally to CAIDA members, semi-externally to customers, and externally to the public. Aggregate, general data will be available to the public whenever possible. NLANR and MCI currently make data on flows through FIX-West and across the vBNS backbone publicly available. In the case of the FIX-West flows, data are filtered through tcpdpriv and anonymized to protect user privacy (a minimal sketch of this style of anonymization follows below) before the traces are made publicly available for research and trace-driven simulations. Such data provide a wealth of information on traffic flows. As importantly, CAIDA's tools and source code will be publicly available.
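
The principle behind a sanitizer like tcpdpriv is consistent renumbering: every real address maps to the same dummy address throughout a trace, so flow structure survives while identities do not. A minimal sketch of that idea (not tcpdpriv itself, which additionally offers prefix-preserving modes):

    # Consistent trace anonymization: a minimal sketch, NOT tcpdpriv.
    import itertools

    class Anonymizer:
        def __init__(self):
            self.table = {}                        # real address -> dummy
            self.counter = itertools.count(1)

        def addr(self, ip):
            if ip not in self.table:
                n = next(self.counter)             # stable per-trace mapping
                self.table[ip] = "10.%d.%d.%d" % (
                    n >> 16 & 255, n >> 8 & 255, n & 255)
            return self.table[ip]

    anon = Anonymizer()
    print(anon.addr("132.249.40.1"))   # 10.0.0.1
    print(anon.addr("132.249.40.1"))   # same dummy address every time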

Currently, CAIDA is a project of the National Laboratory for Applied Network Research (NLANR) within the University of California, San Diego. In May 1997, NLANR/CAIDA will host the second in its series of Internet Statistics and Metrics Analysis (ISMA) workshops. During 1997, UCSD/NLANR personnel will work with participating companies to define the goals, priorities, and desired membership of CAIDA. By late 1997, CAIDA will be registered as a non-profit organization.

UCSD has submitted a proposal to the National Science Foundation to help seed the CAIDA effort.   Complementing an industry-wide effort with government support will promote balance among the needs of the various communities (private, research, government, and users) and facilitate the near term development and deployment of critical measurement technology and techniques.

For more information on the status of CAIDA and information on its tools and associated analyses, visit...

CAIDA

https://www.caida.org


    See also:

Cooperation in Internet Data Acquisition and Analysis
A Survey of Internet Statistics and Metrics Analysis Activities

Footnotes

  1. The AIAG has identified several metrics it views as critical to the automotive industry and its future monitoring efforts, including: performance metrics such as latency, packet/cell loss, link utilization, throughput, and routing configuration/peering arrangements; reliability metrics such as physical route diversity; routing protocol convergence times; disaster recovery plans; backbone, exchange point, and access circuit availability; and MTTR (mean time to repair).

    Information on the automotive industry's telecommunications initiatives is at the Automotive Industry Action Group's (AIAG) Telecommunications working group page [no longer available]. Robert Moskowitz (Chrysler) also described some of these activities at NLANR's ISMA workshop in February 1996: https://www.caida.org/workshops/isma/9602/positions/moskowitz.html

  2. Preliminary engineering specifications are available on the Internet-2 web site (www.internet2.edu). Ultimately, these specifications are to include definable and measurable qualities of service, including: latency and jitter specifications; bandwidth interrogation and reservation capabilities; and packet delivery guarantees.
  3. Vern Paxson's paper, entitled Towards a Framework for Defining Internet Performance Metrics, provides a cogent discussion of the need for new tools for passive and active measurement of network traffic. As these tools are developed, particular attention needs to be devoted to privacy considerations (particularly with passive measurement tools) and to designing tools that are minimally invasive (particularly active measurement tools, which can themselves disrupt or distort traffic patterns).

    The IETF's IP Performance Metrics working group is addressing these and related measurement issues: www.advanced.org/IPPM.

  4. The National Science Foundation (NSF) Division of Networking and Communications Research and Infrastructure (DNCRI) has supported several recent projects and events related to Internet traffic analysis. NLANR efforts include:
    • The Internet Statistics and Metrics Analysis (ISMA) workshop (February 1996 and May 1997): https://www.caida.org/workshops/isma
    • Traffic measurements at the FIX West facility and across the vBNS: http://www.nlanr.net/NA
    • Visualizations of caching traffic and Mbone topologies, as well as BGP peering relationships among autonomous systems: https://www.caida.org/outreach/info/
    • Development of software using modified pings to assess end-to-end performance: http://www.nlanr.net/Viz/End2end
    • Summaries of available provider and NAP statistics: https://www.caida.org/outreach/info/ and other sources of relevant statistics/metrics information.
  5. DOE's ESnet established a "State of the Internet" working group in May 1996. The working group and ESnet's Network Monitoring Task Force (NMTF) are collaborating with other organizations to implement WAN statistics collection and analysis across the Internet infrastructure of the global high energy physics community. See the paper by Les Cottrell and Connie Logg entitled Network Monitoring for the LAN and WAN, presented at ORNL, June 24, 1996, at http://www.slac.stanford.edu/~cottrell/tcom/ornl.htm.
  6. MCI/vBNS and NLANR have collaborated on development and deployment of the OC3mon flow tool for operational use on the vBNS. OC3mon is a PC-based monitor with two ATM network interface cards (NICs) that connect to both directions of an OC3 link, each funneling 5% of the light in the fiber to a processor that examines the first ATM cell of each packet. The hardware costs less than $5,000, and the software is freely distributed by MCI/NLANR (ftp://ftp.nlanr.net/Software/Oc3mon/). By mid-1997, the monitor will be available for OC-12 speeds. Efforts are also underway to port the tool to multiple platforms and to develop a prototype running on PPP over SONET. For more information on the tool, see: www.nlanr.net/NA/Oc3mon/.
  7. Details on the NeTraMet flowmeter and the Realtime Traffic Flow Measurement (RTFM) working group are available at http://www.auckland.ac.nz/net/Internet/rtfm/TOP.html.

    NANOG minutes, including the February Cflowd presentation by Daniel McRobb (ANS) and John Hawkinson (BBN Planet) are available at www.nanog.org.

  8. See also "Visualizing the Global Topology of the Mbone," by Tamara Munzner, Eric Hoffman, K. Claffy, and Bill Fenner, Proceedings of the 1996 IEEE Symposium on Information Visualization, pp. 85-92, October 28-29, 1996, San Francisco, CA. http://www-graphics.stanford.edu/papers/mbone/

Last updated 21 March 1997. Inquiries should be sent to tmonk@nlanr.net.


Related Objects

See https://catalog.caida.org/paper/1997_data_inet97/ to explore related objects to this document in the CAIDA Resource Catalog.