
Report from the ISMA Network Visualization Workshop


Introduction and Goals

On April 15-16, 1999, CAIDA hosted an Internet Statistics and Metrics Analysis (ISMA) workshop on Network Visualization. The meeting engaged researchers and practitioners experienced in both the visualization and networking fields in discussions aimed at identifying:

  • current priorities and areas of commonality between the visualization and networking fields, spanning both research and operational networking;
  • the kinds of visualizations that are most useful to specific networking groups, including researchers, hardware/software vendors, Internet service providers, and users;
  • areas for data aggregation and analysis that can enhance the utility of visualizations of networking data; and
  • critical next steps associated with visualizing network topologies, connectivity, routing, performance, and other traffic data.

Participation in the workshop was by invitation only, based on individuals' current involvement in these topics. Approximately 55% of attendees were from the research and education fields; the remainder represented Internet service providers (ISPs), network access points (NAPs), or vendors. The meeting was held at the San Diego Supercomputer Center (SDSC) on the campus of the University of California, San Diego (UCSD), and was sponsored by the Cooperative Association for Internet Data Analysis (CAIDA) and funded by the National Science Foundation (NSF) under grant number ANI-9996248.


Findings and Conclusions

Visualization techniques, while mature in some disciplines, are only recently making inroads into the dynamic and complex Internet environment. Techniques from other disciplines, including the physical and biological sciences, cartography, and the air traffic and utility control sectors, offer promising approaches to the visual analysis requirements of Internet operators, researchers, and users. Visualization tools currently available in the Internet sector tend to be in the nascent stages of development and deployment, sometimes with an unfortunate gap between visualization R&D and the needs of ISPs. Most current tools are home-grown by ISP staff and employ simple plotting and graphing utilities to display trend and related data. Flashier visualization tools that do not directly address a problem perceived by ISPs are not used.

For example, the Multi-Router Traffic Grapher (MRTG) tool was cited as a useful visualization tool for ISPs. Its simplicity, cross-platform functionality, web interface, and the fact that it is free are key factors in its popularity. Perhaps most attractive is its flexibility: it can be used to graph utilization of network segments, but it is just as useful for graphing NNTP transfer error occurrences on a news server. These sorts of powerful, narrowly scoped tools that can be adapted to many purposes are what ISPs, and perhaps IS organizations in general, can actually use. Commercial offerings, such as Hewlett Packard's OpenView, are generally expensive and often fail to address backbone ISP requirements for flexibility and comprehensive network/data coverage.
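
A minimal MRTG configuration in this spirit might look like the following sketch (the router name, interface index, and SNMP community string here are hypothetical; a MaxBytes of 1250000 bytes per second corresponds to a 10 Mbit/s link):

    WorkDir: /var/www/mrtg
    Target[backbone]: 2:public@border-router.example.net
    MaxBytes[backbone]: 1250000
    Title[backbone]: Traffic on interface 2 of border-router
    PageTop[backbone]: <H1>Backbone Link Utilization</H1>

With only these few lines, MRTG polls the interface's SNMP octet counters every five minutes and renders daily, weekly, monthly, and yearly traffic graphs as web pages.

According to ISP and NAP representatives at the workshop, the visualization tools that they require would include functionality to: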

  • depict simple traffic activity on a network or at a NAP. In Network Operations Centers (NOCs), where technicians serve as the front-line team responding to customer inquiries, fault alerts, and apparent traffic anomalies, simple visualizations of real-time traffic flows might be useful if they could facilitate problem assessment and response, resulting in fewer problems that require escalation to the engineering staff (i.e., cost savings) as well as quicker response times (i.e., happier customers).
  • serve as front-ends linking to other forms of data, reports, or analyses. Participants agreed that visualization tools are most useful when they link various types of data and include "drill-down" features. For example, after an alert/alarm is triggered in the NOC, a decent visualization tool that can help localize the problem, identify specific hardware, correlate with peering and routing data, and evaluate against trend data for the affected link can significantly enhance the quality and timeliness of problem resolution.

    An example of a tool of this sort that does not yet exist is a pared-down C implementation of a (probably receive-only) BGP stub. This tool would run on a host, establish a peering session to each router for each of its outbound peer groups, and receive a real-time feed of the sets of routes that each router advertises to each of its peers. Such a tool could alarm on problematic changes in a simple way (a rough sketch of its read loop follows). The point is that ISPs do not need a polished package with a fixed and limited set of high-level features as much as building blocks that they can integrate into tools of their own (just like MRTG).
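
    As a rough illustration of the building-block idea, the core read loop of such a stub is small. The sketch below is hypothetical, not an existing tool: it assumes a TCP session to a router's BGP port has already been established and that the OPEN/KEEPALIVE exchange needed to hold the session up is handled elsewhere; it merely frames messages using the fixed 19-byte BGP header and flags UPDATEs as route changes worth examining.

        /* Hypothetical sketch of a receive-only BGP stub's read loop.
         * A real tool must also send an OPEN and answer KEEPALIVEs to
         * keep the session alive; here we only frame messages by the
         * 19-byte BGP header (16-byte marker, 2-byte length, 1-byte
         * type) and flag UPDATE messages.
         */
        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>

        #define BGP_HDR_LEN 19
        #define BGP_UPDATE   2

        /* Read exactly n bytes; return 0 on EOF or error. */
        static int read_full(int fd, unsigned char *buf, size_t n)
        {
            size_t got = 0;
            while (got < n) {
                ssize_t r = read(fd, buf + got, n - got);
                if (r <= 0)
                    return 0;
                got += (size_t)r;
            }
            return 1;
        }

        /* Consume messages from an established BGP session on fd. */
        void watch_peer(int fd)
        {
            unsigned char hdr[BGP_HDR_LEN], body[4096];

            while (read_full(fd, hdr, BGP_HDR_LEN)) {
                uint16_t len;
                memcpy(&len, hdr + 16, 2);   /* octets 16-17: length */
                len = ntohs(len);
                uint8_t type = hdr[18];      /* octet 18: type */

                if (len < BGP_HDR_LEN ||
                    (size_t)(len - BGP_HDR_LEN) > sizeof(body))
                    break;                   /* malformed: give up */
                if (len > BGP_HDR_LEN &&
                    !read_full(fd, body, len - BGP_HDR_LEN))
                    break;
                if (type == BGP_UPDATE)
                    printf("UPDATE, %u bytes: route change to examine\n",
                           len);
            }
        }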

  • Participants concluded that different communities have different needs for visualization of networking data. Attendees identified several critical visual data types required by specific groups, including:

    • NOCs - real-time visualizations of the entire network can assist in fault and problem identification and front-line problem resolution; geographic visualizations of large networks assist NOC personnel in locating and responding to problems and customer inquiries.
    • Engineers - visualization tools serve as important front-ends for accessing detailed engineering, traffic performance, routing and other details; visualizations of logical topologies are of greatest benefit.
    • Managers - analysts preparing status and trend reports for management need tools to summarize various link utilization and traffic performance details at varying levels of granularity over time.
    • Network Planners - network architects require details on peering and traffic to/from specific autonomous systems (ASes), and information on the nature of the traffic, e.g., port and protocol matrices; modeling and simulation tools, particularly those capable of incorporating real traffic traces and operating across router/switch hardware, are also important.
    • Researchers - network researchers require visualization tools similar to those of engineers and network planners, as well as tools for characterizing traffic details such as interarrival times, protocol and application behavior, distribution among traffic types, and service qualities.
    • Customers - details on traffic performance, particularly latencies and packet loss, are important for demonstrating qualities of service for customers; building upon traditional mapping techniques can enhance the relevance of these data for laymen.
    • Utilities - geographic maps showing locations of buried fiber could assist in minimizing cable damage by backhoes.
    • System and Network Administrators - logical infrastructure overlays, such as a web cache mesh, the Mbone, or the IPv6bone, benefit from both logical and geographic depictions of peer, parent, child, and sibling relationships.
    • Policy Makers - policy, economic, and regulatory decision-making can benefit from visual depictions of networks, peering, traffic flows, commerce, and other interactions among competing interests in this sector.

    While logical depictions of networks are generally of the greatest use to engineers and researchers alike, geographic structure can be helpful. Several participants cited examples of logical depictions that were questioned by management and others based on simple layout features, such as placement of East Coast nodes on the left side of topology maps. Cartographic principles have a long-standing role in human interpretation of visual mappings. Incorporating cartographic features, where appropriate, can assist in making Internet traffic more comprehensible to laypersons. Being able to navigate between logical and geographic depictions, as exemplified by the Plankton tool's depiction of the global cache hierarchy, can serve similar purposes.

    Researchers and network planners are in particular need of tools to facilitate modeling and simulation of various real-world and theoretical data sets. Graph layout, fast rendering, and correlation/drill-down features are of great importance to these groups. Examples of these kinds of tools include nam and Make Systems' products. Several participants noted that they are licensing Tom Sawyer's graph layout code for their tools. Visual diagnostic tools, such as VisualRoute and MakeRoute, also show promise; however, the general unavailability of latitude and longitude data in the DNS records of Internet routers and hosts [RFC1876] undermines the accuracy and usefulness of these tools (an example record appears below).
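
    For reference, RFC 1876 defines a LOC record with which operators could publish such coordinates in the DNS; a zone-file entry takes roughly the following form (the hostname and coordinates below are purely illustrative):

        core1.example.net.  IN LOC  32 53 01 N 117 14 25 W 100m

    The fields give latitude and longitude (degrees, minutes, seconds) and altitude, with optional precision fields following; the workshop observation was simply that almost no operators populate such records.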

    The adequacy of existing visual metaphors for describing Internet networks and traffic behavior merits continued evaluation by the community. However, participants agreed that the near-term priorities will continue to be (1) developing better means of gathering and analyzing data and (2) better mapping of this information to existing visual metaphors, with better linkage between alternative visualizations (maps, layouts, plots, histograms, 3-D images, etc.) and summary data. Key data sources for these visualizations currently include:

    • MIB counters and SNMP polling (e.g., interface-level data, including input and output octet, packet, and error counters). ISPs noted that these counters are often inaccurate and frequently are poorly implemented by hardware vendors (and such vendors really need to fix this). The information reported by these counters is also not always sufficient for engineers' needs, and when inaccurate it is particularly insufficient; see the utilization sketch following this list.

    • Circuit accounting - virtual circuit utilization and performance information as represented by data from vendor-specific switch accounting processes.
    • Ping measurements - as indications of reachability.
    • Flow export data, currently available only for Cisco routers (OC3 and below) and for stand-alone monitors (OC12 and below). Acquisition of these data by ISPs still tends to be limited, based in part on the limited availability of analysis software.
    • Performance measurements (e.g., as derived through various tools such as ping, traceroute, Treno, mping, and others, see https://www.caida.org/tools/taxonomy) which provide path specific information used in performance monitoring and problem diagnosis.
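
    To make the counter arithmetic concrete, the following hypothetical helper (not a tool discussed at the workshop) shows the standard derivation of link utilization from two successive ifInOctets samples. Because these MIB counters are 32 bits wide, they wrap; on a saturated OC-3 a wrap occurs in under five minutes, which is one reason polling interval and counter width matter.

        /* Hypothetical example: utilization from two SNMP octet-counter
         * samples taken interval_sec apart on a link of link_bps bits/s.
         */
        #include <stdint.h>
        #include <stdio.h>

        double utilization(uint32_t octets_t0, uint32_t octets_t1,
                           double interval_sec, double link_bps)
        {
            /* unsigned subtraction absorbs a single 2^32 counter wrap */
            uint32_t delta = octets_t1 - octets_t0;
            return ((double)delta * 8.0 / interval_sec) / link_bps;
        }

        int main(void)
        {
            /* two 5-minute samples on a DS-3 (44.736 Mbit/s): ~50.1% */
            printf("%.1f%%\n",
                   100.0 * utilization(100000000u, 940000000u,
                                       300.0, 44736000.0));
            return 0;
        }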

    Several participants described the usefulness of the Mbone for testing and validating both analytic methods and visualization techniques due to its relatively small size (compared to the global Internet), as well as the heterogeneous nature of its data sources, multitude of administrative domains, and experimental character. The user-defined topology of the Mbone is also somewhat more conducive to data collection without requiring coordination/cooperation with ISPs.

    Based on suggestions from participants, CAIDA staff agreed to host a website that will include various datasets for experimentation and visualization. Organizations using the data will be encouraged to share results of their visualizations with others in the community. Initial datasets will be posted at https://www.caida.org/analysis/ and will include:

    • Multicast data from UCSB measurements
    • Connectivity data (traceroute information) from Bill Cheswick's measurements of 90,000 nodes (routers and hosts) on the global Internet
    • SNMP data from MCI Worldcom's very high performance Backbone Network Service (vBNS)
    • Flow measurements from the National Laboratory for Applied Network Research (NLANR) measurements of traffic on the vBNS
    • Active measurements from NLANR's performance measurements of traffic across the vBNS
    • Active performance and routing measurements from CAIDA's skitter measurements of approximately 35,000 hosts on the Internet infrastructure

    Workshop participants identified the following topics as meriting attention by users, researchers and/or vendors:

    • further delineation of network parameters most useful to display graphically in response to specific queries by NOCs, engineers, researchers, or other users;
    • enhancements in graph theory and mathematical models designed to facilitate abstraction and analysis of key network details prior to their display;
    • improvements in automatic techniques for clustering and aggregation, such as by autonomous system or other computable abstraction, that will reduce complexity; this is particularly important given the growing size of networks, where an ISP's core, customer, and dial-up routers often amount to tens of thousands of data points;
    • insights on how data visualizations are applied in other industry sectors and their possible applicability to the Internet;
    • insights into what visual metaphors to use with different communities, ranging from engineers to researchers to managers to policy-makers;
    • development of new visualization tools for depicting information on traffic performance (particularly latency and packet loss), location and importance of critical network equipment and exchange points (e.g., routers and switches with extraordinarily high degree of fanout), geographic layouts of networks, and AS connectivity;
    • better mapping of IPv4 addresses to routers, geography, ASes, countries, etc;
    • better methods of depicting dynamic routing behavior on the Internet;
    • better means of monitoring, characterizing, and visualizing actual traffic patterns, e.g., the evolution of Internet traffic (web vs. ftp vs. real audio vs. games etc.) and how traffic communication patterns are changing, e.g., volume of foreign traffic sent via the U.S. to a 3rd country;
    • new means of visualizing Internet connectivity and traffic in terms of their influence on public policy, money flows, bandwidth flows, traffic flows, and commerce flows, etc.; and
    • better insights as to the appropriateness of different techniques and distortion methods for specific uses, e.g., proportional vs. spanning tree vs. geographic layouts, 2D vs. 3D, and static vs. interactive visuals.


    Sessions and Presentations

    The sections that follow describe highlights from the workshop and individual presentations. Recurring themes discussed by these individuals and other participants included: visualizing and scaling large datasets, management of IP networks, enhancing existing tools, and developing new tools/analysis in specific areas. Participants discussed problems faced by ISPs, what tools are in use now, what is currently working or not working, and where they think visualization should go in the future.

    Scientific Visualization 

    Bernard Pailthorpe (SDSC) described examples of scientific visualization techniques developed at the University of Sydney Visualization Lab in Australia and at SDSC. These included techniques for weather modeling, San Diego Bay modeling and mapping, HIV modeling, adaptive radar array imagery, and Visible Human visualizations. Challenges facing researchers in the National Partnership for Advanced Computational Infrastructure (NPACI), led by SDSC, include rendering extremely large datasets (terabytes in size), developing visualization toolkits and related interactive environments for researchers, and developing tools that help identify the relevant information within large datasets.

    Mike Bailey (SDSC) explained that, in his opinion, we are "losing the data wars" -- faster computers are producing more data, creating immense databases for analysis, but visualization software cannot keep up with those data sizes. Filtering and culling of data are increasingly critical for all disciplines. Feature detection is critical to determining the importance of select datasets, since it assists in prioritizing regions of interest. According to Bailey, an adjacency pixel map scales well with large datasets and may provide better information and point out interesting results. Gaussian curvature analysis is another technique used to partition volumes (or find interesting aspects of the data) and can be further manipulated to include additional parameter dimensions.

    Graph Layout

    Bill Cheswick (Lucent/Bell Labs) is actively monitoring and mapping the connectivity of the Internet. Every morning he scans 10% of a 90,000-node dataset; once a month he scans the entire dataset. His presentation included visualizations of outgoing traceroutes to these nodes (spanning roughly 160 top-level domains), laid out using gradient descent techniques (a sketch of one such technique appears below). These maps, oriented from the perspective of the source host, demonstrate concentrations of certain networks and changes in connectivity over time.
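
    The report does not specify the exact layout method, but a common gradient-descent formulation for such maps is the spring-embedder family: nodes repel one another, edges pull their endpoints together, and positions take small steps along the net force. Below is a toy, self-contained sketch of one such iteration scheme (graph, constants, and step sizes all hypothetical):

        /* Toy spring-embedder layout in the Fruchterman-Reingold style:
         * repulsion k^2/d between all node pairs, attraction d^2/k
         * along edges, then a small gradient step. Illustrative only.
         */
        #include <math.h>
        #include <stdio.h>

        #define N 4
        #define E 3

        static double x[N] = {0.0, 1.0, 0.1, 0.9};
        static double y[N] = {0.0, 0.1, 1.0, 0.9};
        static const int edge[E][2] = {{0,1},{0,2},{0,3}}; /* star */

        static void layout_step(double k, double step)
        {
            double fx[N] = {0}, fy[N] = {0};

            for (int i = 0; i < N; i++)        /* pairwise repulsion */
                for (int j = i + 1; j < N; j++) {
                    double dx = x[i]-x[j], dy = y[i]-y[j];
                    double d2 = dx*dx + dy*dy + 1e-9;
                    double f = k*k / d2;       /* (k^2/d) / d */
                    fx[i] += f*dx; fy[i] += f*dy;
                    fx[j] -= f*dx; fy[j] -= f*dy;
                }
            for (int e = 0; e < E; e++) {      /* edge attraction */
                int a = edge[e][0], b = edge[e][1];
                double dx = x[a]-x[b], dy = y[a]-y[b];
                double d = sqrt(dx*dx + dy*dy) + 1e-9;
                double f = d / k;              /* (d^2/k) / d */
                fx[a] -= f*dx; fy[a] -= f*dy;
                fx[b] += f*dx; fy[b] += f*dy;
            }
            for (int i = 0; i < N; i++) {      /* descend along force */
                x[i] += step * fx[i];
                y[i] += step * fy[i];
            }
        }

        int main(void)
        {
            for (int it = 0; it < 200; it++)
                layout_step(1.0, 0.02);
            for (int i = 0; i < N; i++)
                printf("node %d at (%.2f, %.2f)\n", i, x[i], y[i]);
            return 0;
        }

    Production systems add cooling schedules, spatial partitioning so the pairwise pass scales beyond a few thousand nodes, and constraints such as pinning the source host at the center.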

    Stephen North (AT&T Research) described methods of graph layout utilized within AT&T. His group focuses on interactive 3D maps and on developing novel visualization metaphors and algorithms for large graphs. Graph layouts using hierarchical trees, dags, orthogonal layouts, and circular layouts are effective because they control eye motion, avoid artifacts, emphasize regularity and symmetry, and use data pixels efficiently. According to North, overlays and crossovers are bad because they tend to suggest connections that may not exist. Portable, modular tools that do specific things well are important for the research field. Finally, he considers the priority research tasks to be identifying what data need to be shown, collecting those data efficiently, and maintaining a focus on tools rather than vertical applications, scale, and metaphors.

    Evolving Visual Metaphors 

    Carl Malamud (Invisible Worlds) is exploring means of communicating visual information about the Internet to end users, including topology and performance information. People think in spatial terms, he explained; we should therefore develop means of portraying physical and behavioral aspects of the Internet in ways that are meaningful to users. ISPs in turn will benefit from a more informed customer community.

    Martin Dodge (UCL) discussed the history of cartography and its applicability to the Internet. Dodge provided specific examples of maps used to describe physical infrastructures, and of the application of old and new visual metaphors to descriptions of the Internet infrastructure. Examples were drawn from the Cybergeography website.


    ISPs and NAPs 

    Steve Feldman (MCI Worldcom) elaborated on the need for concise, usable visualizations of traffic data in managing NAPs. First, developers and researchers must determine who needs the visualization: engineers use tools to monitor network performance, analyze trends, create topologies, and run simulations; operators use them to monitor network status and isolate faults; and marketing departments use them to market the ISP. Different tools are needed for the different roles in running a network; there is no one-size-fits-all tool.

    Second, what happens in a NOC helps determine which tools are useful. Technicians wait for something to break by watching status displays or waiting for customer complaints, then try to fix the problem using diagnostic tools; if they understand the problem, they fix it; otherwise they page the next level of support. Currently, useful tools for network management include commercial packages like Netcool (an alarm consolidation/display tool), freely available building blocks such as MRTG, and powerful scripting languages like scotty/tkined and perl. There was a discussion of pricing and availability for visualization tools (and network management tools in general), which included concern that commercial vendors don't build ISP-specific tools because the market is too small to recover development costs. One participant noted that ISPs already pay large sums for HP OpenView and similar tools, prices that are still in the noise compared to equipment and bandwidth costs.

    In summary, Steve emphasized the two-sided challenge: researchers/developers and engineers/NOC technicians must communicate with each other about what is really needed before tools are built. One interesting conflict was that many ISPs actually still want Unix rather than PC/Windows platforms, in contrast to the emphasis of most tool development efforts represented in the room.

    Linda Leibengood (UUNet Technologies, previously ANS) described what UUNet measures, how the data are presented and used, and the value of traffic visualizations. Packet loss, RTT measurements, and SNMP utilization statistics are the critical data fields. Real-time text alerts are issued for high-impact loss; daily text reports aggregate metrics; loss and RTT are shown in matrices; and weekly plots of aggregate metrics are used for trending. From an ISP perspective, tools need to scale from 50 nodes up to an upper bound of 1000-2000 nodes. Tools are needed both for real-time work and for recent trending (1-3 months) of RTT, latency, and packet loss. Visualization tools would be helpful for highlighting loss sources, exposing loss patterns due to topology, and seeing beyond individual data points. ISPs tend to avoid network visualization tools because of support, rather than installation, costs (or because the tools run on specialized high-end hardware platforms not typical of NOC infrastructure).

    Shankar Rao (Qwest) described the need to link network visualization tools to the conduct of business at an ISP. Higher-order services such as voice over IP and multicast are illustrative of services that will require more advanced tools to monitor metrics such as jitter and packet loss. ISPs need enhanced tools to support increasingly sophisticated networks. Visualization tools for peering and performance monitoring could also help ISPs communicate information to end users, particularly those engaged in Service Level Agreements (SLAs).

    Bill Woodcock (Zocalo), who was not present at the workshop but was involved in discussions on the mailing list, noted that one difference in expectations that ISPs encounter is between users who are primarily serving data, who want to aggregate a large number of streams of user data into one pipe and keep it as full as possible, and users who are primarily consuming data, who want the full availability of their pipe on an instantaneous basis for a single stream. It is difficult for ISPs to determine which type of availability each customer is looking for at any time, and under some circumstances it is difficult to distinguish between single-stream performance and aggregated-stream performance. This issue will eventually manifest itself in terms of QoS differentiation.

    Tool Developers 

    Arne Frick (Tom Sawyer) discussed graph layout tools and their layout styles. The network visualization tool offered by Tom Sawyer is an automatic graph layout package with good scaling and interactive graph navigation properties. The toolkit also showcases four layout styles: circular networks, hierarchical dependencies, orthogonal modeling, and symmetric communications. There are no restrictions on graph topology, and the package supports flexible integration of layout techniques without hindering the speed of the program.

    Stuart Levy (NCSA) gave an operational perspective on what tools would be useful for visualizing network parameters. His group focuses on 3-D interactive graphics tools and believes that some metaphors would work well in that environment. The data analysis involved largely comprises bandwidth, capacity, delay, loss, derived data, and simulations over time.

    Greg Staple (TeleGeography) is developing tools that are engaging to the user, but is still trying to identify the relevant applications for them. At a time when we are still deciding whether the Internet should be public or private, visual images could be an excellent resource lending support to specific arguments. In his opinion, we cannot get enough bandwidth, a majority of the bandwidth is yet to be laid, and last-mile details are the areas on which we should focus our time and energy. His company is interested in access to tools that visualize bandwidth usage and flow analysis, since he considers flows (i.e., money flows, e-commerce flows, etc.) to be what determines policy. There is also interest in how traffic flows map onto commercial flows, and in whether there is any correlation or influence among them.

    Stephen Eick (Visual Insights) gave a preview of the commercial Advizor visualization toolkit, featuring several components that can be linked to each other, including circular graph layouts, scatterplots, bar and line charts, histograms, and parallel coordinates. Rob Rice briefly described the network planning tool from Make Systems. Jerry Jongerius, developer of VisualRoute, presented his tool, which uses pings to perform traceroutes. Bryan Christianson of IHUG has developed WhatRoute, a traceroute tool that runs on Mac OS.


    Researchers

    kc claffy (CAIDA) focused on four areas of Internet measurement: topology, workload characterization, performance evaluation, and routing. Many macroscopic infrastructure data sets can be visualized using CAIDA tools such as skitter, otter, and manta. According to claffy, the research priorities for topology visualization are being able to depict latency, identify key routers or networks, aggregate to AS granularity (i.e., graph inbound and outbound traffic volume as a function of source/destination AS and of entry/exit point from the local network), and render geographic representations of data. The chief obstacle to these priorities is the inability to map IP addresses with precision to any useful entity, e.g., router, geography, AS, or ISP. claffy emphasized researchers' need for flexibility in both data collection tools and visualization tools.

    Tamara Munzner (Stanford) gave an overview of H3, the tool she is developing for network visualization. Munzner chose the hyperbolic sphere to display real networking data for several reasons: the distortion layout and navigation create a hyperbolic metric space in which the entire volume of data is visible from an external viewpoint. There are a few trade-offs in using this format compared to other visualization metaphors. Possible future work includes incremental layout, alternate circle-packing schemes, and disk-based support for processing graphs too large for main memory.

    John Heidemann (ISI) presented images of packet traffic using the network simulator/animator package ns/nam. The tools are useful for protocol debugging, visual recognition of patterns, and communication of transport protocol concepts. They are easy to use, support actions that are repeatable and therefore easy to analyze, and do not require much disk space. These tools are targeted at people designing new protocols. Future directions include converters from other packet trace formats and support for larger graphs.

    Prashant Rajvaida (UCSB) presented the network visualization tool Mantra. This tool monitors multicast traffic and depicts global trends by collecting routing table information once an hour. In his opinion, the Mbone is a perfect domain for applying visualization tools: it is outside the realm of ISPs' internal infrastructure, therefore offering easier access to data, and it involves a wealth of comprehensive data confined to one environment, but not so much as to render geographic depictions prohibitive. The Mbone is also a domain in desperate need of decent management tools, visually based or otherwise.

    Notes:

    [RFC1876] C. Davis, P. Vixie, T. Goodwin, and I. Dickinson, "A Means for Expressing Location Information in the Domain Name System", RFC 1876, January 1996. http://www.kei.com/homepages/ckd/dns-loc/



    last edited 26 May 1999

Related Objects

See https://catalog.caida.org/paper/1999_isma9904/ to explore objects related to this document in the CAIDA Resource Catalog.