The contents of this legacy page are no longer maintained nor supported, and are made available only for historical purposes.

What Researchers Would Like to Learn from the DITL Project: The Top Questions and Data Types

The following questions were contributed by researchers during discussion of the Day in the Life of the Internet (DITL) project at the January 2008 CAIDA/WIDE workshop. The list serves as inspiration for DITL participation; it includes questions that require data not currently included in DITL collections, but that we hope eventually will be.

A slideset, "Day In The Life of the Internet 2008 Data Collection Event", is available as an overview and summary of the collection event.

A summary of the March 18-19, 2008 Collection Event is also available.

Please send your contributions and comments to for us to integrate into the list.

To participate in the 2008 Day in the Life of the Internet collection event, please send a message to with a description of the data you plan to collect and index.

I. Top DITL Questions

A. The Role of Locality in Internet Usage

  1. What are the traffic patterns and connectivity in different geographic regions?
  2. What is the distribution of DNS query subjects by TLD vs. the geographic origin of query sources?
  3. For ISPs appearing in different geographic regions around the world, do peering relationships change depending on the location?

B. Workload, Traffic, and Performance

  1. What is the mix of application and transport protocols on typical trunks? How has the introduction of P2P applications changed this mix?
  2. Construct and analyze traffic matrices: which ASes are exchanging how much traffic with which others at public IXes and private IXes?
  3. What observable behavior is attributable to botnets?
  4. How can we identify applications (web, VoIP, video, p2p), and estimate their share of traffic?
  5. Do IPv6 traffic characteristics differ from IPv4?
  6. How are flow and packet size distributions changing, including bandwidth symmetry?
  7. Are latency and jitter on the Internet increasing or decreasing?
  8. How can we analyze TCP performance characteristics:
    • the penetration of new versions of TCP/IP
    • the prevalence of TCP reset flags and TCP retransmissions
    • increase in buffer sizes
    • application specific characteristics of TCP flows
    • responsiveness of modern applications (games, streaming) to congestion
  9. How different is Internet2 traffic from the real world?
  10. How much web data is unnecessarily uncachable?
  11. How is R&E traffic different from commercial traffic?


C. DNS

  1. Who is generating invalid traffic to the root servers? Why are query volume and the amount of garbage at the roots inversely proportional?
  2. Who is querying records for unallocated and unassigned address space? How many of these queries do the roots receive?
  3. What does root server data suggest about trends in IPv6, DNSSEC, DNS packet sizes, prevalence of TCP-based DNS queries?
  4. Can we characterize workload and performance of IDN deployments?
  5. Why are millions of clients querying old IP addresses of roots?
  6. How prevalent are misconfigurations, e.g., lame delegations?
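
A first step toward several of these questions (invalid traffic at the roots, queries for non-existent names) is simply bucketing query names by top-level domain. A minimal sketch, assuming query names are already extracted from traces; the valid-TLD set here is a tiny illustrative subset, not the real root zone.

```python
# Illustrative subset only; a real analysis would load the IANA root zone.
VALID_TLDS = {"com", "net", "org", "jp", "de", "arpa"}

def classify_queries(qnames):
    """Split DNS query names into valid-TLD and invalid-TLD buckets."""
    valid, invalid = [], []
    for name in qnames:
        tld = name.rstrip(".").rsplit(".", 1)[-1].lower()
        (valid if tld in VALID_TLDS else invalid).append(name)
    return valid, invalid

queries = ["www.example.com.", "host.local", "printer.lan", "ns1.wide.ad.jp."]
valid, invalid = classify_queries(queries)
print(len(invalid))  # queries whose TLD is absent from our subset
```

Queries like `host.local` and `printer.lan`, which leak from private networks, are a large component of the "garbage" observed at the roots.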

D. Addressing, Topology, and Routing

  1. What is the distribution of hosts per subnet, and of subnets per AS? (intranet topology)
  2. What are the convergence properties of the current routing protocols?
  3. Which ASes control how much of the Internet address space?
  4. What percent of Internet links block ICMP or other probing traffic?
  5. Can we characterize the distribution of hosts hidden behind NAT?
  6. What percentage of users on public wireless networks use VPNs?
  7. How many four-byte ASes exist?
  8. How much allocated but "unused" IPv4 space remains?
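
Question 7 above has a mechanical core: an AS number above 65535 cannot fit in the original 16-bit field and requires the 4-octet encoding of RFC 4893/6793. A minimal sketch, assuming ASNs have already been harvested from BGP AS paths as plain integers:

```python
def count_four_byte_asns(asns):
    """Count distinct ASNs that do not fit in the original 16-bit space."""
    return len({a for a in asns if a > 0xFFFF})

# Illustrative ASN observations, not real measurement data.
observed = [3356, 174, 196615, 2497, 327686, 196615]
print(count_four_byte_asns(observed))  # → 2 distinct 4-byte ASNs
```

Running such a tally over BGP feeds from successive DITL events would show the adoption curve of 4-byte AS numbers.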

E. Measurement Methodology and Experimental Design

  1. How can we measure host-to-host clock skew and NTP pool drift characteristics?
  2. How can we probe IPv4/IPv6 in a better way?
  3. How many measuring points do we need?
  4. How much storage will be required?
  5. How can I determine whether a cable modem acts as a bridge or a router? Do probes stop at the modem or make it through to a device behind the modem?
  6. How dynamic are dynamic address assignments? What is the distribution of the time that a dynamically assigned address remains assigned to a single customer?
  7. What are appropriate guidelines for measurement, data sharing, and data analysis, to minimize impact on the network and privacy?
  8. How do we evaluate the scalability of a measurement system?
  9. How do we anonymize data while still preserving the maximum utility possible for research?
  10. What can/should we measure from the edge?
  11. What incentives would increase participation in data sharing?
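
On question 9, the crudest anonymization that still preserves network-level utility is truncating host bits, so per-prefix analyses keep working while individual hosts become indistinguishable. The sketch below shows only this lossy, non-cryptographic approach; prefix-preserving schemes such as Crypto-PAn retain more structure but require a key.

```python
import ipaddress

def truncate_ip(addr, prefix_len=24):
    """Zero the host bits of an address, keeping its /prefix_len network."""
    net = ipaddress.ip_network(f"{addr}/{prefix_len}", strict=False)
    return str(net.network_address)

print(truncate_ip("192.0.2.151"))      # → 192.0.2.0
print(truncate_ip("2001:db8::1", 48))  # → 2001:db8::
```

The choice of prefix length is exactly the utility/privacy trade-off the question asks about: /24 keeps subnet structure visible, while /8 destroys most of it.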

F. Social

The following questions pertain to the ever-increasing role of the Internet in modern society. While not immediately answerable with available, or even anticipated, data, these questions extend the scope of future Internet research efforts and tie its technical foundation to the core interests of its human users.

  1. What are generational differences in Internet use?
  2. How will high-speed broadband affect consumer behavior?
  3. What is the language distribution of content?
  4. How many people object to government or ISPs sniffing traffic?
  5. How much of Internet infrastructure is under control of organized crime?
  6. How much email is unsolicited spam?

II. Types of Data Needed for the Above Questions

  1. DNS query packet traces and/or logs from various places (roots, TLDs, IN-ADDR.ARPAs, ISP resolvers).
  2. Active DNS measurements such as open resolver surveys.
  3. IPv4/IPv6 topology probing data.
  4. BGP feeds/updates, including simultaneous with topology probes.
  5. Web cache logs.
  6. Anonymized reports, e.g., CoralReef- or NetFlow-based.
  7. Large router-level topology (anonymized), with up/down time logs per link.
  8. Consistent macroscopic ping data over years.
  9. Packet traces from the core, the edge, and close to customers, all appropriately anonymized.
  10. Traces collected on end-user machines, e.g., NETI@home.

III. Example Data Access Policies

The following list provides examples of potential data access policies and structures that might allow network researchers to gain access to otherwise unavailable data.

  1. Unrestricted: Anonymized versions without payload are publicly available.
  2. Restricted: Access via Access Agreement requires that the data and analysis must remain on specific servers.
  3. Restricted: Contact via email for access.
  4. Restricted: Access requests accepted for collaborative agreements to share analysis, implementation code, and results.
  5. Restricted: Researchers may submit analysis code for staff to run on data.
  6. Restricted: Available to academic, government and non-profit researchers and members upon request.