DNS project - Year 2 Report

This report is an excerpt from the CAIDA Annual Report 2006.

Sponsored by:
National Science Foundation (NSF)

Principal Investigator: kc claffy

Funding source: OAC-0427144. Period of performance: September 15, 2004 - August 31, 2009.



Domain Name System (DNS)

Improving the Integrity of Domain Name System (DNS) Monitoring and Protection

Goals

The main goal of our DNS project is to supply the research community with the best available, operationally relevant and methodologically sound DNS measurement data. We also develop tools, models, and analysis methodologies for use by DNS operators and researchers.

Activities

The DNS project spent its second year focused on acquiring operationally relevant DNS data and making that data available to the research community.

  1. DNS Root Servers Trace Collection

We conducted two large-scale experiments collecting data from the DNS root servers. In September 2005, anycast instances of the C, E, and F root servers participated in the first "Day in the Life of the DNS Root Servers" data collection event. The second collection took place in January 2006 and involved anycast instances of the C, F, and K root servers. The total volume of data (in compressed format) is 360 GB. This unique data set represents the most comprehensive measurements of the root servers to date and affords researchers unprecedented insight into the specifics of root server operations.

    Analysis of the collected DNS traffic showed that the current method for limiting the catchment areas of local instances appears to be successful. To determine whether clients actually use the instance closest to them, we examined client locations for each root instance, and the geographic distances between a server and its clients. We found that frequently the choice, entirely determined by BGP routing, is not the geographically closest one. Since additional distance between a client and a server causes higher latency, such clients might benefit from examination and optimization of routing for their local DNS root server instances.
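The client-to-server geographic distances described above can be computed with the standard haversine (great-circle) formula. The sketch below is a generic illustration, not the project's actual analysis code:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))
```

Applied per (client, serving instance) pair, this yields the distance distribution used to judge whether BGP sent a client to a geographically distant instance.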

Instance selection by BGP is highly stable. Over a two-day period, <2% of C-root and F-root clients and <5% of K-root clients experienced a change in the anycast instance they were using. Most DNS traffic still uses the stateless UDP protocol, so the vast majority of clients would not be harmed by such changes, apart from the unavoidable delay created by BGP convergence. However, emerging DNS technologies that support security and other extensions to the original DNS architecture will require stateful TCP connections, which could encounter problems in the face of anycast instability.

Overall, anycasting the DNS root nameservers not only extends the root system beyond its architectural limit of thirteen (13) root servers, but also appears to work well, improving the global robustness, resilience, and performance of the DNS.

  2. Influence map of DNS root servers

    Based on available data from anycast instances of the C, F, and K root servers, we developed a visualization that illustrates the geographical diversity of DNS root server clients and differences in deployment strategies of different root server nodes.

The visualization consists of two world maps for each root, both of which project the world as viewed from the North Pole. Each individual anycast instance is placed on the map at the 'center of influence' of its observed clients rather than at the server's actual geographic location: we treat each of a server's clients as a point with mass = 1 and compute the centroid of those client locations. Thus, anycast root nodes that serve a primarily local clientele remain close to their actual geographic locations on our map, while nodes that serve a more geographically distant client set are displaced toward the regions where they have the most clients.

    Wedges fanning out from each server in the larger Location Map indicate the direction, distance, and number of clients observed within the bounding angle of the wedge. A smaller Displacement Map depicts servers whose actual location and centroid are markedly different.
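The centroid placement can be sketched as follows. This is an illustrative reimplementation, not the project's visualization code: client positions are averaged as unit vectors on the sphere and the mean vector is projected back to a latitude/longitude pair.

```python
from math import radians, degrees, sin, cos, atan2, sqrt

def centroid(points):
    """Center of influence of (lat, lon) client points, each with mass 1."""
    x = y = z = 0.0
    for lat, lon in points:
        phi, lam = radians(lat), radians(lon)
        x += cos(phi) * cos(lam)  # convert to a 3-D unit vector
        y += cos(phi) * sin(lam)
        z += sin(phi)
    n = len(points)
    x, y, z = x / n, y / n, z / n  # mean vector
    lat = atan2(z, sqrt(x * x + y * y))  # project back to lat/lon
    lon = atan2(y, x)
    return degrees(lat), degrees(lon)
```

Averaging in 3-D rather than averaging raw longitudes avoids wrap-around artifacts for client sets that straddle the antimeridian.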

  3. RFC1918 updates

We completed analysis of the properties and sources of spurious RFC1918 updates that the root servers deflect to a specially created protective system of name servers known as AS112. (The so-called RFC1918, or private, addresses are intended strictly for use inside private networks and should not leak to the outside world.) We also conducted laboratory experiments to verify the behavior of the most recent Windows versions.

Analysis of update logs collected at two independent AS112 servers showed that leakage of DNS updates is a global network pollution problem, with update volumes exceeding millions per hour. We applied three signature techniques to two short packet traces and found that over 97% of DNS updates come from various types of Windows systems, and that over 99% of unique IP addresses in our traces have at least one Windows machine at or behind them. Windows 2000 accounts for the majority of the update packets, while Windows XP is more stringent about sending private DNS updates. Our long-term data did not reveal an obvious decreasing trend in RFC1918 update rates due to the evolution of operating systems. Users, software vendors, and system administrators should all take responsibility for the behavior of their systems and mitigate polluting behavior with appropriate actions.
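Classifying an address as RFC1918 is a simple membership test against the three private blocks. The sketch below is illustrative (using Python's standard ipaddress module), not the project's analysis code:

```python
import ipaddress

# The three private address blocks defined by RFC 1918.
RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr falls inside one of the RFC 1918 private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918_NETS)
```

For example, `is_rfc1918("192.168.1.5")` is True while `is_rfc1918("8.8.8.8")` is False; a DNS update for a name resolving into these ranges should never reach the public root servers.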

  4. DNS Statistics Collector (dsc)

Under subcontract to CAIDA, Duane Wessels of The Measurement Factory developed the DNS Statistics Collector (dsc) tool during the first year of this project. dsc runs on selected instances of the C, E, and F root servers and is used by some OARC members to measure their own DNS infrastructure. With support from this project, OARC makes dsc statistics from the F-root publicly available with a 7-day delay.

    We also installed a dsc collector and presenter on our primary nameserver (ns1.caida.org), a single processor Pentium-4 class server running FreeBSD 5.4. It serves about 5 domains and about 40 local machines. Using the default collector settings, we monitor what types of clients our nameserver has, where those clients are located, and what types of queries the server processes.

  5. Open resolvers surveys

We initiated an ongoing survey that looks for open DNS resolvers. Open resolvers threaten normal DNS functioning: they allow resource squatting, are easy to poison, and can be used in widespread Distributed Denial of Service (DDoS) attacks. Through a subcontract, The Measurement Factory developed a web interface for immediate real-time testing of individually entered addresses. We probe each of our target IP addresses no more than once every three days. Since June 2006 we have been archiving daily reports showing the number of open resolvers for each autonomous system number. As an example of this data, the report from Friday, June 22, 2007 includes 10,948 autonomous systems with 300,511 open resolvers.
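An open-resolver probe of this kind can be sketched as below. This is a hypothetical illustration, not the survey's actual probing code: it sends a minimal recursive query over UDP and checks the RA (recursion available) bit in the reply.

```python
import socket
import struct

def build_query(name: str, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query (A record, IN class, recursion desired)."""
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # RD flag set
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def is_open_resolver(ip: str, timeout: float = 2.0) -> bool:
    """Send a recursive query to ip:53; True if the reply sets the RA bit."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_query("example.com"), (ip, 53))
        reply, _ = sock.recvfrom(512)
    except OSError:  # timeout, refused, unreachable
        return False
    finally:
        sock.close()
    flags = struct.unpack(">H", reply[2:4])[0]
    return bool(flags & 0x0080)  # RA: recursion available
```

A production survey must also respect rate limits (the project probes each address at most once every three days) and match reply IDs to query IDs.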

  6. Longitudinal DNS RTT data

Using NeTraMet traffic meters installed at various locations in the US, New Zealand, and Japan, we continued monitoring requests that campus (i.e., large enterprise) networks send to root/gTLD servers, paired with the corresponding responses. The resulting dataset is available to researchers and contains information useful for evaluating performance conditions and trends on the global Internet. DNS RTTs are influenced by several factors, including remote server load, network congestion, route instability, and local effects such as link or equipment failures.
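Pairing requests with responses to obtain RTTs can be sketched as follows. The tuple format here is an assumption for illustration, not NeTraMet's actual output format:

```python
def pair_rtts(packets):
    """Pair DNS requests with responses and return RTTs in seconds.

    `packets` is an iterable of (timestamp, src, dst, dns_id, is_response)
    tuples, e.g. extracted from a packet trace.  A request keyed by
    (client, server, dns_id) is matched with the first later response
    flowing in the reverse direction under the same key.
    """
    pending = {}
    rtts = []
    for ts, src, dst, dns_id, is_response in packets:
        if not is_response:
            pending[(src, dst, dns_id)] = ts
        else:
            t0 = pending.pop((dst, src, dns_id), None)
            if t0 is not None:
                rtts.append(ts - t0)
    return rtts
```

Unmatched requests (dropped queries, or responses arriving after the trace ends) simply remain in `pending` and contribute no RTT sample.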

  7. Anycast Simulation

In collaboration with Prof. G. Riley (Georgia Institute of Technology), we began laboratory simulations to study the effectiveness of anycast deployment for DNS root servers. The goal is to determine the impact of BGP (mainly BGP convergence) on anycast deployment, as well as possible effects of scaling up anycast deployment on DNS performance.

    In our experiments, we simulate a realistic AS level Internet topology using data from CAIDA and the RouteViews project. We apply BGP++'s configuration generator to convert the inferred AS relationships to configuration files for our simulation. Next we model two kinds of failures.

    • Silent link failure: silent link failures are detected when BGP hold timers expire. Although such failures could occur on any segment of the chosen best AS path, we restrict the current simulation to a single failure of the last-hop AS link. In this failure mode, the client can still communicate with the same server (if it is up) through a different path.
    • Explicit withdrawal of an anycast prefix by an advertising AS: this failure makes the server unreachable, causing the client to switch to a different anycast server node.

The Georgia Tech team obtained initial results for the BGP-ANYCAST simulations on a small AS topology. In the link-down case, the distribution of requests among server nodes changed insignificantly: the clients could still reach the same servers through other links. In the explicit prefix withdrawal case, the network quickly converged to a new state since the simulated graph was small and well connected. The requests were re-distributed to other nodes, with only one flip for affected clients.
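The explicit-withdrawal scenario can be illustrated with a toy model. This is a deliberate simplification of the simulations above (real BGP selection involves policies and path attributes, not just hop counts, and the topology here is hypothetical): each client picks the first anycast origin reached by breadth-first search, and withdrawing one origin's advertisement flips its clients to the remaining origin.

```python
from collections import deque

def nearest_origin(graph, origins, client):
    """BFS over an AS-level graph; return the first anycast origin reached
    (a crude proxy for BGP shortest-AS-path selection)."""
    seen = {client}
    queue = deque([client])
    while queue:
        node = queue.popleft()
        if node in origins:
            return node
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return None  # origin unreachable

# Hypothetical toy topology: stub ASes a-c, anycast origins X and Y.
graph = {
    "a": ["b", "X"], "b": ["a", "c"], "c": ["b", "Y"],
    "X": ["a"], "Y": ["c"],
}
clients = ["a", "b", "c"]
before = {c: nearest_origin(graph, {"X", "Y"}, c) for c in clients}
after = {c: nearest_origin(graph, {"Y"}, c) for c in clients}  # withdraw X
flips = sum(1 for c in clients if before[c] != after[c])
```

In this tiny example the clients of the withdrawn origin X flip once to Y and the system converges, mirroring the "only one flip for affected clients" behavior seen in the small, well-connected simulated graph.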

Major Milestones

Student Involvement

Ziqian Liu, a visiting scientist from China, collaborated with CAIDA researchers in analysis of 48-hour tcpdump traces from 53 anycast root servers (4 from c-root, 33 from f-root, 16 from k-root). He examined geographical and topological coverage from each instance and analyzed the stability of client-to-server affinity.

Two graduate students from Georgia Tech, Sunitha Beeram and Talal Jaafar, worked on laboratory simulations of the anycast deployment effectiveness.
