
1. path and round trip time measurements

1.1 project focus

  • large scale infrastructure-wide measurements
  • round trip time and path to thousands of destinations
  • packet loss
  • connectivity (characteristics of directed graph from a source)
  • visibility, frequency, effects of routing changes
  • dynamically discover and focus on "interesting" routers
  • correlate path performance with proximate events
  • use passive measurement data to guide direction of active measurements

1.2 why not one-way trip time measurements?

  • goal: large scale measurement of network topology,
    with correlation across many paths (tens of thousands)
  • no such infrastructure for one-way transit time measurements
  • another project doing one-way transit time measurements
    (Stephen Donnelly from U. Waikato, NZ; SDSC)

1.3 measurement methodology

  • parallel ICMP probe daemon
  • ICMP echo request packets of 52 total bytes
  • kernel timestamping in ICMP payload for RTT to destinations
  • path data: new ARTS data object:
    • source, dst IP addresses
    • IP addresses of hops in forward path to destination
    • round trip time from source to destination and back
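
A minimal sketch (Python; not the probe daemon itself, which does the stamping in the kernel) of the timestamp-in-payload technique from the list above: the sender writes the send time into the echo request payload, the destination echoes the payload back unchanged, and the RTT is the receive time minus the embedded stamp.

    import struct
    import time

    def make_payload():
        # 8-byte send timestamp (seconds, microseconds), network byte order
        now = time.time()
        sec = int(now)
        usec = int((now - sec) * 1_000_000)
        return struct.pack('!II', sec, usec)

    def rtt_ms(echoed_payload):
        # RTT = receive time minus the stamp echoed back in the payload
        sec, usec = struct.unpack('!II', echoed_payload[:8])
        return (time.time() - (sec + usec / 1_000_000)) * 1000.0

A sample of the resulting IPPATH object: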


HEADER
        magic: 57264 (0xdfb0)
        identifier: 12288 (0x3000)
        version: 0 (0x0)
        flags: 0 (0x0)
        num_attributes: 1 (0x1)
        attr_length: 12 (0xc)
        data_length: 87 (0x57)

ATTRIBUTE
        creation: 06/05/1998 02:28:47 (0x3577901f)

IPPATH OBJECT DATA
        Src: 204.212.46.3     (0x32ed4cc)
        Dst: 198.96.1.1       (0x10160c6)
        Rtt: 83.134 ms
        HopDistance: 18 (0x12)
        IsComplete: true
        NumHops: 17 (0x11)
                HopNum:   1 IpAddr: 204.212.46.1    (0x12ed4cc)
                HopNum:   2 IpAddr: 204.212.45.13   (0xd2dd4cc)
                HopNum:   3 IpAddr: 205.238.52.1    (0x134eecd)
                HopNum:   4 IpAddr: 205.238.56.149  (0x9538eecd)
                HopNum:   5 IpAddr: 205.238.56.21   (0x1538eecd)
                HopNum:   6 IpAddr: 205.238.56.110  (0x6e38eecd)
                HopNum:   7 IpAddr: 205.238.56.114  (0x7238eecd)
                HopNum:   8 IpAddr: 205.238.56.126  (0x7e38eecd)
                HopNum:   9 IpAddr: 204.70.1.9      (0x90146cc)
                HopNum:  10 IpAddr: 204.70.4.205    (0xcd0446cc)
                HopNum:  11 IpAddr: 204.70.1.93     (0x5d0146cc)
                HopNum:  12 IpAddr: 204.70.3.84     (0x540346cc)
                HopNum:  13 IpAddr: 204.70.185.122  (0x7ab946cc)
                HopNum:  14 IpAddr: 205.207.238.141 (0x8deecfcd)
                HopNum:  15 IpAddr: 192.68.55.102   (0x663744c0)
                HopNum:  16 IpAddr: 130.185.15.12   (0xc0fb982)
                HopNum:  17 IpAddr: 130.185.1.162   (0xa201b982)
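
A note on reading the dump: each parenthesized hex value is the four address bytes in network order, reinterpreted as a little-endian 32-bit integer on the measurement host. A minimal sketch of the correspondence (the helper names are hypothetical):

    import socket
    import struct

    def addr_to_dump_hex(dotted):
        # '204.212.46.3' -> '0x32ed4cc', as printed in the dump above
        packed = socket.inet_aton(dotted)           # 4 bytes, network order
        return hex(struct.unpack('<I', packed)[0])  # reread as little-endian

    def dump_hex_to_addr(value):
        # 0x10160c6 -> '198.96.1.1'
        return socket.inet_ntoa(struct.pack('<I', value))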

1.4 initial observations (hop distance distribution)

1.5 initial observations (frequency of IP addresses in paths)

key routers play a huge role in global connectivity from a given source

example, measured from CAIDA in San Diego (Aug 21, 1998):

  • 1 hour poll cycle
  • 22,159 destinations, mostly WWW servers
  • IP addresses spread across advertised IPv4 space
  • 30,163 intermediate IP addresses visited
  • number of routers touched is indeterminate
    (multiple IP addresses can map to the same router)
  • only 154 IP addresses appear in more than 1% of paths

note log-log scale
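
A minimal sketch of the tally behind these numbers, assuming each measured path is available as a list of hop addresses (the input format is hypothetical; this is not the actual analysis code):

    from collections import Counter

    def hop_frequency(paths):
        # fraction of all paths in which each intermediate address appears
        seen = Counter()
        for path in paths:
            for ip in set(path):    # count each address once per path
                seen[ip] += 1
        return {ip: n / len(paths) for ip, n in seen.items()}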

1.6 observations: visible outdegree (indicates many next hops/peerings)

from San Diego



         IP address          outdegree
         -----------------   ---------
         134.24.29.94               70
         198.32.128.12              65
         154.32.3.14                61
         194.69.226.10              55
         204.70.1.197               54
         194.69.226.6               50
         192.106.7.130              41
         151.99.49.115              36
         194.179.3.130              35
         194.204.128.2              34
         194.179.3.98               32
         151.99.49.123              32
         204.70.1.209               30
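
A minimal sketch of how such a table can be derived from the same forward-path data (input format as in the previous sketch; interfaces belonging to one router are not merged, so this is visible outdegree per IP address):

    from collections import defaultdict

    def visible_outdegree(paths):
        # distinct next-hop addresses observed immediately after each address
        next_hops = defaultdict(set)
        for path in paths:
            for hop, nxt in zip(path, path[1:]):
                next_hops[hop].add(nxt)
        return {ip: len(s) for ip, s in next_hops.items()}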

1.7 scope of CAIDA measurements (12-month projection)

  • about 20 sources spread throughout network (commercial locations where possible)
  • time granularity dictated by the packets-per-second (pps) rate at each source
  • data archival sufficient for trend analysis (several months)

1.8 architecture

  • measurement hosts (FreeBSD 2.2.x on Intel)
  • data archive hosts (RAID-5 disk array, HPSS)
  • storage format: ARTS extensions (ARTS licensed from ANS)
  • data analysis/presentation hosts (graph layout, strongly connected components (SCCs))

1.9 visualization efforts

skping: real-time single-destination RTT measurement/display

skpath: real-time single-destination path dynamics monitor

example: skpath caida.mae.net to nms1.san-francisco.ans.net

1.10 data visualization: RTT candle plots

historical order statistics in candle plots (aggregation of large datasets)

NOTE: heavy-tailed distributions.
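
A minimal sketch of the order statistics one candle summarizes (nearest-rank quantiles, an assumed convention; not CAIDA's plotting code). Each window of samples reduces to five numbers, so arbitrarily large datasets can be drawn as one candle per window; with heavy-tailed RTT distributions the max whisker can sit far above the box.

    def candle(samples):
        # reduce one window of RTT samples to min / quartiles / median / max
        s = sorted(samples)
        n = len(s)
        return {
            'min': s[0],
            'q1':  s[n // 4],
            'med': s[n // 2],
            'q3':  s[(3 * n) // 4],
            'max': s[-1],
        }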

1.11 Cisco with prefix cache


  • Cisco routers with a high outdegree make bad targets
  • periodic tasks are very evident
  • routers with a prefix cache show statistically
    significant RTT anomalies (the prefix cache ager)
  • Ciscos running CEF seem okay

1.12 Cisco running CEF

1.13 data viz: RTT candle plots, to aggregate larger data sets

simple forward path analysis

  • wrong way: ping each hop along a path (à la mtr)
  • forward path to destination may differ from forward path to any intermediate hop along the way
  • direct packets at final destination and increment TTL (just like traceroute)

the naive approach taken by some tools can therefore be misleading.

NOTE: you can do both, but presumably you will want to know how the paths differ; in that case, add the intermediate hops as individual targets, measured in the same manner as the final destination (see the sketch below).
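
A minimal sketch of the TTL-increment approach (using scapy for brevity; this is not the project's probe daemon, and it requires raw-socket privileges). Every probe is addressed to the final destination and only the TTL varies, so each reported hop actually lies on the path to that destination:

    from scapy.all import IP, ICMP, sr1

    def forward_path(dst, max_hops=30, timeout=2):
        hops = []
        for ttl in range(1, max_hops + 1):
            reply = sr1(IP(dst=dst, ttl=ttl) / ICMP(),
                        timeout=timeout, verbose=0)
            if reply is None:
                hops.append(None)              # hop did not answer
            elif reply[ICMP].type == 0:        # echo reply: destination reached
                hops.append(reply.src)
                break
            else:                              # time-exceeded from a router
                hops.append(reply.src)
        return hops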

1.14 further RTT visualizations

  • other periodic behavior at various frequencies
  • spectral analysis to characterize such periodicity, and perhaps to capture congestion collapse phenomena
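
A minimal sketch of such spectral analysis (assuming evenly spaced RTT samples and using numpy, neither of which is specified by the source): an FFT of the mean-removed series, with the strongest non-DC bin giving the dominant period.

    import numpy as np

    def dominant_period(rtts, interval_s):
        # strongest periodic component in an evenly sampled RTT series
        x = np.asarray(rtts, dtype=float)
        x = x - x.mean()                        # remove the DC component
        power = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=interval_s)
        peak = 1 + power[1:].argmax()           # skip the zero-frequency bin
        return 1.0 / freqs[peak]                # period in seconds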

1.15 macroscopic topology visualization

  • drawing directed graphs is difficult at this scale, but not impossible
    • 2D layouts in Euclidean space are insufficient for large graphs:
      the number of nodes grows exponentially with hop distance from the
      root, while the circumference available to place them grows only
      linearly with radius; result: clutter
    • example of a 2D layout using 'otter'
    • 2D layout algorithms converge slowly for large graphs (SCCs)
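
Since strongly connected components come up both here and in the architecture list above, a minimal sketch (Tarjan's algorithm; recursive, so very deep graphs would need an iterative variant) over a graph mapping each address to its observed next hops:

    def tarjan_scc(graph):
        # graph: dict mapping node -> iterable of successor nodes
        index, low, on_stack = {}, {}, set()
        stack, sccs, counter = [], [], [0]

        def visit(v):
            index[v] = low[v] = counter[0]
            counter[0] += 1
            stack.append(v)
            on_stack.add(v)
            for w in graph.get(v, ()):
                if w not in index:
                    visit(w)
                    low[v] = min(low[v], low[w])
                elif w in on_stack:
                    low[v] = min(low[v], index[w])
            if low[v] == index[v]:              # v roots a component
                scc = []
                while True:
                    w = stack.pop()
                    on_stack.discard(w)
                    scc.append(w)
                    if w == v:
                        break
                sccs.append(scc)

        for v in list(graph):
            if v not in index:
                visit(v)
        return sccs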

1.16 next steps

  • 3D visualizations based on Tamara Munzner's work
  • enhancement and porting of ARTS
  • deployment of additional measurement hosts
  • correlation with passive measurements
  • trend analysis, identification of further measurements

22 aug 98, kc, info@caida.org


