Bandwidth Estimation ISMA Workshop Final Report
On December 9-10, 2003, CAIDA hosted the first Bandwidth Estimation workshop (BEst), supported by CAIDA, IMRG, and the DOE Office of Science. The workshop brought together the most active researchers in this area. The participants discussed recent advances in the field, resolved terminology and metric definition issues, and identified open problems requiring further research. This page presents the BEst final report.

December 9-10, 2003

In the text below we attempt to summarize the main points of the presentations and discussions at the workshop. We aim for sufficient detail to capture the atmosphere of the discourse, but sufficient abstraction to convey the main points. In addition, the title of each presentation is a link to the corresponding abstract.


  • Background session: micro-tutorials

    1. Constantinos Dovrolis (Georgia Tech), Bandwidth metrics: definitions and terminology

      Bottleneck is an imprecise and thus confusing term in this community. Instead, we standardize on the following three terms:

      1. capacity of a path is determined by the link with the minimum capacity (narrow link)
        - typically constant at layer-2
      2. available bandwidth of a path is determined by the link with the minimum unused capacity (tight link)
        - it is the complement of average utilization over time
      3. bulk-transfer capacity (BTC) is the long-term TCP throughput. BTC can be derived mathematically and depends on:
        - exact TCP implementation
        - available bandwidth
        - link buffer sizes
        - cross traffic responsiveness (elasticity)
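
      The three definitions above can be made concrete with a small sketch (illustrative values only, not from the workshop): capacity follows the narrow link, available bandwidth follows the tight link, and the two need not be the same link.

```python
# Illustrative sketch of the definitions above (hypothetical per-link values):
# capacity is set by the narrow link, available bandwidth by the tight link.

def narrow_link_capacity(capacities):
    """Path capacity = minimum link capacity (the narrow link)."""
    return min(capacities)

def tight_link_avail_bw(capacities, utilizations):
    """Available bandwidth = minimum unused capacity (the tight link),
    where each link's unused capacity is capacity * (1 - avg utilization)."""
    return min(c * (1.0 - u) for c, u in zip(capacities, utilizations))

# Hypothetical 3-hop path: capacities in Mbps, average utilizations in [0, 1].
caps = [100.0, 10.0, 100.0]
utils = [0.20, 0.30, 0.95]

print(narrow_link_capacity(caps))        # narrow link: the 10 Mbps hop
print(tight_link_avail_bw(caps, utils))  # tight link: the 95%-utilized hop, ~5 Mbps
```

      Note that the narrow link (hop 2) and the tight link (hop 3) differ here, which is exactly why the imprecise term "bottleneck" was retired.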


    2. Darryl Veitch (University of Melbourne and Sprint), (two talks:)
      Stationarity versus time-scale dependence
      and
      Timing and bandwidth issues in active measurement

      Accurate analysis of active probes requires that timing errors be distinguished from delay variations. The measurement quality of active probes for bandwidth estimation depends on the measured metric and the time scale chosen for analysis. Even simple end-to-end delay measurements are affected by the relative clock offset at each end host. Bandwidth estimation depends even more on host characteristics, since it relies on measuring delay variation as well as packet interarrival time characteristics, both of which depend on the probe rate and buffering characteristics of end hosts.

      Accurate clocks are necessary to support either offset or probe rate synchronization. The standard operating system clock is based on two underlying oscillators with large skews. The Network Time Protocol (NTP) uses a protocol, a network of servers, and a set of algorithms to correct clock skews. Under optimal conditions, the offset is only bounded to approximately 1ms while the error is bounded by the round-trip-time and system noise. Probe rates of up to 500 probes/min may be required to control the offset over large (10000 s) time scales.

      Using a TSC (timestamp counter) based clock provides some improvement, offering 1ns resolution and register timestamping < 50ns with only 0.1 probes/minute required over useful analysis timescales. Use of a TSC timestamper in the NIC driver with RT-Linux can counter system noise that otherwise introduces uncontrolled scheduling delays as well as hardware interrupt latencies. When and how timestamps occur affects accuracy. Hardware timestamps, e.g., in DAG cards, are one way to improve clock accuracy. Alternatively, TSC combined with accurate remote calibration can counter clock skew. TSC used in conjunction with NTP primary server timestamps can counter clock offsets along shorter timescales.
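
      One common way to perform such remote calibration (a sketch of the general idea, not necessarily the method from the talk) exploits the fact that a constant relative skew makes measured one-way delays drift linearly with send time, so the drift rate can be estimated and removed:

```python
# Sketch: estimating relative clock skew from one-way delay measurements.
# A constant skew makes measured delays drift linearly in send time, so the
# fitted slope approximates the skew (a robust tool would fit the lower
# envelope of minimum delays to avoid queueing noise).

def estimate_skew(send_times, measured_delays):
    """Least-squares slope of measured delay vs. send time (dimensionless)."""
    n = len(send_times)
    mean_t = sum(send_times) / n
    mean_d = sum(measured_delays) / n
    num = sum((t - mean_t) * (d - mean_d)
              for t, d in zip(send_times, measured_delays))
    den = sum((t - mean_t) ** 2 for t in send_times)
    return num / den

# Synthetic example: true delay 10 ms, receiver clock fast by 50 ppm.
times = [float(t) for t in range(0, 1000, 10)]
delays = [0.010 + 50e-6 * t for t in times]
skew = estimate_skew(times, delays)  # close to 5e-05
```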

      High bandwidth measurements have to deal with the following limitations:
      - accuracy of timestamps becomes too demanding
      - clock synchronization becomes insufficient
      - hardware capabilities in low speed switches are too coarse to capture fine details

      Comments:
      - Can we quantify the accuracy of our measurements by looking at how we do timestamps?
      - Ratio of network speed to host speed. CPU clock speed can impact measurement
      - SIOCGSTAMP and socket timestamps make a big improvement. (used by Pathchirp, and Darryl V)
      - Paul Barford's Harpoon may resolve some timestamp and IAT issues.

  • Session I: Available bandwidth estimation

    1. Vinay Ribeiro (Rice University), Spatio-temporal available bandwidth estimation for high-speed networks
      URL: http://spin.rice.edu

      Pathchirp is a new packet dispersion based active probing scheme for available bandwidth estimation. It overcomes system I/O bandwidth limitations on high-speed networks and locates the tight link in space and in time.

      Pathchirp uses the packet tailgating technique, where small packets with no TTL limit immediately follow large packets with TTL = m. The large packets exit the path midway while the small packets continue on to the end-host receiver and capture important timing information. (This method was previously employed in capacity estimation measurements.) If the system I/O rate limits the probing rate and the tight link capacity exceeds the system bandwidth, then multiple senders can generate probing trains with a higher combined probing rate. This setup requires careful machine clock synchronization.
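
      Pathchirp's probing pattern itself (described later in this report as "exponential but not Poisson") is a chirp: a short train whose inter-packet gaps shrink geometrically, sweeping a range of probing rates within a single train. A minimal sketch with illustrative parameters, not Pathchirp's defaults:

```python
# Sketch of a chirp probe pattern (illustrative parameters, not Pathchirp's
# defaults): gap k is first_gap / gamma**k, so the instantaneous probing
# rate grows by a factor gamma from one packet to the next.

def chirp_send_times(n_packets, first_gap, gamma):
    """Send-time offsets (seconds) for one chirp train."""
    t, offsets = 0.0, [0.0]
    for k in range(n_packets - 1):
        t += first_gap / (gamma ** k)
        offsets.append(t)
    return offsets

offsets = chirp_send_times(n_packets=6, first_gap=0.001, gamma=2.0)
# With 1000-byte packets, the rate at gap k is 8 * 1000 / gap_k bits/s:
# this 6-packet chirp sweeps from 8 Mbps up to 128 Mbps.
rates = [8 * 1000 / (b - a) for a, b in zip(offsets, offsets[1:])]
```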

      Verification experiments:

      1. In an ns-2 simulation containing heterogeneous sources (e.g., web server farm, web clients, constant bit rate traffic, large ftp file transfer) and where the tight link changed over time, Pathchirp accurately tracked the tight link.
      2. The author ran Pathchirp over two Internet paths (UIUC -> Rice and SLAC -> Rice) known to share 4 common links. Pathchirp returned the same tight link measurement for both paths, but the determined tight link differed from MRTG data by one hop. Possibly, the router at the tight link does not decrement the TTL.


      Comment: This sounds like Harpoon.

    2. Attila Pazstor (Ericsson), On evaluating a new class of available bandwidth methods

      Previous bandwidth estimation methods examined either packet spacing effects or packet accumulation effects. The first of these two methods assumes that probes occur in the same busy period at the link of interest. Then, the bottleneck spacing determines the interarrival time to the receiver. Spacing effect based tools utilize packet pairs and packet trains. The second method assumes that probes occur in different busy periods at all hops and considers the dependence of service time on the packet size. Accumulation effect based tools ("pathchar-like") employ well separated, independent, one-packet probes. Note that both these methods treat cross-traffic as noise.
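
      The spacing effect can be summarized in one line (a generic packet-pair sketch, not any specific tool): if both probes share the narrow link's busy period, the receiver-side dispersion equals the serialization time of a packet on that link.

```python
# Generic packet-pair sketch of the spacing effect (not a specific tool):
# dispersion at the receiver = packet serialization time at the narrow link,
# so capacity = packet size / dispersion.

def capacity_from_dispersion(packet_size_bytes, dispersion_s):
    """Capacity estimate in bits per second."""
    return 8 * packet_size_bytes / dispersion_s

# A 1500-byte packet dispersed by 1.2 ms implies a ~10 Mbps narrow link.
estimate = capacity_from_dispersion(1500, 0.0012)
```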

      In contrast, methods of the interaction class treat cross-traffic as signal. Interaction of probes and cross traffic determines which probes of the pattern join the same busy period. New tools such as TOPP by Melander, pathload by Dovrolis, and Pathchirp by Ribeiro are examples. They are based on sending packet trains with controlled inter-packet spacing. However, the principles of the analysis performed make them fundamentally different from the packet train based methods relying on the spacing effect. These tools are able to produce accurate results even in situations when a highly loaded high capacity link follows lightly loaded low capacity links. (The previously mentioned methods fail in such a situation because they assume that the bottleneck link is the one limiting the available bandwidth.)

      In order to evaluate different "interaction class" tools under reproducible experimental conditions, Attila developed a light-weight simulation package, PSIM. The main task of this tool is to generate detailed information corresponding to what would occur during an active probing experiment - both the observable, and the unobservable aspects. PSIM uses the same input/output formats as the real active probing monitors, allowing users to apply the same analysis tools to process both simulation and real experimental results. It also supports the use of cross traffic traces collected from real networks as background traffic. A key capability of the simulator is that it provides (extensible) output statistics from each hop of the route, as if we were experimenting on a fully instrumented route, with monitors available at each hop.

      Attila reported some results from a PSIM experiment comparing pathload to their own chirp pattern based method on a two hop route consisting of a 100 Mbps link followed by a 3 Mbps link. The results obtained by both methods are consistent. Future plans include testing of another tool, Pathchirp, and further study of the general properties of "interaction class" tools. The authors also want to develop a new stand alone tool for available bandwidth estimation.

    3. NingNing Hu (CMU), Towards tunable measurement techniques for available bandwidth

      NingNing previously proposed two available bandwidth measurement algorithms: the Initial Gap Increasing (IGI) and the Packet Transmission Rate (PTR). He adapted PTR for use during the TCP Slow Start phase, thus demonstrating the idea of a tunable measurement tool, Paced Start (PaSt).

      Since no single technique works best in all network environments, the measurement algorithms should be adapted to the execution environment which includes both the network and the application requirements. Tunable measurement techniques may then improve both the measurement accuracy (depending on network properties) and efficiency (based on application needs).

      Tunability is the key challenge for the deployment of current techniques for available bandwidth measurement:
      - achieve single-end control of probing
      - enable application to configure the tradeoff between accuracy and probing overhead
      - prepare to deal with the environment of the future (very high speed networks, wireless networks)

    4. Jin Guojun (LBNL), Available bandwidth measurement and sampling

      Jin suggested that sample rate and probe burst lengths or choice of TCP window size affect available bandwidth measurement. He stated that available bandwidth is time-sensitive and presented graphs to show why tool comparisons do not work.

      Constantinos Dovrolis points out that everything Jin presents has more to do with basic statistics and sampling a random process than it does with measurement of available bandwidth.
    5. Samarth Shah (University of Illinois Urbana-Champaign), Available bandwidth estimation in IEEE 802.11-based wireless networks

      In wireless networks, the available bandwidth varies on a fast time-scale due to channel fading and error from physical obstacles. It also varies with the number of hosts contending for the channel. Samarth developed a per-neighbor available bandwidth estimation scheme for IEEE 802.11-based wireless networks. Within the device driver of the wireless interface, he maintains a decaying running average of the transmission delays of successfully transmitted MAC frames and calculates the throughput normalized to a pre-defined packet size. The presence of more contention or more channel bit errors corresponds to smaller available bandwidth. This method is feasible and robust. The aim of available bandwidth estimation is to serve as a basis for admission control and rate control of flows sharing the network.

      Samarth showed simulation results with 64-byte to 640-byte packets on a half-duplex wireless network. The raw throughput depends on the packet size. Using the normalized throughput to represent the bandwidth of a wireless link filters out the noise introduced by measuring packets of different sizes.

      Matthew Luckie incidentally showed a graph illustrating the impact of half duplex links on end-to-end path measurements.

    Session Discussion
    Loki Jorgenson sees progress on fundamental vocabulary problems, but we now need better context. What users think networks are (a mental model) is more important than what networks really are. We need better and more adaptive measurement systems.

    Terry Shaw represents some of the folks that buy tools made by Loki; tools can "get over" the technology. There is a specific need to deploy, monitor, and track utilization of video streaming servers. Terry wants to look at aggregate data types on networks. He wants insight into the application mix. Basic measurements (empirical data) filtered by economic ramifications could aid strategy, e.g., help structure his backbone costs and find tight links.

    Bruce Luekenhoff mentioned that Cisco QoS can use instrumentation within the router to take into account how deep the queue is.

    Mark Pucci notes that big network providers are also looking for some way to measure the e2e experience.

  • Session II: Capacity estimation

    1. Dina Katabi (MIT), Cross-traffic: noise or data?

      Dina shows how filtering the errors added by cross-traffic to nettimer capacity estimates can provide a probability distribution of cross-traffic bursts and therefore passively infer the capacity of multiple congested links.

      MultiQ, a new tool, measures packet interarrival times at a receiver to reflect the sizes of cross-traffic bursts at congested routers. She presented a graph that illustrates modes corresponding to 10Mb/s and 100Mb/s. Most of the probability density, however, lies outside these modes. The more congested links there are on the path, the harder it is to see the modes.
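
      The mode-finding step can be sketched with a simple histogram (illustrative only; MultiQ's actual analysis is more sophisticated). Interarrival times cluster at packet size divided by capacity for each congested link: 1500-byte packets give ~1.2 ms at 10 Mb/s and ~0.12 ms at 100 Mb/s.

```python
# Illustrative mode detection in an interarrival-time sample (not MultiQ's
# actual algorithm): bin the gaps and report bins that are local maxima.

from collections import Counter

def interarrival_modes(gaps_s, bin_width_s=1e-5, min_count=5):
    """Return bin centers (seconds) whose counts are local maxima."""
    bins = Counter(round(g / bin_width_s) for g in gaps_s)
    modes = [b * bin_width_s for b, c in bins.items()
             if c >= min_count
             and c >= bins.get(b - 1, 0)
             and c >= bins.get(b + 1, 0)]
    return sorted(modes)

# Synthetic gaps: 1500-byte serialization at 10 Mb/s (1.2 ms) and 100 Mb/s (0.12 ms).
gaps = [0.0012] * 10 + [0.00012] * 10
modes = interarrival_modes(gaps)  # two modes, near 0.12 ms and 1.2 ms
```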

    2. Matthew Luckie (University of Waikato), Segmentation of Internet paths for capacity estimation

      The furthest hops in a network topology experience the most additive noise. L2 store-and-forward devices also interfere with forward path measurements. Segmenting a path could remove additive error, but is difficult to do. Idea: insert timestamps at each hop using the IP Measurement Protocol (IPMP), which modifies the packet as it is forwarded along the path. Then use packet tailgating (as implemented in nettimer) to send a large first packet and a smaller second packet that has to queue behind it as they pass through the network. The capacity of a particular network segment can be estimated from the difference in time between the last bit of the first packet and the last bit of the second packet.

      Matthew conducted a simulation study on WAND Emulation Network. Using his cross-traffic-from-trace (ctft) tool, he randomly sent combinations of packet pairs using sizes and delays that matched the Auckland trace profile. He plots 3D graphs where x and y are the size of 1st and 2nd packet (in bytes), and z is the minimum separation time (in µsec). The graphs indicate that as long as the 2nd packet is smaller than the 1st packet, one can use the minimum separation time to measure the capacity of a given segment.

      Problem: If the outbound link is n times faster than the inbound link, the size of packet 2 must be no larger than S1/n (where S1 is the size of packet 1) if the pair is to remain together. Therefore, measuring a 100 Mbps hop immediately after a 10 Mbps hop is problematic. However, cross-traffic can help here: if the 1st packet has to queue behind other packets at a router after the separation, this delay may allow the 2nd packet to regain its position behind the 1st one.
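
      The tailgating arithmetic above can be written out directly (illustrative numbers): when the small second packet queues immediately behind the large first one across a segment, the gap between their last bits at the segment exit is the tailgater's serialization time there.

```python
# Sketch of the segment-capacity computation from tailgating (illustrative):
# capacity = 8 * size2 / (t_last_bit2 - t_last_bit1), where the timestamps
# are taken where the packets exit the segment of interest.

def segment_capacity(size2_bytes, t1_last_bit_s, t2_last_bit_s):
    return 8 * size2_bytes / (t2_last_bit_s - t1_last_bit_s)

# A 40-byte tailgater whose last bit trails the first packet's by 32 us
# implies a ~10 Mbps segment.
cap = segment_capacity(40, 0.0, 32e-6)
```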

    3. Mathieu Goutelle (ENS-Lyon, France), Study of a non-intrusive method for measuring the hop-by-hop capacity of a path

      Tracerate runs on one end node and uses a packet pair method, but assumes no concurrent traffic. Tracerate is based on tcptraceroute and is robust until loads are high. Basically, he increases hop-by-hop packet pair measurements until detecting the occurrence of a mode and then decreases the probe rate. Mathieu looks for four characteristics: maximum mode, noise area, new mode, previous mode. He has used his tool in 100 ns-2 simulations with a variable utilization rate from 0 to 100%, comparing the simulated capacity and the measured capacity at each hop.

    4. Phoemphun Oothongsap (North Carolina State University), The difficulties of bandwidth estimation in high speed networks

      SABUL is a hybrid protocol, transferring data over UDP with a TCP control channel and its own congestion control algorithm. It has been proposed as a TCP alternative to achieve high bandwidth utilization in high speed networks.

      Phoemphun conducted a series of experiments to check if the SABUL throughput at the application layer matches the physical link rate. He found serious problems with SABUL estimates of both the number of packets sent and the time interval between SYN packets. He also experienced trouble when attempting to use tcpdump to verify SABUL throughput, since tcpdump may not capture all packets. A software sniffer requires a large amount of system resources. Switching to a hardware sniffer (Adtech 400) also did not help, since it can capture only about 32 K 1500-byte packets, which is equivalent to only seconds of capture time on high speed links. Thousands of seconds might be needed to recognize some protocol behaviors.

      The authors plan to experiment with more powerful hardware sniffers. Presently, SABUL performance verification remains problematic.


  • Session III: Evaluation of Tools and Techniques

    1. Margaret Murray (CAIDA), The CAIDA bandwidth estimation testbed

      CAIDA's bandwidth estimation test lab currently consists of a 4-hop OC-48 and gigE path between 2 end hosts through three routers from different manufacturers. All routers are connected to a Spirent SmartBits6000 performance test unit, enabling generation of a variety of cross-traffic. A third host taps the test lab backbone and runs NeTraMet to independently verify traffic rates and characteristics. CAIDA is attempting to offer remote access to this resource to developers wishing to test their tools against known cross-traffic scenarios.

    2. Mark Santcroos (RIPE), Evaluation of existing bandwidth measurement tools for large scale deployment

      RIPE's TTM performance measurement infrastructure must be reliable, consistent, and non-intrusive with respect to other measurement activity. Mark focuses on delay, and measures it between:
      - OS Packet send and DAG NIC
      - DAG NIC and BSD Packet Filter (BPF)
      - BPF and receiver

      The tests show that pathload overestimates and Pathchirp underestimates. Plans are:
      - make more careful logging of all results
      - make delay measurements at different rates
      - analyze real traffic to get more insight about available bandwidth properties, and
      - improve the sending process.

      Discussion: What level of available bandwidth error would be acceptable? 20% may be sufficient, and getting beyond 10% error is likely to take 90% of the work. While ISPs do collect SNMP data, production concerns may limit or preclude access to it, so there is a real need for real time available bandwidth tools.

    3. Jiri Navratil (SLAC), What we have learned from developing and running AbwE

      Based on discussions at this workshop as well as additional testing, Jiri concluded that AbwE does not work. He is now developing another tool called abing.

    4. Mark Pucci (Telcordia Technologies), Accuracy and expressiveness in adaptive bandwidth measurements

      Bandwidth estimation techniques are more sensitive than basic delay/loss/jitter measurements, requiring feedback control to adapt algorithmic behavior. A single estimator is unlikely to be sufficient for all link speeds, all applications, and all types of cross traffic. Telcordia's Internet Monitoring Platform (IMP) focuses on achieving the highest accuracy. After examining different cross traffic generators (e.g., constant bit rate, Poisson, actual traces), the developers decided to use MPLS + Diffserv + traffic shapers and policers.

      The P-SPEC language specifies packet characteristics and includes synchronization commands to allow two sides (sender/receiver) to communicate. P-SPEC parsing generates enough packet descriptors to keep the packet generator busy. P-SPEC uses Pathchirp style probe generation that is exponential but not Poisson. Development to date occurred using a 3Com 3C905 series NIC.



      Application software generates highly variant packets with timing granularity down to 122 µsec using a standard OS. They also considered: using a kernel interrupt driven approach; exploiting NIC characteristics via individual NIC drivers; modifying NIC firmware; and using dedicated hardware (which may be the only way to monitor 10G).

      Their graphs show undesirable measurement variance even on 10-100M links.

      In the future they want to:
      - process packet probes the same regardless of packet generation technique
      - control probes at the receiver
      - improve the expressiveness of the required measurement
      - allow a single packet emitter to generate multiple forms of packet probe profiles
      - programmatically specify packet probes without overhead and variance

    5. Ravi Prasad (Georgia Tech), Evaluating Pathrate and Pathload with realistic cross-traffic

      Ravi presented two bandwidth estimation tools. Pathrate estimates path capacity based on packet pair/train dispersion. Pathload estimates path available bandwidth based on the one-way delay trend of periodic streams (and reports a range). He evaluated the accuracy of these tools using a wide range of cross-traffic loads, realistic cross-traffic, and a fully monitored testbed.

      Type of cross-traffic used in tests is critical for bandwidth estimation tools. Both pathrate and pathload perform well with realistic (random) cross-traffic, but show inaccurate results if simulated traffic is too deterministic. Properly simulated traffic should closely reproduce packet size distribution, interarrival time distribution, and correlation structure of real traffic.

    6. Medy Sanadidi (UCLA), CapProbe: Inexpensive and accurate Estimation of Narrow Link Capacity

      Both expansion and compression of dispersion involve queuing due to cross traffic. Dispersion expansion happens when the 2nd packet of a pair is queued. Dispersion compression results from the 1st packet being queued. Hence the idea of the tool: the packet pair with the minimal sum of end-to-end delays is likely to show a dispersion corresponding to the narrow link capacity. Looking for the packet pair with the minimal delay sum is inexpensive, and the method appears accurate in most of the conducted experiments, simulations, and measurements. It fails under heavy (>= 75%) utilization by non-responsive (UDP) cross traffic.

      CapProbe is work in progress; these results have not yet been published. Medy used dummynet software and wants to test the tool on the Abilene and NLANR testbeds. CapProbe works at the user level (no superuser privileges required).
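
      The minimal-delay-sum filter can be sketched in a few lines (illustrative data; not the published implementation): keep the pair whose summed one-way delays is smallest, since that pair most likely crossed the path without queueing and its dispersion is undistorted.

```python
# Sketch of CapProbe's minimal-delay-sum filter (illustrative, not the
# published implementation): the queue-free pair has the smallest delay sum,
# so its dispersion reflects the narrow link capacity.

def capprobe_estimate(pairs, pkt_size_bytes):
    """pairs: (delay1_s, delay2_s, dispersion_s) per packet pair."""
    d1, d2, disp = min(pairs, key=lambda p: p[0] + p[1])
    return 8 * pkt_size_bytes / disp

# Hypothetical sample of three 1500-byte pairs:
pairs = [
    (0.021, 0.024, 0.0009),   # compressed: 1st packet queued
    (0.010, 0.0112, 0.0012),  # minimal delay sum: undistorted dispersion
    (0.010, 0.030, 0.0031),   # expanded: 2nd packet queued
]
cap = capprobe_estimate(pairs, 1500)  # ~10 Mbps from the 1.2 ms dispersion
```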



    December 10, 2003

  • Session IV: Applications of Bandwidth Estimation

    1. Andrew Odlyzko (University of Minnesota), Some applications of bandwidth estimation

      An important application of bandwidth estimation is in understanding the evolution and economics of the Internet. Service providers have been almost uniformly unwilling to release any solid statistics about the capacity or traffic on their networks. This secretiveness, combined with some deliberate misinformation, was among the leading causes of the telecom crash, as the industry assumed that unrealistically high growth rates were the norm. Unfortunately, the crash has not significantly changed attitudes among carriers. It would therefore be advantageous to obtain a set of independent measurements of carrier networks. It would serve as a check on claims made by service providers, and hopefully would serve to persuade them to be more open, as they learn that many of their supposed secrets can be obtained by outsiders.

      Good estimates of traffic and capacity would serve to provide guidelines for the supplier sector. They would also help in projecting likely roles of various technologies (QoS, optical Internet, etc.).

      For several years Andrew has been using a variety of snippets of information about capacity and traffic on many networks to estimate what is happening. He published a number of papers summarizing these studies. In pursuit of more systematic data, Andrew is currently developing (with the assistance of a graduate student) tools for monitoring traffic on a variety of sites. He would like to augment those tools with bandwidth estimation techniques.

    2. Jasleen Kaur (University of North Carolina at Chapel Hill), Identifying bottleneck links using distributed end-to-end available bandwidth measurements

      Jasleen and her student Alok Shriram are performing tomography using distributed measurements of end-to-end available bandwidth on multiple paths to identify bottleneck links on each path.

      Jasleen takes a maximum of minimum values of available bandwidth on every shared link from multiple different end-to-end probes. This research addresses strategies for finding multiple bottlenecks on end-to-end paths by applying the following inference rules:
      Rule 1: for each path, links with minimum available bandwidth are potential bottlenecks. This rule can lead to false positives.
      Rule 2: if the end-to-end available bandwidth on 2 paths is equal, it is highly likely that the bottleneck links of each path are in the shared portion of their paths. This rule can lead to false negatives.
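
      The max-of-min idea behind these rules can be sketched as follows (illustrative; not the authors' code): each link's available bandwidth is at least the largest end-to-end measurement among the paths crossing it, and only links whose bound equals a path's own measurement remain bottleneck candidates for that path.

```python
# Sketch of the max-of-min inference (illustrative, not the authors' code).

def potential_bottlenecks(paths):
    """paths: dict path_id -> (set_of_links, end_to_end_avail_bw_mbps).
    Returns, per path, the links still consistent with being its bottleneck."""
    # Lower-bound each link's available bandwidth by the max of the
    # end-to-end measurements of the paths that traverse it.
    bound = {}
    for links, bw in paths.values():
        for link in links:
            bound[link] = max(bound.get(link, 0.0), bw)
    # Rule 1: a path's candidates are its links whose bound equals the
    # path's own measurement (other links are provably less constrained).
    return {pid: {l for l in links if bound[l] == bw}
            for pid, (links, bw) in paths.items()}

paths = {
    "A": ({"l1", "l2", "l3"}, 20.0),
    "B": ({"l2", "l4"}, 50.0),
}
# l2 carries path B at 50 Mbps available, so it cannot be path A's 20 Mbps
# bottleneck; l1 and l3 remain candidates for A.
candidates = potential_bottlenecks(paths)
```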

      Initial experiments with 4 Planetlab end hosts using pathload as the probing component detected at least one bottleneck in half of the paths and not more than three bottlenecks for 97% of the paths. About 71% of potential bottleneck links occur within 3 hops of the source.

      Remaining challenges:
      1. probing tool inconsistencies limit the ability to distinguish between bottleneck links
      2. careful probe scheduling is required so that paths that share links are not probed concurrently. Do link-sharing probes in separate steps while minimizing the total number of steps.
      3. determine limits imposed by topology. For example, how long does it take to complete one probing cycle? Probing tool run-time limits the number of participating end hosts.
      The ideal probing tool to face these challenges would: exhibit high accuracy and consistency (within 1Mbps); run fast (<1 s); and not interfere with cross-traffic or concurrent probing.

    3. Andreas Johnsson (Malardalens Hogskola, Sweden), Bandwidth measurements from a consumer perspective - a measurement infrastructure in Sweden

      TPTEST is a consumer-motivated measurement infrastructure to evaluate and compare broadband connections based on UDP and TCP throughput. Any end user can probe one of more than ten test servers placed by the government and ISPs/providers.

      Consumer perspective: keep it simple. Users complain that TPTEST metrics are hard to interpret. However, they are getting more educated through the project.
      Government perspective: test servers are provided by governmental agencies. The results must be trustworthy since in Sweden people trust the government more than they trust single companies.
      ISP/operator perspective: TPTEST is useful to make the infrastructure grow and to make consumers interested. Operators trust TPTEST developers and sponsors and use these measurements both to evaluate customer connections and to find errors within their own networks. Concerns:
      - accuracy is very important
      - who is responsible if comparisons are unfair?
      - how critical are test servers' locations?

    4. Andre Broido (CAIDA), Radon spectroscopy of inter-packet delay

      Radon spectroscopy is defined as the study of quantized data periods, frequencies, or delays. As we perform rate limiting analysis, we notice that bit rate estimation is related to bandwidth estimation.

      ATM is no longer the predominant infrastructure in the backbone. But ATM is in every neighborhood because DSL uses ATM. That is why analyzing cell time as a delay quantum may reveal information about edge behavior.


    5. Roundtable Discussion

      Terry Shaw (CableLabs), Life on the Access Network
      Terry initiated the discussion with his slides. CableLabs' DOCSIS controls/manages 24 different stream combinations from 300 Kbps to 7 Mbps; mixing and matching upstream and downstream directions yields 48 possible combinations. Cable has a maximum length of 100 km, so signals are periodically amplified.

    Jasleen Kaur: bandwidth can change within small durations. Tools should specify the time scale at which they work.
    k claffy: reminder: the path can change during the measurement. Traceroute may not return an actual path.
    Constantinos Dovrolis: if you decrease the measurement time, this increases the variance of what you measure. You cannot decrease the length of the measurement arbitrarily. Tools do not allow users to specify the measurement rate because tools can become too intrusive.

    Attila Pazstor: do we want to look at queuing behavior or "real" available bandwidth? Packet pair methods are not well suited to available bandwidth measurements (but are OK for capacity).

    Tony McGregor: in wireless you are dealing with ugly things about RF propagation. The half-duplex nature of wireless introduces new problems for cross-traffic; you do not get back-to-back packets.
    Matt Zekauskas: power saving modes have a big impact because they can cause packets to bunch up.
    Constantinos Dovrolis: CapProbe will work well for wireless capacity.

    Kevin Thompson: tool evaluation testbeds are important and we need more complex testbeds. We need to extend the testbed environment toward realistic IAT and packet size distributions.
    Steven Low: We are building a WANinaLab that works with EmuLab and Planetlab. It has optical transport and will not emulate real delay. It will be hooked up to the high speed physics network. We hope to have public addresses by next summer.
    Mark Allman: labs are never as good as real world measurements.
    Constantinos Dovrolis: concerned about tool evaluation as a competition, but most tools are written by grad students. Making them industrial strength probably needs to happen in industry.
    Matt Zekauskas: I2 is trying to build operational stuff. It has Inet Observatory: space in router nodes for things that researchers might want to do. Get some equipment and funding and run an experiment.
    k claffy: getting a real world path is hard. Folks here are your best bet. Also, tool developers need feedback. Get a research person together with an infrastructure person. The best approach is 3rd party evaluation of tools. But note: that's not pure research so hard to get funded. There is no obvious funding model.
    Andre Broido: we need a common API, a modular toolkit.

    k claffy: how do we leverage stronger links between bwest and TCP?
    Steven Low: fairness problem: if you want to use bwest for TCP control, are RTTs fast enough to track? But bwest would be very useful for slow-start situations.
    Medy Sanadidi: TCP should measure eligible rate equilibrium. If you do packet pair measurements, you tend to just discover asymptotic dispersion rate. Westwood does something different. Estimation of the backlog of a connection (level of congestion) produces rate estimates. It is aggressive when congestion is low and conservative when congestion is high. Coexistence of Westwood with another protocol is possible.
    David Wei: is there a tool to give me upper and lower bounds of capacity? One-sided error is needed; we currently only have two-sided error.
    Matt Zekauskas: bwest is just getting started, but we probably need to let avail bandwidth handle slow start and TCP handle equilibrium.
    Medy Sanadidi: it is important for rate estimation method (like with TCP) to reflect temporary vs sustained connections.

    Terry Shaw: it would be useful to standardize or identify the evaluation methodologies (tools, methods, confidence level)
    - technique: summary; list of techniques with description
    - application: where could a tool be used? What do you really get out of it?
    - means of evaluation: What standard statistical techniques are used? What is the confidence level?
    - pros: What are the strengths?
    - cons: What are the weaknesses? what are the Heisenbugs? (A bug that alters its behavior when you attempt to isolate it.)
    Loki Jorgenson: most of what has been discussed addresses network engineering, but what ultimately matters is application performance. This step up from measurement to performance evaluation is important, but difficult.



Action items
  • Cementing accepted language into an RFC
  • Workshop report (Margaret Murray)
  • CAIDA sabbaticals, test lab, data access (kc claffy)
  • Other output and follow on activities (kc claffy, Constantinos Dovrolis)
  • IRTF mailing list setup in order to nucleate this community (Mark Allman)
  Last Modified: Wed Mar-27-2019 22:23:24 PDT
  Page URL: http://www.caida.org/workshops/isma/0312/report.xml