Talk Abstracts: AIMS 2018: Workshop on Active Internet Measurements

This page contains names, talk abstracts (if presenting), and topics the participants are interested in discussing, as well as any related URLs. Participants are encouraged to read these ahead of time to anticipate workshop discussion.


Dates: March 13 (Tue) - March 15 (Thu), 2018
Place: Auditorium B210E/B211E Meeting Room,
San Diego Supercomputer Center, UCSD Campus, La Jolla, CA


Participant Abstracts

kc claffy (CAIDA) Talk Title: Introduction / Platform for Applied Network Data Analysis

Interested in Discussing: PANDA, policy, measurement

Kirill Levchenko (UC San Diego) Talk Title: PacketLab: A Universal Measurement Endpoint Interface

Talk Abstract: The right vantage point is critical to the success of any active measurement. However, most research groups cannot afford to design, deploy, and maintain their own network of measurement endpoints, and thus rely on measurement infrastructure shared by others. Unfortunately, the mechanism by which we share access to measurement endpoints today is not frictionless; indeed, issues of compatibility, trust, and a lack of incentives get in the way of efficiently sharing measurement infrastructure.

We propose PacketLab, a universal measurement endpoint interface that lowers the barriers faced by experimenters and measurement endpoint operators. PacketLab is built on two key ideas: It moves the measurement logic out of the endpoint to a separate experiment control server, making each endpoint a lightweight packet source/sink. At the same time, it provides a way to delegate access to measurement endpoints while retaining fine-grained control over how one's endpoints are used by others, allowing research groups to share measurement infrastructure with each other with little overhead. By making the endpoint interface simple, we also make it easier to deploy measurement endpoints on any device anywhere, for any period of time the owner chooses. We offer PacketLab as a candidate measurement interface that can accommodate the research community's demand for future global-scale Internet measurement.
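To make the division of labor concrete, here is a minimal sketch of what such a lightweight endpoint could look like; the framing, opcodes, and control-server address are hypothetical illustrations, not PacketLab's actual wire protocol:

    # Sketch of a "dumb" measurement endpoint in the spirit of PacketLab:
    # all measurement logic lives on a remote control server; the endpoint
    # only opens sockets, sends bytes, and reports what it receives.
    # The framing and opcodes below are invented for illustration.
    import socket, struct

    CTRL = ("ctrl.example.net", 9000)        # hypothetical control server

    def endpoint_loop():
        ctl = socket.create_connection(CTRL)
        while True:
            hdr = ctl.recv(8)                # sketch: assumes full reads
            if len(hdr) < 8:
                break
            op, length = struct.unpack("!II", hdr)
            payload = ctl.recv(length) if length else b""
            if op == 1:                      # SEND: payload = b"host:port|data"
                dest, _, data = payload.partition(b"|")
                host, port = dest.decode().rsplit(":", 1)
                s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                s.settimeout(2.0)
                s.sendto(data, (host, int(port)))
                try:
                    reply, _ = s.recvfrom(4096)
                except socket.timeout:
                    reply = b""
                # report the reply back to the controller, framed the same way
                ctl.sendall(struct.pack("!II", 2, len(reply)) + reply)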

David Choffnes (Northeastern University) Talk Title: Auditing Net Neutrality for France

Talk Abstract: You've heard from me before about how we have developed tools to detect DPI-based traffic differentiation, exposed problematic ISP policies, and even deployed countermeasures to work around them. In December 2017, my team signed a contract with ARCEP (France's telecom regulator) to put our research to the test as a product that subscribers of French ISPs can use to audit whether their providers violate net neutrality. Unlike in the US, such net neutrality violations are illegal in France. I'll talk about the road to deployment and lessons learned along the way. Furthermore, I'd like to discuss the Reverse Traceroute tool, something I hope the community will increasingly incorporate in their path-measurement toolbox.

James Deaton (Great Plains Network) Talk Title: Schools receive over $4 billion per year for "high performance" broadband. Do (we/they) know what that means?

Talk Abstract: Providing high performance broadband Internet access to schools is considered essential to modern education. Connections to schools are paid for with tax dollars, usually involving Federal E-Rate subsidies combined with local funds. A number of groups, including Education Superhighway and the State Educational Technology Directors Association, have proposed guidelines for bandwidth at schools. In light of the proliferation of CDNs and cloud computing, and with requirements for firewalls and filtering (to ensure CIPA compliance), what constitutes "high performance"? What should be measured? How do these measures relate to performance that may be available at home, where students do homework? Is bandwidth alone a good proxy for more sophisticated measures?

Interested in Discussing: In order to move public policy we need simple, easy to understand metrics that are meaningful and evidence-based. What new metrics should be considered and quantified and how do we tackle them?

Paul Schmitt (Princeton University) Talk Title: Correlating Network Congestion with Video QoE Degradation -- A Last-Mile Perspective

Talk Abstract: Despite the steady increase in home broadband speeds, watching a video streamed over the Internet can still be a frustrating experience. Overloaded servers, network congestion, and poor home WiFi quality are just a few of the potential root causes that can hamper video Quality of Experience (QoE). Without knowing where the problems are located, users hoping to improve their experience often opt to pay higher fees for increased access capacity. As access capacity is only one of the potential bottlenecks that might affect video quality, the ability to monitor and analyze how QoE impairments correlate with the root causes generating them becomes key. This is particularly challenging, though, as content providers relying on Dynamic Adaptive Streaming over HTTP (DASH) are quickly transitioning to encrypted protocols (HTTPS/QUIC) to deploy their services, leaving the application logic as the sole vantage point with full information about when these impairments affect the experience.

In order to address these challenges, we propose a novel lightweight system running at the home gateway that integrates active and passive measurements to correlate video QoE with congestion events. The system first separates video flows from other traffic by mapping DNS requests to subsequent TCP flows, which allows the identification of active services, including those that use HTTPS encryption. It then tracks the traffic patterns of video streams. Our goal is to infer key video QoE metrics such as average bitrate and re-buffering events solely from network traffic. Moreover, the system exploits novel algorithms that use simple probing techniques, i.e., lightweight pings and traceroutes, to take advantage of the home network vantage point and pinpoint where potential root causes hampering the streaming process might be located.
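As a rough illustration of the DNS-to-flow mapping step, the following sketch (using scapy) remembers the addresses returned in DNS answers and labels later TCP flows whose destination matches; the interface name and video-domain list are placeholders, not the deployed system's logic:

    # Sketch: associate DNS answers with later TCP flows (requires root).
    from scapy.all import sniff, DNS, IP, TCP

    ip_to_name = {}                                    # resolved IP -> queried name
    VIDEO_DOMAINS = ("youtube", "nflxvideo", "ttvnw")  # illustrative list

    def handle(pkt):
        if pkt.haslayer(DNS) and pkt[DNS].ancount:
            dns = pkt[DNS]
            for i in range(dns.ancount):
                rr = dns.an[i]
                if rr.type == 1:                       # A record
                    ip_to_name[rr.rdata] = rr.rrname.decode().rstrip(".")
        elif pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags & 0x02:
            # new flow (SYN): label it with the name its destination resolved from
            name = ip_to_name.get(pkt[IP].dst, "")
            if any(d in name for d in VIDEO_DOMAINS):
                print("video flow:", pkt[IP].src, "->", pkt[IP].dst, name)

    sniff(iface="eth0", filter="udp port 53 or tcp", prn=handle, store=False)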

In this work we present the developed system and preliminary results showcasing its potential. We have developed this system for the Raspberry Pi and Odroid, and we are currently deploying these boxes in the homes of volunteer users; we have already deployed 7 boxes in France and 13 in the US. To validate the algorithms inferring QoE from network flows, we exploit ground truth collected via a browser extension that plugs into HTML video elements and extracts the metrics as seen by the application logic. Results from controlled laboratory experiments are then used to evaluate the congestion algorithms. These results showcase how the system achieves its goals without affecting home network quality, even when running on low-cost, multi-purpose hardware.

Marinho Barcellos (UFRGS) Talk Title: How Far Have We Come? Using IXPs to Measure Improvements of Source Address Validation Filtering in Inter-Domain Traffic

Talk Abstract: It's well known that IP does not include built-in source address validation, allowing spoofing. This, in turn, enables a series of threats, including redirection, amplification, and anonymity in DDoS attacks. Recent work shows these threats have significant impact on the Internet infrastructure. Many approaches have been proposed to tackle IP spoofing. Advances put forward include the analysis of the deployment of filtering best practices, detection of spoofed traffic in passive trace analysis, inference of source address validation (via traceroute), and the ability to detect whether an individual network is vulnerable to spoofing through active measurements. While the research community has developed many architectures, the networking industry came up with scrubbing centers, and the IETF introduced best common practices (BCPs 38 and 84) to protect against source spoofing. Yet most architectures are not implemented by device vendors, and without relying on third-party scrubbing centers, the only anti-spoofing tool available in practice is ingress/egress filtering. Although the BCPs are effective, for them to work properly there has to be global coordination among network operators. In other words, none of the approaches have eradicated the problem, and any solution will require a joint community effort (see MANRS).

We will contribute to this effort by increasing the visibility on networks lacking source address validation (SAV). We propose a methodology and corresponding tools to detect spoofed traffic in network traces, enabling SAV compliance tests for IXP networks. Our approach leverages the uniqueness, both in members and traffic, of the IXPs. Our goals are threefold: (i) investigate the prevalence, causes, and impact of IP source spoofing on the Brazilian IXP substrate; (ii) create a tool that enables IXPs to run compliance tests on source address validation; and (iii) measure the trends on mitigation improvements in inter-domain traffic after we deployed our tool. We will take advantage of continuous flow data and information from the largest IXPs from the Brazilian IX.br ecosystem, to which we have privileged access. This ground truth information will allow us to apply our methods and tools and then validate the accuracy of our results. Finally, we will also use the results to analyze traffic dynamics and assist the IXP operators to implement source address validation filtering.
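The core compliance check can be illustrated with a small sketch: a flow entering the IXP fabric from a member's port is a spoofing candidate if its source address falls outside every prefix that member is expected to originate (e.g., its customer cone). The ASNs and prefixes below are illustrative placeholders, not IX.br data:

    import ipaddress

    # member AS -> prefixes it may legitimately source traffic from
    # (e.g., its customer cone derived from BGP data); values are illustrative
    CONE = {
        64500: [ipaddress.ip_network("198.51.100.0/24")],
        64501: [ipaddress.ip_network("203.0.113.0/24")],
    }

    def looks_spoofed(member_asn, src_ip):
        """Flag a flow whose source address is outside every prefix
        the sending IXP member is expected to originate."""
        src = ipaddress.ip_address(src_ip)
        prefixes = CONE.get(member_asn, [])
        return not any(src in p for p in prefixes)

    # Example: flow entering the fabric from AS64500's port
    print(looks_spoofed(64500, "192.0.2.7"))   # True -> candidate spoof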

It is essential to match and explore vulnerabilities originating from the control plane via BGP routes with data plane traffic. Identifying malicious ASes can provide new insights to strengthen our methodology. Correlation with malicious activity logs (e.g., botnets, DNS attacks, spam IP blacklists) can help us understand how spoofed traffic has been used on the Internet. We can thus improve our defenses through reliable mechanisms.

Going forward, we will encourage IXP operators to deploy and run our tool to identify participating ASes who should verify their SAV filtering configurations. Finally, with this study, our data may inform best practices, encourage adoption, reveal topics worthy of study, and provide the basis for measuring mitigation improvements and effectiveness of source address validation.

Kevin Vermeulen (Sorbonne University) Talk Title: Adding Multipath Route Tracing to RIPE Atlas

Talk Abstract: We present the challenges in deploying resource-intensive multipath traceroute probing on resource-limited RIPE Atlas probes.

Timur Friedman (Sorbonne University) Talk Title: Adding Multipath Route Tracing to RIPE Atlas

Talk Abstract: We present the challenges in deploying resource-intensive multipath traceroute probing on resource-limited RIPE Atlas probes.

Stephen Strowes (RIPE NCC) Talk Title: Multipath Route Tracing with RIPE Atlas

Talk Abstract: In collaboration with Sorbonne University and RIPE NCC, we are investigating the data collected by recent RIPE Atlas traceroute measurements in terms of multipath discovery.

The RIPE Atlas traceroute implementation currently modifies flow IDs on each iteration of ongoing measurements: by default a cycle of 16 iterations, one every 15 minutes, suggesting that once every four hours the Atlas platform completes an ersatz MDA traceroute measurement.

We intend to analyse the data gathered by long-running traceroute measurements to understand the path variability made visible by Atlas without any additional modifications to the measurements or the platform, and to compare these to full MDA traceroutes performed by researchers at Sorbonne University.
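As a sketch of the aggregation we have in mind, grouping a long-running measurement's per-iteration results into 16-probe cycles approximates one MDA-style snapshot every four hours; the input format below is an illustrative simplification of the Atlas result JSON, not its actual schema:

    from collections import defaultdict

    # results: list of (unix_time, flow_id, tuple_of_hop_ips) per iteration,
    # e.g. parsed from Atlas traceroute results; field names are illustrative.
    def paths_per_cycle(results, iterations=16, interval=900):
        cycle_len = iterations * interval       # 16 x 15 min = 4 hours
        cycles = defaultdict(set)
        for ts, _flow_id, hops in results:
            cycles[ts // cycle_len].add(hops)   # distinct hop sequences seen
        return {c: len(paths) for c, paths in cycles.items()}

    # A cycle with >1 distinct hop sequence indicates load-balanced paths
    # that a single fixed-flow-ID traceroute would have missed.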

We will present some initial analysis as a prompt for further discussion on what modifications to our traceroute measurements we ought to consider implementing, in addition to discussion on how we might improve the results format for traceroute measurements.

Interested in Discussing: IPv4/IPv6, BGP, active measurement and the Atlas platform

Yves Vanaubel (University of Liège) Talk Title: PANDA with Augmented IP Level Data

Talk Abstract: Research on Internet topology based on traceroute has been ongoing for nearly twenty years. Since the early years, the network measurement mechanism applied has remained the same: sending traceroute probes towards multiple destinations and collecting the returning ICMP time_exceeded messages. Nevertheless, recent research has shown how to improve basic traceroute, in particular by labeling collected IP addresses with new and relevant information. This presentation will discuss which improvements can be included in PANDA [1], leading to an augmented IP-level trace dataset.

We believe that the following information must be included:

MPLS data, as MPLS discovery mechanisms [2,4] have been implemented in scamper (a cooperation between ULiège and CAIDA). This should lead, among other things, to a better understanding of the Internet topology, in particular removing the "blackhole" created by MPLS tunnels that do not decrement IP TTLs.

middlebox data, as tracebox [6] has been implemented in scamper, allowing a longitudinal study of middleboxes in the wild [8]. In addition, one can imagine linking PANDA with any existing infrastructure maintaining information about middleboxes, such as the Path Transparency Observatory [7].

subnetworks, by porting TreeNET into scamper [9]. The advantage here is that TreeNET may highlight the potential presence of Layer-2 devices, in addition to routers and subnets.

improved alias resolution through network fingerprinting [5, 10]. Indeed, IP addresses with different network fingerprints cannot be considered aliases (see the sketch below).
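A minimal sketch of the TTL-signature idea behind [5, 10]: routers set characteristic initial TTLs (commonly 32, 64, 128, or 255) on the ICMP messages they originate, so the inferred initial-TTL pair for time-exceeded and echo-reply messages acts as a signature; the values below are illustrative:

    # Sketch of TTL-based router fingerprinting: infer the initial TTL a
    # router used from the TTL observed at our vantage point, then compare
    # signatures. Two addresses with different signatures cannot be aliases.
    INITIAL_TTLS = (32, 64, 128, 255)

    def infer_initial_ttl(observed_ttl):
        # smallest common initial TTL >= the TTL we observed
        return min(t for t in INITIAL_TTLS if t >= observed_ttl)

    def may_be_aliases(sig_a, sig_b):
        # sig = (initial TTL of time-exceeded, initial TTL of echo-reply)
        return sig_a == sig_b

    print(infer_initial_ttl(243))   # -> 255 (reply crossed 12 hops)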

Finally, the PANDA web interface may also be improved with additional mechanisms for querying this new dataset, e.g., traffic engineering policies applied by operators [3] or middlebox usage by ASes [8].

References:
[1] Center for Applied Internet Data Analysis. DIBBs: Integrated Platform for Applied Data Analysis (PANDA). See https://www.caida.org/funding/dibbs-panda/dibbs_proposal
[2] B. Donnet, M. Luckie, P. Mérindol, J.-J. Pansiot. Revealing MPLS Tunnels Obscured from Traceroute. In ACM SIGCOMM Computer Communication Review, 42(2), pp. 87-93. April 2012.
[3] Y. Vanaubel, P. Mérindol, J.-J. Pansiot, B. Donnet. MPLS Under the Microscope: Revealing Actual Transit Path Diversity. In Proc. ACM Internet Measurement Conference (IMC). October 2015.
[4] Y. Vanaubel, P. Mérindol, J.-J. Pansiot, B. Donnet. Through the Wormhole: Tracking Invisible MPLS Tunnels. In Proc. ACM Internet Measurement Conference (IMC). November 2017.
[5] Y. Vanaubel, J.-J. Pansiot, P. Mérindol, B. Donnet. Network Fingerprinting: TTL-Based Router Signature. In Proc. ACM Internet Measurement Conference (IMC). October 2013.
[6] G. Detal, B. Hesmans, O. Bonaventure, Y. Vanaubel, B. Donnet. Revealing Middlebox Interference with Tracebox. In Proc. ACM Internet Measurement Conference (IMC). October 2013. See scamper implementation: https://github.com/mami-project/tracebox
[7] Measurement and Architecture for a Middlebox Internet (MAMI) Project. Path Transparency Observatory. See https://observatory.mami-project.eu
[8] K. Edeline, B. Donnet. A First Look at the Prevalence and Persistence of Middleboxes in the Wild. In Proc. International Teletraffic Congress (ITC). September 2017.
[9] J. F. Grailet, F. Tarissan, B. Donnet. TreeNET: Discovering and Connecting Subnets. In Proc. Traffic Monitoring and Analysis Workshop (TMA). April 2016.
[10] J. F. Grailet, B. Donnet. Towards a Renewed Alias Resolution with Space Search Reduction and IP Fingerprinting. In Proc. Network Traffic Measurement and Analysis Conference (TMA). June 2017.

Rocky Chang (The Hong Kong Polytechnic University) Talk Title: Crowdsourcing Mobile User's Fine-Grained Network Performance: Experience and Directions

Talk Abstract: Recently we have developed and deployed an Android app (MopEye) that can measure the latency of any app running on the phone. The app can capture and recognize the traffic from all apps (TCP and DNS for the time being) and use an external network socket to relay it, measuring the round-trip delay at the same time. The app is designed not to capture any private user information inside the packets, nor the URLs entered by users in web apps. There are already over 10K installs from all over the world, and the measurement data have the potential to answer many questions about mobile users' network experience.
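For intuition, a user-space relay can approximate network round-trip time by timing the TCP handshake, roughly as in this sketch (a simplification for illustration, not MopEye's code):

    import socket, time

    def connect_rtt(host, port=443, timeout=3.0):
        """Approximate network RTT by timing the TCP three-way handshake
        (connect() returns when the SYN-ACK arrives)."""
        t0 = time.monotonic()
        s = socket.create_connection((host, port), timeout=timeout)
        rtt_ms = (time.monotonic() - t0) * 1000
        s.close()
        return rtt_ms

    print(f"{connect_rtt('example.com'):.1f} ms")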

There are two objectives to this talk. The first one is to summarize the information we could glean from the measurement data so far, including
o Overall characteristics of per-app network performance
o Comparison between WiFi and cellular performance
o Identifying poor-performance ISPs
o Mobile DNS performance
o Network-unfriendly app behaviour
o Types of connection failures

The second objective is to solicit ideas from the community on how to further this work. There are several possible directions:
o How to share the data with the scientific community?
o How to use the data to enhance our understanding of the real-world mobile network performance?
o How to use the data to enhance the mobile user experience?
o How to incentivize users to install the app?
o How to sustain users' interest in using the app?

Darryl Veitch (University of Technology Sydney) Talk Title: Timing Verification as a Service

Talk Abstract: One of the great challenges inhibiting the coordination of distinct measurement efforts, and the reuse of their data by other measurement programs and beyond, is their highly specific as well as technical nature. Although at first glance data collected from a given platform may seem to answer the needs of other applications or even policy, on digging deeper the data almost always proves inappropriate and incomplete (often highly so) for the new purpose. By encapsulating measurement expertise and infrastructure behind a convenient interface, the idea of measurement-as-a-service can greatly reduce the need for technical expertise, alleviate the problem of the timeliness of data, and improve data relevance through available parameter and configuration settings. However, the underlying infrastructure and analysis limitations make it likely that, in the end, a new measurement need can still be only very partially satisfied by an existing service.

We argue that there is an important exception to the inherent problems above, in the area of timing. Timing is special first of all in its ubiquity: essentially all measurement efforts require time information, whether it be simply knowing the dates over which data was collected at one extreme, or the need for billions of event timestamps accurate to the microsecond at the other. Timing is also special because of its universality: there are not myriads of different metrics of potential interest, but only a small finite number of time-related `data types' common to all applications. The key questions for each of these are likewise universal: are my timestamps reliable? how accurate are they? A means to answer these questions by calling upon expertise via timing-verification-as-a-service therefore has the potential to be very widely applicable.

In our talk we will describe the different time `data types' and how they differ. We will flesh out a model for how a timing verification service could work, the kinds of services it could potentially provide and how, and their limitations. This includes separating the problems at the client side from those at the time-server present in the network, on which the client timekeeping generally depends. An example service is the following: an application could issue a "server audit request", which would provide continuous external monitoring of the timeserver(s) used throughout a measurement campaign. This monitoring data could be used subsequently for application timestamp quality validation, for troubleshooting, or simply be stored to act as an authoritative timing audit trail if required. To make the discussion concrete, we will summarize recent work on the automated detection of timeserver anomalies, which would underpin many of the proposed services.
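As a concrete illustration of what such monitoring could build on, the sketch below performs a single standard SNTP exchange and computes the usual offset and delay estimates; the server name is a placeholder, and this is a simplification for illustration, not the proposed service itself:

    import socket, struct, time

    NTP_EPOCH_DELTA = 2208988800          # seconds between 1900 and 1970

    def sntp_probe(server="pool.ntp.org"):
        """One SNTP exchange; returns (offset_s, delay_s) vs. the server."""
        pkt = b"\x1b" + 47 * b"\0"        # LI=0, VN=3, Mode=3 (client)
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(2.0)
        t0 = time.time()
        s.sendto(pkt, (server, 123))
        data, _ = s.recvfrom(512)
        t3 = time.time()
        # server receive (t1) and transmit (t2) timestamps: seconds + fraction
        t1 = struct.unpack("!I", data[32:36])[0] - NTP_EPOCH_DELTA \
             + struct.unpack("!I", data[36:40])[0] / 2**32
        t2 = struct.unpack("!I", data[40:44])[0] - NTP_EPOCH_DELTA \
             + struct.unpack("!I", data[44:48])[0] / 2**32
        offset = ((t1 - t0) + (t2 - t3)) / 2
        delay = (t3 - t0) - (t2 - t1)
        return offset, delay

    # Logged periodically, sudden jumps in offset (with stable delay)
    # would flag a timeserver anomaly for the audit trail.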

Interested in Discussing: What kinds of timing services do you want?

Ramakrishna Padmanabhan (University of Maryland) Talk Title: Measuring and Inferring Weather's Effect on Residential Link Failures

Talk Abstract: As users increasingly depend on Internet connectivity, the reliability of residential links becomes all the more crucial. While there are many manmade causes of Internet outages, such as bugs and misconfigurations, little is known about natural causes. In particular, weather can have a profound impact on residential links: severe storms can cause power outages, wind can blow trees onto overhead wires, and heat can cause equipment to fail.

In this talk, I will present a study of the role that weather plays in causing residential links to fail. We present a new year-long dataset comprising over 4 billion pings to 3.6 million IP addresses throughout the United States before, during, and after periods of severe weather forecast by the National Weather Service. We analyze these data and introduce new techniques to (1) measure how weather correlates with failures across different geographic areas, link types, and providers; and (2) detect correlated failures that are caused, for instance, by power outages and network outages.
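As a simplified illustration of the correlation step in (1), one can compare failure rates inside and outside forecast weather periods per link type; the record layout below is hypothetical, not the study's actual data format:

    from collections import defaultdict

    # obs: list of (link_type, in_weather_event, failed) records, one per
    # address-hour of ping observations; the layout is illustrative.
    def failure_rate_ratio(obs):
        counts = defaultdict(lambda: [0, 0, 0, 0])  # [fail_w, n_w, fail_o, n_o]
        for link_type, in_event, failed in obs:
            c = counts[link_type]
            if in_event:
                c[0] += failed; c[1] += 1
            else:
                c[2] += failed; c[3] += 1
        return {lt: (c[0] / c[1]) / (c[2] / c[3])
                for lt, c in counts.items() if c[1] and c[2] and c[3]}

    # ratio > 1: that link type fails more often during severe weather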

Ahmed Elmokashfi (SIMETRIC/Simula) Talk Title: fling: A Flexible Ping for Middlebox Measurements

Talk Abstract: Middleboxes have been known to change packets in many ways, making it hard to design protocol extensions that work for the large majority of Internet users. Addressing the need to know what such middleboxes do, we present a tool called fling ("flexible ping"). Fling is a client-server tool that allows measurements to be updated with a server-side change, and it can detect which device along the path changed or dropped a packet. We also present some measurement results to highlight the utility of fling. In particular, we use fling to verify whether DSCP code points can be used by WebRTC for signaling its QoS expectations. We also use fling to measure whether various transport protocols and their options are usable in an end-to-end fashion. These results are based on measurements from a large number of geographically spread Ark, PlanetLab, and NorNet probes.

Reza Rejaie (University of Oregon) Talk Title: Assessing Techniques for Mapping Internet Interconnections

Talk Abstract: Mapping Internet interconnections between pairs of ASes is a critical first step in understanding the evolving nature of the Internet's AS-level structure. The notion of "mapping interconnections" refers to first inferring their existence and type, and then geolocating (or pinning) them to the target colocation facility. Two recently developed techniques, namely bdrmap and MAP-IT, have focused exclusively on the inference aspect of interconnections for a specific setting, without geolocating them, while another technique, CFS, focuses only on pinning. We have developed another technique, called mi2, that infers interconnections at a target colocation facility and pins them to inside (or outside) of that facility. However, all of these techniques not only have a number of limitations but also have not been extensively validated.

We present the strengths and weaknesses of these techniques, discuss challenges in evaluating, validating, and comparing them, and examine ways that they could be improved or merged to accurately and efficiently map a specific set of Internet interconnections. Finally, we argue that interconnectivity in today's Internet is in the midst of a significant shift, with far-reaching implications for understanding the rapidly changing interconnectivity fabric at the edge of the network. We report on the outward signs of this drastic shift (e.g., new types of networks using new types of (virtual) interconnections in the newly emerging infrastructures called cloud exchanges) and describe the impact of this shift on the Internet's interconnection ecosystem in general. We also discuss why this drastic shift exposes the deficiencies of all existing techniques for inferring and/or mapping interconnections in today's Internet.

Hrishikesh Bhatt Acharya (Rochester Institute of Technology) Talk Title: Held Hostage? Mapping the influence of major providers and CDNs on the Internet

Talk Abstract: The Internet has, over time, grown top-heavy, and a great deal of power is concentrated in the hands of a few major ASes (Level-3, Cogent, TeliaSonera), content providers (Google, Amazon, Facebook) and CDNs (Akamai).

1) There is a need to map out the extent and influence of CDNs. We propose taking a broad sample of target websites (such as the Alexa top-100k). Then, from a large number of probe points, the path is traced (using traceroute) to the actual server that responds to requests for the website, and thereby to its hosting AS. (Various heuristics -- reverse DNS lookups, network and geographic location of adjacent routers, etc. -- should be used to check the results; standard IP whois is not sufficiently accurate.)

Collection of this data would help identify the endpoints of major CDNs such as Akamai, who do not publicly share details of their infrastructure. Local caches of major CDNs would show up repeatedly as endpoints of traces to a large number of unrelated websites. To identify which cache belongs to which CDN, we propose to search for publicly available records where sites share which CDN they employ.

This study would indirectly provide a metric for the relative importance of CDNs and the public Internet. For example, if traffic to Google actually went to the main server in Mountain View, the paths would be very long; the CDN allows paths to be short. By observing the average path length to a site from vantage points around the world, we can estimate how extensively the site uses a CDN. Finally, this data would serve as a way to verify the importance of the "Internet backbone". From BGP routing tables, it appears that most paths in the Internet pass through a few core ASes. Whether this is actually true, when we consider that traffic to a site is sent to its CDN (and not necessarily the site's own primary server), remains to be tested.
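A first-cut version of the measurement in (1) might look like the following sketch, which resolves a list of sites and tallies their hosting ASes; the IP-to-ASN lookup is a stand-in for a real longest-prefix match against a CAIDA-style prefix-to-AS dataset:

    import socket
    from collections import Counter

    PREFIX2AS = {}   # fill from a CAIDA-style prefix-to-AS dataset

    def ip_to_asn(ip):
        # stand-in: a real tool would do a longest-prefix match here
        return PREFIX2AS.get(ip, "unknown")

    def hosting_concentration(sites):
        tally = Counter()
        for site in sites:
            try:
                ip = socket.gethostbyname(site)
            except socket.gaierror:
                continue
            tally[ip_to_asn(ip)] += 1
        return tally.most_common()

    # An AS that appears for hundreds of unrelated sites is likely a CDN
    # cache footprint rather than the sites' own origin networks.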

2) The repeal of legally-enforced Net Neutrality, in the US, is expected to have serious impacts worldwide. Our preliminary results show that over 80% of Internet paths pass through ASes in the United States. (This finding should be verified using probes from various positions in the Internet, as in (1) above.) To detect whether, in fact, throttling by US providers is responsible for poor performance of a site, we need a baseline. We propose to compare multiple sets of similar sites, i.e., sites located in the same AS and with similar importance (according to Alexa). From the collected data, we aim to identify cases where one site consistently gets much worse Internet service than its peers, as measured by the tool abget. The position of the bottleneck can be located with the tool pathneck. The responsible AS, along with its nationality, can then be identified using the CAIDA IP-to-ASN map.

Interested in Discussing: Impact of filtering and throttling by major ASes and content providers, on Internet consumers around the world.

Alexander Marder (University of Pennsylvania) Talk Title: Classifying Internet Congestion

Talk Abstract: Internet congestion occurs when the demand for a link's bandwidth exceeds the bandwidth available at that link. Typically, this results in latency increases as packets wait in the queue, and eventually packet loss when the queue fills completely. Identifying congestion by detecting the increased latency, especially at AS interconnections, has recently been of interest to the public policy community, and the focus of recent work by AIMS participants.

However, the mere existence of congestion does not necessarily indicate a problem. In fact, for every TCP flow there exists a bottleneck. When the bottleneck is the bandwidth available to the flow at a network link, loss-based TCP congestion control algorithms, such as CUBIC and (New)Reno, increase their sending rate until they experience packet loss, filling the queue and inducing congestion in the process; they then decrease the sending rate in response, and repeat those two steps for the duration of the connection. Essentially, when using loss-based congestion control, TCP causes the number of packets in the queue to continually increase and decrease.

This behavior produces an observable, but fluctuating, increase in the link latency, which occurs regardless of the bandwidth available to each TCP flow - it could be 5 Mbps or 900 Mbps. Complicating matters, the measured latency, even when averaged over large numbers of probes, is not necessarily a result of the bandwidth available to the bottleneck TCP flows; it often results from the mix of congestion control algorithms instead. Consequently, an increase in the observable latency does not necessarily indicate an increase in congestion. However, being able to determine the available bandwidth could provide valuable insight, such as whether the congestion is a result of an acceptably provisioned bottleneck link, undesirable behavior by Internet networks, or a DDoS attack.
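To make the distinction concrete, a naive edge-probing detector might flag only windows in which the queue never drains, as in the sketch below; the window size and threshold are illustrative assumptions, not our experimental settings:

    def congested_windows(rtts, window=60, thresh_ms=10):
        """Naive level-shift detector: flag windows whose *minimum* RTT
        sits well above the series-wide minimum (the propagation
        baseline), i.e., the queue never drains during that window.
        The 10 ms threshold is an illustrative, uncalibrated choice."""
        base = min(rtts)
        flags = []
        for i in range(0, len(rtts) - window + 1, window):
            w = rtts[i:i + window]
            flags.append(min(w) - base > thresh_ms)
        return flags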

In this presentation, I discuss our ongoing research into the nontrivial relationship between observations from the Internet edge - such as the RTTs and the drop rate of our probes - and the severity of congestion. Using a small dedicated network, we run highly controlled experiments to determine what we can, and cannot, learn from edge probing. Our work has two primary goals: (1) classifying congestion based on the bandwidth available to each TCP flow, and (2) exploring the possibility of identifying and classifying congestion for links subsequent to a congested link.

Matthew Luckie (University of Waikato) Talk Title: Alias Resolution with DNS hostnames

Talk Abstract: I will discuss the use of DNS hostnames to support alias resolution, ongoing work involving Brad Huffaker, Rob Beverly, and kc.

Ricky Mok (CAIDA/UCSD) Talk Title: QUINCE: A gamified crowdsourcing QoE assessment framework

Interested in Discussing: QoE, crowdsourcing

Roya Ensafi (University of Michigan) Talk Title: Censored Planet: Measuring Internet Censorship Globally and Continuously

Talk Abstract: Internet stakeholders such as ISPs and governments are increasingly interfering with users' online activities, through behaviors that range from censorship and surveillance to content injection, traffic throttling, and violations of net neutrality. My research aims to safeguard users from adversarial network interference by building tools to measure, understand, and defend against it. In this talk, I will present Censored Planet, a system for continuously monitoring global Internet censorship. Censored Planet uses novel measurement techniques that remotely detect instances of interference almost anywhere on the Internet. Compared to previous approaches--which relied on having volunteers in censored regions deploy special hardware or software--this results in significantly better coverage, lower costs, and reduced ethical risk. This system allows us to continuously monitor the deployment of network interference technologies, track policy changes in censoring nations, and better understand the targets of interference. Making opaque censorship practices more transparent at a global scale could help counter the proliferation of these growing restrictions to online freedom.

Interested in Discussing: Detect and defend against network interference at all layers.

John Heidemann (USC/ISI) Talk Title: Clustering Internet Outages: from Leaves to Trees

Talk Abstract: Multiple groups are looking at Internet outages with different kinds of active probing, passive observation, and combinations thereof. How do we get from individual observations to aggregate results? This talk will look at a new clustering algorithm that scales to large datasets (millions of blocks by thousands of observations) to identify clusters that respond similarly. We successfully applied it to service outages and anycast catchment changes during DDoS. We believe clustering is one step toward finding the "forest" in our trees and leaves of observations.
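As a toy illustration of the clustering idea (not the talk's algorithm, which is designed to scale far beyond this), blocks can be grouped by the similarity of their outage time series:

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, fcluster

    def cluster_blocks(outage_matrix, cutoff=0.1):
        """outage_matrix: blocks x time bins, 1 = block unresponsive.
        Groups blocks whose up/down behaviour over time is similar;
        the Hamming cutoff of 0.1 is illustrative."""
        d = pdist(np.asarray(outage_matrix), metric="hamming")
        tree = linkage(d, method="average")
        return fcluster(tree, t=cutoff, criterion="distance")

    m = [[0, 1, 1, 0], [0, 1, 1, 0], [1, 0, 0, 1]]
    print(cluster_blocks(m))   # the first two blocks land in one cluster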

Interested in Discussing: I'd love to discuss the pros and cons of research via data download vs. web-based access.

Ginga Kawaguti (NTT) Talk Title: QoE measurement in the field

Talk Abstract: When we run a crowdsourcing-based QoE experiment, the meaning of QoE measurement becomes completely different from that of an ordinary QoE experiment. However, a crowdsourcing experiment can be a novel way to figure out the "true experience" score of actual services. Discussion about crowdsourcing experiments and QoE is welcome.

Interested in Discussing: mobile(LTE) measurement, QoE measurement, video quality, web site access performance

Ya Chang (Google Inc.) Talk Title: A better schema for paris-traceroute

Talk Abstract: We have not, as a community, collectively decided on the right way to represent path data in SQL databases, and the situation gets even more confusing when we look at paris-traceroute data. In this talk, we describe some of the use cases we are interested in supporting and give a few proposed schema options to start the discussion. Because users of M-Lab's PT data will be in the room, along with the PT developers, we expect a short talk and a long Q&A and discussion to kick off the design process.
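Purely to seed that discussion, here is one possible shape, sketched with sqlite3; the table and column names are invented for illustration and are emphatically not M-Lab's chosen design:

    import sqlite3

    # One possible normalization: one row per test, one row per hop
    # observed within each paris-traceroute flow of that test.
    schema = """
    CREATE TABLE pt_test (
        test_id    TEXT PRIMARY KEY,
        src_ip     TEXT, dst_ip TEXT,
        start_time INTEGER
    );
    CREATE TABLE pt_hop (
        test_id  TEXT REFERENCES pt_test(test_id),
        flow_id  INTEGER,          -- paris-traceroute flow identifier
        ttl      INTEGER,          -- probe TTL (hop number)
        hop_ip   TEXT,             -- NULL for non-responding hops
        rtt_ms   REAL,
        PRIMARY KEY (test_id, flow_id, ttl)
    );
    """
    con = sqlite3.connect(":memory:")
    con.executescript(schema)
    # Example query: distinct next-hops at TTL 5 for one test, i.e. the
    # branching that a single-path schema cannot represent.
    con.execute("SELECT DISTINCT hop_ip FROM pt_hop WHERE test_id=? AND ttl=5",
                ("t1",))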

Peter Boothe (Google, M-Lab) Talk Title: Deriving modern metrics from historical measurements

Talk Abstract: We too often conflate the measurements of our tools and our metrics of interest. If we define our metrics of interest independently of our tools, then we might be able to reconstruct modern metrics of interest from historical measurement data. Doing so might enable work like https://blog.caida.org/best_available_data/2018/02/06/tcp-congestion-signatures/ on a broad scale, and could help enable data exchange between projects, joining historical data and modern measurements into a single cohesive record about Internet performance.

Mattijs Jonker (University of Twente) Talk Title: TBD