Talk Abstracts: The 9th Workshop on Internet Economics (WIE 2018)

This page contains names, talk abstracts (if presenting), and topic keywords for workshop participants as they were submitted. Participants are encouraged to read these ahead of time to anticipate workshop discussion.

Participant Abstracts

Dan Geer (IQT) Talk Title: Mandatory Reporting of Cybersecurity Incidents

Related Proposal: For Good Measure - Stress Analysis

1) what is the policy goal or fear you're addressing?

Failure to learn from errors increases the probability of further errors. Where those errors are not random bad luck but rather the effort of sentient opponents, recurrence of error has non-zero probability of becoming more virulent. Where pre-conditions for error can be described, failure to learn from their prevalence is just as dangerous as failing to learn from their occurrence.

2) what (specific measurements) data is needed to measure progress toward/away from this goal/fear?

As detailed in http://geer.tinho.net/ieee/ieee.sp.geer.1701.pdf, the proposal is two-fold. For errors above some threshold of seriousness, reporting to a civil authority of competent jurisdiction is mandatory. For errors below that threshold, a membership-based entity curates voluntary reports of lesser errors and shares them with contributors, and only contributors. The latter would be operated by a civil authority of competent jurisdiction distinct from the first.

3) what (specific) methods do you propose (or are) being used to gather such data?

A difficult political decision on where to set the threshold is immediately necessary. For analogy, NTSB authority over transportation system errors above a threshold of severity is well understood. Below the threshold for mandatory invocation of NTSB's powers, there is the (NASA-run) Aviation Safety Reporting System for receiving anonymized reports of near misses, which are as safety-informative as are NTSB reports. As a second analogy, three dozen communicable diseases have mandatory reporting, pre-empting an individual right to medical privacy in the name of public health. Quite obviously, there already exists one limited form of mandatory reporting in the various breach notification duties that began with California SB1386.

4) who/how should such methods be executed, and the data shared, or not shared?

The threshold for reportability begins with untoward events of a certain minimum size, where "size" means a negotiated, definitive description of how extensive or serious an event must be for its reporting to civil authority to be a lawful duty.

Digital risk is transitive, and ever the more so as interdependence increases. As such, some class of preconditions for untoward events also requires reporting. These might include levels of effort to effect intentional downside risk, the degree to which a definable class of failure is inherently silent, and an extension of the definition of merchantability.

As a result of the 2008, et seq., financial crisis, mandatory yet inherently probabilistic reporting, as embodied in the Comprehensive Capital Analysis and Review reports commonly called stress tests, became a US mandate along with similar forms in other parts of the world. They exist to shine a light on structural risks capable of igniting cascade failure without having to wait for the cascade failure to occur. The digital sphere, having more mechanisms for cascade failure than financial services, needs stress tests for its systemically important digital institutions. See http://geer.tinho.net/fgm/fgm.geer.1412.pdf for more.


Mike Lloyd (RedSeal) Talk Title: Resilience via Measurement

Talk Abstract: There are two major ways to measure the technical network security posture of individual organizations - looking at the external surface they present, or deploying measurements internally. This raises a series of questions:

  1. permission
  2. comparability
  3. standardization of scoring
  4. who defines "best practices"?

Different commercial organizations are pursuing external or internal measures. This talk will describe one such approach (from an internal measurement perspective) as framing for a discussion of the pros and cons.

Interested in Discussing: What data can effectively be gathered to assess security readiness, or defensive strength, of Internet-visible entities?
Can this data be shared?
Can metrics be standardized?

Josephine Wolff (Rochester Institute of Technology) Talk Title: Collecting Data for Assessing Computer Security "Best Practices"

Talk Abstract: There is no shortage of lists and catalogs recommending best practices for organizations and individuals looking to protect their computer networks and user accounts, but there is surprisingly little empirical evidence of how effective these practices are to reinforce the notion that they are, indeed, the best ones. For instance, companies that have failed to implement multi-factor authentication and encrypt sensitive data are routinely cited in class action lawsuits and Federal Trade Commission complaints related to data breaches for failing to adhere to industry-standard best practices. Since 2013, these two controls have all but replaced the security best practices--largely centered on password requirements and IP address filtering--that were cited in earlier legal proceedings and government investigations. That's in part due to the fact that the threat landscape has changed, and in part because we now know that some of the commonly accepted "best practices" for passwords, such as requiring that they be changed at regular intervals, did not in fact correlate with positive security outcomes (Cranor, 2016).

Despite knowing that commonly accepted wisdom does not always turn out to yield the most effective security practices, very little measurement work has been done to assess the supposed best practices for security that are now being touted in court rooms and government agencies as the standard for avoiding legal liability. In part, that's because this measurement work is often difficult--and requires bringing together data sets from multiple different parties and identifying concrete, desired security outcomes. But it's also true that we have surprisingly little data on practices like multi-factor authentication, and most of what has been studied has focused on the usability and user experience associated with these controls, rather than their effectiveness at preventing compromises (Colnago et al., 2018). For instance, Google, which has had a two-factor authentication option for users for more than a decade, has never released any data about how many people use it or whether it reduces the risk of account hijacking. (They have, however, said that since switching to physical tokens as a second factor for all of their employees they have not experienced any internal account compromises--a statement that suggests their previous employee policy, which allowed for a smartphone notification or call to serve as a second factor, did not meet that criterion.) From these occasional, voluntary statements, organizations are left to guess at how effective these systems will be even as they spend hundreds of thousands of dollars to purchase them.

The first challenge when it comes to measuring the effectiveness of security controls is figuring out what actual desired outcome--or outcomes--an organization is hoping to bring about by implementing that control. For instance, in interviews with IT teams at universities that are implementing multi-factor authentication, employees identified several different goals, ranging from: driving down the number of compromised accounts and reducing the effectiveness of phishing, to driving down the financial losses caused by compromised payroll and billing accounts, to preventing espionage directed at on-campus research efforts. The second challenge is trying to find appropriate proxies for measuring these various goals that yield meaningful (i.e., statistically significant) results. This talk will review some of this work, done in conjunction with multi-factor authentication provider Duo and several university partners, and the institutional data-sharing obstacles to trying to measure the impacts of multi-factor authentication, as well as some proposals for what better data collection might look like in this space. Though the empirical work I have done in this space mostly centers on multi-factor authentication, I will also discuss more broadly the implications of this project for trying to assess the impact of other types of security best practices and what types of data collection mechanisms might facilitate that work. Finally, I'll tie this topic back to policy--both in the form of legal cases and FTC investigations which rely heavily on vaguely defined notions of industry best practice, and also its relevance to pending legislation like the Internet of Things Cybersecurity Improvement Act, which would mandate certain security practices and controls without including any mechanism for measuring their effectiveness.
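
For concreteness, the kind of before/after comparison such a study might run can be sketched as a two-proportion z-test; the counts below are hypothetical placeholders, and real institutional data would also need to control for changes in the threat environment over the same period.

```python
# Minimal sketch: compare account-compromise rates before and after an MFA rollout
# with a two-proportion z-test. All counts are hypothetical placeholders.
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: the two compromise rates are equal."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 120 compromised accounts out of 40,000 active users the year before
# the MFA mandate, 45 out of 41,000 the year after.
z, p = two_proportion_ztest(120, 40_000, 45, 41_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```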

Jessica Colnago, Summer Devlin, Maggie Oates, Chelse Swoopes, Lujo Bauer, Lorrie Cranor, and Nicolas Christin. 2018. "It's not actually that horrible": Exploring Adoption of Two-Factor Authentication at a University. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Paper 456, 11 pages. DOI: https://doi.org/10.1145/3173574.3174030.

Lorrie Cranor, "Time to rethink mandatory password changes," Federal Trade Commission blog. March 2, 2016. Available from https://www.ftc.gov/news-events/blogs/techftc/2016/03/time-rethink-mandatory-password-changes

Interested in Discussing: How do we build measurement mechanisms into policies that mandate specific computer security controls or outcomes?

Andrew Odlyzko (University of Minnesota) Talk Title: Trying to understand the nature of the evolving ICT world

Talk Abstract: FCC Computer Inquiries were remarkably effective. This was largely because in their day it was possible to separate transmission from information processing in at least moderately clean ways. What we have today is a much more complicated world that is evolving rapidly, and becoming increasingly opaque. The public Internet, which is where so much of the open research has been concentrated, is becoming less and less important. Much of the value (although still only a small fraction of the volume) of traffic is carried by wireless companies, and we get only limited insights into their operations. Even more important, what matters to people is their total experience, and that depends increasingly on what happens inside the Cloud. This applies even to the most valuable of all functions, namely person-to-person connectivity, which is also increasingly mediated by the Cloud. As the Cloud providers (and, more visibly but less importantly, the content providers) bring their networks closer to the customer, the role of the traditional service providers, which were the focus of monitoring and regulation, is shrinking.

A first step towards getting a grasp on what is happening might be the creation of research projects that would provide, in an open, verifiable, and criticizable way, the kind of information that companies like Alexa, Cisco, Comscore, and Sandvine currently provide. Just what are people doing with their connected devices? What services are they using, what connections are they utilizing, what kind of traffic are they generating, and how valuable is it to them?

Marvin Sirbu (Carnegie Mellon University) Talk Title: Measuring internet traffic in a world of proliferating private networks, both real and virtual

Talk Abstract: The public Internet is only one of the communications services that rides over the world's communications infrastructure. Three decades ago, the Internet accounted for only a fraction of the bits carried on SONET links, with the bulk carrying voice traffic. Two decades ago, a fiber cable installed by Level 3 might have had individual strands committed as IRUs to several different ISPs; Enron became a major ISP while laying little of its own fiber. Today, a transoceanic fiber cable may have the bulk of its strands/wavelengths devoted to carrying intra-corporate traffic for large cloud services providers, with only a minority committed to ISPs that serve the larger public. In the near future, packet platform providers, especially the 5G wireless operators, will be selling numerous network slices to large groups of firms, such as the electric utility industry, or hospital associations; to resellers (think today's MVNO's); and even to individual corporate customers.

Today, we have little visibility into how much of a nation's communications infrastructure is providing "common carriage" vs "private carriage" services. If, as appears to be the case, an increasing fraction of traffic is moving off the public Internet to cloud provider internal networks, and network slicing increases the number of "non-public-internet" networks, measurement of traffic moving over Internet ISPs will represent less and less of the real communications activity--most of it still TCP/IP protocol based--that is traveling over the nation's infrastructure. Forecasting the growth of TCP/IP traffic, important for equipment vendors and infrastructure builders, requires counting both public Internet and non-public intranet traffic. Yet the latter traffic is increasingly invisible to national authorities.

From a public policy perspective, one fear is that the growth of private carriage (sometimes referred to as specialized networks) will crimp the capacity available to the public Internet, reducing its performance. How will we determine whether the proliferation of slices on a future 5G network is constraining the public Internet? While such a proliferation may advance the goals of innovation, trustworthiness and evolution, will it undermine ubiquity and generality?

The measurement challenge, as I see it, is how, in the future, to measure the full scope of TCP/IP communications activity. What is the total amount of TCP/IP traffic in the world? How is it split between public and private carriage? How do we avoid double counting traffic carried on resellers and traffic carried by facilities-based providers? How do the statistics of individual flows differ on intra-cloud networks from those of traffic on the public Internet? Are the statistics of private carriage predictive of future public Internet traffic, or are these inherently different? How do we develop an infrastructure that recognizes and can properly measure and identify sources of congestion in a virtualized network environment with many slices?

At this stage, I have questions, not answers. The underlying problem is: how do we develop a measurement infrastructure that captures the full breadth of (small "i") internet traffic? The workshop seems like an ideal venue to explore these questions.

Interested in Discussing: With the proliferation of gigabit consumer Internet access, the ratio of a user's peak to average traffic increases dramatically. What effect does this have on the frequency of network congestion and on network planning?

James Miller (Federal Communications Commission) Talk Title: A Five Year View of Internet Related Data Collection: Towards Data-Driven Policy Making for the Future

Talk Abstract: This talk addresses the concern that a new set of legislative and regulatory priorities may emerge in the coming five years, requiring a strong empirical basis for developing the new legislative and regulatory agendas. The paper proposes that data be collected using passive and active, self-reported and consumer- or third-party-discovered, and simulated and actual methods to produce a set of indexes including: a Privacy Index, a Consolidations Index, a Managed and "Internet" Traffic Index, a Prioritized Services Index, and a Domestic Traffic Index. These indices and supporting metrics will be drawn from economic relationships, engineering data, consumer use, public documents and statements, and other sources, and collated from private and public sources to produce a set of publicly shareable datasets to support the indices. The effort will likely require the combined work of academic, industry, public interest, and government actors under an academic or government banner. The result will support the critical decision making that may occur as the policy space evolves.

The history of the past ten years of Internet measurement in Internet policy reflects the conditions of consumer broadband and an early focus on congestion, advertised data rates, and management of traffic from sources competing with broadband incumbents. While these tools are likely to continue to serve valuable public policy aims, a new class of measurement questions is posed by the evolution of the digital economy. Today, data services, APIs, content management systems, and service availability raise "neutrality" questions that echo network-level neutrality concerns and remain unresolved in the current regulatory environment.

The network neutrality and Open Internet regulatory landscape has been batted back and forth through this period, with the result that none of the concerns posed by any side of the debate have been settled. The call from all sides for new statutory law to govern these and other evolving problems continues to rise. This talk will explore the measurement questions that legislators crafting new statutory law, and regulators struggling under the yoke of imperfect basic laws, may find valuable to explore.

The goal of these efforts would be to provide the empirical baselines necessary to answer questions critical to the development of new statutory and regulatory approaches that may emerge. Following the approach adopted in the MBA program, collections might rely on a core developer or internal staff team that leads public-private collaborations with stakeholders, with crowdsourcing and stakeholders providing passive and active data. The goal of such an effort would be to provide robust observational datasets that could be shared publicly to support policy analysis, conclusions and proposals.

The variety of problems in the broader digital economy that remain largely unaddressed suggests that a mix of the following datasets might be useful:

  • Privacy Index, a mix of text corpus analysis of statements by digital economy actors, processed in machine-readable and web-scraped form to produce an index of weightings of various actors (a toy sketch follows this list);
  • Consolidations Index, a view of centralization and consolidation of various technical infrastructure including virtual and physical network assets (akin to the DHS NCS reports of the past), DNS, Cloud, and other service infrastructure redundancy, and geographic colocation concentrations;
  • Managed and "Internet" Traffic Index, describing the variety and volumes of traffic carried over managed, hybrid, or traditionally globally accessible routed paths;
  • Prioritized Services Index, addressing the varieties and volumes of prioritized services on networks;
  • Domestic Traffic Index, providing a view of service breakdowns of data, search and other application layer features by source, path and other features.
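
As a toy illustration of the first of these, the sketch below scores hypothetical privacy-statement texts by the prevalence of data-sharing language; the term list, weights, and corpus are placeholders rather than a validated methodology.

```python
# Toy sketch of a keyword-based "Privacy Index": score each actor's public privacy
# statements by the prevalence of data-collection/sharing language per 1,000 words.
# The term list, weights, and corpus are placeholders, not a validated methodology.
import re

SHARING_TERMS = {"third party": 2.0, "sell": 2.0, "advertis": 1.0, "share": 1.0, "track": 1.0}

def privacy_score(text: str) -> float:
    words = re.findall(r"[a-z]+", text.lower())
    lowered = text.lower()
    weighted_mentions = sum(w * lowered.count(term) for term, w in SHARING_TERMS.items())
    return 1000 * weighted_mentions / max(len(words), 1)

corpus = {  # placeholder: would be web-scraped privacy policies and public statements
    "actor_a": "We may share your data with third party advertisers and partners.",
    "actor_b": "We do not sell personal information and collect only what we need.",
}
index = {actor: privacy_score(text) for actor, text in corpus.items()}
print(sorted(index.items(), key=lambda kv: kv[1], reverse=True))
```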

Interested in Discussing: How do we know when individual Internet "entities" are too big? What metrics would be relevant for establishing an answer to the question? What harms should be focused on? What techniques are appropriate for remedying risks and, separately, harms?

Rob Frieden (Penn State University) Talk Title: Two-sided Internet Markets and the Need to Assess Both Upstream and Downstream Impacts: Lessons from Legacy Credit Card Platforms

Talk Abstract: This paper examines how Internet ventures operate as intermediaries serving both upstream sources of content and applications as well as downstream consumers. Alibaba, Baidu, Amazon, Facebook, Google, Netflix, Tencent and other Internet "unicorns" have exploited "winner take all" networking externalities quickly accruing billion-dollar valuations. Courts and regulatory agencies acknowledge the substantial market shares these ventures have acquired, but most refrain from imposing sanctions on grounds that consumers accrue ample and immediate benefits when platform operators use upstream revenues to subsidize downstream services. Consumers also gain when intermediaries eschew short term profits to acquire greater market share and "shelf space."

Broadband platform operators have thrived in a largely deregulated marketplace with prospective regulation largely preempted by the view that consumers have benefitted without the need for government oversight. However, the court of public opinion may have begun to shift from the view that platform operators present a universally positive value proposition.

A proper assessment of consumer welfare balances downstream enhancements through convenience, cost savings, free-rider opportunities and innovation with upstream costs including the value of uncompensated consumer data collection, analysis and sale. The policy goal promoted by the paper suggests that digital platforms have the potential for both great benefits and harms. Without a proper mode of analysis and assessment, one may over- or under-estimate impacts.

The paper determines that many of the platform intermediaries most likely to harm consumers and competition have benefitted by a reluctance of government agencies to examine upstream impacts. The paper uses consumer welfare gains and losses for measuring impacts, while acknowledging the difficulty in accurate quantification. A recent Supreme Court case provides an opportunity to assess consumer costs stemming from contractual language prohibiting merchants from "steering" consumers to a credit card offering lower processing fees. One can estimate the higher cost to consumers in terms of higher product and service costs resulting from higher credit card processing fees. However, offsetting consumer benefits are more difficult to quantify, e.g., the value of credit card company provided travel advice and cardholder opportunities to buy Broadway theater tickets at face value.

The paper concludes that two-sided markets require assessments of potential competitive and consumer harm occurring on both sides. Accordingly, courts and other decision makers can use estimates of monetary impacts to consumers as a measurement of costs and benefits that should constitute part of their evaluations.

To clarify, addressing the four questions:

1) what is the policy goal or fear you're addressing?

"The paper determines that many of the platform intermediaries most likely to harm consumers and competition have benefitted by a reluctance of government agencies to examine upstream impacts." The policy goal is to improve consumer and marketplace (antitrust) safeguards. Without the upstream assessment of harm, election meddlers, trolls, predators, character assassins et al inflict damage with impunity.

The policy goal is better calibrated government oversight. The fears include threats to the rule of law, participatory democracy and the ability of consumers to capture welfare gains by paying prices below their maximum tolerance (nearly eliminated by dynamic and surge pricing).

2) what data is needed to measure progress toward/away from this goal/fear?

"The paper uses consumer welfare gains and losses for measuring impacts, while acknowledging the difficulty in accurate quantification."

I am suggesting a "follow the money" approach. For example, using the Supreme Court credit card case, consumers have to pay more for products and services because they cannot be steered to a card with a lower vendor swipe fee. There is some quantification and "accounting science" applicable here in the forensic assessment of lost consumer welfare and higher costs.
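
As a back-of-the-envelope illustration of that "follow the money" arithmetic, every number in the sketch below is a hypothetical placeholder rather than an estimate from the paper or the court record.

```python
# Back-of-the-envelope "follow the money" arithmetic. Every number here is a
# hypothetical placeholder, not an estimate from the paper or the court record.
annual_card_spending = 5_000.00      # hypothetical household card spending per year
fee_without_steering = 0.023         # 2.3% merchant swipe fee when steering is barred
fee_with_steering = 0.018            # 1.8% fee a merchant could have steered toward
pass_through_rate = 0.9              # share of fee savings merchants pass into prices

extra_consumer_cost = (fee_without_steering - fee_with_steering) * pass_through_rate * annual_card_spending
offsetting_benefits = 15.00          # hypothetical annual value of rewards/perks

print(f"Extra cost: ${extra_consumer_cost:.2f}/yr; net harm: ${extra_consumer_cost - offsetting_benefits:.2f}/yr")
```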

But on the other side of the market (the upstream side), there may be somewhat quantifiable, offsetting benefits. In the abstract, I acknowledge that the quantification process might be difficult and imprecise:

"One can estimate the higher cost to consumers in terms of higher product and service costs resulting from higher credit card processing fees. However, offsetting consumer benefits are more difficult to quantify, e.g., the value of credit card company provided travel advice and cardholder opportunities to buy Broadway theater tickets at face value."

3) what methods do you propose (or are) being used to gather such data?

"The paper uses consumer welfare gains and losses for measuring impacts, while acknowledging the difficulty in accurate quantification."

I am proposing forensic accounting to quantify consumer costs, e.g., higher prices that incorporate swipe fee costs, and consumer benefits, e.g., being able to pay face value for a Broadway play ticket instead of the higher scalped prices of the legal, gray, or black markets.

4) who/how should such methods be executed, and the data shared, or not shared?

"Accordingly, courts and other decision makers can use estimates of monetary impacts to consumers as a measurement of costs and benefits that should constitute part of their evaluations."

I am recommending that courts, regulatory agencies and other decision and policy makers conduct cost/benefit quantifications to identify net impacts to consumers and competitors.

Interested in Discussing: How do platforms and intermediaries operate in the Internet ecosystem?
How do two-sided markets operate?
Please explain the 2018 Supreme Court decision in Ohio v. American Express.

Harold Feld (Public Knowledge) Talk Title: The Primary Question of Policy and Research: Why Do We Care?

Talk Abstract: Increasingly, governments look to large social media platforms to engage in pro-active content moderation to take down and limit dissemination of content deemed harmful for a variety of reasons, such as hate speech, and to detect and disrupt efforts to organize harmful activities. Opponents of such "content moderation" policies argue that they threaten legitimate controversial speech or that such policies can be manipulated to suppress legitimate speech. Proponents of content moderation complain that social media platforms are generally insufficiently responsive to harmful content and the damage it causes. In an effort to address these concerns, companies often publish "transparency reports" detailing the number of complaints received, number of actions taken, and other statistics. Germany's recently passed NetzDG, for example, requires social media companies to take down content illegal under German law within 24 hours of notification and to publish a report detailing the number of complaints, takedowns, appeals, and reinstatements following appeals.

I argue that the information contained in the transparency reports is largely useless for determining the critical questions of public policy with regard to content moderation by platforms, i.e., do they actually work to curtail harmful speech, and if so at what cost to legitimate speech? Are these platforms subject to manipulation by those seeking to suppress speech for financial or ideological reasons? Are particular populations disproportionately impacted, or particular viewpoints more likely to be flagged as harmful? Focusing on NetzDG, I explain why the metrics demanded by these reports are accurate but irrelevant to the policy questions such laws seek to address, and what information should be collected instead.

Interested in Discussing: Both content moderation and refusal to moderate content have costs. How can we measure the appropriate trade-offs?

Shane Greenstein (Harvard Business School) Talk Title: Digital Dark Matter

Talk Abstract:

  1. what is the policy goal or fear you're addressing?

    I am interested in a pervasive mismeasurement problem in IT infrastructure. All developed economies, the US included, have poor measures of the value of activities that are unpriced. This includes "open source" activities with unpriced inputs and outputs, such as Linux, Apache, Nginx, Wikipedia, and GitHub. There are others. These are inputs into production, often without limit, and they generate gains to the user. Yet, they play no role in GDP, neither flows (e.g., revenues per year) nor stocks (e.g., wealth held by firms or households). These types of activities are particularly important for the Internet and IT infrastructure. Many standard government institutions simply ignore these activities because they cannot be summarized in economic terms, i.e., dollars and cents. What kind of problem results? For example, it results in simple oversights, such as underestimates of the value of the benefits from government R&D, which generates as an outcome an unpriced good (e.g., the HTTPd server). After several decades of this oversight, if I have a fear, it is this: I fear we have introduced many omission and attribution biases into US economic policy and US economic statistics.

  2. what data is needed to measure progress toward/away from this goal/fear?

    This is the open research question. I do not believe there is any general approach yet. Indeed, I have not observed much in the way of any general approach in any of the places one might expect to find it, such as the Bureau of Economic Analysis, the Bureau of Labor Statistics, the Census Bureau, the FCC, the NTIA, or the OECD.

  3. what methods do you propose (or are) being used to gather such data?

    One approach, which Frank Nagle and I have pioneered, is to use "near market goods" to try to estimate the value of flows and stocks. See attached papers [1, 2]. This is borrowed from methods used to estimate the value of public parks. Bill Nordhaus (the recent recipient of the economics Nobel) pioneered these methods. It is easier to explain in person. There may be other methods that could work as well or better. It is an open research question.

  4. who/how should such methods be executed, and the data shared, or not shared?

    That is an open question. There are many examples that have these traits -- no input prices, no output prices, and a major contribution to activity. There are also many examples with just some of these features -- no input prices and/or no output prices.

Interested in Discussing: Measurement of unpriced goods and services that make up important elements of the Internet ecosystem.

Scott Jordan (University of California Irvine) Talk Title: Interconnection measurement for policy and for research

Talk Abstract: Policy goal -- develop methods to illustrate when Internet interconnection agreements may be used in anti-competitive and discriminatory manners.

Research abstract - This project will construct a modern model of Internet interconnection that incorporates both the relevant technical and economic factors. The model will be used to explain and guide new forms of Internet interconnection arrangements. The project will also develop methods to illustrate when Internet interconnection agreements may be used in anti-competitive and discriminatory manners. In the development of models of Internet interconnection, the project will investigate the cost factors that should affect interconnection arrangements, including the destinations to which a network operator will route traffic, the volume of traffic, the network cost along that path, and any transit payments required. The project will also investigate the value factors that may affect interconnection arrangements, including the destinations to which an interconnection partner will route traffic, the volume of traffic, and the value of that traffic based on the application. The project will then analyze the variation of the feasible ranges of prices for paid peering with network costs, the maximum that a party is willing to pay, routing and traffic ratios, and competitive pressures.

Helpful measurements (a representational sketch follows the two lists below) -

Traffic matrices:

  1. The complete traffic matrix for an indirect connection between a content provider and an ISP, containing the traffic over a specified time period during the peak usage period, as a function of
    1. the originating content provider,
    2. the IXP at which the traffic enters a specific transit provider's network,
    3. the IXP at which the traffic enters a specific ISP's network, and
    4. the closest IXP to the destination customer.
  2. The complete traffic matrix for a direct connection between a content provider and an ISP, containing the traffic over a specified time period during the peak usage period, as a function of:
    1. the originating content provider,
    2. the IXP at which the traffic enters a specific ISP's network, and
    3. the closest IXP to the destination customer.

Performance matrices:

  1. A performance matrix for an indirect connection between a content provider and an ISP, containing the one-way average delay, over a specified time period during the peak usage period:
    1. from the border router in a specific content provider's network, through a specific IXP,
    2. passing through a specific transit provider's network, to
    3. the border router at a specific IXP in the destination's ISP's network
  2. A performance matrix for a direct connection between a content provider and an ISP, containing the one-way average delay, over a specified time period during the peak usage period:
    1. from the border router in a specific content provider's network, through a specific IXP, to
    2. the border router at a specific IXP in the destination's ISP's network
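
As an illustration, the matrices described above might be represented as mappings keyed on the listed dimensions; all IXP names, provider names, and byte/delay values below are hypothetical placeholders.

```python
# Illustrative representation of the indirect-connection matrices; all IXP, provider,
# and byte/delay values are hypothetical placeholders.
from collections import defaultdict

# Traffic matrix (indirect connection): peak-period bytes keyed by
# (content provider, IXP into the transit network, IXP into the ISP, IXP closest to destination)
indirect_traffic = defaultdict(int)
indirect_traffic[("CP-A", "IXP-Ashburn", "IXP-Chicago", "IXP-Denver")] += 3_200_000_000

# Performance matrix (indirect connection): one-way average delay in ms keyed by
# (content provider border router @ IXP, transit network, destination ISP border router @ IXP)
indirect_delay_ms = {
    ("CP-A@IXP-Ashburn", "Transit-X", "ISP-Y@IXP-Denver"): 42.5,
}

# Example query: total peak-period traffic from CP-A entering the ISP at IXP-Chicago
total_bytes = sum(v for (cp, _, isp_ixp, _), v in indirect_traffic.items()
                  if cp == "CP-A" and isp_ixp == "IXP-Chicago")
print(total_bytes, indirect_delay_ms)
```
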
Constantine Dovrolis (Georgia Institute of Technology) Talk Title: Efficient and fair paid-peering

Talk Abstract: The current framework of Internet interconnections, based on transit and settlement-free peering relations, has systemic problems that often cause peering disputes. We propose a new techno-economic interconnection framework called Nash-Peering, which is based on the principles of Nash Bargaining in game theory and economics. Nash-Peering constitutes a radical departure from current interconnection practices, providing a broader and more economically efficient set of interdomain relations. In particular, the direction of payment is not determined by the direction of traffic or by rigid customer-provider relationships but based on which AS benefits more from the interconnection. We argue that Nash-Peering can address the root cause of various types of peering disputes.
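
For concreteness, the sketch below shows the textbook Nash Bargaining split that Nash-Peering builds on, under simplifying assumptions (transferable utility, zero disagreement payoffs); it is an illustration of the principle, not the paper's formulation, and the benefit/cost numbers are hypothetical.

```python
# Textbook Nash Bargaining split of the peering surplus, under simplifying assumptions
# (transferable utility, zero disagreement payoffs). An illustration of the principle
# Nash-Peering invokes, not the paper's exact formulation.
def nash_peering_payment(net_benefit_a: float, net_benefit_b: float) -> float:
    """Return the payment from AS A to AS B (negative means B pays A).

    net_benefit_x = benefit of the interconnection to AS X minus AS X's cost of it.
    With the payment applied, both ASes end up with half of the joint surplus.
    """
    surplus = net_benefit_a + net_benefit_b
    if surplus <= 0:
        raise ValueError("no interconnection: joint surplus is not positive")
    return net_benefit_a - surplus / 2  # A pays away anything above its equal share

# Hypothetical numbers: the content-heavy AS gains 10 units net, the ISP gains 2.
print(nash_peering_payment(10.0, 2.0))  # 4.0: the AS that benefits more pays
```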

Interested in Discussing: 1) what is the policy goal or fear you are addressing? - Our goal is to address/avoid the policy disputes that are often associated with paid-peering.

2) what (specific measurements) data is needed to measure progress toward/away from this goal/fear? - Ideally, we would like to have information about how ISPs and Content Providers estimate the costs and benefits of a peering interconnection for different capacity levels and traffic volumes.

3) what (specific) methods do you propose (or are) being used to gather such data? - I doubt that most ISPs or Content Providers would make this information publicly available. However, there are non-profit ASes (such as Research Networks) that may be interested in working with us in quantifying the costs and benefits of peering interconnections.

4) who/how should such methods be executed, and the data shared, or not shared? - I think that CAIDA can play an important role in this effort, first by identifying interested parties, and second by working with those ASes in measuring (or estimating -- see Amogh's earlier work on transit cost estimation) the costs and benefits of a small number of peering interconnections. The resulting data should be made publicly available of course.

Jon Peha (Carnegie Mellon University) Talk Title: Measuring Resilience to Disasters of Telecom Systems

Talk Abstract: Every community is at risk from some type of disaster, whether it's a hurricane, earthquake, tornado, or act of terrorism. When this happens, functioning telephone, cellular and Internet services can save lives. People use their phones to call 911 for help and to reconnect with missing family members. They use the Internet to find emergency shelters and evacuation routes. We need resilient telecommunications systems that can operate to the extent possible during a disaster, and that can be restored quickly after a disaster. Effective disaster response requires knowledge of what is working and what isn't. Moreover, sufficiently resilient systems will only emerge if market forces and/or government regulations provide incentives to telecommunications service providers, which is extremely unlikely as long as customers and policymakers have little understanding of just how resilient today's systems are.

Some telecommunications providers do make reports to government agencies such as the Federal Communications Commission (FCC), but some of that reporting is voluntary. The FCC releases reports that aggregate the limited information that the agency has, further reducing what the public can learn about how well telecom infrastructure withstood the disaster. Perhaps worst of all, the outage information presented in official government reports can simply be wrong, as I observed in my work on telecom in Puerto Rico after Hurricane Maria [1]. Misinformation can be worse than no information.

We need to do a better job of measuring and reporting outage information after disasters for every provider of telecommunications services. This information could include

  • the number of subscribers who lost service,
  • the specific services that were down, if not all (e.g. phone vs. text vs. Internet access),
  • whether quality of service was severely degraded (e.g. very low data rates for Internet, or very high call termination rates for telephone),
  • the geographic areas in which service was lost,
  • the geographic areas in which service was severely degraded,
  • the time until service was restored for different subscriber groups and different regions,
  • the applications and content blocked or throttled in response to the disaster,
  • any failures that specifically affected critical applications, such as 911 and emergency alerts.

Ideally, this data should come in part through self-reporting, and in part through third parties, as neither source is sufficient on its own. FCC regulations should mandate that outage data be reported to the FCC within 60 days of a disaster, and operators should be encouraged to provide this data in real time to the extent possible, with the ability to subsequently amend without penalty within the 60 day window. Other organizations in industry and academia with the capability to provide useful measurements should be encouraged to report results to the FCC.

Tony Tauber (Comcast) Talk Title: Measuring the Performance Implications and Centralization Risks of DNS over HTTPS (DoH)

Talk Abstract: For several years, academics have studied the effects of greater centralization of Internet traffic to an ever-smaller set of players, from CDNs to edge providers. This trend has influenced everything from the measurement and understanding of interconnection to data collection and privacy debates. In response, people like Vint Cerf, Tim Berners-Lee, and Brewster Kahle, have worked to counter this architectural trend by championing the concept of "re-decentralizing the Web". In the midst of this debate over the future architectural direction of the Internet and the structure of the Internet ecosystem, a new standard called DNS over HTTPS (DoH) has been developed.

DoH, if implemented as some browser developers have suggested, would dramatically centralize the most widely distributed protocol on the Internet -- DNS -- concentrating all recursive DNS query traffic on three or four global mega platform providers. This effort is continuing apace with little apparent appreciation of the broad security and stability risks to the DNS and Internet at large -- including from state-level surveillance and malicious attack. In addition, the performance measurements conducted to date to assess such a dramatic change have been limited in scope and scale, suffer from measurement design and experiment deficiencies, and fail to properly capture the real user-centric measure of how quickly users get access to the underlying content. These measurements have also only focused on performance, and have not seemed to focus meaningfully on the potential long tail of broken DNS-related functionality (such as enterprise split DNS).
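
As a crude illustration of why measurement design matters here, the sketch below (assuming Python with the requests package) times a single lookup through the system resolver and through Cloudflare's public DoH JSON endpoint; note that a cold HTTPS connection folds TLS and HTTP setup into the DoH number, and neither number reflects time-to-content for a full page, which is exactly the kind of pitfall a proper user-centric experiment has to control for.

```python
# Naive timing of one lookup via the system resolver versus Cloudflare's public
# DoH JSON endpoint. Deliberately simplistic: the DoH number includes TLS/HTTP
# connection setup, and neither number reflects time-to-content for a full page.
import socket
import time

import requests  # third-party: pip install requests

NAME = "example.com"

t0 = time.perf_counter()
socket.gethostbyname(NAME)  # system resolver, typically Do53 to the ISP resolver
do53_ms = (time.perf_counter() - t0) * 1000

t0 = time.perf_counter()
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": NAME, "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
resp.raise_for_status()
doh_ms = (time.perf_counter() - t0) * 1000  # includes cold HTTPS connection setup

print(f"system resolver: {do53_ms:.1f} ms, DoH (cold connection): {doh_ms:.1f} ms")
```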

This presentation will:

  1. Educate attendees about what DoH is, how it works, and the problem it was designed to address
  2. Explain the likely manner that DoH will be operationally deployed
  3. Outline the security, stability, and privacy risks inherent in DoH
  4. Review measurements conducted to date, and the deficiencies of those measurement efforts
  5. Explain the need for a true user-centric measurement, and outline how measurement experiments may be designed and conducted

Geoff Huston (APNIC) Talk Title: Why don't we have a Secure and Trusted Inter-Domain Routing System?

Talk Abstract: The topic of how to operate a secure and stable routing system in the Internet is about as old as the Internet itself. Why have the various efforts to introduce trust and integrity in the Internet's inter-AS routing domain been so unsuccessful?

It is unclear whether there are technical solutions to this problem that would obviate the need for some form of overriding regulatory constraint, or whether we need to impose constraints on network operators that incent them to adopt more careful routing practices through some form of penalty or sanction for poor routing behaviour. So the goal is to understand whether there are natural incentives for "good" routing practices or whether some form of external intervention is necessary.

We need better data on the extent to which the routing system is being abused, both deliberately and inadvertently. The current BGP incident measurement frameworks are piecemeal in nature and are not well instrumented to show a larger picture of routing abnormality.

There are only two public large scale route collectors and both appear to be somewhat atrophied at present (Route Views and the RIPE NCC RIS project). They are constrained by attempting to be both a long term data archive and a real time data feed. We probably need to look at a larger system that collects a larger set of eBGP viewpoints and dispense with long term data archival if we want to gain a larger view of the entire Internet. There is no systematic collection and archival of route registry data, nor of ROA generation, and it may be a fruitful direction of investigation to look at the correlation between the routing system and the ROA set against the operation of the BGP system to understand the potential effectiveness of any proposed mitigation.
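
As one concrete starting point, the sketch below (assuming CAIDA's pybgpstream v2 Python bindings, which front BGPStream access to Route Views and RIS data) pulls a short window of updates from two public collectors and flags prefixes announced by more than one origin AS; a fuller system would run continuously and correlate against ROA and route-registry snapshots as suggested above.

```python
# Sketch: pull a ten-minute window of eBGP updates from two public collectors and
# flag multi-origin (MOAS) prefixes, assuming CAIDA's pybgpstream v2 is installed.
import pybgpstream

stream = pybgpstream.BGPStream(
    from_time="2018-12-01 00:00:00",
    until_time="2018-12-01 00:10:00",
    collectors=["route-views2", "rrc00"],   # a Route Views and a RIPE RIS collector
    record_type="updates",
)

origins = {}  # prefix -> set of origin ASes seen announcing it
for elem in stream:
    if elem.type != "A":                    # announcements only
        continue
    prefix = elem.fields["prefix"]
    as_path = elem.fields.get("as-path", "").split()
    if as_path:
        origins.setdefault(prefix, set()).add(as_path[-1])

# Prefixes with more than one origin AS in the window: a possible hijack/leak signal
print({p: o for p, o in origins.items() if len(o) > 1})
```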

It is necessary that such a data collection be maintained as an open collection so that the various methods for anomaly detection and mitigation can be tested by any interested researcher, in a manner similar to the existing BGP-only data sets.

David Reed (CU Boulder) Talk Title: Comparison of 477 and California Broadband Map Data

Talk Abstract: As noted by Clark and Claffy (2015), the shaping of the physical layer has depended upon encouraging the presence of multiple broadband providers in any given area in order to realize the benefits of facilities-based competition. Websites describing broadband maps have been established by federal and state governments as a means to measure and communicate the number of broadband providers across geographic regions. Limitations in the data collection process, however, compromise the accuracy of these maps. In particular, the use of the census block as the smallest service area for reporting in the broadband maps can lead to over-estimating the number of providers competing against each other in the census block, since it is often the case, particularly in rural areas, that service providers only provide service to a fraction of the census block area. This situation can also lead to inaccuracies in reported broadband speed coverage.

The difference in collection processes between the federal government and the states provides an opportunity to better understand the impact of using census block level data for broadband decisions. For the federal government, the Federal Communications Commission (FCC) uses Form 477. At the physical layer, the FCC collects broadband data at the census block level that identifies the broadband provider and, among other things, the service speed and access technology used (e.g., fiber, DOCSIS by version, DSL, fixed wireless).

At the state level, California also collects broadband data from providers in the state. The California Public Utilities Commission (CPUC) gathers the same data as the FCC does on Form 477. An important additional step taken by California, however, is verification of the coverage of fixed wireless providers using propagation models (using EDX software) to estimate service coverage from submitted tower, antenna, and radio information. Based upon the results of these propagation models, it is possible to see the degree of coverage provided by each fixed wireless service provider throughout the census block, as well as the overlap in coverage between fixed wireless providers within the census block. This more local coverage data should indicate the degree to which variations in competition or speed occur when looking at more granular data below the census block level.
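
As a sketch of the kind of computation involved, the snippet below (assuming geopandas, with hypothetical file and column names) intersects census-block polygons with modeled fixed-wireless coverage polygons to estimate the fraction of each block each provider actually covers.

```python
# Sketch of the coverage-fraction computation, assuming geopandas is installed and
# that census-block and EDX-modeled coverage polygons exist as local shapefiles
# with the (hypothetical) column names shown.
import geopandas as gpd

blocks = gpd.read_file("census_blocks.shp")    # columns: GEOID, geometry
coverage = gpd.read_file("fw_coverage.shp")    # columns: provider, geometry

# Project to an equal-area CRS so polygon .area is meaningful
blocks = blocks.to_crs("EPSG:5070")
coverage = coverage.to_crs("EPSG:5070")

overlap = gpd.overlay(blocks, coverage, how="intersection")
overlap["covered_area"] = overlap.geometry.area
block_area = blocks.set_index("GEOID").geometry.area

covered = overlap.groupby(["GEOID", "provider"])["covered_area"].sum().reset_index()
covered["coverage_fraction"] = covered["covered_area"] / covered["GEOID"].map(block_area)

# Blocks where a provider's modeled footprint covers only part of the block
print(covered[covered["coverage_fraction"] < 0.5].head())
```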

This analysis is in its early stages. By the WIE workshop in December, there should be some early data to discuss (based upon analysis of a small number of census blocks). Key questions or hypotheses for the research to address include:

  1. Is competition in rural areas being over-estimated due to very large census blocks and relatively small coverage areas of fixed wireless providers?
  2. Is the number of broadband subscribers meeting certain broadband thresholds also being overestimated in rural areas within these large census blocks?
  3. Are there a number of fixed wireless providers in higher density suburban and urban census blocks, and if so, what is the impact on speed and number of providers in these locations?
  4. What alternatives are there for broadband data collection to address any shortcomings revealed by the data analysis? For example, what would be the costs and benefits of street-level data collection, and could increasing use of GIS data make this more feasible?
  5. Given the increasing reliance of society upon broadband on the one hand, and the increasing maturity of broadband deployment and the proprietary nature of deployment footprints on the other, how long does collection of this data make sense?

If accepted, part of the motivation for this discussion will be to solicit feedback on ideas to help sharpen the analysis, as well as suggestions for additional analysis using this unique data.

(Note I am collaborating with David Espinoza, CSU Chico, on this research)

Henning Schulzrinne (Columbia University) Talk Title: Broadband Deployment Data - Moving Beyond Form 477

Talk Abstract: Almost all high-income countries (and many others) identify universal broadband Internet access as a high-priority economic development goal. For example, the United States has been pursuing universal service for broadband, primarily in high-cost rural areas, for more than a decade. However, evaluating the state of deployment, affordability and the efficiency and effectiveness of subsidy programs is difficult. For example, some suspect that subsidies are primarily used to build out in places that would get broadband anyway, even without subsidies, or that they establish service that mainly locks out other providers that might offer cheaper and faster service. For the US, the principal means of data collection, via the FCC Form 477, has numerous limitations that are by now well-known: the data is self-reported by carriers, a single served location makes the whole census block appear served, systems may not have the capacity to serve additional locations in a block, and the speed tiers reported often do not correspond to actual speeds, particularly for DSL.

Interested in Discussing: broadband availability and actual speed; economic measures (pricing)

Sascha Meinrath (Penn State University) Talk Title: Mapping Broadband in PA: An Overview

Talk Abstract: Policy goals: To improve broadband access and mitigate digital divides for underserved populations and rural areas. To increase the rationality of policy and investment decisions aiming at improving Internet connectivity by generating accurate and granular information about the technical and socio-demographic conditions of broadband access.

Data needed: Effective policies will require better data on the technical and socio-demographic conditions of access. Current official broadband data collection efforts use procedures and standards that often yield inaccurate results vis-a-vis on-the-ground conditions. Deficits exist in both the granularity of measurement (e.g., the use of data such as FCC Form 477 filings) and the over-reliance on Internet Service Provider (ISP) self-reporting as the sole source of these official measures (usually without any independent verification or spot-checking for accuracy). Policy makers seeking to address pressing problems of insufficient access or matters such as the homework gap will need information beyond connectivity metrics, such as the number of school-aged children in a household, types of equipment/services/applications used, prices, and topological/climatic considerations for broadband service options. Not only will such granular data allow the state of broadband to be assessed more accurately, it will also make it possible to design better interventions and to assess their effectiveness more systematically.

Methods of data collection: This information is partially available from projects such as M-Lab and the Census data collected every ten years. Linking Census data stores with broadband accessibility and speed data is cumbersome and time-consuming (and often out of date within the first few years of that 10-year cycle). Better and more effective methodologies are needed to overcome the lack of accurate data at both the network level and the level of associated socio-demographic information. Several initiatives have started to experiment with novel ways of generating more accurate data, including SpeedUp Louisville, a pilot for the State of Pennsylvania, and a pilot for the State of Michigan. All these projects have in common that they combine network-harvested data with surveys based on web applications. Our team is building on these initiatives and is developing a standardized, rigorous, and scientifically sound approach to combining network and survey data.
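
As a minimal illustration of combining the two data sources, the sketch below (pandas, with hypothetical file and column names) joins app-harvested speed tests to survey responses by household and summarizes them by census tract.

```python
# Minimal sketch of joining app-harvested measurements to survey responses.
# File and column names are placeholders for whatever the pilots actually collect.
import pandas as pd

speed = pd.read_csv("speed_tests.csv")   # household_id, timestamp, download_mbps, upload_mbps
survey = pd.read_csv("survey.csv")       # household_id, census_tract, school_age_children, monthly_price

merged = speed.merge(survey, on="household_id", how="inner")

# Example question: median measured download speed, by tract, for households with school-age children
summary = (merged[merged["school_age_children"] > 0]
           .groupby("census_tract")["download_mbps"]
           .median())
print(summary.head())
```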

Implementation: We envision data collection via user-friendly apps and complementary surveys to obtain data from non-connected individuals and households. A first generation of apps is deployed in the field. We are working on a second generation that will refine and further standardize the collection of socio-demographic data. Particularly interesting are types of access, pricing of access, the number of school-aged children in a household, and household income. We propose to make the collected information available in the public domain and to visualize these data in ways that will help decision-makers in public, non-profit, and commercial organizations. Privacy and confidentiality issues need to be managed in appropriate ways, but the utility of these decision-making resources is substantial. The data from these initiatives should be shared publicly, as far as possible given privacy concerns. We will share the details of the data-collection approaches currently being utilized and illustrate how they can be used to answer pressing social scientific and policy questions, with the goal of illustrating current efforts, sharing existing tools and methodologies, and soliciting feedback to improve future research endeavors.

Interested in Discussing: How can we create a best-practice, standardized method and instrumentation to collect reliable access and use information in an environment offering an increasing diversity of services and service plans?

Richard Clarke (AT&T) Talk Title: Improving FCC Data Reporting and Mapping to Address Rural Broadband Deficiencies

Talk Abstract: Getting the Full Picture with a Cooperative Address-Based Broadband Deployment Database

  • We need better tools to identify and then remedy the lack of broadband in rural areas.
  • Mapping where broadband has been deployed does not provide actionable intelligence on specific unserved locations. Maps show areas without broadband as empty spaces with little to no data on where any homes or businesses within these areas are actually located.
  • It is critical to also have detailed data on the locations of homes and businesses in unserved areas in order to accurately estimate the cost of deployment, design efficient networks, and assess when adequate deployment has been achieved.
  • The quality and completeness of commercially available location data for rural areas lags dramatically behind what is available for urban and suburban areas.
  • How can we improve the data for these areas? AT&T has proposed a cooperative effort between broadband providers, government agencies, and the public to create a comprehensive, foundational database of all addresses in the country. Once this database is complete, wireline and wireless broadband providers would overlay the addresses with data on where they provide service.
  • The result would be an address-based broadband deployment data tool that enables the FCC and other policy makers to accurately target and direct funding to the communities and locations that do not have broadband.
  • Creating this tool will take a four-step cooperative process:
  • Step 1: Leverage the reach of the public telecommunications network by asking all Form 477 filers to submit the location addresses of all their current and, when possible, former customers. No service information would be provided at this point; just addresses. In addition, draw in any publicly available address database sources (e.g., National Address Database), as well as other cooperative entities (e.g., state NG911 databases)
  • Step 2: To ensure efficiency and consistency, a centralized entity, such as the FCC or its vendor, would conform submitted addresses and remove duplicates before using a single geocoding methodology to assign latitudes and longitudes to each address (a minimal sketch of this step follows this list).
  • Step 3: Opening the database to crowdsourcing would enable unserved consumers and local entities to add missing addresses and refine the geocoding.
  • Step 4: Both wireline and wireless broadband providers would submit data to overlay the address database/map with information about where they have deployed broadband, by speed and technology.
  • Creating this unprecedented address-based broadband database is a significant undertaking but it can be done. With a fully cooperative effort it is estimated that it would take approximately 18 months to implement.
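
As a minimal illustration of Step 2 above, the sketch below conforms and de-duplicates submitted addresses before geocoding; the normalization rules and input format are placeholders, and a real pipeline would apply USPS-style standardization and a single consistent geocoder.

```python
# Sketch of the Step 2 conform-and-deduplicate pass. The normalization rules and
# input format are placeholders; a real pipeline would apply USPS-style address
# standardization and then hand each unique address to a single geocoder.
import csv
import re

ABBREV = {"street": "st", "avenue": "ave", "road": "rd", "drive": "dr", "north": "n", "south": "s"}

def conform(addr: str) -> str:
    addr = re.sub(r"[.,#]", " ", addr.lower())
    return " ".join(ABBREV.get(word, word) for word in addr.split())

unique_addresses = set()
with open("submitted_addresses.csv", newline="") as f:   # hypothetical file, one "address" column
    for row in csv.DictReader(f):
        unique_addresses.add(conform(row["address"]))

# Each conformed, de-duplicated address would then go to the chosen geocoding service
print(f"{len(unique_addresses)} unique addresses ready for geocoding")
```
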
William Lehr (MIT CSAIL) Talk Title: Universal Service, Mobile Broadband and Reverse Auctions

Talk Abstract: The future of broadband is mobile provided via converged networks. Historically, universal service policies have focused first on ensuring access to fixed broadband services, and only more recently have recognized that support for mobile broadband (in the form of 3G/4G) is also important. In a 5G world, separately subsidizing legacy fixed or cellular broadband services will no longer make sense and, in any case, is inconsistent with the goal of transitioning to technology-neutral regulation.

The policy goal this talk focuses on is the challenge of developing sound universal service subsidy mechanisms for broadband that aspire to technical/service neutrality with respect to how broadband is provided. The fear is that public subsidies will be directed in ways that either duplicate or crowd out private sector investments in broadband. The best current data on broadband availability is the FCC's 477 data, which is compiled from service providers' mandatory, twice-yearly reporting. Service providers are required to report, by Census Block, where they offer service to one or more subscribers by service (speed) tier for fixed broadband, and by service grid zone by service (speed) tier for mobile broadband. This data is compiled by the FCC to identify the mobile grid areas and Census Blocks where different mobile and fixed providers provide service.

Accurate data on where service is available is needed to appropriately target scarce public funds. The 477 broadband data is known to be problematic, offering an incomplete and imprecise picture of the status of broadband availability. Assessing the availability of mobile broadband service, and how it compares with fixed broadband availability, is especially challenging.

In the U.S., universal service subsidy programs total $8.7 billion per year. Of that, $4.5 billion is directed to high-cost support, and of that, $500 million is targeted at expanding access to 4G mobile broadband services under the Mobility Fund Phase II (MF-II) program. As of December 2018, to implement MF-II, the FCC is engaged in a complex process to identify the service areas that will be eligible for funding and to run the reverse auction through which the funds will be assigned to qualified providers. The 477 data on service availability is known to overstate the locations where broadband service is available. To enhance this data's ability to identify qualifying areas lacking mobile broadband coverage, the FCC instituted a challenge process through which certain interested parties could submit measurement data demonstrating a lack of available mobile broadband service in areas that the 477 data indicates are already served by one or more unsubsidized providers. Identifying zones for additional subsidies is contentious: failing to qualify a zone may deny funding to providers and communities without adequate coverage, whereas falsely identifying a zone as unserved could result in subsidized competition that threatens the economic viability of providers already serving those communities.

The MF-II process offers an interesting opportunity to study the design of effective universal service policy programs. First, most economists regard a reverse auction as the preferred mechanism for allocating universal service subsidies. The first such auction was used for Mobility Fund Phase I, which took place in 2012, and in October 2018 the FCC completed a reverse auction for fixed broadband subsidies. Studying the MF-II process therefore provides a good opportunity to better understand the design challenges of implementing universal service support for broadband.
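The intuition behind a reverse subsidy auction can be sketched in a few lines: providers bid the subsidy they would require to serve an eligible area, and the lowest bids win until the budget is exhausted. This is purely illustrative; the FCC's actual MF-II auction format is multi-round and considerably more complex.

```python
# Illustrative single-round reverse auction: award the cheapest subsidy bids
# until the budget runs out. Not the FCC's actual MF-II auction design.
def allocate_subsidies(bids: dict[str, float], budget: float) -> dict[str, float]:
    """bids maps an eligible area to the lowest subsidy bid received for that area."""
    awards: dict[str, float] = {}
    for area, bid in sorted(bids.items(), key=lambda item: item[1]):  # cheapest first
        if bid <= budget:
            awards[area] = bid
            budget -= bid
    return awards

# Example: three eligible areas competing for a $500k budget.
print(allocate_subsidies({"area-1": 300_000, "area-2": 250_000, "area-3": 200_000},
                         budget=500_000))
# -> {'area-3': 200000, 'area-2': 250000}
```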

Second, the challenge process represents a key mechanism for validating provider-submitted data. It highlights the broadband performance measurement community's larger need for timely, comprehensive, and granular data on broadband availability and performance: data that would let analysts and policymakers assess availability by quality (speed tier) over arbitrary geographic boundaries, for fixed and mobile broadband on a comparable basis. Ideally, standards and infrastructure would be in place for edge-based collection of performance measurements that identify the provider and service tier and report basic performance metrics such as speed, latency, error rates, and reliability. Today, this data either does not exist or is scattered across inconsistent and non-interoperable formats and sources. Better tools for integrating the measurement data are needed to make better use of the data sets that are available.
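One possible shape for such an interoperable, edge-collected measurement record is sketched below. The field names are assumptions chosen to match the metrics listed above, not an existing standard.

```python
# Hypothetical record format for edge-based broadband performance measurements.
# Records like this could be aggregated over arbitrary geographic boundaries
# for fixed and mobile service on a comparable basis.
from dataclasses import dataclass

@dataclass
class BroadbandMeasurement:
    timestamp: str            # ISO 8601, UTC
    provider: str             # ISP name or ASN
    access_type: str          # "fixed" or "mobile"
    service_tier_mbps: float  # advertised (subscribed) speed tier
    download_mbps: float
    upload_mbps: float
    latency_ms: float
    packet_loss_pct: float
    location: tuple           # coarse (lat, lon) or a geographic unit identifier
```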

Third, aggregating mobile broadband performance measurements poses a range of difficult challenges that Bauer et al. (2018, 2016, 2015, 2010) have highlighted.[1]

Reviewing the MF-II challenge process and its data, and comparing them with other sources such as the Ookla or M-Lab performance measurement data, would offer an interesting opportunity to stress-test both the MF-II 477 data and the challenge-process measurements. Additionally, several stakeholders have criticized and challenged the MF-II process as overly restrictive, excessively complex, and potentially biased. Reviewing the success of the MF-II challenge should therefore provide useful insights into the implications of universal service subsidies for broadband competition and its evolution.

The results of the proposed analysis should assist in promoting our collective understanding of how to collect and aggregate broadband performance data, and the strategic and technical challenges of using granular performance data for evidence-based policymaking.

[1] See Bauer, S. and W. Lehr (2018), "Measuring Mobile Broadband Performance," TPRC46: The 46th Research Conference on Communication, Information, and Internet Policy, 2018, available at https://ssrn.com/abstract=3138610; Bauer, S., W. Lehr, and M. Mou (2016), "Improving the Measurement and Analysis of Gigabit Broadband Networks," TPRC44, September 2016, Alexandria, VA, available at https://ssrn.com/abstract=2757050 or http://dx.doi.org/10.2139/ssrn.2757050; Bauer, S., W. Lehr, and S. Hung (2015), "Gigabit Broadband, Interconnection Propositions, and the Challenge of Managing Expectations," TPRC2015, Alexandria, VA, September 2015, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2586805; and Bauer, S., D. Clark, and W. Lehr (2010), "Understanding Broadband Speed Measurements," 38th Research Conference on Communication, Information, and Internet Policy (TPRC), October 2010, Arlington, VA, available at http://ssrn.com/abstract=1988332.

Interested in Discussing: I am interested in most of the talks that are listed in the agenda. Some areas that do not appear to be addressed include:

  • techniques for estimating economic impacts using new analytic techniques and datasets enabled by Big Data/machine learning
  • implications of blockchain/e2e encryption for regulatory enforcement (e.g., antitrust)
  • special opportunities for collaboration in empirical research between engineers and economists (looking for collaborators...)

Christopher Yoo (University of Pennsylvania) Talk Title: 1 World Connected

Talk Abstract: Of the seven billion people in the world, only half are connected to the Internet, and adoption rates are slowing down. Even in developed countries such as the U.S., the FCC's annual Section 706 report emphasizes that substantial rural and tribal areas remain unserved and underserved. Many countries are engaging in innovative approaches to connecting more people to the Internet. The problem is that no one is studying these efforts in a systematic empirical way that permits comparisons across projects. Even when such data exists, international organizations such as the ITU are politically unable to conduct critical analysis of which practices are working and which ones are not.

In addition, the lack of empirical studies measuring the impact that connectivity has on development goals such as economic growth, health care, education, food security, financial inclusion, and gender empowerment is hampering efforts to build political support outside of communications ministries.

The lack of an empirical foundation is inhibiting the mobilization of political support within countries. It is also inhibiting investments by development banks and other members of the international finance community despite their expressed interest in investing in infrastructure. Filling this gap could also unlock investments from the social impact investment community.

1 World Connected is a research project that seeks to fill these gaps in the following ways:

  • Develop a database of innovative attempts to connect more people to the Internet - This database covers more than 1,000 projects, spanning both supply-side and demand-side initiatives.
  • Engage in systematic analysis to identify trends in this data.
  • Reach out to all of these projects to conduct structured case-study interviews to get some measure of cost effectiveness - This approach has already yielded 120 case studies.
  • Conduct studies in the U.S. to evaluate what is working - Ongoing work is studying municipal fiber and fixed wireless builds.
  • Analyze projects for which we have full financial data - Cases include Wi-Fi in India, fixed wireless in the U.S., Wi-Fi in Pakistan, and Li-Fi in the Ivory Coast.
  • Conduct controlled trials to measure the connection between connectivity and development goals - Ongoing trials employ a difference-in-difference methodology in Rwanda (economic growth, education), Vanuatu (health care, education), and Nepal (health care); a minimal illustration of the estimator follows this list.
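As referenced above, the basic difference-in-differences estimate is the change in the treated group's mean outcome minus the change in the control group's. The sketch below shows only that arithmetic; real analyses of these trials would add covariates, clustered standard errors, and parallel-trends checks.

```python
# Minimal difference-in-differences estimator: (treated change) - (control change).
from statistics import mean

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Each argument is a list of outcome values (e.g., household income)."""
    return (mean(treated_post) - mean(treated_pre)) - (mean(control_post) - mean(control_pre))

# Hypothetical example: connected villages vs. comparison villages.
print(diff_in_diff(treated_pre=[10, 12], treated_post=[15, 17],
                   control_pre=[11, 11], control_post=[12, 12]))  # -> 4
```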

The insights can be used to refine business models, target universal service and corporate social responsibility funding, and mobilize capital investments. Once the analysis is complete, it will be publicized in international venues around the world.

Achilles Petras (BT Applied Research) Talk Title: Fixed Wireless Access QoE Measurement Challenges

Talk Abstract: In September 2018, BT responded to Ofcom's call for expressions of interest to become a Universal Service Provider. In summary, BT proposes a combination of Fixed Wireless Access (FWA) services and a proactive, targeted extension of fixed networks to provide a universal broadband solution to the vast majority of the 600k premises whose access would not otherwise meet Ofcom's requirements by 2020. The challenge is to capture the performance of the more variable service provided by FWA with metrics that are robust enough to compare with the service provided by fixed broadband. This requires identifying suitable monitoring methods and collecting a statistically sufficient set of measurements to demonstrate, to both end users and the regulator, that FWA delivers the expected experience.
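One way to make a variable FWA service comparable with fixed broadband is to summarize repeated speed samples with percentile-based metrics rather than single averages. The sketch below is illustrative only; the thresholds and sample handling are assumptions, not Ofcom requirements or BT's method.

```python
# Illustrative robust throughput summary for a variable FWA connection.
# Thresholds and metric names are placeholders for discussion purposes.
def summarize_throughput(samples_mbps: list[float]) -> dict[str, float]:
    """samples_mbps: repeated speed-test results for one premises over time."""
    ordered = sorted(samples_mbps)

    def percentile(p: float) -> float:
        idx = min(int(p * len(ordered)), len(ordered) - 1)
        return ordered[idx]

    return {
        "median_mbps": percentile(0.50),
        "p10_mbps": percentile(0.10),          # speed exceeded roughly 90% of the time
        "share_above_10mbps": sum(s >= 10 for s in ordered) / len(ordered),
    }
```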

Mark Johnson (University of North Carolina) and Anita Nikolich (Illinois Institute of Technology) Talk Title: The Post Bandwidth Era: A Label for Internet Goodness

Talk Abstract: Submitted jointly by Mark Johnson and Anita Nikolich (anikolich@iit.edu)

Policy makers want public accessibility to "good" Internet, but how do they measure what that actually means across a wide range of users in varying parts of the country when no reliable map of goodness exists? Is the FCC national broadband standard of 25 Mbps/3 Mbps good enough? Is the SETDA per-student bandwidth target good enough? ISPs are able to deliver high-speed broadband (up to 10G in some cases) to residential users, but availability varies widely, performance measurement is opaque to consumers and regulators, and accurate locations of infrastructure are often hard to come by. We argue that the concept of broadband quality needs re-examination. A more holistic view of "good" would capture everything that needs to be known to craft better telecom policies and inform Federal and regional funding initiatives.

The technical community tends to treat last-mile speed as the measure of goodness, but this makes assumptions about how the upstream network is engineered, where and how physical interconnects are made, and the location of the physical infrastructure on a microscale. Metrics of goodness need to include inputs that may also be non-quantitative in nature, such as end-user productivity, connection security, and privacy of user data. A Post Bandwidth world requires meaningful definitions of goodness that are understandable by policy makers and consumers. Part of our duty as a technical community is to tell our story better and communicate in a more meaningful way that goes beyond just charts and graphs. We'll sum up our talk by urging the community to become better storytellers in order to get our message across.

Interested in Discussing: How do we get reliable, independent, open measurement of Internet performance at scale? How do we describe Internet access quality in a way that is compelling to policy makers?

Steven Bauer (Massachusetts Institute of Technology) Talk Title: Sustainable Internet Measurement Infrastructures

Talk Abstract: 1) What is the policy goal or fear you're addressing?

  • Fear: The struggle to maintain network measurement efforts in both wired and wireless domains.
  • Fear: The Internet going dark to third-party measurements.
  • Fear: A future in which only Google, Akamai, ISPs, or NSF-funded efforts can measure.

2) What data is needed to measure progress toward/away from this goal fear?

We should test whether decentralized measurements could be incentivized in ways analogous to how other decentralized systems are being incentivized and built. We should also begin collecting technical and economic data on these new decentralized systems.

3) What methods do you propose (or are) being used to gather such data?

All of our measurement systems currently rely upon cooperation, corporations, or confiscation (i.e., tax support). How else might we incent the funding, adoption, use, and longevity of measurement infrastructure? In trying to answer that question, I have been pondering, on and off for the past year, what proof-of-measurement could mean. Only recently have I found concrete technical ideas that I think are worthy of discussion and testing.
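As a purely speculative reading of the term, and not the concrete ideas referred to above, "proof-of-measurement" might start from measurement records that are cryptographically bound to the party that produced them and chained together so third parties can audit who measured what and when.

```python
# Speculative sketch: a signed, hash-chained measurement record as one possible
# building block for "proof-of-measurement". This is an assumption about the term,
# not the author's design; the signature is treated as an opaque input here.
import hashlib
import json

def measurement_record(prev_hash: str, measurer_id: str, result: dict, signature: str) -> dict:
    """result must be JSON-serializable (e.g., speed, latency, timestamp fields)."""
    body = {"prev": prev_hash, "measurer": measurer_id, "result": result, "sig": signature}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body
```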

4) Who/how should such methods be executed, and the data shared, or not shared?

These are the central questions: how to design incentives for the early adopters of decentralized measurement roles and then, over the long term, how to incent other actors to continue playing their measurement roles.
