|Johannes Bauer (Michigan State University)||Talk Title: Designing Adaptive Regulation
Talk Abstract: During the past decade, the notion of adaptive regulation has gained traction. Early contributions (e.g., Cherry & Bauer 2004, Whitt 2007) suggested basic principles of adaptive policy-making yet remained at a rather generic level. Building on these earlier works, I would like to reassess the case for adaptive regulation in light of recent developments in the Internet and in Internet governance, the metrics needed to carry out adaptive regulation, and whether some metrics could be harvested directly from the Internet. I will also explore how adaptive regulation could be implemented in the US legal and regulatory system and whether such an approach could be scaled internationally.
Interested in Discussing: How we can model and identify acceptable and unacceptable actions and strategies of players.
|Steve Bauer (MIT)||Talk Title: Exploring differing definitions of congestion and analytic techniques for evaluating and identifying congestion
Talk Abstract: Identifying when a network is congested depends upon the definition of congestion one employs. Loosely speaking, everyone would agree that "congestion" is the state of network overload. However, this is not a precise definition adequate for characterizing exactly when or for how long a network is congested. More precise, but different, definitions are supplied by various authors and subdisciplines in networking. Each uses the term "congestion" to describe different (but related) phenomena. Each of these meanings of congestion is useful in its own right, but regrettably the particular definition in use is not always made clear in discussions. This talk therefore first explores different meanings of congestion and then presents recent work on different analytic techniques for evaluating and identifying periodic congestion in time series data.
Interested in Discussing: I am interested in the topic of non-traditional forms of interconnection, e.g. "intercloud", "cloud exchanges", "CDNi", "federated CDNs", etc. The traditional networking community is not adequately aware of the emerging standards, technologies, businesses, and challenges in this space. There are lessons to be shared in both directions. For instance, some might argue that "federated CDNs" are largely a failure (RFC 3570 was written back in 2003), which bodes ill for "named data networking." What are the economic and technical challenges that stand in the way? What can be measured?
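The periodic-congestion analysis mentioned in the abstract above could be sketched, in a deliberately simplified form, as an autocorrelation scan over a link-utilization time series. The synthetic data, thresholds, and peak-picking rule below are illustrative assumptions, not the talk's actual techniques:

```python
import math

def autocorr(series, lag):
    """Sample autocorrelation of `series` at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag))
    return cov / var

def dominant_period(series, max_lag):
    """Return the first local maximum of the autocorrelation function,
    a rough estimate of the period of recurring congestion episodes."""
    ac = [autocorr(series, k) for k in range(1, max_lag + 1)]
    for i in range(1, len(ac) - 1):
        if ac[i] > ac[i - 1] and ac[i] >= ac[i + 1]:
            return i + 1  # ac[i] corresponds to lag i + 1
    return None

# Synthetic link utilization with a 24-sample diurnal cycle.
util = [0.5 + 0.4 * math.sin(2 * math.pi * t / 24) for t in range(240)]
print(dominant_period(util, 48))  # → 24
```

Real measurement data would of course be noisier, and distinguishing genuine periodic congestion from diurnal demand patterns is part of what makes the problem interesting.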
|Ignacio Castro (Institute IMDEA Networks/ICSI)||Talk Title: Remote Peering: More Peering without Internet Flattening
Talk Abstract: The trend toward more peering between networks is commonly conflated with the trend of Internet flattening, i.e., reduction in the number of intermediary organizations on Internet paths. Indeed, direct peering interconnections bypass layer-3 transit providers and make the Internet flatter. We study an emerging phenomenon that separates the two trends: we present the first systematic study of remote peering, an interconnection where remote networks peer via a layer-2 provider. Our measurements reveal significant presence of remote peering at IXPs (Internet eXchange Points) worldwide. Based on ground truth traffic, we also show that remote peering has a substantial potential to offload transit traffic. Generalizing the empirical results, we analytically derive conditions for economic viability of remote peering versus transit and direct peering. Because remote-peering services are provided on layer 2, our results challenge the traditional reliance on layer-3 topologies in modeling the Internet economic structure. We also discuss broader implications of remote peering for reliability, security, accountability, and other aspects of Internet research.
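The economic-viability comparison described in the abstract could be illustrated, under heavily simplified assumptions (flat monthly IXP port and layer-2 circuit fees versus a purely usage-based transit bill; all figures hypothetical), as:

```python
def remote_peering_viable(offload_mbps, transit_per_mbps, port_fee, circuit_fee):
    """Remote peering beats transit when its flat monthly costs fall
    below the usage-based transit bill for the offloadable traffic.
    A toy comparison, not the paper's analytical model."""
    return port_fee + circuit_fee < offload_mbps * transit_per_mbps

# 2000 Mbps offloadable at $0.50/Mbps vs a $400 port plus a $300 circuit
print(remote_peering_viable(2000, 0.50, 400, 300))  # → True (700 < 1000)
```

The paper's actual conditions also weigh remote peering against direct peering, where the trade-off involves colocation and backhaul costs rather than transit prices.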
|Phillipa Gill (Stony Brook University)||
Talk Abstract: The goal of this research is to detect traffic differentiation in cellular data networks. We define traffic differentiation as any attempt to change the performance of network traffic traversing an ISP's boundaries. ISPs may implement differentiation policies for a number of reasons, including load balancing, bandwidth management, or business reasons. Specifically, we focus on detecting whether certain types of network traffic receive better (or worse) performance. As an example, a wireless provider might limit the performance of third-party VoIP or video calling services (or any other competing services) by introducing delays or reducing transfer rates to encourage users to adopt the provider's own services. Likewise, a provider may allocate more bandwidth to preferred applications. Previous work [1, 3, 5] explored this problem in limited environments: Glasnost focused on BitTorrent in the desktop/laptop environment and used port/payload randomization to avoid differentiation, while NetDiff covered a wide range of passively gathered traffic from a large ISP but likewise did not support targeted, controlled experiments. We address these limitations with Mobile Replay.
Interested in Discussing: Generally I'm interested in discussions about concerns of operators around congestion. My work measures actions that may be taken to mitigate congestion but I'd like to understand/discuss the economic factors that may motivate these actions. I'm also interested in norms for negotiating interconnection.
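A minimal sketch of the replay comparison described in the abstract above, assuming we already have throughput samples from an exposed replay and from a port/payload-randomized control, might flag differentiation when the medians diverge. The 20% threshold and the sample values are arbitrary illustrations, not Mobile Replay's actual statistical test:

```python
import statistics

def differentiation_suspected(exposed_kbps, control_kbps, threshold=0.2):
    """Flag differentiation when the exposed replay's median throughput
    deviates from the randomized control's median by more than
    `threshold` (as a fraction). Threshold is an assumption here."""
    m_exposed = statistics.median(exposed_kbps)
    m_control = statistics.median(control_kbps)
    return abs(m_exposed - m_control) / m_control > threshold

# A throttled replay vs its randomized control (hypothetical samples)
print(differentiation_suspected([480, 500, 510, 495],
                                [990, 1000, 1010, 1005]))  # → True
```

In practice one would use a proper two-sample test over many replay runs rather than a single median ratio, since cellular throughput is highly variable.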
|William Lehr (MIT)||Talk Title: Interconnection in the Clouds (in collaboration with Steve Bauer, MIT)
Talk Abstract: This talk will make the case for why focusing on "Cloud Interconnection" is worthwhile and what the relevant research questions are. The focus of the workshop is on Internet Interconnection, which has been going through significant changes in recent years due to a number of forces, including industry restructuring (i.e., the rise of the "hyper-giants"), the growth of mobile broadband, and the growth of OTT video services. Nevertheless, the "interconnection" question still focuses principally on the Internet's primary role as a network for end-to-end packet delivery. Industry participants along the value chain, standards bodies, and researchers developing new Internet architectures continually discuss expanding Internet functionality that promises to transform the Internet from a packet transport network to a more capable platform for "cloud services," which in its most general form implies providing access to in-network storage and computing resources/services. This raises the obvious question of how such services should be considered with respect to the Interconnection question -- are these overlays on top of an Internet packet network? Unpacking this question suggests a range of interesting research questions that relate to but also go well beyond the narrower focus of the workshop on more traditional Internet interconnection questions.
Interested in Discussing: Some other topics of interest... Mobile and implications for interconnection policy -- mobile raises lots of interesting issues: small cells (WiFi/cellular offloading), multihoming, etc. An offshoot of this that I am following is the design of the Spectrum Access System (SAS), which has its own interconnection issues since more than one SAS is expected to exist. The relationship to the Internet is more about security/privacy and the control plane (not a data-plane issue).
|Richard Ma (National University of Singapore)||Talk Title: Subsidization Competition: Vitalizing the Neutral Internet
Talk Abstract: Unlike telephone operators, which pay termination fees to reach the users of another network, Internet Content Providers (CPs) do not pay the Internet Service Providers (ISPs) of users they reach. While the consequent cross subsidization to CPs has nurtured content innovations at the edge of the Internet, it reduces the investment incentives for the access ISPs to expand capacity. As potential charges for terminating CPs' traffic are criticized under the net neutrality debate, we propose to allow CPs to voluntarily subsidize the usage-based fees induced by their content traffic for end-users. We model the regulated subsidization competition among CPs under a neutral network and show how deregulation of subsidization could increase an access ISP's utilization and revenue, strengthening its investment incentives. Although the competition might harm certain CPs, we find that the main cause comes from high access prices rather than the existence of subsidization. Our results suggest that subsidization competition will increase the competitiveness and welfare of the Internet content market; however, regulators might need to regulate access prices if the access ISP market is not competitive enough. We envision that subsidization competition could become a viable model for the future Internet.
Interested in Discussing:
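The subsidization mechanism proposed in the abstract above can be reduced to a simple fee split; the sketch below illustrates only the accounting, not the paper's game-theoretic model of subsidization competition (all prices are hypothetical):

```python
def split_fee(usage_gb, price_per_gb, subsidy_fraction):
    """Return (user_payment, cp_payment) when a CP voluntarily covers
    `subsidy_fraction` of the usage-based fee its traffic induces.
    Purely illustrative of the proposed mechanism."""
    total = usage_gb * price_per_gb
    return total * (1 - subsidy_fraction), total * subsidy_fraction

# A CP covering half the usage fee for 10 GB at $1/GB
print(split_fee(10, 1.0, 0.5))  # → (5.0, 5.0)
```

The interesting questions in the paper arise when multiple CPs choose their subsidy fractions strategically and the ISP sets the access price.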
|Andrea Soppera (BT)||Talk Title: Measuring Application Quality of Experience
Talk Abstract: Measuring network quality of experience is critical to operating our network in the most efficient manner. The talk will focus on application (video, web, etc.) measurements, in particular on the impact that network congestion can have on overall performance.
Interested in Discussing: Interconnection policies (peering, transit, etc.); beyond speed (understanding the impact of latency in the future); congestion (the impact of access and core congestion).
|Debasis Mitra (Columbia University)||
Interested in Discussing: The stability of Best Effort service on the Internet
|Patrick Ryan (Google/University of Colorado)||Talk Title: Specialized Services (and zero rating)
Talk Abstract: Does zero-rating help promote broadband growth? Does zero-rating violate net neutrality? I'll present an overview of what we know and what we don't in this space.
|Srinivas Shakkottai (Texas A&M University)||Talk Title: Creating an Ecosystem for Enhanced Spectrum Utilization Through Dynamic Market Mechanisms
Talk Abstract: The surge in demand for Internet access using smart handheld devices has resulted in the emergence of a multitude of apps and devices, which have diverse requirements and capabilities. However, underlying cellular packet scheduling policies optimize only for average or worst case performance, and do not allow end-users to indicate their value while running apps that are important to them. Simultaneously, it has become clear that while there are insufficient bandwidth resources for unlimited cellular data plans, the prevailing scheme of degrading access after a byte-limit is both unpopular and inefficient: for example, this policy impacts both high and low value applications in the same manner. The main goal of this presentation is to discuss how to bridge this disconnect between user preferences and allocated resources by the use of dynamic market mechanisms that allow for packet-level value determination over time. The objective is to study both primary markets in which service providers sell network access to end-users, as well as secondary markets in which end-users share resources via hot spots and device-to-device networking. One of the main tools used will be that of mean-field games, which are used to determine how a large number of agents, each acting in its own self-interest, results in an equilibrium distribution in the system. The focus will be on characterizing such equilibria in terms of the overall social good, and on discussing ways of incentivizing behavior that would yield good equilibria.
Interested in Discussing: The abstract above lies between the two topics indicated, in that it allows for congestion pricing in the wireless setting across users by using market mechanisms. Existing legislation on wireless access links is compatible with such schemes, which would require prioritizing one packet or app over another. A focus on markets in which end-users, rather than the communications provider, are the final arbiters of how resources are allocated might maintain a level of "fairness" across apps, compatible with the spirit of neutrality.
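The idea of packet-level value determination in the abstract above can be illustrated with a toy scheduler that serves the highest-value packets each slot. This is a deliberate oversimplification of the mean-field mechanism the talk will discuss, and the app names and values are invented:

```python
import heapq

def serve_slot(packets, capacity):
    """Toy value-based scheduler: each packet carries a user-declared
    value, and the `capacity` highest-value packets are served this
    slot. The rest wait (or are dropped, in a fuller model)."""
    return heapq.nlargest(capacity, packets, key=lambda p: p[1])

queue = [("video", 0.9), ("backup", 0.1), ("voip", 0.8), ("sync", 0.3)]
print(serve_slot(queue, 2))  # → [('video', 0.9), ('voip', 0.8)]
```

The mean-field analysis then asks what value distribution emerges when every user chooses declarations strategically, knowing everyone else does too.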
|Georgios Smaragdakis (MIT / TU Berlin)||Talk Title: Improving Performance and Cost of Content Delivery in a Hyperconnected World
Talk Abstract: Today, a large fraction of Internet traffic originates from Content Delivery Networks (CDNs). To cope with increasing demand for content, CDNs have deployed massively distributed infrastructures. These deployments pose challenges for CDNs as they have to dynamically map end-users to appropriate servers without being fully aware of the network conditions within an Internet Service Provider (ISP) or the end-user location. On the other hand, ISPs struggle to cope with rapid traffic shifts caused by the dynamic server selection policies of the CDNs. The challenges that CDNs and ISPs face separately can be turned into an opportunity for collaboration. We argue that it is sufficient for CDNs and ISPs to coordinate on server selection in order to perform better traffic engineering that also improves end-user performance. We also argue that the existence of Internet Exchange Points can be an enabler for closer collaboration between ISPs and CDNs towards a more sustainable (better performance and lower cost) content delivery to satisfy the ever-increasing demand for content.
Interested in Discussing:
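The server-selection coordination argued for in the abstract above could, in its simplest form, look like an ISP-supplied ranking that the CDN consults when mapping users to servers (an ALTO-style interface; the server names and ranks below are invented for illustration):

```python
def pick_server(candidates, isp_preference):
    """Map an end-user to the candidate server the ISP ranks best
    (lower rank means preferred, e.g. reachable over a local IXP).
    A hypothetical coordination interface, not the talk's system."""
    return min(candidates, key=lambda s: isp_preference.get(s, float("inf")))

ranks = {"fra-1": 0, "lon-2": 3}  # ISP prefers the server at the local IXP
print(pick_server(["lon-2", "fra-1", "nyc-9"], ranks))  # → fra-1
```

A real deployment would combine such hints with the CDN's own load and liveness information rather than following the ISP ranking blindly.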
|Christopher Yoo (University of Pennsylvania)||Talk Title: Interconnection and the Multiple Roles Played by Pricing
Talk Abstract: Interconnection prices play three distinct roles that are often conflated. First, in the short run, they allocate scarce capacity. Second, they provide incentives for users to conserve bandwidth. Third, they provide a long-run signal of when markets are in disequilibrium and an incentive for bringing markets back into equilibrium. The analysis is enriched by the literature on two-sided markets, which adds some additional complexity. The Comcast-Netflix interconnection agreement will be used as an illustration of these dynamics.
Interested in Discussing: