The (un)Economic Internet?

kc claffy, Scott O. Bradner, and Sascha D. Meinrath

Presented at the Internet Economics Track, 2007.

The (un)Economic Internet kicks off a new series of articles on policy, regulatory, and business-model issues relating to the Internet and its economic viability. Articles will appear regularly in IEEE Internet Computing and will explore a range of topics shaping both the Internet of today and the discourse in legislatures and deliberative bodies at the local, state, national, and international levels, in pursuit of enlightened stewardship of the Internet in the future.

Mindful both of the fundamental importance of Internet connectivity for advanced as well as emerging economies and of its day-to-day irrelevance to the unconnected vast majority of human beings, pieces in The (un)Economic Internet series will cover technological as well as political, economic, social, and historical issues relevant to Internet Computing's international readership. In this inaugural article we provide a historical overview of internetworking and identify topics in need of further exploration that we particularly encourage authors to cover in future articles in this series.

A Brief History of Internet (un)Economics

The modern Internet began as a relatively restricted US government-funded research network. One of the most revolutionary incarnations of this network, the pre-1983 ARPANET, was limited in scope -- at its peak providing data connectivity for roughly one hundred universities and government research sites. In the decades since, a few key transitions have radically transformed this communications medium. One of the most important of these critical junctures occurred in 1983, when the ARPANET switched from the Network Control Program (NCP) to the now-ubiquitous Transmission Control Protocol and Internet Protocol (TCP/IP). This switch helped change the basic architectural concept of the ARPANET from a single specialized infrastructure built and operated by a single organization to the 'network of networks' we know today. Dave Clark discusses this architectural shift in his 1988 Computer Communication Review paper "The Design Philosophy of the DARPA Internet Protocols" [Clark'88], writing that the top-level goal for the Internet protocols (TCP/IP) was "to develop an effective technique for multiplexed utilization of existing interconnected networks."

During this same period, network developers chose to support data connectivity across multiple diverse networks by using gateways (now called routers) as the network interconnection points. Earlier communications networks such as the telephone system used circuit switching, allocating an exclusive path (circuit) with a predefined capacity across the network for the duration of its use, whether or not the circuit's capacity was efficiently utilized. Breaking with traditional circuit-switched network design, which is still widely used in telephone networks around the world, early internetworking adopted packet switching as the core transport mechanism, facilitating far more economically and technically efficient multiplexing of existing network resources. In packet-switched networks, non-exclusive access to circuits is the norm (though dedicated lines are still sometimes bought); no specific capacity is granted to specific applications or users. Instead, data is commingled, with packet delivery occurring on a "best effort" basis. Each carrier is expected to do its best to ensure that packets reach their designated recipients, but there is no guarantee that a particular user will be able to achieve any particular end-to-end capacity: in packet-switched networks, capacity is probabilistic rather than statically guaranteed. The best-effort nature of Internet data transport has been a growing source of tension in regulatory and traditional telephony circles (cf. the debates currently raging over network neutrality). Likewise, as the Internet becomes an increasingly critical communications infrastructure for business, education, democratic discourse, and civil society generally, the need for systematic analysis of core functionality and potential problem areas has become progressively more important.
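
The economic appeal of statistical multiplexing is easy to see in a toy simulation. The sketch below contrasts dedicated peak-rate circuits with a single shared best-effort link; all parameters (the number of senders, their burst probability, and the 2x provisioning headroom) are invented purely for illustration:

    import random

    random.seed(42)

    N_SENDERS = 10      # hypothetical number of bursty senders
    BURST_PROB = 0.2    # chance a sender transmits in any given time slot
    PEAK_RATE = 1.0     # capacity units an active sender consumes
    TIME_SLOTS = 10_000

    # Circuit switching: every sender gets a circuit sized for its peak rate.
    circuit_capacity = N_SENDERS * PEAK_RATE
    # Packet switching: one shared link provisioned at 2x the *average* demand.
    shared_capacity = 2 * N_SENDERS * BURST_PROB * PEAK_RATE

    used_circuit = used_shared = 0.0
    overloaded_slots = 0
    for _ in range(TIME_SLOTS):
        demand = sum(PEAK_RATE for _ in range(N_SENDERS)
                     if random.random() < BURST_PROB)
        used_circuit += demand              # circuits carry it; idle capacity is wasted
        used_shared += min(demand, shared_capacity)
        if demand > shared_capacity:        # best effort: excess is delayed or dropped
            overloaded_slots += 1

    print(f"dedicated circuits utilized: {used_circuit / (circuit_capacity * TIME_SLOTS):.1%}")
    print(f"shared link utilized:        {used_shared / (shared_capacity * TIME_SLOTS):.1%}")
    print(f"slots exceeding shared capacity: {overloaded_slots / TIME_SLOTS:.1%}")

The dedicated circuits sit mostly idle at their owner's expense, while the far smaller shared link runs at much higher utilization, at the cost of occasional slots in which demand exceeds capacity -- precisely the trade that best-effort delivery makes.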

Early Internet developers could not have foreseen the degree to which the Internet, and private networks using Internet technologies, would displace other telecommunications infrastructures. It was not until the mid-1990s that visionaries such as Hans-Werner Braun [Braun'94] started warning protocol developers that they needed to view the Internet of the future as a global telecommunications system that would support essentially all computer-mediated communications. This view was eerily prescient, yet the core Internet protocols have not evolved to meet the increasing demands placed on them; they remain essentially the same as they were in the late 1980s.

A growing number of researchers are convinced that without significant improvements and upgrades, the Internet may face serious challenges that could undermine its future viability. Features such as network-based security, detailed accounting, and reliable quality-of-service (QoS) control mechanisms are all being explored to help alleviate potential problems. In response to these concerns, the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Next Generation Networks (NGN) effort [NGN] is working to define a new set of protocols that would include these and other features.

Security: It's Not the Network's Job

Different people have offered different explanations for the lack of security protocols in the initial design of the Internet. Clark's seminal paper does not mention security, nor does the protocol specification for the Internet Protocol [RFC791]. Since the network itself contains no security support, the onus has fallen on the people managing individual computers connected to the Internet, on network operators protecting Internet-connected hosts and servers, and on the operators of Internet service providers protecting their routers and other infrastructure services. Since services such as user or end-system authentication, data integrity verification, and encryption were not built into the core Internet protocols, they are now layered on an infrastructure that is not intrinsically secure. Few studies examine the potential economic rationales for this current and continuing state of affairs, or its ramifications for the efficiency, performance, and sustainability of the infrastructure.
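
This layering is visible in everyday application code: confidentiality and endpoint authentication come from wrapping an ordinary TCP socket in a protocol such as TLS at the endpoints, not from anything the network provides. A minimal Python sketch (the host name is arbitrary, chosen only for illustration):

    import socket
    import ssl

    # IP and TCP deliver unprotected bytes on a best-effort basis; encryption
    # and server authentication are added at the endpoints, here via TLS.
    context = ssl.create_default_context()  # verifies the server's certificate

    with socket.create_connection(("example.com", 443)) as raw_tcp:
        with context.wrap_socket(raw_tcp, server_hostname="example.com") as tls:
            # Only after the TLS handshake are the bytes confidential and
            # integrity-protected; the routers in between never participate.
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n"
                        b"Connection: close\r\n\r\n")
            print(tls.recv(200))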

Quality of Service: Too Easy to Go Without

The packet header of the original Internet Protocol included a Type of Service field to be used as "an indication of the abstract parameters of the quality of service desired" [RFC791]. This field, later redefined by Differentiated Services [RFC2474], has been used to define priority or special handling of some traffic within some enterprise and ISP networks, but has never seen significant deployment as a way to provide quality of service across the public Internet. Thus, the quality of service a user gets from the Internet typically results from ISP design and provisioning decisions rather than from any differential handling of different types of traffic. So far, 'throwing bandwidth at the problem' has proven a far more cost-effective method of achieving good quality than the introduction of QoS controls [Fishburn'98].
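
Where such markings are honored at all -- typically within a single enterprise or ISP network rather than across the public Internet -- the DS field can be set directly on a socket. A minimal sketch, assuming a Linux-style socket API and the Expedited Forwarding code point (the destination is a documentation-only address):

    import socket

    # DSCP 46 (Expedited Forwarding) occupies the upper six bits of what was
    # originally the Type of Service byte in the IP header.
    EF_TOS = 46 << 2

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Ask the OS to mark outgoing packets. Routers along the path may honor,
    # remark, or simply ignore the marking -- there is no end-to-end guarantee.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
    sock.sendto(b"latency-sensitive payload", ("192.0.2.1", 5060))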

Yet what happens when conditions change so that overprovisioning is no longer a panacea? The day-to-day quality most users experience from their broadband Internet service is good enough, for example, to enable voice over IP (VoIP) services such as Skype and Vonage, which provide telephony services that compete favorably with plain old telephone service. However, the explosive growth of video and other high-bandwidth applications may increase congestion on the current infrastructure to the point that special QoS mechanisms are required to maintain usable performance of even the most basic services.

Accounting: A Missing Goal

In their first paper on TCP/IP, Cerf and Kahn noted that accounting would be required to enable proper payments to the providers of Internet transport [Cerf'74]. More than a decade later, Clark echoed this requirement in his design-philosophy paper: the seventh and final of his second-level goals affecting the design of the TCP/IP protocol suite was that "[t]he resources used in the internet architecture must be accountable" [Clark'88]. However, as with security, there is no evidence that accounting was ever an operational goal for DARPA in developing and running the ARPANET, nor is there any indication that accounting was a goal for NSF in the follow-on NSFnet. Indeed, if a government agency is paying in bulk for the entire system, accounting itself is a technical as well as economic inefficiency. As a result, the Internet of today has no built-in accounting mechanisms, making it fundamentally different from previous circuit-switched networks and creating substantial debate over how to fairly meter and charge for broadband infrastructure and usage.
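
In practice, any usage-based charging must therefore be bolted on from outside the protocols, for example by aggregating flow records exported at a provider's edge. A toy sketch of such metering (the flow records, customer identifiers, and tariff are all invented for illustration):

    from collections import defaultdict

    # Hypothetical per-flow byte counts, e.g., derived from NetFlow/IPFIX
    # export at a provider's edge -- nothing in IP itself records who pays.
    flows = [
        ("cust-a", 1_200_000),
        ("cust-b", 350_000),
        ("cust-a", 4_800_000),
    ]

    PRICE_PER_GB = 2.50  # arbitrary illustrative tariff, dollars per gigabyte

    usage = defaultdict(int)
    for customer, nbytes in flows:
        usage[customer] += nbytes

    for customer, nbytes in sorted(usage.items()):
        gigabytes = nbytes / 1e9
        print(f"{customer}: {gigabytes:.4f} GB -> ${gigabytes * PRICE_PER_GB:.2f}")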

The Impact of the End-to-End Model

The Internet's architecture and initial deployment used an "end-to-end" (e2e) model of connectivity. Elements of this model were first discussed in the 1981 paper "End-to-End Arguments in System Design" by Saltzer, Reed, and Clark [Saltzer'81]. The general rationale behind the e2e model is that the network does not have to know what applications are running over it, since it is simply a neutral transport medium. This neutral handling of traffic has enabled the explosive innovation in edge services and applications over the past several decades. For example, an application developer does not have to get permission from ISPs, or pay them anything other than the normal service fee, to deploy a new application. By the same token, network operators do not know what applications are running on their networks, nor can they participate in the value chain for those applications.

Dave Clark once said that the Internet "did not know how to route money" [Clark-a]. Clark held that there was no efficient way for an independent service provider to share costs or profits with an ISP so that the ISP would provide better service to a user who is not a direct customer. The Internet economic model has always been "sender keeps all": an ISP serving a particular customer keeps all of the revenue from that customer, without regard to where that customer's traffic is going. In many countries, no regulations cover peering relationships among providers, leaving ISPs on their own to decide whether to peer. Typically, especially in the commercial sector, these decisions are based solely on immediate business interests, while more innovative business arrangements are few and far between.

Telephone Regulation

Many parts of the world have well-developed telephone networks. However, this robustness often comes at a cost to the networks' users: regulations requiring that telephone carriers ensure reliability, together with the price controls the carriers demand in order to guarantee a rate of return on that investment, boost service prices. A less regulated and price-controlled future for telephone carriers seems inevitable. It remains to be seen whether the telephone carriers will be as willing to put significant resources into reliable infrastructures, and the personnel needed to run them, if prices are set by competition rather than regulation. Likewise, the intersections among regulatory structures, pricing, service quality, and interconnectivity with other data communications services are still wide open for exploration.

Internet (non)Regulation

Regulation of the Internet has remained largely laissez faire. ISPs have not usually had to register with the government before offering services, and governments typically have not regulated either the service offerings or the service quality of ISPs. Yet government attitudes toward the Internet are beginning to change. For example, the first major US regulation covering ISPs, the Communications Assistance for Law Enforcement Act (CALEA), goes into effect in May 2007 and requires ISPs to register with the government and to track users. Numerous regulators have already begun investigating the viability of mandating that ISPs install QoS mechanisms to ensure that the Internet can be reliably used by emergency workers responding to natural or man-made disasters. Unless the network research community fundamentally changes its approach, future regulations will be considered, ratified, and implemented with little peer-reviewed empirical research documenting their likely technical and economic effects.

Internet Measurement

Because no systematic measurement activities exist to collect rigorous empirical Internet data, in many ways we do not really know what the Internet actually is. We do not know the total amounts and patterns of data traffic, the Internet's growth rate, the extent and locations of congestion, the patterns and distribution of ISP interconnectivity, or many other things that are critical if we are to understand what actually works and does not work in the Internet. These data are hidden because ISPs consider much of the information proprietary and worry that competitors could use some of it to steal customers or otherwise harm their business. The information is also hidden, or not collected at all, because there is no economic incentive to collect it, nor are there any regulations requiring its collection.

The Changing ISP Community

The original Internet was provided for "free" by governments and government-supported research institutes. In the US, direct federal government support for the backbone and attached regional networks ended in the mid-1990s, although tax incentives continued to promote private as well as public infrastructure development. However, the goal of complete private ownership of Internet infrastructure was never fully realized: today, many states and consortia continue to run their own networks, most of which restrict who can use them in some way, most often to educational and research constituencies.

Historically, most telephone carriers were not interested in offering Internet service themselves to individual homes or to the business community. Even when a telephone carrier did offer such services, it was usually through a separate division that was often seen by company management as outside the basic mission of the company. Instead, commercial ISPs often provided Internet service by leasing telephone carrier facilities or by setting up dial-up modem banks to interconnect with the plain old telephone system.

After commercialization of the infrastructure began, the Internet service provision business model was predicated on making a profit by charging customers more than it cost the ISP to run the service. This was a problematic business model, since Internet connectivity is a commodity service, with most customers caring more about low prices than about claims of better quality or advanced services. Competition, along with undefined accounting mechanisms for the new technology, thus drove prices below sustainable levels for most providers. The resulting massive consolidation of providers is still in play, and customers are no more willing to pay high prices for Internet service in the new environment: a survey quoted in a 2002 FCC report determined that only 12% of customers would be willing to spend $40 per month for broadband Internet service [FCC'02].

Meanwhile, the telephone carriers began to offer broadband Internet service directly over their own facilities, particularly in higher-income urban residential markets, directly competing with the commercial ISPs that had been offering service via overlays on the carriers' facilities. Paralleling the telephone carriers' entry into the broadband market, cable TV companies also began providing broadband Internet service over their own facilities. Today, most residential customers get Internet access from telephone carriers or cable TV companies, for which the Internet business is only part of their service offerings, rather than from commercial ISPs whose main business is Internet service. While standardized, "cookie-cutter" service packages have hampered what customers can do with their network services, the impact of this shift of broadband service provision from ISPs to telephone and cable TV companies on the quality and dynamism of Internet service has yet to be systematically studied.

The (un)Economic Internet

All of these factors are background to the current debates on the future of the Internet, often lumped under the heading of "network neutrality" -- a discussion with far wider and deeper implications than that label conveys. The key question at the root of the debate is whether viable economic models exist for Internet service provision, given the high cost of deploying physical infrastructure and operating the network, coupled with ISPs' current inability to participate in the much more profitable application value chain. Further complicating analysis are internally conflicted regulatory agencies, tasked with ensuring both that the best interests of the general public are kept foremost and that the "free market" be allowed to innovate, and police, itself.

Many of the first-generation ISPs went out of business because they could not find a successful business model given the constraints placed on them by the twin forces of the incumbent local exchange carriers (ILECs) and their own customer base. The current generation of telephone-carrier-based ISPs is asking regulators for the ability to charge differentially based on the applications used and content consumed. These companies claim that without this type of discriminatory pricing they will not be able to afford to deploy the necessary infrastructure upgrades. Their opponents worry that letting ISPs decide which applications are permitted to use their facilities, and at what cost, would destroy the very environment that enabled the creation of today's Internet.

Meanwhile, a growing number of communities have decided that they are not being well served by existing Internet service providers, generally the telephone carriers, and have decided to build their own Internet infrastructures -- much as the academic community did immediately after the NSFNET backbone was retired, and as many state education networks, e.g., California's CENIC, Florida's LambdaRail, and New Mexico's LambdaRail, have already undertaken. There is a growing, but far from universal, view that basic Internet connectivity is a fundamental civil-society requirement (much like roads, schools, etc.) and that governments should therefore ensure universal access to this valuable resource.

Another scenario that will deeply alter the economics is commercial ISPs leasing government-funded infrastructure. These public-private partnerships are currently being developed in thousands of communities around the globe. In fact, the business models for ensuring digital inclusion and lessening the digital divide are as varied as the applications running on these broadband networks. Objective empirical analysis of these models, including empirical validation of inputs, outputs, and interacting technological factors, is one of the least understood and yet most vitally important aspects of this emerging critical infrastructure.

The (un)Economic Internet series focuses on the ongoing debates surrounding issues of economics and policy, and how they are influenced by, and should influence, science and engineering research. We are heading into another decade of tremendous innovation, not only in wireless connectivity and the high-bandwidth applications and services that use it, but also in the business models that will determine their success or failure. Gaining a better understanding of the tussles (known outside our field as "economics and politics") among providers, users, and regulators of Internet access, services, and applications will help ensure enlightened progress on security, scalability, sustainability, and stewardship of the global Internet in the 21st century and beyond.

    References

  1. Hans-Werner Braun, private conversation, 1994.
  2. Dave Clark, IRTF presentation, date unknown.
  3. D. Clark, "The Design Philosophy of the DARPA Internet Protocols," Proc. ACM SIGCOMM '88, August 1988.
  4. V. Cerf and R. Kahn, "A Protocol for Packet Network Intercommunication," IEEE Transactions on Communications, vol. COM-22, no. 5, May 1974.
  5. FCC, "Third Report on the Availability of Advanced Telecommunications Capability Services," February 2002.
  6. P.C. Fishburn and A. Odlyzko, "The Economics of the Internet: Utility, Utilization, Pricing, and Quality of Service," Proc. First International Conference on Information and Computation Economies (ICE-98), ACM Press, 1998, pp. 128-139.
  7. ITU-T Next Generation Networks, www.itu.int/ITU-T/ngn/
  8. J. Postel, "Internet Protocol," RFC 791, September 1981.
  9. K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," RFC 2474, December 1998.
  10. J. Saltzer, D. Reed, and D. Clark, "End-to-End Arguments in System Design," Proc. Second International Conference on Distributed Computing Systems, April 1981, pp. 509-512.

kc claffy is the founder and director of the Cooperative Association for Internet Data Analysis (CAIDA) and adjunct associate professor in the Department of Computer Science and Engineering at the University of California, San Diego. claffy has a PhD in computer science from UCSD.

Scott O. Bradner is senior technical consultant at the Harvard University Office of the Assistant Provost for Information Systems. He's also a member of the Internet Engineering Steering Group, vice president for standards for the Internet Society, and a member of the IEEE and the ACM.

Sascha D. Meinrath is the Director for Municipal and Community Networking for the CAIDA COMMONS project and a telecommunications fellow at the University of Illinois Institute for Communications Research, where he is finishing his PhD. His research focuses on community empowerment and the impacts of participatory media, communications infrastructures, and emergent technologies. Meinrath has an MS in psychology from the University of Illinois, Urbana-Champaign. He is the cofounder and executive director of CUWiN, an open-source wireless project.
