Geographic Addressing Reconsidered
Eric Hoffman and K. Claffy

Humans have not inhabited Cyberspace long enough or in sufficient diversity to have developed a Social Contract which conforms to the strange new conditions of that world. Laws developed prior to consensus usually serve the already established few who can get them passed and not society as a whole.

The addressing model of the Internet constrains the shape of the routing system and charging model. Only one concrete proposal, provider-based addressing, is currently under consideration for the management of the next generation IP address space (IPv6), despite concerns about its ramifications. In this paper we outline a counter-proposal called Metro addressing and examine some of the technical and business issues that would result from its adoption.


As the Internet enters popular consciousness, portending large-scale changes in how society will operate, important questions have emerged regarding fairness and responsibility in several aspects of the base Internet architecture.

The questions are many, and all have cumbersome legal, financial, and cultural ramifications. We focus here on only one: the addressing model, the technical cornerstone of the Internet's ability to move data from sender to any connected receiver. The shape of the address space ultimately determines the effective scalability and constrains the financial model of the system. We contrast two models of address assignment, provider-based and geographic-based, expanding on the analysis of Tsuchiya, and explore their societal and technical ramifications.

One difficulty in discussing address space is that although its effective use is essential for the technical feasibility of Internet operation, the ownership of address space and the responsibility for justifying its use remain ill-defined. There are several revealing analogies in other spheres, e.g., spectrum assignment and international telephony. Like spectrum bands, IP address space is a finite and contended resource that requires careful assignment and management. We note also that it has been, and continues to be, particularly challenging for regulatory bodies to create and enforce equitable and consistent policies for spectrum allocation.

Internet addressing policy must be constructed out of a careful balance among several competing concerns.

Although the IETF has entertained discussion of addressing policy spanning all of these concerns, the Internet has reached a stage of maturity and breadth of scope which requires that these issues see wider debate.

Addressing and Routing

Routing in the Internet requires network elements to proactively exchange information concerning reachability to sites on the Internet. It is the responsibility of a router to filter this information through policy rules that specify which providers to use for what kinds of service. The router then summarizes the filtered information into a forwarding table, which it uses to make per-packet decisions about which outgoing interface to use for arriving packets.
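As a concrete illustration, the per-packet forwarding decision amounts to a longest-prefix match against the forwarding table. The sketch below (with purely illustrative prefixes and interface names) shows the logic; a real router would use a specialized data structure such as a trie rather than a linear scan.

```python
import ipaddress

# A toy forwarding table mapping prefixes to outgoing interfaces.
# All prefixes and interface names are illustrative only.
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): "if0",
    ipaddress.ip_network("10.1.0.0/16"): "if1",
    ipaddress.ip_network("0.0.0.0/0"): "default",
}

def lookup(dst):
    """Longest-prefix match: of all prefixes containing dst,
    choose the most specific (longest) one."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in forwarding_table if addr in net),
               key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(lookup("10.1.2.3"))   # matched by 10.1.0.0/16 -> if1
print(lookup("192.0.2.1"))  # only the default route matches
```

The cost of performing this match for every arriving packet is precisely why table size matters.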

The two most performance critical tasks for an Internet router are processing routing updates and consulting this forwarding table on a per-packet basis. The costs of memory to store both the routing and forwarding tables, and the processing power needed to update and consult them, place economic constraints on their size. While high-speed interface technology is becoming more broadly available and cheaper, routing table size is increasing, thus making the routing system an increasingly large factor in total network efficiency and cost.

Rather than maintaining information about each attached host in the forwarding tables of backbone routers, routers can summarize or aggregate reachability information. Aggregation allows the Internet to scale. The basic form of aggregation, collecting groups of hosts into subnets, collapses routing information for the hosts within the subnet into a single route describing reachability to the entire subnet. This base level of aggregation has become insufficient for reducing the size of routing tables in the backbone, and the operational Internet is now trying to collapse routing entries into larger blocks of contiguous addresses all served by the same provider, thus having the same path.

Provider Based Addressing

In the early 90s, extrapolation of the exponential address allocation trends observed up to that point predicted two catastrophes: the depletion of usable address space around the year 2000, and an explosion in the size of routing tables beyond that which technology would be able to accommodate.

The grim nature of the prediction sparked the development of a next generation IP protocol (IPng) with a larger address space. But because it was clear that the situation would become unmanageable before IPng deployment, the IETF also undertook a short-term measure: closer examination of address allocation policy and subsequent usage. IETF working groups, including the ALE (Address Lifetime Expectancy) group, proposed Classless Inter-Domain Routing (CIDR) [RFC 1518] as a more sensible address allocation scheme that would mitigate the growth in backbone routing table size.

CIDR curtails routing table growth through address aggregation: coalescing into a single route table entry multiple routes from different organizations that are connected to the Internet through the same provider.
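The effect of such aggregation can be sketched with Python's standard `ipaddress` module; the prefixes below are illustrative.

```python
import ipaddress

# Four contiguous /24s, e.g. held by different organizations that
# all connect through the same provider (prefixes are illustrative).
routes = [ipaddress.ip_network("198.51.%d.0/24" % i) for i in range(4)]

# collapse_addresses merges contiguous prefixes into the smallest
# covering set: here, a single /22 replaces four routing entries.
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)  # [IPv4Network('198.51.0.0/22')]
```

Backbone routers then need only the one covering route, regardless of how many organizations sit behind it.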

Development of the next generation Internet Protocol (IPv6) has occurred in parallel to CIDR development and deployment. It simplifies and rationalizes the forwarding semantics of earlier versions (IPv4), but the primary motivation for IPv6 was its use of larger addresses. With enough space to address some 2^128 (roughly 3.4 × 10^38) interfaces, proper address space management would allow the space to last humanity quite a long time.

However, the ability to address so many nodes is not the same as the ability to route to them, and the scalable aspect of CIDR-like addressing seemed a natural feature for the next generation address space as well.

Provider-based addressing for IPv6 [RFC 1887] uses prefixes of variable size, just as CIDR does today, to give providers blocks of a size appropriate to their usage patterns.

Assuming users of the network nestle properly under the large aggregation blocks of their providers, hierarchical aggregation would ensure that the routing system at the top level was as terse as possible. Proper address management under this scheme would ensure that the curve relating total number of attached hosts to backbone routing table size would be as flat as possible. The hope is that this growth would remain within the ability of routers, benefiting from steadily improving technology, to manage routing tables of that size.
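A minimal sketch of this nesting, using the IPv6 documentation prefix as a hypothetical provider block:

```python
import ipaddress
from itertools import islice

# Hypothetical provider block; 2001:db8::/32 is the IPv6
# documentation prefix, used purely for illustration.
provider = ipaddress.ip_network("2001:db8::/32")

# Customers number out of the provider's block as /48s ...
customers = list(islice(provider.subnets(new_prefix=48), 3))

# ... so each customer prefix nests under the provider prefix, and
# the backbone needs to carry only the single /32 aggregate route.
assert all(c.subnet_of(provider) for c in customers)
print(customers[0])  # 2001:db8::/48
```

So long as every customer numbers out of its provider's block, the backbone table grows with the number of providers, not the number of customers.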


For individual customers, provider-based addressing has the side effect of requiring them to renumber their nodes every time they change providers. Because renumbering can be costly, provider-based addressing presumes the existence of nearly transparent renumbering capability, lest it suppress the natural market forces that would otherwise dictate transitions between providers. IPv6 efforts have focused substantial attention on making renumbering as automatic as possible [RFC 1971]. However, renumbering equipment in the current Internet still imposes a significant burden on even small organizations. Furthermore, many Internet applications still use host addresses as unique identifying keys; such applications range from transient sessions such as TCP connections, to globally cached information such as DNS mappings, to near-permanent relationships such as tunnel endpoints and NNTP and AFS configurations.

Although such applications could use DNS records in place of IP addresses for these functions, software designers have preferred to avoid reliance on the DNS, since transient DNS failures are quite common. Currently the DNS itself requires manual configuration of server address information in the forward direction, and an external registry to handle changes in the reverse direction.

Efforts to alleviate the renumbering burden have primarily focused on mechanisms to facilitate the assignment of arbitrary addresses to end systems, but another alternative, Network Address Translators (NATs), has also received attention. IPv6 itself has rules for dealing with multiple sets of addresses associated with an interface, primarily for phasing out old sets of addresses in favor of new prefixes. While these mechanisms can somewhat automate a transition, it is clear that without serious changes to hosts and application semantics, renumbering will never be fully transparent.

The degree of transparency ultimately determines the cost any customer perceives in changing providers. If renumbering is sufficiently disruptive and costly, provider-based addressing will seriously damage the purity of competition in the Internet service market.

Individual customer renumbering is not the worst case. Singly homed resellers of Internet service, i.e., those fully dependent on a parent provider for transit service, would bear a compounded risk. Current provider-based schemes, including [RFC 1887], allow service providers their own address blocks, but this policy will be unsustainable as the number of leaf providers grows enough to inhibit routing scalability. Continued growth will inevitably involve recursive aggregation, resulting in singly homed smaller providers using address space allocated from the blocks of their parent providers. If such a provider needed to change transit providers for business or legal reasons, it would have to impose renumbering on every one of its customers.

Settlements For Route Propagation

In order to ensure that the networks they serve will be universally reachable from the Internet, providers must arrange with one another for propagation of their routes through the system. Carrying transit routes incurs a load on provider infrastructure, and there is as yet no direct financial incentive to control the number of routes one announces. Unabated growth in routable entities with no feedback mechanism to control the proliferation of routes has seriously threatened the ability of current backbone routers to maintain the necessary routing tables. In order to limit routing table size and promote aggregation, at least one provider has already resorted to imposing a lower limit on the size of route blocks that it will announce.

Rekhter, Resnick, and Bellovin propose in PIARA [PIARA] creating a market around route advertisements, so that closed-loop economic feedback can balance the global costs of route maintenance against the selfish desire to consume large amounts of address space and avoid renumbering. In the limit, the PIARA scheme requires that settlements be provided on a contractual basis to carry individual routes.

Internet reachability to prefixes increasingly involves a set of contractual and legal relationships that stretch far beyond the customer and immediate provider. Although providers need some mechanism to recover transit costs, whether usage-based or flat-rate, it is far less clear that their reachability should be subject to second-order business dynamics over which customers have no control.

Furthermore, although the economic ramifications of Internet access outside the first world are still slight, it is naive to assume they will remain so as countries and businesses rely more fully on electronic information exchange. Although any provider-based addressing scheme will likely involve allocating blocks to countries for local administration, control over route propagation will still likely fall under the auspices of a set of multinational contractual relationships. Considerable debate over precisely this concern in the context of the current IPv4 provider-based addressing policy has already occurred [Hubbard].

Metro Addressing Scheme

One alternative approach to provider-based addressing uses network address prefixes that correspond to major metropolitan areas [Deering].

The essential design goals of Metro are:

The metro addressing scheme structures Internet backbone service around Metropolitan Exchange Points, or MIXs. These exchange points resemble the Network Access Points (NAPs) of today's Internet, but provide the added functionality of routing traffic destined for some customer within a metro to the second tier provider responsible for that customer.

The fourth design goal above, that customers should not have to change addresses so long as they stay within a metropolitan area, requires that addressing within a MIX be essentially flat, with no structure to exploit. The proposed metro address scheme allocates 3 bytes within the IPv6 address field to represent this flat space. Organizations permanently attached to the MIX receive a site identifier; more dynamic, address-on-demand customers can receive identifiers from the site through which they connect. Each site within the Metro area is granted a site identifier out of a pool of 2^24 (roughly 16 million) per Metro. Each site will have 80 bits, or 2^80 addresses, with which to number hosts and implement internal hierarchies.
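To make the field boundaries concrete, the following sketch packs and unpacks a 128-bit metro address under an assumed layout: the 16-bit metro id, 24-bit site identifier, and 80 intra-site bits come from the text, while the leading 8-bit format prefix is purely our assumption to fill out 128 bits.

```python
# Hypothetical bit layout for a 128-bit metro address:
#
#   | 8: format | 16: metro | 24: site id | 80: intra-site |
#
# Only the metro, site, and intra-site widths come from the text;
# the format prefix is an assumption for illustration.

def pack_metro(fmt, metro, site, local):
    assert fmt < 2**8 and metro < 2**16 and site < 2**24 and local < 2**80
    return (fmt << 120) | (metro << 104) | (site << 80) | local

def unpack_metro(addr):
    return (addr >> 120, (addr >> 104) & 0xFFFF,
            (addr >> 80) & 0xFFFFFF, addr & (2**80 - 1))

addr = pack_metro(0x2, 0x0123, 0xABCD, 42)
assert unpack_metro(addr) == (0x2, 0x0123, 0xABCD, 42)
```

Backbone routers would examine only the metro field; the site id matters only once a packet reaches the destination MIX.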

The underlying assumption is that indefinite recursive aggregation is not necessary, only a high level of aggregation based on short geographic prefixes. Since the number of countries is small, and hierarchical aggregation can optionally occur across country boundaries, backbone (core) router forwarding tables will be much smaller than current ones, while serving a subscriber base several orders of magnitude larger.

This scheme bears a strong resemblance to the stratum addressing that Rekhter outlines in [Stratum]. In both schemes, addresses are organized around interchange points and allow arbitrary movement within the exchange point. The major difference between the approaches is that the stratum approach does not impose any geographic context on the interexchange address space, thus allowing less constrained interactions among the members of a stratum. The drawback, however, is that addresses assigned underneath a stratum are subject to the dynamics of the providers and strata themselves, with a corresponding loss of permanence in the addresses assigned.

Intra-MIX Routing

Metro addressing drastically simplifies the backbone routing problem, but requires careful engineering to solve the intra-Metro routing problem. Provider independence requires that any site identifier be reachable through any provider from the MIX. This routing space is completely flat and corresponds in size to the total number of sites active within the region. As this number grows, traditional dynamic routing protocols, designed to track highly dynamic changes in a small number of network prefixes, will no longer be appropriate for exchanging reachability information to sites. The size of this table also creates a special role for MIX routers, requiring them to have a large routing table capacity and the ability to handle a large volume of routing information.

Deering [Deering] proposes a relatively static MIX-wide broadcast protocol that would result in an exchange of customer identifiers on a daily basis. Although this would handle the provider change scenario well, it does not handle the case of multihomed sites, e.g., those with redundant connections desiring active failover (see the discussion of multihoming below).

An alternative solution is an ARP-like mechanism that would fill in customer IDs on demand from attached MIX routers. MIX routers would then cache these addresses with a timeout value appropriate to the volatility of the customer-provider binding.
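A sketch of such a cache, in the spirit of the ARP-like mechanism just described; the resolver callback (which would query attached MIX routers) and the timeout value are assumptions for illustration.

```python
import time

class SiteRouteCache:
    """On-demand cache of site-id -> provider bindings. Entries
    expire after a TTL matched to the volatility of the
    customer-provider binding."""

    def __init__(self, resolve, ttl=3600.0):
        self._resolve = resolve  # called on a cache miss
        self._ttl = ttl          # seconds before a binding expires
        self._cache = {}         # site_id -> (provider, expiry time)

    def lookup(self, site_id, now=None):
        now = time.monotonic() if now is None else now
        entry = self._cache.get(site_id)
        if entry is not None and entry[1] > now:
            return entry[0]      # binding still fresh
        provider = self._resolve(site_id)
        self._cache[site_id] = (provider, now + self._ttl)
        return provider

cache = SiteRouteCache(resolve=lambda site_id: "provider-a", ttl=60.0)
print(cache.lookup(0xABCD))  # miss: resolved once, then cached
```

A short TTL tracks provider changes quickly at the cost of more resolution traffic; a long TTL does the reverse.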

A third possible solution is to use a centralized server as part of the base MIX service. Providers would register customers at the server and could obtain partial or complete dumps of the intra-MIX routing information. Although servers that maintain such mappings are generally single points of failure and often architecturally unnecessary, a single synchronization point for this information would enforce a consistent policy across the MIX for each customer.

The routing problem may also not be as bad as it originally appears. Since we can number initial allocations out of contiguous blocks, and since the number of providers as well as the customer migration rate among providers will likely be small, opportunistic aggregation might provide increased routing abstraction.


Multihoming

Multihomed sites are those that attach to more than one provider. Sites multihome to enhance network reachability or to connect to special, perhaps mission-oriented, resources. Multihoming has traditionally complicated provider-based aggregation, since by definition multihomed sites do not fit neatly underneath a single aggregate prefix of a parent provider. To multihome in the current Internet, a site must get its own autonomous system (AS) number in order to advertise its own reachability directly into a default-free, or core, routing table.

Since the MIX can accommodate routing from one provider to another entirely within itself, metro addressing avoids this problem, at least for sites multihomed within a single metro. Since aggregation occurs above the interchange point, multihoming will have no effect on the global Internet routing system.

Admittedly, using a second provider for fallback assumes a fairly dynamic routing protocol, which is contrary to the previously mentioned alternative of performing only daily updates of intra-MIX site id information.

Topological Constraints

Many providers feel strongly that metro addresses and the implied two-layer MIX routing is too topologically constraining and costly. This concern derives primarily from the assumption that all backbone providers must connect to each interchange point.

We suggest that it is not strictly necessary for backbone providers to connect to exchange points at which they have no customers. There is no architectural constraint preventing backbone providers from peering directly with each other, using appropriate settlements for transit to metros that they do not serve directly. These peer relationships would be very similar to existing direct peerings, and could occur directly off the MIX or in some other context.

Another misconception concerning geographic addressing is that it strongly constrains topology by geographical proximity. Many of today's backbones do not map directly onto geographic proximity, especially given the current popularity of embedding long haul IP links within a mesh of frame relay, SONET, or ATM switching elements. Unlike more strictly geographic approaches, the Metro scheme uses dynamic routing to maintain, for each provider, the best path between exchange points given arbitrary topologies between MIXes.

One potential negative commercial impact of metro-based addressing is the heavy stratification of roles. While the profit model for a provider that serves end customers is straightforward, it is less clear what business model will best serve providers acting solely in a backbone transport capacity. For current large scale backbone providers, subscriber fees from leaf customers cover much of the cost of maintaining the long haul resource.

However, there is no reason that a backbone provider should not also serve as an intra-metro provider, in which case they would bypass the MIX for inter-Metro traffic to customers within the Metro. Although not strictly required, such a provider would likely use metro aggregation within its own routing system to prevent the insertion of non-scalable customer routes into its global routing system.

Direct second-tier clients of a backbone provider can also arrange transit along dedicated links bypassing the MIX, as shown in the intra-MIX transit figure. If this client is a provider, it would need to form an intra-MIX routing adjacency with the backbone provider and advertise its customers into the provider's intra-MIX routing tables. These routes would allow traffic in both directions to use the dedicated link, but would not prevent the backbone provider from aggregating the entire Metro across the wide area, or force traffic from other sites within the MIX to traverse the backbone.

Establishment of Exchange Points

The MIXs play a central role in the proposed Metro architecture, and their establishment and maintenance merits careful consideration. The non-monopolistic nature of the exchange point is essential. Any second-tier provider capable of meeting some generally accepted criteria must be able to connect. Without this constraint the MIX itself would breed monopolistic behavior and encourage providers to violate the geographic locality of the address space. Possible models include mediation by a loose cooperative or government body.

Physically, a MIX would resemble a NAP or MAE (Metropolitan Area Ethernet): either a centralized switched backbone or a physically dispersed shared medium.

Proxied metros would relax the constraint of having backbone connections at each defined Metro, thus providing a crucial mechanism for any initial metro-based deployment. Assigned metro regions without an exchange would select a nearby exchange as a parent. Although they would number out of their assigned metro, each provider in the proxied metro would procure their own link to the nearby exchange.

As soon as there were enough providers in an area to justify an independent exchange point and attract a backbone provider, the network could incrementally rearchitect itself around the new MIX without any address reassignments.

Exchange Point Competition

In the San Francisco area there are currently several exchange points, including MAE-West, the Pacific Bell NAP, PCH, and FIX-West. Each of these has arisen out of the differing needs and business models of providers that serve that region.

Metro routing does not preclude the creation of multiple exchanges in a Metropolitan area; on the contrary, accommodating all the service providers in a dense region might require more than one exchange point. Just like today's exchange points, each of these interchanges would need some default connectivity to each of the others, in order to carry inbound traffic to providers not connected to the destination MIX, as well as traffic between the exchange points within a geographic region.

These interexchange links are already difficult to manage in today's Internet, given that they are essentially public resources with no reasonable mechanism for cost recovery from aggregates across large exchange points. Metro-based routing, however, imposes one additional burden: the maintenance of a coherent, fully qualified intra-MIX routing table consistent among all of the exchanges.

Addressing Authority

Responsible use of allocated space creates an interesting issue for resellers of Internet service in a provider-based framework, who negotiate address space for their leaf customers. Although less critical in IPv6, scalability of provider-based addressing requires active management by addressing authorities and providers to ensure conservative use of space. An organization capable of demonstrating that it serves a large number of end users and other providers can receive large blocks, with future allocations based on growth of the top level provider as well as effective use of previous allocations.

In contrast, the site identifiers used by Metro are neither scarce nor structured; assignments will not be highly dynamic, nor subject to as much policy consideration as in a CIDR scheme. The third design goal for geographic addressing, that local regions have administrative control over their own address space, dictates that metros, or metro aggregates such as countries, manage their own site identifiers. As with existing exchange points, a natural organizational structure for the MIX could involve either governmental addressing authorities or cooperatives within the metro.

At the top level, some global organization will manage the 16-bit metro space. Appropriate initial allocation should allow this address space to remain static over time scales that make it amenable to management by a treaty organization.

The Charging Problem

As the Internet continues its transition to a fully commercial framework, considerable indeterminacy remains regarding viable underlying business models. Yet one cannot really design a routing framework without analyzing its effect on the ability of providers to charge for service.

Most differentiation among current backbones derives from their ability to provide effective transit for large leaf customers and directly attached singly-homed providers. Since attachment points are a potential source of revenue, there is little economic incentive to provision a backbone to provide transit for default traffic.

Although some networks offer usage based billing at rough granularities, and other proposals for usage based pricing are emerging, the most common charging mechanism in the operational Internet today is routing policy. ISPs typically provide service in two directions, by carrying packets from an attached customer to the backbone, and by providing routes at major exchange points to their customers. ISPs use explicit route filtering as well as route announcements to provide symmetric control over which routes to accept.
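A toy illustration of route filtering as charging policy; the peer names and prefixes are hypothetical.

```python
import ipaddress

# Hypothetical per-peer policy: the prefixes whose announcements an
# ISP will accept (and hence toward which it will carry traffic).
accept_policy = {
    "customer-a": [ipaddress.ip_network("203.0.113.0/24")],
    "peer-b": [],  # nothing accepted: peer-b is unreachable via us
}

def filter_announcements(peer, announced):
    """Keep only announcements covered by the peer's accepted prefixes."""
    allowed = accept_policy.get(peer, [])
    return [route for route in announced
            if any(route.subnet_of(prefix) for prefix in allowed)]

announced = [ipaddress.ip_network("203.0.113.0/25"),
             ipaddress.ip_network("198.51.100.0/24")]
print(filter_announcements("customer-a", announced))
# only 203.0.113.0/25 survives the filter
```

Accepting or rejecting a peer's announcements is thus equivalent to deciding whose traffic the ISP will deliver, which is what makes routing policy double as a charging mechanism.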

Charging by incoming interface is more implicit. An ISP applies forwarding policy based on the identity of the routing peer, which in most cases corresponds to the source interface or router from which routing information arrives. Leaf connections to the Internet today typically use this charging model.

Since metro based addresses contain no provider information, and site ids are flat and fully aggregated at the exchange boundary, advertising routes for directly attached customers is no longer possible. A provider could inject non-aggregated site routes to retain this policy flexibility, but it certainly would not scale to a large number of such routes. Service within the Metro model is thus necessarily constrained to providing settlements based on the sender, not the receiver. Ultimately, this unidirectional service model may turn out to be insufficient to express whatever charging policy may evolve. While it does offer providers the ability to instrument attachment and usage based policy for transmitters, it deliberately restricts the ability to filter based on receiver in the wide area.

An additional challenge to charging in the Metro model comes from the need to provision outbound service from second tier providers transiting the MIX. Since backbone providers currently only receive revenue from the destinations to which they explicitly route, they are unlikely to be willing to continue carrying traffic toward non-customers without some mechanism for revenue recovery. Unless each second tier provider is willing to negotiate separate interconnect agreements for outgoing traffic with each provider, the group of attached second tier providers will have to collectively subsidize default outbound connectivity. Enforcement in the first case, and in the second case if there are second tier providers that will not contribute to the subsidy fund, would require using layer 2 information to verify conformance to the agreement.


Conclusion

The IPv6 address space will host the Internet for some time to come. Careful consideration should precede initial numbering to ensure that routing in IPv6 will scale in usage as well as in routing table size. Most discussion of addressing models to date has focused on the problem of allocation to support maximal aggregation. While aggregation is absolutely essential to maintaining a scalable infrastructure, it is not the only aspect of the wide area Internet that address assignment directly impacts.

Renumbering and route distribution policy are tools that can help improve the efficiency of the global routing system, but they also place the burden of implementation on the end user, and could actually encourage a complete or partial monopoly over some segment of the market. Provider based addressing optimizes routing assuming a relatively static, hierarchical routing system that is uni-connected at the edges. Metro based addressing provides an interesting alternative, although it presents a simplified costing model and requires further investigation into the details of intra-MIX routing.

We concede that excessive interference by regulatory bodies can be harmful to technological development, but in this case the ramifications are too broad to be debated on technical merit alone. The deployment of an address model, whether provider-based, Metro, or some other alternative, will determine the ultimate scalability of the Internet, in terms of routing table size as well as general utility to society.