Replication Strategies for Highly Available Peer-to-Peer Storage
R. Bhagwan, D. Moore, S. Savage, and G. Voelker, "Replication Strategies for Highly Available Peer-to-Peer Storage ", in Future Directions in Distributed Computing (FuDiCO), May 2002.


Ranjita Bhagwan
David Moore
Stefan Savage
Geoffrey Voelker

Department of Computer Science and Engineering,
University of California, San Diego

In the past few years, peer-to-peer networks have become an extremely popular mechanism for large-scale content sharing. Unlike traditional client-server applications, which centralize the management of data in a few highly reliable servers, peer-to-peer systems distribute the burden of data storage, computation, communications and administration among thousands of individual client workstations. While the popularity of this approach, exemplified by systems such as Gnutella, was driven by the popularity of unrestricted music distribution, newer work has expanded the potential application base to generalized distributed file systems, persistent anonymous publishing, as well as support for high-quality video distribution. The widespread attraction of the peer-to-peer model arises primarily from its potential for both low-cost scalability and enhanced availability. Ideally, a peer-to-peer system could efficiently multiplex the resources and connectivity of its workstations across all of its users while at the same time protecting its users from transient or persistent failures in a subset of its components.

However, these goals are not trivially engineered. First-generation peer-to-peer systems, such as Gnutella, scaled poorly due to the overhead of locating content within the network. Consequently, developing efficient lookup algorithms has consumed most of the recent academic work in this area. The challenge of providing high availability in such systems is less well understood and only now being studied. In particular, unlike traditional distributed systems, the individual components of a peer-to-peer system experience an order of magnitude worse availability -- individually administered workstations may be turned on and off, join and leave the system, have intermittent connectivity, and are constructed from low-cost, low-reliability components. One recent study of a popular peer-to-peer file sharing system found that the majority of peers had application-level availability rates of under 20 percent.
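The 20 percent figure above gives a feel for how much redundancy such hosts force. As a minimal sketch, assuming independent host failures and an illustrative "three nines" target (neither assumption is from the paper), the number of full replicas needed to keep a file available follows directly from 1 - (1 - p)^n:

```python
import math

def replicas_needed(host_avail: float, target: float) -> int:
    """Smallest n such that 1 - (1 - p)^n >= target, i.e. at least one
    of n full replicas is on an available host, assuming independent
    host failures (an idealization; real peer failures are correlated)."""
    return math.ceil(math.log(1 - target) / math.log(1 - host_avail))

# With 20% per-host availability, even a modest availability target
# requires dozens of full copies of each file.
print(replicas_needed(0.2, 0.999))  # → 31
```

This is exactly why replication strategy, not just lookup, becomes a first-order design question at these availability levels.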

As a result, all peer-to-peer systems must employ some form of replication to provide acceptable service to their users. In systems such as Gnutella, this replication occurs implicitly: each file a user downloads is replicated at that user's workstation. However, since these systems do not explicitly manage replication or mask failures, the availability of an object is fundamentally linked to its popularity, and users must repeatedly try different replicas until they find one on an available host. Next-generation peer-to-peer storage systems, such as the Cooperative File System (CFS), recognize the need to mask failures from the user and implement a basic replication strategy that is independent of the user workload.

While most peer-to-peer systems employ some form of data redundancy to cope with failure, these solutions are not well-matched to the underlying host failure distribution or the level of availability desired by users. Consequently, it remains unclear what availability guarantees can be made using existing systems, or conversely how to best achieve a desired level of availability using the mechanisms available.

In our work we are exploring replication strategy design trade-offs along several interdependent axes: replication granularity, replica placement, and application characteristics, each of which we address in subsequent sections. The closest analog to our work is that of Weatherspoon and Kubiatowicz, who compare the availability provided by erasure coding and whole-file replication under particular failure assumptions. The most critical differences between their work and our own revolve around the failure model. In particular, Weatherspoon and Kubiatowicz focus on disk failure as the dominant factor in data availability and consequently miss the distinction between short and long time scales that is critical to deployed peer-to-peer systems. As a result, their model is likely to overestimate true file availability in this environment.
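The erasure-coding comparison can be made concrete. As an illustrative sketch (the per-host availability of 0.5 and the specific 4-replica vs. 8-of-32 parameters are assumptions chosen to equalize storage overhead at 4x, not figures from either paper), whole-file replication needs any one of n copies up, while (k, n) erasure coding needs any k of n fragments:

```python
from math import comb

def whole_file_avail(p: float, n: int) -> float:
    """File is available if at least one of n full replicas is up."""
    return 1 - (1 - p) ** n

def erasure_avail(p: float, k: int, n: int) -> float:
    """With (k, n) erasure coding, any k of n fragments suffice to
    reconstruct the file: P[Binomial(n, p) >= k], again assuming
    independent host availability p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

p = 0.5  # illustrative per-host availability
# Same 4x storage overhead: 4 full replicas vs. 8-of-32 fragments.
print(whole_file_avail(p, 4))   # → 0.9375
print(erasure_avail(p, 8, 32))  # ≈ 0.999
```

The coding advantage shown here, however, is only as trustworthy as the failure model behind p: if short-time-scale host churn is ignored, as the text argues, both numbers overstate what a deployed system would deliver.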

Keywords: peer-to-peer
  Last Modified: Tue Oct-13-2020 22:21:51 UTC