Over the past year and a half, I have participated in the NSFNET backbone transition and its aftermath from two perspectives. At NorthWestNet, I transitioned its backbone connection from NSFNET to internetMCI while working closely with the MCI technical staff as part of the CoREN/MCI Joint Technical Committee. In my current role at the University of Washington, I assess the state of the Internet backbone and anticipate the UW's future requirements.
In some respects, the backbone transition has been very successful. One backbone provider is carrying at least six times the peak traffic load carried by the NSFNET in late 1994 (P. Gross, Dallas IETF open plenary presentation, 12/95). In general, the degree of provider interconnectivity has increased due to the emergence of new, high-capacity exchange points and the absence of the NSFNET AUP in the new architecture.
However, the dramatic growth of traffic over this period has created a number of problems within the still DS3-based national architecture. Within the Internet constituency at the University of Washington, the largely intuitive end-user perception is that Internet performance has worsened since the completion of the transition. A number of scientific faculty members, long-time users of the Internet for data transport, increasingly report problems connecting to a variety of American sites.
In addition, the traditional role played by the academic sector in terms of Internet backbone development and bandwidth consumption has changed significantly over the last year. Based on anecdotal data, I estimate that many commercial organizations' traffic is exhibiting doubling times of 3 months or shorter while the academic sector continues to grow at rates consistent with or somewhat slower than the 11-month doubling time that characterized the NSFNET over recent years.
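To make the difference concrete, the short sketch below (illustrative only; it uses nothing beyond the estimated doubling times cited above) converts a doubling time into an annualized growth factor:

    # traffic(t) = traffic(0) * 2 ** (t / doubling_time)
    def annual_growth_factor(doubling_time_months):
        return 2 ** (12.0 / doubling_time_months)

    print(annual_growth_factor(3))   # 16.0  -- a 3-month doubling time
    print(annual_growth_factor(11))  # ~2.13 -- the historical NSFNET rate

A 3-month doubling time thus implies roughly sixteenfold growth per year, versus roughly twofold at the historical academic rate.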
While NWNet and the UW have private access to a limited set of proprietary MCI backbone statistics that we ourselves collect at 1-minute intervals, we have generally lost the ability to make any coherent data-based statements about overall American backbone performance and predictions for future growth.
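As a rough sketch of the kind of collection involved (not MCI's actual mechanism; poll_octet_counter is a hypothetical stand-in for whatever SNMP or proprietary query retrieves an interface's octet counter), per-link utilization can be derived from successive counter samples taken at the 1-minute interval noted above:

    import time

    POLL_INTERVAL = 60  # seconds; the 1-minute sampling interval noted above

    def poll_octet_counter(interface):
        # Hypothetical stand-in for an SNMP ifInOctets (or similar) query.
        raise NotImplementedError("replace with the provider's actual query")

    def sample_utilization(interface, link_capacity_bps):
        # Average link utilization over one polling interval.
        # (Counter wraparound is ignored here for brevity.)
        first = poll_octet_counter(interface)
        time.sleep(POLL_INTERVAL)
        second = poll_octet_counter(interface)
        bits = (second - first) * 8  # octets -> bits
        return (bits / POLL_INTERVAL) / link_capacity_bps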
What is most concerning about this 'black box' Internet is that we now lack the quantitative basis to assess several key technology questions and a key policy issue facing us over the next 1-2 years.
The technical questions are:
On the policy side, the higher education community must determine whether the commercial Internet will meet the bandwidth and on-demand, high-performance needs of our institutions and their affiliates. These requirements may increase dramatically in the next few years with the emergence and wide dissemination of network-based multimedia applications for distance learning and collaborative research.
One outcome of this workshop might be the emergence of a technically oriented framework for sharing aggregate Internet performance data among the various ISPs, key customers, networking researchers, and protocol and application developers.
Establishing an independent Internet statistics consortium to discuss these issues and to exchange data in a largely non-disclosure environment could accomplish this goal. Ground rules would be established to prevent the use of provider-specific data for blatant settlement or marketing purposes. In addition, this group could disseminate a set of provider-independent aggregate data -- e.g., total octets, average packet size, bandwidth consumption by application -- publicly on a regular basis.
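As a sketch of how such provider-independent aggregates might be computed (the per-flow record format here is hypothetical; any provider's accounting data carrying octet, packet, and application fields would serve), consider:

    from collections import defaultdict

    def aggregate(flows):
        # Compute the example aggregates from hypothetical per-flow
        # accounting records of the form (octets, packets, application),
        # where the application might be inferred from a well-known port.
        total_octets = 0
        total_packets = 0
        octets_by_app = defaultdict(int)
        for octets, packets, app in flows:
            total_octets += octets
            total_packets += packets
            octets_by_app[app] += octets
        avg_packet_size = total_octets / total_packets if total_packets else 0
        return total_octets, avg_packet_size, dict(octets_by_app)

    # Example with fabricated records: (octets, packets, application)
    flows = [(1500000, 1000, "http"), (640000, 800, "ftp"), (90000, 900, "dns")]
    print(aggregate(flows))

Because these aggregates carry no link- or customer-level detail, they could be published without exposing any single provider's proprietary engineering data.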