Named Data Networking (NDN): Evaluation

Since the project software implements components of an architecture, computational performance metrics for those components running on a testbed, or in some cases in simulation, offer an objective measure against which to compare architectures (see below). But at FIA PI meetings, researchers have engaged in more interdisciplinary conversations, recognizing that evaluating the relative merits of different future Internet architectures is not a purely technical exercise. At the first PI meeting, David Clark introduced a list of aspirations for a future architecture (see below), and three PI meetings were devoted to describing the architectures with just one or two of those aspirations in mind. We got through the first four topics on his list, which led into an exploration of case studies in which to evaluate each architecture's efficacy. Eight external experts joined the October 2012 PI meeting in DC, observing that the conversation about evaluating architectures would benefit from more time, structure, and diversity of represented interests. The March 2013 PI meeting will provide an opportunity to accomplish all three. The solicitation for a follow-on program to FIA is due out in February and will provide additional framing for this conversation. This page summarizes suggestions related to evaluation.

Technical metrics and evaluation methods for each project area, proposed in 2010:

Architecture Component | Key Evaluation Metrics | Evaluation Method(s)
Routing and Data Delivery | FIB size, PIT size and lifetime, routing & Interest message overhead, successful path probability and length distribution, delay in finding data, optimal number and diversity of paths | testbed measurement, simulation, theoretical analysis
Hardware | FIB update delay, packet processing delay (including lookup delay) | testbed measurement
Caching | cache size, hit ratio as a function of content type | testbed measurement, simulation
Flow Control | interface throughput, link utilization | testbed measurement, simulation
Application support | ease of creating applications (e.g. closeness of mapping between application needs and network support[1]), application-level data throughput | testbed measurement
Privacy | privacy preservation capability of TORNADO (NDN version of TOR) | testbed measurement
Data security | speed of generating and verifying signatures | testbed measurement
DDoS | percentage of requested data delivered to legitimate users | testbed measurement, simulation
Capacity and Traffic | total amount of information transported by the network in space and time (i.e. consumed entropy rate), traffic patterns compared with IP | testbed measurement, theoretical analysis
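
As an illustration of how one of the caching metrics above might be gathered in simulation, the sketch below estimates hit ratio as a function of Content Store size for an LRU cache under Zipf-distributed content popularity. It is a minimal, self-contained example; the catalog size, request count, and Zipf parameter are arbitrary assumptions, and it is not project code.

```python
# Hypothetical sketch: estimate LRU Content Store hit ratio vs. cache size
# under Zipf-distributed content popularity (illustrative, not NDN project code).
import random
from collections import OrderedDict
from itertools import accumulate

def simulate_hit_ratio(cache_size, n_items=5000, n_requests=100_000, alpha=0.8):
    """Simulate an LRU cache and return the fraction of requests served from it."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_items + 1)]  # Zipf popularity
    cum_weights = list(accumulate(weights))
    requests = random.choices(range(n_items), cum_weights=cum_weights, k=n_requests)

    cache = OrderedDict()  # content name -> True, ordered by recency
    hits = 0
    for name in requests:
        if name in cache:
            hits += 1
            cache.move_to_end(name)        # refresh recency on a hit
        else:
            cache[name] = True             # insert on a miss
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / n_requests

if __name__ == "__main__":
    for size in (50, 250, 1000, 2500):
        print(f"cache size {size:4d}: hit ratio {simulate_hit_ratio(size):.3f}")
```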

David Clark's list of criteria

Evaluation may proceed in two steps. The first is to ask what each of the designs says about these points. The second and more challenging is to ask how one would compare those answers and decide that one was "better". Our field has few tools to do that.
  1. Security
  2. Availability and resilience
  3. Economic viability
  4. Better management
  5. Meet society's needs : (but, hmm, "The Internet was drastically good at restricting choice to one QoS")
  6. Longevity : Extensible/flexible header, or fixed, foundational building block? [e.g., jtw's definition of architecture is "what is (intended to be) left after 30 years"].
  7. Support for tomorrow's computing
  8. Exploit tomorrow's networking
  9. Support tomorrow's applications
  10. Fit for purpose (it works...)

Other methods for evaluating architectures

  1. Evaluation by assessment of impact: how (and for whom) would the world be different? app designers or end users?
  2. Evaluation by comparison: either via list of requirements or structured another way, e.g., by mechanism and subsystem
  3. Evaluation by simulation (since many features only show efficiency at scale; see the sketch after this list)
  4. Theoretical evaluation: by derivation of optimal design, or by derivation on bounds and limits (if we can find any)
  5. Evaluation by "extreme use case", e.g., post-disaster operation (open question: what use cases are appropriate to demonstrate that the architectures will work in a real-world environment? examples of interest: critical infrastructure, enterprise content delivery network with embedded clouds, a civil crisis or public safety network, a complex network of things, large-scale interconnected cyber-physical systems)
  6. Evaluation by red team
  7. Evaluation by deployment and use, with the caveat that implementation issues may not reflect core design principles
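
Regarding evaluation by simulation (item 3), the toy sketch below illustrates why some effects only appear at scale: it models a single router's PIT aggregating simultaneous Interests for the same name and reports how the upstream Interest count grows far more slowly than the number of consumers. The popularity model and parameters are assumptions for illustration; a real evaluation would use a packet-level simulator over realistic topologies.

```python
# Hypothetical sketch: effect of PIT Interest aggregation at one router as the
# number of simultaneous consumers grows (illustrative, not a project simulator).
import random

def upstream_interests(n_consumers, catalog_size=1000, alpha=1.0):
    """Consumers each request one Zipf-popular name within the same round-trip window.
    The PIT forwards only the first Interest per name upstream; later Interests for a
    pending name are aggregated onto the existing PIT entry."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, catalog_size + 1)]
    requests = random.choices(range(catalog_size), weights=weights, k=n_consumers)
    pit = set()            # names with a pending upstream Interest
    forwarded = 0
    for name in requests:
        if name not in pit:
            pit.add(name)  # new PIT entry: forward the Interest upstream
            forwarded += 1
        # else: aggregate onto the existing PIT entry, nothing sent upstream
    return forwarded

if __name__ == "__main__":
    random.seed(1)
    for n in (10, 100, 1000, 10000):
        fwd = upstream_interests(n)
        print(f"{n:5d} consumers -> {fwd:5d} Interests forwarded upstream "
              f"({fwd / n:.2f} per consumer)")
```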

Suggested next step for each project

A 10-page overview of where the architecture is now: a lucid explanation of the architecture; what you have learned; what was right and what was wrong; what you have had to add or take away; what you thought was critical but turned out not to be. For NDN, for example, clarify the essential problems in NDN forwarding and how data structures and algorithms should be applied to solve them. Separate architecture from mechanism; separate architecture from systems research.
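
To make the forwarding point concrete, the sketch below outlines the standard per-Interest decision pipeline (Content Store, then PIT, then FIB longest-prefix match) with toy data structures. It is a simplified illustration, not the project's forwarder implementation; the helper names and faces are invented for this example.

```python
# Simplified sketch of the NDN Interest forwarding pipeline (CS -> PIT -> FIB).
# Data structures, face names, and helpers are illustrative, not the NDN forwarder's API.

content_store = {}   # name -> Data packet (exact-match cache in this sketch)
pit = {}             # name -> set of downstream faces awaiting the Data
fib = {              # name prefix -> list of upstream faces
    "/ndn": ["face-upstream-1"],
    "/ndn/edu": ["face-upstream-2", "face-upstream-3"],
}

def longest_prefix_match(name):
    """Return FIB next hops for the longest registered prefix of `name`."""
    components = name.split("/")
    for i in range(len(components), 1, -1):
        prefix = "/".join(components[:i])
        if prefix in fib:
            return fib[prefix]
    return []

def on_interest(name, in_face, send):
    """Process one incoming Interest; `send(face, packet)` transmits a packet."""
    if name in content_store:                  # 1. Content Store hit: answer locally
        send(in_face, content_store[name])
        return
    if name in pit:                            # 2. Aggregate on an existing PIT entry
        pit[name].add(in_face)
        return
    next_hops = longest_prefix_match(name)     # 3. FIB lookup
    if not next_hops:
        return                                 # no route: drop (or NACK)
    pit[name] = {in_face}                      # create the PIT entry
    send(next_hops[0], f"Interest({name})")    # forward per strategy (first hop here)

def on_data(name, data, send):
    """Process returning Data: satisfy and remove the PIT entry, then cache the Data."""
    for face in pit.pop(name, set()):
        send(face, data)
    content_store[name] = data                 # opportunistic caching

if __name__ == "__main__":
    def log(face, pkt):
        print(f"  -> {face}: {pkt}")
    on_interest("/ndn/edu/ucla/video/seg1", "consumer-A", log)
    on_interest("/ndn/edu/ucla/video/seg1", "consumer-B", log)  # aggregated in PIT
    on_data("/ndn/edu/ucla/video/seg1", "Data(seg1)", log)      # satisfies both consumers
```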
