Regarding (1): if past traffic studies have taught us anything, it's that the Internet is tremendously heterogeneous. Consequently, we are desperate for invariants - things that aren't changing in a sea of change. The traffic modeling world has recently given us a candidate invariant across many different network environments: the presence of long-range dependence in traffic measurements. One issue I'm very interested in is whether self-similar traffic models might provide the measurement invariants we seek. This is not to say we don't need to continue measuring metrics such as available bandwidth, drop rates, and RTT variance. Rather, the self-similar models might provide a solid framework for interpreting these metrics and their evolution over time.
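To make the notion of long-range dependence concrete, here is a minimal sketch (my illustration, not drawn from any particular study) of the aggregated-variance estimate of the Hurst parameter H for a traffic trace; the block sizes and the Poisson sanity check are arbitrary choices, and an H well above 0.5 would suggest long-range dependence.

```python
# Minimal sketch: aggregated-variance estimate of the Hurst parameter.
# "counts" is assumed to be packet (or byte) counts per fixed time bin.
import numpy as np

def hurst_aggregated_variance(counts, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
    """For a self-similar process, Var(X^(m)) ~ m^(2H-2); the slope beta of
    the log-log variance-time plot therefore gives H = 1 + beta/2."""
    counts = np.asarray(counts, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(counts) // m
        if n_blocks < 2:
            break
        # Average the series over non-overlapping blocks of size m.
        blocks = counts[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(blocks.var()))
    beta, _ = np.polyfit(log_m, log_var, 1)  # slope of the variance-time plot
    return 1.0 + beta / 2.0                  # H near 1 => strong long-range dependence

# Sanity check: Poisson (short-range dependent) counts should give H near 0.5.
rng = np.random.default_rng(0)
print(hurst_aggregated_variance(rng.poisson(10, 100_000)))
```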
Regarding (2): my dissertation primarily concerns developing measurement techniques for these metrics (available bandwidth, etc.) and finding models to describe their behavior. The emphasis in the research is on endpoint techniques: what can you determine about a network's conditions given that you can only instrument it at its endpoints? The advantages of endpoint approaches are that they can be done without cooperation from the network, and that what you measure is precisely what is of interest to a connection - the end-to-end service it can receive from the network, rather than the service it can receive from only one piece in the chain (e.g., the local ISP).
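As one small example of what an endpoint alone can see, the sketch below (again my illustration; the target host and port are placeholders, and this is far cruder than the techniques developed in the dissertation) estimates RTT by timing the TCP three-way handshake from an ordinary socket, with failed probes standing in for loss or timeouts.

```python
# Minimal endpoint-only sketch: estimate RTT by timing TCP connection setup.
import socket
import statistics
import time

def connect_rtts(host, port=80, samples=5, timeout=3.0):
    """Time the TCP three-way handshake several times; each success is ~1 RTT."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append(time.perf_counter() - start)
        except OSError:
            pass  # a lost or refused probe is information, too
    return rtts

rtts = connect_rtts("example.com")  # placeholder target
if rtts:
    print(f"median RTT estimate: {statistics.median(rtts) * 1000:.1f} ms "
          f"({len(rtts)}/5 probes answered)")
```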
The framework for my dissertation research is a "network probe daemon" (NPD) that runs on a number of Internet sites. This daemon accepts authenticated requests to measure the network in different ways (presently, running a traceroute to a remote host, or initiating and tracing the packet stream of a TCP transfer to a remote NPD). An issue I'm very interested in exploring is whether large-scale deployment of something like NPD might provide an Internet measurement infrastructure that could then be used both to troubleshoot problems and to measure how traffic continues to evolve.
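Purely to illustrate the request/dispatch structure described above (the real NPD is not implemented this way, and the shared-secret HMAC below merely stands in for its actual authentication scheme), a daemon of this kind might look roughly like:

```python
# Rough architectural sketch of an NPD-like measurement daemon (illustrative only).
import hashlib
import hmac
import subprocess

SHARED_SECRET = b"placeholder-secret"  # assumption: a pre-arranged per-site key

MEASUREMENTS = {
    # request keyword -> command builder; only a fixed, vetted set is allowed
    "traceroute": lambda target: ["traceroute", target],
}

def authenticated(request: str, digest: str) -> bool:
    """Accept a request only if its HMAC matches the shared secret."""
    expected = hmac.new(SHARED_SECRET, request.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, digest)

def handle(request: str, digest: str) -> str:
    """Run one requested measurement and return its raw output for later analysis."""
    if not authenticated(request, digest):
        return "refused: bad authentication"
    kind, _, target = request.partition(" ")
    if kind not in MEASUREMENTS:
        return "refused: unknown measurement"
    result = subprocess.run(MEASUREMENTS[kind](target),
                            capture_output=True, text=True, timeout=120)
    return result.stdout

# A requesting site would send e.g. ("traceroute www.example.org", hmac-of-that-string)
# and collect the raw output centrally for analysis.
```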