At our first workshop in December 2019, we spent a majority of the time hearing about various datasets that might be useful as a starting point to explore ways to improve the security of the Internet.
In this workshop, we are proposing a different exercise as part of the agenda: we are posing some possible scenarios for the future of the Internet, and we will ask the attendees to break up into smaller groups to discuss these scenarios. Each of these scenarios describes a possible future for the Internet. Some scenarios relate directly to trends that might change the security of the Internet, others relate to more general trends about the future shape of the Internet itself, and we want to ask how these trends might lead to a more secure future.
Of course, these scenarios might not come to pass. The groups should feel free to pose alternative futures that might arise instead of the one described here. What sort of data or analysis might allow us to predict the probability of one or another outcome? But we also want the groups to consider the question of "what if". If these outcomes do happen, what would be the implications for the security, stability and governance of the Internet?
Some of the ideas behind these future scenarios are young ideas—not yet well-enough formed to be called "good" ideas. We ask you to think positively. How might these ideas be shaped or modified to have the best chance of success?
For all these discussions, a key high-level question is whether there is data of any sort that might shed light on the scenario: the probability of it happening and the implications if it did.
- The rise of the app
- The regionalization of the Internet
- Beyond blacklist blocking
- Recursive MANRS
- Contention over encryption
- The future of the governance of the DNS
With the advent of mobile devices, more and more of the user experience is mediated not by a web browser but by an "app", which directly embodies the desired application behavior using code that runs on the end device. Apps are now being written for laptop operating systems as well as mobile systems, so even on the traditional computer the center of the user experience is shifting from the browser to app code running on the end node.
What are the implications of this trend for the security of the Internet, and for the app itself?
This trend represents a shift in power and control. The provider of the browser could shape the user experience, monitor what the user was doing, protect the user from some forms of abuse, and so on. The proposal for DNS over HTTPS (DoH) moved the control over which recursive resolver was used for DNS lookups from the OS to the browser. Now, the move from the browser to the app again shifts power and control. There is a generality in the browser that need not be duplicated in code that is specific to an app. The app designer can use service elements in the OS that support web functionality, but can pick and choose among these to tailor the design of the app. For example, the app can control how it resolves names into addresses. It can invoke the OS service, embed a DNS protocol and resolver address in its code, or (in some cases) avoid using the DNS altogether. For apps that primarily need to connect to a centralized component, the mechanism by which the address of that component is found need not depend on the DNS, but could be a custom mechanism.
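To make the resolution choices above concrete, here is a hypothetical sketch of an app-level resolver abstraction in Python. The domain name and pinned address are invented for illustration; a real app might instead embed a DoH client pointing at a resolver of the designer's choosing.

```python
import socket
from typing import Callable, Dict, Optional

Resolver = Callable[[str], Optional[str]]

def os_resolver(name: str) -> Optional[str]:
    """Delegate to the operating system's resolver (the traditional path)."""
    try:
        return socket.gethostbyname(name)
    except OSError:
        return None

def make_static_resolver(mapping: Dict[str, str]) -> Resolver:
    """Avoid the DNS altogether: the app ships with the addresses of its
    own centralized service components baked into its code."""
    return lambda name: mapping.get(name)

def make_app_resolver(pinned: Dict[str, str]) -> Resolver:
    """Try the app's pinned table first; fall back to the OS resolver."""
    static = make_static_resolver(pinned)
    def resolve(name: str) -> Optional[str]:
        return static(name) or os_resolver(name)
    return resolve

# Hypothetical app configuration: one pinned service address.
resolve = make_app_resolver({"api.example-app.invalid": "192.0.2.10"})
print(resolve("api.example-app.invalid"))  # -> 192.0.2.10, no DNS query made
```

The point of the sketch is that the choice of strategy now lives entirely in app code, invisible to the OS, the browser, and any network-level observer of DNS traffic.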
In this app context, what is the future of advertising? Will the app code on the end node resolve URLs to fetch ads, or will they come via an app component in the cloud? To the extent that certificates are required to validate the other parties in the communication, how might these be managed? Today, the list of trusted root certificates is stored in the browser or the OS. Might trust roots be provided as part of the app? What would this trend imply? How might attempts to hijack routes or impersonate remote sites manifest in this context? What sorts of applications are not amenable to being implemented as end-node code rather than running in a browser?
Extra credit: Imagine that there is a push to develop applications that are more decentralized in character (think about the original shape of email), as opposed to the highly centralized applications of today such as Facebook. What issues will arise as a part of making distributed applications more secure?
There are forces at several levels that are pushing toward a more regionalized Internet. Content is being specialized for different parts of the world. Different applications are used in different parts of the world, and some countries are explicitly pushing for the Internet experience in their country to be localized to that country.
At a topological level, more and more of the services that a user invokes today are provided by service points that are directly connected to the access network of that user. High-volume services such as Netflix and YouTube directly connect to access providers to improve delivery and reduce costs; other services utilize service points directly connected to access networks to improve resilience as well as performance. Less and less of the user traffic is crossing multiple ASs on its way to its destination. There are proposals to explicitly move in this direction to improve the resistance of the Internet to events such as DDoS attacks. For example, the Dutch government is proposing to work with its domestic ISPs to ensure that all critical services for Dutch citizens are directly connected to those domestic ISPs. They do not intend to sever the connections to the rest of the Internet, but those external connections could be rate-limited during a DDoS event to prevent Dutch services from being overwhelmed. Their view is that Holland is small enough that it will be a challenge for an attacker to build a large enough botnet inside Holland to launch an attack internally.
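The rate-limiting idea can be made concrete with a small sketch. This is not the Dutch proposal itself, just a hypothetical illustration: a token-bucket limiter applied only to traffic entering the region from outside during an attack, with in-region traffic untouched. The AS numbers and rates below are invented.

```python
import time

class TokenBucket:
    """Classic token bucket: refills at `rate` tokens/sec up to `capacity`."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

DOMESTIC_ASNS = {64500, 64501}                 # hypothetical in-region ASs
external_limit = TokenBucket(rate=1000.0, capacity=2000.0)  # illustrative pkts/sec

def admit(packet_src_asn: int, under_attack: bool) -> bool:
    """Admit all in-region traffic; throttle cross-border traffic during an attack."""
    if packet_src_asn in DOMESTIC_ASNS or not under_attack:
        return True
    return external_limit.allow()
```

The design choice worth noting is that the policy never severs external connectivity; it only bounds the rate at which outside traffic can reach services inside the region.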
Imagine that the Internet evolves in this direction. What does that imply for the core services of the Internet, and their current security vulnerabilities?
- Does this trend reduce the need to worry about BGP hijacking?
- Are there approaches to managing DNS queries that can better protect such a region?
- What about management of the CA system within a region?
- What would it mean in practice to manage DDoS attacks in this context?
- Could a region as large as the US be protected in this way?
- List important classes of apps that cannot be regionalized in this way.
Extra credit: It is not clear that the region of trust that is created by the direct connection of service points to access networks needs to have the same scope as a region of trust that manages DNS lookups, the CA system, and so on. Are there more general concepts of regional protection that could improve the security of the Internet and its users in that region? Setting aside the extreme of countries that want to quarantine their users to an experience restricted to that country, as content and applications become more localized to different regions, how can that trend be exploited to make the experience within regions more secure and trustworthy?
It is well-documented that there are certain DNS registrars and TLDs that are highly involved in the registration of abusive DNS names. What if, for those actors that exceed a certain threshold of abusive registrations over some period of time, a group of operators of recursive resolvers switch from a blacklist model to a whitelist model, where all names are blocked by default? The group could construct the whitelist independent of the domain provider, or the group could require that the provider undertake a somewhat onerous request process as a part of getting a single domain name white-listed.
- Perhaps the group could require that the registrar/TLD reveal more information about the registrant than ICANN requires in order to get the name white-listed. There is an analog here to credit card processing. Merchants must code all card transactions with the type of transaction, and if the transaction is miscoded, the credit card network may throw out the merchant (and bank). So, merchants (mostly) report this data accurately. By making the rules clear, the group can impose some discipline on reporting by the registrar/registry.
- Could domain operators seek legal relief from such a discipline?
- Perhaps the group can require that the registrar/TLD pay a fee to whitelist a name. Would this constitute criminal extortion by the group of operators?
- In the CA space, there is a somewhat similar organization, the CA/Browser Forum, which vets CAs and determines that some CAs are not trustworthy. It is voluntary, but since all the major browser providers participate, it has a lot of clout. What can be learned from the experience in that space?
Alternative approach: As an alternative to whitelisting, operators of resolvers might move to more aggressive blacklisting to discipline domains and registrars that seem to encourage abuse.
- Again, could domain operators seek legal relief from such a discipline?
- In order to focus attention on the abusive domains, would it be effective to design some sort of mechanism that would allow resolver operators to share (and perhaps make public) what domains they are blocking?
- More aggressive blocking might cause collateral harm, as innocent URLs are blocked. Can research estimate the magnitude of this potential harm, perhaps by looking at logs of queries to recursive resolvers? Could decisions to block at a regional level better mitigate harms?
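The two models above (whitelist-by-default for high-abuse registrars, blacklist-by-default otherwise) can be sketched as a single resolver-side policy. The threshold, abuse rates, and domain names below are all invented for illustration; real resolver operators would derive such data from abuse feeds and registration statistics.

```python
ABUSE_THRESHOLD = 0.10   # hypothetical: fraction of a registrar's names flagged abusive

# Invented example data standing in for measured abuse rates and curated lists.
abuse_rate = {"cheap-registrar": 0.25, "careful-registrar": 0.01}
whitelist = {"legit-shop.example"}     # names vetted despite a high-abuse registrar
blacklist = {"phish-bank.example"}     # names blocked under the ordinary model

def should_resolve(domain: str, registrar: str) -> bool:
    """Decide whether a recursive resolver answers a query for this domain."""
    if abuse_rate.get(registrar, 0.0) > ABUSE_THRESHOLD:
        # Whitelist model: for high-abuse registrars, block by default.
        return domain in whitelist
    # Blacklist model: resolve by default, block known-bad names.
    return domain not in blacklist

print(should_resolve("random-name.example", "cheap-registrar"))   # False
print(should_resolve("legit-shop.example", "cheap-registrar"))    # True
print(should_resolve("phish-bank.example", "careful-registrar"))  # False
```

Note that the collateral-harm question above maps directly onto the first case: every legitimate name under a high-abuse registrar is blocked until someone does the work of whitelisting it.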
Participation in the MANRS program includes a number of requirements on member ISPs, one of which is to check the BGP origin assertions of their customers to make sure that the AS and prefix are valid. (The method by which an ISP should do this is not specified, but an obvious approach would be for customers to register ROAs for their prefixes.) If ISPs drop BGP assertions that fail this test, simple forms of BGP hijack will be prevented. However, hijackers can launch more sophisticated attacks, which involve an invalid path assertion. The general form of this attack is that the customer provides a BGP assertion with (perhaps several) AS numbers in the path, where the first is a valid origin (AS/prefix) and the last is the valid AS of the customer. In other words, the customer is asserting that it in turn has customers, one of which is this AS, for which there is a valid AS/prefix ROA.
Here is a possible way to augment the MANRS requirements to deal with this class of attack. Require that every MANRS-compliant ISP know which of its customers is also MANRS-compliant. This information will not change rapidly, so it should not be a burden to track it.
If the customer of a MANRS-compliant ISP is also MANRS-compliant, then that ISP can assume that the customer ISP has checked its own customers, so accept the path. If the customer does not participate in MANRS, treat the BGP assertion as suspect. If the ISP receiving the assertion has another route to the origin, discard the suspect one independent of the AS path length, etc. If a MANRS-compliant customer launches an attack based on an invalid path assertion, BGP monitoring data can detect this, and the MANRS organization can revoke or suspend that ISP's membership. (As well, any MANRS-compliant ISP can take a local decision to treat path assertions from one of its customers as suspect.)
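The checks described above can be sketched in a few lines. This is a hypothetical simplification: ROAs are modeled as a prefix-to-origin map, the AS path follows the convention used in the text (claimed origin first, asserting customer last), and all AS numbers and prefixes are invented.

```python
roas = {"203.0.113.0/24": 64496}   # prefix -> origin AS authorized by a ROA
manrs_members = {64496, 64510}     # ASs known to participate in MANRS

def evaluate_announcement(prefix: str, as_path: list, have_alternate_route: bool) -> str:
    """Classify a customer's BGP announcement as 'accept', 'suspect', or 'reject'.

    as_path[0] is the claimed origin AS; as_path[-1] is the customer making
    the assertion, as in the scenario text."""
    origin, customer = as_path[0], as_path[-1]
    if roas.get(prefix) != origin:
        return "reject"            # origin (AS, prefix) pair fails the ROA check
    if customer == origin or customer in manrs_members:
        # Direct origination, or a MANRS customer assumed to have checked
        # its own customers: accept the path.
        return "accept"
    # Multi-hop path from a non-MANRS customer: suspect; discard it if any
    # alternate route to the origin exists, regardless of AS-path length.
    return "reject" if have_alternate_route else "suspect"
```

The key simplification versus a real implementation is that MANRS membership is treated as a static set; in practice this is the information the augmented requirement would oblige each ISP to track about its customers.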
- Are there obvious ways for an attacker to sidestep this scheme? Could it be effective in practice? To what degree?
- This scheme would seem to create some benefit to an ISP from joining MANRS. An ISP that has legitimate customers but is not a member of MANRS may not be able to support the multi-homing of its customers. More generally, can an enhanced form of MANRS provide an incentive for ISPs to join MANRS?
- This scheme is an alternative to a proposal in the IETF called ASPA. (ASPA is described in https://tools.ietf.org/html/draft-ietf-sidrops-aspa-verification-03. In general terms, in the ASPA scheme a customer ISP registers in a global database the list of its legitimate transit providers. If the ISPs along a path to a destination have all recorded this information, any ISP can detect if the hops in a BGP route are legitimate. See the draft for the details.) How does this scheme compare in terms of degree of effectiveness, complexity of implementation, incentive to participate, etc.?
Extra credit: Can some classes of route leaks be detected and mitigated using similar techniques? Is prevention of route leaks a separate problem?
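For comparison, the core of the ASPA idea mentioned above can also be sketched. This is a heavy simplification of the draft: it checks only the customer-to-provider "up-ramp" of an origin-first path against registered provider sets, ignoring peering relationships and the provider-to-customer down-ramp that the draft also handles. All AS numbers are invented.

```python
# customer AS -> set of transit providers it has registered (hypothetical data)
aspa = {
    64496: {64510},
    64510: {64511},
}

def upramp_valid(as_path_origin_first: list) -> str:
    """Check each adjacent (customer, provider) hop against ASPA records."""
    for customer, provider in zip(as_path_origin_first, as_path_origin_first[1:]):
        providers = aspa.get(customer)
        if providers is None:
            return "unknown"   # no ASPA record: this hop cannot be validated
        if provider not in providers:
            return "invalid"   # hop goes to an unregistered provider
    return "valid"

print(upramp_valid([64496, 64510, 64511]))  # valid
print(upramp_valid([64496, 64511]))         # invalid: 64511 is not 64496's provider
```

Even this reduced form highlights the comparison the bullet above asks for: ASPA needs a global registry consulted hop-by-hop, while the MANRS-based scheme needs only local knowledge of which direct customers participate.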
As the Internet moves to more aggressive encryption, including most recently TLS 1.3 and Encrypted SNI, it is becoming more difficult for those who claim a legitimate need to intercept and examine encrypted traffic to do so. Preventing interception of encrypted traffic is, of course, exactly the goal of these mechanisms, but the counter-pressure from these claimants must be recognized. The financial and banking industries are required by regulation to monitor the communications of some of their employees, and TLS 1.3 seems to make it impossible for banks to decrypt and monitor TLS connections. As well, some nations with a strong claim to the right of interception in support of national security (for example, Kazakhstan) are likely to take counter-measures against these advances.
How will these contrary goals play out in practice?
- Within corporations, may encryption move to the edge of the enterprise to allow internal logging? What sorts of risks will this create?
- Can we expect that application designers will be forced to remodularize their apps to better permit interception in certain circumstances?
- Will corporations add their voice to calls from law enforcement for back doors in encrypted apps?
- Can we expect new sorts of attacks on the CA system?
- Which components of the system might end up with the power to control when and if interception can happen?
Our previous scenarios described a possible future, and asked you to consider whether it was likely, and what it would imply for security (or how security could be enhanced) if it came to pass. In this scenario, we offer several possible futures, and ask you to consider the extent to which each might be likely, and what the implications of each alternative might be.
Today, ICANN has (to some extent) responsibility for the governance of the DNS, and specific responsibility for the space of TLDs. There is clear evidence that many Domain Names are created for abusive and malicious purposes, and this raises the question of what entity should or could play a role in disciplining this behavior. The continued use of the DNS for abusive purposes might lead to a number of futures. Are any of these likely? What might trigger one or another outcome?
- Some crisis or highly visible form of malicious behavior involving the DNS might trigger substantial pressure on ICANN to take a more active role in governance. How might ICANN react to such pressure?
- Some independent organization might arise that undertakes to impose discipline on the DNS. In our scenario on "Beyond Blacklist Blocking" we hypothesized that a group of recursive resolver operators might band together and start to make demands of registries/registrars as a condition of resolving names in those domains. There might be other variants of this outcome. Is this likely? What might push toward this outcome? There might be a number of approaches that such an organization might take to discipline the DNS. What might they be, and what sort of data would be useful for any such approach?
- The importance of the DNS and domain names might fade. If the user experience moves from the browser to apps, the apps may use specialized mechanisms to find app elements, rather than the DNS. Browsers might take on a much more vigorous job of protecting users from rogue URLs. Search tools might evolve in ways that place less emphasis on domain names. Today, attackers construct DNS names that are intended to fool the user. In a future world, the browser designers might take a different tack: never show a URL to the user (since users are easily fooled) and protect the user by other means. If a user never sees a name, its form does not matter. In this case, the utility of names to spammers might diminish (in particular, impersonation names), the interest in name squatting might diminish, and the importance (and revenues) of ICANN and the other actors in the ecosystem might erode. What might then happen?
Could some of these outcomes amplify others? Could an erosion in the power and finances of ICANN lead to the emergence of an independent body attempting to exercise some governance over the DNS? Can you think of other futures in this space?