Death of the Internet Predicted!

The Internet is going to die on us unless we do something fairly drastic to fix it. There, I said it up-front. It’s true. The Internet is not a solved problem. The network you’re using to read this is not the end of the road. It’s merely a stepping stone to the next Internet.

The above isn’t really too outlandish. The end of the road for the Internet has been predicted essentially since day 1. (Technically before day 1; in the ’60s, the AT&T naysayers were decrying packet switching as an interesting idea which would never work in practice. They had a vested interest in circuit-switched networks, after all… But I digress.)

The Internet architecture we run today has essentially remained constant since around 1994. So, to clarify where I’m focussing when I talk about an “Internet architecture”, I need to spend a moment on the protocol stack: conceptually, the network is built in layers. The key point is that there is a network layer which transparently connects together many smaller networks, and this network layer is essentially what pushes your packetised data from point A to point B around the globe. At the endpoints, higher layers are built on top: the transport layer deals with masking or exposing the underlying network characteristics (packet losses, packet reordering). On top of these transports are your applications: your web browsers, email clients, instant messengers, your bittorrent clients, etc. Respectively, examples of protocols at each of these layers are: IPv4 at the network layer; TCP or UDP at the transport layer; and IMAP, SSH, HTTP, etc., living in the application layer.
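To make the layering a little more concrete, here’s a minimal sketch in Python (the hostname is just a placeholder): the script speaks HTTP at the application layer, leans on the operating system’s TCP implementation for the transport, and never touches the network layer directly; IP quietly moves the packets underneath.

```python
# A minimal sketch of the layering described above: the application speaks
# HTTP, the operating system's TCP implementation handles the transport
# (retransmitting lost packets, preserving ordering), and the network layer
# gets each packet from here to the server. The hostname is a placeholder.
import socket

with socket.create_connection(("example.com", 80)) as sock:  # TCP connection
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):  # TCP hides loss and reordering from us
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"
```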

When I suggest that the end of the Internet is nigh, I’m focussing on the network layer. This is what makes finding a solution especially difficult: the protocols we run at the network layer are the glue on which everything else we think of as “the Internet” is built. Computers and routers adhering to these protocols exist everywhere in the network. Replacing the Internet is not an easy task.

Currently, we’re running IPv4 in the network layer, which provides the 2^32 addresses we’re accustomed to. Actually, it’s a little less than that; 32 bits would allow for ~4 billion addresses, but it’s closer to ~3.7 billion addresses once you take out reserved blocks, the multicast address space, private address regions, etc. Classless inter-domain routing (CIDR, pronounced “cider”) was formalised and introduced into the network by 1994/1995 to provide variable-length network prefixes (different from the three fixed-size network prefixes of the class-based system, and thus finer levels of granularity in network sizes) and, at the same time, to allow for aggregation of addresses to help keep the sizes of forwarding tables in routers under control.
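The ~3.7 billion figure is easy to sanity-check. A rough back-of-the-envelope sketch in Python, where the list of excluded blocks is illustrative rather than exhaustive:

```python
# Back-of-the-envelope arithmetic for the "~3.7 billion" figure above.
# The list of excluded blocks is illustrative rather than exhaustive.
import ipaddress

excluded = [
    "0.0.0.0/8",       # "this network"
    "10.0.0.0/8",      # private
    "127.0.0.0/8",     # loopback
    "172.16.0.0/12",   # private
    "192.168.0.0/16",  # private
    "224.0.0.0/4",     # multicast
    "240.0.0.0/4",     # reserved ("class E")
]

total = 2 ** 32
unusable = sum(ipaddress.ip_network(block).num_addresses for block in excluded)
print(f"{total:,} total, {total - unusable:,} usable-ish")  # ~3.7 billion
```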

Today, the larger of these networks are known as Autonomous Systems (ASes), and the routing protocol used to knit them together, in order to provide the global IPv4 service, is BGP-4. There are in the region of 30,000 ASes which provide the Internet service. That, ladies and gentlemen, is the Internet today. (I’m ignoring IPv6 for now, due to minimal usage.)

Problems facing the Internet today

This is a system that works. The Internet works, right? Sure, it works just now. But there are various problems on the horizon.

The commonly known problem is that of IPv4 address exhaustion. It’s been known for a while that IPv4 doesn’t offer enough address space. Sure, ~3.7 billion addresses sounds like a lot, but the space is quickly running out. We probably have 3-or-so years left before the totally unused IPv4 address space is all allocated. This is an easily understood problem which suggests we just need more address space. Well, that’s what IPv6 was meant to provide, right? Sure. It offers a crazy amount of space. It was chosen as the successor to IPv4 in 1995. We still haven’t adopted it, 13 years later.

Why? Largely because the reasons for shifting to IPv6 were not good business reasons 13 years ago. To nudge anybody into making a good decision, you need either a carrot or a stick: incite their greed response, or incite fear of what happens if they don’t act. IPv6 is not compatible with IPv4, so it was difficult to come up with a good capitalistic profit mechanism to incite greed amongst competitors, since IPv4 was the market they were playing in. And the fear portion (address exhaustion) always felt quite far away, until now. Part of the reason the exhaustion problem felt a long way off is that NAT devices artificially extended the lifetime of IPv4.

The other problem is a very different one, but it is aggravated heavily by how IPv4 addresses are allocated, and by the design of the network stack. Routers in the network maintain forwarding tables, listing the best path to choose for any given destination, or a “default” path to choose if no better information is available (the “somebody else can deal with this packet” response). In the “core” of the network, to borrow a loosely defined concept, we have a default-free zone (DFZ); routers there carry no default route, and so essentially know exactly which path to choose for any destination in the network. Forwarding table sizes in the DFZ are growing at alarming rates.
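For a sense of what a forwarding table lookup involves, here’s a toy longest-prefix-match table in Python; the prefixes and next hops are invented for illustration, and a real DFZ router would hold hundreds of thousands of prefixes with no default route at all.

```python
# A toy forwarding table doing longest-prefix match, the lookup a router
# performs per packet. Prefixes and next hops are invented for illustration.
import ipaddress

forwarding_table = {
    ipaddress.ip_network("0.0.0.0/0"): "upstream",        # default route: absent in the DFZ
    ipaddress.ip_network("198.51.100.0/24"): "peer-A",
    ipaddress.ip_network("198.51.100.128/25"): "peer-B",   # a more-specific block punched out of the /24
}

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [net for net in forwarding_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)      # longest (most specific) prefix wins
    return forwarding_table[best]

print(next_hop("198.51.100.200"))  # peer-B: the /25 beats the /24
print(next_hop("203.0.113.9"))     # upstream: only the default matches
```

Every additional small, unaggregated block is another entry in that table, and another entry held in expensive line-rate memory.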

What this means is that, eventually, the size of a DFZ forwarding table will exceed the memory available to hold it. This is a big deal: memory sizes are limited by the fact that each lookup must be completed at line-rate. A router must be capable of accepting a packet, processing it, and forwarding it, without incurring delay. If it incurs delay, then the router’s buffers fill up and packets start to get dropped. Fast memory for forwarding tables at the sizes we’ll soon see is becoming prohibitively expensive to build, and prohibitively expensive to keep cool once built. So the Internet doesn’t immediately break one day, so much as start to degrade. That degradation will affect the behaviour of pretty much everything you use today. Given the way that TCP connections back off on missing packets, your service will probably slow way down.

Why is this a problem? In part it’s because of the design of the protocol stack. Currently, transport layer protocols generally use IP addresses as identifiers. The transport layer is using the same 32 bits for a different purpose than the network layer: the network layer uses those bits as a locator. In general, people don’t really give a damn about how the network works, just that it does. They actually want an identifier they can take with them, and plug into the network no matter who their network provider is, much like a mobile phone number. This is known as Provider Independent (PI) address space; Provider Assigned (PA) address space is the nice, hierarchically assigned space envisaged for CIDR-based address allocation.

Instead, more and more networks are buying PI address space, which is not aggregatable, and so routers in the DFZ end up handling more and more small address blocks representing these networks. Another reason for the explosion in the uptake of PI address space is multihoming: connectivity is cheap, and networks choose to boost their connectivity by attaching to the Internet via two ISPs rather than paying a premium for one. Further, the NAT devices mentioned earlier allow networks to buy smaller and smaller address blocks, so fragmentation increases. Further still, there’s traffic engineering, which requires that small address blocks are punched out of their otherwise nicely aggregated space. And IPv6? Hang on, those addresses are 4 times larger, and take up more memory…
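To make the identifier/locator overload concrete: a TCP connection’s identity includes the IP addresses at both ends, so when your locator changes, your existing connections simply die. A rough sketch, using documentation-range placeholder addresses:

```python
# The same 32 bits do double duty: the network layer treats them as a locator
# ("route packets towards this point"), while TCP treats them as part of the
# connection's identity. Addresses below are documentation-range placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class TcpConnectionId:
    src_ip: str    # a locator, yet baked into the connection's identity
    src_port: int
    dst_ip: str
    dst_port: int

before_move = TcpConnectionId("192.0.2.10", 49152, "198.51.100.7", 80)
after_move  = TcpConnectionId("203.0.113.99", 49152, "198.51.100.7", 80)

# Move to a new network (new locator) and, as far as TCP is concerned,
# this is a completely different connection: the old one simply dies.
print(before_move == after_move)   # False
```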

So we have this huge problem. IPv4 addresses are running out, and they’re being used in ways the original design didn’t anticipate, since people and networks expect to be able to move around. And it’s all leading to an eventual collapse.

The Internet will die?

In theory, we can claw back IP addresses from organisations not using all of their blocks, but then we will wind up with more unaggregatable address space. Anyway, this is only a short-term solution.

It seems then that the Internet as we know it must change. This is a huge task, owing to the fact that we must design, implement, and deploy a new network layer across multiple competing service providers, and possibly all the way to the billions of end-hosts.

In an ideal world, we want a clean solution to this problem. We want a clean solution because we want to design an Internet based on the lessons we’ve learned from the current one, and attempt to ensure that the future Internet can evolve for some time to come. If the network is to survive, then we need to provide a larger address space, and we need to slow the rate of forwarding table expansion.

In general terms, the first and most obvious step in the right direction is to separate the two distinct functions of identification and location in the protocol stack; we also want to figure out how to do multihoming correctly. Neither of these is particularly easy. In theory, once the former is done, we can implement a reasonable solution to the latter.

There are two broad categories of solution for providing a locator/identifier split: network-based solutions, and host-based solutions. Network-based solutions mean that end-hosts need not be modified and continue to use IP addresses; the network transparently “jacks up” existing networks on top of another network: essentially IPv4-on-IPv4, shunting a new layer into the protocol stack. The Locator/Identifier Separation Protocol (LISP) is a strong example. Host-based solutions are possibly “cleaner”; they provide a completely new identifier namespace at the endpoints, leaving the network layer untouched. In theory, this can be achieved over time by pushing out operating system updates and modifying existing software, though this is not an easy task, and clearly some support for legacy applications must be provided. The Host Identity Protocol (HIP) is a good example of this sort of architecture. Debate over network-based versus host-based solutions is about as fierce as emacs vs. vi.

Both, however, introduce a new problem: If you separate the transport identifier from the network locator, how do you then map from my identifier to my current location in the network if you want to communicate with me? These solutions will require a scalable mechanism to provide a mapping service. This, perhaps, is an engineering problem more than a research problem.
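As a rough sketch of what such a mapping service has to do (all identifiers and locators below are invented placeholders): given a stable identifier, return the locator or locators it’s currently reachable at, much as DNS maps names to addresses today.

```python
# A toy identifier-to-locator mapping lookup. In a LISP-like, network-based
# design the result would be used to encapsulate packets towards the current
# locator; in a host-based design the host would resolve and connect itself.
# All identifiers and locators below are invented placeholders.
mapping_system = {
    "host-identity-abc123": ["198.51.100.7"],                 # singly homed
    "host-identity-def456": ["192.0.2.10", "203.0.113.99"],   # multihomed: two locators
}

def locators_for(identifier: str) -> list[str]:
    # A real system needs this to be distributed, scalable, and secure;
    # a dictionary stands in for all of that here.
    return mapping_system.get(identifier, [])

print(locators_for("host-identity-def456"))  # pick either locator to reach the host
```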

It may be that if we can provide a clean locator/identifier split, turn on IPv6 at the network layer, and enforce/encourage hierarchical locator assignment rather than the use of PI address blocks, then we solve the scalability issues. But IPv6 is a very minor modification to IPv4, and isn’t necessarily a clean solution when it comes to some other crucial pieces of a modern Internet environment: it doesn’t guarantee a good solution for addressing multihomed sites, it doesn’t provide a really tidy solution for node mobility, nor does it put enough emphasis on multicast to support the ever-growing realm of content dissemination. In theory, we can provide a clean identifier/locator split on the current architecture, and at some point in the not-too-distant future provide something entirely new at the network layer. We’re either going to end up running a new Internet, or we’re going to end up with a really screwed-up legacy Internet. Preferably the former.

The Internet is dead. Long live the Internet.

(I did a talk on this a while back; you can view my slides here.)


Posted by Stephen Strowes on Wednesday, September 10th, 2008.
