Changing the Stack: Bottom Up or Top Down

In computer networking and telecommunications, the OSI (Open Systems Interconnection) model provides a simple way to identify and categorise technologies, allowing individual layers to change and improve without impacting others.
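
As a quick reference for the layer numbers used throughout this post, here is a minimal sketch of the seven OSI layers in Python; the numbering and names follow the standard model, and the comments are illustrative examples rather than an exhaustive list.

```python
# The seven layers of the OSI model, numbered from the bottom (1) to the top (7).
OSI_LAYERS = {
    1: "Physical",      # the transmission medium: cables, fibre, radio
    2: "Data Link",     # framing and local delivery, e.g. Ethernet, Wi-Fi
    3: "Network",       # addressing and routing, e.g. IP
    4: "Transport",     # end-to-end delivery, e.g. TCP, UDP
    5: "Session",
    6: "Presentation",
    7: "Application",   # e.g. HTTP, DNS
}

for number, name in OSI_LAYERS.items():
    print(f"Layer {number}: {name}")
```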

This is a very useful capability; without it, a small change to one aspect of a network system, such as the physical transmission medium used to move data from one point to another (layer 1), would require wholesale changes to other related systems, hardware and software, drastically increasing the impact of any change and decreasing the likelihood it would ever be implemented.

The computer networking and telecommunications industries spend a disproportionate amount of time focused on the bottom two layers of the model: physical and data link. We’re all guilty of it at one time or another: who can say they’re not excited to move to gigabit wireless networks, or to get for $100 the bandwidth that cost $10,000 yesterday?

Although this may seem odd at first, it’s not hard to see why: changes at these two layers, especially at the physical layer, have the potential to accomplish a few notable goals:

1. Deliver significant increases in connection performance, particularly for the local network

2. Prompt new hardware purchases, which are very welcome to manufacturers of network equipment

3. Keep the scope of change required small, making implementation as easy as possible

When these three factors combine, change at layers 1 and 2 becomes highly attractive. It offers the most easily understood end product (a faster connection than yesterday) with limited change required elsewhere to implement it – the perfect combination for a manufacturer’s product refresh cycle.

It’s useful to carve the remaining layers of the OSI model into two parts. Layers 3 and 4, the network and transport layers, can be thought of as the ‘upper layers’ of the ‘network’ half of the model, with the layers above them (session, presentation and application) forming their own ‘application’ half.

Of course, the distinction isn’t always this clear-cut, but for the purpose of this blog, it’ll do.

A story similar to that of layers 1 and 2 plays out in the ‘application’ half, driven by software and application developers rather than manufacturers of network hardware. Changes at these layers are often fast, as the developer typically controls both the server and client software, so as long as changes are properly synchronised between the two they can be rolled out smoothly.

So, why is change much less frequent at layers 3 and 4?

It’s a peculiar quirk of network technology that seismic changes can occur at the lower layers, such as the move from coaxial cable-based Ethernet to incredibly complex multi-user wireless networks operating at gigabit speeds, and yet, judging by deployment statistics, that change has been easier to implement than the move from IPv4 to IPv6, a layer 3 protocol change primarily concerned with increasing the number of available network addresses.
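
To put the scale of that layer 3 change in perspective, here is a quick back-of-the-envelope comparison of the two address spaces (IPv4 addresses are 32 bits wide, IPv6 addresses are 128 bits); a minimal sketch in Python, nothing more:

```python
# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
ipv4_addresses = 2 ** 32    # roughly 4.3 billion
ipv6_addresses = 2 ** 128   # roughly 3.4 x 10^38

print(f"IPv4: {ipv4_addresses:,} addresses")
print(f"IPv6: {ipv6_addresses:.3e} addresses")
print(f"The IPv6 address space is about {ipv6_addresses // ipv4_addresses:.1e} times larger")
```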

Complexity in design or technology doesn’t necessarily determine how difficult a change is to implement – or how willing anyone is to implement it.

In the time it has taken networks to move from kilobits to gigabits, the protocols in use at layers 3 and 4 have gone largely unchanged, with some dating back to the 1970s – older than most of the users enjoying the applications these protocols carry, day in and day out, across our global networks.

The key distinction is that layers 3 and 4 require changes from multiple organisations to be effectively implemented, all with different teams, budgets, equipment and priorities. Making significant change happen inside a single organisation is difficult – coordinating a technical change without immediate ‘bigger, newer, faster’ benefits across multiple organisations, by necessity on a global scale, is a different matter entirely.

So not only do these bottom two layers represent the quickest way for a network equipment manufacturer to sell new products for a tidy profit, they are also the easiest to implement in a network because they don’t depend on systems outside a single organisation’s control.

Networks today perform vastly different work from those designed in the 1970s – a tremendous evolution from simple remote terminals to the real-time HD video enjoyed by hundreds of millions of people every day. But they’re still using many of the same underlying protocols, such as TCP, to get the job done.

So, regardless of which approach is taken to change the stack – bottom up, or top down – they both grind to a halt today in the middle, at layers 3 and 4.

The far less visible functionality of layers 3 and 4 is simply much harder to productise than that of their lower-numbered neighbours: improvements have to be implemented in billions of systems and devices worldwide, from almost every manufacturer of network-connected equipment, and they are harder to sell because they don’t offer immediate speed benefits in the traditional sense.

For our networks to continue keeping pace with the demands of users and network operators as we head towards 2020 and the planned emergence of 5G, layers 3 and 4 need to be improved and optimised just as every other layer of the stack has been. Some of the protocols we need, like IPv6, already exist; others are still in formation, existing only in prototype form or in the back of a researcher’s mind.

The traditional bottom up and top down methods of changing the stack have failed to properly incentivise the development and implementation of layers 3 and 4 – it’s time for a new approach: middle first.

Middle first stack development concentrates on improving layers 3 and 4, developing and implementing new functionality and better performance to solve today’s end-to-end connectivity challenges, many of which are drastically different from those the protocols now in use were designed for in the 1970s.

Unlike bottom up stack development, this won’t be led by equipment manufacturers; it will take a global provider of networked services, like Google, Microsoft or Amazon, to take the problem by the reins and ensure middle first fulfils its promise. These companies control both client and server applications, as well as enough of their own network infrastructure to make the change first and have others follow.

For the sake of our future networks and applications, it can’t come soon enough.
