Networks are systems of devices talking to other devices, and much like in human communication, they need to use the same language to achieve anything meaningful. Try having a conversation with someone who assigns the opposite meaning to half the words in the English language, and you’ll quickly get an idea of what it’s like when two incompatible network devices meet. The same applies elsewhere, such as PC software talking to hardware. Compatibility is the key.
Let’s be honest, though – compatibility isn’t very interesting. Few consumers, engineers and marketers will get very excited about the idea of compatibility by itself; if anything, it’s taken for granted that when new devices are released they’re completely compatible with the many thousands of existing devices at a protocol and language level.
Sometimes it’s worth taking a step back and looking at the amount of time, money and complexity built into almost every product we use for the sake of compatibility. For example:
1. Every new flagship Wi-Fi router with several radios, the fastest multi-core processor and more antennas than you can shake a stick at still needs to support Wi-Fi standards going back almost 20 years to 802.11b, with headline speeds of just 11 Mbps.
2. Intel and AMD processors carry a decades-long legacy of complexity from the original x86 instruction set, as well as all the iterations and additions that have enhanced it since its original development.
3. The operating systems we use today, notably Windows, still support ancient concepts, files and code that allow programs written decades ago to function as intended, even on a computer assembled 30 years after the program was written.
It’s easy to look at any one of these examples and think – are any of these worth the effort to support today? Wouldn’t it be easier to simply remove them, focus on the modern set of functionality that gets 99% of the job done, and move on from the past?
At times, this is a hard point to argue against. Take one area of technological development: computer peripheral interfaces. In a fairly short span of time, the industry has evolved from serial and parallel ports, through early iterations of USB and FireWire and other lesser-known types, to settling on USB-C for everything from peripheral connectivity to video output and power.
Apple in particular has never been shy about retiring peripheral interfaces when they feel they are getting in the way of the overall user experience and product design, even when they know it will create backlash. See the removal of FireWire from Macs, or more recently the loss of the headphone jack from the iPhone 7.
If we can do it with peripheral interfaces, why can’t we get rid of the legacy complexity from other areas of our systems? Here’s the difference: peripherals tend to have a shorter life than the systems and software they are supporting. Although it might seem odd, experience shows that people will happily pay for a new external hard disk, for example, that supports the new interface – but at the same time would rather not replace that essential old server humming away in the corner of the office, or that piece of in-house software that makes the numbers look right in all the weekly reports.
Perhaps what this really exposes is that, as users, we often do not want to – or lack the capability to – properly understand all the complexity of our systems. Even what appears to be a simple modern computing system is fantastically complicated, with many thousands of dependencies that are very hard to predict, account for and fix if we alter them in some way.
For instance, Intel can’t remove the legacy baggage from the x86 instruction set because a still-significant number of users need to run old code that relies on tricks and quirks inherent in that legacy compatibility, and it’s simply not worth it to Intel to break them.
The same goes for every shiny new Wi-Fi router: why not restrict it to operating in 802.11ac mode only, and forget everything that came before? Pretty much everything is using the latest and greatest Wi-Fi protocol now, right? Perhaps – but you might be surprised. All it takes is one device, say a security camera or alarm that can’t easily be updated (or can’t be updated at all without installing a complete new system), and you have yourself a compatibility showstopper.
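The fallback behaviour described above can be sketched in a few lines. This is a toy illustration, not how any real Wi-Fi stack is implemented: the function name, the mode lists and the preference ordering are all assumptions made up for the example. The point it shows is that a link only forms on the overlap of what both sides speak, so a single legacy device either drags negotiation back to an old standard or, on a modern-only router, gets no link at all.

```python
# Toy sketch of capability negotiation (hypothetical, not a real Wi-Fi API).
# Each side advertises the standards it supports; the link runs at the
# newest standard common to both, or fails if there is no overlap.

def negotiate(router_modes, client_modes):
    """Return the newest standard both sides support, or None."""
    preference = ["802.11b", "802.11g", "802.11n", "802.11ac"]  # oldest → newest
    common = set(router_modes) & set(client_modes)
    for mode in reversed(preference):
        if mode in common:
            return mode
    return None

router = ["802.11b", "802.11g", "802.11n", "802.11ac"]
laptop = ["802.11n", "802.11ac"]
old_camera = ["802.11b"]  # the can't-be-updated security camera

print(negotiate(router, laptop))            # modern client gets 802.11ac
print(negotiate(router, old_camera))        # legacy camera still connects at 802.11b
print(negotiate(["802.11ac"], old_camera))  # ac-only router: None, no link
```

Drop the legacy modes from the router's list and the camera silently falls off the network – which is exactly the showstopper the paragraph above describes.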
So, while there are some areas where the industry is able to make faster, more progressive change, such as peripheral interfaces (just watch the adoption of USB-C between now and the end of 2018 for an example of how fast these changes can move), in many areas it will seemingly be stuck supporting decades of legacy complexity forever.
Is this necessarily bad, however? No, not at all. It’s easy to think the purpose of our computing equipment is to be the fastest, cheapest and most streamlined it can be – and those are all very important goals; had they not been chased, the industry would certainly be worse off today. But the benefit of a modern computer supporting all of this legacy baggage is that it can do just about anything you want it to. Your PC from 2017 can connect to a network from 1999 if it needs to, and it can run software from back then, or even earlier, too.
It might not be the most exciting thing in the world, but compatibility in many ways makes the world turn. Without it, we would have to keep more and more older machines still active, still consuming power and resources to operate – and wouldn’t that itself be a waste when they could be replaced with a modern system that can do so much more on top of those original legacy tasks?
Ultimately, compatibility is king, and something that should be celebrated in our systems and devices more than it is today. Yes, we should be looking forward to the future – but there is plenty we can still learn from the past, as anyone who’s tried to rewrite some decades-old proprietary software after the only machine that could run it died will tell you.