From the perspective of a network engineer, a wireless network connection can be distilled down to a network path of variable performance attributes, likely to change at any given point in time based on unpredictable environmental conditions.
For those of us more familiar with wireless networks, indoor and outdoor, short-range and long, this assessment may seem rather bleak. But compared to more predictable wired networks, especially those confined to a single building, it is a fair judgement, with the frequency and severity of these changes depending, of course, on the characteristics and quality of each wireless deployment.
For end users, this variance in performance can be troublesome; applications that were comfortably using the full bandwidth of the connection now compete with each other for the reduced throughput. Not ideal by any means, right?
But for network operators, this can be an equally difficult problem, and not just because they are likely to receive a tech support call if the issue persists for any length of time. The routing protocols that govern which paths the network uses to send data packets to a given destination rely on accurate information about the performance of each network path.
So what, you may ask with a shrug? Well, in a wired network, the capacity of these paths typically does not change. Yes, they may become congested, but unless a connection physically resets and comes back with a different negotiated data rate – an event the network can detect – the total capacity of the link does not fluctuate the way a wireless path's can.
This becomes an issue when the network must make a path selection decision. Consider the following scenario: two network segments are interconnected by a point-to-point wireless connection intended to deliver 1 Gbps. The Ethernet ports at either end of the connection are up at this rate, and so the network assumes that full capacity is available unless told otherwise.
During operation, wireless interference causes the capacity of the wireless link to drop to 500 Mbps. The network, unaware that this has happened, continues to send 1 Gbps of data across this connection rather than via other available links. The result? Half the traffic is dropped, significantly impacting application performance.
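The arithmetic of this failure mode is worth making explicit. A minimal sketch, with all figures taken from the scenario above, of what happens when the router keeps offering traffic at the link's nominal rate:

```python
# Illustrative numbers from the scenario above: the routing protocol still
# believes the link's nominal capacity, while interference has halved it.
NOMINAL_MBPS = 1000   # what the Ethernet port (and the routing protocol) sees
ACTUAL_MBPS = 500     # what the degraded wireless link can actually carry
OFFERED_MBPS = 1000   # traffic the router continues to steer onto the link

delivered = min(OFFERED_MBPS, ACTUAL_MBPS)
dropped = OFFERED_MBPS - delivered
loss_ratio = dropped / OFFERED_MBPS

print(f"Delivered {delivered} Mbps, dropped {dropped} Mbps ({loss_ratio:.0%} loss)")
```

Half the offered traffic has nowhere to go, even though alternative links with spare capacity may exist elsewhere in the network.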
The issue is that in many networks there is no mechanism for the wireless link to inform the rest of the network – in practice, the router directly connected to each end of the link – that its capacity has changed. However, there are protocols which address this, such as MAB (Microwave Adaptive Bandwidth) and RAR (Radio Aware Routing), which create a message exchange between the wireless network nodes and their immediately connected wired devices.
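To make the idea concrete, here is a toy model of the kind of exchange these protocols enable: the radio notifies its directly connected router whenever link capacity changes, and the router recomputes the metric it advertises into the routing protocol. The message shape, class names, and metric formula here are illustrative assumptions, not taken from any of the actual protocol specifications.

```python
from dataclasses import dataclass

@dataclass
class BandwidthUpdate:
    """Hypothetical radio-to-router notification of current link capacity."""
    link_id: str
    current_mbps: int

class Router:
    # Illustrative reference bandwidth: metric = reference / reported capacity,
    # so a slower link yields a higher (less preferred) metric.
    REFERENCE_MBPS = 100_000

    def __init__(self):
        self.link_metrics: dict[str, int] = {}

    def on_bandwidth_update(self, msg: BandwidthUpdate) -> None:
        # Recompute the routing metric from the radio's reported capacity.
        self.link_metrics[msg.link_id] = max(1, self.REFERENCE_MBPS // msg.current_mbps)

router = Router()
router.on_bandwidth_update(BandwidthUpdate("ptp-1", 1000))  # healthy link
print(router.link_metrics["ptp-1"])  # 100
router.on_bandwidth_update(BandwidthUpdate("ptp-1", 500))   # interference hits
print(router.link_metrics["ptp-1"])  # 200: the path is now less preferred
```

With the metric doubled, the routing protocol can shift traffic onto alternative paths instead of blindly overdriving the degraded link.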
So voila, problem solved? Unfortunately not, owing to the lacklustre deployment of these protocols. The majority of equipment today, whether wireless or wired, does not implement them; and where some vendors do, the feature is sealed behind a license cost or needlessly locked to specific hardware platforms. The protocols themselves are simple and do not require significant processing power or specialised hardware, making them approachable to implement widely.
One other issue with these protocols is less to do with the lack of vendor support and more to do with the protocols themselves. Today they target only point-to-point (PTP) links, which, given their use as high-bandwidth network interconnections or backhaul, are the correct place to start. However, many operators deploy point-to-multipoint Fixed Wireless Access (FWA) networks; these would also benefit from a standard approach to signalling changing available bandwidth to the rest of the network, though with finer-grained reporting to suit a hub-and-spoke topology.
For FWA networks to continue to provide high levels of value to network operators, they would do well to become better integrated with the routing protocols controlling those networks. In the meantime, there are manual workarounds: either forfeit some of the network's capacity by telling the routing protocol that the link always provides its expected lowest bandwidth, or use a different mechanism that some equipment implements today – shutting off the wireless link when its capacity drops below a certain level, forcing the network to recalculate its routing paths.
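The two manual workarounds can be sketched side by side. The thresholds and figures below are illustrative assumptions, chosen only to show the trade-off each approach makes:

```python
NOMINAL_MBPS = 1000
WORST_CASE_MBPS = 400     # lowest bandwidth the operator expects under interference
SHUTDOWN_THRESHOLD = 300  # some radios can disable the link below this rate

def pinned_capacity(actual_mbps: int) -> int:
    """Workaround 1: always advertise worst-case capacity to the routing
    protocol, forfeiting any headroom above it whenever the link is healthy."""
    return min(actual_mbps, WORST_CASE_MBPS)

def link_state(actual_mbps: int) -> str:
    """Workaround 2: shut the link when capacity drops too far, forcing the
    network to reroute; above the threshold, degradation stays invisible."""
    return "up" if actual_mbps >= SHUTDOWN_THRESHOLD else "down"

print(pinned_capacity(1000))  # 400 -> 600 Mbps of real headroom goes unused
print(link_state(250))        # down -> network recalculates its paths
```

The first approach wastes capacity on good days; the second only reacts once the link is nearly unusable. Neither conveys the actual, current capacity the way a continuous reporting protocol would.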
Although link shutdown is a very useful feature and some manual tweaking can indeed improve the situation from its default, neither of these provides the same level of benefit to network operators as a Radio Aware Routing implementation can. Awareness of unused capacity on a wireless network connection can be very valuable to operators, saving money by avoiding a more expensive routed link and improving network performance and customer experience.
The severity of this issue varies widely between network operators, with only a fairly small number today feeling the need for a fix. This can be attributed primarily to the reasonable stability of a modern, professional wireless deployment; even outdoors, a network can be modelled before, during and after deployment to ensure that its expected minimum level of performance and availability is sufficient for the network operator's needs – or at least their expectations.
Ultimately, it is an area with much potential that today sits mostly untapped. For those users with a need for the functionality, equipment, though expensive, is available; and over time, it is likely that more users will seek out Radio Aware Routing functionality and the equipment supporting it, pushing the functionality into lower-priced equipment and into the mass market.
To me, this would be a welcome development as any way to increase the value of a network technology by better-integrating it with the rest of the network – especially between layers, as this is essentially integration between layers 1 and 3 of the network – typically gets my vote, owing to its potential to improve network performance and efficiency with less manual intervention.
It must be said that the protocols have existed for years, however, and have seen little adoption so far. Will this change at any point in the near future? Only time will tell, but I hope the answer is yes.