Artificial Intelligence and 5G Are Better at the Edge
Wed, 24th May 2023

Our world is filled with exciting technologies that promise to unlock new possibilities for enterprises. In some cases, the confluence of two emerging technologies amplifies the benefits of both, and AI and 5G are perfect examples of such complementary technologies. 

Each has tremendous potential, but they’re even better when used together.

Why location matters for AI

The AI workflow involves ingesting large amounts of data from multiple sources to train models and then using those models to produce automated, data-driven outcomes. Increasingly, as more data is generated at the edge, the different steps (workloads) in the AI workflow are performed in different locations based on performance, privacy, flexibility and cost requirements, an approach known as distributed AI. Many businesses now work with distributed AI orchestrators to help them move their training and inference workloads to the appropriate locations.

Model inference and model training have very different requirements. Model training is more resource-intensive, so it typically runs in a large data centre or on the public cloud. In contrast, model inference is more latency-sensitive and typically runs at the digital edge, closer to the data sources.
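To make that placement idea concrete, here is a minimal sketch of the kind of rule a distributed AI orchestrator might apply: training goes to a large data centre or public cloud, while latency-sensitive or regulated inference stays at the metro edge. The location names, thresholds and workload fields are illustrative assumptions, not any particular product’s API.

```python
from dataclasses import dataclass

# Illustrative placement rule: training is resource-intensive, inference is
# latency-sensitive, so they land in different location classes.
# Location names and thresholds are assumptions for illustration only.

@dataclass
class Workload:
    name: str
    kind: str               # "training" or "inference"
    max_latency_ms: float   # end-to-end latency the use case can tolerate
    data_is_regulated: bool

def place(workload: Workload) -> str:
    """Return an illustrative placement for the workload."""
    if workload.kind == "training":
        # Training favours scale and cost over latency.
        return "public_cloud_or_core_datacentre"
    if workload.data_is_regulated or workload.max_latency_ms < 20:
        # Latency-sensitive or regulated inference stays at the metro edge.
        return "metro_edge_datacentre"
    return "regional_datacentre"

if __name__ == "__main__":
    jobs = [
        Workload("fraud-model-training", "training", 1000, False),
        Workload("video-analytics-inference", "inference", 15, False),
        Workload("chat-inference", "inference", 150, False),
    ]
    for job in jobs:
        print(job.name, "->", place(job))
```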

Why location matters for 5G

The promise of 5G is all about enabling enterprise-class wireless services, and being successful with it depends on having the right infrastructure in the right places. Among the most important components of the 5G network infrastructure is the user plane function (UPF), which is responsible for decapsulating 5G user traffic so that it can move off the wireless network and onto outside networks such as the internet or cloud ecosystems.
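As a rough illustration of that decapsulation role, the sketch below strips the mandatory 8-byte GTP-U header from a user-plane packet to recover the inner IP packet that gets forwarded to the outside network. It is a simplification that ignores extension headers, QoS flow handling and any real forwarding logic.

```python
import struct

# Minimal sketch of the decapsulation step a UPF performs on 5G user-plane
# traffic: user packets arrive wrapped in GTP-U tunnels, and the UPF strips the
# GTP-U header before forwarding the inner IP packet towards the data network.

GTP_U_HEADER_LEN = 8          # mandatory header only, no optional fields
GTP_MSG_TYPE_GPDU = 0xFF      # G-PDU: carries a user data packet

def decapsulate_gtpu(frame: bytes) -> bytes:
    """Return the inner IP packet carried by a minimal GTP-U G-PDU."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", frame[:GTP_U_HEADER_LEN])
    if msg_type != GTP_MSG_TYPE_GPDU:
        raise ValueError("not a user-data (G-PDU) message")
    # 'length' counts the bytes that follow the mandatory 8-byte header.
    return frame[GTP_U_HEADER_LEN:GTP_U_HEADER_LEN + length]

if __name__ == "__main__":
    inner_ip_packet = b"\x45\x00\x00\x14" + b"\x00" * 16   # dummy 20-byte IPv4 header
    header = struct.pack("!BBHI", 0x30, GTP_MSG_TYPE_GPDU, len(inner_ip_packet), 0x1234)
    print(decapsulate_gtpu(header + inner_ip_packet) == inner_ip_packet)  # True
```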

Since the applications that 5G users want to access live on those outside networks, it’s essential to have reliable, low-latency connectivity where the UPF resides. For this reason, moving UPFs from their core networks to the digital edge is one of the most important steps that telco operators can take to unlock the full value of their 5G infrastructure.

5G helps AI move away from device and on-premises infrastructure

Many AI use cases have strict performance requirements; one way to meet these requirements is to perform inferencing on the device itself or on an on-premises server located very close to the device. These kinds of servers are often found in the closets of stadiums, retail stores, airports, and anywhere else AI data needs to be processed quickly. This approach has its limitations: doing complex AI inference processing on the device can drain the battery quickly, and the AI hardware on the device is often not powerful enough to do the required processing.

Further, many AI use cases require data aggregated from multiple sources, and all too often there won’t be enough memory or storage on the device to host the diverse data sets. Similarly, doing AI inferencing in the on-premises closet raises issues around physical security, physical space limitations, the inability to provide the required power, and higher OPEX to maintain the hardware. Since 5G networks provide high-bandwidth connectivity, it’s now possible to host AI inferencing infrastructure, and cache the required data sets, at the 5G infrastructure located close to where the data is generated. Thus, AI inferencing tasks can be moved from device and on-premises locations to the 5G multiaccess edge compute (MEC) location at the nearby network service provider’s (NSP’s) 5G infrastructure in the same metro.
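That offload pattern can be pictured as a simple decision: run inference locally when the model fits and the battery allows, otherwise send the request over the 5G network to an inference service hosted at the MEC. The endpoint URL, thresholds and payload shape in the sketch below are hypothetical, not a real service.

```python
import json
import urllib.request

# Illustrative-only offload decision. The limits and the MEC endpoint are
# assumptions for the sketch, not measured values or a real API.
DEVICE_MAX_MODEL_MB = 50       # assumed on-device model size limit
DEVICE_MIN_BATTERY_PCT = 30    # avoid draining a low battery with heavy compute
MEC_ENDPOINT = "http://mec.example.local/v1/infer"   # hypothetical MEC service

def run_on_device(payload: dict) -> dict:
    # Placeholder for a local, lightweight model.
    return {"result": "local-inference", "input_size": len(json.dumps(payload))}

def run_on_mec(payload: dict) -> dict:
    # Offload over the 5G network to the nearby MEC-hosted inference service.
    req = urllib.request.Request(
        MEC_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=0.05) as resp:  # tight latency budget
        return json.load(resp)

def infer(payload: dict, model_size_mb: float, battery_pct: int) -> dict:
    if model_size_mb <= DEVICE_MAX_MODEL_MB and battery_pct >= DEVICE_MIN_BATTERY_PCT:
        return run_on_device(payload)
    return run_on_mec(payload)

if __name__ == "__main__":
    # Small model, healthy battery: stays on the device in this sketch.
    print(infer({"frame_id": 42}, model_size_mb=10, battery_pct=80))
```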

Being co-located with the 5G network helps satisfy the latency and bandwidth requirements of the application while also allowing enterprises to move their AI infrastructure away from the device or on-premises closet. Depending on the carrier’s 5G deployment architecture and the application’s latency requirements, the 5G MEC infrastructure could be located in a micro data centre (such as at a cell tower site), a cloud 5G Zone (such as AWS Wavelength) or a macro data centre, such as an Equinix IBX.
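One rough way to picture the tier choice is a lookup keyed on the application’s latency budget. The millisecond boundaries below are illustrative assumptions, not carrier specifications.

```python
# Rough sketch of choosing a 5G MEC deployment tier from an application's
# latency budget, following the three tiers mentioned above.

def choose_mec_tier(latency_budget_ms: float) -> str:
    if latency_budget_ms < 5:
        return "micro data centre at/near the cell site"
    if latency_budget_ms < 20:
        return "cloud 5G Zone (e.g. AWS Wavelength)"
    return "macro edge data centre in the same metro (e.g. an Equinix IBX)"

for budget in (2, 10, 40):
    print(f"{budget} ms budget -> {choose_mec_tier(budget)}")
```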

AI enables better slicing and maintenance for 5G networks

One of the most powerful aspects of 5G is that it allows NSPs to perform network slicing, essentially offering different classes of network service for different classes of users and applications. Today’s NSPs can apply predictive analytics backed by AI models to enable smarter network slicing. To do this, they can collect metadata on different applications, including how those applications perform under specific network conditions. When the 5G infrastructure and AI models are both located at the edge, it’s easy to get predictive insights about the quality of service different applications might need and classify them into different network slices accordingly.
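In practice the prediction would come from trained models; the sketch below substitutes a trivial rule to show the mapping from observed application behaviour to a slice class. The profile fields and thresholds are assumptions, while the slice names follow the standard eMBB/URLLC/mMTC service categories.

```python
from dataclasses import dataclass

# Hedged sketch of slice assignment: per-application measurements drive the
# choice of network slice. A real deployment would use trained AI models;
# a rule-based "predictor" stands in here.

@dataclass
class AppProfile:
    name: str
    p95_latency_ms: float        # latency the app needs to stay usable
    peak_throughput_mbps: float

def assign_slice(app: AppProfile) -> str:
    if app.p95_latency_ms <= 10:
        return "URLLC"   # ultra-reliable low-latency slice
    if app.peak_throughput_mbps >= 100:
        return "eMBB"    # enhanced mobile broadband slice
    return "mMTC"        # massive machine-type / best-effort slice

apps = [
    AppProfile("factory-robot-control", 5, 20),
    AppProfile("4k-video-streaming", 80, 250),
    AppProfile("sensor-telemetry", 500, 1),
]
for app in apps:
    print(app.name, "->", assign_slice(app))
```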

Also, NSPs can pull log and utilisation data for the network and use it to train AI models that support proactive maintenance and management. These models can help detect conditions that indicate a possible service outage or surge in user traffic. The network can then automatically react to prevent the outage or provision additional capacity. Again, having both 5G and AI infrastructure at the digital edge is key to making the most of this capability.
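A minimal version of that loop might score the latest utilisation sample against a recent baseline and trigger provisioning when it spikes. The threshold and the provisioning step in the sketch below stand in for a trained model and real automation.

```python
from statistics import mean, stdev

# Simple illustration of the proactive-maintenance idea: flag utilisation
# readings that sit well above the recent baseline and react before a surge
# becomes an outage. Figures and threshold are illustrative assumptions.

def needs_more_capacity(history: list[float], latest: float, sigmas: float = 3.0) -> bool:
    """Flag the latest utilisation sample if it sits well above the baseline."""
    baseline, spread = mean(history), stdev(history)
    return latest > baseline + sigmas * spread

utilisation_history = [52, 55, 50, 53, 54, 51, 56, 52]   # percent, illustrative
latest_sample = 78

if needs_more_capacity(utilisation_history, latest_sample):
    print("Predicted surge: provisioning additional capacity at the edge")
else:
    print("Utilisation within normal range")
```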

Deploy at the edge

With data centres in more than 70 metros across six continents, Platform Equinix makes it easy for organisations to deploy AI inferencing and 5G infrastructure in all the edge locations that provide the best results for their 5G and AI workloads. Additionally, digital as-a-service infrastructure such as bare metal, fabric and network edge can help simplify and accelerate rollout. 

To help maximise the value of 5G and AI deployments, we have a global partner ecosystem of more than 2,100 NSPs. We are intimately familiar with how to help them modernise their networks for the 5G era and know how to pass the power of those networks on to our customers. Finally, to enable better results for distributed AI, we offer cloud-adjacent data centres that provide proximity to all major cloud hyperscalers. Customers can do their AI training in the cloud and subsequently move their models to our data centre in the same metro, moving data between the clouds and the edge locations using a private, secure Equinix Fabric connection.
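As a hypothetical sketch of that last step, a model trained and exported in the cloud could be copied across the private connection to an inference host in the edge data centre. The paths and hostname below are made up for illustration, and the transfer is shown with a plain rsync rather than any specific product workflow.

```python
import subprocess

# Hypothetical promotion step: move a trained model artifact from the cloud
# export location to an edge inference host over a private interconnection.

def promote_model_to_edge(artifact: str, edge_target: str) -> None:
    """Copy a trained model artifact to an edge inference host."""
    subprocess.run(["rsync", "-av", artifact, edge_target], check=True)

# Example (hypothetical paths and host):
# promote_model_to_edge("exports/model-v3.onnx",
#                       "inference-host.edge-metro.internal:/models/")
```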