What is network optimisation and why is it so important?
Article by Gigamon A/NZ Manager George Tsoukakis.
Network optimisation is an often-overlooked area of data security.
At its most basic, a network is a system comprising two or more computers sharing resources, data and communications, with the goal of accomplishing essential tasks more quickly and effectively.
A network is more than just the sum of its parts; it’s an essential infrastructure that facilitates everything from interoffice hardware solutions (such as sharing a wireless printer) to the very existence of the internet. It comprises hundreds of millions of smaller networks, all sharing information and resources.
Simply put, networks are a vital part of how we conduct business. As such, optimising network performance should be a significant goal for any modern business.
Network optimisation is an umbrella term that refers to a range of tools, strategies, and best practices for monitoring, managing and improving network performance.
In today’s highly competitive, dynamic business environment, it’s not enough for essential networks to perform adequately. As we move further into the digital age, the world depends more and more on data transfer that is reliable, fast, secure and available 24/7.
Unfortunately, outdated or under-dimensioned hardware or sub-optimal software can limit available bandwidth and introduce increased latency. Obsolete or underutilised network security options can impact performance and leave systems unprotected.
Sudden surges or spikes in traffic can overwhelm essential network functions and slow down response times. The list goes on, creating potentially hundreds of mounting issues capable of deteriorating the end-user experience.
The primary goal of network optimisation is to ensure the best possible network design and performance at the lowest cost structure. The network must promote increased productivity and usability and allow data to be exchanged effectively and efficiently. This is achieved by managing network latency, traffic volume, network bandwidth and traffic direction.
Network optimisation can happen only when the current state has been fully assessed. However, many parameters and components are involved in getting a clear picture of networking performance within an organisation. Here are five essential factors to consider:
Latency
Latency describes the time it takes for data to travel between two locations (such as between two computers on a network), with lower latency indicating a faster, more responsive network. This delay in data transmission may amount to only a few milliseconds at each point in the journey, but when combined can add up to a noticeable amount of network lag.
Although the absolute upper limit of data transmission speed is the speed of light, certain limiting factors, such as the inherent qualities of WAN routers or fibre optic cables, will always introduce some amount of latency.
Other causes may include increased data payloads, retransmission of duplicate packets, extensive inline security tools, proxies, switches, firewalls, and other network elements analysing and adding to network traffic, and retrieving stored data.
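The cumulative effect described above can be sketched with a few lines of Python. The per-hop delay figures here are purely illustrative, not measurements from any real network:

```python
# Hypothetical per-hop delays (milliseconds) for a five-hop path:
# NIC, access switch, WAN router, fibre backbone, destination server.
hop_delays_ms = [0.4, 1.2, 8.5, 2.1, 0.7]

one_way_ms = sum(hop_delays_ms)   # each hop adds only a little...
round_trip_ms = 2 * one_way_ms    # ...but a round trip doubles the total
```

Even when no single hop is slow, the sum (and its doubling on a round trip) is what the end user actually experiences as lag.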
Availability
Availability measures how often relevant network hardware and software function properly. The flip side is downtime, where systems are not performing to the desired specifications. Optimal availability means that no hardware or software downtime negatively impacts network performance.
Network availability can be calculated by dividing the uptime by the total time in any period, with the most obvious goal being 100% availability and 0% downtime. That said, it is not uncommon for complex systems such as networks to experience problems occasionally, so 100% availability is not a goal that any business is likely to achieve.
On the other hand, striving for this lofty standard is an essential aspect of network optimisation. Achieving ‘five nines’ (99.999%) or better for availability is paramount.
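The uptime-over-total-time calculation, and what ‘five nines’ means in practice, can be sketched as follows (the function name is an illustrative choice, not a standard API):

```python
def availability_pct(uptime: float, total_time: float) -> float:
    """Availability as a percentage: uptime divided by total time in the period."""
    return 100.0 * uptime / total_time

# 'Five nines' (99.999%) leaves a very small downtime budget.
# Over a year of 525,600 minutes, the allowance is roughly 5.26 minutes:
minutes_per_year = 365 * 24 * 60
five_nines_downtime_min = minutes_per_year * (1 - 0.99999)
```

Expressed this way, the gap between 99.9% (about 8.8 hours of downtime a year) and 99.999% becomes concrete, which is why the extra nines matter.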
Packet loss
A network packet is a small segment of data that may be transmitted from one point to another within a network. Complete messages, files, or other types of information are broken down into packets which are then individually sent and recombined to reconstruct the original file at the destination.
If a packet fails to arrive intact, the origin will need to resend only the lost packet instead of resending the entire file.
Although the occasional lost packet is seldom cause for concern, a large number of lost packets can disrupt important business functions and may be an indication of larger network-related problems. Packet loss is quantifiable by monitoring traffic at both ends of the data transmission, then comparing the number of sent packets to the number of packets received.
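The sent-versus-received comparison described above reduces to a simple ratio. A minimal sketch (the function name is illustrative):

```python
def packet_loss_pct(sent: int, received: int) -> float:
    """Packet loss as a percentage of packets sent, measured by
    counting traffic at both ends of the transmission."""
    if sent == 0:
        return 0.0
    return 100.0 * (sent - received) / sent
```

In practice the counts would come from interface statistics or monitoring probes at each end of the path; anything persistently above a fraction of a percent is usually worth investigating.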
Jitter
While latency measures the time it takes for data to reach its destination (and ultimately make a round trip), jitter describes the inconsistency in that latency across the network. When delays between data packets are inconsistent, it can affect a network’s ability to deliver real-time, and especially two-way, communication. This can create issues with video conferences, IP security cameras, VoIP phone systems, and more.
Network jitter is symptomatic of network congestion, lack of packet delivery prioritisation, outdated hardware, and overburdened network equipment. Other causes may include a poor internet connection or using lower-quality wireless networks.
Because network jitter may result in lost packets, dropped connections, network congestion, and poor user experience (especially audio, voice, and video feeds), it is an important consideration for network optimisation.
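One simple way to quantify jitter from a series of latency samples is the mean absolute difference between consecutive measurements (a simplification of the smoothed interarrival-jitter estimate defined in RFC 3550; the function name is an illustrative choice):

```python
def jitter_ms(latencies_ms: list[float]) -> float:
    """Mean absolute difference between consecutive latency samples.
    A perfectly steady network yields 0; bursty delay yields a high value."""
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)
```

A network averaging 25 ms of latency with samples of [20, 25, 21, 30] ms has modest latency but 6 ms of jitter, which is what a VoIP call or video feed would actually notice.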
Utilisation
Generally speaking, whenever a network component is more than 70% utilised, slowdowns will occur due to buffering of packets, head-of-line blocking at switch ports, and switch backplanes being overwhelmed. If the component is highly utilised for long periods, the slowdowns turn into serious delays.
The internet connection can become a bottleneck when the number of simultaneous interactions involving internet provider-based applications and services exceeds what the service allows for. Measuring utilisation provides a big-picture view of a network, showing which sections are seeing what amounts of traffic and when peak traffic is most likely to occur.
Correctly measured, utilisation can give you insight into which networks carry the largest load, where the loads are coming from, and whether utilisation is too high in certain areas.
In terms of measurement, traffic usage may be represented as a ratio between current network traffic and the peak amounts networks are designed to carry, represented as a percentage.
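The ratio described above, together with the 70% rule of thumb from earlier in this section, can be sketched as follows (function names are illustrative):

```python
def utilisation_pct(current_traffic: float, capacity: float) -> float:
    """Utilisation as a percentage: current traffic over the peak
    amount the link is designed to carry (same units, e.g. Mbps)."""
    return 100.0 * current_traffic / capacity

def is_congested(current_traffic: float, capacity: float,
                 threshold: float = 70.0) -> bool:
    """Flag a component running above the rule-of-thumb threshold
    at which buffering and head-of-line blocking begin to bite."""
    return utilisation_pct(current_traffic, capacity) > threshold
```

Applied across many links, this is enough to rank which networks carry the largest load and where utilisation is running too high.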
Managed effectively, network optimisation can help organisations build more effective and efficient internal and external networks. This carries with it many distinct advantages, including the following:
Increased network throughput
Network optimisation removes the hurdles that stand in the way of optimal data transmission speeds. This means decreased latency and jitter, faster response times, and a better-connected IT ecosystem, and, as a result, increased throughput.
Enhanced employee productivity
Latency, packet loss, and downtime in internal networks prevent employees from accessing and using vital tools and information when and how they need them most. Network optimisation keeps data flowing properly, so a workforce doesn’t have to sit on its hands waiting for the network to catch up.
Improved analytics and security
An important element of network analytics and security is traffic visibility. By keeping a close eye on what traffic is moving through the network, where it’s going, and what it’s doing, users benefit from being able to identify and respond to threats more quickly and track various crucial metrics, including those outlined above.
Armed with this information, organisations using network performance monitoring and diagnostic (NPMD), application performance monitoring (APM), and security tools can analyse captured data and turn it into valuable, actionable insights.
These tools can be further enhanced with advanced metadata, including attributes from the application layer, to solve more advanced use cases. Network analytics can likewise be employed in predictive modelling, providing accurate forecasts of future network usage.
Customer-facing networks likewise benefit from network optimisation, with faster, more available services. When customers enjoy full functionality without waiting longer than expected, they are more likely to want to continue doing business with a company.
Obviously, the overall goal of network optimisation is to optimise the network’s operation. This means better performance across the board and improved returns from any and all services and systems that rely on network performance.
Effective network optimisation
To achieve this goal, seek out a specialist vendor in network visibility solutions for the hybrid cloud: an essential partner in enabling effective network optimisation. Ensure the vendor delivers the power to put a network, and all its traffic, under a microscope.
Such technology can acquire all relevant workload traffic and, using NPMD, APM and security tools, identify which network elements and applications are consuming the most bandwidth or operating in a compromised manner. Seek a vendor that helps to ensure clear, comprehensive visibility and control across hybrid cloud deployments.
A hybrid cloud visibility and analytics fabric can provide a next-generation network packet brokering solution. This is essential to ensuring security and network performance monitoring tools can assist an organisation in making the most out of its networks and taking network optimisation further than ever thought possible.