
Why tool consolidation should be a top priority for businesses

Thu, 26th Nov 2020

Businesses today are spoilt for choice when it comes to developing and managing software environments.

An arsenal of between 20 and 40 different tools has become the norm for today's businesses. Yet whether these tools are purchased or open source, such a sprawl of disparate tools creates a host of new problems instead of delivering faster innovation, improved mean time to detection (MTTD) and mean time to resolution (MTTR).

Data silos and blind spots pop up, and there's increased toil from switching between tools. In fact, 16% of Australia and New Zealand IT decision-makers say data silos are lowering productivity and hindering their ability to collaborate.

A lack of data correlation is another issue, as is the poor scalability of point solutions. Then there's the licensing and cost friction that comes with multiple vendors.

How can businesses expect to scale for their biggest day when a single, unified view of their infrastructure doesn't exist? How can developers be expected to troubleshoot an entire software stack when data is siloed? The impact on the business is too high to ignore.

Here are three reasons why tool consolidation and rationalisation need to be a top priority for businesses:

1: Modern environments necessitate modern observability practices

Today, DevOps and site reliability engineering (SRE) teams oversee a sprawling network of complex systems and changing environments. That infrastructure is critical to business success, but the more mission-critical it becomes, the harder it is to monitor and manage.

Downtime hurts, and it becomes increasingly difficult to diagnose across a distributed architecture and large teams. According to Gartner, the average cost of downtime is $5,600 per minute, which works out to more than $330,000 for a single hour of downtime.

Businesses need to embrace observability. True observability enables businesses to view their entire environment, the full stack, in one UI and get to the root cause of an issue to understand why it occurred in the first place. By understanding why problems arise, teams can fix them faster and prevent them from recurring.
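To make this concrete, here is a minimal sketch of what instrumenting a service for observability can look like, assuming a Python application and the open-source OpenTelemetry SDK (the article does not name a specific product, and the service, function and order names below are hypothetical). The point is that the error, its stack trace and the surrounding business context all arrive in one place rather than in separate silos.

```python
# Minimal observability sketch using the OpenTelemetry Python SDK.
# Spans are exported to the console here; in production the exporter would
# point at whichever unified observability backend the business standardises on.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.trace import Status, StatusCode

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")  # hypothetical service name


def charge_payment(order_id: str) -> None:
    # Hypothetical downstream dependency that fails.
    raise RuntimeError(f"payment gateway timeout for order {order_id}")


def process_order(order_id: str) -> None:
    # One span per unit of work; attributes carry the business context.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        try:
            charge_payment(order_id)
        except Exception as exc:
            # The exception, its stack trace and the span's context are
            # recorded together, so the root cause is visible in one view.
            span.record_exception(exc)
            span.set_status(Status(StatusCode.ERROR, str(exc)))
            raise


if __name__ == "__main__":
    try:
        process_order("A-1001")
    except RuntimeError:
        pass  # the span carrying the root cause has already been exported
```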

2: Blind spots need to be eliminated to achieve greater efficiency and scale

As the sources of data inflow increase across complex cloud environments, it gets increasingly difficult to see what's going on. If an application has slowed, what's causing the issue? Is it a code error in the app or an infrastructure resource issue? Or a problem with data flow — which may mean every device needs to be checked individually?

Time is wasted switching between different tools that monitor different parts of the stack. According to Mavenlink, 73% of companies report spending more than an hour per day, on average, navigating from app to app.

This also creates data silos that increase toil and the risk of blind spots, while diverting developers' attention away from improving software. Interpreting performance metrics from multiple tools also invites human error.

3: A better developer experience translates to better customer experience

Frustration at the developer level inevitably translates to poor customer experience, as infrastructure health and performance directly impact the end user.

Many traditional monitoring tools run on-premises, which means they require additional resources and skills to be managed appropriately. As a result, problems take far longer to resolve.

A lack of detailed data means root causes can't be identified, so issues recur, placing strain on teams and ultimately leading to a poor customer experience.

The answer: tool consolidation and rationalisation

Modernising infrastructure is essential to maintain a competitive advantage with software. But a bag of metrics from a disconnected set of tools isn't suited for a modern environment.

To consolidate tools, organisations should first build a clear overview and understanding of every tool in use. Existing tools should be mapped to teams and outcomes, and an ideal end state defined, along with KPIs to measure progress against it.

Next, a comprehensive set of use cases should be built that outline possible approaches. Critical scenarios should then be piloted, and once this is done, migration and integration can begin. This involves training teams on new processes and socialising documentation and knowledge sharing.
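As a simplified illustration of that first mapping step, the sketch below builds a small tool inventory in Python (the tool names, teams, capabilities and costs are hypothetical), groups it by capability and flags any capability served by more than one tool as a consolidation candidate, together with the combined spend at stake.

```python
# Sketch of a tooling inventory mapped to teams, capabilities and cost, used to
# surface overlapping capabilities as candidates for consolidation.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Tool:
    name: str
    capability: str        # e.g. "logging", "APM", "infrastructure monitoring"
    teams: list[str]       # which teams depend on it
    annual_cost: float     # licensing spend, feeding the rationalisation KPI


inventory = [
    Tool("LogToolA", "logging", ["platform"], 40_000),
    Tool("LogToolB", "logging", ["payments", "mobile"], 55_000),
    Tool("APMToolA", "APM", ["payments"], 70_000),
    Tool("InfraToolA", "infrastructure monitoring", ["platform"], 30_000),
]

# Any capability served by more than one tool is a consolidation candidate,
# and the summed cost shows the spend that rationalisation puts in play.
by_capability = defaultdict(list)
for tool in inventory:
    by_capability[tool.capability].append(tool)

for capability, tools in by_capability.items():
    if len(tools) > 1:
        names = ", ".join(t.name for t in tools)
        combined = sum(t.annual_cost for t in tools)
        print(f"{capability}: {names} overlap (combined spend ${combined:,.0f})")
```

The same inventory can then be extended with the KPIs agreed for the ideal end state, so that progress during migration and integration is straightforward to track.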

Ultimately, everyone benefits from the reduced complexity and improved collaboration that tool rationalisation enables. It unlocks savings and improves efficiency so teams can focus on innovation and maximise customer value.
