How to Prevent Latency from Crashing the Enterprise

Article by Karan Kirpalani

Enterprise networks are facing a perfect storm as the data-driven economy embraces ever-increasing levels of application and user activity. With sophisticated processing power in the palm of virtually every knowledge worker’s hand, the amount of data bombarding the enterprise has climbed to unprecedented levels.

And this situation is only expected to get worse. According to market research firm Statista, worldwide data volumes are climbing at a compound annual rate of about 30 percent, putting them on track to top 96 exabytes per month by 2016.
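To make the arithmetic behind that kind of projection concrete, here is a minimal Python sketch of how a roughly 30 percent compound annual growth rate snowballs over a few years. The baseline year and starting volume are hypothetical placeholders chosen for illustration, not figures drawn from the Statista forecast.

```python
# Rough illustration of how a ~30 percent compound annual growth rate
# plays out over a few years. The baseline year and volume are
# placeholders for illustration, not figures from the Statista forecast.

BASELINE_YEAR = 2012
BASELINE_EB_PER_MONTH = 34.0   # hypothetical starting volume, in exabytes per month
ANNUAL_GROWTH = 0.30           # ~30 percent compound annual growth

def projected_volume(year):
    """Project monthly data volume for a given year at a fixed growth rate."""
    years_out = year - BASELINE_YEAR
    return BASELINE_EB_PER_MONTH * (1 + ANNUAL_GROWTH) ** years_out

for year in range(BASELINE_YEAR, 2017):
    print(f"{year}: ~{projected_volume(year):.0f} EB/month")
```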

While finding the resources to process and store all this data is a challenge in itself, the impact on networking capability is what keeps most IT managers up at night. The overriding question: how do you keep massive volumes of data from driving network latency to unsustainable levels?

At its heart, this is a question of scale. There is no doubt that network resources, like any other information resource, can be ramped up to meet demand; the real challenge is to do so without breaking the IT budget.

This is the primary reason so many organizations are turning toward hosted infrastructure management services. Rather than building an advanced network from the ground up, the enterprise gets instant access to a high-performance network architecture that offers broad scalability along with top-level uptime and availability.

Hosted network services come in many flavors, from local area networking to internet leased lines that provide point-to-point connectivity between hosted and non-hosted entities. There are also numerous customer-premises options for both wireline and wireless connectivity.

When evaluating hosted network services, enterprise executives should note whether the prospective provider offers state-of-the-art capabilities such as continuous monitoring, packet-loss prevention and low-latency operations. The provider should also be able to demonstrate adequate bandwidth protection through a redundant network architecture and rapid failover to alternate data paths should a primary connection go down. This is particularly crucial for enterprises that handle international delivery of mission-critical data or applications.
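To make criteria such as latency and packet loss concrete during an evaluation, an IT team might run simple spot-checks from the customer premises alongside the provider’s own monitoring and SLA reports. The Python sketch below uses plain TCP connect round-trips as a crude probe; the target host, port and probe count are hypothetical placeholders, and this is only a rough sanity check, not a substitute for the provider’s continuous monitoring.

```python
# Crude latency / probe-loss spot-check using TCP connect round-trips.
# Target host, port, and probe count are hypothetical placeholders.
import socket
import statistics
import time
from typing import Optional

TARGET_HOST = "example.com"   # placeholder: an endpoint reachable via the provider
TARGET_PORT = 443             # placeholder: any TCP service on that endpoint
PROBES = 20
TIMEOUT_S = 2.0

def probe_once() -> Optional[float]:
    """Return one TCP connect time in milliseconds, or None if the probe fails."""
    start = time.perf_counter()
    try:
        with socket.create_connection((TARGET_HOST, TARGET_PORT), timeout=TIMEOUT_S):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None  # count timeouts and refusals as lost probes

samples = [probe_once() for _ in range(PROBES)]
successes = sorted(s for s in samples if s is not None)
loss_pct = 100.0 * (PROBES - len(successes)) / PROBES

if successes:
    print(f"median connect latency: {statistics.median(successes):.1f} ms")
    print(f"worst observed latency: {successes[-1]:.1f} ms")
print(f"probe loss: {loss_pct:.0f}% over {PROBES} probes")
```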

In most cases, the provider should also offer a range of service plans, often built around rate-based or volume-based pricing structures. These plans offer different cost/performance trade-offs depending on your data and application requirements. Look as well for peering arrangements and other features designed to maintain maximum availability and reliability.
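As a back-of-the-envelope way to weigh rate-based against volume-based pricing, the sketch below works out a monthly cost for each under an assumed traffic profile. Every price and traffic figure here is a made-up placeholder, not a number from any actual provider’s plan.

```python
# Back-of-the-envelope comparison of rate-based vs. volume-based pricing.
# All prices and traffic figures are hypothetical placeholders.

# Assumed monthly traffic profile
monthly_transfer_gb = 50_000          # total data transferred per month
committed_rate_mbps = 200             # committed/billed rate for the rate-based plan

# Hypothetical price points
price_per_mbps_per_month = 8.00       # rate-based: dollars per committed Mbps
price_per_gb = 0.03                   # volume-based: dollars per GB transferred

rate_based_cost = committed_rate_mbps * price_per_mbps_per_month
volume_based_cost = monthly_transfer_gb * price_per_gb

print(f"rate-based plan:   ${rate_based_cost:,.2f}/month")
print(f"volume-based plan: ${volume_based_cost:,.2f}/month")

# Rough sanity check: average utilization implied by the transfer volume.
seconds_per_month = 30 * 24 * 3600
avg_mbps = monthly_transfer_gb * 8_000 / seconds_per_month   # 1 GB = 8,000 megabits
print(f"implied average utilization: {avg_mbps:.0f} Mbps "
      f"against a {committed_rate_mbps} Mbps commitment")
```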

It is also a good idea to take careful measure of the provider’s own physical and virtual infrastructure. Do they provide redundant fiber links to the internet? Do they have at least 10 Gb switching and cabling capacity, plus thorough monitoring of core routing and switching systems? And is there adequate device redundancy across the core, access and distribution layers?
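One lightweight way to keep that due diligence consistent across candidates is to capture the questions above as a simple scored checklist. The sketch below is just an organizational aid; the criteria mirror the questions in this article, and the provider names and answers are hypothetical.

```python
# Simple scored checklist for comparing candidate providers against the
# infrastructure questions above. Provider names and answers are illustrative.

CRITERIA = [
    "redundant fiber links to the internet",
    "10 Gb or better switching and cabling capacity",
    "monitoring of core routing and switching systems",
    "device redundancy at core, access and distribution layers",
]

def score_provider(name, answers):
    """Count how many criteria a provider satisfies and list the gaps."""
    met = [c for c, ok in zip(CRITERIA, answers) if ok]
    gaps = [c for c, ok in zip(CRITERIA, answers) if not ok]
    print(f"{name}: {len(met)}/{len(CRITERIA)} criteria met")
    for gap in gaps:
        print(f"  missing: {gap}")

# Hypothetical answers gathered during an evaluation
score_provider("Provider A", [True, True, True, False])
score_provider("Provider B", [True, False, True, True])
```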

The cloud is already making massively scalable storage and server resources available to business users across the globe, many of whom are side-stepping traditional IT procedures to acquire them. As we’ve learned from five decades of infrastructure development, data environments are only as effective as the weakest component allows them to be.

Unless and until the enterprise adopts the same service-based approach to networking that it takes with the other major functions of the datacenter, full end-to-end scalability will remain out of reach.