Riverbed Technology: Optimising through successive bottlenecks

Published on Feb 25th, 2013

By Wimpie van Rensburg, Systems Engineer, Riverbed Technology

Is WAN optimisation still valuable if you have “enough bandwidth”? At any given point, communication across a network is bottlenecked by something. It may be bandwidth, but if it isn’t, it will be latency: application-level protocol chattiness, the effect of latency on TCP, TCP flow dynamics, or a combination of these.
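To see why latency alone can cap throughput, consider the classic single-flow TCP bound of roughly one receive window per round trip. The sketch below is a minimal illustration, assuming a 64 KB window and a 100 ms round trip; the figures are illustrative and not taken from the tests in this article.

```python
# Rough illustration: a single TCP connection can never move data faster
# than one receive window per round trip, regardless of link capacity.
# The window and RTT values below are assumptions for illustration only.

def tcp_throughput_mbps(window_bytes, rtt_seconds):
    """Upper bound on single-flow TCP throughput: window / RTT."""
    return (window_bytes * 8) / rtt_seconds / 1_000_000

window = 64 * 1024   # 64 KB receive window
rtt = 0.100          # 100 ms round-trip time on a long WAN link

print(f"Max throughput: {tcp_throughput_mbps(window, rtt):.1f} Mbit/s")
# ~5.2 Mbit/s -- a 100 Mbit/s "fat pipe" would sit mostly idle.
```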

Suppose you are bottlenecked first by limited bandwidth: you dynamite that bend in the river, and the water flows freely. But bandwidth is not the only thing to consider; latency will slow you down too, so the other barriers need to be broken down as well.

Two tests, described below, demonstrate this:

The first test set up a network with a client and server separated by a relatively fat pipe with high latency and two Riverbed Steelhead appliances. A simple Microsoft Windows file copy was then run across the link. With no optimisation it was slow, as expected: around 10 seconds. Once compression was enabled on the Steelhead appliances, the improvement was only about 4%. This shows that bandwidth was not the problem, as the opening question implied. Simply put, the latency of the link was restricting the application carrying out the file copy.
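Why does compression barely help here? A chatty file-copy protocol that requests a file in small blocks pays a full round trip per request, and that waiting time dwarfs the time spent actually sending bits. The figures below are assumptions for illustration, not measurements from the test:

```python
# Illustrative sketch (assumed figures, not the article's measurements):
# a chatty protocol that reads a file in small blocks, waiting a full
# round trip for each request, spends most of its time on latency.

rtt = 0.100                      # 100 ms round-trip time
bandwidth_bps = 100e6            # 100 Mbit/s link
file_bytes = 10 * 1024 * 1024    # 10 MB file
block_bytes = 64 * 1024          # 64 KB read per request

round_trips = file_bytes / block_bytes
latency_time = round_trips * rtt                    # time spent waiting
transfer_time = (file_bytes * 8) / bandwidth_bps    # time spent sending bits

print(f"Waiting on round trips: {latency_time:.1f} s")   # ~16.0 s
print(f"Sending data:           {transfer_time:.1f} s")  # ~0.8 s
# Halving the data with compression only shrinks the 0.8 s, not the 16 s.
```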

The second test disabled compression (so all bandwidth savings were lost) but enabled application-level optimisation and TCP optimisation. The transfer was immediately 2.25x faster. So the application had its latency bottleneck removed, but something was clearly still holding it back. Suspecting bandwidth, deduplication and compression were then enabled on the appliances, and the warm, optimised transfer was a further roughly 2x faster.

What can be concluded is that the test was initially not constrained by bandwidth, but once much of the application- and TCP-level, latency-related slowdown was removed, the transfer ran faster. Freed from the constraints of latency effects, the application performed more rapidly and could transfer more data, at which point it became constrained by bandwidth. Re-enabling the bandwidth-streamlining features as well then overcame all of the bottlenecks, netting out a 5.2x speedup for the test. This pattern is common, of course, which is why optimising at all levels is the default Steelhead configuration.
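The stage-by-stage gains compound against the same unoptimised baseline. Below is a small arithmetic sketch using the figures quoted above; the exact factor contributed by the bandwidth features is implied by the 2.25x and 5.2x totals rather than stated directly.

```python
# Sketch of how the stage-by-stage gains compound against the original,
# unoptimised baseline (figures taken from the tests described above).

baseline_time = 10.0       # seconds, unoptimised copy (test 1)

latency_speedup = 2.25     # application + TCP optimisation only
overall_speedup = 5.2      # all optimisations enabled, warm transfer

time_latency_only = baseline_time / latency_speedup
time_fully_optimised = baseline_time / overall_speedup

# Extra factor contributed by deduplication + compression on top of the
# latency optimisations (roughly the "additional 2x" quoted above):
bandwidth_factor = overall_speedup / latency_speedup

print(f"Latency optimisation only: {time_latency_only:.1f} s")           # ~4.4 s
print(f"Fully optimised (warm):    {time_fully_optimised:.1f} s")        # ~1.9 s
print(f"Extra factor from bandwidth features: {bandwidth_factor:.2f}x")  # ~2.31x
```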
