IT departments today face an enormous challenge: optimising their
networks to deliver complex Web-based applications to a diverse, burgeoning
end-user base. Then there is the added requirement that these applications be
secure, a need driven by common-sense business constraints and by
security-conscious end users who expect their communications to remain private. Yet even as these
applications become business-critical to the growth of an organisation,
the required levels of availability, performance, and network scalability are
often not being met.
Today, no single technology addresses these challenges. IT managers have
therefore been forced to introduce significant complexity into the network as they
attempt to patch together a mix of point products, inducing latency and driving
up management costs. As a result, enterprises have compromised both
the protection and the secure delivery of their applications, and they pay the price
in degraded end-user responsiveness and network performance.
Having
recognised this problem, providers of network optimisation solutions have
introduced a state-of-the-art technology, request switching, to address it.
Request switching delivers applications in an accelerated, secure, and
optimised manner, inspecting incoming traffic and directing it on the basis of
each client request.
Look Before or After the Leap?
The evolution of traffic management has been driven by the need to distribute
Web traffic across multiple servers, making more efficient use of
site resources and increasing overall site availability. In a well-configured
and well-administered site, servers are set up so that if one server farm
goes down, alternative server farms can continue servicing user requests,
ensuring business continuity.
The first stage in the evolution of traffic management was round-robin DNS
(Domain Name System). With this technique, the IP addresses of
multiple servers are bound to a single DNS name. When clients request the address
associated with that name, the DNS server responds with each server's address
in turn, spreading client traffic across all the servers. While a
good first step, this approach had no way to monitor the state of the
servers. The result was unequal load distribution: some servers were overloaded
while requests continued to be directed to servers that had failed, producing
unacceptable service levels for clients and poor scalability for content providers.
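By way of illustration, here is a minimal Python sketch of the rotation a
round-robin DNS responder performs; the pool addresses and the resolve helper
are hypothetical, not taken from any real DNS implementation:

    from itertools import cycle

    # Hypothetical pool of server IPs bound to a single DNS name.
    SERVER_POOL = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
    _rotation = cycle(SERVER_POOL)

    def resolve(name: str) -> str:
        # Answer each query with the next address in the pool, by turns.
        # Note the limitation described above: nothing here knows whether
        # the address being handed out belongs to a live server.
        return next(_rotation)

    for _ in range(5):
        print(resolve("www.example.com"))  # cycles .10, .11, .12, .10, .11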
To address the deficiencies of round-robin DNS, traffic
management evolved into 'server load balancing'. While these solutions were an
improvement, they could not process individual requests efficiently. Instead, they
made a single key decision for all of a client's traffic, based solely on the
first request received. Connections were thus bound to a
specific server according to a preconfigured load-balancing algorithm, and the effect
was non-uniform load distribution and server overload.
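A compact sketch of that behaviour (the server names and connection table are
hypothetical): once a connection's first request has picked a server, every
later request on that connection lands on the same machine, whatever it asks for.

    import random

    SERVERS = ["app-1", "app-2", "app-3"]      # hypothetical server farm
    _pinned: dict[int, str] = {}               # connection id -> chosen server

    def route(conn_id: int, request: str) -> str:
        # The decision is made once, on the connection's first request;
        # the content of later requests is never consulted.
        if conn_id not in _pinned:
            _pinned[conn_id] = random.choice(SERVERS)
        return _pinned[conn_id]

    for req in ("/home", "/huge-report", "/img/logo.png"):
        print(route(42, req))   # all three go to the same server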
The next step in the evolution of traffic management brought products that
make traffic-distribution decisions at the content level. But since a content switch
must make its switching decision before a connection is forwarded to a server,
it cannot act on requests that arrive over an already-established connection.
For content switching to be effective, each connection can therefore carry only
a single request, so content switches cannot keep connections alive to improve
response times for clients. Moreover, owing to the deeper inspection required to make
switching decisions based on content, content switching is often much slower
than connection-based load balancing.
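The limitation shows up in a small sketch (pool names and URL prefixes are
illustrative): the pool is chosen by inspecting the first request on a
connection, after which the whole connection is forwarded and later requests
cannot be re-switched.

    POOLS = {"/images/": ["img-1", "img-2"],   # illustrative pools
             "/api/":    ["api-1", "api-2"]}
    DEFAULT_POOL = ["web-1"]

    def choose_pool(first_request_path: str) -> list[str]:
        # The decision rests on the first (and, for effective content
        # switching, only) request carried by the connection.
        for prefix, pool in POOLS.items():
            if first_request_path.startswith(prefix):
                return pool
        return DEFAULT_POOL

    print(choose_pool("/images/banner.gif"))   # ['img-1', 'img-2']
    print(choose_pool("/api/orders"))          # ['api-1', 'api-2']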
Easier Application Delivery
Today, request switching forms the cornerstone of some of the most advanced
network application-delivery systems. By managing traffic at the level of the
originating request, request switching enables high-performance acceleration and
optimisation when delivering content to users, irrespective of their location or
connection speed.
Request switching also eliminates the server's dependency on a client's
connection speed. Once a request has been made to a server, a response can be
sent to the application-delivery system at full LAN speeds. The system will
buffer the response and send it to the client at the client's own speed,
freeing the server to move on to the next request.
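A minimal sketch of that buffering, assuming socket-like server and client
connections (the relay function below is hypothetical, not a real product API):
the system drains the response at LAN speed, frees the server, then feeds the
client at its own pace.

    def relay(server_conn, client_conn, chunk_size: int = 4096) -> None:
        buffered = bytearray()
        while chunk := server_conn.recv(chunk_size):   # fast read at LAN speed
            buffered += chunk
        server_conn.close()                            # server is free immediately
        view = memoryview(buffered)
        sent = 0
        while sent < len(view):                        # slow write at client speed
            sent += client_conn.send(view[sent:sent + chunk_size])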
By offloading TCP processing and buffering tasks to the
network application-delivery system, the server is able to handle many more
simultaneous requests than it could serve on its own. The
resulting cost savings in server hardware and software, data-centre space, and
maintenance are significant.
Compression is a common technique for enhancing performance and reducing
bandwidth usage. Compressed content takes less time to download, improving user
response times while consuming less bandwidth.
Compression has its drawbacks, however. Real-time compression is a
processor-intensive task that can overload already-busy servers. Precompressing
content is an option, but managing an additional copy of every piece of content
brings its own overhead.
This is where the best application-delivery systems come in. Data coming
from the servers is compressed at the request level before being sent on to
users, improving download times and saving on bandwidth charges.
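As a sketch, compression at the request level can be as simple as honouring the
client's Accept-Encoding header on the way out (the helper below is
illustrative, using Python's standard gzip module):

    import gzip

    def maybe_compress(body: bytes, accept_encoding: str) -> tuple[bytes, dict]:
        # Compress on the delivery system, not the server, and only
        # for clients that advertised gzip support.
        if "gzip" in accept_encoding.lower():
            return gzip.compress(body), {"Content-Encoding": "gzip"}
        return body, {}

    page = b"<html>" + b"x" * 10_000 + b"</html>"
    small, headers = maybe_compress(page, "gzip, deflate")
    print(len(page), "->", len(small), headers)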
Another effective method of improving Web performance is caching. However, a
cache server adds complexity and management tasks to a network, and standalone
caches often fail to provide the desired control or support for dynamic content.
By combining the request-switching engine with a high-speed, in-memory cache,
the system can offload a large number of requests from the servers and
deliver the content at high speed.
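A toy version of such a cache (the class and its eviction policy are
illustrative, not a product feature) shows how a cache hit never touches a server:

    from collections import OrderedDict

    class RequestCache:
        # Tiny in-memory cache keyed by request path, with LRU eviction.
        def __init__(self, capacity: int = 1024):
            self.capacity = capacity
            self._store: OrderedDict[str, bytes] = OrderedDict()

        def get(self, path):
            if path in self._store:
                self._store.move_to_end(path)          # mark recently used
                return self._store[path]
            return None

        def put(self, path, body):
            self._store[path] = body
            self._store.move_to_end(path)
            if len(self._store) > self.capacity:
                self._store.popitem(last=False)        # evict the oldest

    def handle(path, cache, fetch_from_server):
        body = cache.get(path)
        if body is None:                 # miss: one trip to the server
            body = fetch_from_server(path)
            cache.put(path, body)
        return body                      # hit: the server is never touched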
In the current business ecosystem, the security of content is a growing concern,
and request switching enables secure content delivery. As requests arrive,
the system applies the relevant content-switching policies and filters before diverting
each request to the right server. Responses from the server can be compressed or
cached and then encrypted for transmission to the client.
The availability of high-performance encryption also means that IT managers
need not curtail the use of secure content due to performance concerns; security
can be increased while maintaining scalability and response time at the desired
levels.
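The ordering matters, and a short sketch makes it explicit (the encrypt
argument stands in for the system's TLS layer; it is not a real API): compress
or cache first, encrypt last, because well-encrypted bytes look random and no
longer compress.

    import gzip

    def prepare_secure_response(body: bytes, encrypt) -> bytes:
        # Compress first, then encrypt; reversing the order would
        # defeat compression entirely.
        return encrypt(gzip.compress(body))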
Request switching also plays an important role in protecting against
malicious attacks. Since all traffic is inspected at the request level, attacks
can be stopped before they reach the server infrastructure.
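A request-level filter can be sketched in a few lines (the signature list is
purely illustrative): anything matching a known attack pattern is dropped
before a server ever sees it.

    BLOCKED_PATTERNS = [b"../", b"<script", b"\x00"]   # illustrative signatures

    def is_malicious(request_line: bytes) -> bool:
        # Inspect every request before forwarding; drop on a match.
        return any(p in request_line for p in BLOCKED_PATTERNS)

    print(is_malicious(b"GET /index.html HTTP/1.1"))        # False
    print(is_malicious(b"GET /../../etc/passwd HTTP/1.1"))  # True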
Request switching today represents a significant advance in traffic
management, ensuring application-aware handling of traffic across a global
infrastructure. By managing traffic at the request level, and breaking the
direct connection between client and server, request switching provides optimal
control over application traffic. Combining request switching's granular
traffic management with advanced optimisation and acceleration techniques
delivers performance, flexibility, security, and manageability unmatched by any
alternative technology today.
Rakesh Singh, GM, Asia operations, NetScaler