Service Provider Evolution
In this era of e-commerce and e-businesses, many companies rely on
distributed network services as a significant part of their entire business
model. As a result, Internet-related initiatives require recruiting, staffing
and funding to operate. Typically, these expenses represent a significant cost
and not one that decreases with time. Service providers address these concerns
by creating economies of scale. By serving numerous clients, they reduce costs
for everyone. In fact, if they manage bandwidth and service usage properly,
service providers can provide better customer service than internal providers.
However, service providers need to manage the following issues:
- Bandwidth hogging that decreases access to network resources
- Maximizing potential business applications
- Ensuring high performance across applications
- Customizing bandwidth to expand services
In order to manage these issues, service providers must have an accurate map
of their network. They must monitor traffic to determine normal and abnormal
usage and establish baseline metrics to be able to efficiently monitor
productivity.
Factors to consider
- Burst traffic: Burst traffic is abnormal usage that typically lasts only a short time during peak periods, for example, when a popular downloadable file goes online. Burst traffic typically consists of lower-priority requests that may receive decreased bandwidth.
- Interactive traffic: Interactive traffic typically refers to normal use of a web site, such as users following links. If it is high-priority traffic, it must be fast. Other application-level clients require a minimal amount of bandwidth to provide mission-critical services.
- Non-mission-critical traffic: Employees behind a router may be using network resources for personal reasons. Such traffic can usually be assigned lower bandwidth and priority.
- Mission-critical traffic: Mission-critical traffic, such as incoming orders, must be given a high priority, if not the highest priority, and a substantial amount of available bandwidth.
- SLAs: In some cases, service providers have Service Level Agreements (SLAs) in place with clients and are contractually bound to provide a set amount of dedicated bandwidth and server resources. Service providers must monitor this traffic to meet contractual obligations.
Bandwidth Management
This process includes classifying traffic, managing bandwidth allocation and
mapping traffic classes.
Classifying traffic
There are many ways to classify traffic: by server name, network subnet, destination IP and port, source IP and port, requested file type, and more. Whatever the scheme, propagating a client request to a fulfillment server should be automatic and transparent to the requestor. Classifying traffic enables network managers to better track requests and allocate resources based on priorities. Is an FTP request from accounting more important than an HTTP request from human resources? This functionality also enables service providers to establish priorities based on application usage and the nature of the request, as well as any other metric they want to use.
Managing bandwidth allocation
Once traffic is classified, specific configurations can be defined to control
how bandwidth is distributed. The two most widely used methods are partitions
and policies.
- Partitions: Partitioning creates a separate, exclusive channel for each traffic type to manage total network usage. For example, FTP traffic might be assigned 10 percent of network resources, e-mail might receive 10 percent and incoming HTTP requests might receive 50 percent. Unused bandwidth in a traffic type can be placed in a pool that is available to other applications to speed up the overall network.
- Policies: Rate-based policies set a minimum amount of guaranteed bandwidth for burst traffic. Priority policies set aside bandwidth for traffic that must compete with burst traffic. These policies provide network stability and efficient fulfillment of all requests and responses.
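The partition scheme above can be sketched in a few lines. The shares mirror the FTP/e-mail/HTTP example; the function name and the equal-split redistribution of pooled bandwidth are illustrative assumptions, not a real scheduler:

```python
def allocate(total_kbps: int, partitions: dict[str, float], demand: dict[str, int]) -> dict[str, int]:
    """Split total bandwidth by partition share; bandwidth a partition leaves
    unused goes into a pool shared by partitions that want more than their share."""
    grant = {}
    pool = 0
    for name, share in partitions.items():
        cap = int(total_kbps * share)          # this partition's exclusive channel
        used = min(cap, demand.get(name, 0))
        grant[name] = used
        pool += cap - used                     # unused bandwidth joins the pool
    # redistribute the pool evenly among partitions still wanting more
    needy = [n for n in partitions if demand.get(n, 0) > grant[n]]
    for name in needy:
        extra = min(pool // max(len(needy), 1), demand[name] - grant[name])
        grant[name] += extra
        pool -= extra
    return grant

# FTP 10%, e-mail 10%, HTTP 50% of a 1000 Kbps link
print(allocate(1000, {"ftp": 0.1, "email": 0.1, "http": 0.5},
               {"ftp": 300, "email": 50, "http": 400}))
# → {'ftp': 250, 'email': 50, 'http': 400}
```

Here FTP gets its 100 Kbps share plus the 150 Kbps that e-mail and HTTP left unused, which is exactly the pooling behavior described above.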
Application Management
As network infrastructures evolve, administrators have started implementing bandwidth management solutions. Now that service providers are delivering ever more mission-critical services, they need a new set of tools to help them make the best use of their existing infrastructure and provide exceptional service to their customers.
There are several tools that can help.
- SLAs vs. ASLAs: SLAs commit to specified uptimes or times between failures. However, these traditional measurements are no longer adequate when service providers enter the application marketplace. Application Service Level Agreements (ASLAs) are precise, per-application, measurable agreements specifying the nature and quality of application deliverables. They cover per-application response times, availability and other metrics, as well as the types of reports describing application performance.
- Delivery models: ASLA content and management strategy depend on a service provider’s delivery model. This can be extranet- or intranet-based and is specified in the ASLA.
- Extranet model: For many service providers, the fastest and most effective way to deliver application services is to use the client’s existing WAN that connects end users to a centralized data center. The service provider provides an extranet connection to this data center to relay hosted applications and content. Clients control performance within their own networks.
- Intranet model: Some service providers combine managed network and application services to deliver a comprehensive solution. In addition to delivering hosted applications and data to the client’s data center, the service provider takes responsibility for the end-to-end quality of the user experience. Since applications flow across multiple networks, including delivery-chain providers and the client’s own, before reaching the end user, it is often difficult to determine where and when problems arise. Support costs can spiral out of control if service providers spend too much time troubleshooting problems that fall outside their responsibility, so they must carefully define the boundaries beyond which service quality lies outside their control.
Typically, service providers should set a boundary where
their applications enter the data center. Additional boundaries may be necessary
for other network services.
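A per-application ASLA check might look like the following sketch; the metric names and thresholds are hypothetical examples, not terms from any real agreement:

```python
# Hypothetical ASLA commitments for one hosted application
asla = {"response_time_ms": 500, "availability_pct": 99.5}

def check_asla(samples_ms: list[float], uptime_pct: float) -> dict[str, bool]:
    """Compare measured per-application metrics against the ASLA commitments."""
    avg = sum(samples_ms) / len(samples_ms)
    return {
        "response_time_ok": avg <= asla["response_time_ms"],
        "availability_ok": uptime_pct >= asla["availability_pct"],
    }

# Three response-time samples (ms) and the measured uptime percentage
print(check_asla([120.0, 340.0, 610.0], 99.7))
# → {'response_time_ok': True, 'availability_ok': True}
```

The same measurements feed both sides of the validation described below: the client confirms the commitment was met, and the provider keeps the detailed samples for diagnosis.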
Performance Analysis
Both the service provider and the client should validate
deliverables for different reasons. The client wants to confirm that they got
what they paid for. The service provider needs a comprehensive analysis
including detailed per-application metrics. The optimum solution should offer
the flexibility to assist multiple parties and include:
- High-level graphics and reports for clients and service providers to compare committed versus actual performance
- Remote access that enables remote validation, such as a browser-based user interface
- Detailed graphs and reports for diagnostic and planning purposes
- Metrics to import into third-party reporting applications
Deployment and Integration
The solution should not change users’ desktops, servers,
routers or applications. Installation should be straightforward, quick,
compatible and integrated with existing infrastructures. A third-party solution
should not become an obstacle to fulfillment.
Enforcement Examples
Service providers should make sure they meet their
performance obligations. The key to enforcing performance is to differentiate
each application needing special treatment, and then appropriately and precisely
assign resources.
Example #1
A service provider hosts Microsoft Exchange and a Great Plains accounting application running over Citrix MetaFrame. Exchange features such as e-mail, scheduling, information sharing and public folders can consume additional bandwidth and affect the other applications. The accounting application is business critical, and the service provider needs to deliver consistent, prompt performance. The control requirements differ for the two delivery models:
- The Extranet Model: This model requires the service provider to balance only Microsoft Exchange and Great Plains performance and deliver it to the client’s data center.
- The Intranet Model: This model requires the same, plus control of performance for all subscriber applications running over the enterprise WAN, all the way to the end user.
Example #2
A service provider adds a streaming media application, such
as Voice over IP (VoIP), to the hosted repertoire. The service provider defines:
- A per-session bandwidth minimum of 18 Kbps to secure good reception for all. Any minimums not in use are available as excess rate.
- A polite block on new sessions if VoIP is overextended. Late users receive a polite message if they seek service when earlier conversations occupy all bandwidth reserved for VoIP.
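The admission logic in this example can be sketched as follows. The 18 Kbps per-session minimum comes from the example above; the total VoIP reservation and the refusal message are illustrative assumptions:

```python
VOIP_RESERVED_KBPS = 450      # assumed total bandwidth reserved for the VoIP class
PER_SESSION_MIN_KBPS = 18     # per-session minimum from the example

active_sessions = []

def admit(session_id: str) -> str:
    """Admit a new VoIP session only if its 18 Kbps minimum can still be
    guaranteed; otherwise return a polite refusal rather than degrade
    the calls already in progress."""
    if (len(active_sessions) + 1) * PER_SESSION_MIN_KBPS > VOIP_RESERVED_KBPS:
        return "All voice capacity is currently in use. Please try again shortly."
    active_sessions.append(session_id)
    return "admitted"

for i in range(26):
    admit(f"call-{i}")
print(len(active_sessions))  # 25: the reservation holds 450 // 18 sessions; call-25 is refused
```

Refusing the 26th call outright is the "polite block": every admitted conversation keeps its guaranteed minimum instead of all calls degrading together.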
Example #3
A service provider expands to incorporate managed network
services, stepping up to the Intranet Model to manage application performance
across the subscriber’s WAN. In this case, the service provider defines:
- A per-session cap on applications that use an inappropriate amount of bandwidth. Web browsing, news and large file transfers are typical candidates for caps.
- Per-application bandwidth minimums and caps for the client’s own, non-outsourced applications.
- Efficiency methods such as slowing the sender to suit a slow recipient.
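A per-session cap is classically enforced with a token bucket: a sender that exceeds the refill rate is queued or dropped, which slows it to suit the recipient. The class below is a minimal illustrative sketch under that assumption, not any vendor’s implementation:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: tokens accrue at a steady rate up to a
    burst limit, and each transmission spends tokens equal to its size."""
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        # refill tokens for the elapsed time, never exceeding the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # over the cap: queue or drop, which throttles the sender

bucket = TokenBucket(rate_bytes_per_s=1000, burst_bytes=2000)
print(bucket.allow(1500))  # True: within the burst allowance
print(bucket.allow(1500))  # False: the bucket is nearly empty until it refills
```

The cap (rate) and burst values would be set per session and per application class, matching the caps on web browsing and large file transfers described above.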
GB Kumar, general manager, business programs, Intel India