
ENTERPRISE NETWORK SERVERS: Is it Time to Upgrade?

Voice&Data Bureau

No network is complete without servers. Yet the range of network servers
available in the market can leave a CIO confused and searching for the right
answers. If he has to build a new network with new servers, his task may be
easier than upgrading from existing servers.


With management getting increasingly conscious of IT spending, when does he
decide to buy a new server? The first sign that an old server needs replacing
is when server snags start hampering the company's workflow. Particularly for
mission-critical applications and data accessed from the server, even a small
amount of downtime can register huge losses.

Types of servers

Entry-level servers: These can be categorized into standard Intel
architecture servers (SIAS) and RISC/Unix servers. The typical minimum specs
for an SIAS server include an Intel Pentium 4 processor, up to a 533 MHz
front-side bus, up to 2 GB of memory and at least one HDD, besides some
manageability tools. Typically, entry-level servers fall in the Rs 1 lakh to
Rs 25 lakh bracket. These servers are ideal for non-critical enterprise
requirements like e-mail messaging, file-sharing, and print-sharing tasks.

Mid-range servers: Typically, mid-range servers cost somewhere between Rs 25
lakh and Rs 2 crore, depending on whether they are RISC/Unix servers or Intel
servers. The usual specs for mid-range Intel servers are up to four Intel
Xeon processors, 512 MB to 12 GB of memory, and six hot-pluggable
64-bit/100 MHz PCI-X slots (supporting 3.3 V or universal PCI adapters), with
up to 10 high-speed SCSI drives in a RAID configuration. These servers are
ideal for performing multiple functions like departmental applications and
file and print serving, and can exist in a cluster configuration. RISC/Unix
servers that offer mainframe-like capabilities could be used for large
back-end databases. These servers are ideal for consolidating smaller
workloads and hosting large applications.


High-end servers: Typically, these are the mainframe-class machines priced in
excess of Rs 2 crore, the de facto servers for mission-critical applications.
Till recently, this segment used to be dominated by Unix-based servers, while
Windows and Intel were relegated to the background. Of late, however, the
Wintel combination has grown well beyond expectations in this very segment.
These servers are ideal for mission-critical applications like database
management, data warehousing, and e-commerce applications.

Clustering versus symmetric multi-processing (SMP): Many applications can
both scale out and scale up. Scaling out is the capability of running across
multiple servers; Web servers and mail servers, for example, can run on and
scale across a whole host of machines. Scaling up refers to scalability
within the box, that is, the application demands vertical growth; a typical
Oracle database server, or any database server or application that runs on a
single machine, is an example.
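The distinction is easiest to see in miniature. Here is a minimal Python
sketch, with hypothetical server names, contrasting a scaled-out service
(requests spread across a pool of machines) with a scaled-up one (a single
box that grows vertically):

```python
import itertools

# Hypothetical pool of Web servers: a scaled-out service adds
# capacity by adding machines to this list.
web_pool = ["web-01", "web-02", "web-03"]
round_robin = itertools.cycle(web_pool)

def route_request(request_id: int) -> str:
    """Send each incoming request to the next server in the pool."""
    return f"request {request_id} -> {next(round_robin)}"

# A scaled-up service, by contrast, stays on one box and grows by
# adding CPUs and memory to it; the routing never changes.
db_server = "db-01"

for i in range(5):
    print(route_request(i))
print(f"every database query -> {db_server}")
```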

Clustering gives enterprises a tool for building a highly scalable
environment, including a very highly scalable database environment, out of
off-the-shelf products that are low cost, open, and easily available. SMP
servers, while expensive, are good for scaling up to accommodate application
growth.


The high-performance technical computing marketplace and the education and
research labs are good examples of customers who typically look at clustering
as an alternative to SMP for building highly scalable environments.

Blade servers: These are ideal for environments where space and
electrical power are limited, and powerful processors are not an absolute
necessity. Enterprises going for server consolidation might go for blades,
especially since these are easier to manage.

These servers are strong on power efficiency and space saving, and are very
easy to manage and maintain. Blade server technology can accommodate up to
280 servers in a single industry-standard rack. Alongside, one could also
have dual- and quad-CPU Xeon servers in a blade form factor.


Itanium processors: Itanium is largely looked at as an alternative to the
RISC/Unix marketplace for customers seeking a different computing design, one
based on the explicitly parallel instruction computing (EPIC) architecture.
EPIC provides parallel execution of instructions, gives the processor more
registers for data processing than RISC-based processors have, and offers a
64-bit environment.

Opteron processors: Opteron is an extension of the existing 32-bit processor
architecture with some 64-bit functionality, such as memory addressing.
Opteron is primarily targeted at the volume market in the one- or two-CPU
space, while Itanium is for customers looking for higher levels of
performance, scalability and reliability, with the capability to run three
different operating systems. Most Microsoft applications are not easily
portable to Opteron.

However, current third-party benchmarks place the Opteron at a significant
advantage over the 32-bit Intel server CPUs (Xeon and Xeon MP). Although
there has not been a head-to-head comparison between the Opteron and the
Itanium (given the difficulties in designing an apples-to-apples comparison),
many users are probably wondering whether the Opteron's advantage of backward
x86 compatibility outweighs the 'perhaps-not-quite-there' performance
advantage of the Itanium. The adoption of the architecture, first by IBM and
more recently by HP and Sun, is certainly an excellent endorsement of AMD's
strategy.


Linux servers: Linux is an important server OS that is gaining momentum.
Initially it was accepted largely among technical users such as the education
industry and R&D labs, but it is now gaining momentum in the commercial
marketplace too. Many commercial customers have a Linux strategy and are
trying to move some part of their data center, or of the applications their
organization uses, to a Linux environment.

Linux is a very good alternative for enterprises looking to run their
applications on an open-source, flexible operating system.

The fact is that open source does provide the flexibility that research
organizations and educational institutions need, for example, to modify the
kernel to suit a particular application. Linux-based environments are also
widely chosen in the high-performance technical computing marketplace.


Do not overlook these

Once the technology has been decided, there are certain parameters the CIO
must work through before buying the server.

Evaluate total cost of ownership (TCO): Before making a server purchase, a
CIO should evaluate the TCO over a period of five years, including parameters
like cost of services, software licenses, manpower requirements, floor space,
electricity consumption, software upgrades, cost of maintenance, backup- and
management-related costs, and hardware upgrade costs, and should definitely
look at the various application operating environments the server supports.
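As a rough illustration, a five-year TCO can be tallied in a few lines of
Python; every figure below is a hypothetical placeholder (in Rs lakh), not a
vendor quote:

```python
# Hypothetical five-year TCO tally; all figures are placeholders.
years = 5
purchase_price = 20.0          # one-time hardware cost (Rs lakh)
hardware_upgrades = 4.0        # expected mid-life upgrade spend

annual_costs = {               # recurring costs per year (Rs lakh)
    "services and support": 2.0,
    "software licenses": 1.5,
    "manpower": 3.0,
    "floor space": 0.5,
    "electricity": 0.8,
    "backup and management": 1.0,
}

tco = purchase_price + hardware_upgrades + years * sum(annual_costs.values())
print(f"Five-year TCO: Rs {tco:.1f} lakh")   # Five-year TCO: Rs 68.0 lakh
```

The point of the exercise is that the recurring items, small individually,
can dominate the purchase price over five years.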

Server redeployment: Redeployment and repositioning of servers is as
important a parameter as price and performance. CIOs often dismiss investment
protection as not being a serious point of evaluation. They should look at it
very carefully and, if required, talk to some of the vendor's existing
customers to get their opinion on it.


Modular approach: The traditional approach of using a bigger and more
powerful piece of hardware to address computing, disk, I/O, and availability
requirements may no longer be the best one. With the increasing maturity of
clustering and niche-OS solutions, simpler and smaller blocks of hardware are
able to deliver the required performance and availability benchmarks at a
much lower TCO. This modular approach (as opposed to the monolithic approach)
needs to be evaluated when designing solutions for future requirements.

Technology evolution: The server market is currently at a technology
life-cycle saddle point. The next nine months will see the introduction of a
new generation of standards, be it in CPUs, I/O interconnects, or disk
subsystems. CIOs need to be on the ball insofar as these changes are
concerned.

Benchmark evaluation: CIOs prefer to evaluate the performance of a server by
looking at a suite of benchmarks rather than going by any one single
benchmark. For example, customers today look at OLTP benchmarks like TPC-C,
data warehousing benchmarks like TPC-H, and SPEC benchmarks like SPECjbb and
SPECweb. Most CIOs will want to refer to at least two or three benchmarks
before making a purchase decision from a performance point of view.
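One simple way to combine a suite of benchmarks is a weighted score, with the
weights set by how closely each benchmark matches the intended workload. The
sketch below uses hypothetical, unit-normalized scores, not published TPC or
SPEC results:

```python
# Hypothetical normalized benchmark scores for two candidate servers.
candidates = {
    "server-A": {"TPC-C": 0.90, "TPC-H": 0.70, "SPECjbb": 0.80, "SPECweb": 0.85},
    "server-B": {"TPC-C": 0.75, "TPC-H": 0.95, "SPECjbb": 0.85, "SPECweb": 0.70},
}

# An OLTP-heavy shop might weight TPC-C highest; adjust to the workload.
weights = {"TPC-C": 0.4, "TPC-H": 0.2, "SPECjbb": 0.2, "SPECweb": 0.2}

for name, scores in candidates.items():
    total = sum(weights[b] * scores[b] for b in weights)
    print(f"{name}: weighted score {total:.2f}")
```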

Vendor choice: The CIO should look at the level of support he gets and the
SLAs he can demand from the vendor in supporting these particular servers. In
addition, the vendor should have the capability to provide both short-term
and long-term solutions to the organization, and should have a large India
presence and focus.

Direct interaction with the principal vendor for services and solutions is
preferable to having a partner or agency provide the same. However, there is
a cost difference in getting direct support services from the server vendor.

Reliability and redundancy: A fundamentally reliable platform, designed with
self-diagnostic capabilities and redundant subsystems, tends towards a lower
TCO. These features also allow the vendor to commit to higher SLA slabs with
only marginal increases in cost.

The fundamental design of these servers, with its emphasis on better power
management, has as its objective increasing the system's mean time between
failures (MTBF). Enterprise server platforms are designed with redundant
subsystems in key areas: memory (in the new range), disk, networking, the
power supply module, and the cooling module.
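MTBF figures translate directly into expected availability once the mean time
to repair (MTTR) is known, via availability = MTBF / (MTBF + MTTR). The
figures in this short sketch are hypothetical:

```python
# Hypothetical reliability figures; availability = MTBF / (MTBF + MTTR).
mtbf_hours = 50_000    # mean time between failures
mttr_hours = 4         # mean time to repair (spares on site, hot-swap)

availability = mtbf_hours / (mtbf_hours + mttr_hours)
downtime_per_year = (1 - availability) * 365 * 24

print(f"Availability: {availability:.4%}")                   # ~99.9920%
print(f"Expected downtime: {downtime_per_year:.2f} h/year")  # ~0.70 h/year
```

Redundant subsystems raise the effective MTBF, which is why they pull both
downtime and TCO down.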

The new range of servers will feature an e-Panel for system health monitoring
and alerting, as well as for pre-OS self-diagnostic capabilities. This
hardware module will allow a non-IT specialist at the remote user
organization to communicate hardware fault-analysis information to wherever
the IT infrastructure administration is based.

Server management: Server vendors today provide many management tools that
help customers manage complex clusters of servers through a single console
and a single administrator. These tools should provide a single window for
managing a number of servers, taking their backups, and creating users. Most
of the management features an administrator requires are GUI-based and easy
to use, and multiple servers can be managed from a single console.

Vendors also provide remote dial-in management facilities and management of
the servers over the Internet and the intranet. This can be done from any PC
in the office, which need not be on the same premises as the server. There is
thus a tremendous amount of flexibility and simplification in server
management, and CIOs should look at fully utilizing these opportunities.
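Even without vendor tooling, the single-console idea reduces to polling every
managed server from one place. A minimal sketch, using only the Python
standard library and hypothetical hostnames, might look like this:

```python
import socket

# Hypothetical inventory of managed servers (host, management port).
servers = [
    ("web-01.example.com", 22),
    ("db-01.example.com", 22),
    ("mail-01.example.com", 22),
]

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Basic liveness probe: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# One console, one loop: every server's status in a single view.
for host, port in servers:
    status = "up" if is_reachable(host, port) else "DOWN"
    print(f"{host}:{port} {status}")
```

Vendor consoles add backup, user creation, and alerting on top of exactly
this kind of fan-out.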

Scalability: CIOs should look for a 2004 roadmap of server platforms with
significant expandability headroom, as well as ones incorporating new
technologies that will boost I/O (the PCI Express I/O bus, network
controllers with a built-in TCP offload engine). This addresses customers'
scale-up requirements.

Manageability: CIOs should look at a modular server-management framework,
starting with a choice of hardware-specific components depending on the
sophistication and SLA of the requirement. The framework should hook into
enterprise management solutions, so that the management of these servers can
be integrated into the overall infrastructure management scheme of the
enterprise.
