Over the last few years, the number of data centers has grown at an exceptional rate to keep pace with the growth in data, and this is a huge concern for IT managers. Until recently, continuous improvements in price and performance made it easy and affordable to solve storage concerns simply by adding more disks to existing storage systems.
Now, almost every enterprise has its own data center, but the complexity of managing these efficiently has become a challenge for CIOs. An email survey of CIOs at some SMEs, conducted by VOICE&DATA, found that there are limits to that easy growth: floor space, weight loads, rack space, network drops, power connections, cooling infrastructure, and even power itself are finite resources. Running up against any one of these limits significantly jeopardizes the ability of the IT department to meet the demands of business.
The Data Center Scenario
The Indian scenario is a bit different from the global one for managed data center services. In India, large enterprises rely on their own in-house data centers, whereas SMEs prefer to outsource. What are the benefits? "That depends on the storage needs. For direct attached storage where data access is needed by servers, in-house storage is a better option," says Rajesh Uppal, chief GM, IT, Maruti Udyog.
High availability, capacity planning, and optimal utilization of resources are some of the biggest concerns for CIOs; cost cutting is not among the top priorities. To ensure high availability, power backup has to be in place 24x7x365. Data centers should also adopt network load balancing along with disaster recovery (DR) so that stress on them can be minimized. Critical applications running in the data center should have an automatic fail-over set-up, and building redundancy into all the elements also improves availability.
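To make the idea of an automatic fail-over set-up concrete, here is a minimal sketch; the health endpoint and the promotion step are hypothetical placeholders, and a real deployment would rely on cluster software, a load balancer, or a virtual IP rather than a script like this.

```python
# Minimal health-check-and-fail-over sketch. The health URL and the
# promotion step are hypothetical placeholders, not a real product's API.
import time
import urllib.request

PRIMARY_HEALTH_URL = "http://primary.example.local/health"  # hypothetical endpoint

def primary_is_healthy() -> bool:
    """Return True if the primary node answers its health check."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=3) as resp:
            return resp.status == 200
    except OSError:
        return False

def promote_standby() -> None:
    # Placeholder: point the load balancer, virtual IP, or DNS at the standby.
    print("Primary unhealthy: promoting standby node")

if __name__ == "__main__":
    while primary_is_healthy():
        time.sleep(10)          # poll every 10 seconds
    promote_standby()
```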
If the organization does not have enough trained staff to
provide high availability, then outsourcing of critical applications is a better
idea. "Enterprises prefer to outsource to data center service providers
because of lack of expertise," says Deepak Makhija, business head, Storage
Services, HCL Comnet.
To address capacity planning, one of the options suggested by some of our respondents was server consolidation, a prerequisite for which is monitoring IT resources before formulating the strategy. Broadly speaking, server consolidation translates into IT resource management. You should revamp your data center only if you think it cannot take the load of your upcoming projects; if you do not have enough time or budget, outsourcing becomes a better option.
Virtualization is another solution for capacity planning. With virtualization, enterprises can add more applications in the same environment to utilize unused server power and manage resources efficiently. This also helps address other concerns, such as ensuring optimal utilization of resources and keeping costs under control.
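To illustrate what "utilizing unused server power" can mean in practice, here is a toy consolidation estimate; the per-server utilization figures and the target host utilization are invented for illustration, not taken from the survey.

```python
# Toy consolidation estimate with invented utilisation figures.
import math

avg_utilisation = [0.12, 0.08, 0.20, 0.15, 0.10, 0.18]  # six under-used servers
target_host_utilisation = 0.70                           # headroom kept on each host

total_load = sum(avg_utilisation)                        # combined load, in "servers"
hosts_needed = max(1, math.ceil(total_load / target_host_utilisation))

print(f"Combined load of {len(avg_utilisation)} servers: {total_load:.2f}")
print(f"Roughly {hosts_needed} well-utilized virtualized host(s) could carry it")
```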
Key Management Concerns
Power concerns top the list, followed by crash and recovery. There are also connectivity, cooling, and data backup issues. Let's take these issues one by one.
"In the next 18 months, |
|
——Soumitra Agarwal, |
|
"Enterprises prefer to |
|
——Deepak V Makhija, |
|
"For direct attached |
|
——Rajesh Uppal, |
Power, undoubtedly, is the basic need for a huge data center; as it grows, it requires more electricity to run the infrastructure. "In the next 18 months, the increase in average storage rack density is expected to drive average power consumption from 2 kW per rack to 30 kW," says Soumitra Agarwal, marketing director, India, Network Appliance. Here too, capacity planning plays a major role: one has to evaluate the present and future power requirements of a data center.
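To show the kind of arithmetic such an evaluation involves, here is a rough power-budget sketch; the 2 kW and 30 kW per-rack figures come from the quote above, while the rack count and the PUE-style overhead multiplier are illustrative assumptions.

```python
# Rough power-budget sketch. Per-rack figures are from the article's quote;
# the rack count and overhead multiplier are illustrative assumptions.
racks = 20
kw_per_rack_today = 2        # current average draw per rack (kW)
kw_per_rack_future = 30      # projected high-density draw per rack (kW)

it_load_today_kw = racks * kw_per_rack_today      # 40 kW
it_load_future_kw = racks * kw_per_rack_future    # 600 kW

pue = 1.8                    # assumed cooling/distribution overhead multiplier
facility_load_future_kw = it_load_future_kw * pue

print(f"IT load today:  {it_load_today_kw} kW")
print(f"IT load future: {it_load_future_kw} kW")
print(f"Facility load future (incl. cooling): {facility_load_future_kw:.0f} kW")
```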
Servers alone can consume 50% of the power coming into the data center. The first step in reducing power consumption is to attack the problem where data centers can reap the most gains: consolidating and virtualizing application servers.
In environments with lots of direct-attached storage, as much as
27% of the power going into the data center is being consumed by storage. These
days, many organizations have their own power generation units for powering
their data center grid.
Next is the crash recovery issue. For instance, if a mission-critical application fails due to a hardware failure, what is the recovery strategy to bring the application back with minimum downtime? You can keep spares in stock 'so that you can just replace the hardware and host the application on a new piece of hardware', suggests Makhija.
Connectivity issues are another concern that CIOs face. In fact,
one interesting aspect that came up from our survey was availability of network
equipment. What if one switch fails somewhere in your large data center? For
this, you need real time monitoring of the networking equipment, and failover
support for the most critical ones.
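As a bare-bones illustration of real-time monitoring of network equipment, the sketch below simply pings a list of critical switches and flags any that stop responding; the hostnames are placeholders, and a production set-up would use SNMP traps or a dedicated network management system rather than a script.

```python
# Minimal reachability check for critical switches; hostnames are placeholders.
import subprocess

CRITICAL_SWITCHES = ["core-sw-01.example.local", "core-sw-02.example.local"]

def is_reachable(host: str) -> bool:
    """Return True if the host answers a single ICMP echo request."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for switch in CRITICAL_SWITCHES:
    if not is_reachable(switch):
        print(f"ALERT: {switch} is not responding; trigger failover checks")
```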
Data centers house a lot of servers and other equipment that generate huge amounts of heat. As temperature rises, it adversely affects the performance of the data center, and the chances of wear and tear on equipment also increase. Therefore, cooling plays a very important part. Before building a data center, you need to analyze your cooling requirements and design accordingly. For an existing data center, you should put in temperature monitoring and control equipment. One respondent said that for additional cooling on demand, you could also deploy emergency chillers.
Monitoring and management is another element; it helps CIOs evaluate the health of a data center on a real-time basis. According to our survey, respondents were evenly split between a 24x7 monitoring set-up manned by in-house staff and a completely outsourced management model. Very few respondents said that they did not have a dedicated monitoring set-up. So, one thing is pretty clear: 24x7 monitoring should be in place for any data center, whether in-house or outsourced.
Disaster Protection
Disasters happen all the time, and the businesses that survive them best win. To ensure survivability, businesses must have a disaster recovery (DR) program and infrastructure in place. Says the CIO of a large enterprise, "In this day and age of RoI, IT managers must think of the basic and critical business objectives of a DR program and infrastructure. IP-SAN serves near-line data protection needs."
Businesses know that controller-based replication is a time-tested solution for disaster recovery. But few people understand the different types of replication and how each meets their needs.
Many IT organizations today are challenged with moving their
online and near-line data to offline tape backups and archives. The requirement
for 24x7 application uptime dramatically shrinks the backup window. Yet the data
volume on the multitude of servers, desktops, and laptops continues to grow at a
rapid rate. While tape arrays and incremental backup solutions help achieve
shorter backup windows, they are often complex and costly both for backup and
restore. Hence, not many enterprises back up regularly, if at all.
For the data that is backed up, the latency of restoring it from tape is usually long. If the backup log, ie, the catalog, is maintained online and the data is kept in a tape library, restoring the data takes time. It could take hours, perhaps days, to retrieve the tape from an offsite vault before data can be restored.
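A quick back-of-the-envelope calculation shows why tape restores can stretch into hours; the data volume, drive throughput, and vault retrieval time below are illustrative assumptions, not survey figures.

```python
# Illustrative restore-time estimate; all inputs are assumed values.
data_to_restore_gb = 2_000          # 2 TB of backed-up data
tape_throughput_mb_s = 80           # assumed sustained tape drive throughput
vault_retrieval_hours = 4           # assumed time to fetch tapes from offsite

restore_hours = (data_to_restore_gb * 1024) / tape_throughput_mb_s / 3600
total_hours = vault_retrieval_hours + restore_hours
print(f"Streaming restore: {restore_hours:.1f} h; "
      f"total incl. vault retrieval: {total_hours:.1f} h")
```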
To address these challenges, IT departments are now deploying low-cost ATA disk arrays as a staging area, either as a front-end to a tape library or as a stand-alone appliance on the network.
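A minimal sketch of this disk-staging idea follows: back up to the disk array first, fast enough to fit the shrinking backup window, then migrate the copy to tape off-hours. All paths and steps are hypothetical; real environments would use dedicated backup software rather than plain file copies.

```python
# Disk-staging sketch (disk-to-disk-to-tape); paths are hypothetical.
import shutil
import time
from pathlib import Path

SOURCE = Path("/data/app")                  # hypothetical application data
STAGING = Path("/mnt/ata-array/staging")    # low-cost ATA disk array

def stage_backup() -> Path:
    """Copy the data set to the disk staging area with a timestamped name."""
    target = STAGING / time.strftime("backup-%Y%m%d-%H%M%S")
    shutil.copytree(SOURCE, target)
    return target

def migrate_to_tape(staged: Path) -> None:
    # Placeholder: off-hours, the staged copy would be written to the tape
    # library by the backup software and then pruned from disk.
    print(f"Would migrate {staged} to the tape library")

if __name__ == "__main__":
    staged_copy = stage_backup()
    migrate_to_tape(staged_copy)
```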
This approach minimizes the impact on the application hosts and
effectively eliminates the backup window issue. It also enables backup servers
and the associated tape drives to be consolidated, to achieve further cost
savings. Most Indian companies are looking at building DR capabilities by
utilizing their existing Ethernet infrastructure and already available IP
skill-sets of their IT technicians.
Gyana Ranjan Swain
gyanas@cybermedia.co.in