
CAMPUS NETWORKING: Gung ho about Gigabit

VoicenData Bureau

B-schools in the US and other developed countries have been providing intranet connectivity to hostels and student dormitories for quite some time. Of late, Indian institutes have also felt the need for such connectivity at their premises.


As its student strength increased, the Indian Institute of Management, Lucknow (IIML) began experiencing a growing strain on its computer center resources. In the beginning of 2002, the institute's management decided to augment the campus network so that students could have intranet connectivity in their rooms. It was envisioned that this would encourage students to use desktop/laptop computers from their rooms, thus reducing the demand for resources at the computer center. The computer center, in turn, would be equipped with high-end clients for running limited-license software, specialized mathematics/statistical software, and development tools.

Project Activity List

Activity                                             Estimated Duration (days)
Marking of fiber cable routing                       4
Marking of UTP outlet jack and rack locations        2
UTP cable laying and fixing of information outlets   25
Fixing of LIUs, racks, and switches                  7
Termination of UTP at the jack/patch panel           8
Fiber-optic cable laying                             25
Fiber cable characterization                         7
Integration of the networking components             7
Testing                                              7
Certification of information outlets                 10
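Taken end to end, the estimated durations above sum to well over the roughly two-and-a-half-month implementation window reported later in this article, which implies several activities ran in parallel. A minimal sketch of that arithmetic (activity names abridged):

```python
# Estimated durations (in days) from the project activity list above.
durations = {
    "Marking fiber cable routing": 4,
    "Marking UTP outlet jack and rack locations": 2,
    "UTP cable laying and fixing information outlets": 25,
    "Fixing LIUs, racks, and switches": 7,
    "Termination of UTP at jack/patch panel": 8,
    "Fiber-optic cable laying": 25,
    "Fiber cable characterization": 7,
    "Integration of networking components": 7,
    "Testing": 7,
    "Certification of information outlets": 10,
}

# Upper bound: every activity done strictly one after another.
total = sum(durations.values())
print(total)  # 102 days if nothing overlaps
```

Since the project actually ran from mid-March to the end of May 2002 (about 75 days), cabling and marking work evidently overlapped.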

Gigabit Network Design Issues



Although some inter-building multimode optic fiber links existed, the gigabit networking project was, for all practical purposes, a green-field one. It required laying new fiber throughout the campus to interconnect the different hostel buildings. It was also decided to include about 61 faculty residences in the project, as this would give faculty access to intranet resources, including class material, office machines, the Internet, and e-mail services, from home. However, this made the network design much more complex, as the houses are spread out, so suitable traffic-concentration points were identified for inclusion in the backbone.


The network, conceptualized in January 2002, had to be designed, implemented, and made operational by June 2002. The objective was a scalable network that would minimize the scrapping of equipment and cables already in use. And of course, it had to be a minimum-cost network.

Gigabit Ethernet versus ATM



In general, there are two options for very high-speed backbone networks: Gigabit Ethernet and ATM. ATM provides connection-oriented services, has built-in QoS features, and can support multiple classes of service with varying priorities. The biggest advantage of Gigabit Ethernet over ATM is its seamless integration with existing IEEE 802.3-compliant 10 and 100 Mbps Ethernet LANs, mainly because it was developed by the IEEE 802.3z Gigabit Task Force.

The IIML team designed the network as a three-tier architecture.



The first tier comprises one or two gigabit switches housed at central locations, such as the computer center and the faculty block. These switches have UTP (1000Base-T) and fiber (1000Base-SX) ports for connecting the firewall and server farms.


They also have enough 100Base-TX/FX ports to connect to the existing 100 Mbps switch stack housed in the computer center, which eases the migration of the existing Fast Ethernet network to the gigabit backbone.

Gigabit Ethernet Architecture at IIM Lucknow

The second tier consists of backbone switches with gigabit uplink ports connected to the tier-1 gigabit switches over 50/62.5-micron multimode fiber. The tier-2 switches are normally concentration points aggregating traffic from a group of hostel/residential blocks; they also have 100Base-TX/FX ports to connect to information outlets or to tier-3 switches. Tier-3 switches provide direct 100Base-TX connectivity to the information outlets in various rooms.


Since the per-port cost of a switch is not significantly more than that of a hub, IIML decided on a purely switched configuration, in which each information outlet connects directly to a switched port and gets a dedicated 100 Mbps. The network was also designed to be future-proof: each information outlet is wired with 4-pair Category 5 UTP, so that it may become possible to switch to gigabit speeds later on.

Most of the existing fiber was 62.5-micron multimode and hence could not be used for the gigabit links due to distance limitations. It was decided to lay 6-core 50-micron multimode fiber for inter-block connectivity wherever the distances were greater.

The 62.5-micron multimode fiber was used only for short hauls, especially across hostels within the same cluster. IIML decided against single-mode fiber, as 1000Base-LX ports were very expensive compared to SX ports.
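The distance limitations behind these fiber choices come from the IEEE 802.3z specification. A minimal sketch using nominal 802.3z reach figures (worst-case modal-bandwidth grades are assumed here; higher-grade fiber stretches the SX figures to about 275 m and 550 m respectively):

```python
# Nominal IEEE 802.3z maximum link lengths in meters, assuming the
# worst-case modal-bandwidth grade for each multimode fiber type.
GIGABIT_REACH_M = {
    ("1000Base-SX", "62.5um-MMF"): 220,
    ("1000Base-SX", "50um-MMF"): 500,
    ("1000Base-LX", "62.5um-MMF"): 550,
    ("1000Base-LX", "50um-MMF"): 550,
    ("1000Base-LX", "SMF"): 5000,
}

def link_ok(port: str, fiber: str, distance_m: int) -> bool:
    """Check whether a planned run fits within the nominal reach."""
    return distance_m <= GIGABIT_REACH_M[(port, fiber)]

# A 400 m inter-block run: fine on new 50-micron fiber,
# but out of spec on the legacy 62.5-micron plant.
print(link_ok("1000Base-SX", "50um-MMF", 400))    # True
print(link_ok("1000Base-SX", "62.5um-MMF", 400))  # False
```

This is why the legacy 62.5-micron fiber was relegated to short intra-cluster hauls while new 50-micron fiber carried the longer inter-block gigabit links.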


Layer-3 versus Layer-2 Switches



Layer-3 switches can inspect the IP address, for example, when making switching decisions; in this respect they behave almost like routers. Since an L3 switch is approximately 25-40 percent more expensive than an L2 switch, IIML decided on an optimal combination of the two types: L3 switches at tier-1 and tier-2, and L2 switches at tier-3.
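The economics of that mixed choice can be illustrated with a back-of-the-envelope sketch. The per-port prices and port counts below are hypothetical; only the L3-over-L2 price premium comes from the text:

```python
# Hypothetical per-port prices; the ~30% L3 premium sits inside the
# 25-40 percent range cited in the article. Port counts are illustrative.
L2_PORT_COST = 100.0
L3_PORT_COST = L2_PORT_COST * 1.30

def config_cost(l3_ports: int, l2_ports: int) -> float:
    """Total switch-port cost for a given L3/L2 split."""
    return l3_ports * L3_PORT_COST + l2_ports * L2_PORT_COST

ports_total = 1200                      # illustrative campus-wide port count
all_l3 = config_cost(ports_total, 0)    # routing intelligence everywhere
mixed = config_cost(200, ports_total - 200)  # L3 only at tiers 1-2
savings = all_l3 - mixed
print(f"savings vs all-L3: {savings:.0f}")
```

The premium is only worth paying at the tiers that actually route between subnets; edge ports that just forward frames within a VLAN do fine on L2.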

Dilip Mohapatra, computer centre manager, and Dr V Sridhar, professor in-charge of the computer centre, drew up broad network specifications and floated preliminary tender specifications in January 2002. Prospective vendors were asked to firm up their design specifications based on the inputs given by IIML. After a complete review of the different vendors' tender responses, CMC was awarded the contract.

The Enterasys Matrix E1 switch, with six gigabit Ethernet ports and 48 10/100Base-TX ports, was selected as the core switch. The SSR 2000, another high-end L3 switch with two gigabit interfaces, eight 10/100Base-FX ports, and 16 10/100Base-TX ports, was also chosen at tier-1. Enterasys Vertical Horizon (VH) L3 switches were chosen for tier-2.


These switches have 24 10/100 Mbps ports and a gigabit uplink to the tier-1 switches. Enterasys VH L2 switches were selected at tier-3 to provide 100 Mbps switched connectivity to individual information outlets.

Project Implementation



The project started in mid-March 2002, and implementation was completed by the end of May 2002, at a total cost of Rs 75 lakh. A four-year annual maintenance contract was also signed with the vendor for active components, such as switches and the network management software.

Overall, about 4,000 meters of optic fiber and 30,000 meters of UTP cable were laid. At some locations, where hostels within a block are close together, fiber was terminated on 100Base-TX switch ports using fiber-UTP converters, in order to reduce spending on fiber ports on the switches. Around 800 information outlets were provided in the various hostels and residences. Matrix E1 and SSR 2000 switches serve as the backbone switches, as indicated above.


Around five L3 switches and 41 L2 switches were installed at various locations on the campus. In all, 33 19-inch racks were deployed to house the switches, along with 39 patch panels for interconnecting information outlets to switch ports, 16 LIUs, 17 fiber splices, and six fiber-UTP converters.

Network Management



Virtual LANs (VLANs) have been defined over the switched configuration in portions of the network to keep broadcast and unwanted traffic in check. VLANs also provide security for portions of the network, such as those of the faculty and the server farms. All the switches chosen are 802.1Q-compatible, and hence VLAN-capable. VLANs can also be used in the future when QoS is needed for different classes of traffic, and VLAN-based multicast networks can be designed to facilitate group-based collaborative applications.
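Since all the switches are 802.1Q-compatible, VLAN membership travels with each frame as a 4-byte tag: a TPID of 0x8100 followed by a 3-bit priority, a drop-eligible bit, and a 12-bit VLAN ID. A minimal sketch of that layout (the "faculty VLAN 20" used below is a hypothetical example, not IIML's actual numbering):

```python
import struct

def parse_dot1q_tag(tag: bytes):
    """Parse a 4-byte IEEE 802.1Q tag into (priority, dei, vlan_id)."""
    tpid, tci = struct.unpack("!HH", tag)
    if tpid != 0x8100:
        raise ValueError("not an 802.1Q tag")
    priority = tci >> 13      # 3-bit priority code point (basis for QoS)
    dei = (tci >> 12) & 0x1   # drop-eligible indicator
    vlan_id = tci & 0x0FFF    # 12-bit VLAN identifier
    return priority, dei, vlan_id

# A frame tagged for a hypothetical faculty VLAN 20 at priority 5:
tag = struct.pack("!HH", 0x8100, (5 << 13) | 20)
print(parse_dot1q_tag(tag))  # (5, 0, 20)
```

The 3-bit priority field in the same tag is what makes the future QoS plans mentioned above possible without re-cabling anything.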

All the switches were assigned IP addresses, so that they could be managed from a Simple Network Management Protocol (SNMP) manager at the computer centre. IIML went in for the Enterasys NetSite network management system (NMS) for managing the network. It was chosen over the widely used HP OpenView for cost reasons; moreover, since most of the switches were from a single vendor (Enterasys), a proprietary product was felt to be the better fit. Using NetSite, the network administrator is also able to manage the existing Intel switch stack and the two Cisco routers.
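Assigning management addresses to every switch is straightforward to plan from a single subnet. A minimal sketch using Python's standard `ipaddress` module, covering the five L3 and 41 L2 switches described in this article (the 10.0.0.0/26 subnet and the hostnames are assumptions for illustration, not IIML's actual plan):

```python
import ipaddress

# Hypothetical management subnet: a /26 gives 62 usable host addresses,
# comfortably enough for the 5 L3 + 41 L2 = 46 switches on the campus.
mgmt_net = ipaddress.ip_network("10.0.0.0/26")
hosts = list(mgmt_net.hosts())

switches = [f"l3-sw{i}" for i in range(1, 6)] + \
           [f"l2-sw{i}" for i in range(1, 42)]
assignments = dict(zip(switches, hosts))

print(len(assignments), assignments["l3-sw1"], assignments["l2-sw41"])
# Each address would then be registered with the SNMP manager in the NMS.
```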

IIML is planning to develop a comprehensive open-source network management library, which will be used to augment the NMS in future.

In the near future, the firewall and some of the main servers will be fitted with gigabit network interface cards, so that they can connect directly to the gigabit ports of the Matrix E1 switch, eliminating congestion at those points of the network.

The network went operational in phases starting from 15 June 2002, and by the end of July 2002 all connectivity problems had been sorted out with help from the contractor. Dilip Mohapatra, computer centre manager, IIM Lucknow, says, "This was the single largest networking project I have taken up. I am glad that the project was completed on time, within the allocated budget. IIML is one of the few institutes in India to have deployed a gigabit campus network, providing broadband connectivity not only to student hostels but also to all faculty residences on the campus."

Dr V Sridhar, associate professor (IT and systems), Indian Institute of Management, Lucknow
