Arun Netravali is an exception among successful Indians in the US: a
researcher who has chosen to stay close to research rather than go for a
start-up. As the president of Bell Labs, he not only guides groups of brilliant
researchers–some of them Nobel laureates–but is a researcher par excellence
himself. He confesses to not getting enough "quality uninterrupted time" for
his own research. But the vision of this hardcore research person goes well
beyond the boundaries of laboratories.
In an interview with Shyamanuja Das, he shares his thoughts on the evolution
of networks and network technologies.
Everything is getting connected to everything else. Where is it all leading?
At present, people are not actually connecting everything to
everything else. That will only happen when the cost of connectivity comes down.
Only then will you know what you get by connecting these things. You will have
software that will do value-analysis
of the information collected from this connectivity. Once you have that, you
will not connect everything to everything. You will connect only those things
where you get some value.
You talked of a "communication skin" around the world. What do you
mean by that?
There are three implications to that. First, in a human body,
different activities happen at different parts. All these activities are
transferred to the brain via the skin. So skin is like an overlaying network
that connects all the parts of the human body.
Second, skin is everywhere. It covers the whole body. Similarly,
I believe that the communication media will cover the whole earth. In ten years,
you might have half a billion kilometres of fibre laid across the world. Nothing
will be very far from the network. It is the ubiquity of the network that I
mean.
The third dimension is the tremendous bandwidth that will be
available in the near future.
All this combined will give you the ability to connect
without going too far.
In human history, most of the systems that have evolved, including the
human body that you just cited, have centred around something. They
have been centralized systems. But the way today’s network of networks, the
Internet, is evolving, there is no central thing to it…
That would be one difference, because there is no central
thing that has to be scaled. It is a large distributed system that can be
enhanced and scaled up by anyone.
But will there be just one network?
No. There will be many networks. There will be many owners of
these networks. There will be many kinds of networks. But they
will all be interconnected. Some will be centrally controlled. Some will have
distributed control. Some will be wireless networks. Some will be wireline.
There will be some people who will own some segments of these networks. There
will be all sorts of leasing arrangements between owners of these networks. If I
am an owner and you are an owner, sometimes I will carry your traffic. I will
have some business agreement with you. All of the previous things–just one
phone company, just one architecture, just one service, i.e., voice–are
becoming obsolete.
But eventually what will the network operator charge for? Distance-based
charging has given way to charging for bandwidth. You say it will soon be too
cheap to meter. So what will he charge for?
Just like cable TV companies, they will charge flat fees.
Your cable TV service provider does not charge you more if you watch more
programmes.
AT&T has already announced that it will have flat-rate
service. Five hundred minutes' worth of flat service will be given to you.
Whether you use 400 minutes or fewer, you pay the same flat fee. So all kinds
of new billing systems will come. There will be many different services. The
operators will be bundling services, and they will have a large number of
pricing and discount choices to offer.
It is like what happens in departmental stores. They deal
with a large number of products. Each store prices products differently. They
have different combinations of prices through bundling, and discounts based on
the value of purchase. It is very flexible.
The network service providers will similarly have flexibility
in offering these discounts.
Are you saying that technology will be taken for granted? That it will become
secondary, and what will matter is marketing, innovative pricing and so on?
No, it is not that. For us, the equipment providers, the
challenge is to find out what the service providers want. The service providers
have to figure out what their customers want. And not all of us will have the
same thing. My company may choose to invest in a particular product and,
therefore, it may have the best of that. Some other company may choose to invest
in something else. One company cannot invest in everything at once. I think what
will happen is that people will have to do the best planning, make the best
combinations–whether it is to provide technology or provide services–that
will work out for them. There will be all sorts of combinations.
Won’t there be mega service providers offering almost everything?
Sure there will be. But the mega service providers may have
problems. They may be slow. Smaller niche players may adopt technology
much faster. They will have different propositions. What I am saying is that it
will not be a centrally controlled game. It will be much more entrepreneurially
oriented.
Coming to the technology front, you identified four technologies as the
drivers of change–optics, semiconductors, software, and wireless. Starting with
optical fibre, there is a growing gap between the speed at which current
photonics operates and the speed at which current electronics operates. How do
you see this mismatch affecting the growth of communication technologies?
I think there will be a combination of electrical and
optical. That combination today will be of a certain type, based on the
capabilities each of these technologies has at present. Tomorrow we may
see a different kind of combination, because each one will change.
You are absolutely right that optical is much, much faster,
but it can do a limited number of things. Silicon is slower but more flexible.
People will learn how to combine the two.
That is probably because they are in different phases of their development
cycles. Silicon has been there for years; fibre is comparatively new.
That is one reason. But there is also another reason. There
is no optical memory. There is no optical transistor action. You cannot take
something in optics and turn it on and off very quickly. It is like a big pipe.
Once you open the pipe, the amount of data you can carry is very high but
opening and closing is very slow. In electronics, you open and close very fast.
The amount of data that can go through it is smaller. So, there are some basic
things we do not know how to do in optical fibres. People have been trying to do
that for quite some time now, but they have not invented an optical transistor
or optical memory. What they have done is very crude and very slow. People do
not know how to conquer physics.
But what about your lambda router? It just uses mirrors to reflect...
No, no. The mirrors cannot be turned on and off in
nanoseconds. But once you turn a mirror, it does not care how much light goes
through it. It does not matter if it is terabits. Today, we move them
mechanically and cannot do it in nanoseconds. People are working on it, but that
capability is certainly not there.
In wireless, you talked of a capacity increase of one thousand times. But
what is thinkable today, even with the best of 3G/2.5G technologies, is maybe
at best a 10-15 times increase over the analog technologies. Then, unlike
fibre, which you can lay as much as you like, you cannot create radio spectrum.
First of all, I did not say a thousand times has happened
already. But look at what has happened. A lot more spectrum is being made
available: not a thousand times, but several times more. Second, the
modulation schemes have become much better. It is possible to transmit many
more bits per hertz. Third, there is a lot of compression that has happened.
We started with 64 kilobits. Then we used 13 kilobits, and now we are giving
reasonable-quality voice at 8 kilobits. And we can go to 4 kilobits very soon.
Fourth, there is the new multiple-antenna technology that I
talked about. Each of these takes the capacity of wireless up by a factor of
five to ten. When you combine all of these, you get a much larger number. But
maybe one thousand is still a big number.
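The arithmetic here is multiplicative rather than additive: independent gains in spectrum, modulation, compression, and antennas compound. A back-of-the-envelope sketch in Python illustrates the point; only the "factor of five to ten" range comes from the interview, and the specific per-factor values below are assumptions chosen for illustration.

```python
# Illustrative sketch: wireless capacity gains compound multiplicatively.
# Each number is an assumed gain from the "factor of five to ten" range
# mentioned in the interview, not a measured figure.
factors = {
    "additional spectrum": 5,   # more spectrum made available
    "better modulation": 5,     # more bits per hertz
    "voice compression": 8,     # e.g. 64 kbit/s -> 8 kbit/s codecs
    "multiple antennas": 5,     # multiple-antenna gains
}

combined = 1
for gain in factors.values():
    combined *= gain

print(combined)  # 5 * 5 * 8 * 5 = 1000
```

Even modest per-factor gains multiply into three orders of magnitude, which is why a thousandfold increase is plausible without any single breakthrough.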
Do you agree with Negroponte’s view that all the spectrum will finally be
utilized by things that move?
No. Currently, we are working on a technology called "fibre
in the air". We put an antenna on a building, and from the central
office a laser beam is thrown at that antenna. For the last mile, it is a very
effective way of giving you hundreds of megabits, or even gigabits, per second,
and it is much less expensive than laying a cable. In cities like Mumbai, it is
very cost-effective. So, I really do not agree with that observation. It
really depends upon the situation.
When do you see large-scale deployment of software switches?
What do you mean by software switches?
I mean a general-purpose box with the entire switching functionality in
software.
That will take some time. First, the instruction set of
general-purpose computers is not matched to what we do in switching. Today’s
special-purpose switches are optimized for that. If you make such
general-purpose boxes, you will either waste capability or fail to match the
performance. I think there is a long way to go before we shift from
optimized, low-cost, special-purpose hardware to general-purpose hardware. There
is progress, of course. Each year we get closer and closer. Look at digital
signal processors (DSPs), for example. They are now the big thing in wireless.
They allow you to do all signal processing through software. The DSP is
becoming a general-purpose platform.
You also talked of layering in communication, the way it is there in
computing. What do you mean by that?
The switch has special hardware, but it also has stored
programme control–the firmware. The firmware can be developed on a
general-purpose machine and downloaded into the switch. What I meant was that
this will expose a very rich set of APIs with which people can write new
applications.
Won’t that mean big companies like Lucent will get tough competition from
the smaller start-ups?
Big companies will do what they are best at doing. Small
companies will do what they are best at. Small companies might look at niche
areas. Big companies will work to create the infrastructure. They have to make
sure that whatever they make works with others.
That reminds me of another issue–interoperability. Today, with so many de
facto standards, is it not a tough task? Can you still provide the same kind of
network reliability with such disparate, though individually brilliant, pieces
of technology?
Yes. It is a tough task, and it is another challenge for large
companies. Small companies make specialized boxes or software. The challenge
comes in integrating them when you want to provide an end-to-end solution. You
have to go much beyond just standards and protocols.
Also, has the standardization process gone through significant changes? The
3G that we are discussing today is very different from what the ITU proposed
for IMT-2000. In a way, we have accepted that there will not be a single
worldwide standard.
That is a different question. There will be multiple
standards. These multiple standards may produce another opportunity. Somebody
will produce phones with multiple modes. Somebody will produce gateways that
will connect one standard-based network to the other and so on.
You talked of the Hi-IQNet. Who will build the IQ into that network?
The one good thing about the Internet is that it does not favour
anybody. So big and small companies, and individual programmers in every part
of the world, will contribute to that.
How do you see convergence?
Convergence to me is how much sharing you do. Today, there are many networks.
They may never fully converge. They may share bandwidth, or even parts of the
network. And that is also convergence. It is not necessary that everything
should "converge" to become one.