
Creating India’s first-generation indigenous AI infrastructure

The ‘India AI Programme’ aims to bring to the forefront efforts to create the first generation of indigenous AI infrastructure in the country



On October 13, the Union Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, spoke about the India AI Programme at a press briefing. Under the ambit of the Ministry of Electronics and Information Technology (MeitY), the programme is set to define the development of datasets, as well as indigenous compute capability, for what is being colloquially referred to as Indian artificial intelligence (AI). In reality, the policy is a much-needed move that can not only bring multi-billion-dollar contributions to the country’s economy but also considerably democratise access to technology within the nation.


“We are creating AI compute capacity in the public sector, where C-DAC is building an indigenous AI compute service—Param Rudra.”


Rajeev Chandrasekhar, Minister of State, Ministry of Electronics and Information Technology, Government of India

WHAT IS ‘INDIAN’ AI?

To put things simply, what is being referred to as ‘Indian’ AI is essentially a version of global AI applications and infrastructure, tuned specifically to Indian enterprise sensibilities. An early attempt at setting this up is the MeitY-backed Bhashini, the Centre’s datasets project that seeks to develop and deploy local language databases within indigenously developed mobile applications.


It is interesting to note that Bhashini, in many ways, laid the foundation stone for the development of Indian AI infrastructure. By creating repositories of local language data for 22 Indian languages, Bhashini set out the early approach towards creating a data infrastructure framework, leading up to the formation of the India AI Programme.

‘Indian’ AI, on this note, refers to the development of local language datasets, as well as locally accessible hardware that can power AI applications at lower costs. At the heart of the India AI Programme lies the fact that AI, as we know it today, is considerably expensive to operate, making it a near-exclusive club for Big Tech companies to play in. India wants to alter this.

BIRTH OF THE INDIAN AI INFRA


Detailing this further in a media interview, Chandrasekhar said: “We are creating AI compute capacity in the public sector, where the Centre for Development of Advanced Computing (C-DAC) is building an indigenous AI compute service—Param Rudra. For the private sector, we have submitted a proposal to the government and it will need funding. The idea is to create a significant amount of Graphics Processing Unit (GPU) capacity in the private sector, with the government as a partner. This will be like a public-private partnership. The latter will give AI compute as a service for startups, researchers and for anyone who has a model that needs to be trained.” This outlines what the programme will bring with it.

India’s infrastructure for AI chips is crucial as most chips used for AI tasks in India are presently designed in the US, and manufactured in Taiwan.


Set to be announced in January 2024, India AI will seek to rope in Big Tech and use its solutions to design chips that are cheaper to scale and deploy in the market than the most commonly used commercial standard at the moment, from US chipmaker Nvidia.

At the heart of this effort will be an intention to make chips available to academia, researchers and startups. Academia, in particular, has long complained about the lack of access to resources that could help scale applied research efforts in AI. Unlike large conglomerates, institutions in India have little budget to acquire high-performance computers, complete with servers or cloud-based GPU access, to speed up research efforts. Owing to this lack of financial resources, Indian academia is, for now, lagging behind its US counterparts in the pace of adoption and adaptation of AI research.

WHAT IS THE PRIVATE SECTOR DOING?


As the country gears up towards the announcement of the India AI Programme, private entities are accelerating work with larger available capital. On December 6, Hiranandani Group’s data centre project, Yotta Data Services, announced a partnership with Nvidia to create an on-cloud supercomputer, called Shakti Cloud. The service is set to go live this month itself, and customers will have access to 4,096 GPUs on the cloud through its platform.

By June this year, the GPU capacity is expected to increase fourfold, and by the end of 2025 it is expected to reach its target of eight times the number of chips the service launched with, or roughly 32,768 GPUs against the initial 4,096.

Access to top-grade Nvidia GPU chips is crucial for running generative AI applications, the flavour of the season in 2023. These chips are among the most reliable and most powerful available, which justifies their massive market demand at the moment.


Bhashini laid the early approach towards creating a data infrastructure framework—leading up to the formation of the India AI Programme.

Nvidia, however, has not partnered with Yotta alone. On September 8, the company announced deals with two of the country’s largest conglomerates, Reliance Industries and Tata Sons. While Jio Infocomm, the tech and telecom subsidiary of Mukesh Ambani’s RIL, will leverage Nvidia to build local language development infrastructure, Tata’s applications are two-fold: one, building an AI cloud served directly to clients through a cloud platform running on Nvidia chips; and two, developing AI-driven applications for clients through Tata Consultancy Services, India’s largest IT services firm.

DEVELOPMENT OF THE IDEA

Building India’s chip capability will also, in the mid-term future, tie in with the India Semiconductor Mission (ISM). Under the mission, the Centre continues to field new applicants for the production-linked incentive (PLI) scheme in this sector. Given that indigenous chipmaking will take a while to be realised, and with the Centre modernising India’s existing fab, the Semi-Conductor Laboratory (SCL) in Mohali, the ISM could well help fast-track the development.

The development of India’s infrastructure for AI chips is also crucial from a geopolitical standpoint. At present, most chips used for AI tasks in India are designed in the US and manufactured in Taiwan. Given the propensity for geopolitical conflict, with two wars ongoing at the moment, designing and developing chips locally will give India a degree of immunity, as well as soft bargaining power with nations that do not have the bandwidth to source commercial-grade chips for academia.

While there is no firm timeline at the moment, industry officials with knowledge of the matter point to a tentative two-year plan for the first batch of indigenously developed chips to physically arrive. This timeline, many have said, is practical too: Indian-origin chip designers and engineers make up one-fifth of all chip designers globally. Cashing in on that talent pool, therefore, only seems the right idea going forward.

By Vernika Awal

feedbackvnd@cybermedia.co.in
