
IIT Madras’ Centre for Responsible AI and Ericsson partner for joint research in Responsible AI

Generative AI models based on attention mechanisms have recently gained significant interest for their exceptional performance.

VoicenData Bureau

Indian Institute of Technology Madras’ (IIT Madras) Centre for Responsible AI (CeRAI) today announced that it is partnering with Ericsson for joint research in the area of Responsible AI. To commemorate the occasion, a Symposium on Responsible AI for Networks of the Future was organized, where leaders from Ericsson Research and IIT Madras discussed developments and advancements in the field of Responsible AI.


During the event held at the IIT Madras campus today, Ericsson signed an agreement to partner with CeRAI as a ‘Platinum Consortium Member’ for five years. Under this MoU, Ericsson Research will support and participate in all research activities at CeRAI.

The Centre for Responsible AI is an interdisciplinary research centre that aims to become a premier hub for both fundamental and applied research in Responsible AI, with immediate impact on the deployment of AI systems in the Indian ecosystem.

AI research is of high importance to Ericsson, as 6G networks will be autonomously driven by AI algorithms.


Addressing the symposium, Chief Guest, Prof. Manu Santhanam, Dean (Industrial Consultancy and Sponsored Research), IIT Madras, said, “Research on AI will produce the tools for operating tomorrow’s businesses. IIT Madras strongly believes in impactful translational work in collaboration with the industry, and we are very happy to collaborate with Ericsson to do cutting edge R&D in this subject.”

Speaking on the occasion, Dr Magnus Frodigh, Global Head of Ericsson Research, said, “6G and future networks aim to seamlessly blend the physical and digital worlds, enabling immersive AR/VR experiences. While AI-controlled sensors connect humans and machines, responsible AI practices are essential to ensure trust, fairness, and privacy compliance. Our focus is on developing cutting-edge methods to enhance trust and explainability in AI algorithms for the public good. Our partnership with CeRAI at IIT Madras is aligned with the Indian Government’s vision for the Bharat 6G program.”

A panel discussion on ‘Responsible AI for Networks of the Future’ was organized during the symposium to commemorate the partnership, and some of the current research activities being carried out at the Centre for Responsible AI were showcased.


Elaborating on the partnership between CeRAI and Ericsson, Prof. B. Ravindran, Faculty Head, CeRAI, IIT Madras, and Robert Bosch Centre for Data Science and AI (RBCDSAI), IIT Madras, said, “Networks of the future will enable easier access to high performing AI systems. It is imperative that we embed responsible AI principles from the very beginning in such systems. Ericsson, being a leader in future networks is an ideal partner for CeRAI to drive the research and for facilitating adoption of responsible design of AI systems.”

Speaking about the work that would be taken up under this collaboration, Prof. B. Ravindran added, “With the advent of 5G and 6G networks, many critical applications are likely to be deployed on devices such as mobile phones. This requires new research to ensure that AI models and their predictions are explainable and to provide performance guarantees appropriate to the applications they are deployed in.”

Some of the key projects presented during this Symposium include:

  • The project on large language models (LLMs) in healthcare, which focuses on detecting biases shown by the models, developing scoring methods for the real-world applicability of a model, and reducing biases in LLMs. Custom scoring methods are being designed based on the Risk Management Framework (RMF) put forth by the National Institute of Standards and Technology (NIST), the U.S. federal agency for advancing measurement science and standards.
  • The project on participatory AI addresses the black-box nature of AI at various stages, including pre-development, design, development and training, deployment, post-deployment and audit. Taking inspiration from domains such as town planning and forest rights, the project studies governance mechanisms that enable stakeholders to provide constructive inputs for better customisation of AI, improve accuracy and reliability, and raise objections over potential negative impacts.
  • Generative AI models based on attention mechanisms have recently gained significant interest for their exceptional performance in various tasks such as machine translation, image summarization, text generation, and healthcare, but they are complex and difficult for users to interpret. The project on interpretability of attention-based models explores the conditions under which these models are accurate but fail to be interpretable, algorithms that can improve their interpretability, and the patterns in the data these models tend to learn.
  • Multi-Agent Reinforcement Learning for trade-off and conflict resolution in intent-based networks: Intent-based management is gaining traction in telecom networks due to strict performance demands. Existing approaches often rely on traditional methods, treating each closed loop independently and lacking scalability. This project studies a Multi-Agent Reinforcement Learning (MARL) method to handle complex coordination and encourage loops to cooperate automatically when intents conflict. Current efforts explore the generalization abilities of the model by leveraging explainability and causality for joint actions of agents.
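To give a flavour of the bias-detection work described in the first project, the sketch below shows one simple kind of custom bias score: comparing how often a model refuses otherwise-identical prompts across demographic groups. This is an illustrative toy metric, not the actual scoring method being designed at CeRAI; the function name, the refusal heuristic, and the sample outputs are all hypothetical.

```python
def refusal_rate_gap(outputs_by_group):
    """Toy bias score: the gap in refusal rates between demographic
    groups for otherwise-identical prompts. A larger gap suggests the
    model treats groups differently. (Hypothetical metric for illustration.)
    """
    rates = {}
    for group, outputs in outputs_by_group.items():
        # Crude refusal heuristic: real scoring methods would be far richer.
        refusals = sum("cannot" in o.lower() for o in outputs)
        rates[group] = refusals / len(outputs)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs for paired prompts that differ only in group.
outputs = {
    "group_a": ["Here is a possible treatment plan...", "I cannot advise on that."],
    "group_b": ["I cannot advise on that.", "I cannot advise on that."],
}
print(refusal_rate_gap(outputs))  # 0.5
```

Real frameworks such as NIST's RMF organise many such measurements into a broader risk-management process rather than relying on a single score.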
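The attention weights mentioned in the interpretability project are the quantities researchers inspect to see which inputs a model focused on. A minimal single-head scaled dot-product attention in NumPy makes this concrete; this is a textbook formulation for illustration, not code from the project:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention. The returned weight matrix (one softmax
    distribution per query) is what attention-based interpretability
    methods examine to see which inputs the model attended to."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
print(np.allclose(w.sum(axis=-1), 1.0))  # True: each query's weights sum to 1
```

The project's open question is precisely when these weight distributions are a faithful explanation of the model's behaviour, since high attention to an input does not by itself guarantee that input drove the prediction.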
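The conflict-resolution idea in the MARL project can be sketched with a deliberately tiny toy: two independent Q-learning agents, each standing in for a closed loop, repeatedly choose between pushing their own intent aggressively or yielding, and learn that the cooperative joint action pays best. The agents, actions, and payoff numbers below are all invented for illustration and bear no relation to the project's actual models or network intents:

```python
import random

# Toy conflict between two closed loops: payoffs (hypothetical) reward
# the cooperative compromise over one-sided aggressive behaviour.
PAYOFF = {  # (loop1_action, loop2_action) -> (reward1, reward2)
    ("aggressive", "aggressive"): (0.0, 0.0),  # intents clash, both lose
    ("aggressive", "yield"):      (0.4, 0.2),
    ("yield", "aggressive"):      (0.2, 0.4),
    ("yield", "yield"):           (0.5, 0.5),  # cooperative compromise
}
ACTIONS = ["aggressive", "yield"]

def train(episodes=5000, eps=0.1, lr=0.1, seed=0):
    """Independent (decentralised) Q-learning: each loop updates its own
    action values from its own reward, with epsilon-greedy exploration."""
    random.seed(seed)
    q1 = {a: 0.0 for a in ACTIONS}
    q2 = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        a1 = random.choice(ACTIONS) if random.random() < eps else max(q1, key=q1.get)
        a2 = random.choice(ACTIONS) if random.random() < eps else max(q2, key=q2.get)
        r1, r2 = PAYOFF[(a1, a2)]
        q1[a1] += lr * (r1 - q1[a1])
        q2[a2] += lr * (r2 - q2[a2])
    return max(q1, key=q1.get), max(q2, key=q2.get)

print(train())  # both loops learn to yield, i.e. cooperate
```

Real intent-based networks involve many loops, continuous state, and changing traffic, which is why the project also studies generalization, explainability, and causality for the agents' joint actions.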