At the AI Impact Summit 2026, Sam Altman, Chief Executive of OpenAI, set out a stark vision of the near future of artificial intelligence. Rather than focusing solely on technological progress, he described the coming years as a decisive test for democracy, governance and global power.
Altman suggested that, if current projections hold, by the end of 2028 more of the world’s intellectual capacity could exist inside data centres than outside them. While acknowledging uncertainty, he said the possibility warrants serious consideration. The key question, he argued, is not only how advanced AI systems become, but who controls them.
India’s Role in Shaping AI Governance
Altman highlighted India’s rapid adoption of AI technologies. More than 100 million people in the country now use ChatGPT each week, with students accounting for over a third of users. India has also emerged as the fastest-growing market for Codex, OpenAI’s coding agent.
He said India’s scale and democratic framework place it in a distinctive position to influence global AI norms. As the world’s largest democracy, India has the opportunity to deploy AI widely while also shaping how it is governed. Developments in sovereign AI infrastructure and smaller language models, he noted, indicate that the country is contributing to the evolution of the technology rather than simply consuming it.
If intellectual capacity increasingly resides in data centres, Altman argued, democratic nations such as India will help determine whether that power is broadly distributed or concentrated.
Superintelligence and Systemic Risk
Altman offered one of his clearest forecasts to date, suggesting that early forms of true superintelligence could emerge within two years. Such systems, he said, might outperform senior executives and leading researchers in many domains.
He pointed to the rapid acceleration of AI capabilities in recent years, with systems moving from struggling with secondary-school mathematics to handling research-level problems. Although progress remains unpredictable, the pace of improvement suggests that more advanced systems may arrive sooner than previously expected.
Altman maintained that democratisation is not only an ethical position but also a safety strategy. Concentrating advanced AI within a single company or country, he argued, would heighten systemic risks. He rejected the idea that societies must trade political freedom for technological breakthroughs, warning against any model that exchanges open governance for accelerated progress.
He also stressed that safety cannot be addressed solely within AI laboratories. Risks range from the misuse of open-source biological models to new forms of AI-enabled conflict. Managing these threats will require coordination between governments, institutions and civil society.
Economic Disruption and Long-Term Choices
Altman acknowledged that rapid advances in AI will reshape economies. As systems improve, the cost of services such as healthcare, education and manufacturing could fall significantly. However, labour market disruption is likely. While machines may outperform humans in many forms of analytical or repetitive work, he argued that people retain advantages in empathy, judgement and social connection.
Placing the moment in historical context, Altman said each generation inherits more powerful technological foundations than the last. The challenge now is to ensure that AI’s benefits are widely shared rather than captured by a narrow concentration of power.
He concluded with a warning that the direction of travel is not predetermined. As AI systems become more capable, societies will face a fundamental choice: whether to use them to empower individuals or to centralise authority.
Altman called for international coordination, potentially through a framework comparable to the International Atomic Energy Agency, to oversee advanced AI systems and respond to emerging risks.
If intelligence increasingly resides in data centres, the defining issue will not be the capabilities of machines alone, but whether their deployment strengthens democratic values or entrenches centralised control.