Sam Altman at Adda: ‘I don’t think there should be any single superintelligence in the world’


OpenAI CEO Sam Altman on the exponential acceleration of Artificial Intelligence (AI), India’s employment anxiety and AI’s geopolitical faultlines. He was in conversation with Anant Goenka, Executive Director, The Indian Express Group.

No. I think this is generally true for many companies — the person running the company gets way too much credit relative to the work everybody else does. But in our case, in particular, this has really been a story of a scientific discovery.

There were a handful of researchers who did something close to a miracle. They figured out something very deep about how the world works. What is so special about deep learning — and what has led it to be so general — is that this small set of researchers discovered an algorithm that can learn anything. And at scale, it just keeps getting better. Then you had a whole ecosystem of people who figured out how to deliver that scale — how to build data centres, how to optimise the training, how to handle the inference. And then, the world figured out how to build products around that. So if there’s a tectonic shift, it belongs first to the scientists.

Since the last time you were in India to now, a lot has changed in the world. What do you think has changed in India?

Before talking about what’s changed in India, I think it’s important to talk about what’s changed in AI. A little over a year ago, AI could do high school math. That was incredible. It could do a very good — not perfect — job at Grade 11 mathematics. And people were genuinely in awe. Just a couple of years before that, it struggled with grade-school math.

By last summer, AI was competing in the hardest mathematics competitions in the world. And recently, mathematicians put out 10 unsolved research-level problems. Our latest model solved seven of them correctly. That’s not incremental progress.
That’s a jump from ‘very bright student’ to ‘pushing the edge of human knowledge.’ The same thing is happening in physics. And this happened in about a year.

Now to India. The biggest shift, I feel, is psychological. The last time I was here, India felt like a consumer of AI. People were using it, experimenting with it. Now the builder energy is off the charts. India is our fastest-growing market for Codex (an AI-assisted software development tool released by OpenAI). That’s remarkable. At IIT Delhi this morning, the energy was electric. It doesn’t feel like people asking, ‘What is AI?’ It feels like people asking, ‘How do we build with this?’ That’s a big change.

Last time, you made what became a controversial comment — that a $10 million fund wouldn’t be enough to build a frontier LLM (Large Language Model) in India. Now we are talking about a full-stack AI ecosystem coming out of India.

That comment was taken out of context. The question was whether you could build a frontier model — meaning the most advanced model in the world — for $10 million. Even then, I didn’t think you could. And today, it’s even more true. Frontier models are incredibly expensive.

But that’s different from saying India can’t build world-class AI companies. Of course, it can. And it is. There is a huge difference between ‘frontier model training at global scale’ and ‘deep, valuable innovation.’ Many of the narrow, specialised models and application-layer companies coming out of India are fantastic. India absolutely has the talent. The constraint isn’t intelligence. It is capital and infrastructure.

Audience at Express Adda in New Delhi

The other side of that coin is jobs. About 8 per cent of India’s GDP comes from IT services. Tools like Codex are moving fast. Is this a threat?

It’s going to change a lot. And it’s never helpful to pretend otherwise. The job of a programmer has changed more in the last year than in any year since I have been an adult.
We have gone from autocomplete to ‘type an idea and get an application.’ But every step forward in computing has caused panic. And every time, the abstraction layer rises.

People moved from writing assembly to writing higher-level languages; from writing functions to designing systems. Now they move from writing code to describing intent. The amount of software produced will explode. Expectations will rise. But as long as companies and countries adapt quickly, there will be plenty to do.

I’m not a jobs doomer. The promised leisure society never came. Humans always invent more things to want, more ways to express themselves, more problems to solve. India’s demographic reality makes this question more urgent. But urgency can be an advantage. Countries that adapt fastest win fastest.

What’s the least vulnerable job?

When AI started generating images, people said, ‘Graphic artists, that’s over.’ That might be true for the kind of graphic artist who was making someone’s birthday card invitations or something.

But for fine art, the price of AI-generated art is zero, and the price of human-generated art has continued to go up since this happened. There are many things like that where we care about the person who does it. Another example is that I really cared about the nurse who was taking care of me when I was in the hospital recently. If that were a robot, I think I would have been pretty unhappy, no matter how smart the robot was.

Five hundred million Indians are under 30. When you meet political leaders, how much does this anxiety around jobs come up?

A lot. The main themes I hear globally are infrastructure, jobs, distribution of benefits and safety. Everyone asks some version of: ‘What should my kids study?’

If you study history, especially primary sources from the Industrial Revolution, you see that people were spectacularly wrong in predicting future jobs. No one predicted the YouTube influencer.
No one predicted the AI safety researcher.

So instead of guessing specific careers, I think about durable skills: adaptability, fluency with AI tools, resilience, creativity, collaboration. The change won’t happen overnight. Society has inertia. But eventually, it will be huge.

Jensen Huang (Nvidia CEO) talks about the five-layer AI stack — energy, data centres, chips, models, applications. Where can India realistically win?

Whether you subscribe to a five-layer cake or a seven-layer cake, I think India should play at all levels. Vertical integration matters. For an economy of India’s size, it is important not to be dependent at critical layers — energy buildout, data centres, chips, models, applications. Different layers require different strengths. India already has world-class application-layer talent. It is building capability in semiconductors. Energy is improving. The Prime Minister is clearly motivated to compete at all levels. That ambition is important.

(From left) Rajesh Nambiar, Co-Business Head, 360 ONE; Prasoon M Tripathi, Director, IMS Ghaziabad; Anupama Sharma, Co-Head of Business, 360 ONE; Anant Goenka; Sam Altman; Abhishek Khaitan, Executive Director, Radico Khaitan; Hazel Siromoni, Pro Vice Chancellor, Chitkara University; Koreel Lahiri, Chief of Strategy and Innovation, NDTV

But does the world even have enough compute for India, which has over one billion online users, to become an AI-first society?

Not yet. If you ask people how many GPUs (Graphics Processing Units) they would like working for them — thinking about their problems, running their robots, writing their code — no one says fewer than one. Some say a thousand. Multiply that by eight billion people. We cannot deliver eight trillion GPUs; not on Earth. But that thought experiment shows the scale of ambition required. This will be the most expensive infrastructure buildout in human history. But AI and robotics will help us build it.
It would have been impossible the old way. Now it’s just extremely hard.

Is that why (data centres in) space keeps coming up?

Orbital data centres are not happening this decade. The launch costs alone make it impractical. And fixing broken GPUs in space is not easy. Space will matter eventually. But right now, terrestrial infrastructure is where the action is.

But then, all this infrastructure buildout seems to force organisations like yours into an intimate relationship with the government. At one point, big tech was a privatised force. Now it seems you need to have a good relationship with the government. Is the government a big enabler? Have you ever thought about the evolution of this relationship?

Government is important not just for building infrastructure but also, given the level of impact this is going to have on society, for the need to truly democratise this technology. Governments are going to have to be involved, and companies like ours are going to have to partner with governments.

The tech industry started out as extremely libertarian — a ‘we don’t need the government, the government doesn’t need us’ sort of view. That has changed a lot. Even before AI, over the last couple of decades, as companies got bigger and more central to the economy and to the way the whole world works, that changed significantly. But maybe never before has it been this important, given the scale of the infrastructure that needs to be built.

And how do you feel about that? Because one of the theories we keep hearing is that this White House is very close to Silicon Valley. Silicon Valley was a close sponsor of JD Vance. Is that true, and is that good?

I would say close in some ways and not so close in others. There are some tight ties, and then also this administration has had some big criticisms of tech. Close cooperation between tech companies and the government is going to become increasingly important over time.
It obviously won’t be a perfectly smooth relationship, but the better it can be, the better for all of us.

What about relationships between governments? Recently, Pax Silica — a partnership among several countries, thinking about AI in advanced ways — was announced. Is it all these countries versus China? Is AI playing out as a race, and is China, like we keep hearing, light years ahead?

I suspect AI will become one of the most important political issues in the world; one of the highest-order bits of geopolitical tension and cooperation. But I don’t think it will be a fixed thing. As it develops, political alliances will shift over time.

China, I would say, is ahead in some areas and not ahead in others. In terms of manufacturing physical robots, it is clearly ahead and has a big edge on things like electric motors and magnets. It is clearly ahead on energy buildout as well.

But there are places where I think we are ahead of them, and my guess is that that’s what it has always been like and how it will continue to be. It’s hard to be ahead on everything. Maybe if you had the only superintelligence in the world, you could do it, but that would actually be bad.

I don’t think there should be any single superintelligence in the world. There should not be any one person or any one country or any one company in charge of superintelligence, including the United States. The world is at its best when power is widely distributed, when people have a lot of different ideas and when there’s enough of a balance of power that we can keep each other in check. You don’t literally want one AI in charge of the world, no matter who has it.

Will AI fragment power or concentrate it?

That’s one of the most important questions of our time. You can imagine a world where AI massively concentrates power — one entity controlling it all. You can imagine chaos — everyone having superintelligence with no rules. Reality will be somewhere in between.
I believe in broad distribution with guardrails.

The clearest signal already is this: one- or two-person startups now have extraordinary leverage. That was impossible a few years ago. AI lowers the cost of execution dramatically. That is decentralising. But frontier model training is capital-intensive. That concentrates. So both forces exist simultaneously. The outcome depends on policy, culture and how quickly tools become widely available.

How competitive is the AI space? At the AI Summit, we saw a slightly awkward moment on stage (Altman and Anthropic CEO Dario Amodei didn’t hold hands on stage).

You’ve got to give the internet something to laugh at. It is definitely competitive.

It is also very incestuous. A lot of the people who are building it out were working with you at one point, and everybody knows everybody. Nvidia is investing in you. You are buying Nvidia chips.

It is. It is a weirdly small world for sure. I think it is very competitive commercially, but among the groups building frontier models, there is also serious commitment to safety and alignment. Competition accelerates innovation. Cooperation is essential for safety. We need both. And the truth is, this technology is too important for any one actor to ‘win’ in the traditional sense.

Do you want to say a little bit more about what happened on stage?

I don’t really have that much more to add.

When you look at India right now, what excites you most?

The shift from consumption to creation. India has the scale, the talent, the demographic energy. If India combines that with compute infrastructure and bold policy, it could surprise the world. The countries that adapt fastest to abstraction shifts win. Right now, India feels like it wants to adapt fast.
Anant Goenka with Sam Altman at Express Adda in New Delhi

One thing you admire about Google.

The first thing I admire is that Demis Hassabis and the Google team started working on AI long before it was fashionable, and they did so with deep conviction. Without their early inspiration, I don’t think we’d be here.

The second thing is their recent execution. They were behind, but they refocused, scaled aggressively and improved quickly. That ability to regroup and execute at scale is impressive.

Which country is broadly on the right path for AI regulation?

No one knows yet. What I’m happy about is that different countries are experimenting. Over the next few years, we will see what works and what doesn’t.

Let’s do a short game. I’ll give you criticisms of AI. You give me the defence.

Too much concentration of power.

That is a real risk unless we push hard to democratise. The world needs to hold companies and governments to a high standard. If AI is going to reshape the world, it must be broadly accessible.

What about water and other natural resources used by data centres?

The idea that one ChatGPT query uses gallons of water is not connected to reality. It used to be true when we relied more on evaporative cooling in data centres. Energy consumption, however, is real at a system level. We need to move faster toward nuclear, wind and solar energy.

It was said that earlier versions of ChatGPT consumed energy equivalent to 10 iPhone batteries per query and now maybe one. Is that accurate?

It’s way less.

But Bill Gates’s theory was that AI will learn from human evolution to be more efficient in how much energy it consumes.

People measure how much energy it takes to train an AI model and compare it to the energy for one human answering one question. But humans require about 20 years of development — food, shelter, infrastructure — to become capable of answering questions. And that’s built on centuries of human evolution.
The fair comparison is: once trained, how much energy does it take for AI to answer a question versus a human? Measured that way, AI is probably already competitive in energy efficiency.

‘AI is making kids dumber.’

True for some kids! There are kids who say, ‘This is great, I cheated my way through school.’ That’s worrying. But most kids say, ‘Look at what I can build now.’ They’re creating new workflows, learning faster, experimenting more. When Google first came out, teachers thought memorisation was dead. But education adapted. We will need new ways to evaluate learning and creativity. But overall, AI will increase what students are capable of.

Some people say AI isn’t democratic enough. Where does this resistance come from — from those who have experienced technology at its highest levels and are now nervous and want to pause, or from those who have never experienced it and are nervous about feeling vulnerable?

Everywhere. As more people use the technology, fewer want to totally pause it. Instead, they ask, ‘What does this mean for us? Can it go slower? Can we have more input?’ That’s a healthier debate.

One thing Silicon Valley should learn from Chinese tech?

Move fast.

One thing from Indian tech?

Move faster.

One imagined and one real fear about Chinese dominance in AI?

The imagined fear is humanoid robots marching through cities. The real fear is cyber conflict — AI being used to influence populations, hack infrastructure, manipulate information.

Surveillance states.

I’m super worried about that. Increasingly, fear of AI going wrong is used to justify surveillance. People haven’t fully thought through the downsides of a surveillance state.

The reason you moved from nonprofit to capped-profit and now broader revenue models.

Democratising AI and staying at the frontier of research requires huge capital.

Research-first or product-first?

Research-first. It almost automatically creates a good product.

Why did you not take equity in OpenAI?

That was one of my dumbest decisions.
We were a nonprofit, and I didn’t care financially. But it created unnecessary conspiracy theories. It wasn’t worth it.

One thing, in spite of all the differences you have with Elon Musk, that you admire about him.

I am going to think of something, but give me a minute. He is extremely good at physical engineering and at getting people to perform incredibly well at their jobs.

Should governments ban social media for kids under 16?

Heavy restrictions make sense, but not a total ban.

If the government were to ask AI companies like yours to help figure out war planning, how would you respond? Is the Pentagon just another client for you? Do you draw a line somewhere?

AI systems today are not reliable enough for war planning. But they can assist in analysing large volumes of information. There may be defence uses someday. For now, we must be cautious. That said, we certainly want to support the government, and there’s a lot we can do already.

It was used for Nicolás Maduro’s capture. Is that true?

I just don’t know.

How far are we from AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence)?

AGI feels close. If you had asked people years ago whether systems capable of independent research, writing complex programmes and performing professional knowledge work existed, they would have called that AGI. We adapt quickly to new capabilities. Superintelligence may be closer than I once thought.

The one thing you would never ask ChatGPT?

How to be happy. I would rather ask a wise person.

That’s interesting, because one of the most common uses of AI is companionship.

For therapy or life advice, AI can be useful. For deeper philosophy of life, I would still turn to humans.

Which is less likely: TSMC (Taiwan Semiconductor Manufacturing Company) losing its dominance or you and Musk becoming friends?

Both are unlikely.

What is it about TSMC that makes it so dominant?

Relentless focus and optimisation.
They just keep improving at every level.

Tell us about your hardware project with industrial designer Jony Ive.

Imagine technology that understands your life context, integrates naturally and isn’t intrusive. That’s the goal.

What should governments regulate and avoid regulating?

Focus on catastrophic risks first. Be more flexible about smaller issues until we understand them better.

Biggest mistake corporations make with AI adoption?

I was in a meeting recently with a big company that was planning to spend 2026 strategising, 2027 getting the company ready and 2028 deploying. That may work for other kinds of technology. Apparently, if you do a giant ERP (Enterprise Resource Planning) migration, that’s the kind of timeline it takes. Doing that for AI will be a catastrophic mistake. The nimbleness required, the speed, the commitment required are just totally different.

What would you tell global leaders?

Democratise AI. Put it in people’s hands. No other strategy will work.

Which previous statement do you most regret: a) India building a foundation model with $10 million is hopeless, b) You will remain a not-for-profit, c) ChatGPT won’t accept advertising revenues.

I never said India can’t, but the non-profit one.

Rajesh Magow, Founder, MakeMyTrip

OpenAI is described as a research-first company. Researchers are driven by breakthroughs and discoveries. Given AI’s growing power, shouldn’t responsible AI receive equal focus?

Yes, absolutely. Responsible AI has been part of our DNA from the beginning. As we move closer to extremely capable systems — potentially superintelligent ones — that responsibility becomes even more critical. What I’m proud of is that our researchers genuinely internalise this.
The people who succeed at OpenAI aren’t just pushing capability forward; they are constantly thinking about safety and impact at the same time.

Hazel Siromoni, Pro Vice Chancellor, Chitkara University

As AI advances, are your safety checks and balances evolving at the same pace?

Safety is a major focus for us — both as a company and for me personally. There’s always tension. Sometimes we may be too conservative and restrict access more than necessary. Other times, critics argue we are moving too fast.

Balancing democratisation with safety is difficult. Our principle has been to start conservatively and then broaden access as we gain confidence. So far, that approach has worked well, and we intend to continue refining it.

Abhishek Khaitan, Managing Director, Radico Khaitan

Which professions do you believe are most at risk because of AI?

Many professions, as currently defined, will largely disappear. For example, I trained as a software engineer. The way I learned to write software — manually coding line by line — is now largely obsolete.

That doesn’t mean software engineering disappears. It evolves. The work changes. There will be entirely new professions created. Some jobs will transform dramatically. Some may change very little. But large categories of work will need to adapt in fundamental ways.

Stuti Gupta, Director, Terrasoul Polymer

As a creator, I use ChatGPT often. What concerns you more: AI becoming too powerful, or humans becoming passive and overly dependent on it?

I don’t think humans will become too passive. What I’m seeing, particularly among creators, is faster iteration. AI shortens the loop between idea and feedback. You try something, refine it, improve it, repeat. That produces better outcomes. When image generation first appeared, people predicted the end of creativity. Instead, we have seen more experimentation.
I used to worry about passivity, but that doesn’t seem to be the dominant pattern.