Five Companies Are Spending $450 Billion in 2025 to Control How You Think


In 2025, five corporations will pour over $450 billion into the infrastructure that mediates your thoughts. Every search, every feed, every answer passes through systems designed not just to serve information, but to structure how you process it.

Why it matters

AI isn't becoming a tool you use. It's becoming the operating system for human cognition itself, and a handful of American corporations are spending trillions to own it.

What's happening now

The term "Human Operating System" has emerged across tech circles in the past two years as AI capabilities have scaled dramatically. It's not an academic concept gathering dust; it's the framing being used by AI executives, historians, and policy makers to describe what's actually being built.

The core idea: just as Windows or iOS mediates between you and your computer, AI is becoming the infrastructure that mediates between your mind and the world – how you access information, generate ideas, make decisions, and act.

Think about your last week:

- You probably asked ChatGPT or Claude to help draft something
- You might have used AI to summarize articles or analyze data
- Perhaps AI helped you code, design, or solve a problem
- Each interaction trained the system and deepened your dependency

That's not a tool relationship. That's an operating system relationship.

What the experts are saying

In his latest book "Nexus: A Brief History of Information Networks from the Stone Age to AI", Yuval Noah Harari, the historian who predicted many of today's tech disruptions, warns that "AI has hacked the operating system of human civilization." In his work on "Dataism," he argues we're entering an era where "the universe consists of data flows" and "we may interpret the entire human species as a single data processing system, with individual humans serving as its chips."

Translation: humans are becoming the hardware. AI is becoming the software that runs us.

Mustafa Suleyman, Microsoft's AI CEO, recently launched a "Humanist Superintelligence" initiative – AI that he says should remain "subservient to humans" and keep "humans at the top of the food chain."

Stop and think about that phrasing. It's remarkable that such a statement even needs to be made. The fact that the head of Microsoft AI feels compelled to assert that humans should stay in control tells you everything about where we're heading.

Anthropologist David A. Palmer describes the Human Operating System more technically: "the interface between your mind, your body, and the things in the world" – the structures and systems that mediate how we perceive and act.

The convergence point

Here's what's new and urgent: these aren't disconnected metaphors. They're describing the same phenomenon from different angles.

- Historians see it as a civilizational shift
- Tech executives are building it with trillion-dollar investments
- Anthropologists are studying how it changes human cognition
- Policy makers are realizing it's already here

The "Human Operating System" concept emerged specifically because AI has reached a scale and integration level that demands new language. We're not talking about software anymore. We're talking about cognitive infrastructure – the substrate on which human thinking, creativity, and decision-making now runs.
The critical question isn't whether AI becomes our operating system. It already is.

The questions that matter:

- Who owns and controls this infrastructure?
- What values are encoded into it?
- Does it amplify human flourishing or extract value from it?
- Can nations and individuals maintain sovereignty, or are we locked into dependencies controlled by a handful of companies?

These aren't hypothetical future concerns. The operating system is being written right now, with the largest capital deployment in human history.

The web's cautionary tale: Tim Berners-Lee's warning

In 1989, Tim Berners-Lee invented the World Wide Web with a radical vision: a decentralized, open platform that would democratize information and empower individuals.

He could have patented it and become a billionaire like his contemporaries Bill Gates and Steve Jobs. Instead, he gave it away freely to humanity, believing that universal access to knowledge would transform civilization for the better.

In his recent book This Is for Everyone: The Unfinished Story of the World Wide Web, Berners-Lee reflects on how his creation launched a new era of creativity and collaboration while unleashing powerful forces that imperil truth, privacy, and democratic discourse.

The web's promise of human empowerment has been largely captured by what he describes as "rapacious corporations and authoritarian governments" that have turned his open platform into an extraction machine.

Now we stand at a similar crossroads with artificial intelligence. AI promises to become our cognitive operating system – the infrastructure for how we think, create, work, and make decisions.

But if we're honest about the trajectory, we're not building toward Berners-Lee's vision of technology serving humanity. We're building toward a future where a handful of American corporations – and perhaps a few Chinese ones – control the infrastructure of human cognition itself, extracting value at every keystroke while we surrender agency in exchange for convenience.

The question isn't whether AI will amplify human capability. It already does. The question is whether that amplification serves human flourishing or becomes another chapter in what critics call "surveillance capitalism" – a system where our data, our thoughts, our creative output, and ultimately our autonomy are harvested for profit by companies accountable to shareholders, not citizens.

The trillion-dollar battle for cognitive dominance

To understand the scale of what's being built, we need to grasp the staggering investment flowing into AI infrastructure. This isn't incremental technological development. It's one of the largest capital deployment events in human history.

The numbers are breathtaking

The AI data center industry, worth $13.62 billion in 2024, is projected to grow at a remarkable 28.3% compound annual growth rate through 2030 – significantly outpacing the traditional data center market's 11.24% CAGR. But that's just the market value. The actual investments dwarf these figures.

There are at least five big AI companies battling to become "Humanity's Operating System," or "HOS":

- Microsoft
- Amazon
- Google
- Meta
- Apple

Together they will spend in excess of $450 billion in 2025 alone. AI demand drove a record $57 billion in global data center investment in 2024, and eight hyperscalers expect a 44% year-over-year increase to $371 billion in 2025 for AI data centers and computing resources.

To put this in perspective: Amazon, Meta, Microsoft, Alphabet, and Oracle spent $241 billion in capex in 2024 – that was 0.82% of US GDP for that year. In the second quarter of 2025, the tech giants spent $97 billion – 1.28% of the period's US GDP.
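Those GDP-share figures are easy to sanity-check. Here is a back-of-envelope calculation; the GDP denominators are rough assumptions implied by the article's own percentages, not official statistics:

```python
# Back-of-envelope check of the capex-to-GDP comparison above.
# GDP denominators are approximations implied by the cited
# percentages, not official BEA figures.

capex_2024 = 241e9         # Amazon, Meta, Microsoft, Alphabet, Oracle (2024)
us_gdp_2024 = 29.4e12      # assumed full-year 2024 US GDP

capex_q2_2025 = 97e9       # same group of tech giants, Q2 2025
us_gdp_q2_2025 = 7.6e12    # assumed Q2 2025 US GDP (quarterly, not annualized)

print(f"2024 capex share of GDP:    {capex_2024 / us_gdp_2024:.2%}")        # ~0.82%
print(f"Q2 2025 capex share of GDP: {capex_q2_2025 / us_gdp_q2_2025:.2%}")  # ~1.28%
```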
If this pace continues, it will exceed peak annual spending during some of the most famous investment booms in the modern era, including the Manhattan Project, NASA's spending on the Apollo program, and the internet broadband buildout that accompanied the dot-com boom.

But there is more…

The Stargate Project: Half a Trillion Dollars

The scale becomes even more staggering when examining specific initiatives. OpenAI, Oracle, and SoftBank announced the Stargate project with a $500 billion commitment through 2028, aiming for 10 gigawatts of AI data center capacity. By late 2025, they had already secured nearly 7 gigawatts of planned capacity and over $400 billion in investment across five new U.S. sites.

Individual company commitments are equally massive:

- Microsoft plans to invest $80 billion in AI data centers by 2025, with more than half in the United States
- Amazon has allocated $86 billion for expanding its AI infrastructure
- OpenAI signed a seven-year, $38 billion strategic partnership with AWS in November 2025
- BlackRock's AI Infrastructure Partnership announced a $40 billion acquisition of Aligned Data Centers in October 2025
- Meta committed to multiple massive facilities, including plans for sites approaching one gigawatt of capacity

McKinsey estimates that companies across the compute power value chain will need to invest $5.2 trillion by 2030, while Nvidia CEO Jensen Huang estimated that between $3 trillion and $4 trillion will be spent on AI infrastructure by the end of the decade.

The physical infrastructure challenge

These aren't just financial abstractions. They represent unprecedented physical infrastructure demands.

By 2025, 33% of global data center capacity will be dedicated to AI, a share expected to reach 70% by 2030. The United States currently hosts 51% of the world's hyperscale AI data centers.

The technical requirements are staggering:

- The average AI training workload requires approximately 30 megawatts of continuous power
- Rack power densities in AI data centers are increasing from 40 kW to 130 kW, with projections reaching 250 kW
- The average cost per AI rack is expected to escalate to $3.9 million in 2025
- Traditional air cooling systems are becoming obsolete for AI workloads; liquid cooling is 3,000 times more efficient than air cooling for AI hardware

Hyperscalers make up around 80 percent of all data center demand, with colocation operators in North America seeing their supply grow by more than 40 percent in 2024 alone, from 12.4 GW in 2023 to more than 18 GW in 2024.

The energy crisis

Perhaps most concerning is the energy demand. According to Deloitte's "TMT Predictions 2025" report, data centers comprise only 2% of global electricity consumption, or 536 terawatt-hours in 2025, but global data center electricity consumption could double to about 1,065 TWh by 2030.

Electric and gas utility capex is expected to jump 22% year over year to $212 billion in 2025 across 47 utilities – a sharp rise from the 7.6% CAGR of the past decade – and to surpass $1 trillion cumulatively over the next five years (2025-2029) for the 47 biggest investor-owned utilities.

Companies are pursuing radical solutions, including reopening nuclear plants like Three Mile Island to power Microsoft's data centers and exploring geothermal and small-scale nuclear reactors.
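For a sense of how steep that electricity trajectory is, the implied compound annual growth rate can be computed in a couple of lines, using the Deloitte figures quoted above:

```python
# Implied annual growth rate behind the Deloitte projection above:
# data center electricity use rising from 536 TWh (2025) to ~1,065 TWh (2030).

start_twh, end_twh, years = 536, 1065, 5

cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # ~14.7% per year
```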
Winner-take-all or oligopoly? The emerging power structure

With such massive capital requirements, a crucial question emerges: is AI infrastructure becoming a winner-take-all market dominated by a single company, or an oligopoly controlled by a handful of players? The answer appears to be decisively the latter – and that may be even more concerning than outright monopoly.

The oligopoly structure is already locked in

The market concentration is stark at every layer of the AI stack:

Cloud computing infrastructure: AWS is the dominant provider with more than 30% market share – approaching 40% in some assessments. Azure comes in second at nearly 20%, and Google Cloud and others run further behind. Hence, the market for cloud computing services is characterized by oligopoly: "With identical services comes commoditization, and only big vendors that can deliver huge economies of scale with margins will survive in this space."

AI models and applications: ChatGPT dominates with massive market share, receiving an estimated 2.5 billion prompts per day and generating $1 billion per month in revenue. Perplexity ranks a distant second, followed by Microsoft's Copilot and Google's Gemini.

Semiconductor hardware: Nvidia, a near-monopolist, manufactures most of the chips needed for AI development. Its market cap hovers near $4.6 trillion, with large cloud providers (likely Amazon, Google, and Microsoft) making up a reported 50% of its total data center revenue.

The web of partnerships: UK Competition and Markets Authority chief executive Sarah Cardell identified concerns about an "interconnected web" of over 90 partnerships and strategic investments established by Google, Apple, Microsoft, Meta, Amazon and Nvidia in the market for generative AI foundation models.

Why oligopoly, not monopoly

Several factors prevent outright monopoly while ensuring concentrated control:

Massive capital requirements: The trillion-dollar investment needed by 2030 represents an unprecedented scale of capital – only the largest tech companies and sovereign wealth funds can participate at this level.

Vertical integration: Federal Trade Commissioner Alvaro Bedoya noted that big technology companies have engaged in vertical integration, owning or controlling the overwhelming majority of the resources necessary to dominate – from semiconductors to cloud computing infrastructure, foundation models, and the user interface.

Strategic interdependencies: Microsoft's $10 billion partnership with OpenAI gives Microsoft privileged access to OpenAI's technology while locking in OpenAI's dependence on Microsoft's cloud computing infrastructure. Similarly, leading startups including Hugging Face (Amazon), Cohere (Google, Nvidia), Stability AI (Amazon) and Inflection AI (Microsoft, Nvidia) have inked major deals with Big Tech firms.

Virtually all the major tech companies and their executives are connected through institutions and professional networks, including the start-up incubator Y Combinator, joint research projects, corporate boards and social relationships.

The game theory trap

Big Tech faces a game theory problem. While the optimal strategy would be moderate, coordinated investment, each company fears being left behind. This forces all players into aggressive spending, potentially destroying the collective profit pool even if individual firms succeed technologically.
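This is the structure of a classic prisoner's dilemma. The toy payoff matrix below is purely illustrative – the numbers are invented for the sketch, not drawn from any company's financials – but it shows why "aggressive" is each player's dominant strategy even though coordinated restraint would leave everyone better off:

```python
# Stylized prisoner's dilemma for the AI capex race. Payoffs are
# illustrative profits (higher is better), not real financial data.
# Each of two players chooses "moderate" or "aggressive" investment.

# payoffs[(A's move, B's move)] = (A's profit, B's profit)
payoffs = {
    ("moderate",   "moderate"):   (10, 10),  # coordinated restraint: healthy margins
    ("moderate",   "aggressive"): (1, 12),   # the restrained player is left behind
    ("aggressive", "moderate"):   (12, 1),
    ("aggressive", "aggressive"): (4, 4),    # arms race: the profit pool shrinks
}

# Whatever B does, A earns more by going aggressive (12 > 10 and 4 > 1),
# so "aggressive" dominates -- and both land on (4, 4), worse than the
# (10, 10) they could have had by coordinating.
for b_move in ("moderate", "aggressive"):
    best = max(("moderate", "aggressive"), key=lambda a: payoffs[(a, b_move)][0])
    print(f"If B plays {b_move}, A's best reply is {best}")
```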
The AI race collapses previously separate markets (search, social media, shopping) into one winner-take-all competition, eliminating the comfortable oligopoly structure that made these companies so profitable.

Yet this competitive dynamic doesn't prevent oligopolistic coordination. The cozy relationships among tech executives are reminiscent of the Gilded Age "money trust" of key banks and financial institutions that both supplied capital to the era's industrial giants and colluded with them and one another.

Is competition possible?

There are dissenting views. When Chinese startup DeepSeek demonstrated it could train world-class AI models using a fraction of the computing resources required by industry leaders – reportedly spending just $5 million on compute, compared with the far larger budgets of leading AI labs – some argued this revealed that "dominance is more fragile than markets and regulators believed".

However, this doesn't mean we should ignore legitimate antitrust concerns in AI markets, and the DeepSeek example may be the exception that proves the rule. The overwhelming evidence points to an entrenched oligopoly structure with massive barriers to entry.

Civilization's risk

This oligopoly structure creates multiple civilization-level vulnerabilities that exceed even traditional monopoly concerns:

Geopolitical weaponization: A conflict between the US and China could see AI access become a sanction tool. Countries dependent on foreign AI suddenly lose critical infrastructure. Healthcare systems can't process patient data. Financial institutions can't run risk models. Research halts. This isn't hypothetical – we've watched this pattern with semiconductor export controls.

Economic domination: Companies and nations using inferior or outdated AI systems fall behind competitors with latest-generation models. The gap compounds. Eventually, entire industries and countries become uncompetitive not because of any inherent disadvantage, but because they're operating with obsolete cognitive infrastructure. The digital divide becomes a cognitive divide.

Cultural imperialism: AI systems trained on data and values from their country of origin make decisions reflecting foreign cultural assumptions, economic priorities, and political biases. Content moderation trained on one nation's norms suppresses speech that is legal elsewhere. Medical AI trained on Western populations misdiagnoses other genetic backgrounds. Loan algorithms deny credit based on patterns irrelevant to local contexts. As Harari notes, "the idea of 'free will' is under threat" as Dataism takes hold.

Systemic risk: If something disastrous happened to one of those providers – a prolonged outage, a cyber attack, whatever the cause – it would have devastating effects on the whole economy, comparable to what happened to European gas prices after the war in Ukraine left Europe without Russian natural gas.

Regulatory capture and innovation control: A handful of companies with vast resources, political connections, and regulatory sophistication can shape AI governance to entrench their positions. They fund academic research. They provide "advisory" services to governments. They set "industry standards." They claim safety concerns require centralization. The result: innovation becomes gated by incumbent gatekeepers.

The Promise: AI as true cognitive liberation
Yet we shouldn't lose sight of what's possible. An AI-powered cognitive infrastructure could genuinely amplify humanity in unprecedented ways.

The enhancement is already visible. A researcher synthesizes knowledge across thousands of papers in hours rather than weeks. A doctor accesses diagnostic pattern recognition across millions of cases. A student receives infinitely patient, personalized tutoring. A small business owner gains capabilities once requiring expensive specialists. A writer externalizes half-formed thoughts into dialogue that clarifies thinking.

This isn't replacement – it's scaffolding. The AI handles the mechanical, the routine, the computational, freeing humans for judgment, creativity, ethics, and meaning-making. Think of a pianist freed to spend years on interpretation rather than technique, or an architect freed from drafting to focus on design vision.

The deeper promise is democratization. Capabilities once requiring years of training become accessible utilities. Legal reasoning, software development, complex analysis, language translation, creative production – all increasingly available to anyone. A farmer in rural India accessing agricultural expertise. A curious teenager exploring quantum physics at 3 AM with a patient tutor.

AI as an operating system could be civilization's great equalizer. The printing press democratized knowledge. The internet democratized communication. AI could democratize expertise itself.

This is the vision. This is what we're promised. And in carefully curated demos and promotional materials, it looks magnificent.

Cognitive steroids

But here's what Berners-Lee learned watching his open web become captured: good intentions don't protect against systems designed for extraction. The free, open-access communications paradigm did not arrive like magic – it was the product of political wrangling, and ultimately it was lost to forces more interested in monetization than empowerment.

We watched this happen with the web. Google started as "don't be evil" and gave us incredible free services. Facebook connected the world. Amazon made everything available. And then, gradually, we realized we weren't customers – we were products. Our attention was the commodity. Our data was the raw material. Our behavior was the thing being optimized, shaped, and sold.

The AI operating system is Web 2.0 on cognitive steroids.

Every query to ChatGPT, every document analyzed by Claude, every image generated, every code commit assisted – all of this represents data flowing through corporate infrastructure designed first and foremost to maximize shareholder value. The business model isn't mysterious: our queries train their models, our creative output teaches their systems, our workflows reveal valuable patterns, and our dependencies create captive markets.

Consider what you're actually surrendering when AI becomes your operating system:

Your intellectual property flows to competitors. Product designs, strategic plans, proprietary research – all potentially training data for systems that might serve your rivals tomorrow. The terms of service promise privacy, but they're written by corporate lawyers optimizing for corporate interests.

Your cognitive patterns become corporate assets. How you think, what problems you tackle, what solutions you find creative – this metadata is often more valuable than the specific content. It reveals market opportunities, competitive intelligence, innovation directions.
Your dependencies become leverage. The more essential AI becomes to your workflow, the more power shifts to whoever controls that infrastructure. Pricing changes. Terms-of-service updates. Access restrictions. Platform decay. You'll accept it all because switching costs are too high.

Your autonomy erodes incrementally. Not in dramatic ways that trigger resistance, but through small surrenders. Using AI's suggestion instead of thinking it through. Accepting the generated version because editing is harder than approving. Losing the skill to work without the tool.

This is dysfunctional capitalism at its most insidious – a system that doesn't serve human flourishing but extracts value from human activity while creating illusions of empowerment. You feel enhanced; you're actually becoming dependent. You feel productive; you're actually becoming a data source.

Berners-Lee's warning: Betrayal of the original promise

What makes this particularly galling is that it's a repeat of the web's corruption. Berners-Lee initially believed people would share good things and avoid bad content, but he reckoned without the insidious power of manipulative and coercive algorithms on social networks.

His response has been to create the Solid project – a web decentralization initiative that aims to radically change how web applications work, resulting in true data ownership and improved privacy. The core idea: users store personal data in "pods" (personal online data stores) hosted wherever they desire, with applications only accessing data if the user grants permission.

It's an elegant solution in theory. In practice, Berners-Lee's big idea has become his latest personal obsession: to restore data sovereignty to every individual by redesigning the web. But Solid faces the classic chicken-and-egg problem: users won't adopt it until there are applications, and developers won't build applications until there are users. Meanwhile, the corporate platforms enjoy massive network effects and switching costs that make migration nearly impossible.

The lesson from Solid's struggle should terrify us about AI: businesses discovered how to monetize and monopolize online time, and they're doing the same with AI, except faster and with higher stakes. When Berners-Lee envisions switching from "the attention economy to the intention economy" – where computers and services do what users want with information users want them to have – he's describing exactly what AI should be but almost certainly won't be under current corporate structures.

The web was supposed to empower individuals. It ended up empowering platforms. AI is being sold with the same empowerment rhetoric. Why should we expect a different outcome?

The corporate greed machine: Surveillance capitalism goes cognitive

Let's be explicit about the business model driving AI development, because understanding the incentives explains everything.

The extraction economics are straightforward:

1. Offer free or subsidized services to build a user base and collect training data
2. Harvest every interaction to improve models that become competitive moats
3. Create dependency through integration and workflow capture
4. Monetize through tiered access, enterprise licensing, and API usage
5. Leverage data advantages to enter adjacent markets and crush potential competitors
6. Shape regulation to favor incumbents under the guise of safety and responsibility
This isn't a conspiracy theory. It's how platform business models work. It's the playbook from social media, cloud computing, and digital advertising. It maximizes shareholder value. It just doesn't maximize human flourishing.

The human costs accumulate:

Cognitive deskilling. Students who never learn to write struggle to think clearly. Programmers who never learn to debug can't build robust systems. Researchers who never learn to read deeply can't synthesize genuinely novel ideas. The scaffolding becomes a crutch, then a dependency, then a disability.

Privacy illusion. Terms of service promise protection while carving out exceptions for "service improvement," "research," "security," and other categories that functionally mean "we'll use your data however we want while maintaining plausible deniability."

Innovation suppression. Concentrated corporate control means AI development serves corporate priorities – scaling existing models, improving engagement metrics, maximizing revenue – not necessarily advancing science, solving social problems, or empowering individuals. The most transformative research happens in corporate labs under NDA, not in open scientific communities.

Democratic deficit. Decisions about AI development, deployment, safety, and access are made by unelected corporate boards optimizing for shareholder returns, not by democratic institutions accountable to citizens. When these systems start making consequential decisions about credit, employment, healthcare, and justice, the lack of democratic control becomes existential.

Psychological manipulation. Just as social media algorithms optimize for engagement regardless of wellbeing, AI systems will optimize for usage regardless of whether that usage genuinely serves human interests. The addiction design patterns from social media will migrate to cognitive tools. The result: AI that makes us feel productive while actually making us dependent, engaged while actually being manipulated.

This is capitalism doing what capitalism does – finding new frontiers for accumulation, new commons to enclose, new human capacities to monetize. The problem isn't individual companies being evil. The problem is the system rewarding extraction over empowerment, dependence over autonomy, shareholder value over human flourishing.

Despite OpenAI's move to diversify its cloud providers, committing $38 billion to AWS (and hundreds of billions to other providers) creates significant long-term dependencies, which could limit future flexibility. The enormous financial and computational requirements for frontier AI development could lead to a highly concentrated market, potentially stifling competition from smaller players and creating an "AI oligopoly".

Without regulation – real regulation, not the captured regulatory theater that platforms lobby for – we're building toward a future where a handful of companies own the infrastructure of human thought. They'll rent it back to us, monitor how we use it, shape what's possible within it, and terminate access when convenient to their business model.

The Dataism trap: Algorithms as the new authority

Harari predicts that the logical conclusion of Dataism is that "eventually, humans will give algorithms the authority to make the most important decisions in their lives, such as whom to marry and which career to pursue". The Dataist worldview "is very attractive to politicians, business people and ordinary consumers because it offers groundbreaking technologies and immense new powers".

Yet as Harari warns, "when consumers have to choose between keeping their privacy and having access to far superior healthcare – most will choose health".
This is the insidious bargain: we trade autonomy for convenience, sovereignty for capability, one small surrender at a time.

According to Dataists, "freedom of information is the greatest good of all" – but this is "not to be confused with freedom of expression." Freedom of information "is not given to humans, it is given to information. The right of information to circulate freely". In this worldview, humans aren't the protagonists – we're the medium through which data flows.

Dataism's critics worry that "once authority shifts from humans to algorithms, the humanist project may become irrelevant. Dataism threatens to do to Homo sapiens what Homo sapiens did to other animals".

How to reclaim our cognitive sovereignty before it's too late

So what do we actually do? How do we gain AI's benefits while protecting against corporate capture and geopolitical vulnerability?

The answer requires action at multiple levels – individual, organizational, national, and global. No single layer provides complete protection, but together they can create meaningful sovereignty.

Personal level: Individual cognitive autonomy

A friend of mine recently developed a unique and powerful Custom GPT application to research and find the best medical advice for his cancer. Without warning, ChatGPT closed his account. His intellectual property was lost and his second brain was shut down.

So how do we protect our IP and data as we build our own AI-powered "Human Operating System"? Here are some ideas:

Recognize what you're building on. Every AI interaction is a choice about who to trust with your thoughts. Treat AI tools with the same privacy consciousness you'd apply to any sensitive communication. Would you discuss this in a public space monitored by competing companies? Then perhaps don't send it to a cloud AI system.

Diversify dependencies. Don't build your entire workflow around a single provider. Learn multiple tools. Understand their different strengths, limitations, and ownership structures. Maintain the ability to work without any particular system – even if less efficiently.

Maintain human capability. Use AI to amplify skills, not replace them. The student who can't solve problems without AI assistance hasn't been enhanced – they've been disabled. The writer who can't construct arguments without AI hasn't been empowered – they've been made dependent. Enhancement means building on a foundation, not substituting for it.

Support open alternatives. When possible, use and contribute to open-source AI tools. They're not as polished as corporate offerings, but every user and contributor makes them more viable. This is how we prevent complete corporate capture.

Organizational level: Strategic independence

Businesses and organisations are building their collective intelligence, creativity, intellectual property, and core internal systems on top of a single AI chatbot. That is a problem: it creates a single point of failure for your organisation's IP, data, and intelligence.

Multi-provider strategy. Don't standardize on a single AI platform. Yes, this creates friction. That friction is the price of maintaining real switching capability. Test alternatives regularly. Build institutional knowledge about multiple systems.

Data classification and routing. Not all work carries equal sensitivity. Routine tasks might use convenient cloud AI. Sensitive operations use locally deployed models or trusted providers. Critical work avoids AI assistance entirely until truly sovereign options exist. (A minimal sketch of this routing idea follows below.)
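Here is one way such a routing policy might look in code – a sketch only, assuming hypothetical sensitivity tiers and placeholder backend functions rather than any real provider's API:

```python
# Minimal sketch of sensitivity-based routing for AI requests.
# Tier names and backends are hypothetical placeholders; a real
# policy would be defined by your security/compliance team.

from enum import Enum

class Sensitivity(Enum):
    ROUTINE = 1     # public or low-risk content
    SENSITIVE = 2   # internal strategy, customer data
    CRITICAL = 3    # trade secrets, regulated data

def call_cloud_model(prompt: str) -> str:
    # Placeholder for any hosted provider's API call.
    return f"[cloud response to {prompt!r}]"

def call_local_model(prompt: str) -> str:
    # Placeholder for a self-hosted, open-weights model.
    return f"[local response to {prompt!r}]"

def route(prompt: str, level: Sensitivity) -> str:
    """Decide which backend (if any) may see this prompt."""
    if level is Sensitivity.ROUTINE:
        return call_cloud_model(prompt)   # convenience wins for low-risk work
    if level is Sensitivity.SENSITIVE:
        return call_local_model(prompt)   # data never leaves your infrastructure
    # CRITICAL: no AI assistance until truly sovereign options exist
    raise PermissionError("Critical data must not be sent to any AI backend.")

print(route("Summarize this press release", Sensitivity.ROUTINE))
```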
Build internal capability. Don't just use AI – understand it. Train teams not as AI users but as AI practitioners who comprehend system limitations, can evaluate alternatives, and recognize when systems are steering you toward outcomes serving provider interests over yours.

Demand contractual protections. Data residency requirements. Clear deletion rights. Portability guarantees. Audit capabilities. Organizations have negotiating power – use it to create protective frameworks even when providers resist.

National level: Sovereign infrastructure

Nations and regions must think bigger. They need capability that can't be denied, infrastructure that can't be shut off, expertise that can't be blocked.

Compute infrastructure as critical infrastructure. Just as nations maintain strategic reserves of oil, food, and medical supplies, sovereign AI capability requires compute infrastructure under national or allied control. Not every country needs its own GPU farms, but regional alliances need collective capacity that doesn't depend on potential adversaries.

Aggressive open-source investment. The only realistic path to AI sovereignty for most nations is funding open-source alternatives. This means supporting model development, creating open training datasets, building legal frameworks encouraging open release, and cultivating communities of practice around open models. These models won't match proprietary cutting edges, but they need to be good enough for most purposes and available when proprietary systems aren't.

Regional cooperation over national isolation. No country except the US and China can build comprehensive AI capability alone. But coalitions can. The EU pooling resources. The Commonwealth sharing expertise. ASEAN creating collective infrastructure. Latin American research networks. These regional approaches can create viable alternatives to superpower dependence.

Talent ecosystems, not brain drain. Infrastructure without people is useless. Nations need programs to train AI researchers, retain talent against Silicon Valley recruitment, and create environments where world-class AI work happens locally. This is expensive. Strategic dependence is more expensive.

Data sovereignty frameworks. Legal structures ensuring certain categories of data – healthcare, defense, critical infrastructure, government operations – are processed through sovereign or trusted AI systems. Not every email needs this protection. Some things do.

Regulatory courage. Regulate for competition and interoperability, not corporate convenience. Mandate data portability. Enforce interoperability standards. Prevent lock-in through technical requirements. Break up concentrations of power before they become unbreakable. This will face fierce corporate opposition. Do it anyway.

Global level: New social contract for AI

We need international frameworks recognizing AI infrastructure as a matter of collective security and human rights, not just commercial competition.

Interoperability as a human right. Just as net neutrality argued for equal treatment of internet traffic, we need standards preventing AI lock-in: technical requirements for model interfaces, data formats, and API compatibility. If systems can communicate regardless of provider, switching costs decrease and monopoly power weakens. (See the sketch after this list.)

Norms against weaponization. Just as we have conventions against weaponizing civilian infrastructure, we need norms against weaponizing AI dependencies. Denying AI access should be recognized as economic warfare with appropriate costs. This requires international agreements with enforcement mechanisms.

Technology transfer for allies. Mechanisms for partner nations to share AI capabilities, training techniques, and model weights. Not forcing companies to give away competitive advantages, but creating paths for strategic partners to access critical capabilities. Think more like nuclear technology sharing among NATO allies, less like unconditional IP transfer.

Democratic governance mechanisms. AI development is too important to leave to corporate boards. We need institutions that bring democratic accountability to AI governance – not to micromanage research, but to ensure development serves the public interest alongside private profit.
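To make the interoperability point concrete, here is a minimal sketch of a provider-agnostic interface. The adapters are hypothetical stubs, not real vendor SDKs; the point is that application code written against the shared interface can switch providers with a one-line change:

```python
# Sketch of a provider-agnostic completion interface. Adapters are
# hypothetical stubs; real ones would wrap each vendor's actual SDK.

from typing import Protocol

class CompletionProvider(Protocol):
    """Any backend that can turn a prompt into text."""
    def complete(self, prompt: str) -> str: ...

class VendorAAdapter:
    def complete(self, prompt: str) -> str:
        return f"[vendor A answer to {prompt!r}]"  # would call vendor A's API

class VendorBAdapter:
    def complete(self, prompt: str) -> str:
        return f"[vendor B answer to {prompt!r}]"  # would call vendor B's API

def draft_report(provider: CompletionProvider, topic: str) -> str:
    # Application code depends only on the interface, never on a vendor SDK.
    return provider.complete(f"Draft a short report on {topic}")

# Switching providers is a one-line change at the call site.
print(draft_report(VendorAAdapter(), "data sovereignty"))
print(draft_report(VendorBAdapter(), "data sovereignty"))
```

This is the software equivalent of the data-portability argument: shared interfaces and formats are what turn "switching costs" from a lock-in mechanism into an ordinary engineering decision.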
Final thoughts

Berners-Lee's book is ultimately a warning and a cautionary tale: we should learn from the history of the web. History doesn't repeat, but it rhymes, so we need to heed its lessons and its rhythms. He gave humanity an incredible gift – a platform for universal knowledge sharing and collaboration. Within decades, it was captured by forces more interested in extraction than empowerment.

Berners-Lee is explicit that social conditions shape how technologies are deployed. The web's corruption wasn't inevitable – it resulted from specific choices about business models, regulatory frameworks, and power structures. We chose surveillance capitalism. We chose platform monopolies. We chose to let a handful of companies capture the infrastructure of human communication.

Now we face the same choice with AI, except the stakes are higher. This isn't just about how we communicate – it's about how we think, create, decide, and function. The AI operating system will shape human capability more profoundly than any previous technology.

We can choose differently this time. But only if we act while we still can.

Every day that passes, dependencies deepen. Every workflow built around proprietary AI, every system integrated with foreign infrastructure, every researcher trained only on commercial tools creates switching costs and path dependencies that make future sovereignty harder.

The trillion-dollar infrastructure being built right now – the data centers consuming entire power plants, the chip foundries requiring sovereign wealth fund investments, the model training runs costing hundreds of millions – is locking in an oligopolistic structure that will be extraordinarily difficult to challenge once complete.