Open-Source AI Is Being Embraced By China and the US

Stanford’s AI Index 2025 reports that the performance gap between open and closed models narrowed to single-digit percentage points on multiple benchmarks in one year. Open models are now “good enough” for many production tasks, and they are orders of magnitude cheaper than closed ones.

At the same time, open AI models are increasingly becoming geopolitical assets. In the past year, Chinese labs leaned hard into open weights, releasing competitive models the world can download and run. In recent weeks, the Silicon Valley giants that ushered in the closed-API era have begun to respond in kind. The result is a price-and-innovation war that could benefit every builder.

Who Is Opening Up?

In China, Alibaba has been rapidly releasing its Qwen models under open, developer-friendly licenses, while Baidu recently made ERNIE 4.5 freely available on GitHub and Hugging Face. These companies treat building developer communities as a core strategy, not just a PR move. And who can forget when DeepSeek shocked everyone by releasing a powerful open-weight reasoning model that forced U.S. labs to step up?

Now, American giants have responded. OpenAI, after years of silence on open weights despite its name, released its first new open-weight models since the GPT-2 era, and it is explicitly pitching “run anywhere” customization. Elon Musk’s xAI posted Grok-1’s 314B-parameter MoE weights and has promised to post weights for Grok-3 soon. Meta continues to ramp up the Llama line with the 3.x wave.

What Governments Are Doing

Beijing’s industrial playbooks (compute subsidies, model approvals, and “AI+” initiatives) tilt toward domestic capability and open ecosystems resilient to export controls.

The Trump administration’s AI Action Plan now frames open-weight models as having “geostrategic value,” a notable pivot from the more cautious rhetoric of 2023-24. Lawyers are warning enterprises to get smart about license terms as open-weight adoption grows. Going forward, open models will serve as both public goods and soft-power instruments.

Why Organizations Increasingly Prefer Open Weights

Open models have several benefits for user organizations:

- Local and inexpensive: No per-token tax, better latency, and tighter control over uptime and data paths. These characteristics are especially useful for regulated industries and for edge or air-gapped deployments.
- Customization: You can fine-tune on narrow domain data without shipping that data off-prem to a third-party API.
- Avoiding vendor whiplash: Open weights reduce the risk that a vendor’s product or pricing change breaks your roadmap overnight.

But there are traps, too:

- Security: Local models expand the attack surface (supply-chain poisoning of checkpoints, prompt injection in internal tools, side-channel leaks), so organizations may want to treat model artifacts like binaries: verify hashes, control versions, and sandbox the models. A minimal example follows this list.
- Governance: “Open” has many meanings; some licenses restrict use cases or redistribution, while others are permissive (Apache-2.0/MIT).
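One way to apply the “treat model artifacts like binaries” advice is to verify a checkpoint’s digest before it is ever loaded. The sketch below assumes a Python deployment pipeline; the file name and digest in TRUSTED_SHA256 are placeholders to be replaced with values from your own model registry or the upstream release notes.

    import hashlib
    from pathlib import Path

    # Placeholder manifest of known-good checkpoint digests, e.g. maintained in
    # an internal model registry or copied from the upstream release notes.
    TRUSTED_SHA256 = {
        "example-7b-instruct.safetensors": "0" * 64,  # replace with the real digest
    }

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        """Stream the file so multi-gigabyte checkpoints never need to fit in RAM."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            while chunk := f.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_checkpoint(path: Path) -> None:
        """Refuse to proceed unless the artifact matches a recorded digest."""
        expected = TRUSTED_SHA256.get(path.name)
        if expected is None:
            raise ValueError(f"No trusted digest recorded for {path.name}")
        actual = sha256_of(path)
        if actual != expected:
            raise ValueError(f"Digest mismatch for {path.name}: got {actual}")

    # Example: verify_checkpoint(Path("/models/example-7b-instruct.safetensors"))
    # Load the weights (ideally in a sandboxed process) only after this check passes.

The same check can be wired into CI or the model-serving startup path, so a poisoned or silently swapped checkpoint fails loudly instead of reaching production.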
Organizations Should Seek IP Clarity

Organizations should ensure the following when using open models:

- Derivatives are yours. If you fine-tune and produce a “child” model, you should own that derivative to the fullest extent allowed by the base license. Choose permissive licenses when possible and memorialize ownership in SOWs and DPAs.
- Inputs are protected. Contractually bar providers from using your prompts, corpora, or embeddings to improve their public models. Require at-rest/in-use encryption and strict data-retention windows.
- Outputs are yours. Ensure the license grants commercial rights to generated outputs and that no usage restrictions (sector bans, user caps) sneak in via “open but not OSI-open” terms. (Llama’s license, for example, is open-weight but not fully OSI-open.)

What To Expect In The Years Ahead

Organizations should remember that licenses matter: “open weights” can still carry strings that complicate redistribution or certain verticals.

As top American and Chinese labs compete in open models, prices fall, reproducibility rises, safety and eval tooling improves, and the long tail of use cases (on-prem, edge, low-connectivity) stops being second-class. This is probably good news.

But geopolitics can fragment ecosystems: export controls, sanctions, or data-localization rules could force region-specific forks.

Organizations will need to carefully consider which stacks they can download, inspect, and run, while minimizing political risk.

--Shaan Ray