China's Zhipu Debuts GLM-4.5, Outperforming Rivals With Leaner and Faster AI Model

AsianFin — Zhipu AI, one of China's leading foundation model developers, launched its next-generation flagship model series, GLM-4.5, on Sunday, as competition in the domestic large language model (LLM) space intensifies.

Built on a Mixture of Experts (MoE) architecture and optimized for AI agent scenarios, GLM-4.5 sets new benchmarks among open-source models, outperforming key rivals in reasoning, coding, and agent intelligence. In overall global evaluations, the company says GLM-4.5 ranks third worldwide, first among Chinese models, and first among open-source models, ahead of Stepverse's Step-3, DeepSeek-R1-0528, and Moonshot's Kimi K2.

The series includes two variants: the full GLM-4.5, with 355 billion total parameters (32 billion active), and the lighter GLM-4.5-Air, with 106 billion parameters. Both are fully open-sourced via Hugging Face and Alibaba's ModelScope, with APIs accessible through the Zhipu Open Platform. The complete feature set is available for free through Zhipu Qingyan and the z.ai official site.

"The road to AGI has only just begun," CEO Zhang Peng said. "Current models are far from reaching human-level capability."

Zhipu's push into open source comes as China's LLM market undergoes rapid iteration. In the past month alone, the country has seen the release of MiniMax M2, Kimi K2, and Stepverse's Step-3. Meanwhile, global heavyweight OpenAI is reportedly preparing to launch GPT-5, a closed-source multimodal model, as early as late July.

GLM-4.5 is pre-trained on 15 trillion tokens of general data and refined with 8 trillion tokens of specialized domain data covering code, reasoning, and agents, then further tuned with reinforcement learning techniques for complex task execution. According to internal benchmarks, GLM-4.5 uses just 50% of the parameters of DeepSeek-R1 and one-third of those in Kimi K2 while delivering superior performance on key LLM evaluation tests.

In real-world tests covering 52 development tasks across software, game, and web development, GLM-4.5 delivered results comparable to Claude-4-Sonnet while offering better tool-invocation reliability and higher task completion rates.

Token pricing is highly competitive: as low as RMB 0.8 per million input tokens and RMB 2 per million output tokens, roughly one-tenth the cost of Anthropic's Claude (see the usage sketch below). Zhipu also claims the high-speed version of GLM-4.5 can generate more than 100 tokens per second, supporting low-latency, high-concurrency, enterprise-grade deployment.

Zhipu, founded in 2019, is one of China's earliest developers of large-scale pre-trained models. Since releasing its first ChatGLM model in March 2023, the company has iterated four times and launched more than 20 AI products. By year-end 2023, Zhipu reported more than 2,000 ecosystem partners, 1,000 enterprise applications, and over 25 million users on its Qingyan platform. Paid features have pushed Zhipu past 10 million yuan in annual recurring revenue (ARR).

On the funding side, Zhipu recently announced a RMB 1 billion strategic investment from Shanghai's state-owned capital as it moves closer to a domestic IPO. Prior rounds included backing from Hangzhou Urban Investment, Shangcheng Capital, and Zhuhai Huafa, bringing total funding to more than RMB 10 billion. Zhipu's investors now span top VCs such as Hillhouse, Qiming, and Legend Capital, alongside internet giants Alibaba, Meituan, Tencent, and Xiaomi.
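For developers who want a sense of what access looks like in practice, the following is a minimal sketch of calling the model through the Zhipu Open Platform using the zhipuai Python SDK. The "glm-4.5" model identifier, the API key placeholder, and the cost arithmetic are illustrative assumptions based on the figures quoted above, not details confirmed by Zhipu.

    # Hypothetical usage sketch (not from Zhipu's documentation).
    # pip install zhipuai
    from zhipuai import ZhipuAI

    client = ZhipuAI(api_key="YOUR_API_KEY")  # key issued by the Zhipu Open Platform

    response = client.chat.completions.create(
        model="glm-4.5",  # assumed identifier for the full 355B-parameter model
        messages=[{"role": "user", "content": "Summarize the GLM-4.5 launch in one sentence."}],
    )
    print(response.choices[0].message.content)

    # Rough cost estimate at the quoted list prices:
    # RMB 0.8 per million input tokens, RMB 2 per million output tokens.
    input_tokens, output_tokens = 1_000_000, 200_000
    cost_rmb = (input_tokens / 1e6) * 0.8 + (output_tokens / 1e6) * 2.0
    print(f"Estimated cost: RMB {cost_rmb:.2f}")  # about RMB 1.20

At those rates, a workload of one million input tokens and 200,000 output tokens would cost roughly RMB 1.2, which illustrates the order-of-magnitude pricing gap the company claims against Claude.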
The launch of GLM-4.5 also kicks off what the company calls its "Year of Open Source," with plans to roll out a full suite of foundational, inference, multimodal, and agent models.

Zhipu's ambitions underscore a broader trend in China's AI strategy: doubling down on open source at a time when U.S. models increasingly tilt toward closed platforms. Analysts say this divergence could reshape the global LLM landscape.

"Open-sourcing domestic models injects fresh momentum into the AI ecosystem," one industry insider told TMTPost. "It's likely to trigger a new phase of global model realignment."

Zhipu's release coincided with another headline from rival Alibaba, which on Sunday introduced Tongyi Wanxiang 2.2, a cinematic-grade video generation model with more than 60 tunable visual parameters. Last week, Alibaba also unveiled Qwen 3, Qwen3-Reasoning, and Qwen3-Coder, strengthening its position across base, reasoning, and code-generation models.

Meanwhile, Stepverse's Step-3, announced at the World Artificial Intelligence Conference (WAIC), is the company's first native multimodal model and packs 321 billion parameters in an MoE architecture, reflecting the industry-wide shift toward large, efficient multi-expert systems.

As the pace of innovation accelerates, the open-source release of GLM-4.5 marks a pivotal moment not only for Zhipu but for China's LLM ambitions at large. With technical strength, cost efficiency, and ecosystem momentum, the company is positioning itself as a serious challenger not just at home, but globally.