Chinese artificial intelligence start-up DeepSeek has launched an "experimental" version of its V3 foundation model ahead of the country's National Day holiday, as the Hangzhou-based company accelerates its product releases.

On Monday, DeepSeek released V3.2-Exp and open-sourced it on the developer platforms Hugging Face and Alibaba Group Holding-backed ModelScope. According to DeepSeek, the model improves training and inference efficiency while cutting application programming interface (API) costs by more than 50 per cent compared with previous versions. V3.2-Exp is available on DeepSeek's website and app.

The move followed closely on the heels of DeepSeek's previous release, DeepSeek-V3.1-Terminus, which debuted just a week earlier, and came two months after V3.1 was introduced. The original V3 model was launched in December.

The industry is paying close attention to DeepSeek's new products after the start-up said last month that it would tailor its models for next-generation AI chips developed in China.

In January, just before the Lunar New Year, DeepSeek launched its R1 reasoning model, which generated considerable excitement at home and abroad and prompted domestic competitors to keep employees at work during one of the country's most important holidays of the year.

That had fuelled speculation ahead of this year's eight-day National Day holiday, which starts on Wednesday, that DeepSeek might unveil a major update to its flagship V3 or R1 models, potentially named V4 or R2.

Instead, DeepSeek introduced a new "sparse attention mechanism" as an "intermediate step" towards its next-generation model architecture, according to a post on Hugging Face.

Sparse attention is a technique that improves model efficiency by having each token attend to only a subset of the input rather than the entire sequence, cutting the computational cost of training and of processing long inputs.

The start-up said V3.2-Exp incorporated a new "DeepSeek Sparse Attention" mechanism, which improved efficiency when handling long inputs. It achieved performance "on par" with V3.1-Terminus, despite being significantly cheaper to use, according to DeepSeek.

"This experimental release represents our ongoing research into more efficient transformer architectures," the post said.
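DeepSeek has not published the inner workings of DeepSeek Sparse Attention beyond that description. As a rough, generic illustration of what "sparse attention" means in practice, the sketch below contrasts standard dense attention with a toy top-k variant in which each query attends to only a handful of keys; the top-k selection rule and the top_k parameter are illustrative assumptions, not DeepSeek's actual design.

```python
# Toy comparison of dense vs top-k sparse attention (illustrative only;
# not DeepSeek's DSA, whose selection mechanism is not detailed here).
import numpy as np


def dense_attention(q, k, v):
    """Standard scaled dot-product attention: every query attends to every key."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # (n_q, n_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over all keys
    return weights @ v


def topk_sparse_attention(q, k, v, top_k=8):
    """Each query attends only to its top_k highest-scoring keys,
    so the weighted sum costs O(top_k) per query instead of O(n_k)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # (n_q, n_k)
    out = np.zeros((q.shape[0], v.shape[-1]))
    for i in range(q.shape[0]):
        idx = np.argpartition(scores[i], -top_k)[-top_k:]   # keep the top_k keys
        w = np.exp(scores[i, idx] - scores[i, idx].max())
        w /= w.sum()                                   # softmax over the kept keys only
        out[i] = w @ v[idx]
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 64, 16                                      # toy 64-token sequence
    q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
    print("dense :", dense_attention(q, k, v).shape)   # (64, 16)
    print("sparse:", topk_sparse_attention(q, k, v).shape)
```

Production systems choose which keys to keep with learned or structured patterns and run on fused GPU kernels rather than a Python loop; the toy version is only meant to show why attending to a fixed number of keys, rather than the whole sequence, lowers the cost of handling long inputs.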
According to leading AI benchmarking firm Artificial Analysis, DeepSeek's V3.1-Terminus tied with OpenAI's gpt-oss-120b, released in August, as the two strongest open-source models globally. The firm noted that DeepSeek was "slightly ahead" of Alibaba Cloud's Qwen3-235B-2507, making it the strongest Chinese AI model.

Alibaba Cloud is the AI and cloud services unit of Alibaba, owner of the Post. The company is also exploring smaller but more efficient models. Earlier this month, it said that models built on its new Qwen3-Next architecture, its most efficient to date, served as a preview of its next generation of models.

DeepSeek previously outlined plans to enhance the agentic capabilities of its base models as a "first step towards the agent era".

AI agents are designed to autonomously execute tasks on behalf of users. One major bottleneck in developing such software is the limited "context windows" of current AI models, which restrict their ability to perform a series of actions over extended periods.

On social media, Huang Zhipeng, an AI researcher at Utrecht University in the Netherlands, who predicted last week that DeepSeek would continue with incremental updates, said V4 was likely to be released next year, with R2 expected around the Lunar New Year.

This article originally appeared in the South China Morning Post (SCMP). Copyright © 2025 South China Morning Post Publishers Ltd. All rights reserved.