๐ŸŽง Listen to this article

Narrated by Talon ยท The Noble House

On February 16, 2026 — the eve of Chinese New Year — Alibaba Cloud released Qwen3.5, an open-weight AI model with 397 billion total parameters (17 billion active, via a mixture-of-experts architecture) and a 256,000-token context window. Per CNBC's February 17 coverage, the timing was deliberate. Per Wikipedia's Qwen article, the open-source version was released on February 16, with Qwen3.5-Plus, a closed API model, available simultaneously.

The South China Morning Post described the release as a sharpening of the global AI race. CNBC framed it as a shift from chatbots to AI agents. Both descriptions are accurate. What neither fully captures: the export controls were specifically designed to prevent this.

๐ŸŽ™๏ธ Listen: Audio version

The Numbers That Matter

The open-source Qwen3.5 model: 397 billion parameters, 256K token context, MIT license โ€” downloadable by anyone, deployable on local hardware, modifiable without restrictions. Qwen3.5-Plus, the closed API version, claims performance "on par with state-of-the-art leading models" and offers a one-million-token context window, per GlobalData's analysis.
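The mixture-of-experts numbers above have a concrete operational meaning: all 397 billion parameters must be stored, but only the 17 billion active parameters do work per token. A rough back-of-envelope sketch (the 8-bit quantization assumption and the 2-FLOPs-per-parameter rule of thumb are illustrative, not from the source):

```python
# Back-of-envelope arithmetic for a mixture-of-experts model with
# Qwen3.5's published shape: 397B total parameters, 17B active per token.

TOTAL_PARAMS = 397e9    # every expert's weights must be held in memory
ACTIVE_PARAMS = 17e9    # parameters actually used per forward-pass token

# Storage cost scales with TOTAL parameters. Assuming 8-bit (1 byte/param)
# quantized weights -- an assumption for illustration:
storage_gb = TOTAL_PARAMS * 1 / 1e9

# Per-token compute scales with ACTIVE parameters; a common rule of thumb
# is ~2 FLOPs per active parameter per generated token:
flops_per_token = 2 * ACTIVE_PARAMS

print(f"weights at 8-bit: ~{storage_gb:.0f} GB")
print(f"active fraction:  {ACTIVE_PARAMS / TOTAL_PARAMS:.1%}")
print(f"compute: ~{flops_per_token / 1e9:.0f} GFLOPs per token")
```

The point of the sparsity: inference compute looks like a 17B-parameter model's, even though storage looks like a 397B-parameter model's. That asymmetry is what makes local deployment of such a large open-weight model plausible at all.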

The competitive landscape at the time of release: Zhipu's GLM-5, at 744 billion parameters, sat at the top of the Onyx open-source leaderboard. Below it: Moonshot's Kimi K2.5 at a trillion parameters, and MiniMax M2.5 at 230 billion. These are not ChatGPT clones running on toy hardware. These are frontier-class models available under open licenses from Chinese research labs.

Moonshot AI's pricing makes the economic competition concrete: their flagship model runs at roughly one-seventh the per-token cost of Claude Opus. A developer building a production application who can tolerate the data sovereignty implications of Chinese-hosted inference has access to near-frontier performance at 1/7th the cost. That is a real competitive option that didn't exist two years ago.
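What a one-seventh per-token price ratio means for a production budget can be sketched in a few lines. The dollar figures below are hypothetical placeholders, not published rate cards; only the 1/7 ratio comes from the article:

```python
# Illustrative monthly-cost comparison at the "one-seventh the per-token
# cost" ratio cited above. Prices are hypothetical, not real rate cards.

OPUS_PER_MTOK = 30.00                 # assumed blended $/1M tokens (frontier)
KIMI_PER_MTOK = OPUS_PER_MTOK / 7     # the 1/7th ratio from the article

def monthly_cost(tokens_per_month: float, price_per_mtok: float) -> float:
    """Dollar cost for a given monthly token volume at a $/1M-token price."""
    return tokens_per_month / 1e6 * price_per_mtok

volume = 500e6  # 500M tokens/month, a mid-sized production workload
print(f"frontier API:  ${monthly_cost(volume, OPUS_PER_MTOK):,.0f}/mo")
print(f"1/7th pricing: ${monthly_cost(volume, KIMI_PER_MTOK):,.0f}/mo")
```

At any realistic volume the gap compounds linearly: whatever the frontier bill is, the alternative is that bill divided by seven. For cost-sensitive workloads, that difference alone can decide the architecture.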

Chinese open-source AI leaderboard, February 2026: Qwen3.5 (397B params, MIT license), GLM-5 (744B), Kimi K2.5 (1T), MiniMax M2.5 (230B). All open-weight, all deployable locally. The export controls targeting NVIDIA H100 shipments to China did not prevent this development.

Why the Export Controls Didn't Stop This

The Biden and Trump administrations' export controls targeting advanced semiconductor exports to China โ€” specifically H100 and A100 GPUs from NVIDIA โ€” were designed to prevent Chinese labs from achieving AI parity. The underlying theory: if Chinese labs can't access the training compute, they can't build frontier models.

The theory was partially wrong. Chinese labs have been acquiring chips through alternative channels including Hong Kong intermediaries (widely reported), using lower-spec chips more efficiently, and investing heavily in model architecture research that reduces training compute requirements. DeepSeek R1's reported $5.9 million training cost โ€” orders of magnitude below frontier American models โ€” validated that training efficiency research can partially compensate for compute restrictions.
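The "orders of magnitude" claim is easy to sanity-check. The $5.9 million figure is from the article; the frontier training-cost figure below is an assumed placeholder for illustration only:

```python
import math

# Order-of-magnitude check on the training-cost gap. DeepSeek R1's reported
# cost comes from the article; the frontier figure is an assumption.
deepseek_r1_cost = 5.9e6     # reported training cost, USD
frontier_cost = 500e6        # hypothetical frontier-model cost, USD

ratio = frontier_cost / deepseek_r1_cost
print(f"cost ratio: ~{ratio:.0f}x "
      f"({math.log10(ratio):.1f} orders of magnitude)")
```

Even under generous assumptions about the frontier figure, the gap is one to two orders of magnitude, which is the scale at which architecture and training-efficiency research starts to substitute for raw compute.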

More importantly, the controls targeted training compute. They didn't target the research output. Chinese labs publish their architectural innovations. The open-source releases make the weights globally available. Export controls on chips don't export-control knowledge about how to build models that require less compute. The insight diffuses regardless.

The Open-Source Distribution Strategy

Why would Chinese labs release their best models as open weights? The American AI business model is proprietary: OpenAI, Anthropic, Google build closed models and sell API access. Revenue comes from per-token pricing and enterprise contracts. The model is kept secret to capture its value.

China's leading labs have adopted the inverse strategy: release the weights, let anyone deploy and modify the model, and monetize the ecosystem rather than the artifact. Alibaba monetizes through cloud computing consumption on Alibaba Cloud. Baidu monetizes through Baidu's search and enterprise services. The open-source release is a distribution strategy that maximizes adoption, builds developer ecosystems, and creates infrastructure-level stickiness that transcends the model itself.

American AI: closed model โ†’ per-token API revenue โ†’ enterprise contracts. Chinese AI: open-weight release โ†’ ecosystem adoption โ†’ cloud/infrastructure monetization. Different business models with different optimization targets: American labs optimize for model revenue; Chinese labs optimize for platform dominance.

What This Means for the Global AI Market

The global developer community now has access to frontier-class models from Chinese labs at dramatically lower costs than American alternatives. The data sovereignty tradeoffs are real and documented โ€” models hosted on Chinese infrastructure route through Chinese legal jurisdiction, with data access implications that are material for sensitive applications.

But "sensitive applications" is a narrower category than "all applications." The majority of AI use cases — content creation, code generation, data analysis, customer service automation — don't involve data sensitive enough that Chinese hosting creates meaningful risk for their operators. For those applications, cost-per-performance is the operative metric. Chinese labs are winning that competition.

The Washington framing of AI competition focuses on preventing China from catching up. The February 2026 model releases suggest a different question is now more relevant: what does it mean for the global AI market that the best models are available for free from Chinese labs, when American labs charge frontier prices?


Sources: CNBC, "Alibaba unveils Qwen3.5 as China's chatbot race shifts to AI agents," February 17, 2026; Wikipedia, "Qwen" article (release date February 16, 2026); GlobalData, "Alibaba Qwen 3.5 release dominates influencer discussions," February 2026; ForkLog, Qwen3.5 parameter specifications; South China Morning Post, February 17, 2026; DeepSeek R1 $5.9M training cost (DEV Community / multiple sources)

