MiniMax closes its weights as China’s open-source era fades
The self-evolving model matches frontier US models on key benchmarks at a fraction of the cost. It may also mark a strategic pivot. Could Chinese AI’s open-source era be ending?
MiniMax’s M2.7 matches top US models in software engineering at roughly 1/50th the API cost
MiniMax kept model weights closed for the first time, following z.ai’s lead
M2.7 autonomously improved its own training process over 100+ iterations
MiniMax stock has risen roughly 475% since its January 2026 Hong Kong IPO
Two months after going public at a $6.5B valuation built on open-source credibility, MiniMax has locked its latest model behind a paywall. M2.7, released March 18, is the company’s first proprietary model.
It matches top US systems in software engineering at 1/50th the cost. And it helped build itself.
MiniMax is one of China’s “AI Tigers,” a cohort of foundation model startups racing to match US frontier labs. Yan Junjie, former vice president of SenseTime, founded the Shanghai-based company in 2022.
Its products span text, video, speech, and music AI, reaching 27.6M monthly active users across more than 200 countries. Over 70% of revenue comes from outside China, with the US accounting for roughly 20%.
MiniMax listed on the Hong Kong Stock Exchange on January 9, 2026, raising approximately $620M at HK$165 per share, according to Bloomberg. Shares doubled on debut, backed by cornerstone investors including Alibaba and Abu Dhabi’s sovereign wealth fund. Rival Zhipu AI had debuted one day earlier.
The company built its developer reputation on open-source models. M2, released in October 2025 under an MIT license, ranked first among all open-source systems on the Artificial Analysis Intelligence Index. M2.5 followed in February 2026, also open-source.
M2.7 breaks that pattern.
The shift carries implications for developers, investors, and the competitive landscape of Chinese AI.
Open-source to proprietary, a strategic U-turn
Before M2.7, MiniMax was a standard-bearer for open-source AI in China. M2 and M2.5 both shipped under permissive open-source licenses. Any developer could download, modify, and self-host the models.
Now, MiniMax has kept M2.7 closed. Developers can access the model through the MiniMax API and third-party model providers. They cannot download, fine-tune, or self-host it.
MiniMax is not the only Chinese lab making this shift. Zhipu AI (z.ai) released GLM-5 Turbo as a proprietary model in recent months. Alibaba’s Qwen team is also moving toward proprietary development following the departure of senior leadership, according to VentureBeat.
DeepSeek remains a notable holdout, keeping its V3 and R1 models open-source. The pivot is selective, not universal.
The pattern reflects a maturing business logic. Open-source was an acquisition strategy. When Chinese labs needed developer mindshare, freely available models attracted users who might otherwise default to OpenAI or Anthropic.
The strategy worked. MiniMax’s M2 became the top-ranked open-source model globally within weeks of release.
Proprietary is a monetization strategy. MiniMax is now publicly traded with a market capitalization of approximately HK$294B (roughly $38B). Shareholders expect defensible revenue streams. Giving away frontier models undercuts the API business that generates that revenue.
ZhenFund founding partner Huang Mingming invested in 6 consecutive MiniMax funding rounds. He framed the company’s appeal in terms of the “impossible triangle” of high performance, low cost, and commercial scalability, according to TechNode. Closing model access is one way to hold that triangle together while protecting margins.
At the IPO, Yan Junjie pledged to “ensure cutting-edge AI truly serves everyone.” M2.7 tests how far that mission stretches when shareholders expect returns.
For the past year, a useful shorthand described the global AI landscape. Chinese labs were open. US labs were closed.
The framing no longer holds. Global buyers now face proprietary options on both sides. Selection criteria increasingly center on capability, cost, ecosystem fit, and jurisdictional risk.
For enterprises that built production workflows on M2 and M2.5, the switch raises a practical question. Open-source models allowed self-hosting. M2.7 requires API dependency on Chinese infrastructure.
Teams that adopted MiniMax for its openness must now re-evaluate their vendor risk.
The model that helped build itself
M2.7’s most significant technical claim is not a benchmark score. It is the model’s role in its own creation.
MiniMax used earlier M2 versions to build an automated research system. The system manages data pipelines, training environments, and evaluation infrastructure. According to MiniMax, M2.7 then ran entirely autonomously through an iterative improvement loop.
The model analyzed its own failure patterns, planned code changes, and modified its training infrastructure. It ran evaluations, compared results, and decided whether to keep or revert each change. It executed more than 100 rounds of this cycle without human intervention.
During the process, M2.7 discovered optimizations that human engineers had not prioritized. These included parameter tuning, automated bug-pattern detection across files, and loop prevention in the training workflow.
The result, according to MiniMax, was a 30% performance improvement on internal evaluation sets. The company claims the system handled 30-50% of the operational work that would normally require human ML engineers.
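MiniMax has not published the system’s code, so the analyze-propose-evaluate-keep-or-revert cycle can only be illustrated in the abstract. The sketch below is a toy hill-climbing loop: the scoring function, parameter names, and step sizes are all invented for illustration, not MiniMax’s actual pipeline.

```python
import random

def evaluate(config):
    """Toy stand-in for a full evaluation run: scores two tunable
    training parameters, peaking when both equal 0.5."""
    return 1.0 - (config["lr"] - 0.5) ** 2 - (config["batch_scale"] - 0.5) ** 2

def propose_change(config, rng):
    """Model-proposed edit: perturb one parameter at random, clamped to [0, 1]."""
    candidate = dict(config)
    key = rng.choice(list(candidate))
    candidate[key] = min(1.0, max(0.0, candidate[key] + rng.uniform(-0.1, 0.1)))
    return candidate

def improvement_loop(rounds=100, seed=0):
    """Keep-or-revert cycle: a change survives only if evaluation improves."""
    rng = random.Random(seed)
    config = {"lr": 0.1, "batch_scale": 0.9}
    best = evaluate(config)
    for _ in range(rounds):
        candidate = propose_change(config, rng)
        score = evaluate(candidate)
        if score > best:  # keep the change
            config, best = candidate, score
        # otherwise revert: the candidate is simply discarded
    return config, best

final_config, final_score = improvement_loop()
```

The keep-or-revert rule guarantees the score never regresses on the metric being optimized, which is also why such loops can plateau: gains concentrate wherever the evaluation set rewards them.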
The innovation is structural, not incremental. Most AI models are passive artifacts. Humans write the training code, tune the parameters, and fix the failures.
M2.7 inverts part of that process. The model actively manages portions of its own development pipeline. It identifies issues and applies fixes faster than a human team could cycle through them.
These claims deserve scrutiny. The 30-50% figure refers to operational tasks such as pipeline monitoring and evaluation reruns. It does not mean the model performed half of its own fundamental research.
Independent benchmarks also show mixed signals. On at least one coding test, according to VentureBeat, M2.7 actually scored lower than its predecessor M2.5. Self-evolution does not guarantee improvement on every axis.
MiniMax is not alone in exploring this approach. OpenAI recently described a similar process for GPT-5.3 Codex, according to The Decoder. The Codex team used early model versions to find bugs, manage deployment, and evaluate results during training.
Why it matters for enterprise leaders: If self-evolving training loops deliver consistent gains, the economics of AI research change. Labs with effective self-improvement systems could reduce the headcount needed per model generation. They could also compress development timelines.
The competitive moat would shift from model quality to iteration speed. For organizations evaluating AI vendors, the key question becomes whether a provider’s R&D engine can sustain compounding improvements.
Matching US frontier models at 1/50th the cost
The cost structure is the sharper competitive edge. M2.7 is priced at $0.30 per million input tokens and $1.20 per million output tokens. According to WaveSpeed AI, M2.7 costs roughly 1/50th as much as Claude Opus 4.6 on input tokens. On output, it costs roughly 1/60th as much.
In practical terms, an agent workflow that costs $100 to run on Opus would cost roughly $2 on M2.7. With automatic cache optimization, the effective blended cost drops to approximately $0.06 per million tokens.
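The arithmetic behind those figures can be made explicit. In the sketch below, the M2.7 prices come from the article; the Opus prices are back-derived from the article’s ~1/50th (input) and ~1/60th (output) ratios, not quoted list prices, and the token volumes are a made-up example workload.

```python
# Per-million-token prices in USD. M2.7 figures are from the article;
# Opus figures are implied by the article's ratios, not quoted prices.
M27_IN, M27_OUT = 0.30, 1.20
OPUS_IN, OPUS_OUT = M27_IN * 50, M27_OUT * 60  # 15.00 and 72.00

def run_cost(input_mtok, output_mtok, in_price, out_price):
    """Cost in USD for a run measured in millions of tokens."""
    return input_mtok * in_price + output_mtok * out_price

# A hypothetical agent workflow consuming 5M input and 1M output tokens:
opus_cost = run_cost(5, 1, OPUS_IN, OPUS_OUT)  # 147.0
m27_cost = run_cost(5, 1, M27_IN, M27_OUT)     # ≈ 2.70
```

Because agent workflows are output-heavy, the blended savings land between the two ratios, close to the article’s "$100 becomes roughly $2" shorthand.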
M2.7’s efficient architecture activates only 10B parameters per token, keeping compute costs low while delivering high reasoning capability. The model runs at approximately 100 tokens per second, about 3 times faster than Opus.
The benchmark results support the pricing story. On SWE-Pro, a benchmark for real-world software engineering, M2.7 scored 56.22%, according to MiniMax. The score matches GPT-5.3-Codex.
On SWE-bench Verified, the model scored 78%, outperforming Claude Opus 4.6’s 55%.
For office tasks such as spreadsheet and document editing, M2.7 achieved the highest Elo score (1,495) on GDPval-AA among open-API models. In one demo, M2.7 independently read TSMC annual reports, built a sales forecast, and generated a presentation.
M2.7 integrates with major AI coding tools, including Cursor and Claude Code. Developers can configure existing environments to point at MiniMax’s API endpoint. The switching cost is low.
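In practice, "pointing at the API endpoint" usually means sending OpenAI-style chat-completions requests to a different base URL. The sketch below builds (but does not send) such a request with the standard library; the URL and model name are placeholders for illustration, not MiniMax’s documented values.

```python
import json
import urllib.request

# Placeholder endpoint and model name; consult MiniMax's API docs for
# the real values. The payload follows the OpenAI-style chat shape that
# most coding tools can be reconfigured to target.
BASE_URL = "https://api.example-minimax-endpoint.com/v1/chat/completions"
payload = {
    "model": "minimax-m2.7",
    "messages": [{"role": "user", "content": "Refactor this function to remove duplication."}],
}
req = urllib.request.Request(
    BASE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
)
# urllib.request.urlopen(req) would dispatch it; omitted here.
```

Because the request shape is the industry default, swapping providers is often a one-line configuration change, which is exactly the low switching cost the article describes.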
The ease of integration cuts both ways. It lowers the barrier to adoption. It also means developers can leave just as easily if a competitor matches the price.
The post-IPO valuation test
Shares opened at HK$235.40 on January 9, 2026, a 42.67% premium over the IPO price. They closed the first day at HK$345, up 109%, according to CNBC. The IPO was oversubscribed 1,837 times by retail investors.
As of early April 2026, the stock trades around HK$949.50. All 6 covering analysts rate it a buy, with an average 12-month price target of HK$1,092, according to Investing.com.
The fundamentals tell a more complicated story. According to PitchBook, trailing 12-month revenue as of December 2025 was $79M. Revenue in the first 9 months of 2025 grew 170% YoY, with gross margins of 69.4%, according to TechNode.
Net losses in the same period reached $512M.
A $38B market cap on $79M in trailing revenue translates to a revenue multiple above 480 times. Even with 170% growth, that pricing assumes years of near-flawless execution. No Chinese foundation model company has yet demonstrated a clear path to profitability.
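The multiple follows directly from the article’s two figures:

```python
# Back-of-envelope revenue multiple from the article's figures.
market_cap = 38e9    # approximate market capitalization, USD
ttm_revenue = 79e6   # trailing 12-month revenue, USD
multiple = market_cap / ttm_revenue  # ≈ 481
```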
The bull case rests on three pillars. First, MiniMax generates over 70% of revenue outside China, reducing single-market risk. Second, its API pricing undercuts every major competitor by an order of magnitude.
Third, self-evolving model development could compress R&D cycles and lower the cost of each successive model release.
The bear case is equally clear. The company allocated 90% of IPO proceeds to R&D, signaling years of heavy spending ahead. The AI model market is commoditizing rapidly. Multiple open-source alternatives exist at the same capability tier.
The Disney, Universal, and Warner Bros. Discovery copyright lawsuit against Hailuo AI, filed in September 2025, adds legal risk.
Geopolitical exposure adds a structural concern. Over 70% of MiniMax’s revenue comes from outside China. The US alone accounts for roughly 20%, according to eWeek.
With M2.7 now proprietary and API-only, enterprise customers have no option to self-host. If US export controls or data governance restrictions expand to cover Chinese model API access, MiniMax’s fastest-growing revenue stream faces regulatory risk.
For enterprises in regulated or government-facing industries, a proprietary Chinese API with no self-hosting fallback raises vendor risk. Expect risk committees to scrutinize these dependencies closely.
From model weights to training flywheels
M2.7 is a test case for whether AI labs can escape the commoditization trap by building models that improve themselves. If self-evolving training systems scale, the competitive moat shifts from model quality to training infrastructure.
For investors, the measurable proxy is release cadence. MiniMax shipped M2 in October 2025, M2.5 in February 2026, and M2.7 in March 2026. The gap between releases is shrinking.
Whether that pace holds will indicate whether self-evolution is a real structural advantage or a one-cycle narrative. A sustained acceleration would validate the thesis. A plateau would suggest the gains are front-loaded.
The question for strategists watching China’s AI sector has changed. The question is no longer who can build the best model. It is who can build the model that builds the next model.

