US startups adopt Chinese AI models for performance and cost gains
Airbnb has shifted heavily to Alibaba's Qwen over OpenAI, while 80% of American AI startups that deploy open-source models now use Chinese ones, including DeepSeek and Kimi.
Airbnb CEO Brian Chesky disclosed in October 2025 that the company relies heavily on Alibaba’s Qwen, calling it “very good, fast and cheap.”
According to a16z data, 16-24% of US AI startups now use Chinese open-source models, representing 80% of the startups that deploy open-source solutions.
Qwen won the Alpha Arena trading competition with 22.32% returns while four US models posted losses of 30% to 62% under identical conditions.
Cost differentials reach 10x to 40x, with Chinese models priced under $0.50 per million tokens versus $3-15 for US closed-source alternatives.
Social Capital migrated workloads to Kimi K2 citing superior performance and lower costs, according to CEO Chamath Palihapitiya.
Chinese AI models have gained measurable traction in Silicon Valley’s startup ecosystem during 2025, challenging assumptions about the AI competitive landscape. The adoption reflects economic pressure on cash-constrained startups combined with improving technical capabilities of open-source Chinese alternatives. What began as cost-conscious experimentation has evolved into production deployment at notable companies and venture-backed startups.
The trend gained visibility in October 2025 when executives at major technology companies publicly endorsed Chinese models. The endorsements came as live trading competitions demonstrated Chinese models’ practical capabilities in high-stakes financial environments, providing empirical validation beyond benchmark scores.
For venture-backed startups operating under cash constraints, the economics are straightforward. API costs for closed-source models can consume 20% to 40% of monthly burn rates for AI-focused companies.
Chinese open-source alternatives reduce this expense to near-zero when self-hosted, extending runway by months without requiring additional fundraising.
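As a rough sketch of that runway math (every figure here is hypothetical, chosen only to illustrate the 20-40% burn-rate range cited above, not drawn from any company's actuals):

```python
# Hypothetical runway arithmetic; all figures are assumed for illustration.
monthly_burn = 500_000   # total monthly spend, USD (assumed)
api_share = 0.30         # API costs as share of burn, mid-range of 20-40%
runway_months = 12       # cash remaining at the current burn rate (assumed)

cash = monthly_burn * runway_months
api_cost = monthly_burn * api_share

# Self-hosting an open model: API spend drops to near zero, replaced by a
# much smaller compute bill (assumed here at 20% of the prior API cost).
new_burn = monthly_burn - api_cost + api_cost * 0.20
new_runway = cash / new_burn

print(f"Runway: {runway_months} months -> {new_runway:.1f} months")
```

Under these assumptions, cutting a $150,000 monthly API bill to a $30,000 compute bill stretches twelve months of cash to nearly sixteen, which is the "extending runway by months" effect described above.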
Airbnb publicly endorses Qwen over ChatGPT
Brian Chesky revealed in October 2025 that Airbnb relies heavily on Alibaba’s Qwen models to power its AI-driven customer service agent, describing the model as “very good, fast and cheap.”
The CEO told Bloomberg and CNBC that OpenAI’s ChatGPT wasn’t quite ready for Airbnb’s needs, with the software development kit not robust enough for the company’s requirements.
The statement carried weight given Chesky’s friendship with OpenAI founder Sam Altman, representing a departure from typical Silicon Valley diplomacy around AI providers.
Airbnb uses 13 different AI models including those from OpenAI, Alibaba’s Qwen, Google and open-source providers, but Chesky’s specific praise for Qwen highlighted performance parity with cost advantages.
Chesky told Bloomberg the company uses OpenAI’s latest models but typically doesn’t use them much in production because there are faster and cheaper models. For companies processing millions of queries monthly, open-source models eliminate per-token API fees, translating to cost reductions of 80% to 90% compared to GPT-4 or Claude pricing structures.
Airbnb’s AI system reduced average customer resolution time from nearly three hours to six seconds, cutting the need for human support by 15%. The operational improvements demonstrate that Chinese models meet enterprise-grade performance requirements for customer-facing applications at major technology companies.
Open-source startups show 80% adoption of Chinese models
Martin Casado, a partner at Andreessen Horowitz, told The Economist that 80% of the startups pitching the firm that use open-source models are now using Chinese AI models.
Casado later clarified the statistic referred to 80% of the 20-30% of new applicants running open-source models, translating to 16-24% of all startups using Chinese open models.
Martin Casado is a general partner at a16z in charge of a $12.5 billion infrastructure fund, having invested in AI startups including World Labs, Cursor, Ideogram and Braintrust. His background includes co-founding network virtualization company Nicira, which sold to VMware for $1.2 billion in 2012.
The venture capital validation carries strategic weight. Firms like a16z conduct extensive technical diligence before investing, including architecture reviews and performance testing. Their willingness to back startups built on Chinese models indicates confidence in the technology’s scalability and long-term viability.
Nathan Lambert, a machine learning researcher, stated he has personally heard of many high-profile cases where the most valued American AI startups are training models on Qwen, Kimi, GLM or DeepSeek.
The adoption reflects changing requirements as AI applications move from proof-of-concept to production, where reliability and cost predictability matter more than cutting-edge capabilities.
Chinese models like Kimi K2 charge just 15 cents per million input tokens and $2.50 per million output tokens, while Alibaba’s Qwen3-Max costs as little as $0.459 per million input tokens.
For early-stage startups operating under cash constraints, this dramatically extends runway.
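Using the per-token prices quoted above, with traffic volumes assumed purely for illustration, the gap compounds quickly at scale:

```python
# Monthly API cost at the per-token prices quoted in the article.
# Traffic volumes below are assumed for illustration only.
requests_per_month = 10_000_000
input_tokens, output_tokens = 1_000, 300  # assumed per request

def monthly_cost(input_price, output_price):
    """Prices are USD per million tokens."""
    mtok_in = requests_per_month * input_tokens / 1e6
    mtok_out = requests_per_month * output_tokens / 1e6
    return mtok_in * input_price + mtok_out * output_price

kimi = monthly_cost(0.15, 2.50)     # Kimi K2 rates cited above
closed = monthly_cost(3.00, 15.00)  # illustrative US closed-source rates
print(f"Kimi K2: ${kimi:,.0f}/mo vs closed-source: ${closed:,.0f}/mo")
```

At these assumed volumes the bill falls from $75,000 to $9,000 per month, consistent with the 80-90% reductions cited earlier.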
Qwen dominates Alpha Arena live trading competition
The first AI Crypto Trading Competition held on the Alpha Arena platform by US-based Nof1.ai laboratory ran from October 18 to November 3, gathering six AI models: DeepSeek Chat V3.1, Qwen3 Max from Alibaba, GPT-5 from OpenAI, Gemini 2.5 Pro from Google, Claude Sonnet 4.5 from Anthropic, and Grok 4 from xAI.
Each model received $10,000 and traded autonomously on the Hyperliquid decentralized exchange with real capital, executing trades without human intervention. All models used identical prompts and market data, trading six major cryptocurrencies.
Qwen 3 Max won with 22.32% returns while DeepSeek secured 4.89%, and the four American models posted losses of 30% to 62%. The competitive gap demonstrated fundamentally different decision-making capabilities in high-volatility environments.
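The reported returns translate into ending balances as follows (a trivial sketch; the two US entries simply bracket the reported 30-62% range of losses rather than naming specific models):

```python
# Ending balances implied by the reported returns; each model started with $10,000.
start = 10_000
returns = {
    "Qwen3 Max": 0.2232,      # +22.32% (reported)
    "DeepSeek V3.1": 0.0489,  # +4.89% (reported)
    "best US model": -0.30,   # losses ranged from -30% ...
    "worst US model": -0.62,  # ... to -62%
}
for model, r in returns.items():
    print(f"{model}: ${start * (1 + r):,.0f}")
```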
DeepSeek displayed traits of a quantitative fund manager, rapidly constructing a multi-coin, low-leverage diversified portfolio, strictly following buy-on-pullback discipline. This mirrored the quantitative hedge fund background of DeepSeek’s parent company.
Qwen’s strategy was aggressive, focusing on Bitcoin with high leverage and going all-in during the October 23 market rebound, pushing returns to 51%. The peak showdown occurred October 27 when Qwen shifted to ETH with 25x leverage, resulting in a $4,150 single-day loss as prices declined, while DeepSeek secured $7,463 profit.
The results challenge AI capability hierarchies. GPT-5 and Gemini rank highest on traditional benchmarks like MMLU and GPQA but exhibited poor judgment in financial markets. The disconnect between benchmark scores and real-world performance raises questions about existing evaluation frameworks.
Cost-performance gap reshapes competitive dynamics
In an analysis published by AllianceBernstein in February, DeepSeek’s pricing was estimated at a fraction of the cost of OpenAI’s.
American closed-source providers charge premium rates for custom model variants, often requiring enterprise contracts with minimum commitments exceeding $100,000 annually, while open-source Chinese models allow unlimited fine-tuning at no marginal cost beyond compute resources.
Developers such as Beijing-based Z.ai and Hangzhou-based DeepSeek reported using older-generation chips not subject to US export controls in relatively small quantities, dramatically reducing training and running costs.
“To an average startup, what really matters is speed, quality and cost at scale,” according to Aman Sharma, co-founder of Lamatic, a South Florida firm helping businesses build AI solutions. Chinese models consistently balance these three factors effectively.
Open-source models create strategic advantages beyond cost. Companies can inspect code, verify security properties and ensure data privacy in ways impossible with API-based services. For regulated industries or companies handling sensitive user data, these transparency benefits often outweigh pure performance considerations.
Margin pressure on American AI companies intensifies as Chinese competition commoditizes model capabilities. OpenAI and Anthropic built business models assuming customers would pay premium prices for superior intelligence. As Chinese alternatives demonstrate comparable capabilities at fractions of the cost, the closed-source value proposition erodes.
Social Capital validates execution advantage
Chamath Palihapitiya revealed in October 2025 that Social Capital migrated much of its work to Moonshot’s Kimi K2 as it was way more performant and a ton cheaper than models from OpenAI and Anthropic.
Palihapitiya made these comments while co-hosting the All-In podcast, providing public validation of Chinese model capabilities in production environments.
As a former Facebook executive, Palihapitiya helped scale the platform from 45 million to 700 million users. His venture capital success with Social Capital established a reputation for identifying technology inflection points ahead of broader market recognition.
The investor perspective emphasizes deployment excellence. Chinese labs ship new models every few weeks with measurable gains, while American labs maintain semi-annual or annual release cycles. Shayne Longpre, an MIT researcher, emphasized the paradigm-shifting pace of Chinese model releases with weekly or biweekly iterations offering choices that American labs don’t match.
Data from the Hugging Face platform compiled by the ATOM Project confirmed Chinese models overtook the US in cumulative downloads, with Qwen reaching 385.3 million compared to Llama’s 346.2 million by October 2025.
Derivative systems built on Qwen now account for more than 40% of new language models on Hugging Face, while Meta’s share fell to 15%.
Nathan Benaich, founder of Air Street Capital, noted that the biggest factor where Chinese model adoption matters is for government and high-stakes enterprise applications where security is paramount, with concerns over training data provenance. For many commercial applications, however, the cost-performance equation drives adoption decisions.
Chinese AI companies iterate faster, moving from research to production in months rather than years. They maintain closer connections between model development teams and end-user applications, enabling rapid feedback loops that create ecosystem advantages complementing technical capabilities.
Market displacement remains uncertain despite Chinese model gains
The adoption of Chinese AI models by US startups represents competitive pressure rather than market displacement.
Cost economics favor open-source alternatives when performance reaches acceptable thresholds, creating pricing ceilings that challenge American closed-source providers’ business models. American companies retain advantages in capital access, research talent and government partnerships, but these strengths matter less if markets value deployment efficiency and cost economics over capability ceilings.
What remains uncertain is whether this represents temporary dislocation or sustained dynamics. Bloomberg analyst Michael Deng stated he doesn’t expect Chinese models to take over the market anytime soon, but they will maintain competitive pressure and price ceilings on AI costs.
For government and high-security applications, data sovereignty concerns continue limiting Chinese model adoption regardless of cost advantages.
The pattern mirrors historical technology transitions where open-source alternatives gained traction in price-sensitive segments while proprietary solutions maintained positions in premium markets.
Whether OpenAI and Anthropic can evolve business models to compete with open-source economics will determine the next phase of AI market structure.
For now, Chinese models have established themselves as credible alternatives that startups and cost-conscious enterprises increasingly deploy in production.