China Innovation Watch

Chinese humanoids gain edge in retail, logistics
VLA-powered humanoids and vertical data moats are driving hundreds of commercial deployments, cutting costs and reshaping service work.

Aug 19, 2025
  • China’s humanoid robot sector scales up from prototypes to commercial deployment, with retail and industrial models reaching hundreds of units shipped in 2025.

  • Vertical data accumulation emerges as the key competitive moat—logistics-focused robots generate 200+ data points per parcel, driving rapid iteration.

  • Vision–Language–Action (VLA) models push functional gains, enabling robots to execute complex, unstructured tasks such as autonomous bed-making and mixed-product retail picking.

  • Price compression accelerates market penetration, with entry-level humanoids for entertainment or guidance now costing US$9,600–11,000, on par with a worker's annual wage in Beijing.

  • Export interest rises, with Japanese and Korean executives scouting Chinese robot suppliers at the 2025 World Robot Conference.

China's World Robot Conference (WRC) 2025, held August 8–11 in Beijing, marked a turning point for the country's humanoid robotics sector. Over 200 domestic and international exhibitors, more than 50 specializing in humanoids, unveiled over 100 new products.

Unlike previous years' novelty-focused demonstrations, this edition centered on commercially viable deployments in research, manufacturing, and service industries.

Industrial humanoids integrate into factory workflows

In 2024, most industrial robots at WRC ran isolated demos; in 2025, vendors emphasized cluster collaboration, with multiple humanoids coordinating with automated production lines.

UBTECH’s Walker S2, integrated via its Group Brain 2.0 scheduling system with mobile robots and unmanned logistics vehicles, showcased end-to-end workflows from warehouse intake to intelligent sorting. Its hot-swappable battery design cuts downtime to three minutes, enabling 24/7 operations.

For manufacturers, this is a productivity leap. UBTECH VP Jiao Jichao stresses that integration with client industrial backend systems—not just mechanical capability—is the real barrier to adoption. This is consistent with international manufacturing automation trends, where seamless MES (Manufacturing Execution System) integration dictates ROI.
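
To make the integration point concrete, the sketch below shows the kind of backend coupling involved: a scheduler that takes work orders from a manufacturing execution system and assigns them to whichever robot has the required capability. All names here (WorkOrder, Robot, dispatch) are illustrative assumptions; the source does not describe UBTECH's Group Brain 2.0 API, and this is not it.

```python
"""Minimal sketch of MES-to-robot task dispatch.

Hypothetical example only: it does not reflect UBTECH's Group Brain 2.0
or any vendor's real interface. It illustrates why backend integration,
not mechanics, is the hard part: work orders, capabilities, and robot
state must be reconciled continuously.
"""
from dataclasses import dataclass


@dataclass
class WorkOrder:
    order_id: str
    task: str        # e.g. "sort_parcel", "move_pallet"
    station: str     # target workstation on the line


@dataclass
class Robot:
    robot_id: str
    capabilities: set
    busy: bool = False


def dispatch(orders, fleet):
    """Assign each work order to the first idle robot able to do the task."""
    assignments = []
    for order in orders:
        for robot in fleet:
            if not robot.busy and order.task in robot.capabilities:
                robot.busy = True
                assignments.append((order.order_id, robot.robot_id))
                break
        else:
            assignments.append((order.order_id, None))  # no robot free: queue it
    return assignments


if __name__ == "__main__":
    fleet = [
        Robot("humanoid-01", {"sort_parcel", "kitting"}),
        Robot("agv-07", {"move_pallet"}),
    ]
    orders = [
        WorkOrder("WO-1001", "sort_parcel", "inbound-3"),
        WorkOrder("WO-1002", "move_pallet", "dock-1"),
        WorkOrder("WO-1003", "sort_parcel", "inbound-4"),  # left unassigned
    ]
    print(dispatch(orders, fleet))
```

In a real deployment this loop has to stay consistent with the MES's own order state, shift schedules, and safety interlocks, which is exactly the integration burden Jiao describes.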

Industrial buyers now lead with two questions: "Can it do the job?" and "What's the runtime?" Battery life and charging logistics have become as critical as load capacity or degrees of freedom.

Retail robots become revenue generators

Service and retail applications are now the most visible—and monetizable—segment. Coffee-making, shelf-picking, and snack bar robots demonstrated full automation from QR code ordering to delivery.

Galaxy Universal’s Galbot, for example, handles customer reception, ordering, product retrieval, and multi-language engagement in its themed “space capsule” stores.

The economics are compelling: Beijing service wages (~¥10,000/month fully loaded) mean a three-person 24/7 team costs ~¥300,000/year. A robot in the ¥70,000–300,000 range pays for itself within two years.
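
The payback claim is easy to check with back-of-the-envelope arithmetic. The sketch below uses the figures quoted above (¥10,000 per month fully loaded, three workers covering a 24/7 post, robot prices of ¥70,000–300,000); the 10% annual maintenance allowance is an assumption added purely for illustration.

```python
# Back-of-the-envelope payback using the wage and price figures quoted above.
# The 10% annual maintenance allowance is an assumption, not from the source.
MONTHLY_WAGE = 10_000      # CNY, fully loaded, per worker (Beijing service role)
WORKERS_REPLACED = 3       # staffing one 24/7 post across shifts

annual_labor_cost = MONTHLY_WAGE * 12 * WORKERS_REPLACED   # ≈ ¥360,000 per year

for robot_price in (70_000, 150_000, 300_000):
    annual_maintenance = 0.10 * robot_price                # assumed allowance
    net_annual_saving = annual_labor_cost - annual_maintenance
    payback_months = 12 * robot_price / net_annual_saving
    print(f"¥{robot_price:,} robot -> payback ≈ {payback_months:.1f} months")
```

Even the top-of-range machine pays back in under a year against a full three-person team, so the two-year figure holds comfortably even if a robot displaces only part of the headcount.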

This labor substitution model mirrors patterns seen in Japan’s convenience-store automation and U.S. quick-service kitchens, but Chinese vendors benefit from integrated supply chains, allowing faster iteration and lower BOM costs.

Research-grade platforms under ¥200,000

Research institutions prioritize structural safety, modularity, and cost efficiency over autonomy.

LimX Oli, a full-size humanoid by LimX Dynamics, offers 31 degrees of freedom, Python programmability, and simulation compatibility at ¥158,000 (US$21,700)—the first sub-¥200k full-DOF research-grade humanoid.
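
For research buyers, "Python programmability" typically means joint-level experiments can be scripted in a few dozen lines. The sketch below is a generic illustration of that workflow built around a mocked-up client class; it is not LimX Dynamics' actual SDK, which the source does not document.

```python
"""Generic joint-trajectory script of the kind research labs write.

MockHumanoidClient is a stand-in, not LimX Dynamics' real SDK (the source
does not document their interface). Real platforms expose a similar
pattern: connect, read joint state, stream position targets.
"""
import math
import time


class MockHumanoidClient:
    """Stand-in for a vendor SDK: stores commanded joint positions."""

    def __init__(self, num_joints=31):
        self.positions = [0.0] * num_joints

    def set_joint_position(self, joint_index, radians):
        self.positions[joint_index] = radians

    def get_joint_positions(self):
        return list(self.positions)


def wave_joint(client, joint_index=12, seconds=2.0, hz=50):
    """Stream a slow sine trajectory to one joint (a hypothetical elbow)."""
    steps = int(seconds * hz)
    for i in range(steps):
        target = 0.4 * math.sin(2 * math.pi * i / steps)   # ±0.4 rad swing
        client.set_joint_position(joint_index, target)
        time.sleep(1.0 / hz)


if __name__ == "__main__":
    robot = MockHumanoidClient(num_joints=31)
    wave_joint(robot)
    print("final joint 12 position:", robot.get_joint_positions()[12])
```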

This aggressive pricing positions Chinese suppliers as alternatives to higher-cost Japanese and U.S. research platforms.

VLA models redefine robot “intelligence”

The breakthrough of the past year is the shift from traditional Vision–Action (VA) models to Vision–Language–Action (VLA) architectures.

These add a language-based reasoning layer between perception and control, allowing robots to generalize across novel environments without explicit reprogramming.

  • Zhipingfang’s "Ai Bao" uses a dual-system GOVLA model: a "slow" planner for complex logic and a "fast" system for real-time reactions (a conceptual sketch of this split follows the list).

  • Xinghaitu’s G-0 maps visual inputs directly to 23 joint actuators; in tests, it autonomously planned and executed bed-making in arbitrary room layouts.

  • Xingchen’s Astribot S1, equipped with DuoCore-WB, performed a full “day in the life” scenario—making breakfast, running a café shift, and co-creating traditional lacquer fans with visitors.
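
The "slow planner / fast controller" split flagged in the first bullet can be sketched as two loops running at very different rates: a deliberative step that turns a language instruction into subtasks, and a reactive step that adjusts the motion on every control tick. The code below is a conceptual illustration only, with stub logic throughout; it does not reproduce GOVLA or any vendor's architecture.

```python
"""Conceptual sketch of a dual-rate "slow planner / fast controller" split.

Illustrative only; it does not reproduce Zhipingfang's GOVLA or any
vendor's system. The slow loop reasons over the whole instruction once,
the fast loop reacts to fresh observations on every control tick.
"""
import random


def slow_planner(instruction):
    """'System 2': decompose a language instruction into ordered subtasks."""
    # A real VLA model would use a language backbone here; this is a stub.
    if "bed" in instruction:
        return ["locate_bed", "grasp_duvet_corner", "pull_flat", "tuck_edges"]
    return ["inspect_scene"]


def fast_controller(subtask, observation):
    """'System 1': turn the current subtask plus observation into a motion step."""
    # React to disturbances (e.g. a wrinkle detected) without re-planning.
    if observation["disturbance"]:
        return f"{subtask}: corrective micro-adjustment"
    return f"{subtask}: nominal motion step"


def run(instruction, ticks_per_subtask=3):
    plan = slow_planner(instruction)            # runs once (slow, deliberative)
    for subtask in plan:
        for _ in range(ticks_per_subtask):      # runs every tick (fast, reactive)
            observation = {"disturbance": random.random() < 0.2}
            print(fast_controller(subtask, observation))


if __name__ == "__main__":
    run("make the bed in this room")
```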

VLA’s generalization capacity is illustrated in retail: while JD.com’s robots handle only uniform beverage SKUs, Galaxy Universal’s system picks from a complex mix of chips, bread, instant noodles, and fragile puffed snacks, closer to supermarket-level variability.

However, some executives caution against over-reliance on synthetic data for VLA training. UBTECH’s Jiao Jichao warns that real-scene randomness—unpredictable lighting, human interruptions, and packaging defects—remains essential for robust model performance.

Data as moat and barrier
