China’s AI Gambit: Code as Standards
Today we’re running a guest translation from the great Sinification substack, which translates leading Chinese thinkers.
Below, Liu Shaoshan — a leading figure in China’s embodied AI research, with a PhD from UC Irvine and official state designation as a “high-end overseas talent” — proposes a roadmap for Chinese AI dominance that takes its cue from America’s successful diffusion of TCP/IP protocols in the late 20th century. Just as influence over the internet afforded the USA “a truly global mechanism of discursive control,” Liu argues, AI diffusion and the standards exported along with AI systems will be key to power projection in the 21st century.
He outlines four strategic levers for achieving open source dominance — technological competitiveness, open-source ecosystem development, international standard-setting, and talent internationalisation. Acknowledging that China’s engagement in open source AI still has a long way to go, he advocates for the creation of a comprehensive “China HuggingFace” that maximizes market share by publishing toolkits for model training, embodied AI implementation, and everything in between. Finally, the author argues that Beijing should encourage Chinese AI talent to live and work abroad, especially in Belt and Road participant countries, rather than encouraging them to come back to China.
This piece is particularly resonant at a time when leading White House AI advisors are tweeting stuff like this:
For the policy answer to the challenges Liu raises, check out Nathan Lambert’s call for action to replicate DeepSeek in America.
Key Points
US tariffs and export controls heighten global uncertainty but create a strategic opening for China’s AI industry to expand internationally and reshape “the global technological order”.
Global adoption of US or Chinese technology — not domestic technological prowess alone — is becoming the key battleground for great-power status in AI.
The global rise of TCP/IP in the 1980s shows how government-led policy, open-source release, mandatory standards and talent “exports” can turn a national technology into the global default.
Rogers’ diffusion model suggests four steps for China: woo “innovators” with cutting-edge tech, attract “early adopters” through open-source, secure an “early majority” by setting international standards, and reach late adopters through Chinese talent going global.
Thus, China’s first objective should be to match US-level capabilities so that its AI-related technologies are credible and attractive to global “innovators” and “early adopters”.
China’s DeepSeek-R1 and other LLMs now rival OpenAI in maths, coding and/or reasoning, demonstrating their technical credibility for such global uptake.
Moreover, China’s embodied AI sector shows strong international competitiveness, with robust upstream manufacturing, rapidly improving midstream technologies and world-leading pilot deployments, forming a highly competitive end-to-end ecosystem that supports rapid technological advancement.
Nevertheless, China still trails the US in the maturity and reach of its open-source community and in its influence over international standards systems.
Recommended actions:
Back talent-going-abroad schemes that help place Chinese experts in key positions in emerging markets (in universities, labs and start-ups), thereby supporting the spread of Chinese technologies and standards internationally.
Establish an “Open-Source Co-construction Fund” to boost China's influence in global technical governance and standards-setting.
Shift from ISO-centric competition to a “code-as-standard” strategy: export products pre-loaded with open-source standards, ensuring that adoption effectively becomes standardisation.
Promote collaborative hardware-software stacks that embed Chinese standards directly in already established code repositories like GitHub and Hugging Face, thereby easing global adoption.
Build a “Chinese Hugging Face”: an integrated and globally influential open-source hub for models, middleware and applications, covering the full pipeline from model development to deployment.
The Author
Name: Liu Shaoshan (刘少山)
Year of birth: est. 1984 (age: 40/41)
Position: Director of the Embodied Intelligence Centre at the Shenzhen Institute of Artificial Intelligence and Robotics (AIRS); Founder and CEO of intelligent robotics company PerceptIn.
Previously: Technology Leadership Panel Advisory Group Member, National Academy of Public Administration (2023–24); Senior Autonomous Driving Architect, Baidu USA (2014–16); Senior Software Engineer, LinkedIn (2013–14); Software Development Engineer, Microsoft (2010–13)
Other: IEEE Senior Member; Recognised as a “national high-level overseas talent” (国家高层次海外人才) by the Chinese government (under its broader Overseas High-Level Talent Recruitment Programme, historically known as the Thousand Talents Plan)
Research focus: Embodied AI; Autonomous driving; Computing systems; Technology policy
Education: BSc, MSc and PhD (2010), UC Irvine; MPA, Harvard University.
CHINA’S “GOING GLOBAL” STRATEGY FOR AI — OPEN-SOURCE TECHNOLOGY PROVISION, STANDARDS BUILDING AND THE RESTRUCTURING OF THE GLOBAL TECH ECOSYSTEM
Liu Shaoshan (刘少山)
Published by GBA Review on 23 June 2025
Translated by Paddy Stephens
(Illustration by OpenAI’s DALL·E 3)
1. Introduction
As we moved into 2025, the Trump administration reintroduced tariffs across the board [统一关税] on global goods while simultaneously implementing stricter controls on technology exports, delivering a dual blow to global trade and technological ecosystems. This latest round of trade protectionism and technological blockade policies has significantly increased systemic uncertainty across the global economy and technology sectors. In March 2025, the Organisation for Economic Co-operation and Development (OECD) revised its forecast for global growth in 2025 from 3.3% down to 3.1%, and further downgraded it to 2.9% in June, citing “trade policy uncertainty” and “structural barriers” as factors dragging down global investment and supply chain stability. The United Nations Conference on Trade and Development (UNCTAD) also warned that escalating trade tensions and policy volatility could slow global growth to 2.3%, leading to stagnation in both investment and innovation. Against this backdrop, a strategic window of opportunity has opened for Chinese AI to “go global”. As the United States increases obstacles to trade and to the sharing and export of technology [技术壁垒], China should participate actively in the restructuring of the global technological ecosystem. It can do this by exporting its technology, building an open-source ecosystem, setting standards [标准制定] and encouraging the global mobility of Chinese AI talent [人才国际流动]. This would then mark a shift from a passive posture to a proactive strategic approach. This article offers a systematic analysis of the strategic pathways and concrete steps that China should take across these four key dimensions.
2. Historical Lessons From [Seizing] an Opening For Technological Leapfrogging
On 8 May 2025, during a hearing of the US Senate Committee on Commerce titled “Winning the AI Race”, Microsoft President Brad Smith issued a warning: “The number-one factor that will define whether the US or China wins this race is whose technology is most broadly adopted in the rest of the world.” His statement underscored a fundamental shift in the strategic landscape: against the backdrop of Trump's increasingly stringent technology and global trade policies, the extent to which [a country’s technology] is adopted globally has become the key determinant of [that country’s] great power status in AI. It is precisely within this climate of mounting restrictions that China’s AI industry has encountered a window of opportunity to “go global” and reshape the international technological ecosystem.
Looking back at the 1960s to 1980s, the United States successfully capitalised on the opportunity offered by its technological ascendancy in IT. First, in terms of technology export, in 1969 the US Defence Advanced Research Projects Agency (DARPA) launched ARPANET [a forerunner to the Internet], and by 1980 had incorporated TCP/IP [its new communications protocol] into its defence communications system. On 1 January 1983, a full network-wide transition to TCP/IP was completed, laying the foundation for the global internet. Next came the construction of the open-source ecosystem: DARPA’s TCP/IP protocols were incorporated into the open-source BSD system, thereby initiating an “open-source means of dissemination” [开源即传播] model. In 1986, the NSFNET project extended this model to the academic community. This made networking functions readily accessible at the operating system level and further stimulated broad participation from the research and developer communities. It was precisely this “code usable is code used” [代码可用即可用] design philosophy that accelerated the internet’s diffusion from the laboratory to commercial and civilian domains. The evidence demonstrates [实践证明] that only when core communication protocols move from closed-loop research to open-source community development can exponential technological breakthroughs occur — moving from source code to [forming the core of] global infrastructure [从源代码走向全球基础设施].
In addition to its advanced technology research and development (R&D) and open-source contributions, the United States also achieved widespread adoption of TCP/IP by standardising outputs and aligning government policy with open-source practice to establish a globally compatible basic infrastructure [全球兼容基础]. In March 1982, the US Department of Defence officially designated TCP/IP as the standard for military communications and announced a nationwide transition scheduled for 1 January 1983 — the so-called “flag day” policy. This was not merely a technical upgrade, but a form of “compatibility mandate”: all hosts connected to the network were required to support TCP/IP, or else they would be disconnected on the day of the switch. This compulsory standardisation not only enforced synchronised upgrades across the US military-industrial and research systems but also fostered a nationwide consensus on protocol standards.
More importantly, this standards rollout did not take place in a closed-off system [输出非封闭]. Rather, it advanced in tandem with the open-source ecosystem: DARPA awarded contracts in phases to institutions such as BBN, Stanford and Berkeley to develop TCP/IP implementations for major platforms including Unix BSD, IBM systems and VAX, subsequently incorporating the code into the 4.2BSD version of Unix for public release. In 1986, the NSFNET project further accelerated [进一步推动] the widespread deployment of this protocol across the national academic network, effectively achieving near 100% coverage.
This series of measures served collectively as the blueprint [机制范本] for the internationalisation of a US-developed communications standard. The government played a leading role by setting a compatibility timeline to guide synchronised upgrades to [this new] standard. Open-source practices facilitated multi-platform availability, enabling “use on demand” by research institutions and enterprises. The infrastructure for global system compatibility was deployed in parallel with the release of open-source code. This strategy — combining policy, open-source and platform integration — not only accelerated the adoption of the standard [缩短了标准推广路径], but also rapidly established TCP/IP as the default protocol for international communication.
From the 1960s to 1980s, the United States established its leading role through the export of technical standards. Of even greater long-term significance was [its strategy of] sending IT talent abroad [人才走出去], which profoundly influenced the global internet architecture. This experience offers valuable lessons for Chinese AI’s [ambition to] expand overseas [中国AI出海]. America’s engineers and researchers didn’t close themselves off from the world [封闭体系]— large numbers participated actively in international standards organisations and met with different parts of the academic community [社区会议]. For example, the International Network Working Group (INWG), founded in 1972 by American scholars such as Vint Cerf and Steve Crocker, played a key role in designing global network protocols and laid the groundwork for the birth of TCP/IP.
Subsequently, the Internet Engineering Task Force (IETF), established in 1986, held its first meeting in San Francisco [Note: its first meeting was held in San Diego, not San Francisco] with 21 American researchers and received government funding and support. These platforms became key frontlines through which American researchers exercised technological discourse power [成为美国人才传递技术话语权的前沿阵地]. American experts held core technical and leadership roles within the INWG, the IETF and its parent body, the Internet Architecture Board (IAB) [ — which together oversaw the development of the internet’s technical architecture]. Figures such as Vint Cerf, Jon Postel, and David Clark participated in meetings for many years, published RFC documents, and oversaw the registration of technical parameters and the standardisation process. These efforts not only ensured the professionalism of the adopted standards, but also reinforced the central role of the United States within global internet governance.
More importantly, the institutional design adopted by these organisations was one of open collaboration, enabling engineers from around the world to influence the direction of standards through voluntary participation. By leveraging their first-mover advantage and influence within the research community [社区影响力], American researchers led the development of key standards. In doing so, the US not only exported the technology itself, but also [American] internet culture, projecting its governance discourse power [overseas]. Furthermore, this strategy of internationalising talent meant that the US exported not just protocols, but also the capability to shape their evolution and the associated rule-making process — thereby establishing a truly global mechanism of discourse control over the Internet.
3. Strategic Lessons for Chinese AI “Going Global”
Technology diffusion is a systemic endeavour [系统工程]. In this section, I will draw on the theory of “innovation diffusion” to analyse the historical experience of US internet technology diffusion, offering systematic strategic recommendations for how Chinese AI can go global.
The theory of “innovation diffusion” was introduced by Everett Rogers in his 1962 book Diffusion of Innovations. Its main aim is to explain how new technologies and ideas spread and are adopted within a social system. This process does not happen instantaneously but unfolds gradually over time, spreading slowly between different groups in society through various channels.
The diffusion of innovation is the result of the interaction of multiple factors, including the innovation itself, the channels through which it spreads, the social environment and time. In his theory, Rogers proposed that members of society vary significantly in their willingness to adopt new ideas and can be categorised into five groups: “innovators”, “early adopters”, “early majority”, “late majority” and “laggards”. A technology’s influence gradually accumulates [缓慢累积影响] until it begins to be adopted by the “early majority”. Once it reaches this point of critical mass, its adoption accelerates rapidly. By observing the adoption process of key actors and leveraging their demonstration effect [借助他们的示范效应], an innovation can move beyond niche uptake and achieve widespread acceptance.
In fact, this theory emphasises the evolutionary process whereby an innovation adapts continuously to the needs of [different] social groups [创新自身不断适应群体需求]. It not only explains why some innovations enter the mainstream successfully while others stall due to lack of trust from “latecomers”, but also highlights the critical role of communication strategies, social networks and opinion leaders in the diffusion process.
In short, whether an innovation can truly achieve widespread adoption often depends on its ability to bridge the gap between marginal experimentation and mass adoption. This theory offers important insights for understanding technology export and international uptake: true diffusion is not a one-off success, but a gradual process of acceptance.
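Rogers’ adopter categories are often paired with the Bass diffusion model, in which adoption is driven partly by external influence and partly by word of mouth from existing adopters. A minimal sketch of the resulting S-curve — the parameters here are illustrative defaults from the diffusion literature, not figures from this article:

```python
# Discrete-time Bass diffusion model: cumulative adoption F follows an S-curve,
# accelerating once word-of-mouth (imitation) takes over from external influence.
# p = coefficient of innovation, q = coefficient of imitation (illustrative values).

def bass_diffusion(p=0.03, q=0.38, periods=30):
    F = 0.0          # cumulative fraction of eventual adopters
    trajectory = []
    for _ in range(periods):
        new_adopters = (p + q * F) * (1 - F)   # adoption hazard times remaining pool
        F += new_adopters
        trajectory.append((F, new_adopters))
    return trajectory

def adopter_category(F):
    # Rogers' five categories, by cumulative share of eventual adopters.
    if F <= 0.025: return "innovators"
    if F <= 0.16:  return "early adopters"
    if F <= 0.50:  return "early majority"
    if F <= 0.84:  return "late majority"
    return "laggards"

if __name__ == "__main__":
    for t, (F, new) in enumerate(bass_diffusion()):
        print(f"t={t:2d}  cumulative={F:5.1%}  new={new:5.1%}  ({adopter_category(F)})")
```

Per-period adoptions peak mid-trajectory: this is the “critical mass” point Rogers describes, after which reaching the early majority becomes self-reinforcing.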
Step One: attract [the interest of] “innovators” through technological sophistication [通过技术先进性获取“先驱者”]. Innovators focus on the breakthrough advantages of the technology itself. Only by possessing technological capabilities [技术实力] on a par with the global frontrunners (for example, large language models (LLMs) approaching the performance of OpenAI’s, or surpassing them in reasoning ability) can Chinese AI technologies be deemed worthy of adoption by [these early] global users.
Step Two: attract early adopters through open-sourcing. Early adopters are not only interested in how cutting-edge a technology is but also in usability and real-world application scenarios. During the internet era, the United States fostered an early ecosystem by open-sourcing the TCP/IP protocol, enabling the global research and engineering communities to use and experiment with it directly. If China, at this stage, open-sources large model code, interface documentation, inference APIs and the like, it would enable overseas developers to adopt and test these models quickly in local environments, thereby building recognition and participation in an international community.
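One concrete reason open-sourcing lowers the adoption barrier is interface compatibility: many open-weight model servers expose OpenAI-style chat-completion endpoints, so an overseas developer can test a Chinese model by changing only a base URL. A minimal sketch of building such a request — the base URL, model name and API key below are placeholders, not details from the article:

```python
import json
import urllib.request

def chat_request(base_url, model, messages, api_key="EMPTY"):
    """Build an OpenAI-style chat-completion HTTP request, the de facto
    interface pattern that many open-weight model servers adopt."""
    payload = {"model": model, "messages": messages, "temperature": 0.2}
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )

# Pointing an existing client at a locally hosted open model is just a URL swap:
req = chat_request("http://localhost:8000/v1", "deepseek-r1",
                   [{"role": "user", "content": "Summarise Rogers' diffusion theory."}])
print(req.full_url)   # http://localhost:8000/v1/chat/completions
```

The design point is that standardised interfaces, like open code, make “usable” shade into “used”: no client rewrite is needed to evaluate the new model.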
Step Three: Reach the “early majority” by promoting global standards. These users place greater value on whether a technology has become an industry standard and whether it is widely compatible. The United States established global compatibility and interoperability by implementing a “flag-day” — when all connected systems automatically switched over to the new TCP/IP standard — thereby establishing it [统一] as the standard communication protocol. If Chinese AI could similarly embed interface protocols, model formats and evaluation metrics into international standard frameworks, it would help bridge the gap between the peripheral [early users] and the mainstream, thus accelerating widespread adoption.
Finally: [China needs to] gain the remaining user groups by exporting talent [人才输出]. This stage involves reaching the “late majority” and the “laggards”, and relies heavily on “talent going global” [人才走出去]. This is not limited to technical personnel going abroad to implement projects. A more important part is the active participation of Chinese AI engineers, researchers and policymakers in international standards bodies, community conferences and open-source projects—writing forum articles, submitting technical specifications and serving as working group leads. Just as Vint Cerf and Jon Postel once brought American protocol concepts into the IAB/IETF [which set technical standards for the internet], shaping global rules through RFCs and governance, China must also foster deep expert-level international engagement in order to shape norms at the conceptual level and establish its technologies as a credible global standard.
4. Is Chinese AI Cutting-edge Enough to “Go Global”?
Based on the innovation diffusion theory discussed in the previous section, this next part explores whether Chinese AI technology is cutting-edge enough to “go global” [是否具备“走出去”的领先性], which is fundamental for technology diffusion.
First and foremost, DeepSeek has already clearly demonstrated its ability to compete alongside the world’s leading foundational models. Multiple independent benchmark tests show that DeepSeek-R1 performs comparably to OpenAI’s o1 model in mathematics, programming and reasoning tasks. In some tests, such as the MATH-500 question set, it even slightly outperforms it. [Additionally, based on DeepSeek’s own research], it also outperforms on the SWE [coding] question set. Its reasoning ability has been widely recognised by its community of users. Although DeepSeek-R1 falls slightly short of OpenAI’s o1 on some complex reasoning problems, its overall performance remains at a high level. As the first open-source large model with comparable capabilities — and with its code freely available and easily accessible — it is quickly gaining traction globally among developers. Even more important are its cost advantages and open approach. By using a mixture-of-experts architecture and reinforcement learning techniques, DeepSeek significantly reduces training and inference costs, achieving international top-tier performance not only in capability but also in efficiency.
Second, in the field of embodied AI enabled by foundational models, China has demonstrated clear global competitive advantages. The supply chain for embodied intelligence encompasses upstream core components, midstream system technologies—including foundation models and computational power—and downstream application scenarios. On the upstream side, the rise of the new energy smart vehicle industry has propelled China’s component manufacturing to a high level of localisation and mass production capacity, spanning key elements from sensors and LiDAR to servo motors. This supply chain advantage provides a solid foundation for the large-scale deployment of embodied intelligence systems.
Turning to midstream system technologies, as previously mentioned, products like DeepSeek are at the global cutting edge. In the area of computing chips, companies such as Huawei and Rockchip continue to make significant progress. Although they are still trailing behind their American counterparts for now, they are closing the gap rapidly.
In downstream application scenarios, China currently leads the world—particularly in large-scale pilot deployments of robotics projects initiated by local governments. A notable example is the “robotics district” in Longgang, Shenzhen. Robots are being deployed in urban management, manufacturing, logistics, elderly care and other sectors. Governments in cities like Shanghai and Shenzhen are providing subsidies and development platforms to support the implementation of these projects.
Overall, China has established a comprehensive closed-loop [完整闭环] AI ecosystem spanning upstream, midstream and downstream segments. Although midstream technologies still lag slightly behind, the gap is narrowing rapidly. This industrial structure enables Chinese products to be competitive in international markets and provides a solid foundation for the future diffusion of its AI technologies.
5. The Current State of China’s AI Open-Source Ecosystem
The previous section concluded that China’s AI technology is sufficiently cutting-edge to “go global”. This next section will examine the current state of China’s AI open-source ecosystem—specifically, whether the open-source infrastructure is sufficiently prepared to attract the world’s “early adopters”.
So far, China has achieved certain foundational milestones [取得初步成就] in building an open-source ecosystem for AI and embodied intelligence. However, it still lags significantly behind the United States in terms of global reach, community governance and platform influence—factors that constrain the depth and breadth of its technology’s international expansion. In terms of LLMs, more than ten Chinese open-source models with over 100 billion parameters—such as DeepSeek—have been released. Some of these models have matched or even surpassed OpenAI’s o1-level performance in MATH and SWE-bench, demonstrating strong technological momentum.
By contrast, the US continues to dominate the open-source ecosystem. The LLaMA series of models released by Meta have fostered a broad and robust ecosystem. The LLaMA2 series alone has amassed more downloads and citations on the Hugging Face platform than all Chinese open-source models combined [Note: the time period for this comparison is not specified]. Moreover, the US has established comprehensive mechanisms for model publication, data annotation, benchmark testing and community engagement — particularly through platforms like Hugging Face — affording its models the status of “default standard” within the open-source community.
In the field of embodied AI, China has leveraged its service robotics and smart vehicle industries to establish leading advantages in sensor technology, full-machine platforms and scenario integration. For example, the Shenzhen Institute of Artificial Intelligence and Robotics has released the AIRSHIP, AIRSPEED and AIRSTONE series of open-source projects, which together form an end-to-end open-source system connecting the model, computing and application [Note: Liu is involved in the first two of these projects]. However, given the relatively short time this ecosystem has had to develop, there remains a significant gap in core middleware and the broader development ecosystem when compared with the US-led ROS (Robot Operating System) framework.
ROS, [an open-source framework for building robot software,] was released in 2007 by the US-based Willow Garage, and its core developers remain concentrated primarily in the United States and Europe. It boasts a clear contributor structure and abundant training resources. By contrast, domestic Chinese alternatives to ROS—such as CyberRT and XBot—have achieved partial system compatibility. However, their visibility on GitHub [a code repository], global developer participation and completeness of development documentation remain low. A strong and influential open-source community around these projects has yet to emerge.
Based on this, as China promotes the upgrade and international expansion of its open-source ecosystem, [my] recommendation is to focus on the following three areas:
A “Chinese Hugging Face” with global influence should be established. That is, an integrated open-source platform that unifies foundational models, system middleware and application scenarios, forming a complete pipeline from model training to deployment.
The Chinese government could set up an “Open-Source Co-construction Fund” to support universities, research institutes and enterprises in participating in technical governance and standards-setting within international organisations such as the IEEE, ACM and IETF (whose standards are published as RFCs), thereby amplifying China’s voice on the global stage.
Finally, leveraging China’s current supply chain advantages, efforts should be made to promote “collaborative open-source” for embodied intelligence hardware and software platforms, establishing a standardised open-source stack across hardware and software to truly realise the global deployment and ecosystem integration [potential] of Chinese technologies.
6. The Current State of the International Standardisation of China’s AI Technology
The previous section concluded that China’s AI open-source ecosystem still has significant room for development. In this section, I will examine the current state of the international standardisation of China’s AI technologies, with a particular focus on whether Chinese technologies have formed internationally recognised standard systems capable of attracting the global “early majority” of users.
China’s participation in international standardisation in the fields of AI and embodied intelligence is gradually increasing, but its overall influence still lags significantly behind that of the United States. In recent years, China has actively engaged with international standards organisations such as ISO/IEC JTC 1, IEEE, and ITU, and has submitted proposals on topics including “AI Terminology”, “AI Risk Management” and “General Requirements for Intelligent Service Robots”. However, by comparison, the United States continues to hold a dominant position in international bodies such as ISO and IEEE SA, retaining crucial discourse power on core issues [核心议题中仍掌握关键话语权] such as AI ethics, model interpretability, trustworthy AI and algorithmic transparency.
Most importantly, ISO/IEC JTC 1/SC 42 is the core body for setting international standards in artificial intelligence. Its secretariat is held by the United States through ANSI (the American National Standards Institute). Since 2018, this body has led the drafting of several key standards, including the AI Management System (ISO/IEC 42001:2023) and AI Risk Management Guidelines. SC 42 comprises five working groups (WGs), each responsible for a distinct technical area. Among these, Chinese experts hold the convenorship of only one group—WG5 on “Computational Approaches and Computational Characteristics of AI Systems”. The convenors of the other groups come from Canada, the US, Ireland and Japan. As such, in terms of strategic direction, control of the secretariat, and agenda-setting, China still lags far behind the United States. This structure highlights that while China is making sustained efforts in international standardisation, there remains considerable room for it to increase its presence in organisational leadership and more generally in senior roles [within international standard-setting bodies].
Given the current situation, I recommend that China promote the internationalisation of embodied intelligence standards by moving beyond the traditional mindset of “intra-ISO competition” and shifting towards an “ecosystem-led” model driven by technology and open-source platforms. Standards should be embedded into the open-source implementations of leading platforms such as AIRSHIP, with reference implementations and toolchains published on global mainstream technical communities like GitHub and Hugging Face—thereby enabling external adoption through a “code-as-standard” model.
At the same time, standards should be tightly integrated with Chinese hardware and software systems, accompanying product deployments abroad into emerging markets, thereby establishing these standards as the de facto choice. Only when Chinese standards are widely adopted in technical applications and development can China truly build global discursive power — elevating itself from “participant” to “rule-maker”.
7. The International Discourse Power and Influence of Chinese AI Talent
Due to how and when it developed [历史发展原因], China lags far behind the United States in both the maturity of its AI open-source ecosystem and its influence over international standards systems. This section will discuss the international discourse power and influence of Chinese AI researchers. In particular, it will look at how global influence of individuals can help China reach mainstream user groups worldwide.
In recent years, Chinese AI experts have significantly increased their standing within — and become distributed more widely across — the global academic community, laying a solid human capital foundation for China's technological internationalisation. According to MacroPolo’s 2023 report, 38% of top AI researchers in the US were originally from China—slightly surpassing the 37% with domestic US backgrounds. In top international AI and robotics conferences and journals, Chinese scholars are approaching US levels in terms of both their output of papers and participation in peer review. Data from Guide2Research also show a steady rise in China’s share among the world’s top-ranked computer scientists, indicating that the academic calibre of Chinese AI talent is now world-leading.
Building on this, national and local authorities in China have introduced numerous policies in recent years to encourage talent repatriation [Note: of which Liu is a beneficiary] and to support the development of research platforms—strengthening the country’s AI R&D capabilities. However, boosting the global influence of Chinese AI cannot rely solely on “talent returning home”; it also requires a systematic strategy for “talent going abroad”. More outstanding Chinese AI professionals should be encouraged to work in Belt and Road countries and other emerging markets. This means actively engaging in the development of local education systems and industry by taking up academic positions, establishing research institutes, launching joint laboratories, or founding tech companies. This will not only support the global diffusion of Chinese technologies and standards but also help build a China-centred global network of scientific and technological cooperation.
Although systematic data on the number of Chinese scholars teaching in Belt and Road countries is still lacking, the activity level of cooperation platforms such as the University Alliance of the Silk Road suggests that China has considerable potential in terms of talent export and knowledge diffusion. Accordingly, while continuing to encourage “talent repatriation”, the state should also establish institutional frameworks to support “talent going abroad”. These might include setting up dedicated overseas teaching and research cooperation funds, advancing joint training mechanisms with Belt and Road countries, and promoting localisation of technology standard certifications and curriculum development—thereby enabling the global expansion of Chinese AI and embodied intelligence and strengthening China’s international discourse power.
8. Conclusion
This article has provided a systematic examination of the current international landscape and the technological, open-source, standards-based and talent-related foundations underpinning the global expansion of Chinese AI.
In the face of a new wave of trade protectionism and technological blockades launched by the United States in 2025 — which are leading to the restructuring of the global technological order — China is entering a strategic window for technology diffusion. Historical experience shows that while technological leadership is a prerequisite, without the combined strength of early adopters, standard disseminators and consensus-builders in governance, it remains difficult to establish a dominant position within the global technology system. Therefore, the development of Chinese AI must not be confined to domestic applications and talent repatriation; instead, it must shape the global ecosystem proactively, foster international consensus and construct a China-led pathway for innovation diffusion.
To this end, the article recommends that China formulate and implement an integrated internationalisation strategy built around “technology–open-source–standards–talent”. Specifically, this means:
Strengthening technological leadership by focusing on critical areas in foundation models and embodied intelligence systems, and promoting continuous technological [upgrades] through iteration;
Building an open-source ecosystem portal, enabling the international release of systems, models and toolchains via platforms to capture greater market share;
While striving for greater influence [话语权] within the existing international standards system, China should simultaneously promote an “open source is the standard” approach—embedding standards into hardware–software product systems and exporting them to emerging markets as part of their global deployment;
Encouraging the international mobility of top-tier talent, supporting their development in Belt and Road countries and beyond, and building a global network for R&D and standards dissemination—thereby securing meaningful international discourse power and a structural advantage for Chinese AI technologies and governance frameworks.