AI Geopolitics in the Age of Test-Time Compute w Chris Miller + Lennart Heim

Does America need a Manhattan Project for AI? Will espionage make export controls obsolete? How can the U.S. foster an open innovation ecosystem without bleeding too much intellectual property?

To discuss, ChinaTalk interviewed Lennart Heim, an information scientist at RAND, and Chris Miller, author of Chip War.

We get into…

  • The growing importance of inference scaling and what it means for export controls,

  • Regulatory oversights that have allowed China to narrow the gap in AI capabilities,

  • China’s best options for keeping up in a low-compute environment,

  • Methods to secure model weights and associated tradeoffs,

  • Partnerships in the Middle East and the tension between export controls and economies of scale,

  • Whether autocracies are better suited for facilitating AI diffusion.

Listen on Spotify, Apple Podcasts, or your favorite podcast app.


Compute Domination and the Worst of All Worlds

Jordan Schneider: Let’s start with algorithms. Lennart, what’s your take on what DeepSeek’s models do and don’t mean for US-China AI competition, and maybe more broadly, what scaling on inference means for export controls?

Lennart Heim: To some degree, many were taken by surprise because, basically, what we’ve seen this year is that the gap between U.S. and Chinese AI models has been narrowing slightly. Of course, this always depends on how you measure it. Benchmarks are the best way to assess this, though whether they’re the most accurate success metric is a separate discussion we can have later.

It’s wrong to conclude that export controls aren’t working just because DeepSeek has developed a model that’s as good or nearly as good as OpenAI’s. That conclusion would be mistaken for two reasons.

First, export controls on AI chips were only implemented in 2022, and the initial parameters were flawed. Nvidia responded by creating the A800 and H800, chips that were nearly as effective as the restricted U.S. chips. This oversight cost us a year until the rules were updated in October 2023. DeepSeek, meanwhile, reportedly acquired 20,000 A100 chips before export controls were tightened and may have also obtained a number of H800s. These are still powerful chips, even if they’re not the latest. Because they have such a large stockpile, they’ll remain competitive for the foreseeable future.

Second, export controls don’t immediately stop the training of advanced AI systems. Instead, they influence the long-term availability of AI chips. For now, if someone needs sufficient chips to train a competitive AI system, they can likely still access them. However, as scaling continues, future systems may require millions of chips for pre-training, potentially widening the gap again.

For current systems, which use around 20,000 chips, we shouldn’t expect immediate impacts from export controls, especially given their short duration so far. The real question might be whether these controls affect the deployment phase. If Chinese users start extensively interacting with these systems, do they have enough compute resources to deploy them at scale? That remains to be seen.

Jordan Schneider: Let’s underline this. Inference scaling has the potential to exponentially increase compute demands. Can you explain why?

Lennart Heim: Yes. Once we have a well-trained system — developed with a significant number of chips — we can enhance its efficiency with techniques like reinforcement learning. This improves the system’s reasoning capabilities.

What do we mean by reasoning? Essentially, the model generates more outputs or tokens, which demands more compute power. For instance, if a model previously responded to queries in one second, it might now take 10 seconds. During that time, GPUs are processing data, and no other users can access those resources. This increased compute time per user significantly impacts overall resource requirements.
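
To make the arithmetic concrete, here is a back-of-the-envelope sketch. The rule of thumb that a dense transformer performs roughly 2 × parameters FLOPs per generated token is standard, but the model size, throughput, and token counts below are invented for illustration:

```python
# Back-of-the-envelope inference cost. Rule of thumb: a dense transformer
# performs ~2 * params FLOPs per generated token (one forward pass).
# Model size, throughput, and token counts are invented for illustration.

PARAMS = 70e9      # hypothetical 70B-parameter model
GPU_FLOPS = 1e15   # assumed effective throughput of one accelerator (FLOP/s)

def gpu_seconds(tokens_generated: int) -> float:
    """GPU-seconds needed to generate a given number of output tokens."""
    return 2 * PARAMS * tokens_generated / GPU_FLOPS

short_answer = gpu_seconds(200)       # a quick chatbot reply
long_reasoning = gpu_seconds(20_000)  # an extended chain-of-thought trace

print(f"short answer:   {short_answer:.2f} GPU-seconds")
print(f"long reasoning: {long_reasoning:.2f} GPU-seconds")
print(f"ratio: {long_reasoning / short_answer:.0f}x more compute per query")
```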

Not everyone has the necessary compute resources to handle these demands, and that’s a major factor. If DeepSeek or others open-source their models, we could gain better insights into the total compute resources required.

Chris Miller: Would you say this reflects a shift in the rationale for export controls? In our interview two years ago, we were thinking about AI progress primarily in terms of model training. Now, inference is an important driver of progress too. What does this imply for calibrating export controls going forward?

Lennart Heim: It’s a new paradigm, but not one that replaces the old approach — they coexist. We’ve seen trends like chain-of-thought reasoning, where models are asked to think step by step, and larger models tend to perform better at it.

I expect this pattern to continue. Bigger models may achieve better reasoning, though there could be a ceiling somewhere. In the semiconductor industry, transistors got smaller over time, but different techniques emerged to achieve that goal. In AI, too, we see overarching trends enabled by multiple paradigms.

I don’t think this complexity fundamentally challenges export controls. As long as pre-training remains important and deployment depends on compute resources, export controls still matter.

If new architectures emerge that don’t rely heavily on compute, that would be a bigger challenge, because if compute is no longer the main determinant of capabilities, current export controls become ineffective. But many, many parameters would need to change for that to happen. Regulators can reassess over time.

Jordan Schneider: Let’s emphasize that point about compute. If models, after training, take significantly longer to produce answers — whether it’s three minutes, 10 minutes, or even an hour — that extended “thinking time” consumes compute resources. Compared to older systems that responded in seconds, this shift means nations will need far more compute capacity to achieve productivity, national defense, or other goals. For now, inference scaling makes the case for export controls even stronger. Compute prowess will be key for any government wanting to excel in AI technology.

Lennart Heim: Exactly. This also means that the distinction between training and deployment will become increasingly fuzzy over time. We already use existing trained models to create synthetic data and give feedback to train new systems. Early AI systems like AlphaGo and AlphaZero employed an element of self-play, where the model played against itself. That is training and deployment occurring simultaneously.

We’re likely to see similar trends with large language models and AI agents. This makes it harder to maintain clear-cut categories, and compute efficiency will play an even larger role.

As AI capabilities improve, they’ll require fewer resources to achieve the same benchmarks. A model might now need 100 GPU hours for a task that once took 500 GPU hours. This efficiency is part of the broader technological trend.

It’s hard to frame a national security conversation around specific capabilities, because any given capability becomes easier to access over time. That is the reality that policymakers need to deal with.

Chris Miller: Is it generally true that the contours of the national security argument around export controls are shifting, given the focus on inference infrastructure and test-time compute? If it was all about training, regulators could say “We don’t want them to train this type of AI application.” But if it’s actually about whether there’s infrastructure to run a model that can produce a million different use cases, it becomes more about productivity and less about discrete national security use cases.

Lennart Heim: It depends. The reasoning behind export controls has evolved. In 2022, export control discussions didn’t really mention frontier AI. By 2023, they began addressing it, and this year’s revised export controls took it a step further.

For example, the revised controls now include high-bandwidth memory units, which are key for AI chips. Why is this significant? HBM is especially important for deployment rather than training. Training workloads are generally less memory-intensive compared to deployment, where attention mechanisms and similar processes require more memory.
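
To see why, consider the key-value (KV) cache a transformer must hold in memory while generating: every new token attends over all previous ones. Here is a rough sizing sketch, with hypothetical but typical dimensions for a large grouped-query-attention model:

```python
# Rough KV-cache sizing: during generation, every new token attends over the
# keys and values of all previous tokens, which must sit in GPU memory.
# Dimensions below are hypothetical but typical of a large grouped-query-
# attention transformer; this is one reason HBM capacity matters for serving.

N_LAYERS = 80
N_KV_HEADS = 8
HEAD_DIM = 128
BYTES_PER_VALUE = 2  # fp16/bf16

# Factor of 2 covers both keys and values.
bytes_per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_PER_VALUE

for context_len in (4_096, 32_768, 131_072):
    gb = bytes_per_token * context_len / 1e9
    print(f"{context_len:>7}-token context -> {gb:5.1f} GB of KV cache per sequence")
```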

Banning HBM, and thereby limiting companies like Huawei from equipping AI systems with HBM, likely has a greater impact on deployment than training. However, I don’t think this motivation is explicitly stated in the documents.

Jordan Schneider: To draw a parallel from semiconductor history, there was a big debate about RISC versus CISC architectures back in the ’80s and ’90s. Pat Gelsinger pushed x86 as the dominant architecture, arguing that software wouldn’t need to be super efficient because hardware would continue improving exponentially. Essentially, Moore’s Law would clean up inefficiencies in code.

Fast forward to today, and it seems like there are enough AI engineers finding creative ways to use compute that algorithmic innovations will expand to match the available compute. Engineers at places like Anthropic, DeepMind, and OpenAI are the first to play with these resources. Would you agree this is the trend we should expect?

Chris Miller: Yes, that sounds about right. If compute is available, we’ll find ways to use it. An economist might ask, “What’s the marginal benefit of an additional unit of compute?” Ideally, we want the most algorithmic bang for our buck with each unit of compute.

In the last few years, we’ve seen GPU shortages in certain market segments, indicating strong economic output from every GPU deployed — or at least that’s the assumption behind the investments. It’s uncertain whether this trend will persist in the long term.

The trajectory of Moore’s Law has historically been steady, but estimating improvements in algorithmic efficiency is much harder. There’s no reason to believe these improvements will follow a linear or predictable trend, so I’d remain agnostic about where we’re heading.

Lennart Heim: Even as compute efficiency improves, there’s still the question of how these breakthroughs are discovered. Are they serendipitous — like having a great idea in the shower — or do they emerge from systematically running hundreds of experiments?

Often, breakthroughs in compute efficiency come from large-scale experimental setups. Sometimes, these ideas are even inspired by the models themselves, like using a GPT model to develop the next iteration.

Leading AI companies have an ongoing internal competition for compute resources. With AI becoming commercialized, the competition intensifies because allocating more compute for research means less is available for deployment.

I’d be curious to see load graphs for these companies. Are experiments run at night when fewer people use ChatGPT? These are the types of strategies companies likely adopt when managing limited resources.

Jordan Schneider: To Chris’s point, as long as these systems are profitable and there’s value in increasing their intelligence, demand for them will persist. The smarter the systems, the more value they provide across sectors.

Looking at China, if export controls are working and TSMC can’t produce unlimited chips for Huawei, leaving the Chinese AI and cloud ecosystem with one-third of the capacity needed, what does that mean for research directions engineers in the PRC might take?

Chris Miller: Two things stood out to me this year.

First, rental prices for GPUs in China were reportedly lower than in the U.S., which is surprising in a supposedly GPU-constrained environment. This could suggest either that China isn’t actually GPU-constrained or that there’s lower demand than expected.

Second, Chinese big tech firms — ByteDance excluded — haven’t shown the same steady increase in capital expenditures on AI infrastructure as U.S. firms like Google or Microsoft. Charting capex trends from ChatGPT’s launch to now, Alibaba, Tencent, and Baidu don’t display the same commitment to scaling AI infrastructure.

Why might this be?

  1. Fear of chatbots saying something politically sensitive about Xi Jinping.

  2. Doubts about market demand for their products.

  3. Lingering caution from the 2019–2020 regulatory crackdown, making massive investments seem unwise.

But there does seem to be a striking difference between how Chinese big tech firms are responding to AI relative to U.S. big tech firms. I wonder what that tells us more generally about compute demand in China going forward.

Lennart Heim: China’s venture capital ecosystem is quite different from that of the United States. America’s sprawling VC system provides the risk capital needed to explore bold ideas, like building billion-dollar data centers or reactivating nuclear power plants.

Jordan Schneider: Exactly. In China, there’s less capital available for speculative investments. Investing tens of billions of dollars into cloud infrastructure for training AI models isn’t immediately profitable, so Chinese tech firms hesitate to do it. We recently translated two interviews with DeepSeek’s CEO that explain this in more detail.

There have been large, loss-leading investments in hardware-heavy sectors of the economy, but not many software-focused investments. DeepSeek, by operating more like a research organization and less like an Alibaba-style traditional tech firm, has taken a longer-term approach. It’s unclear whether smaller incumbents with sufficient capital can continue innovating or if progress will depend on stolen algorithmic IP.

Lennart, what’s your perspective on securing model weights and algorithmic IP as we head into 2025?

Lennart Heim: A lack of compute usually means fewer algorithmic insights, which causes the ecosystem to slow down. But stealing model weights is a shortcut. I’m referencing RAND’s recent report, Securing Model Weights, on this question.

Training a system may require tens of thousands of GPUs, but the result is just a file, a few gigabytes or terabytes in size. If someone accesses that file, they reduce or even bypass the need for GPUs entirely.
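
As a rough illustration of why the file is so small relative to the compute that produced it: a weights file is essentially the parameter count times the bytes per parameter. The model scales below are hypothetical:

```python
# Rough weights-file size: parameter count times bytes per parameter.
# Model scales below are hypothetical.

BYTES_PER_PARAM = {"fp32": 4, "fp16/bf16": 2, "int8": 1}

for params in (7e9, 70e9, 700e9):
    sizes = ", ".join(f"{prec}: {params * b / 1e9:,.0f} GB"
                      for prec, b in BYTES_PER_PARAM.items())
    print(f"{params / 1e9:>4.0f}B params -> {sizes}")
```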

New ideas are compute multipliers. Publication causes widespread diffusion of these multipliers, as we saw with the transformer architecture, for example.

But this changed about two years ago. In the name of security, OpenAI, DeepMind, and Anthropic no longer publish many detailed architecture papers. OpenAI hasn’t released their model architectures since GPT-4.

If you want to know what the architecture looks like, you have to go to the right parties in San Francisco and talk to the right people. Which is exactly the problem. You could walk out of these parties with huge compute efficiency multipliers.

These companies still mostly have normal corporate environments. But if we see AI as core to national security, these AI labs need to be secure against state actors. These companies will eventually need help from the U.S. government, but they also need to step up on their own, because this IP leakage completely undermines American export controls.

Chris Miller: How do we know we’re striking the right balance between securing important IP and fostering the free exchange of ideas that drives technological progress and economic growth? What’s the metric for assessing whether we’ve achieved that balance?

Lennart Heim: Right now, we’re living in the worst of both worlds. OpenAI and DeepMind aren’t going around sharing their research openly with other researchers, publishing on arXiv, or presenting at conferences like NeurIPS or ICML. They’re not diffusing knowledge widely to benefit everyone.

At the same time, their proprietary information is still vulnerable to hacking. So, instead of fostering diffusion within the U.S. ecosystem, we inadvertently enable adversaries or bad actors that are willing to use illicit measures to access this information. This is the worst-case scenario.

Clearly an open ecosystem is beneficial in many ways. That’s why some companies still open-source model weights and infrastructure — it helps push the entire U.S. ecosystem forward.

Assessing the ideal policy balance is hugely complex. There are many reports discussing the trade-offs of open-sourcing versus safeguarding. For now, though, it’s clear that we’re in a bad place — keeping knowledge from U.S. researchers while leaving it vulnerable to theft.

Jordan Schneider: Let me offer a counterargument. Developing algorithmic innovations for frontier AI models isn’t something that happens on an assembly line. The places that succeed most at this have cultivated a unique research culture and can attract top talent from around the world. That includes talent from China, which produces a huge share of advanced AI research and talent.

A highly classified, “skunkworks”-style approach could create two major downsides.

  1. From a talent perspective, it becomes harder to attract people with diverse backgrounds if they need security clearances to access cutting-edge research.

  2. Research in highly classified settings tends to be compartmentalized and siloed. In contrast, the open, collaborative environments in leading labs foster innovation by allowing researchers to share insights, compare experiments, and optimize resources.

Imposing rigid barriers could hinder internal collaboration within firms, making it harder for researchers to learn from each other or gain equitable access to resources like compute.

The Manhattan Project succeeded by isolating talent in the desert until they developed the atomic bomb. That’s not a model we can apply to OpenAI, Anthropic, or other AI labs. The internal openness that has allowed Western labs to thrive could be stifled by the kind of restrictions you’re suggesting.

Lennart Heim: Absolutely. I’m not arguing that security comes without costs. It’s important to consider where to put the walls. We already have some walls in the AI ecosystem — we call them companies. We could achieve a lot by strengthening those existing walls while maintaining openness within organizations.

If someone eventually decides that research must happen in fully classified environments, then of course that would slow down innovation.

For now, though, many measures could enhance security at relatively low cost while preserving research speed. The RAND report referenced earlier outlines the costs and methods of different security levels. Some security measures don’t come at any efficiency cost. Just starting with low-hanging fruit — measures that are inexpensive yet effective — could go a long way.

Jordan Schneider: I have two ideas on this front.

First, if we believe in the future of export controls and assume the U.S. and its allies will maintain significantly more compute capacity than China, it could be worthwhile for the labs or the National Science Foundation to incentivize research in areas where the U.S. is more likely to have sustainable advantages compared to China going forward.

Second, banking on these security measures seems like a poor long-term strategy for maintaining an edge in emerging tech. I mean, think about Salt Typhoon. The Chinese government has been able to intercept the phone calls of anyone they want at almost no cost. Yes, it’s possible to make eavesdropping harder, but I’m not sure any organization can secure all their secrets from China indefinitely.

[Figure from RAND’s Securing Model Weights report, p. 22]

Chris Miller: That raises the question of how to think about algorithmic improvements. Are they like recipes that can be easily copied? Or are they deeply embedded in tacit knowledge, making them hard to replicate even if you have the blueprints?

I’m not sure what the answer is, but replicability seems key to assessing how far to go with security measures.

Lennart Heim: You can draw an interesting connection here to the semiconductor industry. We’re all familiar with cases of intellectual property theft from ASML, the Dutch company building the most advanced machines for chip manufacturing. However, it’s not enough to simply steal the blueprints.

There’s a lot of tacit knowledge involved. For instance, when someone joins a semiconductor company, they learn from experienced technicians who show them how to use the machines. They go through test runs, refining their processes over time. This knowledge transfer isn’t written down — it’s learned by doing.

While this principle applies strongly to the semiconductor industry, it may be less relevant to AI because the field is much younger.

Recently, I’ve been thinking about whether there are still low-hanging fruits and new paradigms to explore. Pre-training has been scaled up significantly over time, and it’s becoming harder to find new ideas in that area. However, test-time compute is an emerging paradigm, and it might be easier to uncover insights there.

I expect academics, DeepSeek, and others to explore this paradigm, finding new algorithmic insights over the next year that will allow us to squeeze more capabilities out of AI models. Over time, progress might slow, but we could sustain it by increasing investment. That’s still an open empirical question.

Jordan Schneider: On that note, Lennart, what does it really mean to be compute-constrained?

Lennart Heim: I’ve been thinking about this more from the perspective of export controls. There’s often an expectation that once export controls are imposed, the targeted country will immediately lose the ability to train advanced AI models. That’s not quite accurate.

To evaluate the impact of export controls, it’s useful to consider both quantity and quality.

The quality argument revolves around cutting off access to advanced manufacturing equipment, like ASML’s extreme ultraviolet lithography machines. Without them, a country can’t produce the most advanced AI chips. For instance, while TSMC is working with 3-nanometer chips, an adversary might be stuck at 7 nanometers.

This results in weaker chips with fewer FLOPS (floating-point operations per second). Due to the exponential nature of technological improvement, the performance gap is often 4x, 5x, or 6x, rather than a simple 10% difference. Export controls exacerbate this gap over time.
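
For a rough sense of that gap, compare approximate public spec-sheet throughput figures for Nvidia accelerators across recent process generations. The numbers below are ballpark and vary by board variant; the point is the multiple between generations:

```python
# Approximate peak FP16/BF16 tensor throughput from public Nvidia spec sheets.
# Figures are ballpark and vary by board variant; the takeaway is the multiple
# between generations, not a 10% increment.

CHIPS = {
    "V100 (12nm, 2017)": 125,   # TFLOPS
    "A100 (7nm, 2020)": 312,
    "H100 (4nm, 2022)": 989,
}

base = CHIPS["V100 (12nm, 2017)"]
for name, tflops in CHIPS.items():
    print(f"{name}: {tflops:>4} TFLOPS ({tflops / base:.1f}x over V100)")
```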

The quantity argument is equally significant. Chip smuggling still happens, but access to large volumes of chips is much harder due to export controls and restrictions on semiconductor manufacturing equipment.

Being compute-constrained impacts the entire ecosystem. With fewer chips, fewer experiments can be run, leading to fewer insights. It also means fewer users can be supported. For example, instead of deploying a model to 10 million users, you might only support 1 million.

This has a cascading effect. Fewer users mean less data for training and less revenue from deployment. Lower revenue reduces the ability to invest in chips for the next training cycle, perpetuating the constraint.

Additionally, AI models increasingly assist engineers in conducting AI research and development. If I have 10x more compute than my competitor, I essentially have 10x more AI agents — or “employees” — working for me. This underscores how compute constraints can hobble an entire ecosystem.
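
A toy model makes this flywheel explicit. Every parameter below is invented; the point is only that a smaller chip stock compounds into a widening absolute gap each investment cycle:

```python
# Toy model of the compute flywheel: chips support users, users generate
# revenue, and reinvested revenue buys the next cycle's chips.
# Every parameter below is invented for illustration.

def simulate(initial_chips: float, cycles: int = 4,
             users_per_chip: float = 500,
             revenue_per_user: float = 20.0,
             chip_price: float = 30_000,
             reinvest_rate: float = 0.5) -> list[float]:
    """Return the chip stock at the start of each cycle."""
    chips = initial_chips
    history = [chips]
    for _ in range(cycles):
        users = chips * users_per_chip          # deployment capacity
        revenue = users * revenue_per_user      # revenue per cycle
        chips += reinvest_rate * revenue / chip_price  # reinvest in chips
        history.append(chips)
    return history

unconstrained = simulate(100_000)
constrained = simulate(33_000)  # roughly one-third the capacity

# In this linear toy the ratio stays fixed while the absolute gap compounds;
# in reality the constrained side also loses data, talent, and efficiency.
for cycle, (a, b) in enumerate(zip(unconstrained, constrained)):
    print(f"cycle {cycle}: {a:>9,.0f} vs {b:>8,.0f} chips (gap: {a - b:,.0f})")
```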

Chris Miller: That makes a lot of sense. My theory about the Chinese government’s response — and Jordan, let me know if this resonates — is that they seem less concerned with consumer applications of AI and more focused on using AI as a productive force.

Their strategy appears to prioritize robotics and industrial AI over consumer-facing applications. The hope is that limited compute resources, when deployed toward productive uses, will yield the desired returns.

The problem with this approach is that much of the learning from AI systems comes from consumer applications and enterprise solutions. Without a full ecosystem, their progress will likely be stunted. It’s like trying to balance on a one-legged stool.

Jordan Schneider: Chris, that’s an interesting observation. It’s a reasonable strategy for a country facing resource constraints, but it also highlights the limitations of being compute-constrained.

Chris Miller: Exactly. There’s also a political dimension to consider. In addition to being compute-constrained, China has spent the past five years cracking down on its leading tech companies. This has dampened their willingness to invest in consumer-facing AI products.

After all, a successful product could draw political scrutiny, which isn’t a safe place to be. That dynamic further limits the development of a robust AI ecosystem.

Lennart Heim: That’s a great point. The revenue you generate often determines the size of your next model. OpenAI’s success, for instance, has attracted venture capital and fueled further progress.

China’s state subsidies can offset some of this reliance on revenue. They can fund projects even without immediate returns, challenging the flywheel effect I described earlier.

Still, there are many less compute-intensive AI applications, like AI agents, that are being developed worldwide. These don’t require the same level of resources but still factor into national security concerns.

The key question is, what are we most worried about? For AGI or highly advanced AI agents, compute constraints will likely be a major factor. But China might already be leading in domains like computer vision.

The ideal balance between compute intensity and emerging risks remains an open empirical question. We’ll need to monitor how these dynamics evolve over time.

Middle East Expansions and Cloud Governance

Chris Miller: Another obvious implication of being compute-constrained is that you can’t export computing infrastructure. Perhaps that’s a good segue to discussing the Middle East.

Lennart Heim: Part of the compute constraint story, as you mentioned, is that if you need chips to meet domestic demand, you can’t export a surplus. If the PRC is barely able to meet its own internal demand — assuming that’s true, though we don’t have solid evidence yet — it’s clear that the U.S. and its allies are producing significantly more AI chips. This allows them to export chips, but there’s an ongoing debate about where and how these chips should be exported.

Existing export controls already cover countries like China, Russia, and others in the Country Group D5 category. However, there are also countries in Group D4, like the UAE and Saudi Arabia, which require export licenses for AI chips. These countries are increasingly ambitious in AI, and since early this year, the U.S. government has been grappling with whether and under what conditions to allow chips to be exported to them.

Export licenses offer flexibility. They can come with conditions — such as requiring buyers to adhere to specific guidelines — before granting access to AI chips. There’s clearly demand for these chips, and this debate will likely continue into the next year, as policies and the incoming administration determine where the line should be drawn.

Chris Miller: It’s been publicly reported that upcoming U.S. regulations might involve using American cloud providers or data center operators as gatekeepers for AI chip access in these countries. This approach would essentially make private companies the enforcers of usage guidelines.

Lennart Heim: That’s an intriguing approach. It creates a win-win scenario: these countries get access to AI chips, but under the supervision of U.S. cloud providers like Microsoft, which can monitor and safeguard their use.

It’s important to understand that export controls for AI chips differ from those for physical weapons. A weapon is controlled by whoever possesses it, but AI chips can be used remotely from anywhere. If a country faces compute constraints due to export controls, one solution is to use cloud computing services in other countries or build data centers abroad under shell companies.

Most AI engineers never see the physical clusters they use to train their systems. The data centers are simply wherever electricity and infrastructure are available. This makes it challenging to track chip usage.

There are three layers to this challenge:

  1. Where are the chips? This is the most basic question.

  2. Who is using the chips? Even if you know where they are, it’s hard to determine who is accessing them.

  3. What are they doing with the chips? Even if you know who is using them, you can’t always control or monitor the models they train.

U.S. cloud providers can help address the second layer by verifying customers through “know your customer” regimes. I’ve written about this in a paper titled Governing Through the Cloud during my time at the Centre for the Governance of AI. Cloud providers can track large-scale chip usage and ensure compliance, making them far more reliable gatekeepers than companies in the Middle East.
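
Here is a minimal sketch of what such a screen could look like, assuming a hypothetical per-customer GPU-hour threshold; a real KYC regime would also verify identity and end use, not just volume:

```python
# Hypothetical "know your customer" compute screen for a cloud provider,
# in the spirit of the Governing Through the Cloud proposal. The threshold
# and records are invented; real regimes would also verify identity and end use.

from collections import defaultdict

TRAINING_SCALE_GPU_HOURS = 50_000  # assumed review threshold per billing window

def flag_large_consumers(usage_records: list[dict]) -> dict[str, float]:
    """Aggregate GPU-hours per customer and return those above the threshold."""
    totals: dict[str, float] = defaultdict(float)
    for record in usage_records:
        totals[record["customer_id"]] += record["gpu_hours"]
    return {cid: hours for cid, hours in totals.items()
            if hours >= TRAINING_SCALE_GPU_HOURS}

records = [
    {"customer_id": "acme-research", "gpu_hours": 42_000},
    {"customer_id": "acme-research", "gpu_hours": 18_000},  # split across jobs
    {"customer_id": "small-startup", "gpu_hours": 900},
]
print(flag_large_consumers(records))  # {'acme-research': 60000.0}
```

Aggregating usage across jobs matters because, as with shell companies in hardware exports, a determined actor will split training-scale consumption into innocuous-looking pieces.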

Chris Miller: There’s broad agreement that no one should be allowed to build an AI cluster for nefarious purposes. But regulations seem to be taking this further, aiming to ensure long-term dominance of U.S. and allied infrastructure.

The idea is to not only set rules today but maintain the ability to enforce them in the future. This makes some people uncomfortable because it positions the U.S. and its allies as the long-term power brokers of AI infrastructure, potentially limiting the autonomy of other countries.

Lennart Heim: That’s a fair criticism, but I would frame it more positively. This is about ensuring responsible AI development. Depending on your geopolitical perspective, some might view it as the U.S. imposing its values, while others see it as necessary for safety and accountability.

Exporting chips isn’t the only option. Countries can be given access to cloud computing services instead. For example, if someone insists on acquiring physical chips, you could ask why they can’t simply use remote cloud services. But many countries want sovereign AI capabilities, with data center protection laws and other safeguards.

The ultimate goal should be to diffuse not only AI chips but also the infrastructure and practices for responsible AI development.

Jordan Schneider: This reminds me of a recent story that struck me. The American Battlefield Trust is opposing the construction of data centers near Manassas, a Civil War battlefield. It’s a tough dilemma — I want those data centers, but preserving historical sites is important too.

Intel’s Future and TSMC Troubles

Jordan Schneider: Speaking of sovereignty in AI, let’s discuss Intel. There’s been a lot of speculation about Pat Gelsinger’s departure and the board’s decision to prioritize product over foundry. Chris, what’s your take on this news?

Chris Miller: It’s a significant development and signals a major strategy shift for Intel, though the exact nature of that shift remains unclear.

There are several possible paths going forward:

  1. Intel could sell some of its design businesses and double down on being a foundry.

  2. It could do the opposite and focus on design while stepping back from foundry ambitions.

  3. It might just try to muddle through, continuing its current strategy until its next-generation manufacturing process proves itself.

None of these options are ideal compared to where expectations were two years ago.

Intel will present a tough challenge for the incoming administration. The company has already been awarded $6–8 billion through the CHIPS Act to build expensive manufacturing capacity, but there’s no guarantee it will succeed. Going forward, Intel will likely require significant capital from both the private and public sectors.

Jordan Schneider: This ties back to the fundamental pitch that Pat Gelsinger made during the CHIPS Act discussions — that America should have leading-edge foundry capacity within its borders.

This is a global industry, and the world would face severe consequences if Taiwan were invaded, regardless of U.S. manufacturing capacity. Taiwan is nominally an ally, and TSMC’s leadership should know better than to antagonize the U.S. government by selling leading-edge chips to Huawei, because Washington is the ultimate guarantor of Taiwan’s current status.

That said, it would certainly be preferable for the U.S. to have Intel emerge as a viable second supplier or even the best global foundry. But how much are you willing to pay for that? Even if you allocate another $50 billion or $100 billion, can you overcome the cultural and structural issues within Intel?

There’s no denying the enormous business challenges involved. Competing in this space has driven many companies out of the market over the past 20 years because it’s simply too hard. Chris, do you want to put a dollar figure on how much you’d be willing to raise taxes to fund a US-owned leading-edge foundry?

Chris Miller: You’re right — it’s not just about money. But it is partly about money because funding gives Intel the time it needs to demonstrate whether its processes will work.

Whether Intel succeeds or fails, it’s clear that if they only have 12 months, they’ll fail. They need 24 to 36 months to prove their capabilities. Money buys them that time.

The other variable is TSMC, which already has its first Arizona plant in early production. Public reports indicate that the yields at this plant are comparable to those in Taiwan, which is impressive given the negative publicity surrounding the Arizona plant in recent years.

TSMC is building a second plant in Arizona and has promised a third. If these efforts succeed, the need for an alternative US-based foundry diminishes because TSMC is effectively becoming that alternative.

The big question is how many GPUs for AI are being manufactured in Arizona. TSMC has publicly stated that 99% of the world’s AI accelerators are produced by them, and currently, that production is confined to Taiwan. Expanding this capability to Arizona would be a game-changer.

Lennart Heim: There has been public reporting that Nvidia plans to produce in the U.S. in the near future, which would be a positive development. But the broader question extends beyond logic dies. What about high-bandwidth memory? What about packaging? Where will those components be produced?

The strategic question is whether the U.S. should carve out a complete domestic supply chain for AI chips. Is it a strategic priority to have every part of the process — HBM, packaging, and more — onshore, or are we content with relying on external suppliers for certain elements?

Chris Miller: Intel has received commitments through the CHIPS Act, but those funds are contingent on them building new manufacturing capacity. The new leadership team at Intel might decide not to proceed with some of those plans.

This raises a critical question — if Intel doesn’t build those facilities, what happens to the allocated CHIPS Act funding? It’s important to note that Intel hasn’t received all the money; they’ve been promised financial assistance if they meet specific milestones.

This decision will likely land on the desk of the next administration, and they’ll need to assess whether additional private and public capital is necessary to ensure Intel’s competitiveness.

Jordan Schneider: Early on, policymakers should evaluate the trade-offs clearly. If we give $25 billion to America’s foundry, it might result in a 30% chance of competing with TSMC on a one-to-one basis by 2028. At $50 billion, maybe it’s a 40% chance. At $75 billion, perhaps it rises to 60%.

Even with massive investment, there’s a ceiling on how competitive Intel can become. The rationale for the initial $52 billion in CHIPS Act funding was compelling, but that was spread across many initiatives — not just frontier chip manufacturing.

For Intel to achieve parity with TSMC by 2028, you’d need to show how increased investment could meaningfully improve the odds. This is a challenge for Intel’s next CEO, the next commerce secretary, and whoever oversees the CHIPS Act moving forward.

Time is critical, and if Intel can’t make it work, we’re left relying on TSMC. That brings us back to the awkwardness of TSMC producing a significant number of chips for Huawei. We need to dive deeper into that story.

Lennart Heim: Great segue. The Huawei story broke a couple of months ago, and it highlights the challenges of enforcing export controls.

The basic premise of export controls is to prevent Chinese entities from producing advanced AI chips at foundries like TSMC. There are two main rules:

  1. If you’re on the Entity List, like Huawei, you can’t access TSMC’s advanced nodes.

  2. AI chips cannot be produced above certain performance thresholds.

TechInsights conducted a teardown of the Huawei Ascend 910B and found it was likely produced at TSMC’s 7-nanometer node. This violates both rules — the Entity List restriction and the AI chip performance threshold.

Shell companies and similar tactics make compliance tricky, but based on the available information, this should have been detected.

What’s even more concerning is TSMC’s role in this. If TSMC is producing an AI chip with a die size of 600 square millimeters — massive compared to smartphone chips — they should have raised red flags.

Any engineer can tell the difference here. There are probably structural issues at TSMC where the legal compliance team doesn't talk to the engineers.
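
For illustration, a first-pass screen of the kind a foundry compliance team could automate might look like the sketch below. The thresholds are invented, and actual export rules key on performance and interconnect metrics rather than die size alone:

```python
# Illustrative first-pass compliance screen for a foundry. Thresholds are
# invented; actual export rules key on performance and interconnect metrics,
# but a huge die on an advanced node is a telltale sign of an AI accelerator.

def needs_review(die_area_mm2: float, node_nm: int) -> bool:
    """Flag orders whose die size and process node suggest an AI accelerator."""
    return node_nm <= 7 and die_area_mm2 >= 400

orders = [
    ("typical smartphone SoC", 110, 7),
    ("suspected AI accelerator", 600, 7),  # roughly Ascend 910B-sized
]
for name, area, node in orders:
    print(f"{name}: review needed = {needs_review(area, node)}")
```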

But on the other side are the design teams. It’s not like you send them something and then stop talking. This is a co-design process, and there was clearly ongoing communication throughout. Yet they still produced the logic dies for the Ascend 910B, although it’s an open question whether all of these chips were produced at TSMC.

But TSMC’s involvement definitely undermines export controls. One story you can spin is that Huawei’s continued reliance on TSMC is a sign of production issues at SMIC. We definitely need more insight here.

Intelligence Failures and Government Follow-on 本末倒置 (Putting the Cart Before the Horse)

Jordan Schneider: Speaking of shell companies, what was the U.S. intelligence community doing? The fact that this information had to come from TechInsights is mind-boggling. I can’t imagine there are many higher priorities than understanding where Huawei is manufacturing its chips. For this to come to light via TechInsights and Reuters feels like an absurd sequence of events. It highlights a glaring gap in what the U.S. is doing to understand this space.

Lennart Heim: We’ve seen this before, like when Huawei’s advanced phone surfaced during Raimondo’s visit to China. There’s clearly more that needs to be done. The intelligence community plays a role, but think tanks, nonprofits, and even private individuals can contribute to filling this gap.

For example, open-source research can be incredibly powerful. People can use Google Maps to identify fabs or check Chinese eBay for listings of H100 chips. There’s a lot you can do with the resources available, and nonprofits can play a critical role in providing this type of information.

The gap in knowledge here is larger than I initially expected, and there’s a lot of room for improvement.

Chris Miller: This also points to the broader challenges of collecting intelligence on economic and technological issues, which the U.S. has historically struggled with.

It’s also worth asking what information the Chinese Ministry of State Security (MSS) is gathering about technological advances in other countries. What conclusions are they drawing? If we’re struggling with this, I wonder what kind of semiconductor and AI-related briefings are landing on Xi Jinping’s desk. Do those briefings align with reality, or are they equally flawed?

Jordan Schneider: It sounds like the solution is just to fund ChinaTalk!

On the topic of the MSS, the Center for Security and Emerging Technology (CSET) published reports on China’s systems for monitoring foreign science (2021) and open-source intelligence operations (2022). But when you read Department of Justice indictments against people caught doing this work, it often seems amateurish and quota-driven.

I don’t have a clearance and can’t say for sure, but it makes me wonder — if the U.S. is struggling to figure out Huawei’s chips, maybe the Chinese are equally bad at uncovering OpenAI’s secrets. This might reflect bureaucratic challenges on both sides, such as bad incentives, underfunded talent pools, and difficulty competing with the private sector.

Lennart Heim: That’s true. But there’s a broader issue here. I come from a technical background — electrical engineering — and transitioning into the policy world has been eye-opening.

One thing I’ve realized is that there are often no “adults in the room,” especially on highly technical issues. In domains like AI and semiconductors, there simply aren’t enough people who deeply understand the technology.

Getting these experts into government is a huge challenge because public-sector salaries can’t compete with private-sector ones. It’s not about hiring international experts — it’s about bringing in people who know these technologies inside and out. They need to be aware of the technical nuances to even track these developments properly.

For example, I’ve used this as an interview question — if you were China, how would you circumvent export controls? Surprisingly few people mention cloud computing. Most assume physical products and locations matter most, but it’s really about compute. It doesn’t matter if the H100 chip is in Europe — you care about how it’s used.

These types of insights require a technical mindset, and we need more of these brains in D.C. — in think tanks, government, and the IC. We’re still in the early days of implementing export controls, and the more technical expertise we bring in, the better we’ll get.

Many of the hiccups we’ve seen so far can be traced back to a lack of technical knowledge and capacity to address these issues effectively.

Jordan Schneider: We need to diffuse technical talent into government, but we also need to diffuse AI into the broader economy. Lennart, how should that happen?

Lennart Heim: Diffusion is a big topic. Earlier, we touched on where data centers should be built — Microsoft expanding abroad is one form of diffusion. Another aspect involves balancing protection, such as export controls, with promotion. These two strategies should go hand in hand.

Diffusing AI has several benefits. It can be good for the world and can also counter the development of alternative AI ecosystems, like those in the PRC. From a national security perspective, it’s better to have American-led AI chips, data centers, and technologies spread globally.

That raises an important question — as the gap between AI models narrows, with China catching up and smaller models improving, are models really the key differentiator anymore? From a diffusion standpoint, what should we focus on if models aren’t the most “sticky” element?

Take GitHub as an example. Copilot originally used OpenAI’s Codex to help users write code, but GitHub recently added Anthropic’s Claude as an option. This shows how easily models can be swapped with a simple API switch. Even Microsoft acknowledges this flexibility, and it’s clear that models may not provide the long-term competitive edge we assume.

If models aren’t the differentiator, what is sticky? What should we aim to diffuse, and how should we go about it?

Chris Miller: The interesting question is which business models will prove to be sticky. Twenty years ago, I wouldn’t have guessed that we’d end up with just three global cloud providers dominant outside of China and parts of Southeast Asia.

Those business models have extraordinary stickiness due to economies of scale. The question now is — what will be the AI equivalent of that? Where will the deep moats and large economies of scale emerge?

These are the assets you want in your country, not in others. They provide enduring influence and advantages. While we don’t yet know how AI will be monetized, this is a space worth watching closely.

Lennart Heim: That’s a great point. It also ties into the idea of building on existing infrastructure. Take Microsoft Word — it’s incredibly sticky. Whether you love it or hate it, most organizations rarely switch away from it.

For example, the British government debated moving away from Microsoft Office for years. The fact that this debate even exists shows how difficult it is to dislodge these systems.

Maybe the stickiness lies in integrating AI into tools like Word, with features like Copilot calling different models. Or perhaps it’s in development infrastructure.

We’ve focused a lot on protecting AI technology, but we haven’t thought enough about promoting and diffusing it. This includes identifying sticky infrastructure and understanding how to win the AI ecosystem, not just by building the best-performing models but by embedding AI into tools and workflows.

Chris Miller: This brings us back to the Middle East and the tension between export controls and economies of scale. If economies of scale are crucial, you want your companies to expand globally as soon as possible.

That raises a question: does this mean relaxing export controls on infrastructure, or do you maintain strict control? Balancing the need for control with the benefits of scaling up globally is a delicate but important challenge.

Lennart Heim: What about smartphones? AI integration into smartphones seems like a big deal. For example, Apple has started using OpenAI models for some tasks but is also developing its own. At some point, I expect Apple to ditch external models entirely.

Interestingly, Apple is also moving away from Nvidia for certain AI tasks, developing its own AI systems instead. With millions of MacBooks and iPhones in users’ hands, Apple could quickly scale its AI.

This shift toward consumer applications — beyond chatbots — will define the next phase of AI. We’ll see if these applications prove genuinely useful. For now, feedback on Apple’s recent AI updates has been underwhelming, but that could change next year.

If Apple’s approach takes off, could it define who wins in AI?

Jordan Schneider: Let me take this from a different angle. AI matters because it drives productivity growth, and that’s what we should be optimizing for.

I trust that companies like Apple, Nvidia, and OpenAI will continue improving models and hardware. My concern is that regulatory barriers will block people from reaping the productivity benefits.

For example, teacher unions might resist AI in classrooms, or doctors might oppose AI in operating rooms. Every technological revolution has brought workplace displacement, but history shows that these changes leave humanity better off in the long run — more productive and satisfied.

The next few years will see political and economic fights between new entrants trying to deploy AI and labor forces pushing back, especially through regulation. These battles will determine how AI transforms industries.

Chris Miller: Agreed. Beyond the firm-versus-labor dynamic, there’s also a competition between incumbent firms and new entrants. This varies by industry but is equally important.

Then there’s the question of which political system — ours or China’s — is better suited to harness innovation rather than obstruct it. You could make arguments for either.

Jordan Schneider: Take Trump, for example. On one hand, he’s concerned about inflation and unemployment; on the other, he has backed dockworkers in opposing port automation.

Ultimately, I don’t think Trump himself will play a huge role in these decisions. Instead, it’s the diffuse network of organizations — standard-setting bodies, school boards, and others — that will shape the regulatory landscape. Culture also matters here. Discussions about AI’s risks — like safety concerns and job loss — have made it seem more frightening than it should be.

These risks are real, but they need to be balanced against the benefits of technological progress. Right now, the negative cultural conversation about AI could influence these diffusion debates.

Xi Jinping might be even more worried about unemployment than Trump. But some parts of China’s non-state-owned economy are probably more willing to experiment and adapt new workflows.

The U.S. may be too comfortable to navigate the disruptions needed to fully harness AI’s potential. This complacency could slow progress compared to China’s willingness to experiment aggressively.

Chris, what do you think about a Manhattan Project for AI?

Chris Miller: The term “Manhattan Project” for AI isn’t quite right. The Manhattan Project was secretive, time-limited, and narrowly focused. What we need for AI is long-term diffusion across society.

The better analogy is the decades-long technological race with the Soviet Union, marked by broad R&D investments, aligned incentives, and breaking barriers to innovation. This kind of sustained effort is what we need for AI.

Lennart Heim: That requires projects that focus on onshoring more fabs and data centers — like CHIPS Act 2.0. It also requires energy and permitting reform.

Compute is key, and building more data centers is a good starting point, but we also need to secure what we build. This includes data centers, model weights, and algorithmic insights. If we’re investing in these capabilities, we can’t let them be easily stolen. Innovation and security must go hand in hand.

Jordan Schneider: One thing I’d add is the importance of immigration reform. The Manhattan Project had over 40% foreign-born scientists. If we want to replicate that success, we need to attract the world’s best talent.

This is a low-cost, high-impact solution to drive growth, smarter models, better data centers, and more productivity. It’s crucial to have the best minds working in the U.S. for American companies.

Lennart Heim: Absolutely. Many of the top researchers in existing AI labs are foreign-born. Speaking personally as a recent immigrant, I’d love to contribute to this effort. If we’re doing this, let’s do it right.

Reading Recommendations

Jordan Schneider: Let’s close with some holiday reading recommendations. Lennart, what was your favorite report of the year?

Lennart Heim: Sam Winter-Levy at Carnegie just published a report called The AI Export Dilemma: Three Competing Visions for U.S. Strategy. It touches on many of the topics we discussed, like how we should approach diffusion, export controls, and swing countries. It has some good ideas.

Jordan Schneider: I’d like to recommend The Gunpowder Age: China, Military Innovation, and the Rise of the West in World History by Tonio Andrade. We’ll be doing a show on it in Q1 2025. It’s an incredibly fun book and addresses a real deficit in Chinese military history. The author dives deep into Chinese sources and frames the Great Divergence through the lens of gunpowder, cannons, and guns.

He uses fascinating case studies, like battles between the Ming and Qing against the Portuguese, British, and Russians, to benchmark China’s scientific innovation during the Industrial Revolution.

The book argues — similar to Yasheng Huang’s perspective from our epic two-hour summer podcast — that the divergence between China and the West happened much later than commonly believed. Into the 1500s and 1600s, China was still on par with the West in military innovation, including boat-building, cannon-making, and gun-making.

The writing is full of flair, which is rare in historical works. It’s military history, technology, and China vs. the rest of the world — all my sweet spots in one book.

What’s your recommendation, Chris?

Chris Miller: For some more deep history, I recommend A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains by Max Bennett.

It’s a history of brains and how they’ve evolved over millions of years, starting with the first neurons. The author is an AI expert who became fascinated by the evolution of intelligence and ended up becoming a neuroscience expert in the process.

The book is extraordinary — more fun than I expected — and thought-provoking in how it explores the history of thinking across all kinds of beings, including humans.

Cycling Across Europe, Part 5: Are Germans Cold? (跨越欧洲骑行记之五——冷漠的德国人?)

When it comes to the character of different European peoples, the Chinese-language world has a saying that “Germans are cold.” For many years that was my impression of Germans too; tracing it to its source, it came mostly from reading and hearsay. In recent years, through long-distance hiking and cycling across Europe, I have had the chance to get to know ordinary Germans up close, and I gradually realized that this impression doesn’t match reality. It is, by and large, a prejudice born of stereotype.

Read more

Pony.ai’s Robotaxis and the Long Road Ahead

Robotaxi observers worldwide believe one thing about the industry in China: it’s moving aggressively. The NYT dubbed Wuhan “the world’s largest experiment in driverless cars,” thanks to significant government support in the form of both permissive testing regulations and data collection.

The hottest Chinese startups are going public in the US, too. Most notably, Pony.ai 小马智行 made its Nasdaq debut the day before Thanksgiving. The company recently said it would expand its robotaxi fleet from 250 to 1,000 in 2025, significantly growing its commercialization prospects.

Pony.ai’s IPO valued the company at $5.25 billion. Not peanuts, but less than its $8.5 billion Series D valuation. Compared to its heyday, Pony is being tested by a harsh market reality and may face stringent international regulatory barriers as it tries to bring its technology abroad. To understand the future of Chinese AVs, ChinaTalk dove deep into the industry’s history and interviewed both cofounders of Pony.ai. We get into:

  • Pony.ai’s rise to China’s leading robotaxi company

  • How broader changes in the robotaxi industry and Chinese companies’ fundraising environment affect Pony.ai’s IPO

  • The challenges of commercialization for Pony.ai

  • Pony.ai’s plans for overseas expansion amid international hesitance to accept Chinese AVs

Past: The Top Horse in the Chinese AV Race

In late 2016, James Peng 彭军 and Tiancheng Lou 楼天城 left their jobs at Baidu’s autonomous driving department to found Pony.ai, a company that develops Level 4 autonomous driving technology. L4 refers to the stage of automation where a vehicle can drive itself without a human driver within a defined operational domain.

The self-driving industry boomed during those years. Hundreds of startups were founded across a complex web of transportation systems, and autonomous driving was largely believed to be the first momentous application of artificial intelligence in the pre-GPT era. Uber started its self-driving unit in 2015 and acquired Otto, a self-driving truck startup, in 2016. The same year, Google’s self-driving unit, where Lou used to work, became Waymo and publicly demonstrated its technology. Almost every major automaker announced partnerships to develop these technologies or invested in a startup.

In China, Baidu started investing in autonomous vehicle research in 2013 and began the Apollo project to develop its own driverless vehicles in 2017. Many of the big names in China’s robotaxi industry came from Baidu, including the founders of Pony.ai and WeRide.

Thanks to the strong technical backgrounds of Pony’s founders, China’s biggest investors were immediately interested. HongShan 红杉中国 (formerly Sequoia China) led Pony’s seed round in 2017. Then in 2020, Toyota participated in Pony’s Series B and later became a key partner in Pony’s attempts to commercialize its robotaxi tech: Pony develops self-driving software and hardware, and Toyota provides the vehicles.

Pony also expanded quickly in its first five years. In 2018, Pony became the first company in China to launch a public-facing robotaxi service (i.e., fully autonomous vehicles with safety drivers) regularly operating in Guangzhou, while obtaining a permit to test in Beijing. Around the same time, the company started to explore robotrucks and established a trucking division in 2020. In 2021, Pony began to remove safety drivers in some of its robotaxis in Guangzhou.

At the end of 2020, Pony was valued at $5.3 billion and had raised $2.5 billion in Series C funding thanks to investments from sovereign wealth funds such as the Ontario Teachers’ Pension Plan and the Brunei Investment Agency. By June 2021, the company was on the verge of initiating an IPO, but launch plans came to a grinding halt when the SEC asked for a “pause” on U.S. IPOs of Chinese companies.

That marked the beginning of a downturn for many USD venture funds in China and left Pony in an awkward situation: Autonomous vehicles are capital-intensive and R&D-driven, and the commercialization process had only just started. Given the company’s high valuation, there were doubts about whether Pony could drum up more interest from private funds without an IPO.

But Pony wasn’t horsing around — it closed a $1.1 billion Series D at a valuation of $8.5 billion. In October 2023, amid a bonanza of Middle Eastern investments in Chinese EV and AV companies, Pony secured another $100 million from the financial wing of NEOM, Saudi Arabia’s urban desert megaproject.

Present: Pitched Promises Meet Commercialization Challenges

In its IPO filing, Pony disclosed the current scale of its operations. And these numbers are modest, to say the least.

Pony has a fleet of around 250 robotaxis operating across four tier-one cities in China: Beijing, Guangzhou, Shenzhen, and Shanghai. It’s charging fares for fully driverless rides in the first three. During the first half of 2024, each fully driverless robotaxi received an average of 15 orders per day.

By comparison, Waymo currently operates a fleet of over 700 vehicles, which complete more than 150,000 rides every week in metro Phoenix, San Francisco, and Los Angeles. That works out to roughly 30 rides per vehicle per day, about double Pony’s utilization. Baidu has a fleet of 500 vehicles in Wuhan alone.

In an interview, Tiancheng Lou told me that only three companies have achieved L4 self-driving: Waymo, Baidu, and Pony.

AV companies are facing a tough reality in 2024. General Motors just axed funding for Cruise, the AV startup it acquired nearly a decade ago. Cruise had been burning through money, and after a major accident involving one of its fully autonomous vehicles, it had retreated to testing robotaxis slowly, one city at a time, without offering public-facing services. Other automakers have been similarly unable to pony up the necessary cash for AV development.

For Pony.ai, there is an additional level of complexity, because unlike Apollo, Waymo, and even Amazon-owned Zoox, Pony is not backed by a major tech giant. It needs funding more urgently, perhaps, than any other AV company.

“It is my disadvantage,” Lou told me. “I have to wait until the cost structure is stable to add a car. But that also means that I have to do well.”

By “cost structure,” Lou was referring to per-vehicle operating margin: the difference between the passenger fares a vehicle collects and its per-vehicle cost, which includes maintenance, research, and manufacturing. Lou and Peng were optimistic that this margin would turn positive in 2025. In other words, adding a new vehicle to the robotaxi fleet would no longer lose the company money.
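To make the arithmetic concrete, here is a minimal sketch of that calculation. The function name, the $5 fare, and the $200 daily cost are all hypothetical assumptions; the only figure taken from the filing is the 15 orders per day.

```python
# A minimal sketch of the per-vehicle margin Lou describes. All names and
# numbers here are hypothetical; Pony does not disclose per-vehicle economics.

def per_vehicle_margin(fare_per_order: float,
                       orders_per_day: float,
                       daily_cost: float) -> float:
    """Daily operating margin for one robotaxi: fares collected minus
    allocated per-vehicle cost (maintenance, research, manufacturing)."""
    return fare_per_order * orders_per_day - daily_cost

# Hypothetical example: a $5 average fare and the 15 orders per day from
# Pony's IPO filing, against an assumed $200/day in allocated costs.
margin = per_vehicle_margin(fare_per_order=5.0, orders_per_day=15, daily_cost=200.0)
print(f"Daily margin per vehicle: {margin:+.2f} USD")  # -125.00 USD: adding a car loses money
```

Under those assumed numbers, the margin is deeply negative; it flips positive only as fares rise, utilization climbs, or per-vehicle costs fall, which is what Lou means by waiting for the cost structure to stabilize before adding cars.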

But right now, Pony.ai is still a money-losing business: While revenue rose from $68 million in 2022 to $71 million in 2023, net loss attributable to the company was $148 million and $124 million in these two years respectively. R&D still accounts for the highest percentage of Pony’s operating expenses, adding up to over $123 million in 2023.

The good news is that losses have begun to narrow over time. But Pony’s IPO filing promise — that robotaxis would be the company’s main source of revenue — is still far from becoming a reality.

Pony has about 190 robotrucks, which generated 73% of its revenue over the first six months of 2024. Much of this new revenue came from transportation fees collected by Cyantrain, a joint venture founded by Pony and Sinotrans, for freight orders fulfilled by robotrucks.

Lou told me that it’s harder to scale robotrucks than robotaxis. The hardest part about robotaxis, he said, is the technology: as long as the technology is mature enough for vehicles to drive fully autonomously, it’s not hard to build tens of thousands of vehicles with that capacity. Trucks, however, are larger and faster, so there is a higher safety standard to meet.

Another way to diversify revenue is by selling so-called L2++ technology to OEMs, which Pony started doing in 2022. L2++ refers to driver-assistance technology that supports human drivers rather than replacing them.

Other Chinese self-driving startups are choosing this road as well. WeRide went public in October but sold only 3 robotaxis and 19 robobuses in 2023. Revenue from these products declined by almost half between 2021 and 2023, from 101 million yuan to 54 million yuan. And even these numbers are padded by government contracts (“local transportation service providers”) rather than public-facing, fared robotaxi services. Meanwhile, WeRide’s service revenue (e.g., L2 to L4 technical support and R&D services) increased from 36 million yuan to 347 million yuan over the same period.

Regardless, Pony clearly benefits from a Nasdaq IPO: US investors tend to be more tolerant of the intense time and capital requirements of self-driving development. Several investors told me that they didn’t expect a newly minted tech IPO to be profitable.

But investors won’t wait forever — eventually, AV shareholders will push for a path to profitability as proof that they bet on the right horse.

Future: Overseas Expansion vs International Regulation

Pony is already the leading robotaxi company in China, with operations in four tier-one cities, but questions remain about the company’s post-IPO international expansion plan. Pony now has research centers in Silicon Valley and Luxembourg, and potential operations in Hong Kong, Singapore, South Korea, Saudi Arabia, the UAE, and Luxembourg.

“We have set up a layout of operations in these places,” James Peng said in an interview. “The ‘layout’ doesn’t necessarily mean that the cars are already there. It’s more about technological partnerships or selling parts. In some of these places, the cars are still on the way, but eventually, we will have them.”

The market is still very new, so Pony is still adjusting its overseas investments, Peng added.

Pony is also partnering with Uber to offer driverless car services in overseas markets, although Uber has also inked cooperation agreements with Waymo, Cruise, and Wayve. Before Pony’s IPO, Bloomberg reported that Uber was in talks to invest in the startup’s offering.

Pony doesn’t have any immediate plans to expand operations to the US. California suspended Pony’s driverless testing permit in the fall of 2021 after a reported collision in Fremont, a single-vehicle incident in which the car hit a street sign. (The state’s Department of Motor Vehicles also temporarily revoked Pony’s permit to test vehicles with a driver in 2022, alleging that the company had failed to monitor the driving records of its safety drivers; the permit was reinstated at the end of 2022.)

“We are mostly focusing on China, because there is a large enough market, sufficient demand, and policymakers are supportive,” Peng told me.

It is not easy to compare self-driving regulations between the US and China. I’ve heard US transportation officials say that the Chinese government is more proactive in regulating self-driving tech. To me, the main difference is that the Chinese government has adopted a more top-down approach to regulating autonomous vehicles. The first set of major guidelines for testing robotaxis on public roads was released in 2018.

Then in November 2023, China’s Ministry of Industry and Information Technology and three other departments jointly issued the “Notice on Carrying Out Pilot Work for the Access and Road Usage of Intelligent Connected Vehicles.” The notice sets up pilot programs for market access and on-road use of intelligent connected vehicles equipped with L3 (where a driver must still remain in the vehicle) and L4 autonomous driving systems within designated areas.

More than 30 Chinese cities and provinces have released their own guidelines and permitting schemes for testing and operating robotaxis. Some are more supportive than others. Yet in every city, robotaxis can still only operate on designated public roads, which could hamper attempts to scale.

Wuhan, sometimes called the “robotaxi city of China,” is arguably also the largest testing ground for robotaxis in the world: its designated testing area has reached about 3,000 square kilometers (about 1,160 square miles). Even so, Baidu has yet to reach its goal of deploying 1,000 robotaxis in the city by the end of 2024. (The consensus thus far is that at least 1,000 fully autonomous robotaxis are needed in a city to achieve scalable commercialization.) An expert from Baidu told Huxiu that the peak of robotaxi commercialization will arrive between 2028 and 2030.

In both the US and China, “the policies are more advanced than technological development,” said Lou.

Source: Pony.ai
1. In its IPO filing, Pony said it established a joint venture with Toyota, where the latter would supply vehicles as a fleet company. This is similar to Waymo’s partnership with Jaguar. Toyota is the biggest shareholder of Pony.ai.
