AI Geopolitics in the Age of Test-Time Compute with Chris Miller + Lennart Heim

Does America need a Manhattan Project for AI? Will espionage make export controls obsolete? How can the U.S. foster an open innovation ecosystem without bleeding too much intellectual property?

To discuss, ChinaTalk interviewed Lennart Heim, an information scientist at RAND, and Chris Miller, author of Chip War.

We get into…

  • The growing importance of inference scaling and what it means for export controls,

  • Regulatory oversights that have allowed China to narrow the gap in AI capabilities,

  • China’s best options for keeping up in a low-compute environment,

  • Methods to secure model weights and associated tradeoffs,

  • Partnerships in the Middle East and the tension between export controls and economies of scale,

  • Whether autocracies are better suited for facilitating AI diffusion.

Listen on Spotify, Apple Podcasts, or your favorite podcast app.


Compute Domination and the Worst of All Worlds

Jordan Schneider: Let’s start with algorithms. Lennart, what’s your take on what DeepSeek’s models do and don’t mean for US-China AI competition, and maybe more broadly, what scaling on inference means for export controls?

Lennart Heim: To some degree, many were taken by surprise because, basically, what we’ve seen this year is that the gap between U.S. and Chinese AI models has been narrowing slightly. Of course, this always depends on how you measure it. Benchmarks are the best way to assess this, though whether they’re the most accurate success metric is a separate discussion we can have later.

It’s wrong to conclude that export controls aren’t working just because DeepSeek has developed a model that’s as good or nearly as good as OpenAI’s. That conclusion would be mistaken for two reasons.

First, export controls on AI chips were only implemented in 2022, and the initial parameters were flawed. Nvidia responded by creating the A800 and H800, chips that were just as effective as the restricted U.S. chips. This oversight cost us a year until the rules were updated in October 2023. DeepSeek, meanwhile, reportedly acquired 20,000 A100 chips before export controls were tightened and may have also obtained a number of H800s. These are still powerful chips, even if they’re not the latest. Because they have such a large stockpile, they’ll remain competitive for the foreseeable future.

Second, export controls don’t immediately stop the training of advanced AI systems. Instead, they influence the long-term availability of AI chips. For now, if someone needs sufficient chips to train a competitive AI system, they can likely still access them. However, as scaling continues, future systems may require millions of chips for pre-training, potentially widening the gap again.

For current systems, which use around 20,000 chips, we shouldn’t expect immediate impacts from export controls, especially given their short duration so far. The real question might be whether these controls affect the deployment phase. If Chinese users start extensively interacting with these systems, do they have enough compute resources to deploy them at scale? That remains to be seen.

Jordan Schneider: Let’s underline this. Inference scaling has the potential to exponentially increase compute demands. Can you explain why?

Lennart Heim: Yes. Once we have a well-trained system — developed with a significant number of chips — we can enhance its efficiency with techniques like reinforcement learning. This improves the system’s reasoning capabilities.

What do we mean by reasoning? Essentially, the model generates more outputs or tokens, which demands more compute power. For instance, if a model previously responded to queries in one second, it might now take 10 seconds. During that time, GPUs are processing data, and no other users can access those resources. This increased compute time per user significantly impacts overall resource requirements.
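The arithmetic behind this can be sketched with a few illustrative numbers. The 1-second and 10-second response times come from the example above; the cluster size is an assumption chosen purely for illustration:

```python
# Back-of-the-envelope sketch of how longer "thinking time" per query
# shrinks serving capacity. Cluster size is an illustrative assumption.

def queries_per_hour(num_gpus: int, gpu_seconds_per_query: float) -> float:
    """Total queries a cluster can serve per hour, assuming each query
    occupies one GPU exclusively for gpu_seconds_per_query seconds."""
    return num_gpus * 3600 / gpu_seconds_per_query

# A hypothetical 1,000-GPU cluster serving fast, non-reasoning responses (~1 s each):
fast = queries_per_hour(1000, 1.0)    # 3,600,000 queries/hour

# The same cluster when each query "reasons" for ~10 s of GPU time:
slow = queries_per_hour(1000, 10.0)   # 360,000 queries/hour

print(f"capacity drops by {fast / slow:.0f}x")  # prints "capacity drops by 10x"
```

The point is that a 10x increase in per-query compute translates directly into a 10x cut in the number of users a fixed chip stock can serve, which is why deployment-side compute starts to matter for export controls.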

Not everyone has the necessary compute resources to handle these demands, and that’s a major factor. If DeepSeek or others open-source their models, we could gain better insights into the total compute resources required.

Chris Miller: Would you say this reflects a shift in the rationale for export controls? In our interview two years ago, we were thinking about AI progress primarily in terms of model training. Now, inference is an important driver of progress too. What does this imply for calibrating export controls going forward?

Lennart Heim: It’s a new paradigm, but not one that replaces the old approach — they coexist. We’ve seen trends like chain-of-thought reasoning, where models are asked to think step by step, and larger models tend to perform better at it.

I expect this pattern to continue. Bigger models may achieve better reasoning, though there could be a ceiling somewhere. In the semiconductor industry, transistors kept getting smaller over time, but the techniques used to achieve that goal changed. In AI, too, we observe overarching trends enabled by multiple paradigms.

I don’t think this complexity fundamentally challenges export controls. As long as pre-training remains important and deployment depends on compute resources, export controls still matter.

If new architectures emerge that don’t rely heavily on compute, that would be a bigger challenge, because if compute is no longer the main determinant of capabilities, current export controls become ineffective. But many, many parameters would need to change for that to happen. Regulators can reassess over time.

Jordan Schneider: Let’s emphasize that point about compute. If models, after training, take significantly longer to produce answers — whether it’s three minutes, 10 minutes, or even an hour — that extended “thinking time” consumes compute resources. Compared to older systems that responded in seconds, this shift means nations will need far more compute capacity to achieve productivity, national defense, or other goals. For now, inference scaling makes the case for export controls even stronger. Compute prowess will be key for any government wanting to excel in AI technology.

Lennart Heim: Exactly. This also means that the distinction between training and deployment will become increasingly fuzzy over time. We already use existing trained models to create synthetic data and give feedback to train new systems. Early AI systems like AlphaGo and AlphaZero employed an element of self-play, where the model played against itself. That is training and deployment occurring simultaneously.

We’re likely to see similar trends with large language models and AI agents. This makes it harder to maintain clear-cut categories, and compute efficiency will play an even larger role.

As AI capabilities improve, they’ll require fewer resources to achieve the same benchmarks. A model might now need 100 GPU hours for a task that once took 500 GPU hours. This efficiency is part of the broader technological trend.

It’s hard to frame a national security conversation around specific capabilities, because any given capability becomes easier to access over time. That is the reality that policymakers need to deal with.

Chris Miller: Is it generally true that the contours of the national security argument around export controls are shifting, given the focus on inference infrastructure and test-time compute? If it was all about training, regulators could say “We don’t want them to train this type of AI application.” But if it’s actually about whether there’s infrastructure to run a model that can produce a million different use cases, it becomes more about productivity and less about discrete national security use cases.

Lennart Heim: It depends. The reasoning behind export controls has evolved. In 2022, export control discussions didn’t really mention frontier AI. By 2023, they began addressing it, and this year’s revised export controls took it a step further.

For example, the revised controls now include high-bandwidth memory units, which are key for AI chips. Why is this significant? HBM is especially important for deployment rather than training. Training workloads are generally less memory-intensive compared to deployment, where attention mechanisms and similar processes require more memory.

Banning HBM, and thereby limiting companies like Huawei from equipping AI systems with HBM, likely has a greater impact on deployment than training. However, I don’t think this motivation is explicitly stated in the documents.

Jordan Schneider: To draw a parallel from semiconductor history, there was a big debate about RISC versus CISC architectures back in the ’80s and ’90s. Pat Gelsinger pushed x86 as the dominant architecture, arguing that software wouldn’t need to be super efficient because hardware would continue improving exponentially. Essentially, Moore’s Law would clean up inefficiencies in code.

Fast forward to today, and it seems like there are enough AI engineers finding creative ways to use compute that algorithmic innovations will expand to match the available compute. Engineers at places like Anthropic, DeepMind, and OpenAI are the first to play with these resources. Would you agree this is the trend we should expect?

Chris Miller: Yes, that sounds about right. If compute is available, we’ll find ways to use it. An economist might ask, “What’s the marginal benefit of an additional unit of compute?” Ideally, we want the most algorithmic bang for our buck with each unit of compute.

In the last few years, we’ve seen GPU shortages in certain market segments, indicating strong economic output from every GPU deployed — or at least that’s the assumption behind the investments. It’s uncertain whether this trend will persist in the long term.

The trajectory of Moore’s Law has historically been steady, but estimating improvements in algorithmic efficiency is much harder. There’s no reason to believe these improvements will follow a linear or predictable trend, so I’d remain agnostic about where we’re heading.

Lennart Heim: Even as compute efficiency improves, there’s still the question of how these breakthroughs are discovered. Are they serendipitous — like having a great idea in the shower — or do they emerge from systematically running hundreds of experiments?

Often, breakthroughs in compute efficiency come from large-scale experimental setups. Sometimes, these ideas are even inspired by the models themselves, like using a GPT model to develop the next iteration.

Leading AI companies have an ongoing internal competition for compute resources. With AI becoming commercialized, the competition intensifies because allocating more compute for research means less is available for deployment.

I’d be curious to see load graphs for these companies. Are experiments run at night when fewer people use ChatGPT? These are the types of strategies companies likely adopt when managing limited resources.

Jordan Schneider: To Chris’s point, as long as these systems are profitable and there’s value in increasing their intelligence, demand for them will persist. The smarter the systems, the more value they provide across sectors.

Looking at China, if export controls are working and TSMC can’t produce unlimited chips for Huawei, leaving the Chinese AI and cloud ecosystem with one-third of the capacity needed, what does that mean for research directions engineers in the PRC might take?

Chris Miller: Two things stood out to me this year.

First, rental prices for GPUs in China were reportedly lower than in the U.S., which is surprising in a supposedly GPU-constrained environment. This could suggest either that China isn’t actually GPU-constrained or that there’s lower demand than expected.

Second, Chinese big tech firms — ByteDance excluded — haven’t shown the same steady increase in capital expenditures on AI infrastructure as U.S. firms like Google or Microsoft. If you chart capex trends from ChatGPT’s launch to now, Alibaba, Tencent, and Baidu don’t display the same commitment to scaling AI infrastructure.

Why might this be?

  1. Fear of chatbots saying something politically sensitive about Xi Jinping.

  2. Doubts about market demand for their products.

  3. Lingering caution from the 2019–2020 regulatory crackdown, making massive investments seem unwise.

But there does seem to be a striking difference between how Chinese big tech firms are responding to AI relative to U.S. big tech firms. I wonder what that tells us more generally about compute demand in China going forward.

Lennart Heim: China’s venture capital ecosystem is quite different from the U.S. system. America’s sprawling VC ecosystem provides the risk capital needed to explore bold ideas, like building billion-dollar data centers or reactivating nuclear power plants.

Jordan Schneider: Exactly. In China, there’s less capital available for speculative investments. Investing tens of billions of dollars into cloud infrastructure for training AI models isn’t immediately profitable, so Chinese tech firms hesitate to do it. We recently translated two interviews with DeepSeek’s CEO that explain this in more detail.

There have been large, loss-leading investments in hardware-heavy sectors of the economy, but not many software-focused investments. DeepSeek, by operating more like a research organization and less like an Alibaba-style traditional tech firm, has taken a longer-term approach. It’s unclear whether smaller incumbents with sufficient capital can continue innovating or if progress will depend on stolen algorithmic IP.

Lennart, what’s your perspective on securing model weights and algorithmic IP as we head into 2025?

Lennart Heim: A lack of compute usually means fewer algorithmic insights, which causes the ecosystem to slow down. But stealing model weights is a shortcut. I’m referencing RAND’s recent report, Securing Model Weights, on this question.

Training a system may require tens of thousands of GPUs, but the result is just a file, a few gigabytes or terabytes in size. If someone accesses that file, they reduce or even bypass the need for GPUs entirely.

New ideas are compute multipliers. Publication causes widespread diffusion of these multipliers, which we have seen with transformer architecture, for example.

But this changed about two years ago. In the name of security, OpenAI, DeepMind, and Anthropic no longer publish many detailed architecture papers. OpenAI hasn’t released its model architectures since GPT-4.

If you want to know what the architecture looks like, you have to go to the right parties in San Francisco and talk to the right people. Which is exactly the problem. You could walk out of these parties with huge compute efficiency multipliers.

These companies still mostly have normal corporate environments. But if we see AI as core to national security, these AI labs need to be secure against state actors. These companies will eventually need help from the U.S. government, but they also need to step up on their own. Because this IP leakage completely undermines American export controls.

Chris Miller: How do we know we’re striking the right balance between securing important IP and fostering the free exchange of ideas that drives technological progress and economic growth? What’s the metric for assessing whether we’ve achieved that balance?

Lennart Heim: Right now, we’re living in the worst of both worlds. OpenAI and DeepMind aren’t going around sharing their research openly with other researchers, publishing on arXiv, or presenting at conferences like NeurIPS or ICML. They’re not diffusing knowledge widely to benefit everyone.

At the same time, their proprietary information is still vulnerable to hacking. So, instead of fostering diffusion within the U.S. ecosystem, we inadvertently enable adversaries or bad actors that are willing to use illicit measures to access this information. This is the worst-case scenario.

Clearly an open ecosystem is beneficial in many ways. That’s why some companies still open-source model weights and infrastructure — it helps push the entire U.S. ecosystem forward.

Assessing the ideal policy balance is hugely complex. There are many reports discussing the trade-offs of open-sourcing versus safeguarding. For now, though, it’s clear that we’re in a bad place — keeping knowledge from U.S. researchers while leaving it vulnerable to theft.

Jordan Schneider: Let me offer a counterargument. Developing algorithmic innovations for frontier AI models isn’t something that happens on an assembly line. The places that succeed most at this have cultivated a unique research culture and can attract top talent from around the world. That includes talent from China, which produces a huge share of advanced AI research and talent.

A highly classified, “skunkworks”-style approach could create two major downsides.

  1. From a talent perspective, it becomes harder to attract people with diverse backgrounds if they need security clearances to access cutting-edge research.

  2. Research in highly classified settings tends to be compartmentalized and siloed. In contrast, the open, collaborative environments in leading labs foster innovation by allowing researchers to share insights, compare experiments, and optimize resources.

Imposing rigid barriers could hinder internal collaboration within firms, making it harder for researchers to learn from each other or gain equitable access to resources like compute.

The Manhattan Project succeeded by isolating talent in the desert until they developed the atomic bomb. That’s not a model we can apply to OpenAI, Anthropic, or other AI labs. The internal openness that has allowed Western labs to thrive could be stifled by the kind of restrictions you’re suggesting.

Lennart Heim: Absolutely. I’m not arguing that security comes without costs. It’s important to consider where to put the walls. We already have some walls in the AI ecosystem — we call them companies. We could achieve a lot by strengthening those existing walls while maintaining openness within organizations.

If someone eventually decides that research must happen in fully classified environments, then of course that would slow down innovation.

For now, though, many measures could enhance security at relatively low cost while preserving research speed. The RAND report referenced earlier outlines the costs and methods of different security levels. Some security measures don’t come at any efficiency cost. Just starting with low-hanging fruit — measures that are inexpensive yet effective — could go a long way.

Jordan Schneider: I have two ideas on this front.

First, if we believe in the future of export controls and assume the U.S. and its allies will maintain significantly more compute capacity than China, it could be worthwhile for the labs or the National Science Foundation to incentivize research in areas where the U.S. is more likely to have sustainable advantages compared to China going forward.

Second, banking on these security measures seems like a poor long-term strategy for maintaining an edge in emerging tech. I mean, think about Salt Typhoon. The Chinese government has been able to intercept the phone calls of anyone they want at almost no cost. Yes, it’s possible to make eavesdropping harder, but I’m not sure any organization can secure all their secrets from China indefinitely.

Source: RAND p. 22

Chris Miller: That raises the question of how to think about algorithmic improvements. Are they like recipes that can be easily copied? Or are they deeply embedded in tacit knowledge, making them hard to replicate even if you have the blueprints?

I’m not sure what the answer is, but replicability seems key to assessing how far to go with security measures.

Lennart Heim: You can draw an interesting connection here to the semiconductor industry. We’re all familiar with cases of intellectual property theft from ASML, the Dutch company building the most advanced machines for chip manufacturing. However, it’s not enough to simply steal the blueprints.

There’s a lot of tacit knowledge involved. For instance, when someone joins a semiconductor company, they learn from experienced technicians who show them how to use the machines. They go through test runs, refining their processes over time. This knowledge transfer isn’t written down — it’s learned by doing.

While this principle applies strongly to the semiconductor industry, it may be less relevant to AI because the field is much younger.

Recently, I’ve been thinking about whether there are still low-hanging fruits and new paradigms to explore. Pre-training has been scaled up significantly over time, and it’s becoming harder to find new ideas in that area. However, test-time compute is an emerging paradigm, and it might be easier to uncover insights there.

I expect academics, DeepSeek, and others to explore this paradigm, finding new algorithmic insights over the next year that will allow us to squeeze more capabilities out of AI models. Over time, progress might slow, but we could sustain it by increasing investment. That’s still an open empirical question.

Jordan Schneider: On that note, Lennart, what does it really mean to be compute-constrained?

Lennart Heim: I’ve been thinking about this more from the perspective of export controls. There’s often an expectation that once export controls are imposed, the targeted country will immediately lose the ability to train advanced AI models. That’s not quite accurate.

To evaluate the impact of export controls, it’s useful to consider both quantity and quality.

The quality argument revolves around cutting off access to advanced manufacturing equipment, like ASML’s extreme ultraviolet lithography machines. Without them, a country can’t produce the most advanced AI chips. For instance, while TSMC is working with 3-nanometer chips, an adversary might be stuck at 7 nanometers.

This results in weaker chips with fewer FLOPS (floating-point operations per second). Due to the exponential nature of technological improvement, the performance gap is often 4x, 5x, or 6x, rather than a simple 10% difference. Export controls exacerbate this gap over time.

The quantity argument is equally significant. Chip smuggling still happens, but access to large volumes of chips is much harder due to export controls and restrictions on semiconductor manufacturing equipment.

Being compute-constrained impacts the entire ecosystem. With fewer chips, fewer experiments can be run, leading to fewer insights. It also means fewer users can be supported. For example, instead of deploying a model to 10 million users, you might only support 1 million.

This has a cascading effect. Fewer users mean less data for training and less revenue from deployment. Lower revenue reduces the ability to invest in chips for the next training cycle, perpetuating the constraint.

Additionally, AI models increasingly assist engineers in conducting AI research and development. If I have 10x more compute than my competitor, I essentially have 10x more AI agents — or “employees” — working for me. This underscores how compute constraints can hobble an entire ecosystem.
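The quality and quantity effects described above compound multiplicatively. A back-of-the-envelope sketch makes this concrete; all the numbers here are illustrative assumptions, not figures from the conversation:

```python
# Illustrative sketch of how per-chip performance gaps (quality) and
# chip-count gaps (quantity) compound. All inputs are hypothetical.

def effective_compute(num_chips: int, flops_per_chip: float) -> float:
    """Aggregate cluster throughput (in arbitrary FLOP/s units)."""
    return num_chips * flops_per_chip

# Leading-edge ecosystem: many chips, each ~5x faster (quality gap).
leader = effective_compute(num_chips=100_000, flops_per_chip=5.0)

# Constrained ecosystem: fewer chips, each built on an older process node.
constrained = effective_compute(num_chips=20_000, flops_per_chip=1.0)

print(f"effective compute gap: {leader / constrained:.0f}x")  # prints "effective compute gap: 25x"
```

Under these assumed numbers, a 5x quality gap combined with a 5x quantity gap yields a 25x gap in effective compute, which is why the two arguments together matter more than either alone.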

Chris Miller: That makes a lot of sense. My theory about the Chinese government’s response — and Jordan, let me know if this resonates — is that they seem less concerned with consumer applications of AI and more focused on using AI as a productive force.

Their strategy appears to prioritize robotics and industrial AI over consumer-facing applications. The hope is that limited compute resources, when deployed toward productive uses, will yield the desired returns.

The problem with this approach is that much of the learning from AI systems comes from consumer applications and enterprise solutions. Without a full ecosystem, their progress will likely be stunted. It’s like trying to balance on a one-legged stool.

Jordan Schneider: Chris, that’s an interesting observation. It’s a reasonable strategy for a country facing resource constraints, but it also highlights the limitations of being compute-constrained.

Chris Miller: Exactly. There’s also a political dimension to consider. In addition to being compute-constrained, China has spent the past five years cracking down on its leading tech companies. This has dampened their willingness to invest in consumer-facing AI products.

After all, a successful product could draw political scrutiny, which isn’t a safe place to be. That dynamic further limits the development of a robust AI ecosystem.

Lennart Heim: That’s a great point. The revenue you generate often determines the size of your next model. OpenAI’s success, for instance, has attracted venture capital and fueled further progress.

China’s state subsidies can offset some of this reliance on revenue. They can fund projects even without immediate returns, challenging the flywheel effect I described earlier.

Still, there are many less compute-intensive AI applications, like AI agents, that are being developed worldwide. These don’t require the same level of resources but still factor into national security concerns.

The key question is, what are we most worried about? For AGI or highly advanced AI agents, compute constraints will likely be a major factor. But China might already be leading in domains like computer vision.

The ideal balance between compute intensity and emerging risks remains an open empirical question. We’ll need to monitor how these dynamics evolve over time.

Middle East Expansions and Cloud Governance

Chris Miller: Another obvious implication of being compute-constrained is that you can’t export computing infrastructure. Perhaps that’s a good segue to discussing the Middle East.

Lennart Heim: Part of the compute constraint story, as you mentioned, is that if you need chips to meet domestic demand, you can’t export a surplus. If the PRC is barely able to meet its own internal demand — assuming that’s true, though we don’t have solid evidence yet — it’s clear that the U.S. and its allies are producing significantly more AI chips. This allows them to export chips, but there’s an ongoing debate about where and how these chips should be exported.

Existing export controls already cover countries like China, Russia, and others in the Country Group D5 category. However, there are also countries in Group D4, like the UAE and Saudi Arabia, which require export licenses for AI chips. These countries are increasingly ambitious in AI, and since early this year, the U.S. government has been grappling with whether and under what conditions to allow chips to be exported to them.

Export licenses offer flexibility. They can come with conditions — such as requiring buyers to adhere to specific guidelines — before granting access to AI chips. There’s clearly demand for these chips, and this debate will likely continue into the next year, as policies and the incoming administration determine where the line should be drawn.

Chris Miller: It’s been publicly reported that upcoming U.S. regulations might involve using American cloud providers or data center operators as gatekeepers for AI chip access in these countries. This approach would essentially make private companies the enforcers of usage guidelines.

Lennart Heim: That’s an intriguing approach. It creates a win-win scenario: these countries get access to AI chips, but under the supervision of U.S. cloud providers like Microsoft, which can monitor and safeguard their use.

It’s important to understand that export controls for AI chips differ from those for physical weapons. A weapon is controlled by whoever possesses it, but AI chips can be used remotely from anywhere. If a country faces compute constraints due to export controls, one solution is to use cloud computing services in other countries or build data centers abroad under shell companies.

Most AI engineers never see the physical clusters they use to train their systems. The data centers are simply wherever electricity and infrastructure are available. This makes it challenging to track chip usage.

There are three layers to this challenge:

  1. Where are the chips? This is the most basic question.

  2. Who is using the chips? Even if you know where they are, it’s hard to determine who is accessing them.

  3. What are they doing with the chips? Even if you know who is using them, you can’t always control or monitor the models they train.

U.S. cloud providers can help address the second layer by verifying customers through “know your customer” regimes. I’ve written about this in a paper titled Governing Through the Cloud during my time at the Centre for the Governance of AI. Cloud providers can track large-scale chip usage and ensure compliance, making them far more reliable gatekeepers than companies in the Middle East.

Chris Miller: There’s broad agreement that no one should be allowed to build an AI cluster for nefarious purposes. But regulations seem to be taking this further, aiming to ensure long-term dominance of U.S. and allied infrastructure.

The idea is to not only set rules today but maintain the ability to enforce them in the future. This makes some people uncomfortable because it positions the U.S. and its allies as the long-term power brokers of AI infrastructure, potentially limiting the autonomy of other countries.

Lennart Heim: That’s a fair criticism, but I would frame it more positively. This is about ensuring responsible AI development. Depending on your geopolitical perspective, some might view it as the U.S. imposing its values, while others see it as necessary for safety and accountability.

Exporting chips isn’t the only option. Countries can be given access to cloud computing services instead. For example, if someone insists on acquiring physical chips, you could ask why they can’t simply use remote cloud services. But many countries want sovereign AI capabilities, with data center protection laws and other safeguards.

The ultimate goal should be to diffuse not only AI chips but also the infrastructure and practices for responsible AI development.

Jordan Schneider: This reminds me of a recent story that struck me. The American Battlefield Trust is opposing the construction of data centers near Manassas, a Civil War battlefield. It’s a tough dilemma — I want those data centers, but preserving historical sites is important too.

Intel’s Future and TSMC Troubles

Jordan Schneider: Speaking of sovereignty in AI, let’s discuss Intel. There’s been a lot of speculation about Pat Gelsinger’s departure and the board’s decision to prioritize product over foundry. Chris, what’s your take on this news?

Chris Miller: It’s a significant development and signals a major strategy shift for Intel, though the exact nature of that shift remains unclear.

There are several possible paths going forward:

  1. Intel could sell some of its design businesses and double down on being a foundry.

  2. It could do the opposite and focus on design while stepping back from foundry ambitions.

  3. It might just try to muddle through, continuing its current strategy until its next-generation manufacturing process proves itself.

None of these options are ideal compared to where expectations were two years ago.

Intel will present a tough challenge for the incoming administration. The company has already received $6–8 billion through the CHIPS Act to build expensive manufacturing capacity, but there’s no guarantee it will succeed. Going forward, Intel will likely require significant capital from both the private and public sectors.

Jordan Schneider: This ties back to the fundamental pitch that Pat Gelsinger made during the CHIPS Act discussions — that America should have leading-edge foundry capacity within its borders.

This is a global industry, and the world would face severe consequences if Taiwan were invaded, regardless of U.S. manufacturing capacity. Taiwan is nominally an ally, and TSMC’s leadership should know better than to antagonize the U.S. government by selling leading-edge chips to Huawei, because Washington is the ultimate guarantor of Taiwan’s current status.

That said, it would certainly be preferable for the U.S. to have Intel emerge as a viable second supplier or even the best global foundry. But how much are you willing to pay for that? Even if you allocate another $50 billion or $100 billion, can you overcome the cultural and structural issues within Intel?

There’s no denying the enormous business challenges involved. Competing in this space has driven many companies out of the market over the past 20 years because it’s simply too hard. Chris, do you want to put a dollar figure on how much you’d be willing to raise taxes to fund a US-owned leading-edge foundry?

Chris Miller: You’re right — it’s not just about money. But it is partly about money because funding gives Intel the time it needs to demonstrate whether its processes will work.

Whether Intel succeeds or fails, it’s clear that if they only have 12 months, they’ll fail. They need 24 to 36 months to prove their capabilities. Money buys them that time.

The other variable is TSMC, which already has its first Arizona plant in early production. Public reports indicate that the yields at this plant are comparable to those in Taiwan, which is impressive given the negative publicity surrounding the Arizona plant in recent years.

TSMC is building a second plant in Arizona and has promised a third. If these efforts succeed, the need for an alternative US-based foundry diminishes because TSMC is effectively becoming that alternative.

The big question is how many GPUs for AI are being manufactured in Arizona. TSMC has publicly stated that 99% of the world’s AI accelerators are produced by them, and currently, that production is confined to Taiwan. Expanding this capability to Arizona would be a game-changer.

Lennart Heim: There has been public reporting that Nvidia plans to produce in the U.S. in the near future, which would be a positive development. But the broader question extends beyond logic dies. What about high-bandwidth memory? What about packaging? Where will those components be produced?

The strategic question is whether the U.S. should carve out a complete domestic supply chain for AI chips. Is it a strategic priority to have every part of the process — HBM, packaging, and more — onshore, or are we content with relying on external suppliers for certain elements?

Chris Miller: Intel has received commitments through the CHIPS Act, but those funds are contingent on them building new manufacturing capacity. The new leadership team at Intel might decide not to proceed with some of those plans.

This raises a critical question — if Intel doesn’t build those facilities, what happens to the allocated CHIPS Act funding? It’s important to note that Intel hasn’t received all the money; they’ve been promised financial assistance if they meet specific milestones.

This decision will likely land on the desk of the next administration, and they’ll need to assess whether additional private and public capital is necessary to ensure Intel’s competitiveness.

Jordan Schneider: Early on, policymakers should evaluate the trade-offs clearly. If we give $25 billion to America’s foundry, it might result in a 30% chance of competing with TSMC on a one-to-one basis by 2028. At $50 billion, maybe it’s a 40% chance. At $75 billion, perhaps it rises to 60%.

Even with massive investment, there’s a ceiling on how competitive Intel can become. The rationale for the initial $52 billion in CHIPS Act funding was compelling, but that was spread across many initiatives — not just frontier chip manufacturing.

For Intel to achieve parity with TSMC by 2028, you’d need to show how increased investment could meaningfully improve the odds. This is a challenge for Intel’s next CEO, the next commerce secretary, and whoever oversees the CHIPS Act moving forward.

Time is critical, and if Intel can’t make it work, we’re left relying on TSMC. That brings us back to the awkwardness of TSMC producing a significant number of chips for Huawei. We need to dive deeper into that story.

Lennart Heim: Great segue. The Huawei story broke a couple of months ago, and it highlights the challenges of enforcing export controls.

The basic premise of export controls is to prevent Chinese entities from producing advanced AI chips at foundries like TSMC. There are two main rules:

  1. If you’re on the Entity List, like Huawei, you can’t access TSMC’s advanced nodes.

  2. AI chips cannot be produced above certain performance thresholds.

TechInsights conducted a teardown of the Huawei Ascend 910B and found it was likely produced at TSMC’s 7-nanometer node. This violates both rules — the Entity List restriction and the AI chip performance threshold.

Shell companies and similar tactics make compliance tricky, but based on the available information, this should have been detected.

What’s even more concerning is TSMC’s role in this. If TSMC is producing an AI chip with a die size of 600 square millimeters — massive compared to smartphone chips — they should have raised red flags.

Any engineer can tell the difference here. There are probably structural issues at TSMC where the legal compliance team doesn't talk to the engineers.

But on the other side are the design teams. Fabrication isn’t a matter of handing over a design and then going silent; it’s a co-design process, with ongoing communication throughout. Yet TSMC still produced the logic dies for the Ascend 910B, although it remains an open question whether all of these chips were made there.

But TSMC’s involvement definitely undermines export controls. One plausible reading is that Huawei turned to TSMC because of ongoing production problems at SMIC. We definitely need more insight here.

Intelligence Failures and Government Follow-on (Putting the Cart Before the Horse)

Jordan Schneider: Speaking of shell companies, what was the U.S. intelligence community doing? The fact that this information had to come from TechInsights is mind-boggling. I can’t imagine there are many higher priorities than understanding where Huawei is manufacturing its chips. For this to break through TechInsights and Reuters feels like an absurd sequence of events. It highlights a glaring gap in what the U.S. is doing to understand this space.

Lennart Heim: We’ve seen this before, like when Huawei’s advanced phone surfaced during Raimondo’s visit to China. There’s clearly more that needs to be done. The intelligence community plays a role, but think tanks, nonprofits, and even private individuals can contribute to filling this gap.

For example, open-source research can be incredibly powerful. People can use Google Maps to identify fabs or check Chinese eBay for listings of H100 chips. There’s a lot you can do with the resources available, and nonprofits can play a critical role in providing this type of information.

The gap in knowledge here is larger than I initially expected, and there’s a lot of room for improvement.

Chris Miller: This also points to the broader challenges of collecting intelligence on economic and technological issues, which the U.S. has historically struggled with.

It’s also worth asking what information the Chinese Ministry of State Security (MSS) is gathering about technological advances in other countries, and what conclusions they are drawing. If we’re struggling with this, I wonder what kind of semiconductor and AI-related briefings are landing on Xi Jinping’s desk. Do those briefings align with reality, or are they equally flawed?

Jordan Schneider: It sounds like the solution is just to fund ChinaTalk!

On the topic of the MSS, the Center for Security and Emerging Technology (CSET) has published reports on China’s systems for monitoring foreign science (2021) and its open-source intelligence operations (2022). But when you read Department of Justice indictments against people caught doing this work, it often seems amateurish and quota-driven.

I don’t have a clearance and can’t say for sure, but it makes me wonder — if the U.S. is struggling to figure out Huawei’s chips, maybe the Chinese are equally bad at uncovering OpenAI’s secrets. This might reflect bureaucratic challenges on both sides, such as bad incentives, underfunded talent pools, and difficulty competing with the private sector.


Lennart Heim: That’s true. But there’s a broader issue here. I come from a technical background — electrical engineering — and transitioning into the policy world has been eye-opening.

One thing I’ve realized is that there are often no “adults in the room,” especially on highly technical issues. In domains like AI and semiconductors, there simply aren’t enough people who deeply understand the technology.

Getting these experts into government is a huge challenge because public-sector salaries can’t compete with private-sector ones. It’s not about hiring international experts — it’s about bringing in people who know these technologies inside and out. They need to be aware of the technical nuances to even track these developments properly.

For example, I’ve used this as an interview question — if you were China, how would you circumvent export controls? Surprisingly few people mention cloud computing. Most assume physical products and locations matter most, but it’s really about compute. It doesn’t matter if the H100 chip is in Europe — you care about how it’s used.

These types of insights require a technical mindset, and we need more of these brains in D.C. — in think tanks, government, and the IC. We’re still in the early days of implementing export controls, and the more technical expertise we bring in, the better we’ll get.

Many of the hiccups we’ve seen so far can be traced back to a lack of technical knowledge and capacity to address these issues effectively.

Jordan Schneider: We need to diffuse technical talent into government, but we also need to diffuse AI into the broader economy. Lennart, how should that happen?

Lennart Heim: Diffusion is a big topic. Earlier, we touched on where data centers should be built — Microsoft expanding abroad is one form of diffusion. Another aspect involves balancing protection, such as export controls, with promotion. These two strategies should go hand in hand.

Diffusing AI has several benefits. It can be good for the world and can also counter the development of alternative AI ecosystems, like those in the PRC. From a national security perspective, it’s better to have American-led AI chips, data centers, and technologies spread globally.

That raises an important question — as the gap between AI models narrows, with China catching up and smaller models improving, are models really the key differentiator anymore? From a diffusion standpoint, what should we focus on if models aren’t the most “sticky” element?

Take GitHub as an example. Copilot originally relied on OpenAI’s Codex to help users write code but recently added Anthropic’s Claude as an option. This shows how easily models can be swapped out with a simple API switch. Even Microsoft acknowledges this flexibility, and it’s clear that models may not provide the long-term competitive edge we assume.

If models aren’t the differentiator, what is sticky? What should we aim to diffuse, and how should we go about it?

Chris Miller: The interesting question is which business models will prove to be sticky. Twenty years ago, I wouldn’t have guessed that we’d end up with just three global cloud providers dominant outside of China and parts of Southeast Asia.

Those business models have extraordinary stickiness due to economies of scale. The question now is — what will be the AI equivalent of that? Where will the deep moats and large economies of scale emerge?

These are the assets you want in your country, not in others. They provide enduring influence and advantages. While we don’t yet know how AI will be monetized, this is a space worth watching closely.

Lennart Heim: That’s a great point. It also ties into the idea of building on existing infrastructure. Take Microsoft Word — it’s incredibly sticky. Whether you love it or hate it, most organizations rarely switch away from it.

For example, the British government debated moving away from Microsoft Office for years. The fact that this debate even exists shows how difficult it is to dislodge these systems.

Maybe the stickiness lies in integrating AI into tools like Word, with features like Copilot calling different models. Or perhaps it’s in development infrastructure.

We’ve focused a lot on protecting AI technology, but we haven’t thought enough about promoting and diffusing it. This includes identifying sticky infrastructure and understanding how to win the AI ecosystem, not just by building the best-performing models but by embedding AI into tools and workflows.

Chris Miller: This brings us back to the Middle East and the tension between export controls and economies of scale. If economies of scale are crucial, you want your companies to expand globally as soon as possible.

That raises a question: does this mean relaxing export controls on infrastructure, or do you maintain strict control? Balancing the need for control with the benefits of scaling up globally is a delicate but important challenge.

Lennart Heim: What about smartphones? AI integration into smartphones seems like a big deal. For example, Apple has started using OpenAI models for some tasks but is also developing its own. At some point, I expect Apple to ditch external models entirely.

Interestingly, Apple is also moving away from Nvidia for certain AI tasks, developing its own AI systems instead. With millions of MacBooks and iPhones in users’ hands, Apple could quickly scale its AI.

This shift toward consumer applications — beyond chatbots — will define the next phase of AI. We’ll see if these applications prove genuinely useful. For now, feedback on Apple’s recent AI updates has been underwhelming, but that could change next year.

If Apple’s approach takes off, could it define who wins in AI?

Jordan Schneider: Let me take this from a different angle. AI matters because it drives productivity growth, and that’s what we should be optimizing for.

I trust that companies like Apple, Nvidia, and OpenAI will continue improving models and hardware. My concern is that regulatory barriers will block people from reaping the productivity benefits.

For example, teacher unions might resist AI in classrooms, or doctors might oppose AI in operating rooms. Every technological revolution has brought workplace displacement, but history shows that these changes leave humanity better off in the long run — more productive and satisfied.

The next few years will see political and economic fights between new entrants trying to deploy AI and labor forces pushing back, especially through regulation. These battles will determine how AI transforms industries.

Chris Miller: Agreed. Beyond the firm-versus-labor dynamic, there’s also a competition between incumbent firms and new entrants. This varies by industry but is equally important.

Then there’s the question of which political system — ours or China’s — is better suited to harness innovation rather than obstruct it. You could make arguments for either.

Jordan Schneider: Take Trump, for example. On one hand, he’s concerned about inflation and unemployment; on the other, he has backed policies like opposing port automation.

Ultimately, I don’t think Trump himself will play a huge role in these decisions. Instead, it’s the diffuse network of organizations — standard-setting bodies, school boards, and others — that will shape the regulatory landscape. Culture also matters here. Discussions about AI’s risks — like safety concerns and job loss — have made the technology seem more frightening than it should be.

These risks are real, but they need to be balanced against the benefits of technological progress. Right now, the negative cultural conversation about AI could influence these diffusion debates.

Xi Jinping might be even more worried about unemployment than Trump. But some parts of China’s non-state-owned economy are probably more willing to experiment and adapt new workflows.

The U.S. may be too comfortable to navigate the disruptions needed to fully harness AI’s potential. This complacency could slow progress compared to China’s willingness to experiment aggressively.

Chris, what do you think about a Manhattan Project for AI?

Chris Miller: The term “Manhattan Project” for AI isn’t quite right. The Manhattan Project was secretive, time-limited, and narrowly focused. What we need for AI is long-term diffusion across society.

The better analogy is the decades-long technological race with the Soviet Union, marked by broad R&D investments, aligned incentives, and breaking barriers to innovation. This kind of sustained effort is what we need for AI.

Lennart Heim: That requires projects that focus on onshoring more fabs and data centers — like CHIPS Act 2.0. It also requires energy and permitting reform.

Compute is key, and building more data centers is a good starting point, but we also need to secure what we build. This includes data centers, model weights, and algorithmic insights. If we’re investing in these capabilities, we can’t let them be easily stolen. Innovation and security must go hand in hand.

Jordan Schneider: One thing I’d add is the importance of immigration reform. The Manhattan Project had over 40% foreign-born scientists. If we want to replicate that success, we need to attract the world’s best talent.

This is a low-cost, high-impact solution to drive growth, smarter models, better data centers, and more productivity. It’s crucial to have the best minds working in the U.S. for American companies.

Lennart Heim: Absolutely. Many of the top researchers in existing AI labs are foreign-born. Speaking personally as a recent immigrant, I’d love to contribute to this effort. If we’re doing this, let’s do it right.

Reading Recommendations

Jordan Schneider: Let’s close with some holiday reading recommendations. Lennart, what was your favorite report of the year?

Lennart Heim: Sam Winter-Levy at Carnegie just published a report called The AI Export Dilemma: Three Competing Visions for U.S. Strategy. It touches on many of the topics we discussed, like how we should approach diffusion, export controls, and swing countries. It has some good ideas.

Jordan Schneider: I’d like to recommend The Gunpowder Age: China, Military Innovation, and the Rise of the West in World History by Tonio Andrade. We’ll be doing a show on it in Q1 2025. It’s an incredibly fun book and addresses a real deficit in Chinese military history. The author dives deep into Chinese sources and frames the Great Divergence through the lens of gunpowder, cannons, and guns.

He uses fascinating case studies, like battles between the Ming and Qing against the Portuguese, British, and Russians, to benchmark China’s scientific innovation during the Industrial Revolution.

The book argues — similar to Yasheng Huang’s perspective from our epic two-hour summer podcast — that the divergence between China and the West happened much later than commonly believed. Into the 1500s and 1600s, China was still on par with the West in military innovation, including boat-building, cannon-making, and gun-making.

The writing is full of flair, which is rare in historical works. It’s military history, technology, and China vs. the rest of the world — all my sweet spots in one book.

What’s your recommendation, Chris?

Chris Miller: For some more deep history, I recommend A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains by Max Bennett.

It’s a history of brains and how they’ve evolved over millions of years, starting with the first neurons. The author is an AI expert who became fascinated by the evolution of intelligence and ended up becoming a neuroscience expert in the process.

The book is extraordinary — more fun than I expected — and thought-provoking in how it explores the history of thinking across all kinds of beings, including humans.

ChinaTalk is a reader-supported publication. To receive new posts and support our work, consider becoming a free or paid subscriber.

Cycling Across Europe, Part 5: The Cold Germans?

When it comes to the personalities of various European peoples, the Chinese-language world has a saying that “Germans are cold.” For many years that was my impression of Germans too; tracing it back, it came mostly from reading and hearsay. In recent years, through long-distance hiking and cycling in Europe, I’ve had the chance to get close to ordinary Germans, and I’ve gradually realized that this impression doesn’t quite match reality; it is largely a prejudice born of stereotype.


Pony.ai’s Robotaxis and the Long Road Ahead

Robotaxi observers worldwide believe one thing about the industry in China: it’s moving aggressively. The NYT dubbed Wuhan “the world’s largest experiment in driverless cars,” thanks to significant government support in terms of both testing regulations and data collection.

The hottest Chinese startups are going public in the US, too. Most notably, Pony.ai 小马智行 made its Nasdaq debut the day before Thanksgiving. The company recently said it would expand its robotaxi fleet from 250 to 1,000 in 2025, significantly growing its commercialization prospects.

Pony.ai’s IPO valued the company at $5.25 billion. Not peanuts, but less than its $8.5 billion Series D valuation. Compared to its heyday, Pony is being tested by a harsh market reality and may face stringent international regulatory barriers as it tries to bring its technology abroad. To understand the future of Chinese AVs, ChinaTalk dove deep into the industry’s history and interviewed both cofounders of Pony.ai. We get into:

  • Pony.ai’s rise to China’s leading robotaxi company

  • How broader changes in the robotaxi industry and Chinese companies’ fundraising environment affect Pony.ai’s IPO

  • The challenges of commercialization for Pony.ai

  • Pony.ai’s plans for overseas expansion amidst international hesitance to accept Chinese AVs

Past: The Top Horse in the Chinese AV Race

In late 2016, James Peng 彭军 and Tiancheng Lou 楼天城 left their jobs at Baidu’s autonomous driving department to found Pony.ai, a company that develops Level 4 autonomous driving technology. L4 refers to the stage of automation where a vehicle can drive without a human driver.

The self-driving industry boomed during those years. Hundreds of startups were founded across a complex web of transportation systems, and autonomous driving was largely believed to be the first momentous application of artificial intelligence in the pre-GPT era. Uber started its self-driving unit in 2015 and acquired Otto, a self-driving truck startup, in 2016. The same year, Google’s self-driving unit, where Lou used to work, became Waymo and publicly demonstrated its technology. Almost every major automaker announced partnerships to develop these technologies or invested in a startup.

In China, Baidu started investing in autonomous vehicle research in 2013 and began the Apollo project to develop its own driverless vehicles in 2017. Many of the big names in China’s robotaxi industry came from Baidu, including the founders of Pony.ai and WeRide.

Thanks to the strong technical backgrounds of Pony’s founders, China’s biggest investors were immediately interested. HongShan ​​红杉中国 (formerly Sequoia China) led Pony’s seed round in 2017. Then in 2020, Toyota participated in Pony’s Series B, and later became a key partner in Pony’s attempts to commercialize its robotaxi tech: Pony develops self-driving software and hardware, and Toyota provides the vehicles.1

Pony also expanded quickly in its first five years. In 2018, Pony became the first company in China to launch a public-facing robotaxi service (i.e., vehicles driving autonomously with safety drivers aboard) regularly operating in Guangzhou, while also obtaining a permit to test in Beijing. Around the same time, the company started to explore robotrucks, establishing a trucking division in 2020. In 2021, Pony began to remove safety drivers from some of its robotaxis in Guangzhou.

At the end of 2020, Pony was valued at $5.3 billion and raised $2.5 billion in Series C funding thanks to institutional investors such as the Ontario Teachers’ Pension Plan and the Brunei Investment Agency. By June 2021, the company was on the verge of an IPO, but launch plans came to a grinding halt when the SEC asked for a “pause” on US IPOs of Chinese companies.

That marked the beginning of a downturn for many USD venture funds in China and left Pony in an awkward situation: Autonomous vehicles are capital-intensive and R&D-driven, and the commercialization process had only just started. Given the company’s high valuation, there were doubts about whether Pony could drum up more interest from private funds without an IPO.

But Pony wasn’t horsing around — it closed a $1.1 billion Series D at a valuation of $8.5 billion. In October 2023, amid a bonanza of Middle Eastern investments in Chinese EV and AV companies, Pony secured another $100 million from the financial wing of NEOM, Saudi Arabia’s urban desert megaproject.

Present: Pitched Promises Meet Commercialization Challenges

In its IPO filing, Pony disclosed the current scale of its operations. And these numbers are modest, to say the least.

Pony has a fleet of around 250 robotaxis operating across four tier-one cities in China: Beijing, Guangzhou, Shenzhen, and Shanghai. It’s charging fares for fully driverless rides in the first three. During the first half of 2024, each fully driverless robotaxi received an average of 15 orders per day.

By comparison, Waymo currently operates a fleet of over 700 vehicles which complete more than 150,000 rides every week in metro Phoenix, San Francisco, and Los Angeles. Baidu has a fleet of 500 vehicles in Wuhan alone.
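Those figures imply a sizable per-vehicle utilization gap. A back-of-the-envelope sketch, using only the numbers cited above (actual utilization varies with fleet size and coverage area):

```python
# Per-vehicle utilization implied by the figures above.
waymo_rides_per_week = 150_000
waymo_fleet_size = 700
pony_orders_per_day = 15  # per fully driverless robotaxi, H1 2024

waymo_rides_per_vehicle_per_day = waymo_rides_per_week / 7 / waymo_fleet_size
ratio = waymo_rides_per_vehicle_per_day / pony_orders_per_day

print(f"Waymo: ~{waymo_rides_per_vehicle_per_day:.0f} rides per vehicle per day")
print(f"Pony:   {pony_orders_per_day} orders per vehicle per day")
print(f"Gap:   ~{ratio:.1f}x")
```

Even treating the reported figures as rough, Waymo’s vehicles appear to be completing roughly twice as many trips per day as Pony’s.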

In an interview, Tiancheng Lou told me that only three companies have achieved L4 self-driving: Waymo, Baidu, and Pony.

AV companies are facing a tough reality in 2024. General Motors just axed funding for Cruise, the AV startup it acquired a decade ago. Cruise had been burning through money, and after a major accident involving a fully autonomous vehicle, it was only testing robotaxis very slowly, one city at a time, without offering public-facing services. Other automakers have been similarly unable to pony up the necessary cash for AV development.

For Pony.ai, there is an additional level of complexity, because unlike Apollo, Waymo, and even Amazon-owned Zoox, Pony is not backed by a major tech giant. It needs funding more urgently, perhaps, than any other AV company.

“It is my disadvantage,” Lou told me. “I have to wait until the cost structure is stable to add a car. But that also means that I have to do well.”

By “cost structure,” Lou was referring to per-vehicle operating margin: the difference between passenger fares and per-vehicle costs, including maintenance, research, and manufacturing. Lou and Peng were optimistic that the margin would turn positive in 2025; in other words, adding a new vehicle to the robotaxi fleet would no longer lose the company money.
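Lou’s break-even logic can be sketched in a few lines. The fare and cost figures below are purely hypothetical illustrations, not Pony’s actual numbers:

```python
def per_vehicle_margin(daily_fare_revenue: float, daily_cost: float) -> float:
    """Per-vehicle operating margin: fares earned minus per-vehicle costs
    (maintenance, amortized manufacturing, allocated research)."""
    return daily_fare_revenue - daily_cost

# Hypothetical: 15 orders/day at $7 per order, against $120/day in costs.
margin = per_vehicle_margin(15 * 7.0, 120.0)
print(margin)  # -15.0: each added robotaxi still loses money daily

# The margin turning positive is the point at which adding a vehicle
# no longer loses money, which is what Lou and Peng hope happens in 2025.
```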

But right now, Pony.ai is still a money-losing business: While revenue rose from $68 million in 2022 to $71 million in 2023, net loss attributable to the company was $148 million and $124 million in these two years respectively. R&D still accounts for the highest percentage of Pony’s operating expenses, adding up to over $123 million in 2023.

The good news is that losses have begun to narrow over time. But Pony’s IPO filing promise — that robotaxis would be the company’s main source of revenue — is still far from becoming a reality.

Pony has about 190 robotrucks, which generated 73% of its revenue over the first six months of 2024. Much of this new revenue came from transportation fees collected by Cyantrain, a joint venture founded by Pony and Sinotrans, for freight orders fulfilled by robotrucks.

Lou told me that it’s harder to scale robotrucks than robotaxis. The hardest part about robotaxis is the technology, he said. Once the technology is mature enough for vehicles to drive fully autonomously, it’s not hard to build tens of thousands of vehicles with that capability. Trucks, however, are larger and faster, so there is a higher safety standard to meet.

Another way to diversify revenue is by selling so-called L2++ technology, which Pony started doing in 2022. L2++ refers to technology that assists human drivers instead of replacing them, which is sold to OEMs.

Other Chinese self-driving startups are choosing this road as well. WeRide went public in October but sold only 3 robotaxis and 19 robobuses in 2023. Revenue from these products declined by almost half between 2021 and 2023, from 101 million yuan to 54 million yuan. And even these numbers are padded by government contracts (“local transportation service providers”) as opposed to public-facing, fare-charging robotaxi services. Meanwhile, WeRide’s service revenue (e.g., L2 to L4 technical support and R&D services) increased from 36 million yuan to 347 million yuan over the same period.

Regardless, Pony clearly benefits from a Nasdaq IPO: US investors are likely more lenient about the intensive time and capital requirements of self-driving vehicle development. Several investors told me that they didn’t expect a newly minted tech IPO to be profitable.

But investors won’t wait forever — eventually, AV shareholders will push for a path to profitability as proof that they bet on the right horse.

Future: Overseas Expansion vs International Regulation

Pony is already the leading robotaxi company in China, with operations in four tier-one cities, but questions remain about the company’s post-IPO international expansion plan. Pony now has research centers in Silicon Valley and Luxembourg, and potential operations in Hong Kong, Singapore, South Korea, Saudi Arabia, UAE, and Luxembourg.

“We have set up a layout of operations in these places,” James Peng said in an interview. “The ‘layout’ doesn’t necessarily mean that the cars are already there. It’s more about technological partnerships or selling parts. In some of these places, the cars are still on the way, but eventually, we will have them.”

The market is still very new, so Pony is still adjusting its overseas investments, Peng added.

Pony is also partnering with Uber to offer driverless car services in overseas markets, although Uber has also inked cooperation agreements with Waymo, Cruise, and Wayve. Before Pony’s IPO, Bloomberg reported that Uber was in talks to invest in the startup’s offering.

Pony doesn’t have any immediate plans to expand operations to the US. California suspended Pony’s driverless testing permit in the fall of 2021 after a reported collision in Fremont, a single-vehicle incident in which the car hit a street sign. (The state’s Department of Motor Vehicles also temporarily revoked Pony’s permit to test vehicles with a driver in 2022, alleging the company’s failure to monitor the driving records of its safety drivers; the permit was reinstated at the end of 2022.)

“We are mostly focusing on China, because there is a large enough market, sufficient demand, and policymakers are supportive,” Peng told me.

It is not easy to compare self-driving regulations between the US and China. I’ve heard US transportation officials say that the Chinese government is more proactive in regulating self-driving tech. To me, the main difference is that the Chinese government has adopted a more top-down approach to regulating autonomous vehicles. The first set of major guidelines for testing robotaxis on public roads was released in 2018.

Then in November 2023, China’s Ministry of Industry and Information Technology and three other departments jointly issued the “Notice on Carrying Out Pilot Work for the Access and Road Usage of Intelligent Connected Vehicles.” This notice focuses on conducting access pilot programs and road usage pilot programs for intelligent connected vehicles with L3 (where the driver should still remain in the vehicle) and L4 autonomous driving systems within designated areas.


More than 30 Chinese cities and provinces have released their own sets of guidelines and permitting schemes for the testing and operation of robotaxis. Some are more supportive than others. Yet, in all cities, robotaxis still can only operate on designated public roads, which could negatively impact attempts to scale.

Wuhan, sometimes called the "robotaxi city of China," is arguably the largest robotaxi testing ground in the world: its open testing area has reached about 3,000 square kilometers (over 1,160 square miles). Even so, Baidu has yet to reach its goal of deploying 1,000 robotaxis in the city by the end of 2024. (The consensus thus far is that at least 1,000 fully autonomous robotaxis are needed in a city to achieve scalable commercialization.) An expert from Baidu told Huxiu that the peak of robotaxi commercialization will arrive between 2028 and 2030.

In both the US and China, "the policies are more advanced than technological development," said Lou.


Source: Pony.ai
1. In its IPO filing, Pony said it had established a joint venture with Toyota, under which Toyota supplies vehicles as a fleet company, similar to Waymo's partnership with Jaguar. Toyota is the biggest shareholder of Pony.ai.

Five New Books: From the Cultural Revolution to Reform and Opening

Dear readers, happy new year!

Welcome to a new column at 《不如读书》: the New Book Bulletin. It will run monthly and introduce five new books.

What counts as a "new book": Chinese- or English-language titles published in the last year or two, not previews of books about to appear. All five books in this issue, for instance, were published in 2024, not forthcoming in 2025.

Selection scope: books in history and political science, drawn entirely from my three areas of interest: modern Europe, the Cold War, and contemporary China.

Format: the bulletin gives each book only a few sentences of introduction. If I especially like a book once I've finished it, I'll write a separate, fuller review.

One person's reach is limited, so please recommend new books you're following in the comments; where our interests overlap, I'll include them in the next issue.

All five books this issue concern China's journey from the Cultural Revolution to Reform and Opening.

The Great Transformation

Westad, Odd Arne, and Chen Jian. The Great Transformation: China's Road from Revolution to Reform. New Haven: Yale University Press, 2024.

When did "Reform and Opening" begin, and when did it end?

1978–2018?

In the decade from 2008 to 2018, Reform and Opening did not die suddenly; it gradually lost its vitality.

There is also good reason to think it is not yet fully dead, that it is still struggling on. China continues to benefit from the peace supplied by the Yalta system and the markets supplied by the WTO system.

The urge to crack down on private enterprise stopped short before the prospect of economic depression, and a regime that depends on exports for dollars cannot entirely shut the country's doors, so the fifth generation has not retreated to the 1950s. The faded sign on the banks of the Liangma River still trembles in the cold wind: "the door of opening will not close."

But to believe Reform and Opening is still underway, you must ignore some unsettling signals: the growing closure of Chinese politics, the steady withering of exchange between China and the West, and the swelling drumbeat of war.

Just as Reform and Opening wound down gradually in the New Era, its launch likewise stretched across the whole of the 1970s:

From the Sino-Soviet split to Sino-American rapprochement; from Lin Biao's plane crash to Huairentang; from Zhou Enlai's "Four Modernizations" to Deng Xiaoping's "Four Cardinal Principles": the decade was full of sudden turns, accidents, and struggle. No one was its chief architect, and it did not begin abruptly in 1978.

Beyond the top-down history of politics and diplomacy, the launch of Reform and Opening was also a bottom-up people's history:

A million citizens took to the streets, in the name of mourning Zhou Enlai, to oppose Mao's Cultural Revolution; sent-down youth across the country read hand-copied manuscripts, listened to Voice of America, and set out on the road to awakening; before Xiaogang village there were countless Xiaogang villages; along the southern border, crowds of ordinary people voted with their feet in the great exodus to Hong Kong…

This "long 1970s", China's long road from the Cultural Revolution to Reform and Opening, is the subject of The Great Transformation, written jointly by two distinguished Cold War historians.

The Norwegian Odd Arne Westad studied in China in the early years of Reform and Opening; Chen Jian belongs to the generation of Chinese sent-down youth whose fate it transformed. No two authors are better suited to the subject.

"White-haired sent-down youth remain,

sitting idly, talking of Reform and Opening."

《制度基因》

許成鋼,制度基因:中國制度與極權主義制度的起源,臺北: 國立臺灣大學出版中心, 2024。

Xu, Chenggang. Institutional Genes: Origins of China's Institutions and Totalitarianism. Cambridge: Cambridge University Press, 2025.

In 1968, an 18-year-old Xu Chenggang was a sent-down youth in the Great Northern Wilderness.

In 1986, a 36-year-old Xu Chenggang was listening to János Kornai lecture at Harvard.

The road from the Great Northern Wilderness to Harvard owed much to personal striving, but also to the historical process of Reform and Opening. Under Kornai, Xu studied the political economy of communist states: why does economic reform happen, why is it limited, and how do institutions change?

Now 74, Xu has lived through every step of Reform and Opening, yet in his later years he has concluded that the regime's essence never changed: it remains "totalitarian."

The book is therefore destined for controversy. Its insistence on China's "totalitarian" character departs sharply from the prevailing framework in political science, which treats China as "authoritarian", a personalized one-party system in Geddes's classification.

But Xu's research is rooted in the Soviet and Eastern European experience of political economy and draws on the paradigm of Comparative Communism. As Reform and Opening ossifies, China looks more and more like the late Soviet Union, which is why the ideas of the Kornai school can regain their youth today.

"Having lived through rise, fall, and cataclysm, an old man carries the records of two reigns.

The nation's misfortune is the poet's fortune: verses on upheaval come out finely wrought."

《彭真》

鍾延麟,彭真:毛澤東的「親密戰友」(1941-1966),臺北: 聯經出版公司,2024。

The most distinguished specialist in CCP history since Gao Hua's passing works in Taiwan.

In 2006, Chung Yen-lin of National Chengchi University began a doctoral dissertation under Chen Yung-fa on Deng Xiaoping from 1956 to 1966. The resulting monograph, 《文革前的鄧小平:毛澤東的「副帥」(1956-1966)》, was published in Hong Kong in 2013. It clearly established Deng's position after the start of the Great Leap Forward as Mao's "deputy marshal" and successor-in-reserve, helping us see more clearly how the succession struggle figured in the origins of the Cultural Revolution.

"Mao's successor was never (Liu Shaoqi) alone; he kept a succession ladder, with an alternate in reserve."

Gao Hua

Over the following decade, Chung devoted his efforts to another key figure in CCP history, Peng Zhen, publishing a series of interim papers in the journal 《中國大陸研究》. Readers who cannot wait for the book can start with the papers.

Peng Zhen's political career was very long: after the Cultural Revolution he oversaw the National People's Congress and the political-legal apparatus, and he did not retire until 1987 ("the three elders semi-retired, the four elders fully retired"); among the party elders he ranked just below Deng Xiaoping, Chen Yun, and Li Xiannian. Chung also has an excellent paper on Peng Zhen's role in the early years of Reform and Opening.

(Many Taiwanese journals can be read for free via the link: 人文及社會科學期刊開放取用平台.)

《中國大陸研究》 is an interesting journal. Its predecessor, 《匪情月报》, was founded in 1958; it took its current name in 1985, came under National Chengchi University in 1996, and has been a purely academic journal ever since.

The Institute of Modern History at Academia Sinica and National Chengchi University have a long tradition of research on CCP history, and Chung Yen-lin is that lineage's newest heir.

Zhou Enlai

Chen Jian. Zhou Enlai: A Life. Cambridge, MA: Harvard University Press, 2024.

Video: at the 46-minute mark, Chen Jian judges Zhou a "good man", in the context of contrasting Zhou with Mao.

The newest biography of Zhou Enlai since 《晚年周恩来》.

You need not agree with Chen Jian's judgment that Zhou was a "good man" to read this book.

A professional historian strives to establish the facts (from archives and interviews, from cross-checked evidence, painstaking work that consumes enormous time and energy) and to present them coherently. He may, given his own identity, values, and life experience, reach his own value judgments.

From the same sources, readers can, and have every right to, reach value judgments different from the author's. And differing value judgments are no obstacle to reading the synthesis of sources he provides.

On a first, light reading, I find this a "good book."

The Conscience of the Party

Suettinger, Robert L. The Conscience of the Party: Hu Yaobang, China's Communist Reformer. Cambridge, MA: Harvard University Press, 2024.

Robert Suettinger's new biography of Hu Yaobang: from the title, "the conscience of the party", you can already tell his assessment.

He is not only a scholar. As a former director for Asian affairs on the US National Security Council, Suettinger witnessed US-China diplomacy in the 1980s and '90s firsthand, and he writes with a political insider's perspective.

"As I delved more deeply into what (Hu Yaobang) did in office, I increasingly realized that Hu, even more than Deng Xiaoping, was the real reformer."

Thank you for reading.

On contemporary Chinese history: why should we read both Chinese books and foreign ones? See this earlier article:

If you enjoy this column, please recommend it to family and friends. Thank you for reading.

Happy new year to all: health to your families, freedom and happiness. See you next issue!

Will the New Year Be Better? The Answer Lies in This Podcast Episode and These Wishes

The world has arrived at January 1, 2025, and at every new year someone asks: will the new year be better?

I was asked this question a few months ago, and we made a four-and-a-half-hour podcast episode specifically to answer it. That episode, 《活在历史的垃圾时间,如何度过时代的乱纪元》, is the fourth installment of our run and rebel series, and may be the most reality-facing yet impassioned, climax-upon-climax treatment of the question any podcast has produced. Its answer applies not only to the 2025 that has just arrived but may hold for years, even decades, to come.

Because it confronts reality so directly, this episode is wilder and braver than any previous run and rebel installment, so we set a higher paywall than usual, to screen out as far as possible the people who might bring us trouble. While recording I thought it was fine, just the exhilaration of speaking freely; but when I heard the edited cut while cooking, my heart raced. I really do have a reckless streak, risking my neck to say something true. The friend who sent in the final contribution is wilder still: what I didn't dare say, she said more bluntly.

So here is a proposal (a plea, even): if, listening to the podcast or reading the articles, you have found yourself at odds with us on the level of liberal values, please do NOT pay to unlock or listen to this episode. It will do you no good; you will not only break down listening to it, your suffering may deepen for the coming year or even the rest of your life. Never pay money or time for content you disagree with; it is a disaster for everyone. And I earnestly ask every friend who does unlock it: do not mention, spread, or share anything related to this episode on the simplified-Chinese internet. Thank you.

You can unlock the single episode on Afdian (best for users in mainland China): https://afdian.com/item/90682ea4c68611ef8e645254001e7c00

Or subscribe to the 放学以后 monthly membership on Spotify to hear all past paid episodes: https://creators.spotify.com/pod/show/afterschool2021/episodes/ep-e2sscug

Or subscribe to the 放学以后 Newsletter monthly membership to unlock all past paid articles and podcasts: https://open.substack.com/pub/afterschool2021/p/ed2?r=pilpv&utm_campaign=post&utm_medium=web

PS: once you've finished listening and reading, you can cancel promptly if you don't wish to renew, to avoid unexpected charges.

With the answer covered, let me turn to the wishes. However doomed the era, an individual can still build a stable space of self within its cracks. Rereading articles I wrote two or three years ago, I found that every decision I made in the face of the era's collapse has become one of the best of my life, fermenting into the lushest, sturdiest pillars of my present existence. So whatever the moment, I insist on sending out the wishes below.

(An image that accompanied an article two years ago, posted to the Newsletter, 游荡者, and Afdian; the image comes from the internet. When I saved it, it matched exactly how I felt watching collective suffering: standing in the wilderness, heart ablaze like a prairie fire. Looking at it now, my state of mind has entirely changed: I own nothing, yet I am full of freedom.)

Originally I wanted this year's wishes to take a fresh, novel form.

In 《穷查理宝典》 (Poor Charlie's Almanack) I read that Charlie Munger, Buffett's partner, once gave a graduation speech at Harvard on "prescriptions for guaranteed misery", listing things which, if you keep doing them, will without fail sink your life in suffering, for example:

ingesting chemicals to alter your mood (drugs and alcohol included); envy; being unreliable, not doing faithfully whatever you are engaged in; not learning from other people's disasters; and so on.

After reading that speech, I meant to use the same inverted thinking in the new year and write a misery prescription for our generation of Chinese: the things that push a person into irreversible suffering. But turning it over while cooking, I realized every item was too brutal and too universal; many readers would feel pierced through, because it is all too easy to discover that what you do each day is a strict execution of the misery prescription.

Most people trapped in this era are already suffering beyond measure, and most of that suffering is not something an individual can change. So for everyone's new-year mood, and for my own safety (protecting myself remains the first priority), I decided to write in the positive direction after all: not a misery prescription but sincere wishes. These wishes are the misery prescription turned on its head; may every friend who receives them use them to reverse suffering and compound happiness.

First, of course, I wish myself well: wishes for myself are surely the most heartfelt. I offer those same most heartfelt words to every fellow traveler, to guarantee equal sincerity. Those on the other path I will leave alone; I no longer want to spend time and energy cursing them, though my curses have come true again and again. They come true because everyone's fate is a banquet of their own bitter fruit: stand with the perpetrators, or look away from pervasive harm, and you will in the end be harmed. In the past year or two the bitter fruit tasted by those on the other path has been frighteningly abundant, bitterer than I imagined, and at the current pace it may become unbearable. So I have decided to save my strength and pour all of it into wishing myself and my fellow travelers well.

This also marks a shift in where my life's energy will go: no longer spending great time and effort exposing and criticizing this or that wrong, but investing all my passion and vitality in the things that make one want to exclaim "Aha" now and then.

This shift is an action already underway, and it is my most lasting wish for myself: that the action succeed. I send the same wish to every fellow traveler, because at bottom it is a wish that we each have the power to manage our own attention, and can truly direct it toward what empowers us.

I have noticed that many people, myself included, project part of their attention onto what they hate. Over the past three years I have wasted less and less attention this way, because I escaped the things I hated. Escape is the most effective way to keep your attention from being squandered and your self from being ground down by what you hate. Resistance is another method, but as the adversary's power grows exponentially and life under its rule endlessly drains one's strength, the odds of winning that fight grow ever dimmer, and at the individual level the method's effectiveness is collapsing.

So the first wish: may we all exercise and manage our own attention, and refuse to let ourselves be worn down by what we hate. Attention is love. Give it to the good things we like; do not parcel it out to the garbage, great and small, that we despise.

As for the system we live inside, until we effectively escape it no one can avoid attending to it. That projection of attention is unavoidable whether you love the system or hate it, because it rules you: it determines the fruits of all your effort and every right you obtain. No one can help breathing the air of the place they live, so minding its quality is only natural.

But countless behaviors baffle me. Toward some things and people, many clearly have a choice: the object has no real bearing on their personal rights and never even appears in their daily life, yet they pour a frightening share of attention into it all the same. A typical case is the "anti-fan", someone devoted full-time to hating a person or a work, spending time and energy on it daily, even quarreling and cursing over it. What I cannot fathom is how anyone could expend themselves so willingly; it is among the cruelest tortures a person can inflict on themselves. Even on the living person I hate most, I would find the punishment a bit too terrible. Yet many people impose it on themselves, voluntarily. I wish for myself, and for every friend who reads this, vigilance against this trap and distance from it.

Next, I wish that we can all distinguish and block out noise, and hold our 道心 steady. I recently learned this word: "Dao-heart", which turns out to be passion and mission under their alias in the cultivation-fantasy world. The world's noise is endlessly tangled. Finding your Dao-heart at all is hard as climbing to heaven; holding it steady in a world where the noise drills into your skull like spring jackhammers in Beijing is harder still. Even telling what counts as noise is difficult: live long enough in a garbage world and you may come to feel the noise is what is correct and normal, even internalize it to judge and attack yourself. Women already stand in a disadvantaged position in this world's structures; if we use this noise to weaken ourselves as well, we fall into the truly bottomless trap.

So a core step of anti-patriarchy is simply this: do not listen to the noise, do not spread it, and above all do not produce it. Not at others, and still less at yourself.

And how do you recognize the noise? Whenever an environment you reject, or a group that causes you pain, commands you to do something or presses you to choose something, doing the opposite is right.

If you are fiercely criticized by environments and groups you reject, congratulations: you must be doing a great many things right!

But if, in an environment you reject and that pains you, you are widely praised as gentle and even-tempered, hardworking, kind, useful, responsible (yes, the especially responsible are certain to burn out; may we all learn the 2025 zeitgeist of "let it burn"), then the bell tolls for you today: your life so far has been lived according to the noise. Continue, and you will become depressed, powerless, deprived, and damaged, and will give yourself up without defenses. You may go a whole lifetime without finding what you truly want to do or hearing your own true voice, because the noise will swallow you.

The women fiercely criticized by this noise, by contrast, will liberate themselves, will be free, will slip out of the all-enveloping trap, and will find and raise their own voices.

Recognizing that the noise exists is the most crucial step in blocking it. What gives hope is that countless women are waking up to it, one after another. For 2025, I wish us both the ability to identify more of the noise and to shut it out. The key link in shutting it out is physical distance: distance from the family that aims the noise at you, from that social media feed, from those garbage books and shows, from those people (even close friends or partners), and, most powerfully, from the entire system.

Benjamin Graham, the founder of value investing and Buffett's teacher, states a clear business principle in 《聪明的投资者》 (The Intelligent Investor): "Have the courage of your knowledge and experience. If you have formed a conclusion from the facts and if you know your judgment is sound, act on it, even though others may hesitate or differ. (You are neither right nor wrong because the crowd disagrees with you. You are right because your data and reasoning are right.) Similarly, in the world of securities, courage becomes the supreme virtue after adequate knowledge and a tested judgment are at hand." In blocking out noise and holding the Dao-heart steady, courage is likewise the supreme virtue. Many who have already awakened still suffer greatly; what they lack is not the ability to identify the noise but precisely the courage to shut it out and hold to themselves. Sometimes the noise comes from those closest to you, from those you thought shared your path. Courage is your last fortress: guard it for yourself, and guard it for the vindication to come.

The last wish comes from my studies and my daily practice, which bring me lasting calm and joy. A couple of days ago I saw Buffett say that his teacher Graham possessed exactly these three traits: an astonishing memory, an undiminished fascination with new knowledge, and the ability to reapply that knowledge to apparently unrelated problems.

My memory is quite good. I am fascinated, to varying degrees, by mathematics, physics, economics (including the investing that grows out of it), psychology, sociology, philosophy, languages, and every field that uncovers patterns and answers; and I can connect these bodies of knowledge to re-understand and restate a problem. Together they let me form my own judgments, make rational choices, and use those judgments and choices to shape myself and create my work. This, I think, is the most fundamental source of my self-regard. My savings are modest, I am not employed at the moment, I hold none of secular society's privileges, and no one else can catch me if I fall, yet I am full of confidence about my future and believe my life need not be troubled by money. That confidence, too, rests on these three traits.

A while ago a friend with far more money than I looked over my daily reading list and said: I think you are going to be very rich.

I said: so do I!

It was precisely these three traits that let Buffett and Graham make accurate judgments and rational choices, and wealth was their reward. So I believe in myself as well: though the era has entered garbage time, I will use that time to feed my fascination and curiosity for new knowledge, and use my memory and my knack for linking unrelated things to expand, recharge, and store value in my own life, so that when the cycle turns and garbage time ends, the branches will hang heavy with fruit.

For 2025, then, I will not wish you sudden riches; returns against the era's current are hard to come by, and such a wish could only be a lie to others and to oneself. Instead I wish you a good memory, a fascination with new knowledge, and the ability to connect things. These are the keys to self-regard, to getting through garbage time, and to future wealth.

Summed up, the three wishes for 2025 are: may we be focused and steadfast, with vigorous minds, lives full of longing, and thinking full of logic and insight.

And a word to the very young friends still in school or just graduated: this era is far too unjust and cruel to you. The bone-deep cold swept in before you had entered the market or stored up any provisions. I have thought a long time and cannot come up with a solution that would help you, or even comfort you.

But writing this piece today, I thought of one. If your memory is good, you hunger for new knowledge, and you can build connections between things, then however empty-handed you are now, even if you feel you have no hard skills at all, you need not be afraid. From my own experience and countless examples I have seen, I promise you: barring catastrophe or accident, your life will not turn out badly. And if you have the chance to reach a relatively fair and just environment, your life can open out without limit. The era prepared no gifts for you and laid everything waste, but with these traits you will surely pass through the fog. You will not be stuck in place, will not tread water, and will not be tamed.

Finally, here are the links to the answer once more. Happy listening, and may you carry these wishes into the new year!

You can unlock the single episode on Afdian (best for users in mainland China): https://afdian.com/item/90682ea4c68611ef8e645254001e7c00

Or subscribe to the 放学以后 monthly membership on Spotify to hear all past paid episodes: https://creators.spotify.com/pod/show/afterschool2021/episodes/ep-e2sscug

Or subscribe to the 放学以后 Newsletter monthly membership to unlock all past paid articles and podcasts: https://open.substack.com/pub/afterschool2021/p/ed2?r=pilpv&utm_campaign=post&utm_medium=web

PS: once you've finished listening and reading, you can cancel promptly if you don't wish to renew, to avoid unexpected charges.

Living in History's Garbage Time: How Do We Get Through the Era of Chaos?

This is the fourth installment of the 放学以后 run and rebel series. This time we are no longer fleeing a territory, patriarchy, or totalitarian rule; this time, we plan together an escape from this era's low point, from the chaotic era, from history's garbage time.

"History's garbage time", a term officialdom has fiercely criticized and denied even exists, has met with boundless resonance and recognition among ordinary people. The downward trend is now palpable even to the eye-blind, heart-blind, and willfully blind. For the past several years winter has never been merely on its way; it has been brazenly present, and over the coming years or decades it may go on seizing ground without limit.

The globe warms ever faster; daily life turns ever colder. How do we, the era's ordinary people, weather its storms at a historical low whose turning point no one can date, and where do we put our fear, our despair, and the blankness of not knowing how to get through each day? Our feelings, thoughts, strategies, and plans for history's garbage time are the heart of this episode.

Do not go gentle into that good nig…


《好东西》: Let's Walk to the End of the Ice Field | Dear 2024

I will live well, wait for you to grow up, and build a new game.

《好东西》

Dear sisters,

May this letter find you well!

2024 was a year beset with crises, like a long winter bent on killing feminism.

In an era of economic depression, women's room to live narrows by the day. Afghanistan, under male-supremacist religious totalitarian rule, issued decrees that obliterate human rights, stripping millions of women of their rights and freedom entirely;

the defeat of Kamala Harris, a Black and Asian American woman candidate for the US presidency, once again punctured the neoliberal dream, while the conservatives' victory has handed the already restless anti-feminist camp the momentum for a comeback;

a social system tilted toward men keeps ignoring women's physical safety: in recent months, several shocking criminal cases targeting women have drawn only indifference and whitewashing from the judicial organs. In the economic gloom of this "season of troubles", defenseless women have become the outlet for social rage, the "fuse" blamed for every act of violence. And all the while, in corners we cannot reach, countless "adopted" women remain caged, struggling and crying for rescue…

2023's keyword was "rebirth", a sign that we had shed our old skins and stood ready to set out. This year our keyword is "walking out of the ice field": however imperfect and unfair reality may be, we who have awakened will press on through wind and snow with the same ardor as at the start.

The reality of 2024 was an ice field stretching beyond sight. Here we witnessed countless hardships and entrapments in the blizzard, heard the grieving, furious cries of women at the storm's center, and saw the vast chasm lying between ideal and reality. We set out full of hope, yet amid the driving snow we could not help worrying for feminism's future. And yet: life is found on the ground of death. Even in deepest winter, more and more sisters are kindling the sparks of awakening. Their flickering light traces the steadfast figures of the feminists who went before, guiding us toward a way out of our own.

In this final letter of the year, we want to talk with you about the film 《好东西》, director Shao Yihui's special gift to feminists. Though the film has been mired in controversy since its release, and Shao herself caught in a storm of judgment, we still hold 《好东西》 to be a milestone of Chinese feminist cinema. Full of care yet with needles sewn into the cotton, it is a blazing flame whose warmth consoles us, restoring the courage and hope of long-distance trekkers on the ice.

Poster for the film 《好东西》, 2024

1. Step One Out of the Ice Field: Lay Down the Burden and Travel Light

The film's pointed dialogue lays bare how patriarchal discipline becomes a heavy yoke that blocks our way forward. As we wrote in our twenty-second letter, setting out light and at ease is the first step of resistance, and this happens to coincide with the message Shao conveys through the film: before departing, feminists must keenly identify the "bad things" that make us uncomfortable: traditional gender norms; the wounds of a misogynist family of origin; the motherhood penalty and slut-shaming; fertility anxiety and period shame; the blurred boundaries of sexual consent; the invisible burdens of emotional labor and beauty duty; society's suppression of women's speech and its double standard for women's and men's morals; the collusion of patriarchy and capitalism; the "weakness-phobic" neoliberal celebration of the so-called "independent woman"…

And so, to walk out of this treacherous ice field, we must believe in our own potential and limitless possibility. With the line "However you drum is how girls drum," 《好东西》 tells us: you are the rule of this world. This is at once our way of encouraging the younger generation of women to practice feminism and the best way to let patriarchal rules collapse of themselves, conquering the hard with the soft.

"However you drum is how girls drum"

So when Wang Moli bravely says "I don't want to be the audience," refusing the "supporting role" that serves male power; when Xiao Ye sees that her romantic obsession traces back to a misogynist family of origin and resolves to stop drifting through love affairs and focus on herself; when Wang Tiemei walks away from the industry she gave a dozen years to, stops playing the "feminist warrior", and writes the story of her own life as an ordinary woman; when thousands upon thousands of women lay down the burdens a male-dominated society has heaped on them, tear up the labels that slander women, stop seeking "patriarchy's" approval, and firmly pursue their own self-realization… only then can we shed our packs, travel light, and, in the spirit of "I am the subject", recover our fighting, ambitious selves. Then the road is under our feet, and the exit lies ahead.

2. Step Two Out of the Ice Field: Huddle for Warmth, Stop the Judgments

《好东西》 also tells us to unite every force that can be united against the cold, and to stop splitting off and sitting in judgment. The film's "reporting" subplot is full of the director's wit: the schoolboy who reports Wang Moli looks like a capering clown, and the psychology behind the act is more laughable still. Given how mutual reporting and factional purges among feminists have worsened over the past two years, we have to ask: does "reporting" really advance feminism?

Against voices we disagree with (feminist speech, that is), "reporting" looks like the fastest, most satisfying blow: it shuts the other side up instantly and leaves them no reply. In reality, feminists' reflexive reporting shatters the tightly bound community of women's shared fate, letting rancor split and scatter allies who should walk shoulder to shoulder. "Reporting" is, moreover, a product of patriarchal hierarchy itself: those in high positions police speech through the punishment mechanism of "muzzling", which can only entrench a public power that favors men. With every displeasing article we report and every "flawed" speaker we complain against, the capital-ruled platforms gain another measure of decision power, and the supreme authority of a male-dominated society gains another measure of control over public speech. This is handing the enemy a knife, and the knife will in the end be turned on every innocent woman.

Through 《好东西》, Shao Yihui admonishes us: do not carry the male-dominated society's methods of "suppressing, judging, and silencing" into women's own spaces of speech. Anger should not be an excuse to wound companions; it should be the source of the strength with which we huddle together against the cold.

Feminist comic 《命运共同体》

3. Step Three Out of the Ice Field: Endure the Cold, Stay Optimistic

After laying down our burdens and uniting our allies, we come inevitably to the hardest and most important phase of resistance: knowing full well how cruel reality is, yet having no choice but to keep struggling for survival and pressing forward inside the structure. Awakened, we see social injustice all the more clearly, sighing over victims who "cannot be woken" while despairing at a reality so hard to budge. At such times we grow weary and dejected, wishing only for a miracle cure, a "speed-up card" to carry us over the long road of winning rights straight to the shore of victory. In that mood, we can hardly keep from interrogating feminist works: what is the point of raising problems without proposing solutions? Isn't criticizing structural oppression without telling us how to overthrow the structure just scratching an itch through the boot? This is precisely the main reason 《好东西》 has drawn criticism.

But shouldn't reality be changed by our common effort? And isn't it asking too much that a single film write the prescription for a millennia-old disease? The vision of gender equality was never going to be realized at a stroke: French women's fight for the vote took roughly 96 years; China abolished polygamy only 74 years ago; the American women's movement took decades to win abortion rights (and Roe v. Wade, which established abortion as a constitutional right, was still overturned in 2022)… That winning rights has been so arduous proves how merciless male-dominated society is and how insatiable its beneficiaries are. It should not be blamed on feminists for resisting too slowly, speaking out the wrong way, or offering too few solutions.

Intersectionality theory tells us, moreover, that women's rights are a multidimensional social problem, implicating not only gender but race, class, sexual orientation, and more; conjuring in short order a plan that serves all women is sheer fantasy. Rather than writing prescriptions, the more urgent task now is to make more people see that this society is already sick to the bone, and to call them into the alliance of resistance.

The fact is, we are enduring a structural oppression that seeps in everywhere: from legislation to adjudication, from economy to culture, from society to family. 《好东西》's critique of reality is a blade aimed straight at patriarchy's vitals, slashing through the web male power has woven with truth, so that more women wake from the dream and join us on the journey out of the ice field.

For all its truth-telling, though, the film's underlying tone is gentle. Shao Yihui does not shrink from recounting suffering, but she would rather hand the audience hope in full measure and build a women's utopia. "It is because we are optimistic and confident enough that we can face tragedy head-on." She turns the motherhood penalty Tiemei bears into the rhythm of the universe and the sounds of nature, lets Moli grow up in a near-matrilineal environment, and renders women's tragedies as laughter-filled comedy on screen, playing us a soul-stirring "feminist Ode to Joy" in the depths of winter.

A signature line from 《好东西》: "It is because we are optimistic and confident enough that we can face tragedy head-on."

None of this makes 《好东西》 perfect. Its gentleness and optimism also mean it floats somewhat above the ground: the life of Shanghai's educated middle-class women in the story is beyond most women's reach. The world holds not only elite women like Wang Tiemei who can "rely entirely on themselves", but far more ordinary working women on meager pay, grinding day and night yet never escaping the margins; not only women like Xiao Ye, talented and beautiful, troubled solely by "true feeling", but women harassed and exploited in their relationships; not only gifted girls like Moli enjoying a fine education, but girls at the bottom whose families squeeze away their room to live and their chance to learn…

Staying optimistic can help us withstand the cold. But when the laughter fades, we must still face reality squarely if we are to go the distance; otherwise Tiemei's cozy little home is in the end just a sugar-coated shell, a Barbieland. 《好东西》 is indeed a good thing: sharp in its critique, gentle in its healing. Yet like a torch on a winter night, it can offer only brief warmth and light, while the true road of resistance still runs beneath our feet. So let us take 《好东西》 as a fireside night talk among idealists, then go forth with confidence and strength to create "entirely new rules of the game", trekking toward the end of the ice field before the sun rises: toward dawn, toward spring!

Laying down the pen here; until we meet again!

陌生女人2号*

December 31, 2024

*Written by 陌生女人2号, edited by 陌生女人1号
