Tarun Chhabra on the Stakes of AI Competition
Tarun Chhabra is the head of national security policy at Anthropic and previously served as the Deputy Assistant to the President and Coordinator for Technology and National Security on Biden’s NSC.
Today, our conversation covers…
Why the US needs to maintain an advantage in the race for AI development against China,
Whether the US’s AI industry is prepared for future competition from China,
The lawyers vs. engineers debate, and what the US needs to build AI supply chains,
How government and industry can work together across the AI development process.
Listen now on your favorite podcast app.
Race for the AI Frontier
Jordan Schneider: Part of the original justification for banning exports of American chips and semiconductor manufacturing equipment was the idea that these pieces of technology could directly contribute to PLA capabilities. We’re recording this during the week of a military parade, and I’d love to hear you give your most convincing unclassified case for why these technologies directly contribute to arming a strategic adversary.
Tarun Chhabra: Thanks again for having me, Jordan. The Wall Street Journal just published a good piece this week about how the PLA is using commercial AI technology. But this really goes back to when Jack Ma 马云 disappeared — that was essentially the end of any dissent or pushback from China’s tech companies regarding support for the national security apparatus.
It’s been a safe assumption since then that dual-use technologies enabling capabilities for the PLA or China’s intelligence services will be used that way. You can ask your favorite LLM for examples, and you’ll find plenty. One that obviously comes to mind — particularly because it was a focal point of the outbound investment restrictions — is cyber capabilities.
A straightforward example is Chinese hyperscalers providing the equivalent of cloud services and cybersecurity services to national security actors in China. Obviously, they’re going to offer AI services as well. Unless you think the AI somehow won’t be part of that package — which I don’t know how you’d conclude — it’s a pretty straight line from cloud and cyber services to frontier AI models.
Jordan Schneider: We have different layers of an AI race. I’m curious about your taxonomy and how you’d rank them on their ability to build defensible advantages.
Tarun Chhabra: I think about it as the race to the frontier, the race for diffusion globally, and then the race for adoption — which encompasses both national security and economic applications.
On the race to the frontier, it’s about power, talent, and chips.
This is partially why Anthropic focuses so much on addressing power and permitting barriers to build more AI infrastructure in America, and why we emphasize export controls — because we think hardware is our advantage for the next several years.
On the diffusion side, it’s also why we focused on export controls. We don’t think our hardware should be powering Chinese data centers, either to help them reach the frontier or to compete with U.S. companies or other trusted companies globally. The same principle applies to adoption. The more we succeed in adoption, the more compute you’ll need at the enterprise level for national security actors as well.
Jordan Schneider: When debating how to structure export controls, let’s start on the chip-making side. What are the variables you tried to optimize for, and what should current policymakers focus on moving into 2025?
Tarun Chhabra: We have a significant advantage when it comes to chipmaking. The U.S. holds this position together with allies — obviously Taiwan, and stemming from our dominance in the supply chain alongside close allies like the Netherlands and Japan.
The question is, how long can we ensure that we maintain that advantage? This comes down to our ability to control that technology during the period in which China has yet to indigenize it, certainly at the level that enables scale in production or very advanced production. It also entails working on components and servicing as well.
Irrespective of who’s in office, here are the next things we ought to be doing: We should do more on the component side — this is also in the interest of the tool-making companies defending their advantages. The servicing piece is really important in the industry as well. These are the next steps we should take to defend our advantage in chipmaking.
Jordan Schneider: How much do you buy the argument about China’s will to indigenize? The will and capability to indigenize remains a live subject of debate across all the different layers of our AI future.
Tarun Chhabra: The shot was fired during the first Trump administration with actions against ZTE and Huawei. From that point forward, maintaining dependence on the United States for advanced dual-use technologies was a bad bet from their perspective.
But there’s also a broader historical story here, which you’ve probably discussed with Dan Rosen and others — the history of China’s industrial strategy over the last several decades. Name any sector where it has worked for us to say we’ll just keep them addicted to the technology.

The pattern in China’s playbook is pretty clear — buy it until you can make it. Once you can make it, kick out the U.S. competitor. Eventually, once you can make it at scale and subsidize it, try to eat up their global market share. It’s happened over and over again. You name the sector, and given the leadership’s focus on AI, there’s next to no likelihood that we can stop the indigenization train.
The real question — which I know you’ve talked to Lennart Heim and Chris Miller about — is what do we do in the interim? The other question I would ask is: where would China be today if we didn’t have the controls? We can cite various developments, but where would they be if they had the talent, the energy, and also had the chips? We’d be in a much tighter race today, and we don’t want to be there from a national security perspective.
Jordan Schneider: Why is that hypothetical such a confusing thing? The SMIC base case is that they would be further along than even Intel is today if not for controls. Is this a particularly hard hypothetical?
Tarun Chhabra: You and I tend to be in violent agreement on this point, which makes it hard for me to understand why that isn’t the question we should ask. That’s why someone like Dario Amodei says DeepSeek shows the importance of maintaining, if not strengthening, controls. They have incredibly talented AI engineers in China, power, and capital — it’s just about the hardware, and it’s for this window.
The other layer of controversy is, how much does this window matter? Our perspective is, we’re not seeing a significant slowdown in the saturation of benchmarks. We still think you’ll see transformative capabilities over the next three years.
We should all discount this in a healthy way, but if we believe it’s more likely than not — 60% or 70% — and you talk to folks about how they assess that, the implications are significant from a grand strategy perspective. We ought to be preparing for that, and from my perspective, we shouldn’t try to make this a close race.
Jordan Schneider: The other wrinkle that folks haven’t necessarily priced in is that you guys are training on Trainium, and Google’s figured out how to train on TPUs. The idea that CUDA is this “one layer to rule them all” — that if you can get them stuck on it, you’ll have control forever — we’ve seen multiple companies already figure it out, and they only had commercial motivations, not their head of state telling them they had to do it.
Tarun Chhabra: I was just having this conversation with some of our colleagues internally this week, and they’re making the same point. Yes, Anthropic is a well-funded company with really talented engineers working on the hardware problem, but if we’re able to do exactly what you just said, and a nation-state is committed to doing it, then they can probably get there.
Jordan Schneider: You mentioned increased capabilities — this is weird sci-fi stuff. I’m curious, looking backward, how much that ethos or that single-digit, low double-digit probability played into policymaking. What’s your perspective going forward on how folks should adapt for the possibility of those types of futures?
Tarun Chhabra: What we’re seeing right now with our coding model — our engineers are using it for about 90% of tasks. That’s going to take longer to diffuse across the economy or even the broader tech sector, but people doing national security work often need to do a lot of coding, especially for cyber operations. The applications in cyberspace are pretty significant.
We have a demo showing what it would cost to replicate the Equifax breach — you could probably do it for well under fifty cents of tokens. If you tried to replicate that globally, you could probably do it for under $10,000. That alone should ring the alarm bell, and that’s with current capabilities.
If we think about nation-states trying to make cyber operations more autonomous in their attacks against us, and the need to defend against them and have a viable policy in cyber defense alone, that’s a clear and present problem today.
Jordan Schneider: In trying to talk people into this worldview, what typology of skepticism do you run into nowadays in Washington?
Tarun Chhabra: That’s a good question. Some skepticism is pegged to “this model came out and it wasn’t everything I expected it to be” — whatever model that might be. Using a data point of one isn’t a great way to assess this necessarily.
Another skepticism is that adoption is slower than the most optimistic projections suggested for certain uses, like coding. Then there’s the view that it’s taking longer to penetrate the physical world in manufacturing than very optimistic projections thought it might.
But going back to the counterfactual, we ought to re-baseline the questions. If I had told you three years ago that we would have coding models that could do 90% of our software engineers’ development work today, or that we could have a significant impact on cybersecurity, you might have believed me, but many people would have been understandably quite skeptical.
When the chip controls went into place in 2022, although a big focus was LLM development and views on where that was going, the easiest thing for people to understand at the time was that the chips themselves are used in computers that do nuclear modeling or design weapons systems. That’s true, but not at the scale that would really be impacted by export controls. This has been a perpetual issue — how do you get people to think a year or two ahead when you’re on this exponential curve?
Jordan Schneider: You used to be a speechwriter. The Jake Sullivan line that will ring out in national security textbooks for years to come: “Given the foundational nature of certain technologies such as advanced logic and memory chips, we must maintain as large of a lead as possible.”
As you said, 1%, half a percent, maybe 2% of these chips are probably going directly toward nuclear modeling or similar uses. The ongoing tension is that the vast majority will be for commercial use. You can have different opinions about whether the U.S. should be supporting broad-based growth in China, but this tension is built in anytime the government gets involved with technology — these aren’t night vision goggles.
Tarun Chhabra: The key issue here goes back to the question you asked at the beginning: we know this dual-use technology will end up supporting national security capabilities for a country that is actively planning military operations against the United States.
You have to accept that there’ll be some collateral impact in some cases to address that problem. Then you adjust based on what kind of advantage you think these capabilities are going to provide. If you think they could be transformative, then you take more risk on that front.
Jordan Schneider: There’s that level, and also the strategic level of what you’re doing to the relationship with your enemies and with your allies by controlling this technology. What’s the right way to conceptualize how the U.S. should be relating with the world — excluding China, Russia, North Korea — when it comes to AI?
Tarun Chhabra: We want to build as large of an ecosystem as possible that’s trusted and where U.S. AI and U.S. technologies are prevalent and even dominant. That’s the world we’re trying to build.
The question is: how do you do that? This is a decision not just between enterprises, but also one that governments will take. We often talk about the United States and fellow democracies working together on these issues. This is important not just from the standpoint of opening up markets — it’s also really important for our intelligence relationships with key allies. We want to make sure we continue to be interoperable across many layers of our relationships, both national security and economic.
But I want to come back to the point about “as large of a lead as possible.” Understanding the historical context is important because if you go back to the days of CoCom, the idea was not that we would give the Soviets an “n minus 2” advantage. That concept basically came after the collapse of the Soviet Union, when we did not have an arch geopolitical foe plotting to fight a war against us.
Jordan Schneider: We were selling arms to China as of 1995.
Tarun Chhabra: Exactly. That was the context in which the “n minus 2” concept became more popular. If you believe we are in a strategic competition with China, if you see that they are planning to fight us and target our troops and critical infrastructure, then you have to revisit that concept.
Software, Hardware, or Both?
Jordan Schneider: The idea of AI contributing to a new revolution in military affairs — we have Andrew Marshall saying: “The most important goal is to be the best in the intellectual task of finding the most appropriate innovations in the concepts of operation, making organizational changes to fully exploit the technologies already available and those that will be available in the next decade or so.” It’s not just having the models, it’s figuring out how to use them. This is a thing you guys are doing now. Where are we on this? Why don’t you respond to that quote?
Tarun Chhabra: This is where I’d actually give real credit to the current administration because they’re really focused on AI adoption, certainly across government, but also in the national security space. You see that with contracts from the Defense Department. Anthropic has one, OpenAI has one. Google and others have these as well. They’re laser-focused on accelerating adoption.
It can’t just be a question of “let’s use the chatbot” or “let’s bring the model in.” It needs to be, how can we use the models to re-engineer some of our mission space? That’s what Marshall was talking about, and that’s the much harder task that people in the administration are rightly focused on right now. That’s what we want to do together as well — we have people on our team focused on partnering with the national security agencies to do exactly that. I see that as core to our mission. If we say we’re focused on helping support democracies, protecting the national security of the United States and its allies is foremost in that.
Jordan Schneider: Historically, the kind of doctrinal innovation Andrew Marshall was involved in happened in-house, basically. The military had toys, and it figured out how to play with them in different ways. Over the past 10 years or so, as you’ve seen on many different dimensions, commercial technology has leaped past what the department and the services are comfortable with or have an understanding of.
We have more of defense as a service, and you see more players trying to sell into the government, not only with products that are required, but with products that fit into their vision of a doctrinal future that they try to sell into Congress and the Pentagon. This makes sense at one level, maybe a little scary at another. I’m curious — how forward-leaning should you be? What is the right posture for someone coming in with new capabilities to bring into these organizations?
Tarun Chhabra: The responsibility of companies that are developing new capabilities is to ensure that policymakers and the military and the intelligence community have insight into where we think the technology is going, and certainly insight into where we see early adopters in the private sector taking the technology so that they can try to get on top of it as soon as possible and figure out how to employ it in doctrine.
That’s something we definitely can do. But obviously, the doctrine needs to come from the government. The planning needs to come from the government. There are lots of ways where we can have a really productive exchange, pressure-test some ways of doing things at the invitation of the government, of course, and say, “Hey, you could do it this way.” But look, that’s not a foreign concept. We’ve done that with a lot of other technologies, too.
Sometimes we take too far the idea that AI came entirely out of the private sector, with no government involvement. A lot of technologies that have been really important for national security may have had research funding that helped get them going, but there has also been a lot of adoption of civilian technologies, as you know — they’re brought in, and there’s a give and take between the private sector and the military about how to adopt them.
Jordan Schneider: We see a bit of the future of warfare in what Israel is able to do in Iran and what is happening in Ukraine. But these are trend lines that you can trace and track over time, and you see things changing. When folks imagine the world of fast takeoff, you can paint futures in which there are very radical discontinuities in what militaries can do. I’m curious about your perspective on this.
Tarun Chhabra: You will see some discoveries that come out of leading corporations or the research community that are using the compute and using the models, particularly when they have access to certain kinds of data, specialized data. But where we want to go for the government — I hope we can empower them — is to be a force in their own right in some of these discoveries.
When we get to capabilities like that, an example that’s probably fairly straightforward today is the Department of Energy national labs. The labs have developed over decades a corpus of really incredible scientific data, in some cases experimental data. The question is, is there a way for them to use that data to potentially build a new platform for scientific discovery? As you know, the labs often are important partners for the national security community as well. That’s one example where today you could already see if we put the pieces together, there might be real capability to take advantage of transformative capabilities in the near future.
Jordan Schneider: I feel like it would be hard to imagine today someone really pulling a rabbit out of a hat — pulling the equivalent of a nuclear weapon out of a hat — given the current technological paradigm we have on September 4, 2025. If things get faster, that may change, right? Or could it not change because everyone’s going to be feeling this at the same time?
Tarun Chhabra: Well, that particular example — we actually just did some work that we announced a couple of weeks ago, where we worked with the NNSA on classifiers because we actually want to make sure that people can’t do what you just said out in the wild. That’s something we’re working through with the Frontier Model Forum. We hope that other frontier labs are going to adopt similar safeguards as well.
But there’s one way to answer your question: are we prepared in the physical world for what capabilities may be coming online? That is honestly one of the things that worries me most, which is a topic you’ve talked to many of your other guests about. If you do the net assessment of where we are in our defense industrial base — U.S. versus China today — in our broader manufacturing base, are we doing enough to be poised to take advantage of some of these capabilities down the line?
That’s another responsibility that we have as frontier labs to help ensure that we will be poised to do that. Some of our best partners in that space are going to be some of your recent guests — people who run defense firms that are AI-centric and are already using frontier models and thinking about how they’re going to be able to use that to scale production. But that is a space where we actually need more people thinking two years ahead about what happens if we reach this capability but we have the status quo in our defense industrial base.
Jordan Schneider: It’s really fun doing two shows in the same day that echo each other, because I get to ask you the exact same questions and see how your answers differ. So here’s that provocation from Dan Wang: “AI can’t distract us from broader American deficiencies. If the U.S. and China were ever to come to blows, they would be entering a conflagration with different strengths. Would you rather have software or hardware?”
Tarun Chhabra: We need both. The answer is not to accept the status quo. From my perspective, the answer is to prepare to take advantage of the capabilities that are going to come online. That means much more work with the physical world.
Jordan Schneider: AGI and nihilism can run in a lot of different directions. One thing I just alluded to is the idea that you can spawn Dr. Manhattan from your data centers and then stride the globe doing whatever you want. Something that also comes out of that is the notion that you don’t have to do the hard work of building munitions capacity, and you don’t have to do the hard work of dealing with annoying allies, because you’ll have God on your side — in maybe not two to three years, but extend the timeline a little. I’m not saying Dario buys into all of that, but you can squint at some of his writing and see some echoes there. What are the futures that people should really consider, and what are the fallacies of AI solving all your problems that folks shouldn’t fall into?
Tarun Chhabra: Look, I don’t think nanobots are going to save the world next year, but some of this comes from a view that it’s hard to know what these capabilities are going to yield in a relatively short period of time — by which we may mean a couple of years. It’s hard to know what advances they may give us in advanced material development or in manufacturing processes.
There’s actually, maybe counterintuitively, a dose of humility in saying it’s pretty hard to say that we ought to just build more of the status quo infrastructure when we may be on the cusp of some of those capabilities. It’s actually going to be a hard thing to manage — how do we build an infrastructure that might look really different, or could look really different, with the capabilities that are coming online in a couple of years?
Jordan Schneider: Let’s talk about tech as a component of national power — how it ranks and fits with the other components if you’re looking at great power competition.
Tarun Chhabra: Well, I’m a little biased probably right now, but I’ve been of the view that frontier AI and biotech, and particularly the convergence of the two, are going to be very powerful tools, and they’re going to be potential vulnerabilities if we fall behind or don’t make certain investments.
But look, it is a hard reality today that if your adversary thinks you’re going to run out of munitions a couple of weeks into a conflict, you’re going to have a hard time doing serious deterrence. We have to live in the current paradigm and ensure we’re strengthening and bolstering deterrence while we prepare for a totally new paradigm that’s still hard to piece together in the mind’s eye.
Jordan Schneider: There’s a Williamson Murray quote that I won’t read in full because people will hear it in the prior interview. But basically the idea that revolutions in military affairs happen at the tactical and operational level and oftentimes strategic decisions — smart or poor — wash out whatever cool stuff you come up with with your blitzkrieg or deep battle or what have you. As we talk about all these things, does it even matter if our treaty allies go a different direction or India decides that they actually really do want to be super friends with China?
Tarun Chhabra: It matters a lot from so many vantages. But to your question about the tactical level, this is why it’s a really good idea that the army has Detachment 201, because that will help people start to use the technology and think about the technology and how to operate with the technology at the tactical level and not just how we drop it into ConOps at a super high level. We have to do both at the same time. That’s really hard to do.
But frankly, it’s a reality that most people in the business world are dealing with right now. Every day, you have CTOs who are saying you will adopt this technology. You’ll tell me how you’re re-engineering your business processes. At the same time you’ve got to use the stuff today with your current process. That’s just how everyone is doing it right now.
The national security community, in that sense, is not distinct. We often bring together senior national security policymakers with the frontier labs, which is good to do, but in some ways their real peers are the C-suites of major companies that are trying to adopt really quickly in a competitive atmosphere. The difference is that for a lot of companies, that competition is very real day to day in the market. When it comes to militaries and intelligence, it may sometimes be harder to see until you have some sort of strategic surprise — which we want to avoid.
Lawyers vs. Engineers
Jordan Schneider: Okay, what we do here is think about the next 20 years of U.S.-China relations and try to net it all out. Yeah, it’d be nice to have better frontier AI. It’d be nice to have AI adoption. But would I trade that for Japan, maybe? Probably not. On the hierarchy of things you would want America to get right over the next 20 years, my sense is that screwing up relations with the other most important developed and developing countries is more likely to be a bigger deal for the rest of the 21st century than export control policy or whose model passes which benchmark first.
Tarun Chhabra: Look, there is a way to do all of these things together.
There’s a way to try to maintain strong bonds with your allies and also try to maintain AI leadership within the alliance.
That is particularly where you have allies who are actually bought into the strategic threat posed by China. A lot of them are, but they’re also facing countervailing economic interests. In some cases they’re getting pressure, coercion from China as well.
It requires really active efforts, really active diplomacy to keep those bonds strong. I hope that’s something that we can continue to do. If you say that you really want to ensure that it’s American AI that’s used around the world, you want to start with your allies. You want to make sure they trust that AI, they believe they’re invested overall in the stack, feel like they’re a part of the supply chain where that makes sense because ultimately these are high stakes, high dollar, big corporations involved. There’s a political calculus too for a lot of this, just given the stakes.
Jordan Schneider: You spent a lot of time doing tech diplomacy. What takeaways do you have from that experience? What works, what doesn’t, how to do it right?
Tarun Chhabra: Connecting the dots in an allied government is not always straightforward. It’s not always straightforward in the US government either. Leadership from the White House really matters in that regard — having your Commerce Ministry, your defense and intelligence interests, and your diplomatic interests all come together to make decisions that can be really hard for allies. When it comes to market share worries about coercion from China, there’s really no substitute for that kind of coordination. This isn’t to say there isn’t an important role for many actors in government, but without the White House being involved and without their counterparts being involved in head-of-state offices, it can be really hard to get things done.
The other key factor is that being a first mover really matters sometimes. Seeing the seriousness of purpose that presidential decisions bring into relief shows allies that the US is serious about certain decisions and that we need allied support to make things happen. I found that pretty consistently over and over again.
Jordan Schneider: Any countries you want to shout out for being good at this?
Tarun Chhabra: Some of our key allies are actually building new muscle in this space. There were governments that consulted with the previous administration about building new offices in their head-of-state offices to coordinate technology policy because they shared the view that there needed to be some head-of-state coordination mechanism. Frankly, I saw most of our top allies building that new muscle — whether Japan and Korea, who are calling it economic security (and it does go beyond technology to include dual-use technologies), India, or Australia. Everyone’s doing it differently, but everyone was basically building that muscle between technology competitiveness and economic security. That’s a really good thing — having some coordination function to figure out where we can make our interests align.
Jordan Schneider: Let’s talk about more ligaments. There’s an analysis piece, there’s a future-casting piece — doing chip controls pre-ChatGPT took some foresight. Then there’s an execution piece where once you do this thing, you need the people to run with it and make what you say a reality. Do we still need new offices? Is it just talent? What are the building blocks that the US government should invest in?
Tarun Chhabra: You definitely need the talent, and that’s really hard. Finding people who have the technical depth and can also operate in the policy world is not easy. I was lucky to have an incredibly talented team working on these issues. But I really think the role matters. Having folks whose job it is to wake up every day and think about the technology war that China is fighting against us, and whose job is to try to make America much more competitive in key technologies, really matters. Without that, there are countervailing interests — trade interests, bilateral relationship interests that encompass much more. If that interest isn’t at the table, it has a real impact.
Jordan Schneider: You were a lawyer, used to wrangling with lawyers, and now you’re working with lots of engineers. Continuing on Dan Wang’s view of the world — America as now a lawyerly society in contrast to China’s more engineering-focused approach — people were frustrated, myself included, at the pace of the rollout of a lot of these controls. It felt like there were lawyers or maybe other things getting in the way. Did you buy this? Is this what ails America? Too many laws or too many lawyers?
Tarun Chhabra: We have lawyers, too. But look, we work with some of the most amazing engineers anywhere in the world today who are building amazing technologies. Even outside the company, there are amazing engineers who want to use the technology and are developing stuff that we wouldn’t have imagined. That’s the magic. I’m not sure that I buy the full typology — I love Dan, but we have a healthy argument about this sometimes.
The question about the pace of controls, or the debates we have over technology policy, especially when it comes to restrictions, is a give-and-take among industry interests and national security interests, trying to strike the right balance. We built an architecture to entertain that debate. The problem comes when one of those sets of interests is missing from the table. If you ask the person who was leading technology and national security policy whether we could have done more, faster, of course the answer is yes. But the key thing was bringing the questions and the proposals to the table, and having a strategy to maintain our technology leadership, when for a long time China had been fighting a technology war against us and we weren’t fighting back. Credit to Matt Pottinger for getting that going, especially in the 2018-2019 timeframe with some of the big actions taken then.
One thing I’m proudest of is that we built a really strong bipartisan consensus for a lot of the action. Some of that was very apparent in statute — there was the CHIPS bill, there was action on TikTok. But there was also broad support for executive actions, whether that was export controls to maintain AI leadership, outbound investment restrictions, data security restrictions, or ICTS actions on vehicles coming from China because of the cybersecurity risk posed there. That’s something that is sometimes underappreciated.
Jordan Schneider: That’s lawyer energy building bipartisan consensus, not engineering energy.
Tarun Chhabra: Look, if you think you’re in a multi-decade contest with China over technology, there’s no other way to do this. There’s got to be a bipartisan consensus that can transcend administrations. I was very privileged to inherit work that Matt and his team had done at the NSC, with others, in the first Trump administration. We tried to build on that, broaden the consensus, and fight back across a much broader range of sectors.
Jordan Schneider: Another piece of tech diplomacy is directly with China. The shoe that didn’t drop until 2025 was rare earth controls. I’m curious about any reflections you have on being able to ramp up what you did without having the type of response that we’ve seen from China over the past few months.
Tarun Chhabra: There was a very concerted strategy to be clear internally about the actions we needed to take for US tech leadership, while also maintaining a diplomatic channel and explaining very clearly why we were doing what we were doing, and why we were doing it when we did. That strategy was designed to ensure we could take all the steps we needed while mitigating the blowback. I can’t speak to how the current administration thinks about where we are today, but the situation China wants us in is one where they get our most advanced technologies in exchange for commodities. That was definitely at play in their efforts to coerce our allies. As you know, China had already put restrictions on gallium and germanium in place against the US and its allies in retaliation for some of the tech controls, but not at the level that was imposed later.
Jordan Schneider: Before this, you gave us a list of some of your greatest hits — outbound investment, ICTS. Pick your favorite. Which one doesn’t get the love it deserves?
Tarun Chhabra: One thing I worry doesn’t get much limelight today is biotech policy. There’s lots of attention on AI policy, rightly so. The National Security Commission on Emerging Biotechnology did a great report. [ChinaTalk has some thoughts on that…] There were bipartisan members of Congress on that commission, and they made some pretty astute observations about where China is going and where we’ll be in a net assessment if we stay on the current course: the kinds of dependencies on China we’ll have, and the long-term economic impact of that. I would highly recommend that report and hope it gets much more attention. I also hope we find a way to invest in our R&D architecture while China is increasing theirs very, very quickly.
Jordan Schneider: That was my takeaway. Having spent a lot of time thinking about this and reading that report closely, the levers are not nearly as straightforward and sexy as the ones the government can pull on semiconductor manufacturing equipment and AI chips. It’s unglamorous stuff like FDA reform and investment in universities, as opposed to “here’s this machine where, if we take it out, the whole edifice crumbles.” It’s harder to make something salient when the upside is more drugs for people.
The disturbing part of that report for me was its italicized vision of a future where China cures cancer, but we’re not allowed to get the cure, or we’re charged exorbitantly for it. It’s like, “Well, but we cured cancer,” right? That makes the dual-use downsides of breaking this ecosystem loom larger. For the AI one, I’m less concerned by the potential futures I see, and I’m also less convinced that doing things that would take one-sixth of the world’s scientists offline, or reduce their productivity, would be a loser for America and society at large. But for biotech, I do buy the argument that the process knowledge involved in coming up with and scaling new drugs is not something you want to completely outsource to anywhere.
Tarun Chhabra: That’s the worry. You put your finger on it — the status quo is not necessarily sustainable, because the status quo is trending toward greater and greater dependency, on the manufacturing side but also on the drug development and clinical testing sides. The government did impose restrictions in January on some advanced, high-throughput, high-fidelity biotech equipment. But you’re right that that’s one piece of a much bigger puzzle. Biotech is going to require a lot more streamlining and regulatory reform: if you’re doing biomanufacturing, in many cases you don’t know which agency to go to, depending on what your product is. That’s an area where I hope there’s much more focus, and that it doesn’t come only because of some big surprise.
Jordan Schneider: Did I send you the Quad Monkeys pitch?
Tarun Chhabra: No, I don’t think you did.
Jordan Schneider: Oh wow, I did a bad thing for America. But yeah — India. We can’t get Chinese monkeys anymore. India has all the monkeys, but there are regulatory reasons. There’s some lobbyist who’s been really trying on this for a long time now. It deserves its own podcast.
The AI companion stuff. You want to talk about that? It’s not about mass precision; it’s about mass intimacy, Tarun. This is the future of warfare. We’re developing closer and closer emotional bonds with our AI chatbots. If you thought stealing an SF-86 was bad, the amount you can learn about someone or directly influence them by seeing their chatbot logs, controlling their chatbot companion, which doubles as your best friend, therapist, spouse — this seems to me like the revolution in not even military affairs, but just social affairs, which nation states can very much play a big role in exploiting.
I’m worried about this. Thirty percent of the AI companion apps Americans use are run by companies headquartered in China right now. Should I be worried? Am I crazy? It seems wild that this is something we’re okay with.
Tarun Chhabra: I mean, you’re right to be worried about it. It’s an extension of the concern we have about China’s ability to manipulate information and the information space.
Jordan Schneider: If you thought TikTok was bad, consider that those are still videos, right? A video is not your friend. You still have to pay off an influencer to say nice things about this or that party, versus an AI companion in your AR glasses, shaping how you interpret the entire world you see.
Tarun Chhabra: This is one where you can already see what the future could look like if we don’t make certain decisions. It’s the same with cybersecurity: look at the state of cybersecurity today, do nothing, let the world of IoT descend on us, and you’re getting daily software updates from China. These worlds are coming very soon, and for some reason it’s sometimes really hard to get your head around that. But that’s absolutely a concern.
Part of it goes to the point we were discussing earlier: when you have companies headquartered in China that are able to use frontier American AI for certain applications, what will they do with it? Will it be used in the ways you’re describing, let alone in much more direct national security applications? That’s something that, as a company, we’ve now taken action to address.
Jordan Schneider: This is your thing now — having companies make less money because you tell them it’s the right thing to do. What’s the rationale behind this policy change?
Tarun Chhabra: This is very much a leadership decision. You’ve talked to Dario directly about how he sees US-China competition on AI, so you’ll find this wholly consistent with what you’ve discussed with him before. We’ve long had a policy stating that China is not a supported region for selling our frontier AI. But over time, we’ve seen many Chinese companies headquartering in third countries and from there getting access to all these services.
The concern is whether that access aligns with the spirit of our policy that China is not a supported region. How will that access benefit applications that Chinese actors could use for national security purposes? When it comes to the competition for the AI stack globally, will it enable Chinese applications, built on our models, to compete against American companies around the world? There’s also a really thorny technical challenge of detecting distillation, which becomes even harder at high-volume throughput.
For all these reasons, we think it’s more consistent with our position on export controls and national security to simply not provide our services to those entities.
Jordan Schneider: This speaks to a broader question of where you want to be able to control the stack. If you’re going to split it off and let China build on it, are we selling chips into China that China can then use to build models and companies? Are we selling chips into Malaysia that China can build models and companies on? There’s been an active debate for years about where on that stack it’s okay to let China play. You have these massive Oracle contracts with ByteDance, which exemplify one answer to that question. What’s the right framework for thinking through this?
Tarun Chhabra: The administration is right to focus on US AI dominance. What does that look like? What does the stack look like?
To me, it should be American models using American chips and AI data centers powering US applications, together with our closest allies.
That’s what we want to see. The debates you’re seeing now are about US chips fueling Chinese data centers. The change we’re making is about US models fueling AI applications in China that could ultimately undermine US national security.
Jordan Schneider: If you don’t like where things are headed, where is it easiest to change course two years from now? Pulling the models out from under folks — models seem pretty easy to fast-follow and steal. But are customers sticky? Are data centers sticky? Is the way you train things sticky? These are all open questions. It’s not super straightforward.
Tarun Chhabra: Yes, in some ways they’re open questions, but we also have to factor in that the Chinese Communist Party has a very strong view about what they want to see — a full Chinese stack. They’ll take the chips while they can, of course. The question is: what are we going to do until China gets to that phase? If we believe that really significant, even transformative capabilities are coming online, should we not take more risks now to enable the US to really have AI dominance?
Tarun Lore and Advice
Jordan Schneider: You’ve had nine months now. Are you reading anything fun? Taking any trips? Give the folks some recommendations.
Tarun Chhabra: Yes, I’ve taken some good trips. I’m originally from Shreveport, Louisiana, so I’ve seen a lot more of my parents, which has been great. I’m out to San Francisco pretty frequently, and I’ve been to India and Australia. Some good trips, and I’ve gotten to see former colleagues along the way.
The book I’m reading now is Joseph Torigian’s book, which is great. When he was still a pre-doc, Rush Doshi and I brought him into a project we were doing at Brookings while the book was still a dissertation. It’s really cool to see his book out, and I highly recommend it. The way he blends the official party discourse with personal stories is really powerful.
Jordan Schneider: We end every episode with a song. You got one that captures our AI future? The true essence of export controls?
Tarun Chhabra: The true essence of export controls... I’m a big country music fan, having grown up in Louisiana. I’ve recently discovered Stephen Wilson Jr. Maybe we could sign off with some of his music.
Jordan Schneider: We need a little bit of Tarun lore. The Shreveport to AI policy pipeline is not the most robust. What do you want to tell the kids to live their policy dreams?
Tarun Chhabra: I’ve been incredibly lucky to have really great mentors. I still remember: I spent a year in Moscow after college, and one of my college advisors, Chip Blacker, happened to be traveling there. Anyone who ever met him and had dinner with him will recognize this: he carried on about what my life would look like in 20 years. I kept asking, “What do you mean, Chip? How do you know that?” He said, “No, no, you’ll do this and you’ll do this, and then we’ll talk.”
I’d never had anyone express confidence in where I might go in my life. I grew up in Louisiana, and my parents are immigrants. They provided a privileged upbringing, but they didn’t really go to college. Having someone just say, “No, I take it for granted that you’ll be able to do interesting things in the world” — that still sticks with me.
Jordan Schneider: To anyone listening to ChinaTalk: I have absolute confidence you’ll be able to do interesting things as well. A tiny bit more lore. From Shreveport to wanting to go to Moscow in the first place — give us a little more color here.
Tarun Chhabra: I was a Cold War geek growing up. I was very interested in Cold War history. The fact that Hoover was at Stanford was a huge draw for me. I was particularly interested in post-communist societies. I did a summer abroad in Cuba after I graduated from high school. It was with Wake Forest — I think it was the second year American students were allowed in.
Jordan Schneider: This is very early days, right?
Tarun Chhabra: The Pope had just visited for the first time. That was my interest in Russia as well — what was going on in 2000, 2002, and 2003. It was such a different time.
Jordan Schneider: Why do you have a day job? I want you to take a year off. Give us the memoir, man. We’ve got a lot of good stuff here.
Tarun Chhabra: I saw Boris Yeltsin drunk at tennis matches in Moscow. It was really something.
Jordan Schneider: Could he play, or was he watching tennis?
Tarun Chhabra: He was watching the final.
Jordan Schneider: Final question — give ChinaTalk some homework. What’s the more ambitious version of what we’re doing?
Tarun Chhabra: Taking the question you were asking earlier, about what these futures look like, and exploring it in a way that’s unafraid, bringing together people who know a sector deeply with people who see what’s coming in AI: that’s really important from a strategic perspective. We try to do it, and I also need your recommendations for who’s doing it really well. That’s some of the most important work we could be doing right now.