
Nate Silver on AI, Politics, and Power

Nate Silver writes Silver Bulletin and is the author of On the Edge: The Art of Risking Everything, now in paperback with a new foreword.

In today’s conversation, we discuss…

  • Honesty, reputation, and paying the bills with writing,

  • Impact scenarios for the AI future, including how AI could impact elections and political decision-making,

  • The emerging synergy between prediction markets and journalism, and how Nate would build a team of professional Polymarket traders,

  • How to build a legacy,

  • Nate’s plan to reform US institutions and how that compares with real-world prospects for creating political change over the long term.

Listen now on your favorite podcast app.

Mathematicizing Persuasion

Jordan Schneider: Before we get going, I just want to say thanks, Nate. Writing on the Internet is scary, and you’ve made it less scary. Getting to the point where I can just say things that I know are going to piss off administration officials and $4 trillion companies takes a lot. Watching you do that over the years has been a good lodestar for not caring what powerful and rich people think and working toward the truth.

Nate Silver: I appreciate it, Jordan. There’s an equilibrium where people are way too short-term focused and too susceptible to peer pressure.

Conversely, once you develop a reputation for doing your reporting and speaking from a place of knowledge and experience — without trying to sanitize things too much — you develop trust with your audience. You carve out more of a distinct niche. People are too afraid of honesty and differentiation.

That’s easier to say if you cover fields that are popular and get a lot of audience and traffic, whether that’s electoral politics or sports. I’m not doing investigative reporting here, but I do think that working hard and being the best version of yourself — and being an honest version of yourself — is usually a smart strategy in the long run.

Jordan Schneider: That’s even more difficult with policy writing. ChinaTalk is closer to a think tank than it is to journalism. The vast majority of people who work in this field can’t make public comments because they either work in the government or work in government relations for a big company. Even if you’re at a think tank, you have to pay the bills somehow, and that basically means getting corporate sponsorship for your work.

Nate Silver: As a consumer, there are lots of issues — including China — where I’m not quite sure who to believe or trust. China is among many issues where I feel I’d have to invest a lot of time investigating who I can trust. At that point, you could almost write about it yourself. There are a lot of issues where there’s no kind of trustworthy authority. Kowtowing to corporate power is part of it. That’s part of the beauty of having a Substack model with no advertising.

Paid subscriptions help support ChinaTalk’s mission. Consider supporting us if you can!

But especially in diplomatic and international relations, people are always calibrating what they say. There are smart commentators, but you have to read between the lines — adjust your read 20 degrees left or right.

Jordan Schneider: There’s this recent micro-scandal. Robert O’Brien, former national security advisor, wrote an op-ed saying we should sell lots of Nvidia chips to China. It comes out three days later that Nvidia is a client of his.

Nate Silver: I’m never quite sure whether to assess these arguments tabula rasa and ignore who’s making the argument, versus considering that people have long-term credibility and reputational issues. One thing I do is play poker, and in poker, the same action from a different player can mean massively different things. A lot of stuff is subtextual. A lot of stuff is deliberately ambiguous.

One reason why I think large language models are interesting is because they understand that you can mathematicize language. In some ways, language is a game in the sense of game theory — it’s strategic. What we say, exactly how we say it, and what’s left unsaid is often powerful. A single word choice can matter a lot.

You probably think about this in a different context — in the case of official statements by the Chinese government, for example.

Jordan Schneider: Let’s stay on this, because this was actually one of my mega brain takes that I want your response to. We have Nate Silver at 11 years old wanting to be president, and now we’ve had 20 years of him analyzing presidents and presidential candidates and what they do. You don’t necessarily have to believe in AGI and fast takeoff to think that 10, 20, 30 years down the road, for a lot of the decisions that presidents and executives make today, an AI will just strictly dominate what the human would do.

Maybe that Cerberus moment becomes what the Swedish Prime Minister was saying a few days ago — “Oh yeah, I ask ChatGPT all the time for advice.” I’m curious, what parts of the things that presidents and presidential candidates do you think are going to be automated the fastest, assuming we just let them ingest all of the data that a president or executive would be able to consume themselves?

Nate Silver: AI in its current form might be an improvement over a lot of our elected government officials, but that says a lot more about the officials than the AI necessarily.

I don’t take for granted — and some people do, including people who know a lot about the subject — that we’re going to achieve superhuman general intelligence. There are distinctions between these terms that we can parse if you want. But some of the reason that large language models are good now is because they train on human data and they get reinforcement learning with human feedback.

There are cases — pure math problems — where you can extrapolate out from the training set in a logical way. For more subjective things, political statements, I don’t know as much.

Some people believe that AIs could become super persuasive. I’m skeptical. First of all, humans will be skeptical of AI-generated output, although maybe I’m more skeptical than the average person might be. Also, it’s a dynamic equilibrium. You can message test, have an AI train, and it can figure out: “Okay, we can now predict — we’re running all these ads, sending out all these fundraising emails — we can predict which will get a higher response rate.”

But when people start seeing the same email — “Nancy Pelosi says this or that” — 50 or 100 times, then they adjust, and there’s a backlash. In domains where you are approaching some equilibrium, profits aren’t easy to come by. There are no easy tricks. You have to play a robust, smart strategy, and ultimately the strength of your hand matters, along with how precisely you’re constructing the mix of strategies you take at any given time.

I’d bucket it roughly as a 25% chance that on relatively short timelines, AI just blows our socks off. 25% that it does that, but at a longer timeline — a decade, two decades, three decades out. Then 50% that AI is a very important technology — more important than the Internet or the automobile — and reshapes things, but does not fundamentally reshape human dynamics across a broad range of fields.

Things like international relations or politics are among the more resistant domains toward AI solutions. At the same time, another risk is that you’ll have people who view the AIs as oracular. We’ve seen cases of people who are encouraged by ChatGPT to think they’ve developed some new scientific theorem or discovered a new law of physics. They’re very smart at flattering you.

One thing I do is build models. Sometimes a bad model is worse than no model or your implicit mental model. Trusting a supposedly all-knowing and all-powerful algorithm is risky, especially in cases where the situation is dynamic — the laws of mathematics don’t change, but international relations and politics are always shifting, maybe faster than before — and whether the AIs can adapt to new situations quickly is also an open question.

Jordan Schneider: If we’re trying to bucket the types of things a CEO or a president or a senator does, we have personnel management — who am I going to hire, who am I going to fire? We have the outward-facing stuff — what do I say to this interviewer, how do I talk in the debate? Then we have these decision points where you have a memo, you could pick A, B, or C, and there are different sets of trade-offs where you could optimize for this thing or that thing.

I don’t think it’s crazy to think that parts of those different buckets could be radically improved, even just to play out the different second and third-order effects of whatever you’re negotiating for in the next budget bill or something.

Nate Silver: For discrete tasks, AI can already be wonderful. I’m doing a little coding now on a National Football League model, and it’s late at night. I’ve been up a long time, had some wine at dinner, and I’m thinking, “Okay, Claude, how do you do this thing in the language I’m programming in?” The thing is a discrete task where I have enough experience with these models to expect it to give a good answer, and I have enough domain knowledge that when I plug in this code, I can tell if it works or not. I’m not going to have some bad procedure that chains into other bad procedures in a complicated model.

There’s been mixed evidence on how much more productive AI makes people. My stylized impression is that it makes the best people even more productive and makes people who are not that smart maybe worse. I worry that it’ll be a substitute for domain knowledge and human experience, but for certain things it’s already superhuman, and for other things it’s dumb as rocks.

Knowing what is what — there’s a learning curve for that.

Jordan Schneider: There’s also this aspect where the human floor can be pretty low, especially if you’re tired or stressed or you’re the president and you’ve got a hundred thousand things being thrown in your face. It’s literally an impossible job. Maybe there’s also this weird electoral feedback thing where, presuming that AI is really helpful for winning and governance, the folks who trust it more and faster are the ones who perform better in their jobs and get elected to higher and higher office.

Nate Silver: Government is backward in a lot of ways. This varies country by country too. In some ways, America has maintained a relatively high degree of international hegemony despite having a constitutional system that’s now hundreds of years old. It has flaws that — not to make this too personal — Trump has found a way to exploit.

It’s amazing that we still entrust all of this power to one president. New York City, with 8 million people, is probably about as large an entity as you should have one person in charge of. But we haven’t really developed other systems.

If anything, the more dynamic places in the world right now — you’d still say the U.S., China, and then probably the Middle East — have kind of cheated. The U.S. is increasingly less democratic, and the other two were not really democratic to begin with. Maybe my assumptions have been wrong.

Poker and Prediction Markets

Jordan Schneider: Let’s talk about prediction markets. You spent a lot of time thinking about poker and had this whole experiment in your book where you just spent a year betting on basketball. The thing about basketball and poker is there’s a lot of data and track record you can base your estimation on. You can find your edge in very weird corners. But a lot of these markets on Polymarket and Kalshi are very one-off. Right now there’s “Is Trump going to put more sanctions on Putin in the next six weeks?” Is there a regression you can run on that? Not really. It’s fascinating because these are so much more one-off and open-ended than what you would see in the stock market or sports betting.

Nate Silver: I’m a consultant for Polymarket, so I do have a conflict of interest to disclose. Sometimes the one-off events are not as good. One market where Polymarket, Kalshi, and others did not do as well was the election of the new Pope. Cardinal Prevost had a very low chance of being selected.

What happens there? On one hand, you have a papal election every 10 or 20 years, so you don’t have a lot of data. Everybody leaks, but the papal conclave does not leak, apparently. There’s not really any inside information. Then people apply the heuristics they might apply to other things. When the smoke came out quickly, they thought, “Oh, it must be the most obvious name.” The favorites’ odds went up, and that turned out to be wrong. People just had no idea what was going on. There are some limitations there versus things like elections, which follow more regular patterns.

At the same time, there is a skill in estimation that poker players and sports bettors have. Maybe I’m making a real-time bet on an NFL game and Patrick Mahomes gets injured — the quarterback of the Kansas City Chiefs. I have to estimate what effect this has on the probability of the Chiefs winning. If you just do that a whole bunch, then you get better at it. You have to have domain knowledge and be smart in different ways.

When I’ve consulted in the business world — not capital-C consulting, but for people who are actually making bets — you realize that a good answer quickly is often what makes you money, whereas a perfect answer slowly doesn’t. This is one reason we’re seeing a shift of power away from academia toward for-profit corporations. You can say that’s bad or good — I think we need both, frankly. But you have a profit motive and an incentive to answer a question quickly.

In poker, the same thing. If I play a hand and I’m getting two-to-one from the pot, I need to have the best hand or make my draw one-third of the time. Then you go back and run the numbers through a computer solver, and you’re like, “Well, actually here I only had 31% equity when I needed 33%.” That was a big blunder. For most people, 31 versus 33 is the same, but with training you can estimate these things with uncanny precision. There’s a lot of implicit learning that goes on, and it becomes second nature.
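For readers who want the arithmetic behind that: getting two-to-one from the pot means a call only has to win about one time in three to break even. Here is a minimal sketch in Python — the function name and numbers are illustrative, not from the conversation:

```python
def required_equity(pot_odds_to_one: float) -> float:
    """Minimum share of the time a call must win to break even
    when the pot is laying you pot_odds_to_one : 1."""
    return 1 / (pot_odds_to_one + 1)

# Nate's example: getting two-to-one means you need ~33% equity.
needed = required_equity(2)   # 0.333...
actual = 0.31                 # what the solver says you really had
print(f"needed {needed:.1%}, had {actual:.1%} -> a small but real blunder")
```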

Jordan Schneider: Are you worried about insider trading with all this political betting? There’s an aspect where these are all crypto — you get on these markets with crypto. There were markets asking which way Susan Collins is going to vote. The tail outcomes for a legislative assistant in her office are that you can make 10 times your salary in a minute. What’s your thinking on this?

Nate Silver: There are a couple of qualifications. First of all, people on the inside often aren’t as well-informed as they think, or there are downsides to having an inside view versus an outside view. You might drink the Kool-Aid, so to speak. You might be in a bubble.

Jordan Schneider: There are ones where you can literally — I mean, there have been lots of group chats of people talking about very sketchy trades and one-way bets being made in the stock market about what’s going to happen with a trade deal. You can literally be the person who decides and be betting on the side.

Nate Silver: If there are incentives to make money in a world of 8 billion people, many of whom are very competitive and most of whom have access to the Internet, people are going to find a way to do it. A game-theory equilibrium isn’t just a prediction of what occurs in an idealized world — it’s very much what does happen.

We’ve seen things in the crypto space like an increasing number of crypto kidnappings. That’s one of the consequences of people being worth vast amounts of wealth that isn’t very secure. It’s just going to happen until you up security or have better solutions.

I don’t think there’s necessarily any more or less insider trading on Polymarket than there might be for sports betting sites — we’ve seen a lot of sports betting scandals — or for regular equities. The literature says that members of Congress achieve abnormal returns from their stock portfolios. I’d have to double-check that — I’m sure there’s some debate about it.

People can also sometimes misread insider information, or read a false signal as insider information. If they see an unusual betting pattern, they’ll think, “Oh, okay.” There were some tennis betting scandals — tennis is an easy sport to throw because it’s individual, just two people, so you don’t need multiple conspirators. Something unusual happens and people think, “Therefore, it must be an insider trade” or “Therefore, it must be someone throwing the match.” Maybe sometimes it is, other times it isn’t. It’s very hard to know a priori which is which.

Jordan Schneider: It’s a new variable in politics. The way you could previously cash out was becoming a spy for another country — very high risk with lots of downside. Or you’d have your career and then become a lobbyist, but that pays out over years.

This is something new, and we’re going to have to watch it because I find I get a lot of value from seeing these numbers every day and watching how they change. Polymarket has become something I check before looking at the homepages of major news outlets. But there’s something that makes me a little queasy about opening up this new realm of betting where maybe we, as citizens, don’t want the people we’re paying to do these jobs to have this alternate way to cash in.

Nate Silver: Or journalists too. I know a couple of projects where people are basically trying to apply journalistic skills to make trades — not necessarily in prediction markets, maybe a little bit of that, but more in equities.

If you have a well-informed view of China, especially particular industries, that has big implications for how you might trade a variety of stocks, including American stocks.

My impression is that people at Wall Street firms trading equities don’t like all this macro risk. They prefer to predict what the Fed is going to do, analyze earnings reports, and work with long-term trends they can run regressions on.

They don’t like profound political uncertainty where the macro bets are very long-running. It may be hard to come up with the right proxy for the trade you want to make.

You’ll probably see more fusion between trading and research and journalism at every step along that path.


Jordan Schneider: I’ve started interviewing a lot of these Polymarket investors and traders. It’s remarkable to me that there are no funds or teams. I assume that’s just because these markets aren’t liquid enough. But what would your dream team of skill sets look like if you were going to start the Polymarket hedge fund?

Nate Silver: You probably want some AI experts. You want a mix of macro people and smart micro traders — poker-player estimator types. More barbell theory. On the macro side, a China expert, an AI expert, an expert on American politics — on average, the takes Wall Street has about American politics are pretty primitive. You want someone who understands macroeconomics and inflation and the debt. You want a mix of skill sets.

In terms of what the banks and hedge funds are doing, different firms probably differ in what they think. There is reputational or enterprise risk to trading. Crypto can be a gray area. Prediction markets can be a gray area. I suspect there’s probably more of it than you might assume.

Until the last couple of years, there definitely wasn’t enough money in prediction markets overall to be worth it for institutional traders. Now there might be, especially for smaller firms. If you were a firm that wanted to say, “We are primarily trading non-traditional assets — prediction markets, cryptocurrencies, maybe low market cap stuff” — there are crypto hedge funds. If they’re getting more into prediction market stuff too, it wouldn’t surprise me in the least.

As prediction markets get bigger, then the bigger quant hedge funds and eventually Morgan Stanley and Goldman Sachs will want to trade them too.

The Barbell Strategy

Jordan Schneider: What do you think your legacy is now, and how would you want to change that in the next 10 to 20 years?

Nate Silver: The best work I’ve done is the book I wrote a year ago, On the Edge, now in paperback. The election models are valuable and might be what I’m best known for, even though it feels like they’ve been 5 or 10% of my lifetime productive work. That’s a really hard question to answer — I don’t think I’m quite old enough to answer it yet.

Jordan Schneider: I can answer it, maybe. There weren’t a lot of numbers in many of these discussions before you. With elections, people have to respond to facts. Boiling all the polls and demographic data and voter registration data into one number gives a much more tangible, grounded set of facts than “you say Marist, I say Quinnipiac” — what does that even mean?

This approach is spreading now into wider areas on Polymarket and Kalshi, where different modes of politics now have numbers attached to them in ways they didn’t before. There’s literally a “Will China invade Taiwan?” market. However much we want to believe anyone has insight into that, there was no number you could point to before to ground you in that reality.

Great man versus trends — there’s been a lot more data and computing power over the past 25 years to enable what you do. But it was both the modeling and presenting it in a compelling, engaging voice that really helped reshape the way people think about these issues.

Nate Silver: I appreciate that I have certain talents. If you’re a 7 out of 10 on the modeling and a 7 out of 10 or maybe 8 out of 10 on the presentation, that overlap is somewhat rare. The overlap is maybe more than the sum of the parts — more valuable than being merely pretty good at each individually. I’m a good modeler, but the combination of those skills is valuable.

The world is moving directionally more toward this. Prediction markets in particular — if you look at sports betting, it’s not really growing as much as the big industry players had hoped, but prediction markets have had some false starts before. Now with Polymarket, Kalshi, Manifold, and others, you have a robust and well-constructed enough trading ecosystem where they are here to stay for many things.

If I’m about to publish a story — let’s say on Trump’s latest round of tariffs — and I need to check if Trump has done anything crazy in the past five minutes since I last read the internet, I’ll just go to Polymarket. Did any markets radically change? Is there some massive event that would make it insensitive to publish a story? Has there been an earthquake somewhere? You can instantly see that news. Twitter used to be somewhat similar for instant feedback.

But this leads to a gamified ecosystem where, as a poker player, sports bettor, or rapid news consumer, you’re always checking your email, phone, Twitter, and the internet. You’re always aware of 15 things at once. It makes it harder to unplug and creates fuzzy boundaries between work life and real life. If I’m running on the east side listening to a podcast and thinking about my next article, that’s work. If I’m checking my phone late at night when I’m out at dinner — checking work stuff, not gossip — that’s work too.

The world’s moving more that way, for better or for worse, and it prioritizes quick computation and estimation.

Jordan Schneider: Does this make it harder to do big things anymore?

Nate Silver: No, it creates more of a barbell-shaped distribution. Working on the book took me three years — that was really important fundamental work. Right now I’m working on this National Football League model. That’s a six-week project, not a three-year project, but it’s foundational work that will produce dozens of blog posts and hopefully hundreds or thousands of subscribers for Silver Bulletin for many years.

Having three or four things that you’re really interested in and fully invested in — obsessions, you might call them — is very valuable. But the skill of quick reaction is also important: your best flash five-second estimate, having the first authoritative take on Substack when Zohran Mamdani won the primary by a larger margin than predicted.

I was in Las Vegas at the World Series of Poker. I had just busted out of a tournament and hadn’t seen any great takes on this yet — it was a very fruitful subject. It was midnight New York time, 9 PM Vegas time. I cranked through until 3 AM Vegas time, 6 AM New York time, and published what I thought was a pretty smart story on it. That kind of thing is important.

The middle ground — the ground occupied by magazines, for example (not that places like The Atlantic, which have become digital brands, don’t do great work) — is maybe the in-between where the turnaround is too slow to be the lead story in a rapidly moving news cycle, but it’s not quite foundational work either.

Academia suffers from even more of this problem. The turnaround time to publish a paper is just too slow.

I’ll run a couple of regressions, give it a good headline, make some pretty charts, and it’ll be 90% as good as the academic paper in terms of substantive work and 150% better written because I’m writing for a popular audience, not journal editors. That exchange of ideas is what moves the world.

It’s fascinating to see these dynamics. I don’t know a lot about DeepSeek, for example, but it was interesting to see the narrative take shape in real time about how competitive China is in the short to medium term on large language models. Or to see, with Zohran winning the primary — probably the general election too — how much that anchors the conversation in different ways.


Stylized, abstracted, modelized versions of the real world become more dominant. We’re all model-building. I have a friend who’s a computational neuroscientist at the University of Chicago. He says ultimately the brain is a predictive mechanism. When you’re driving and looking at the road ahead, you’re not seeing a literal real-time version of the landscape in front of you. It’s a stylized version where your brain makes assumptions and fills in blanks because it’s more efficient processing. The message length can be shorter if you focus on the most important things.

You can notice this if you’re in any type of altered state — whether drug-induced, under anesthesia, tired, or experiencing extreme stress or stage fright. You process things differently at a broader level.

My first book, The Signal and the Noise, is about why the world is so bad at making predictions. Part of it is that you have to take shortcuts or else you’ll never get anything done. But when you take shortcuts, that leads to blind spots, and that’s not really solvable. AI might scrape off some of the rough edges, but sometimes the rough edges are created by the market being efficient and dynamic and by people keying off other people’s predictions and forming a rapidly shifting consensus. Those dynamics are going to remain very interesting.

The Taiwanese edition of The Signal and the Noise. Source.

Jordan Schneider: The DeepSeek experience was surreal for me because this is a story — China and AI — that I’ve been following for five to seven years. All of a sudden, it became the story. Our team hit it pretty well; we doubled our subscribers. But watching our little thing try to shape the broader narrative, and suddenly all these journalists are asking, “What’s this company?” I’m thinking, “I’ve been writing about it for two years. Where have you guys been?” I’ve never had one of my stories really become the main story before.

Nate Silver: It can be an amazing feeling. This relates to the Great Man theory in some way. Early in a news cycle, the way a story is covered is very important. One news outlet or journalist covering a story differently can shape attitudes about it for weeks to come. This is partly why PR people always say, “Don’t say so much, but be fast.” You want to preempt things because that founder effect can matter.

Nate Silver: Tell me, Jordan, what did the mainstream media — The New York Times, Wall Street Journal, and others — get wrong about the DeepSeek story?

Jordan Schneider: There was this first narrative that it cost them $6 million to train their model. That was illustrative of the problem. When I went on Hard Fork with Casey Newton, that’s the first question they asked me. I’d already written a few things, but no — it did not cost $6 million to make this model. You need to hire the people, run the compute, have the infrastructure, run lots of experiments. All in, it’s probably more like half a billion dollars.

But there were enough people for whom that narrative was interesting or financially useful that it spun faster than my loyal crowd of ChinaTalk listeners, who had heard us beating the drum on the other side.

Another thing that’s been shaped by this is the export control debate. There was this expectation that because DeepSeek exists, export controls are worthless. There’s lots more nuance to that, which we’ve covered in other podcasts. But the simplified version of drawing that line from A to B has really shaped the trajectory of American policy toward AI export controls and AI diffusion more broadly.

The cohort of folks who understood this nuance wasn’t able to seep into the halls of decision-making. We have now, quote-unquote, “lost” when it comes to semiconductor export controls, partially because there was this moment that ended up reshaping narratives, and the people who agreed with my version of the facts weren’t able to influence policy.

Nate Silver: These narratives that prevail are often in someone’s economic or political interest. There was a narrative after the 2024 election that Democrats lost because of low turnout, especially among younger voters. There’s a slight grain of truth in that.

However, this was exacerbated because people don’t realize it takes a month to count votes fully in the United States. The counts you see on election night and the next day shortchange turnout by tens of millions of votes. But that was a convenient narrative because people wanted to move to the left. It’s more that younger voters — particularly young men — shifted against Democrats. There’s somewhat lower turnout, but it’s still historically high.

It’s tricky when you have the more nuanced take versus the easily memeable take. If you’re making sports bets, a lot of it’s in the nuance. We all know this quarterback is good, but maybe he can be both very good and a little overrated by conventional wisdom for various reasons. There’s enough of a profit margin where you have positive expected value on your bet. In the news cycle, that’s less forgiving. But Substack and podcasts give a little more room for subtlety and exploring things.

Jordan Schneider: I’ve shied away from writing a book all these years. You’re very quick-twitch but have also done it and just made the case that it’s really valuable. Why don’t you expand on that for me? What has having these two book projects under your belt given you?

Nate Silver: For one thing, I like the process of writing a book. Ordinarily, day to day, I consider myself a journalist, but for the most part, the process involves me and a computer. Particularly when it comes to politics, I don’t really want to call Democratic or Republican sources and get their take. People are paying me for my take, and I don’t care to be spun. That increases the turnaround time.

For the book, it’s the opposite. On the Edge involved interviews with roughly 200 people — a lot of experts and practitioners in fields I find fascinating. Even if you weren’t working on a book, if you took two years to interview 200 really smart people about things they’re knowledgeable about just for your own edification, that would make you a lot smarter.

Jordan Schneider: When I was reading the book, I was annoyed. I wanted the Nate Silver experience — I want to listen to the hour-long conversation you had with Peter Thiel. That’d be fascinating if it’s on the record. Why not sequence it that way?

Nate Silver: The people I spoke with were often very candid, maybe against their narrow self-interest in some cases. But I would never approach somebody and say, “I want to talk to you about X.” I would approach them and say, “I’m writing a book about X that’ll be published in a year or two.” I would always be very honest about the rationale for the conversation.

If something Peter Thiel said was super newsworthy — and I’m trying to remember if that was a conversation with explicit ground rules — but if it’s a conversation where someone says something newsworthy in a non-book context, it’s a little ethically fraught. I don’t think it crosses some bright line journalistically, but it’s complicated.

With Sam Bankman-Fried, that was an explicit understanding. He definitely told me things because he thought the timeline would insulate him from some risk or he could shape the narrative somehow. It wound up coming out after he was already sentenced and in prison.

You’ll have reporters who embargo their reporting for book projects — look at all the books about Biden’s decline. The fact is that people will tell you more when they have more protection. Probably 80% of my interviews were more or less fully on the record. Some were on background — journalism has its own fine distinctions for these terms — and there are in-between categories where you can publish with approval. I don’t love that, but I did it for one or two important sources where it was the least bad option.

People are more candid if they understand the context of your project and believe it’s coming out in the context of a book that puts everything into a broader universe.

I used to work at The New York Times — that ended in 2013, but now I write for them a few times a year. It’s friendly. If The Times calls me for a story, I’m still sometimes reluctant to say anything because you’re going to have one or two quotes put into their narrative that may or may not suit your purposes and may or may not be accurate.

The Times is popular in part because they write in narratives. Even a boring economic data story has a little spin on the ball — good writing, good headlines. Nothing wrong with those things, but sometimes there’s a narrative that’s a little reductionist. They’re the best in the business, or among the best. When you deal with people who don’t have those journalistic standards, you’ll encounter more problems potentially.

Jordan Schneider: We have these nice historical interludes in the book. Is that a type of non-journalism-driven writing? Is that something potentially on your horizon as well? How do you think about hanging out in the archives for a year or two?

Nate Silver: History and statistics are closely tied together. For this NFL project, I’m researching every NFL game played back to the 1920s. It’s remarkable to see how this one sport has survived with significant changes. But you come across something and wonder, “Why were there no games played that day?” Oh, the September 11th attacks. You encounter changes in real-world behavior and technological changes.

Any system model sometimes involves extrapolation from first principles, but the most empirical ones just say, “Okay, we are extrapolating from history and making the assumption” — and it is a big assumption — “that trends that existed in the past will correctly extrapolate to the future.” Often they don’t. Economic forecasting is notoriously difficult because there are regime changes. The economics of the internet era versus the pre-internet era versus the pre-automation era versus the pre-agricultural era — there’s great research showing these are all very different.

The field of economic history, sometimes called progress studies, is quite underrated. The notion of why different societies, great powers or not-so-great powers, rise and fall — why is South Korea as prosperous or more prosperous than Japan today per capita when there was a 10x or 20x gap 40 years ago? These seem like really high-stakes, important questions. Because they play out at longer time scales, they often don’t motivate people as much. But they seem vital.

Even within AI, from what I understand, the AI companies are not really putting a lot of effort into thinking about what this looks like in five or 10 years. They have longer time horizons than most, but they’re not really forecasting how the entire world changes if we do achieve superintelligence. They talk about it a lot — listen to the Dwarkesh podcast or whatever — it’s a popular subject, but that’s probably substantively more important than what’s in the daily news cycle.

Theories of Political Change 纳特·西尔弗思想

Jordan Schneider: There’s a myth of a software engineer who’s annoyed by something in the Spotify app, joins Spotify for two weeks, fixes it, and quits. I’m curious — what would you do during such a stint if you could plop yourself down anywhere in government?

Nate Silver: Let me redesign the Constitution, I guess. Maybe we need a ban on gerrymandering and we need to change the Senate. To some extent I’ve done that. I was dissatisfied with the way elections were covered, so what I thought was a little two-week project became a life-altering career.

I do feel like there’s maybe a little more openness to improvement. New York City has a new subway map now, which is much more legible. I saw that the other day — that was a nice little improvement. I could be a good restaurant consultant: “This restaurant’s not going to work. I live in this neighborhood. Nobody ever walks on this block. They walk on 8th Avenue and not 7th Avenue — I can’t explain why.”

Being a copy editor, I guess. I notice copy editing problems in advertisements and things like that a lot.

Jordan Schneider: Think bigger. Come on. What Cabinet secretary? What bureau? Zohran’s saying, “I’ll give you any job, Nate.” Which is it?

Nate Silver: How to have a good poker scene in New York — we’ve got to have poker rooms. We don’t need the rest of the gaming.

I guess I agree with the abundance critique where New York just takes an awful lot of time to build things. At the local government level, there often are incremental improvements made in different ways. In New York, our new infrastructure projects — LaGuardia Airport, the other airports, the West Side development — they’re all nice. It just took too long and was too slow.

But Zohran is to my left, I think. I do think that he’s enough of an outsider and bright enough that he might do those smart, experimental things that don’t just fall under bureaucracy and inertia. I don’t know — was it Japan or Korea where they have little lights embedded in the sidewalk? That’s pretty cool. Just little things. If you’re looking down, you know when to cross or when not to.

Jordan Schneider: You’re a “department of special projects” guy — “let’s just make life better on the margin a little bit.” Or we’re going to let Nate Silver rewrite the Constitution. More barbell theory.

Nate Silver: Think big, think small. Absolutely.

Jordan Schneider: Okay, let’s do barbell theory again with money. Say SBF hits and you’re just his advisor, and you got billions of dollars to spend on stuff — maybe not dumb stuff. Where are you putting your marginal $10 billion of philanthropy around politics?

Nate Silver: I don’t think politics is a very effective use of money. If it is, it’s at the local level. If you look at projects that were really successful — one of the most successful projects in American history is the conservative movement’s multi-decade effort to win control of the American court system. Supreme Court justices serve five times longer than presidents on average. Doing ground-level, long-term work is quite valuable. The notion of how to make government more efficient matters.

Jordan Schneider: Let’s lean on that for a second, because one of the things that story did was invest in ideas and people and the intellectual superstructure for the movement. Maybe we’re starting to see a little of that now with the abundance stuff and Patrick Collison funding progress studies, but there seem to be a lot fewer center or center-left billionaires investing in the kind of intellectual ecosystem that grows a movement.

Nate Silver: You probably see more of it on the right. The Peter Thiel Fellowship program pays kids to drop out of college — that’s an interesting ideological project that has produced some degree of success. The effective altruists would say that you want to purchase anti-malaria mosquito nets in the third world, and probably that’s very effective.

As much as there are lots of inefficiencies in politics and government, it also reflects the revealed equilibrium from complicated systems and complicated incentives. Maybe change is harder to achieve. It’s part of the reason why I’m reluctant to give off-the-cuff answers.

There have to be improvements you can make in government efficiency. Why does it cost 10x more to build a mile of subway track in New York than in France or Spain or Japan? I also think there are probably a lot of really sticky factors explaining why that’s the case.

In principle, I’d be on board with a project like DOGE. But DOGE should have spent the first three months studying this — not a fake commission, but actually studying where’s the overlap between problems where there really is inefficiency and where it’s tractable and solvable. That’s not something you could answer off the cuff or just by looking at a spreadsheet.

Jordan Schneider: Would you ever join a campaign?

Nate Silver: No, I don’t think I’ve ever really been offered that, believe it or not. It goes against my ethos — I want to study and have the outside view on politics. Campaigns are pretty rough. It’s very tough because you basically have one outcome: do you win or lose? In the presidential primaries, you have 50 states and they go in sequence, so it’s a little bit better. But it’s very hard to know what worked or what didn’t.

Kamala Harris and Donald Trump had lots of different, subtle, nuanced strategies. The fact is that if inflation had peaked at 4% instead of 9% in 2022, then she might have won, having nothing to do with strategy per se. It’s very hard to get feedback on campaigns.

I’m skeptical that you can gain as much through better messaging as you might think. It becomes saturated. The first time the next candidate uses Zohran-style messaging, they’ll probably get something out of it. The fifth one who does it might backfire because it seems like a bad facsimile of what came before. Just like there are various Obama imitators or mini-Trumps, people like novelty — and they like some sense of authenticity.

What seems authentic is very tricky. Trump, in many ways, is a very fake, plasticky person, and yet he has this ironic, almost camp level of authenticity that would have been hard to predict in advance. I was not alone in 2015, when he came down the escalator at Trump Tower, in thinking this was a joke.

This sketches the limits of prediction in general. The world is complicated and dynamic and contingent and circumstantial. Social behavior is contagious, and the focal points created by the media and the internet do more of this, where things can change unpredictably and rapidly. There’s less political science in running campaigns than in a lot of other fields that might possibly be in my interest set.


Jordan Schneider: It’s interesting because the big tactical decision that people are still talking about is, why didn’t she go on Rogan? Well, even if she did, it wouldn’t have gone well because she doesn’t vibe with that. It’s a bit of a revealed preference — she didn’t do this open-ended media appearance because she doesn’t fit it.

There’s an aspect of — you can only “Manchurian Candidate” your candidate to go so far from their essence as a human being. Until we’re electing AI models, people can still just get a sense for whether they like or dislike people. That’s probably 2 or 3% marginal.

Nate Silver: When President Trump got shot, it was a very sympathetic moment. I never voted for Trump and wouldn’t, but even then I felt some sympathy. That moved the polls by one or two points. One of the most momentous events of the past 50 years of American campaigns.

When Biden had the worst debate in presidential history, that moved the numbers by maybe 2%. It mattered because he was already behind and now he fell further behind.

There are times when preferences are extremely plastic and times when they’re extremely sticky. Knowing which is which, which interventions work at which intervals, is probably important.

Jordan Schneider: Has covering politics shaped your view on the great man versus structuralist view of how things unfold?

Nate Silver: The empirical default is to be more structuralist — that’s the trendy thing. I don’t know. I think Donald Trump is a very important figure. If Donald Trump had been a stand-up comedian instead — Trump is funny — the world would look a lot different. Elon Musk has been very important to the shape of the world. I’m not saying positively or negatively, but Xi is very important.

Things are trending more that way. I’m not some Elizabeth Warren type, but the richest people in the world basically double their wealth, inflation-adjusted, every decade — it’s different people cycling in and out of the list, and if you look at the inflation-adjusted list of the richest 10 or 100 people, it’s often more about the 10 now than the 100. Compound that over several cycles, and that concentration of power might shift things more toward a great man theory.

I’m sure there’s a substantial degree of randomness too. I have no idea how China formed, for example, but you get in these feedback loops where you have a virtuous loop or a less virtuous loop. Thirty years ago, Americans were worried about being overtaken by Japan. That happened to Japan a few times, and in some ways it’s still a very amazing, advanced society.

Maybe an example is Rome, Italy, which is one of my favorite places to travel to. I’ve been there at different points in my life. There are parts of Rome where if you go today in 2025, they don’t really look that much different than when I was a college kid in 1999-2000. Except for the cell phones, you could really be placed in 2000 and you wouldn’t miss a beat.

Jordan Schneider: Slight non sequitur. My favorite dad book is called An Italian Education. It’s this memoir of a cranky British person who married an Italian woman and is talking about raising their kid in early 1990s Italy. It’s really fun, kind of New Journalism-style writing. But it’s also this fascinating portrayal of this country at a big transition moment where they’re going from being super Catholic to more modern, and from being very localized to conceiving of themselves as part of this European project.

I don’t have great 2025 Italy takes, but it’s interesting just how much further — or not further — a country could have gone from that moment to today, looking at different parts of the world from 1994 to the present.

Nate Silver: I have a friend who’s Irish — actually Irish, not Irish American — who moved here in early adulthood. He’s gay, and he’s told me, “When I was growing up, Ireland was very Catholic and very anti-gay, and now they almost go out of their way to be pro-LGBTQ rights.” It is interesting how much countries can change.

Conversely, Russia and that sphere are separating a bit more. The United States is diverging more from Europe too. Europe hasn’t grown a lot economically — there’s not a lot of innovation there. At the same time, their lifespans are increasing and they’re taking this slower-growing dividend into quality of life. Whereas in the U.S., male life expectancies, even if you ignore COVID, have basically not increased in a decade.

We are getting wealthier. But I worry a little bit that we’re doing things that undermine American leadership and state capacity. We’ve been playing the game on easy mode because the dollar is the world’s reserve currency and the U.S. has the world’s biggest military. We’ve been — relative to a low bar, admittedly — a relatively trustworthy player in international relations. If we throw those things away, there might not be an impact next year, but when you visit these different countries and extrapolate out 3% GDP growth versus 1% and compound that over 20 years, it’s extraordinarily powerful. You see it on the ground, where some places are stagnant and others are growing.
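To make the compounding point concrete, here is a minimal sketch using the growth rates Nate cites (the function and printout are illustrative, not from the conversation):

```python
def compound(rate: float, years: int) -> float:
    """Cumulative growth factor after `years` of steady annual growth."""
    return (1 + rate) ** years

# 3% vs. 1% annual GDP growth, compounded over 20 years.
fast = compound(0.03, 20)   # ~1.81x
slow = compound(0.01, 20)   # ~1.22x
print(f"3% for 20 years: {fast:.2f}x; 1% for 20 years: {slow:.2f}x")
# The faster-growing economy ends up roughly 48% larger relative to the slower one.
```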

On Podcasting (Plus: Renaming ChinaTalk?!?)

Jordan Schneider: You interviewed a lot of really rich people for this book. Why do they all want to start podcasts?

Nate Silver: They do love hearing themselves talk. It’s not just that they’re rich. These are people mostly in competitive fields — a lot of them venture capital — where they’ve had success and it goes to their head. You see this in poker — being on a winning streak in poker is helpful. It improves your attitude, makes people fear you. But you can go on “winner’s tilt,” it’s called.

It’s very hard when you’re powerful — people start catering to you. “The Emperor’s New Clothes” is one of the more accurate fables. That and “The Boy Who Cried Wolf” are the two eternal fables that describe human behavior extremely well.

Somebody like Elon Musk has made several really good bets, whether through skill or luck. If the fourth SpaceX rocket had blown up — even he told Walter Isaacson this — he would have been cooked. But it’s very hard if you’re one of these people who has had that kind of success.

One thing I’ve learned, Jordan, is that there’s always one more tier of wealth and power behind a closed door than you might expect. There’s always one more privilege level. Even at an event that’s already privileged, there’s a VIP room and a VIP room within the VIP room. The biggest VIPs of all are not even at that — they’re already at the after party.

There are smaller worlds too. Keith Rabois told me there are really only six people in all of Silicon Valley that matter. Which is an exaggeration, but maybe not directionally wrong. Maybe a few dozen. They all know one another.

With the tech types and VC types, they feel embattled by their employees who are “too woke” and the media being mean to them. People are mean to me on the internet sometimes too.


Jordan Schneider: Regarding your idea that there’s always one level up — if that’s your theory of the world, then why do you want to have an audience of 10,000 people listening to you talk about the news?

Nate Silver: Part of what I’ve done is go from a giant platform. I used to work for ABC News, which is about as mainstream as it gets. The average watcher is 70 years old at an airport somewhere or maybe a retirement home. Now with Substack, it’s an audience that turned out not to actually be narrower because the notion of building an email list is a good business model and very sticky.

At first, the stuff that goes behind a paywall is definitely reaching fewer people, but they’re willing to pay. You’re self-selecting too. Because the work I do, especially when it comes to election forecasting, is so easily misinterpreted, I don’t mind having a filter for people who come to the problem with more knowledge. It also can make the writing sharper.

If I’m freelancing with The New York Times, I have a good editor over there. They’re often saying, “Slow down, you have to explain this thing better.” It’s nice to have a conversation where you’re starting with certain premises and memories. I’ve stated complicated things before about my political views on issues that might come into play, or I’ve disclosed that I consult for Polymarket. It’s a cumulative project, and inherently nobody has time for everybody’s cumulative project.

Having a smaller audience of even dozens of people, hundreds, or tens of thousands is pretty important. Especially if there is maybe an unfortunate degree of concentration of influence and wealth and power. When I was working on the book, one of the things that surprised me is how many people I had no connection with were willing to have a conversation with me or at least provide a polite response if they had a good reason not to.

It’s a pretty small world and people talk to one another. That’s something that’s shifted a little bit again, maybe toward more of this Great Man theory — although all of them are men — toward this theory where individual agency matters more.

Jordan Schneider: My two cents on this is there’s a big cognitive bias for Keith Rabois to want to think that he is the center of the universe. There are more times than you would think where you have the market providing the discipline, or the people, or bigger sentiment when something blows up, where politicians or the media end up reflecting mass opinion more than they do the opinion of three people who are trying to pull the strings.

Nate Silver: One tip I heard in poker recently is that everybody is the main character of their own poker story. If I got caught making a big bluff against a third party earlier in the hand, and you’re sitting at the table not involved in the hand, Jordan, you might not even notice that — you’re going to be on your phone.

If you got bluffed by another player earlier and I’m not involved, that affects my strategy against you much more than what I did before, because I’m not involved in your narrative except to the extent I affect you. Life is often the same way.

Jordan Schneider: To close, I have an inside baseball question. Silver Bulletin is a great name. ChinaTalk... I don’t know, I think I should get out of it somehow, but I don’t want it to be The Jordan Show.

Nate Silver: People didn’t like the Silver Bulletin name at first. I came up with it in three minutes. I thought Twitter was gonna die, and I needed some placeholder. A lot of names are stupid when you think about them. Sports names are kind of dumb — the Green Bay Packers is kind of a dumb name. But it just sticks, and it seems normal because people repeat it over and over again.

Jordan Schneider: It’s not that I have names I’d be fine with. It’s more that the switching cost isn’t transparent to me — how much it’ll change listenership or open rates or whether it puts me on a larger growth trajectory. I wouldn’t change my coverage to do less China. Right now, it’s still 50% China, but it would just send a signal that it’s more than China here.

Nate Silver: That’s an interesting case. If it’s just a name, it’s a little awkward because it does include an implicit premise for what the subject is. Most of the time I would say the switching costs are actually pretty high. To sacrifice brand recognition is costly. Maybe you need to start a sub-brand or something. That’s a trickier one than for most people.

Jordan Schneider: The other thing is that advertisers are terrified of China. That’s just reality.

Nate Silver: Well, that tips it over to...

Jordan Schneider: Yeah. It’s more once I have a big contract, once I have Google telling me, “Jordan, we’ll buy $500,000 of ads only if it’s called the Jordan Schneider Show” — then you’ll know we’re doing it.

Nate Silver: We’re doing it for 500k. That’d be worth it.

We’ll take new podcast names in the comments below…

Nate said he was into shoegaze so his mood music is:

在世界游荡的女性20:Solo trip意大利:与艺术和建筑安全地共情了

为全球华人游荡者提供解决方案的平台:游荡者(www.youdangzhe.com)
这世界的辽阔和美好,游荡者知道。使用过程中遇到问题,欢迎联系客服邮箱wanderservice2024@outlook.com.

【和放学以后永不失联】订阅放学以后的Newsletter,每周三收到我们发出的信号:afterschool2021.substack.com 点击链接输入自己的邮箱即可(订阅后如果收不到注意查看垃圾邮箱)。如需查看往期内容,打开任一期你收到的邮件,选择右上角open online,就可以回溯放学以后之前发的所有邮件,或谷歌搜索afterschool2021substack查看。

截至目前,放学以后Newsletter专题系列如下:“在世界游荡的女性”系列、“女性解放指南”系列、“女性浪漫,往复信笺”系列、莫不谷游荡口袋书《做一个蓄意的游荡者》系列、“莫胡说”系列”《创作者手册:从播客开始说起》,播客系列和日常更新等。

Hello everyone! This issue of the After School signal tower is jointly hosted by Bawanghua Mulan, still in the UK, and our friend Chacha in the Netherlands. First, a preview: Episode 57 of the podcast, "Reading Together Through the Economic Downturn: Feeling the Faint Light, Resisting Petrification," goes live next Wednesday, September 24, at midnight Beijing time. Stay tuned!

I had Sunday and Monday off this week, and at the suggestion and invitation of a listener in Cambridge I took the train from Leicester for two days of sightseeing there. Beyond the famous university, Cambridge itself, under its temperate maritime climate, is a lush little green city: oak leaves turning yellow with autumn spin and dance in the wind; horse chestnuts, which look just like sweet chestnuts but are actually poisonous, litter the streets; and willows trailing to the ground sway in the wind, now gently, now fiercely, the very prototype of the Whomping Willow in Harry Potter.

I love walking through fields and woods. Together with my listener friend I found wild apricots, hawthorns, and blackberries growing freely, we talked about our lives and swapped views and ideas, and then we strolled along the river, watching punts glide down the Cam, catching the occasional burst of sunshine, letting the brisk, cool autumn wind blow, and getting caught in a shower that left as quickly as it arrived: a taste of everyday English life. England has the same climate as the Netherlands, and people love tending beautiful gardens; at times I couldn't tell which country I was in, and the familiar little details felt warm and homely. But one look at those quaint Gothic-style country cottages and you know exactly where you are: England. When I left Spain I was still in a blazing summer that refused to retreat, so England's autumn, windy, cool, damp, and prone to rain, is something I'm happy to experience and welcome with open arms.

Chacha, by contrast, went to Italy at the height of summer. At the time I considered joining her itinerary to explore Venice and Florence, which I hadn't yet visited, but having been roasted in Spain in June and baked for a few days in Milan in July, I had surrendered to the heat, and I told her the timing might be too hot and to mind sunscreen and hydration. To my surprise, Chacha not only set off on her own but thoroughly enjoyed the solo trip, and shortly after returning she sent in this account of her wanderings. Having read it, all I want to say is: never mind the heat, never mind whether you have a companion; go do what you want to do, go where your passion and curiosity still pull you. Follow your heart and enjoy yourself.

Below is Chacha's solo trip through Italy. Happy reading!

(A slogan I spotted while touring Cambridge: enjoy a big bold beautiful journey)

A note before we begin:

When the idea of a trip to Italy struck me, I was right in the middle of a burnout flare-up: work had worn down what was left of my will; the new place I had just moved into needed everything bought and repaired; roadworks outside my door meant deliveries couldn't get through and packages went missing; the water heater kept breaking down. Everything was pushing me toward collapse, and I had even lost my enthusiasm for dinners and dancing with friends.

Just as an allergy means getting away from the allergen, I decided, in Europe's holiday season, to give myself a push: pull away completely from work and daily life and go somewhere else to empty my head.

As a lover of historical gossip and cultural sites, choosing Italy was the obvious decision: sculpture, painting, architecture, and layers of history would be more than enough to make me forget the daily grind.

I couldn't find a travel companion, so I set off alone. Before leaving I was frightened by plenty of posts on Xiaohongshu, but encouraged by women friends who had been to Italy, I went anyway!

It turns out that the bold and careful get to enjoy the world; that never changes. Having a companion is great, of course, but if there's somewhere you badly want to go and no one to go with, a solo trip is nothing to fear.

Solo Trip to Italy: Communing Safely with Art and Architecture

In mid-to-late August I spent a week in Venice and Florence, and then I knew I was done for.

It wasn't only that Italy's food and scenery healed me; I hadn't expected that here I would manage to open my heart fully and safely and receive a baptism in beauty.

My flight to Venice left Eindhoven in the Netherlands at 10 a.m. I'm grateful to Mobugu for putting me up for a night so I could make it (the rail line between my home and Eindhoven airport had broken down the week before departure, and trains were being canceled constantly for no reason).

Landing in Venice, the city greeted me at once with blazing sunshine and an azure lagoon: so much sun!

(The view from my room)

Propped up on thousands of wooden piles, the island of Venice is like a raft of duckweed on the sea, holding up old dreams frozen in time.

Before this trip, my impression of Italy was of religious and secular power in collusion, with ordinary people held hostage as the sacrifice. The reality: Italy wasn't unified until the nineteenth century, and before that Venice was a republic run as an aristocratic oligarchy. The Doge's Palace, the Scuola Grande di San Rocco, and the Venetian school of painting overturned my stereotype: in a Catholic country, they found every possible way to balance power (among the nobility), kept religious interference outside the door, and insisted on the importance of the human being.

(In the great hall of the Scuola Grande di San Rocco, carvings of craftsmen from every trade run around the entire room; the point is that the foundation of the state was forged by the common classes.)

The sculpture and painting in the Doge's Palace and the Scuola Grande di San Rocco were not made as pure decoration but with deliberate intent: prosperity is not bestowed by God; it is created by people. We honor God because God gave us the strength to create.

The Doge's Palace: but God had better not meddle in how we run our affairs.

The Scuola Grande di San Rocco: we are proud of Venice's craftsmen.

The Venetian painters: divine or not, didn't God live among people when he walked the earth?

(The great hall of the Doge's Palace, with Tintoretto's Paradise, 22 meters by 9. He painted for decades, until his neck vertebrae deformed; I can't imagine what force sustained him. On the ceiling, God crowns the goddess of Venice. Yes, not any particular doge or bishop, but Venice herself. They really were proud of themselves.)

(The Doge's Palace's secret letter slot, where you could inform on colleagues or superiors over their heads. But if the investigation found the accusation was false, you could expect to walk the Bridge of Sighs, so no one dared denounce falsely.)

(Around the hall's ceiling runs a ring of portraits of the doges. The black cloth painted here means that this doge was expelled for trying to make himself a monarch, an enemy of the Venetian republic. Venice didn't hide the episode as a disgrace; it displayed it openly.)

(This painting is called The Feast in the House of Levi, though it is really the Last Supper. When the church commissioned Veronese, the brief was that it would hang in the church as a devotional painting. Church paintings are usually solemn, in the spirit of Tintoretto's Paradise, but not Veronese's: it's supper, not a last meal before the gallows, so why all the gravity? So onto his canvas came dogs, heretics, a Black servant, and onlookers from the building behind. The churchmen were beside themselves and told him to change it. The artist refused: he changed only the title, so it was no longer the Last Supper, and didn't alter the painting one bit. Whether he ever collected the final payment from his client, I don't know. Veronese: people are people; you have to paint them with human warmth. Religious sanctity? Isn't Jesus's head already glowing?)

During my three days in Venice there were countless moments when I seemed to hear the city speaking. I know this was my own feelings mixed with a bit of imagination, but one day, on the hotel's rooftop terrace, watching the sunset fall across the brick-red rooftops, a thought crossed my mind: during the Renaissance the sun must have set just like this. Only then, Tintoretto was still in the Doge's Palace painting Paradise against the wall; in the second chamber down the corridor, black-robed councillors had not yet finished their scheming; a few alleys away, the goldsmiths were just packing up their stalls on the Rialto Bridge. A perfectly ordinary day, so ordinary they could not know that what they took for eternity would not last: the Church declined, the Republic fell, and only the stone and the oil paintings remain, saying nothing.

And us? How will our ordinariness be erased?

Then I quietly cried a little on the terrace.

(The sunset I watched from the terrace that day)

Florence carries even more stories than Venice. It is so rich that from whatever angle you look at it, it is a feast. From the Medici family's perspective, this is the hometown where the dynasty rose, flourished, and was buried, and they turned the Florentine Republic into their own stage: assassinations, secret passages, intrigue, and more. From the perspective of art, the city has held more artists, craftsmen, and scientists than it can count, flourishing here and suffering here.

A month in advance I managed to grab a guided-tour ticket to Michelangelo's underground workroom, a tiny basement chamber beneath the Medici family's burial chapel. The entrance is extremely narrow, the room is under ten square meters, and its two small windows looked out, in his day, on the wall of the building opposite: closed in, cramped, dim. His charcoal sketches are drawn directly on the walls.

(These sketches were studies for his own sculptures. While he drew them, plague was breaking out beyond the city walls, and he himself, a traitor to the Medici who had nonetheless been hired by them as sculptor for the family chapel, worked here for years on end. The fear of death and the wretched working conditions hung over every moment of his creation. And he poured himself into charcoal and chisel, carving out the flame of his own life stroke by stroke. I believe it was truly because he knew his life could be expressed in no other way. Even without his family's approval, he would do what he believed was right.)

In fact, my first day in Florence I climbed Giotto's Campanile and saw the Cathedral of Santa Maria del Fiore. The campanile is a brutal climb: 141 steps, no air conditioning, stuffy, dark, and narrow. When I reached the top, I asked ChatGPT:

— Did the people who rang the bells in the old days have to climb up like this too?

— Yes, every day, several times a day.

— It's so steep, and there's no handrail. A fall could easily kill you.

— Yes, but that was medieval life. Suffering was, first, unavoidable and, second, something you got used to. If you died, you died.

(The campanile's staircase winds around the outside of the tower, loop after loop. Apart from three landings there are only palm-sized vent windows, so it is very dark and stuffy. Bell-ringers and maintenance workers climbed up and down every day; all I can say is, respect.)

(The cathedral's famous dome was built using wooden scaffolding raised more than a hundred meters high, with tools passed up level by level. Before that, the architect Brunelleschi even built a solid-wood model to win the commission, and spent several years before construction inventing a "hoist" to carry tools to the top. The dome fresco, The Last Judgment, was painted after the dome was finished by Vasari, standing on suspended planks atop scaffolding dozens of meters high, head tilted back, working bit by bit. Another victim of neck trouble, though he was lucky and had no accident; the accident rate for that kind of work at height was terribly high.)

A fact I had always overlooked: before modern notions of hygiene and tools to do the work for us, human beings were terribly fragile, and death was an ordinary, easy, momentary thing. Drink a ladle of river water today and you might die of dysentery tonight. That is why people relied on faith to withstand the shocks of life's impermanence, to dissolve the meaninglessness of a brief life flashing through the world and leaving nothing behind; why, amid day-after-day toil, they needed to look up at the church spire and know they were forgiven and watched over, redeemed and consoled; why, in a dark and hopeless life, they needed to believe there was a power that could lead them onward.

PS: Faith is faith; the Church is the Church.

I stayed three days in Florence as well. On my last full day before leaving I went to the Basilica of Santa Croce, where Michelangelo, Machiavelli, Galileo, and other famous figures are buried. Yes, they were all Florentines, but apart from Michelangelo, whose will said he wished to be buried in Florence, none of them left any instruction about where their bones should rest.

The weather in Florence was lovely that day; morning light fell on the church courtyard with all the feel of a Tuscan summer. The marble of the nave was still cool. I asked ChatGPT whether they would be happy to be buried here; it couldn't say either. After seeing Galileo's original tomb in a corner of the church, I returned to his monument in the nave: angels and saints surround him, along with statues of goddesses keeping guard and commemorating him. But I don't know whether he would be pleased to lie here, especially since he spent his whole life at odds with the Church and died without being vindicated. He knows nothing of the honors later heaped on him; even the belated apology is one he will never receive. A "memorial" like this is staged for the living; his suffering and his achievements are used to gild the city, as if it had always rewarded those who dissented and held to the truth. Truly ironic.

Galileo's original tomb looks like this: very plain, in sharp contrast to the splendor of the nave.

In the passage outside the nave there is a woman's tombstone. No epitaph, nothing at all, just set there in the corridor. Only by asking ChatGPT did I learn a little about her:

Giovanna Zampieri Altemps, a nineteenth-century Italian noblewoman from a prominent Florentine family, married into the Roman house of Altemps. The inscription at the bottom center of the stone is from the Song of Songs 2:10: "Arise, my love, and come away."

She seems like a fairy who drifted quietly out of this world. She lived, and the world holds no further trace of her.

Back home in the Netherlands that evening, I still couldn't calm down for a long time. The two cities had slapped beauty and history across my face without the slightest restraint. I was dazed, unable to land, and sure of one thing: I will go back.

Finally, a photo of the statue of Anna Maria Luisa, the last heir of the Medici. It was her will that all the family's collections and palaces be opened to the public as museums and remain in Florence forever. Without her, none of what we can see today would exist.

Thank you to her for making the happiness of this trip possible.


MP Materials, Intel, and Sovereign Wealth Funds

We have a ChinaTalk meetup this coming Thursday in SF. Sign up here if you can make it!


Uncle Sam is taking a bite out of companies left and right. Today, we’re going to focus on MP Materials — the Trump administration’s answer to China’s restrictions on rare earth material exports to America.

To discuss, ChinaTalk interviewed Daleep Singh, former Deputy National Security Advisor for International Economics, now with PGIM; Arnab Datta, currently at Employ America and IFP; and Peter Harrell, former Biden official and host of the excellent new Security Economics podcast.

Today, our conversation covers:

  • How China achieved rare earth dominance,

  • The history of rare earth mining and refinement in the US,

  • What the MP Materials and Intel deals do, and whether they can succeed,

  • The key ingredients for successful industrial policy and imagining a sovereign wealth fund.

Listen now on your favorite podcast app.

Broken Markets

Jordan Schneider: Why do deals like MP Materials even need to happen in the first place?

Daleep Singh: Critical minerals markets are broken for three main reasons.

First, there’s concentrated market power. China refines 70 to 90% of most minerals that we need to power clean energy, digital infrastructure, and defense systems. They have enormous market power — not just over supply, but also pricing, standards, and logistics. No market can be resilient if one player dominates the entire market ecosystem.

Second, there’s extreme price volatility. Prices for minerals like lithium, nickel, or rare earths swing far more violently than oil and gas. For producers, this creates asymmetric risk — if you undersupply the market, you may lose some profit. If you oversupply, you may go bust. That asymmetry deters the investment we need to expand supply when quantities are low and prices are high, preventing the market from clearing.

The third problem is that we don’t really have market infrastructure for critical minerals. For oil, we have futures exchanges, benchmark prices, and deep liquidity. For most critical minerals, we don’t. Transactions are opaque, bilateral, and heavily distorted by state intervention, especially China’s. Markets don’t provide price discovery, and producers and consumers don’t have hedging tools. Investors lose confidence in these markets and walk away.

Taken together, we have chronic underinvestment, chronic gaps between supply and demand, and chronic vulnerability to geopolitical shocks. Those are the problems.

Jordan Schneider: Daleep, let’s dig deeper into the market infrastructure piece. What does this mean in practice — that it’s not like WTI, Brent, or something similar?

Daleep Singh: If you’re a producer, you need tools to manage price volatility. When prices fall dramatically, you need the ability to continue generating revenue to stay liquid. You need futures markets and option markets that you can use to hedge against downside price risk. Right now, if you’re a critical minerals producer for most of the minerals that matter for our economic security, you don’t have that option.

You also need price discovery — to know where prices are in the market. We really don’t have genuine price discovery from any of these markets. China can decide, just by virtue of its dominance in supply, where it wants the price to settle. If it wants that price to settle at a level that wipes out the competition, that’s its choice. That’s not a market.

Arnab Datta: One quick piece to add is that the market infrastructure problem Daleep mentioned was really an intentional strategy by China. In addition to very robust industrial policy that provided substantial subsidies to producers and refiners, they stepped into the gap left as market infrastructure retreated in the West, particularly after the global financial crisis.

When you saw liquidity leave Western markets partly because of regulations passed during that time, China seized the opportunity. They built exchanges and benchmark contracts on Chinese exchanges so they could control that market infrastructure and how these prices were constructed.

Peter Harrell: I’d add two important pieces.

First, America’s dependence on China for rare earths is actually a relatively new problem. Historically, going back several decades, the US actually produced, mined, refined, processed, and manufactured plenty of rare earths in the 1950s, 1960s, really through the 1980s and into the 1990s.

Isadore Posoff, WPA. Source.

It was in the 90s and 2000s — the era of peak globalization — when China successfully expanded its rare earth refining in particular. You saw Chinese firms begin to outcompete American firms, and a real decline in US manufacturing related to this consolidation of Chinese control. This isn’t because the US never made rare earths. This is really a problem of economics that emerged in the 90s and 2000s.

Second, we saw just a couple of months ago the critical risk that dependency on China for rare earths creates, because it became part of the trade war Trump launched with China. Back in April, China retaliated by threatening to cut off — and then actually cutting off — its exports of rare earths to the US, which had the potential to really impact manufacturing here. It became much less of a hypothetical long-term risk and much more of an immediate threat that could actually hurt the United States in the near term, because of how China responded to Trump’s trade war in April.

Arnab Datta: Just to add to the WTI comparison — if you think about how WTI is priced, it’s a physically cleared contract. You’re purchasing a barrel that will be delivered at Cushing, Oklahoma. The pricing incorporates pipeline transport, logistics, and a whole infrastructure of traders, logistics providers, and port managers — all of that goes into the price of that physically delivered barrel at Cushing.

That’s something we just don’t have in the context of many of these newer metals markets. It’s very difficult to properly price a material when the only analog you have is a Chinese benchmark that potentially has very different constraints and very different characteristics.

Strategic Resilience Reserve

Jordan Schneider: This became very acute a few months ago when Trump imposed tariffs. Something that people have been talking about in Washington for literally 20 years — China using its role in the global rare earth export market to punish countries for doing things they don’t want — finally manifested. Trump walked back, and now we have this as a central thing that China and the US are tussling over.

Peter, Daleep, you guys aren’t dumb. You knew this was an issue. People have been writing about this for a very long time. What is the activation energy required in the 21st century to do the kind of industrial policy necessary to really change the dynamics on an issue like rare earths? Why have we only seen small, half-formed efforts until spring and summer 2025?

We have a Washington that has talked about the problem for a very long time now starting to spend nine and ten figures to address it in a more direct way than the incremental efforts folks had been pursuing. Peter, talk us through this deal. What came out of the Trump administration and the DoD over the past few weeks?

Peter Harrell: As you said, this isn’t a new problem. Policymakers have been aware for more than a decade that there was US dependency on China for rare earths. The Chinese had cut off their exports of rare earths to Japan back in 2011 or 2012. We’d actually seen the Chinese execute this playbook once before on an allied country.

This isn’t a new problem, and it’s not that there were no efforts to deal with this issue prior to the deal that the Defense Department announced in July. There were some efforts — previous grants, including to MP (the company that got the deal in July) to try to restart manufacturing and processing of rare earths in California where there’d been a longtime US mine. Actually, the mine had reopened in 2017.

There had also been some grants to other companies and universities to look at other ways of mining and processing rare earths — for example, to extract them from mine tailings in West Virginia. There had been some government money to try to sponsor innovation to reduce dependencies on rare earths, maybe create magnets and other products that you need rare earths for but without actually needing the rare earth elements.

There had been some policy processes and policy money put into trying to address this problem. But there were a couple of challenges with those prior efforts. First is just the scale of the effort. Frankly, the way Washington works, until there is a very acute crisis, it can be hard to mobilize the scale of effort that is actually needed to solve it. These prior efforts were much smaller in dollar spend and scope because the crisis seemed less acute. That’s just a political reality of how Washington works.

Second, this is a very complex issue. I don’t even think this new DoD deal with MP is going to be the whole solution. It’s going to require several parts. It is, in fact, a very complex issue.

Third, related to mobilization: solving a problem like this is going to cost money. You get into big debates about who should pay for it — should US taxpayers come up with the money, or should you make the private sector bear these costs? That adds to why it takes time. It’s not that there was nothing before — there was some foundation that this deal is now building on. But Daleep, I’d welcome you defending our work together in the Biden administration.

Daleep Singh: Jordan, I appreciate you suggesting that we’re not dumb. That’s nice — we don’t always get that. But look, there have been piecemeal efforts to funnel public money toward private sector companies that could help produce minerals we need. What we haven’t done is fix the market. That’s where we are now.

I started thinking about reimagining the Strategic Petroleum Reserve into a Strategic Resilience Reserve for 21st-century vulnerabilities.

When prices crash and China continues to flood the market, we have this recurring problem of producers going bankrupt.

A Strategic Resilience Reserve could be a buyer of last resort or provide bridge financing to companies that are solvent but illiquid. That’s what could allow producers to keep producing during downturns and keep production capacity alive.

What can we do about investors not having confidence in these markets? If you don’t have futures markets and hedging markets, and refiners can’t lock in predictable revenues, could a Strategic Resilience Reserve step in with tools like selling a put option that allows you to make money when prices fall? Could it provide a price floor or some type of demand guarantee? The point is: can you create enough certainty for private capital to keep flowing?

What do you do about concentration risk? Even with a deal like MP, no country is going to mine its way to self-sufficiency when we’re up against what China has. But we do have producers and miners in places like Canada, Australia, and Finland. They’re hesitating to expand production because they know China can tank prices tomorrow.

An SRR, if we got that authorized, could provide demand backstops and offtake agreements. Could it intervene in the market so that producers in allied countries know they’re not going to go bankrupt if Beijing floods the market? That’s the idea we’ve started to develop over time — probably with some mistakes — to change the market itself rather than a series of ad hoc transactions that don’t alter the economics.
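To make the put-option and price-floor ideas above concrete, here is a minimal sketch of how a put-style floor changes a producer’s cash flow in a downturn; the prices, volume, and function name are illustrative assumptions, not terms of any actual SRR proposal.

```python
from typing import Optional

def producer_revenue(market_price: float, volume_kg: float,
                     floor_price: Optional[float] = None) -> float:
    """Revenue for one period, optionally protected by a put-style price floor.

    Without a floor, revenue simply tracks the market price. With a floor (a put
    struck at floor_price), the payout max(floor - market, 0) per kilogram tops
    revenue back up, so downside below the floor is capped.
    """
    revenue = market_price * volume_kg
    if floor_price is not None:
        revenue += max(floor_price - market_price, 0.0) * volume_kg
    return revenue

# Illustrative crash from $100/kg to $40/kg on 500,000 kg of annual output:
print(producer_revenue(40.0, 500_000))                    # 20,000,000 unhedged
print(producer_revenue(40.0, 500_000, floor_price=80.0))  # 40,000,000 with an $80 floor
```

The point of the sketch is simply that a floor caps the producer’s downside at a known level, which is what could let solvent-but-illiquid producers keep operating through a flood of supply.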

Jordan Schneider: SRR — a Strategic Resilience Reserve — a topic we’ll get to in a few moments. I’d also like to say in defense of the last 20 years of American policymaking that this was a latent threat, and the trajectory of US-China relations that made this become an actual threat has manifested relatively recently.

The fact that the Biden administration was able to “get away” with imposing semiconductor export controls, implementing big tariffs, essentially banning Chinese electric vehicles, and a handful of other measures without triggering this response is important to recognize. This is only a problem in the context of the US-China diplomatic relationship. Without that relationship souring, we just get to use some subsidized magnets and the world moves on.

Peter, what was your thinking about trying to inch forward with more and more aggressive economic tools while seeing things bubbling up in terms of new Chinese legislation but not wanting them to hit back for the efforts you were making?

Peter Harrell: When I think about how one can solve a problem like our dependency on China for rare earth elements — and then we can unpack what this deal will and will not do — you need to think about several different categories of policy tools that you need to mesh together to solve the problem.

We’ve had this history in American industrial policy over the past decades where we’ve focused almost exclusively on what you might think of as supply-side industrial policy. We’ve given grants to companies to build a factory or a mine to do something. In some cases, that can be sufficient because the problem we need to solve is one of startup costs. It costs more to get something off the ground in the United States, and you can provide a capex incentive to help get it off the ground.

But when you look at China’s dominance of rare earths — where they not only have already spent a lot of capex, but their operating expenses are lower than in the United States and they control the market infrastructure — if you want to break China’s control here, you can’t solve it simply with our own capex.

You also need to think about the market infrastructure, as Daleep says, and you need to think about what the demand side looks like. If US operating costs for producing rare earths are going to be higher than they are in China, you have to find some demand for that higher-cost US product. Otherwise, US companies are going to keep buying Chinese products because the Chinese products are going to be cheaper.

You need to create a market infrastructure that’s going to ensure stable demand for the US-made product. Layering these things together — these different sets of policy tools to address the different parts of the chain — is not something the US government has done in a long time. You have to get your reps in and spend some time in the gym before you can do it.

Daleep Singh: Peter and I used to sit in the part of the White House that straddles economics and national security. For most of us, very early in the term we understood — especially as Russia’s forces were massing on Ukraine’s border — that we’re going to be in this incredibly contested geopolitical environment for the rest of our lives. China and Russia have now made it very clear that they’re going to challenge the US-led order everywhere. Because today’s great powers are nuclear powers, our expectation became that this competition is going to play out mostly in the theater of economics, energy, and technology.

The question was, if we’re going to prevail, how can we harness the financial firepower of the world’s most dynamic financial system to advance strategic objectives? Do we have the right tools, do we have the right institutions to overcome this short-term profit motive that drives most of what’s going on on Wall Street? The answer is no. As time went on and we started to have time to breathe, we started to think about new ideas. That’s where the Strategic Resilience Reserve came up. We also started to think about whether the US should have a sovereign wealth fund. These are all ideas trying to solve the same problem: the private sector systematically underinvests in exactly the kind of projects that matter most for our economic security and for our national security.

Can the Deal Create a Market?

Jordan Schneider: What does this MP Materials deal do? What is interesting and exciting about it? And why is it not the systemic solution that Daleep craves to manifest?

Arnab Datta: One thing this deal does is treat the problem holistically. Peter mentioned that you need a mix of supply-side and demand-side tools. The administration deserves credit for using the DPA, the Defense Production Act, in a robust way. They are applying a toolkit that includes loans, equity investments, price floors, and a guaranteed offtake contract for the finished product. That’s a recognition that, if we’re going to deal with this problem over the next one, five, or ten years, or over decades, we need a robust toolkit and a mechanism by which we can address these very challenges.

Jordan Schneider: Arnab, briefly, who did this? This is very sophisticated, impressive work. It’s a lot of puzzle pieces which haven’t been put together in a very long time.

Arnab Datta: It was done through the Defense Department. It pairs a number of different authorities. I would say the most creative, atypical interventions were through the Defense Production Act — this is Title 3 of the Defense Production Act. It has very wide authority attached to it. Peter did a recent piece in Lawfare examining this, but it basically allows you to engage in a number of different transaction types to achieve the goal of building our defense industrial base. There’s also some capital from the Office of Strategic Capital. That’s where the loan is coming from.

One thing to keep in mind is that some of this funding is not yet spoken for. Over time you could imagine money coming from different parts of DoD, such as the national defense stockpile. They’re going into this with a commitment and a very clear interest in and effort toward continuing with this deal. But there are some risks, and there are also some structural challenges with this deal that I’d be happy to go into as well.

Jordan Schneider: Peter, give us the flip side. What doesn’t this accomplish and solve?

Peter Harrell: Let’s first walk through what this deal is, because there was some news last month when it came out. I think a lot of the news focused on the fact that the Defense Department, as a piece of this deal, was taking equity in MP Materials, which now looks like a precursor for the Trump administration going out and taking equity in Intel and maybe a whole bunch of defense companies and everything else. I think that was the piece that attracted the news. But the deal is a fairly complicated deal that has a couple of different parts.

Part one of the deal is that the government gave MP Materials, this mining company, some loans and then some cash as part of the equity stake to expand its mine in California, not that far from Las Vegas — Las Vegas is the nearest big airport to the mine, but the mine is in California. The money is to expand production at the mine and then, relatedly, to build a new facility that takes some of the rare earths produced at this mine and manufactures them into magnets, because what we need is not raw rare earths. What you need are magnets that go into motors and turbines and all kinds of other things. There’s almost no magnet manufacturing in the US, and in fact this mine had previously been producing rare earth ore and then selling it to China to be made into magnets there.

《日月浮沉》— copperplate print by Liu Kuo-sung 劉國松. Source.

Part of this is a capital injection to MP to expand the mine and to build some magnet processing — expand some magnet manufacturing capability here in the United States. They’re doing that with both a debt and equity stake.

Another part of this deal is the Defense Department set a price floor for the raw rare earths, where the Defense Department has guaranteed that when MP is mining and doing initial processing for the raw rare earths, it now has a guaranteed minimum price, which by the way, is about twice what the current Chinese market price is.

That’s how you guarantee that it’s economical for MP to make this stuff over the next ten years. Because DoD said, “Even if the market price is $54,” which is about what I think it is today, “We’re going to guarantee a price of $110 per kilogram. We’ll pay you the difference between $54 and $110 per kilogram.” You have this price floor for the minimally processed rare earths. Then on the magnet side, DoD also said, “We’ll buy all of your magnets. You can produce these magnets for the next ten years, and we’ll buy all of them.”
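As a back-of-the-envelope illustration of the floor mechanics, here is a minimal sketch of the government’s top-up payment using the roughly $54 market price and $110 floor cited above; the volume and function name are hypothetical, and the actual contract terms are more involved than this.

```python
def government_topup(market_price: float, floor_price: float, volume_kg: float) -> float:
    """Cash the government owes to bring the producer up to the floor price.

    Nothing is owed when the market price is at or above the floor; below it,
    the government pays the per-kilogram shortfall on every covered kilogram.
    """
    return max(floor_price - market_price, 0.0) * volume_kg

# Roughly the figures from the conversation: ~$54/kg market vs. a $110/kg floor.
# On a hypothetical 1,000,000 kg of covered output, the shortfall is $56/kg:
print(government_topup(market_price=54.0, floor_price=110.0, volume_kg=1_000_000))  # 56,000,000.0
```

One implication worth noting: the further the benchmark price falls (say, if the market is flooded), the larger this payment grows, which is the fiscal exposure Arnab flags later in the conversation.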

There are some interesting pieces, such as: if DoD and MP jointly agree that some of the magnets can be sold to buyers other than DoD, there will be some profit sharing and other provisions. But it’s actually a pretty complicated deal with interrelated parts, which very clearly does ensure a viable business for MP over the next decade. MP gets a capital injection. MP gets a guaranteed price floor for its rare earth concentrates — minimally processed rare earths. And then MP has a guaranteed buyer for its magnets.

MP is taken care of for the next decade and will be able to scale up production of both the minimally processed rare earths and probably of magnets.

But that doesn’t mean we have a market here. What we have is a market for MP.

That’s where I think there’s some interesting questions about this deal. Are we right to bet all in on MP as a national champion, or should we be thinking more systemically about the markets and less about how we guarantee the success of this particular firm? Arnab, I know you have a lot of thoughts on that piece of it.

Arnab Datta: We have a forthcoming article on the topic. We’re hoping to get it into Alphaville, but they’re working it up the chain; we’re not fully signed off yet.

Jordan Schneider: In this piece, Peter and Arnab, you point out that this is similar to Chinese industrial policy circa the Mao era, not the version 2.0. You’re picking one winner. And by the way, this company is probably not the best-managed company in the world, as opposed to the way China does it now, where you have lots of firms fight it out to be the top dog.

Once you whittle it down to not one, but five or seven, then you start really turning on the jets and pouring on the money to secure your position in the global marketplace. As Daleep alluded to, this is also a concern with Intel.

For what it’s worth, I do think manufacturing at the leading edge probably doesn’t support as many entrants as just building some mines and making some batteries does. But there do seem to be some tricky incentives, and a lot riding on, say, their head of mining not going to a Coldplay concert with their head of HR. Daleep, where are you on this as an approach?

Daleep Singh: It makes me think of Intel a lot, and I realize we’re talking about very different markets, but I have the same take on it. Let’s actually pivot for a moment to Intel. There definitely needs to be government intervention in both of these markets. With leading-edge semiconductors, we don’t produce any of them. Intel’s the only US firm capable of making them. But it has no customers, and without customers Intel can’t scale, its unit cost efficiency stays low, and its competitiveness lags. Market forces aren’t going to solve that problem, nor will they solve the problem for MP.

But what gets interesting is instrument choice. What I worry about is ad hoc improvisation about what tools of industrial policy to use for particular sectors with a different context and a different kind of problem to solve. What I come back to is the systematic stuff. We do need a playbook, a governance structure, a doctrine for industrial policy. Start with the strategic objective. What problem are we trying to solve? Whether it’s MP or Intel or any other company, what is the market failure? Is it a shortfall of demand? Is it a capital constraint? Is it a cost differential? Is there a coordination problem? Is there some national security externality?

Then the third step is: pick the policy instrument that remedies the failure. Don’t default to equity injections or subsidies if the problem is demand, for example. Can you actually intervene? This goes to Peter’s analysis on MP. Does the intervention sustain competition and does it avoid a single point of failure? I would try to avoid substituting a foreign monopoly for a domestic point of failure. Can you tie the support to milestones, objective milestones, so that you can claw back the support you’re giving from taxpayers if they underperform? Can you sunset the support to avoid permanent dependence?

The last thing is how are you measuring the strategic return? What is the metric for success with this deal? It can’t just be for financial gain. How are we going to measure the benefit in terms of resilience, security, technological edge? That’s what’s missing for me. Maybe it’s out there somewhere, I just haven’t heard it.

Arnab Datta: I’d add a couple of things to that. This is a national champion crowned without a contest. We actually have a pretty robust, vigorous competitive process playing out right now in the magnets space. There are other companies. MP Materials has the Mountain Pass Mine, but it has never produced a commercial magnet; it has not sold a rare earth magnet at commercial scale. When you think about the challenges of selling commercially — automotive is a major purchaser of these magnets — you need to get your production facility warranted, and that’s a long process. We don’t know right now whether they could get warranted for automotive. They might not. It’s a very challenging process.


We do have competitors that are innovating. There’s a company based in Minnesota, Niron Magnetics, that has produced a rare-earth-free magnet. That’s the best of America, in my opinion: innovating yourself out of the vulnerability. I don’t know whether Niron can reach the commercial scale we need at this point, but I also don’t know that about MP Materials. When you start to get into the policy question of whether intervening in this single company is the right call, it raises a lot of thorny secondary issues.

This is a bet on vertical integration for rare earth magnets, that’s what they’re trying to build here. With MP Materials, that might be a good thing. A lot of the Chinese champions are vertically integrated, but there’s also a world where vertical integration on its own creates its own vulnerabilities. We see this a lot in the metals space where when we need to increase production because of some challenge, it’s not the vertically integrated producers that are responding quickly to price swings. It’s the marginal producers, the independent producers. This is something very common in metals markets. It’s something very common in the oil sector as well. These are really important policy questions.

My biggest overall concern with this deal is that I don’t know what the reasoning is. It’s possible there are very well-thought-through reasons, but these are decisions that need to be made through some kind of process with technocratic and democratic legitimacy. That’s why Daleep’s point about a systemic solution is so important; we need to make sure these decisions are made in that context. I’m not opposed to equity investments as such. I think they’re an important tool for the government to have: they let you push out the risk frontier for your investments, and they let a program participate in the upside. But it needs to be done in a very thoughtful way. It’s a very powerful tool, and we need to think about whether we’re cultivating the things that make the American system dynamic — competition and technological innovation.

Daleep Singh: Can I ask Jordan, what is the exit strategy from the MP deal? Is it tied to production capacity or profits? How is the government going to sell down its public stake if at all?

Peter Harrell: The SEC filings talk about the government taking the stake. In addition to the price floor and the guaranteed offtake agreement for the magnets, the government also, in a belt-and-suspenders approach, guaranteed MP an annual profit of $140 million, which it will pay in cash if the company’s operations don’t generate it. Presumably the government intends to hold its equity for at least the ten-year duration of the other elements of the deal, but there’s no specific language in the SEC filings about an exit plan, just the equity and the decade-long duration of the other parts of the deal.

Arnab Datta: It’s structured as a ten year deal. I think ultimately the expectation is that the price floor and the offtake agreement will end at that point. But there’s no protection against the dependence. How do we stop this from becoming something that’s permanently dependent on this subsidy? It’s not clear.

It also doubles down on the Chinese market infrastructure. The benchmark that they are using is the Asian metals benchmark. That brings in the risk of manipulability too. China can bleed DoD for hundreds of millions more by flooding the market. How long is Congress expected to continue appropriations for that? These are not paid for. The one thing that was very clear in the 8-K is that they don’t have appropriations for all of this. How long can we expect Congress to keep paying? I think it is a very reasonable question as well.

Maximalist Industrial Policy

Jordan Schneider: I want to ask a strategic question. What are reasonable goals over a three-year, five-year, or ten-year horizon when it comes to rare earths in particular? More broadly, what types of things would you want the Strategic Resilience Reserve to touch on?

Arnab Datta: There are a couple of key objectives that we’re trying to build here.

First, can we build a governance structure that is independent, technocratic and driven by market realities and not by political exigencies or other factors?

Second, can we build that robust toolkit we talked about earlier for different markets? Rare earths, as we’ve discussed, have particular needs: they’re smaller than some of the bigger metals markets, and it’s not clear you need a futures market for every rare earth out there. But that’s a major goal as well.

Third, I would say the explicit purpose of what we’re trying to do here is build that competitive market. Are you supporting the buildout of a market infrastructure that is tied to market dynamics that US and allied producers face? Are we doing lending with intermediaries that can engage in more trading activity because they’ve got the leverage that left the market in the 2000s and 2010s, as I described? That’s an important piece of it because over five to ten years, if we can have a more stable market infrastructure for US and allied producers that reflects the costs they face, the logistical challenges they face, ultimately you’ll have a better stable foundation in place for those producers to compete.

Jordan Schneider: Beyond solving the market plumbing for things that would fall into strategic resilience, what is the big bold version of the systemic and thoughtful way to do the sorts of things that we’ve seen over the past few months with MP and Intel and we’ve seen over the past few years with the CHIPS Act and the IRA?

Daleep Singh: The maximalist version is a sovereign wealth fund. If you believe that the private sector systematically underinvests in projects that we need most for economic security and national security, then we’re not going to invest as a country at pace and scale to build fusion plants, dozens of semiconductor fabs, next-generation lithography, 6G telephony, or advanced geothermal. We’re also not going to invest enough in old-economy sectors where we need to blunt a competitive disadvantage. Think about shipbuilding, lagging-edge chips, or mining.

What all of these projects share in common is that they require a lot of upfront capital and a decade or more of patience to generate a commercially attractive return. You need a huge tolerance for risk and uncertainty. Private sector investors, venture investors in particular but also corporate America, are not likely to touch these at the size we need, because they have plenty of other opportunities to make faster, higher, less risky returns. That’s why we have this valley of death between breakthrough research and commercial scale.

I think the maximalist way to solve this problem is to create a flagship investment vehicle that gives the US patient, flexible capital, that can step in where markets won’t and that can crowd in private investment and back projects with genuine strategic value. That’s the case for a sovereign wealth fund. It’s not about picking winners, though. It’s about picking supply chains and technologies where our national security and our economic resilience are at stake.

It’s premised on the idea that left to itself, the US’ financial system is not designed to maximally align with our national interests. We need to intervene.

Jordan Schneider: I remember first reading your and Arnab’s piece on this a few years ago and thinking it was unlikely, but now Trump is into it. I wonder whether he would have been as excited about the concept if it weren’t called a golden share. But you do enough one-offs, and you learn that the one-offs have mistakes and that you aren’t getting a systemic solution. It can go both ways: either you give up on the project entirely, or, given that the broader strategic purpose keeps rearing its ugly head, you start thinking in a larger and more systematic way about attacking these problems.

Let’s go a level down. How are we funding this? What’s our governance structure? How’s the democratic involvement?

Daleep Singh: Whether you’re focusing on the MP deal or the 10% stake in Intel or the 15% revenue share from Nvidia or the golden share in Nippon, the point is we have a choice. Either we can improvise and experiment or we can develop a framework. Because I think the problem with improvisation is that if we just reach for different levers — an equity stake here, a profit share there, a golden share somewhere else — if we don’t have an overarching framework for why we’re using these tools and when and how and to what extent, I worry that this has the makings of a political piggy bank and a national embarrassment.

I understand some degree of experimentation is going to be needed. We haven’t done industrial policy in 40 years, and the muscles have atrophied. I get it, let’s take small steps and learn from those steps and then recalibrate. But I’m not in favor of ad hoc capitalism with American characteristics because that’s inevitably going to pick favorites and distort incentives.

You’re asking the right question. How do you govern a sovereign wealth fund or a Strategic Resilience Reserve the right way? How do you fund it? On the sovereign wealth fund idea, my thinking is we’re asset rich as a country. The federal government owns about 30% of the land. We have extensive energy and mineral rights. We own the electromagnetic spectrum. We have infrastructure assets all over the country. We’ve got 8,000 tons of gold that’s valued at 1934 prices. We’ve got $200 billion of basically money market assets that are sitting idle. The question is, are we maximizing the strategic bang for the buck on those assets? I would say no. That’s one potential source of funding.
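To give a sense of the scale of that gold point, here is a rough, purely illustrative calculation of the gap between an old statutory book value and a market valuation; the $35-per-ounce figure corresponds to the 1934 statutory price Daleep alludes to, and the market price and tonnage conversion are assumptions for illustration, not official figures.

```python
# Rough, illustrative arithmetic only, not official Treasury figures.
TROY_OZ_PER_TONNE = 32_150.7       # troy ounces in one metric tonne
tonnes = 8_000                      # approximate US gold holdings cited above
ounces = tonnes * TROY_OZ_PER_TONNE

book_value = ounces * 35.0          # carried at the 1934 statutory $35/oz
market_value = ounces * 2_500.0     # assumed illustrative market price per oz

print(f"Book value:   ${book_value / 1e9:,.0f}B")    # ~ $9B
print(f"Market value: ${market_value / 1e9:,.0f}B")  # ~ $643B
```

The roughly seventy-fold gap between the two numbers is the sense in which assets like this sit idle on the books rather than being counted at anything like their strategic value.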

You could also create new revenue streams to fund the vehicle. If you think that the US has too much Wall Street and not enough Main Street, that we financialize the economy into a series of boom-bust asset cycles, then let’s raise revenues from financial activities that serve no strategic purpose. I would say high frequency trading, for example, and fund vehicles that are explicitly designed to advance our national interests.

Jordan Schneider: As long as we stay away from fixed income.

Daleep Singh: Exactly. That’s untouchable. But the most appealing approach is the most straightforward one: ask Congress, and be straight up about it. Ask Congress to seed the fund and authorize its existence as an independent, federally chartered corporation. This is too important to leave entirely to the executive branch; have Congress set a clear mandate in terms of objectives, metrics for success, oversight, and the democratic accountability Arnab was pointing to earlier. It’s a shame we didn’t do this ten years ago when our cost of capital was near zero; that would have made the effort far more affordable. But this is about our long-term national competitiveness. We don’t need to try to time the market.

Arnab Datta: One model that we think about a lot at Employ America is the Federal Reserve. The Federal Reserve has an independent board still, knock on wood. But that’s a structure that is well insulated from political day-to-day activities. It is not a 51-49 majority power structure. It has staggered terms, which, in my opinion, lends itself to depoliticization that’s helpful and has served us well over time.

On the congressional point Daleep made, we have had a version of this. We’ve worked with Senator Chris Coons’s office since 2020 on his proposal to establish an Industrial Finance Corporation, modeled on the Reconstruction Finance Corporation we had in the 30s, 40s, and 50s. We had then-Senator Vance on as a co-sponsor, so I don’t think the political viability of something like that is small. The way we structured it, Congress appropriates capital as a backstop against the corporation’s own borrowing: the corporation can go out and raise money by issuing bonds and then deploy that capital toward the investments Daleep mentioned.

One value add about that is you don’t need to compete with the private sector on the rate of return, but you can generate a rate of return. Ultimately that type of a structure could pay for itself. There are a lot of technical accounting rules related to how you would structure that, particularly the Federal Credit Reform Act would come into play. But that is a structure that I think could be viable over time and we have the money to do it. Ultimately because a lot of these investments would be productive over 5, 10, 20 years, I think it would pay for itself.

The Right Tools for Intel

Jordan Schneider: I can’t let you guys leave without a few more Intel takes.

Arnab Datta: I’ve seen two separate conversations happening. One is on the legality of this and another on the policy justification. Peter did an excellent piece in Lawfare that came out a couple of days ago. This is possibly legal in a very technical sense, but does probably violate the spirit of the CHIPS Act in that the CHIPS Act is intended to incentivize manufacturing investments — they are giving this money to Intel but relinquishing most of those requirements. Earlier, we talked about milestones that companies should have to meet. Intel had a bunch of milestones attached to this money. They couldn’t get it all until they reached those milestones. They now have this capital, but they don’t have to meet those milestones. I think that’s a big problem.

Separate from the legality, why was this the best policy for Intel? It’s not clear. As Daleep mentioned earlier, they need customers, and an equity investment is not going to help them in that sense. For all we know, the share price could fall and our investment could lose value because they can’t find customers. I think it’s a big problem that we’re not approaching the question of how to make Intel more competitive. We seem to be approaching it in an ad hoc way: how can we get the most for our dollar in the form of a deal, an equity deal?

Daleep Singh: That’s my main concern — the right tools here. I agree with the intervention, but the right tools have to come from the demand side. Procurement guarantees, offtake agreements, sourcing mandates — all of those ideas make a lot of sense to me. It’s not clear how the equity injections fill the demand gap.

When you make upfront equity investments, you are forgoing optionality. I would have liked to see warrants or options that are tied to success. In general, I think policy support should be conditional: conditional on whether you’re reducing unit costs, diversifying customers, or hitting your production capacity targets. I do like the idea of clawbacks. The government has lost a lot of optionality with an upfront common equity injection. Maybe there’s a lot in the fine print that we don’t understand, but that’s what I found lacking.

Peter Harrell: I just echo what Daleep and Arnab said. The specifics of this deal are troubling. The idea of policy support, financial support to have onshoring of US semiconductors — clearly needed, clearly broad, bipartisan support. The idea that we shouldn’t be dependent on TSMC, the Taiwanese semiconductor firm for leading edge manufacturing, I think also has bipartisan and sensible policy support. You want to have some competition and some optionality at the leading edge of semiconductor manufacturing.

But what this deal did was take a grant under which Intel was getting $11 billion in exchange for investing — call it $80 billion in fabs over the next decade. Intel was going to get the $11 billion in tranches as it built the fabs, and if it failed to build them, there was going to be a clawback. Now Intel is getting about $9 billion of that money in exchange for the stock, plus it has to complete building certain DoD specialty lines.

Most of the obligations to build fabs went poof, and they got the cash in exchange for stocks.

I get why Intel might have done it. They get cash that’s largely unrestricted. They dilute their existing shareholders, but they probably decided the cash is worth it for us to do whatever we want with it. Reasonable call from Intel.

Arnab Datta: I’m also thinking about warrants. In all likelihood they’re using something called other transactions authority to legally justify this deal. Other transactions authority is an incredible gift to the Commerce Department; it lets you design very diverse policy mechanisms. In my opinion, spending it on an equity investment with so little attached is a real mistake. They could have put that effort into something creative that went to the root of the customer problem, and instead they’re squandering it.

Jordan Schneider: I think what you all said makes sense under a normal presidency living in the year of our Lord 2025. The way Intel survives is it gets customers, and the way it gets customers is Trump terrifies CEOs. If 10% of the company is what Trump can do to terrify CEOs, then all right, we’ll see. When we were talking earlier about MP Materials, it’s really not rocket science. You could have a beauty pageant with five different companies all trying to mine different places and have something. There’s one horse in this race and at a certain point you have to hope that they can execute as long as the demand’s there.

My sense and hope is that with a golden share, owning 10%, Trump will care more, be more invested, and put more of his cycles and wrath into rounding up a handful of people who will spend the time to deal with Intel and help them get back on track. Regardless of whether it was warrants or a grant or equity, whether Intel can catch back up to TSMC is going to be a function of execution, and of a president turning the screws on US fabless customer companies to play ball with Intel. The fact that Trump cares about this and is focused on it is something I would not have fully priced in from the get-go. He was literally talking about having to fire Pat Gelsinger — probably the only person who could actually execute this right, the person I trust more than anyone else on the planet who doesn’t currently work at TSMC. I’m more bullish on this than you guys are.

Arnab Datta: Can I offer one pushback on that, Jordan? Yes, there is a tremendous focusing mechanism; you saw this with MP, where just a few days after the announcement Apple signed a big deal with them, a $500 million deal. But at some point the market has to trust that Trump’s commitment to this company will continue. President Trump is not going to be president forever, and Intel is not going to be operating only on a four-year timeline. At some point Intel is going to require commitments from other companies, and at some point those companies might turn around and say, this guy’s not going to be president anymore; we’ve got someone else to please.

Certainly I take your bullish case. But Intel can’t survive only on that. They need an outside market and they need potentially capital from external sources down the line. At some point we’re going to be in a post-Trump world and it could look very different for Intel.

Mood Music:

#104 America’s Gun Battle: The Second Amendment

Guns are a theme peculiar to American politics and American society. Last week, the conservative activist Charlie Kirk was assassinated. While he was speaking at a university in Utah, the shooter fired from 200 yards away and struck him in the neck; he was pronounced dead shortly afterward. In life, Charlie Kirk was an outspoken supporter of civilian gun ownership and of the Second Amendment.

At the end of March 2023, a school shooting in Nashville, Tennessee killed three nine-year-old children and three teachers. A week later, Charlie Kirk spoke in Salt Lake City, Utah. An audience member asked him whether the many gun deaths every year were a price worth paying. Kirk answered: “I think it’s worth to have a cost of, unfortunately, some gun deaths every single year, so that we can have the 2nd Amendment.”

Charlie Kirk’s assassination is an unfortunate tragedy. He is no longer with us, and his merits and faults are for the world to judge. In the last episode I gave my own view; some people liked it, some didn’t, and as long as no one crosses the baseline of basic humanity, that is all normal. Let me stress one point again: whatever he said or advocated in life, in a democratic society with free speech, using murder to settle disputes over ideas is barbarism, plain and simple.

In this episode, we look at the past and present of the right to bear arms in America and of the Second Amendment, and at how the controversy around the amendment came to be. As a gun owner myself, I support gun rights and I support the Second Amendment, but I equally support getting clear about what the amendment originally meant and how the Supreme Court has interpreted it.

You could say the Second Amendment is not just a constitutional amendment, not just a sentence of legal text, but a legal gunfight spanning generations.

For more than half a century, guns have been a totem for America’s religious, political, and cultural conservatives, and a banner under which Republican candidates rally voters. Conservative voters opposed to government gun control have effectively turned elections into referendums on the Second Amendment, forming the Republican Party’s most reliable base. Most Democratic candidates support gun control, but the concrete measures and policies they propose tend to be soft-pedaled, because otherwise they would lose many swing voters. After all, in America a gun is not seen merely as a weapon; more importantly, many citizens treat guns as an emblem of personal and national identity, even a symbol of individual freedom.

In contemporary America, the right to own guns is often simply called “the Second Amendment right.” The Second Amendment of the US Constitution is a single sentence: “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”

Over the past half century, that sentence has become one of the focal points of debate in American legal scholarship and politics. In legal academia, more papers interpreting the Second Amendment have been published in a few short decades than in the previous 200 years combined; in politics, the amendment runs hotter still.

In 2010, President Obama nominated Elena Kagan to the Supreme Court, a nomination requiring Senate confirmation. More than 80 Democratic and Republican senators met with her, and among the questions asked most often were how she viewed the Second Amendment, whether she had ever held a gun, and whether she had ever hunted.

After Kagan joined the Court, she asked its resident hunting expert, Justice Antonin Scalia, to teach her to hunt. The two stood on opposite sides politically and legally: Kagan is considered a liberal, while Scalia was a famously staunch conservative.

Before his death in February 2016, Justice Scalia took Justice Kagan hunting several times, mostly for duck and deer. In the mountains of Wyoming, Kagan shot the first deer of her life. Learning to hunt did not change her view of the Second Amendment, much less win her over to Scalia’s reading of it, but she clearly came to appreciate where the Second Amendment and the gun question sit among America’s political and legal issues, and how pervasively guns touch the lives of ordinary Americans.


#103 The Frenzy After a Political Assassination

Two days ago, the conservative activist Charlie Kirk was assassinated in Utah. As soon as the story hit the news, listeners suggested I talk about it. I didn’t know what to say: mainstream media coverage was vague, and at the time even the FBI and the police hadn’t established the basic facts, so I certainly couldn’t know them.

This morning the suspect was finally taken into custody, a 22-year-old local man. His motive and the full background remain unclear to the outside world.

Every time something like this happens, it is a prime traffic window for the media and for self-published commentators, and the first wave out the door is mostly doing the same job: pressing the audience’s emotional buttons.

In the internet age, countless people fiddle with their phones all day long, hoping for some major event. The moment one happens, they strip off their clothes, bare the emotional buttons covering their bodies, and go looking for media and influencers to press them.

A rational person will stop and think: if even the FBI and the police don’t yet know the facts, how could the media or the influencers? Chasing the news with your emotional buttons exposed, looking for someone to press them, gets you nothing but a jolt of feeling.

Charlie Kirk was 31. He was assassinated while speaking on a university campus in Utah; the shooter fired from more than 200 yards away and hit him in the neck. When he was struck, his wife and two children, one three years old and one a year old, were in the audience below.

For the past two days the FBI has been busy catching the killer, while some media outlets and influencers have been busy pointing the FBI in the “right” direction, which is to say toward every flavor of conspiracy theory. The left says the right did it; the right says the left did it. The respectable politicians say things that can never be wrong; the disreputable ones use the event to stoke emotions and deepen social division.

Political extremists, of course, would never miss the opportunity, some even howling for “war.”

Extreme politics turns people who normally seem more or less ordinary into ghouls. Some gloated on social media; reportedly some even celebrated. Left or right is not the problem; the problem is that you cannot be inhuman, cannot be anti-human. In a democratic society with free speech, no matter what someone says or believes, you cannot kill them.

Charlie Kirk was a public figure. Liking him, loathing him, or feeling indifferent are all normal.

I dislike many of his views and even detest some of the things he said, but I admire the way he expressed them: going to college campuses to debate young people face to face rather than resorting to violence or threats. When the bullet struck him, he was not committing a crime; he was giving a speech.

After the shooting, Professor Steven Pinker wrote on Twitter: “Speech is not violence. Violence is not speech.”

If you dislike Charlie Kirk’s speech, for whatever reason, that is beyond reproach. You can ignore him, you can debate him, you can even curse him in the most vicious terms, but you cannot kill him.

And as an ordinary person, however much you dislike or detest him, you cannot watch someone kill him and celebrate, much less join the killer’s cheering section.

That, I think, is a baseline of decency in a democratic society.

The social media era is also the era of rumor. Any public figure is trailed by countless rumors: shady characters actively invent them, and mindless audiences gleefully spread them, faster than any normal person can fend them off. Even a writer I deeply respect, Stephen King, has not been spared.


Why Robots are Coming

8VC is hosting a meetup for ChinaTalk this coming Thursday. Sign up here if you can make it!


Ryan Julian is a research scientist in embodied AI. He worked on large-scale robotics foundation models at DeepMind and got his PhD in machine learning in 2021.

In our conversation today, we discuss…

  • What makes a robot a robot, and what makes robotics so difficult,

  • The promise of robotic foundation models and strategies to overcome the data bottleneck,

  • Why full labor replacement is far less likely than human-robot synergy,

  • China’s top players in the robotic industry, and what sets them apart from American companies and research institutions,

  • How robots will impact manufacturing, and how quickly we can expect to see robotics take off.

Listen now on your favorite podcast app.

Robots assemble Model S sedans at Tesla’s 5.3-million-square-foot plant in Fremont, California; Chief Executive Elon Musk said the company was spending “staggering amounts of money” gearing up for mass production. Source.

Embodying Intelligence

Jordan Schneider: Ryan, why should we care about robotics?

Ryan Julian: Robots represent the ultimate capital good. Just as power tools, washing machines, or automated factory equipment augment human labor, robots are designed to multiply human productivity. The hypothesis is straightforward — societies that master robotics will enjoy higher labor productivity and lower costs in sectors where robots are deployed, including in logistics, manufacturing, transportation, and beyond. Citizens in these societies will benefit from increased access to goods and services.

The implications become even more profound when we consider advanced robots capable of serving in domestic, office, and service sectors. These are traditionally areas that struggle with productivity growth. Instead of just robot vacuum cleaners, imagine robot house cleaners, robot home health aides, or automated auto mechanics. While these applications remain distant, they become less far-fetched each year.

Looking at broader societal trends, declining birth rates across the developed world present a critical challenge — how do we provide labor to societies with shrinking working-age populations? Robots could offer a viable solution.

From a geopolitical perspective, robots are dual-use technology. If they can make car production cheaper, they can also reduce the cost of weapon production. There’s also the direct military application of robots as weapons, which we’re already witnessing with drones in Ukraine. From a roboticist’s perspective, current military drones represent primitive applications of robotics and AI. Companies developing more intelligent robotic weapons using state-of-the-art robotics could have enormous implications, though this isn’t my area of expertise.

Fundamentally, robots are labor-saving machines, similar to ATMs or large language models. The key differences lie in their degree of sophistication and physicality. When we call something a robot, we’re describing a machine capable of automating physical tasks previously thought impossible to automate — tasks requiring meaningful and somewhat general sensing, reasoning, and interaction with the real world.

This intelligence requirement distinguishes robots from simple machines. Waymo vehicles and Roombas are robots, but dishwashers are appliances. This distinction explains why robotics is so exciting — we’re bringing labor-saving productivity gains to economic sectors previously thought untouchable.

Jordan Schneider: We’re beginning to understand the vision of unlimited intelligence — white-collar jobs can be potentially automated because anything done on a computer might eventually be handled better, faster, and smarter by future AI systems. But robotics extends this to the physical world, requiring both brain power and physical manipulation capabilities. It’s not just automated repetitive processes, but tasks requiring genuine intelligence combined with physical dexterity.

Ryan Julian: Exactly. You need sensing, reasoning, and interaction with the world in truly non-trivial ways that require intelligence. That’s what defines an intelligent robot.

I can flip your observation — robots are becoming the physical embodiment of the advanced AI you mentioned. Current large language models and vision-language models can perform incredible digital automation — analyzing thousands of PDFs or explaining how to bake a perfect cake. But that same model cannot actually bake the cake. It lacks arms, cannot interact with the world, and doesn’t see the real world in real time.

However, if you embed that transformer-based intelligence into a machine capable of sensing and interacting with the physical world, then that intelligence could affect not just digital content but the physical world itself. The same conversations about how AI might transform legal or other white-collar professions could equally apply to physical labor.

Today’s post is brought to you by 80,000 Hours, a nonprofit that helps people find fulfilling careers that do good. 80,000 Hours — named for the average length of a career — has been doing in-depth research on AI issues for over a decade, producing reports on how the US and China can manage existential risk, scenarios for potential AI catastrophe, and examining the concrete steps you can take to help ensure AI development goes well.

Their research suggests that working to reduce risks from advanced AI could be one of the most impactful ways to make a positive difference in the world.

They provide free resources to help you contribute, including:

  • Detailed career reviews for paths like AI safety technical research, AI governance, information security, and AI hardware,

  • A job board with hundreds of high-impact opportunities,

  • A podcast featuring deep conversations with experts like Carl Shulman, Ajeya Cotra, and Tom Davidson,

  • Free, one-on-one career advising to help you find your best fit.

To learn more and access their research-backed career guides, visit 80000hours.org/ChinaTalk.

To read their report about AI coordination between the US and China, visit http://80000hours.org/chinatalkcoord.

Jordan Schneider: Ryan, why is robotics so challenging?

Ryan Julian: Several factors make robotics exceptionally difficult. First, physics is unforgiving. Any robot must exist in and correctly interpret the physical world’s incredible variation. Consider a robot designed to work in any home — it needs to understand not just the visual aspects of every home worldwide, but also the physical properties. There are countless doorknob designs globally, and the robot must know how to operate each one.

The physical world also differs fundamentally from the digital realm. Digital systems are almost entirely reversible unless intentionally designed otherwise. You can undo edits in Microsoft Word, but when a robot knocks a cup off a table and cannot retrieve it, it has made an irreversible change to the world. This makes robot failures potentially catastrophic. Anyone with a robot vacuum has experienced it consuming a cable and requiring rescue — that’s an irreversible failure.

The technological maturity gap presents another major challenge. Systems like ChatGPT, Gemini, or DeepSeek process purely digital inputs — text, images, audio. They benefit from centuries of technological development that we take for granted — monitors, cameras, microphones, and our ability to digitize the physical world.

Today’s roboticist faces a vastly more complex challenge. While AI systems process existing digital representations of the physical world, roboticists must start from scratch. It’s as if you wanted to create ChatGPT but first had to build the CPUs, speakers, microphones, and digital cameras yourself.

Robotics is just emerging from this foundational period, where we’re creating hardware capable of converting physical world perception into processable data. We also face the reverse challenge — translating digital intent into physical motion, action, touch, and movement in the real world. Only now is robotics hardware reaching the point where building relatively capable systems for these dual processes is both possible and economical.

Jordan Schneider: Let’s explore the brain versus body distinction in robotics — the perception and decision-making systems versus the physical mechanics of grasping, moving, and locomotion. How do these two technological tracks interact with each other? From a historical perspective, which one has been leading and which has been lagging over the past few decades?

Ryan Julian: Robotics is a fairly old field within computing. Depending on who you ask, the first robotics researchers were probably Harry Nyquist and Norbert Wiener. These researchers were interested in cybernetics from the 1940s through the 1960s.

Norbert Wiener, founder of cybernetics, in an MIT classroom, ~1949. Source.

Back then, cybernetics, artificial intelligence, information theory, and control theory were all one unified field of study. These disciplines eventually branched off into separate domains. Control theory evolved to enable sophisticated systems like state-of-the-art fighter plane controls. Information theory developed into data mining, databases, and the big data processing that powers companies like Google and Oracle — essentially Web 1.0 and Web 2.0 infrastructure.

Artificial intelligence famously went into the desert. It had a major revolution in the 1980s, then experienced the great AI winter from the 80s through the late 90s, before the deep learning revolution emerged. The last child of this original unified field was cybernetics, which eventually became robotics.

The original agenda was ambitious — create thinking machines that could fully supplant human existence, human thought, and human labor — that is, true artificial intelligence. The founding premise was that these computers would need physical bodies to exist in the real world.

Robotics as a field of study is now about 75 years old. From its origins through approximately 2010-2015, enormous effort was devoted to creating robotic hardware systems that could reliably interact with the physical world with sufficient power and dexterity. The fundamental questions were basic but challenging — Do we have motors powerful enough for the task? Can we assemble them in a way that enables walking?

A major milestone was the MIT Cheetah project, led by Sangbae Kim around 2008-2012. This project had two significant impacts — it established the four-legged form factor now seen in Unitree’s quadrupedal robots and Boston Dynamics’ systems, and it advanced motor technology that defines how we build motors for modern robots.

Beyond the physical components, robots require sophisticated sensing capabilities. They need to capture visual information about the world and understand three-dimensional space. Self-driving cars drove significant investment in 3D sensing technology like LiDAR, advancing our ability to perceive spatial environments.

Each of these technological components traditionally required substantial development time. Engineers had to solve fundamental questions — Can we capture high-quality images? What resolution is possible? Can we accurately sense the world’s shape and the robot’s own body position? These challenges demanded breakthroughs in electrical engineering and sensor technology.

Once you have a machine with multiple sensors and actuators, particularly sensors that generate massive amounts of data, you need robust data processing capabilities. This requires substantial onboard computation to transform physical signals into actionable information and generate appropriate motion responses — all while the machine is moving.

This is where robotics historically faced limitations. Until recently, robotics remained a fairly niche field that hadn’t attracted the massive capital investment seen in areas like self-driving cars. Robotics researchers often had to ride the waves of technological innovation happening in other industries.

A perfect example is robotic motors. A breakthrough came from cheap brushless motors originally developed for electric skateboards and power drills. With minor modifications, these motors proved excellent for robotics applications. The high-volume production for consumer applications dramatically reduced costs for robotics.

The same pattern applies to computation. Moore’s Law and GPU development have been crucial for robotics advancement. Today, robots are becoming more capable because we can pack enormous computational power into small, battery-powered packages. This enables real-time processing of cameras, LiDAR, joint sensors, proprioception, and other critical systems — performing most essential computation onboard the robot itself.

Jordan Schneider: Why does computation need to happen on the robot itself? I mean, you could theoretically have something like Elon’s approach where you have a bartender who’s actually just a robot being controlled remotely from India. That doesn’t really count as true robotics though, right?

Ryan Julian: This is a fascinating debate and trade-off that people in the field are actively grappling with right now. Certain computations absolutely need to happen on the robot for physical reasons. The key framework for thinking about this is timing — specifically, what deadlines a robot faces when making decisions.

If you have a walking robot that needs to decide where to place its foot in the next 10 milliseconds, there’s simply no time to send a query to a cloud server and wait for a response. That sensing, computation, and action must all happen within the robot because the time constraints are so tight.

The critical boundary question becomes: what’s the timescale at which off-robot computation becomes feasible? This is something that many folks working on robotics foundation models are wrestling with right now. The answer isn’t entirely clear and depends on internet connection quality, but the threshold appears to be around one second.

If you have one second to make a decision, it’s probably feasible to query a cloud system. But if you need to make a decision in less than one second — certainly less than 100 milliseconds — then that computation must happen on-board. This applies to fundamental robot movements and safety decisions. You can’t rely on an unreliable internet connection when you need to keep the robot safe and prevent it from harming itself or others.

Large portions of the robot’s fundamental motion and movement decisions must stay local. However, people are experimenting with cloud-based computation for higher-level reasoning. For instance, if you want your robot to bake a cake or pack one item from each of ten different bins, it might be acceptable for the robot to query DeepSeek or ChatGPT to break that command down into executable steps. Even if the robot gets stuck, it could call for help at this level — but it can’t afford to ask a remote server where to place its foot.
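To make that trade-off concrete, here is a minimal sketch of how a robot stack might route each decision by its deadline: anything under roughly a second stays on the local policy, while slower, high-level queries can try a cloud planner and fall back locally if connectivity fails. This is illustrative only, with hypothetical names, and is not any particular company’s system.

```python
# Illustrative sketch only: route decisions by deadline. Tight control loops
# stay onboard; slow "what should I do next?" queries may go to a cloud model,
# with a safe local fallback. All names here are hypothetical.
import time


class DeadlineRouter:
    def __init__(self, local_policy, cloud_planner, cloud_threshold_s=1.0):
        self.local_policy = local_policy      # runs onboard, always available
        self.cloud_planner = cloud_planner    # remote LLM/VLM, may be slow or offline
        self.cloud_threshold_s = cloud_threshold_s

    def decide(self, observation, deadline_s):
        # Sub-second deadlines (e.g., foot placement) never leave the robot.
        if deadline_s < self.cloud_threshold_s:
            return self.local_policy(observation)

        # Slower decisions (e.g., breaking a command into steps) can try the
        # cloud, but must degrade safely if the connection drops or times out.
        start = time.monotonic()
        try:
            plan = self.cloud_planner(observation, timeout=deadline_s * 0.8)
            if time.monotonic() - start < deadline_s:
                return plan
        except (TimeoutError, ConnectionError):
            pass  # a real system would also log and possibly retry later
        # Fallback: stay safe and act on local knowledge only.
        return self.local_policy(observation)
```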

One crucial consideration for commercial deployment is that we technologists and software engineers love to think of the internet as ubiquitous, always available, and perfectly reliable. But when you deploy real systems — whether self-driving cars, factory robots, or future home robots — there will always be places and times where internet access drops out.

Given the irreversibility we discussed earlier, it’s essential that when connectivity fails, the robot doesn’t need to maintain 100% functionality for every possible feature, but it must remain safe and be able to return to a state where it can become useful again once connectivity is restored.

Jordan Schneider: You mentioned wanting robots to be safe, but there are other actors who want robots to be dangerous. This flips everything on its head in the drone context. It’s not just that Verizon has poor coverage — it’s that Russia might be directing electronic warfare at you, actively trying to break that connection.

This creates interesting questions about the balance between pressing go on twenty drones and letting them figure things out autonomously versus having humans provide dynamic guidance — orienting left or right, adjusting to circumstances. There are both upsides and downsides to having robots make these decisions independently.

Ryan Julian: Exactly right. The more autonomy you demand, the more the difficulty scales exponentially from an intelligence perspective. This is why Waymos are Level 4 self-driving cars rather than Level 5 — because Level 5 represents such a high bar. Yet you can provide incredibly useful service with positive unit economics and game-changing safety improvements with just a little bit of human assistance.

Jordan Schneider: What role do humans play in Waymo operations?

Ryan Julian: I don’t have insider information on this, but my understanding is that when a Waymo encounters trouble — when it identifies circumstances where it doesn’t know how to navigate out of a space or determine where to go next — it’s programmed to pull over at the nearest safe location. The on-board system handles finding a safe place to stop.

Then the vehicle calls home over 5G or cellular connection to Waymo’s central support center. I don’t believe humans drive the car directly because of the real-time constraints we discussed earlier — the same timing limitations that apply to robot movement also apply to cars. However, humans can provide the vehicle with high-level instructions about where it should drive and what it should do next at a high level.

Jordan Schneider: We have a sense of the possibilities and challenges — the different technological trees you have to climb. What is everyone in the field excited about? Why is there so much money and energy being poured into this space over the past few years to unlock this future?

Ryan Julian: People are excited because there’s been a fundamental shift in how we build software for robots. I mentioned that the hardware is becoming fairly mature, but even with good hardware, we previously built robots as single-purpose machines. You would either buy robot hardware off the shelf or build it yourself, but then programming the robot required employing a room full of brilliant PhDs to write highly specialized robotic software for your specific problem.

These problems were usually not very general — things like moving parts from one belt to another. Even much more advanced systems that were state-of-the-art from 2017 through 2021, like Amazon’s logistics robots, were designed to pick anything off a belt and put it into a box, or pick anything off a shelf. The only variations were where the object is located, what shape it is, how the gripper should be positioned around it, and where it needs to be moved.

From a human perspective, that’s very low variation — this is the lowest of low-skilled work. But even handling this level of variation required centuries of collective engineering work to accomplish with robots.

A pick-and-place robot aligns wafer cookies during the packaging process. Source.

Now everyone’s excited because we’re seeing a fundamental change in how we program robots. Rather than writing specific applications for every tiny task — which obviously doesn’t scale and puts a very low ceiling on what’s economical to automate — we’re seeing robotics follow the same path as software and AI. Programming robots is transforming from an engineering problem into a data and AI problem. That’s embodied AI. That’s what robot learning represents.

The idea is that groups of people develop robot learning software — embodied AI systems primarily composed of components you’re already familiar with from the large language model and vision-language model world. Think large transformer models, data processing pipelines, and related infrastructure, plus some robot-specific additions. You build this foundation once.

Then, when you want to automate a new application, rather than hiring a big team to build a highly specialized robot system and hope it works, you simply collect data on your new application and provide it to the embodied AI system. The system learns to perform the new task based on that data.
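As a rough illustration of that “collect data and hand it to the model” step, here is a minimal behavior-cloning sketch in PyTorch. It assumes you already have a pretrained policy network and a small dataset of (observation, action) demonstration pairs for the new task; the names are placeholders, not any company’s actual pipeline.

```python
# Minimal behavior-cloning sketch, assuming (1) a pretrained policy network and
# (2) a small dataset yielding (observation, action) pairs collected on the new
# task. Placeholder names; not a specific product's API.
import torch
from torch.utils.data import DataLoader


def finetune_on_demos(policy: torch.nn.Module, demo_dataset, epochs=10, lr=1e-4):
    loader = DataLoader(demo_dataset, batch_size=64, shuffle=True)
    optimizer = torch.optim.AdamW(policy.parameters(), lr=lr)
    policy.train()
    for _ in range(epochs):
        for obs, expert_action in loader:
            pred_action = policy(obs)
            # Imitation objective: match the demonstrated actions.
            loss = torch.nn.functional.mse_loss(pred_action, expert_action)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return policy
```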

This would be exciting enough if it worked for just one task. But we’re living in the era of LLMs and VLMs — systems that demonstrate something remarkable. When you train one system to handle thousands of purely digital tasks — summarizing books, writing poems, solving math problems, writing show notes — you get what we call a foundation model.

When you want that foundation model to tackle a new task in the digital world, you can often give it just a little bit of data, or sometimes no data at all — just a prompt describing what you want. Because the system has extensive experience across many different tasks, it can relate its existing training to the new task and accomplish it with very little additional effort. You’re automating something previously not automated with minimal effort.

The hope for robotics foundation models is achieving the same effect with robots in the physical world. If we can create a model trained on many different robotic tasks across potentially many different robots — there’s debate in the field about this — we could create the GPT of robotics, the DeepSeek of robotics.

Imagine a robot that already knows how to make coffee, sort things in a warehouse, and clean up after your kids. You ask it to assemble a piece of IKEA furniture it’s never seen before. It might look through the manual and then put the furniture together. That’s probably a fantastical vision — maybe 10 to 20 years out, though we’ll see.

But consider a softer version: a business that wants to deploy robots only needs to apprentice those robots through one week to one month of data collection, then has a reliable automation system for that business task. This could be incredibly disruptive to the cost of introducing automation across many different spaces and sectors.

That’s why people are excited. We want the foundation model for robotics because it may unlock the ability to deploy robots in many places where they’re currently impossible to use because they’re not capable enough, or where deployment is technically possible but not economical.

Jordan Schneider: Is all the excitement on the intelligence side? Are batteries basically there? Is the cost structure for building robots basically there, or are there favorable curves we’re riding on those dimensions as well?

Ryan Julian: There’s incredible excitement in the hardware world too. I mentioned earlier that robotics history, particularly robotics hardware, has been riding the wave of other industries funding the hard tech innovations necessary to make robots economical. This remains true today.

You see a huge boom in humanoid robot companies today for several reasons. I gave you this vision of robotics foundation models and general-purpose robot brains. To fully realize that vision, you still need the robot body. It doesn’t help to have a general-purpose robot brain without a general-purpose robot body — at least from the perspective of folks building humanoids.

Humanoid robots are popular today as a deep tech concept because pairing them with a general-purpose brain creates a general-purpose labor-saving machine. This entire chain of companies is riding tremendous progress in multiple areas.

Battery technology has become denser, higher power, and cheaper. Actuator technology — motors — has become more powerful and less expensive. Speed reducers, the gearing at the end of motors or integrated into them, traditionally represented very expensive components in any machine using electric motors. But there’s been significant progress making these speed reducers high-precision and much cheaper.

Sensing has become dramatically cheaper. Camera sensors that used to cost hundreds of dollars are now the same sensors in your iPhone, costing two to five dollars. What used to be one of the most expensive components you could imagine is now economical enough to place all over a robot.

Computation costs have plummeted. The GPUs in a modern robot might be worth a couple hundred dollars, which represents an unimaginably low cost for the available computational power.

Robot bodies are riding this wave of improving technologies across the broader economy — all dual-use technologies that can be integrated into robots. This explains why Tesla’s Optimus humanoid program makes sense: much of the hardware in those robots is already being developed for other parts of Tesla’s business. But this pattern extends across the entire technology economy.

Jordan Schneider: Ryan, what do you want to tell Washington? Do you have policy asks to help create a flourishing robotics ecosystem in the 21st century?

Ryan Julian: My policy ask would be for policymakers and those who inform them to really learn about the technology before worrying too much about the implications for labor. There are definitely implications for labor, and there are also implications for the military. However, the history of technology shows that most new technologies are labor-multiplying and labor-assisting. There are very few instances of pure labor replacement.

I worry that if a labor replacement narrative takes hold in this space, it could really hold back the West and the entire field. As of today, a labor replacement narrative isn’t grounded in reality.

The level of autonomy and technology required to create complete labor replacement in any of the job categories we’ve discussed is incredibly high and very far off. It’s completely theoretical at this point.

My ask is, educate yourself and think about a world where we have incredibly useful tools that make people who are already working in jobs far more productive and safer.

China’s Edge and the Data Flywheel

Jordan Schneider: On the different dimensions you outlined, what are the comparative strengths and advantages of China and the ecosystem outside China?

Ryan Julian: I’m going to separate this comparison between research and industry, because there are interesting aspects on both sides. The short version is that robotics research in China is becoming very similar to the West in quality.

Let me share an anecdote. I started my PhD in 2017, and a big part of being a PhD student — and later a research scientist — is consuming tons of research: reams of dense 20-page PDFs packed with information. You become very good at triaging what’s worth your time and what’s not. You develop heuristics for what deserves your attention, what to throw away, what to skim, and what to read deeply.

Between 2017 and 2021, a reliable heuristic was that if a robotics or AI paper came from a Chinese lab, it probably wasn’t worth your time. It might be derivative, irrelevant, or lacking novelty. In some cases, it was plainly plagiarized. This wasn’t true for everything, but during that period it was a pretty good rule of thumb.

Over the last two years, I’ve had to update my priors completely. The robotics and AI work coming out of China improves every day. The overall caliber still isn’t quite as high as in the US, EU, and other Western institutions, but the best work in China — particularly in AI and my specialization in robotics — is rapidly catching up.

Today, when I see a robotics paper from China, I make sure to read the title and abstract carefully. A good portion of the time, I save it because I need to read it thoroughly. In a couple of years, the median quality may be the same. We can discuss the trends driving this — talent returning to China, people staying rather than coming to the US, government support — but it’s all coming together to create a robust ecosystem.

Moving from research to industry, there’s an interesting contrast. Due to industry culture in China, along with government incentives and the way funding works from provinces and VC funds, the Chinese robotics industry tends to focus on hardware and scale. They emphasize physical robot production.

Xiaomi’s “Dark Factory” 黑灯工厂 autonomously produces smartphones. Source.

When I talk to Chinese robotics companies, there’s always a story about deploying intelligent AI into real-world settings. However, they typically judge success by the quantity of robots produced — a straightforward industrial definition of success. This contrasts with US companies, which usually focus on creating breakthroughs and products that nobody else could create, where the real value lies in data, software, and AI.

Chinese robotics companies do want that data, software, and AI capabilities. But it’s clear that their business model is fundamentally built around selling robots. Therefore, they focus on making robot hardware cheaper and more advanced, producing them at scale, accessing the best components, and getting them into customers’ hands. They partner with upstream or downstream companies to handle the intelligence work, creating high-volume robot sales channels.

Take Unitree as a case study — a darling of the industry that’s been covered on your channel. Unitree has excelled at this approach. Wang Xingxing and his team essentially took the open-source design for the MIT Cheetah quadruped robot and perfected it. They refined the design, made it production-ready, and likely innovated extensively on the actuators and robot morphology. Most importantly, they transformed something you could build in a research lab at low scale into something manufacturable on production lines in Shenzhen or Shanghai.

They sold these robots to anyone willing to buy, which seemed questionable at the time — around 2016 — because there wasn’t really a market for robots. Now they’re the go-to player if you want to buy off-the-shelf robots. What do they highlight in their marketing materials? Volume, advanced actuators, and superior robot bodies.

This creates an interesting duality in the industry. Most American robotics companies — even those that are vertically integrated and produce their own robots — see the core value they’re creating as intelligence or the service they deliver to end customers. They’re either trying to deliver intelligence as a service (like models, foundation models, or ChatGPT-style queryable systems where you can pay for model training) or they’re pursuing fully vertical solutions where they deploy robots to perform labor, with value measured in hours of replaced work.

On the Chinese side, companies focus on producing exceptionally good robots.

Jordan Schneider: I’ve picked up pessimistic energy from several Western robotics efforts — a sense that China already has this in the bag. Where is that coming from, Ryan?

Ryan Julian: That’s a good question. If you view AI as a race between the US and China — a winner-take-all competition — and you’re pessimistic about the United States’ or the West’s ability to maintain an edge in intelligence, then I can see how you’d become very pessimistic about the West’s ability to maintain an edge in robotics.

As we discussed, a fully deployed robot is essentially a combination of software, AI (intelligence), and a machine. The challenging components to produce are the intelligence and the machine itself. The United States and the West aren’t particularly strong at manufacturing. They excel at design but struggle to manufacture advanced machines cheaply. They can build advanced machines, but not cost-effectively.

If you project this forward to a world where millions of robots are being produced — where the marginal cost of each robot becomes critical and intelligence essentially becomes free — then I can understand why someone would believe the country capable of producing the most advanced physical robot hardware fastest and at the lowest cost would have a huge advantage.

If you believe there’s no sustainable edge in intelligence — that intelligence will eventually have zero marginal cost and become essentially free — then you face a significant problem. That’s where the pessimism originates.

Jordan Schneider: Alright, we detoured but we’re coming back to this idea of a foundation model unlocking the future. We haven’t reached the levels of excitement for robotics that we saw in October 2022 for ChatGPT. What do we need? What’s on the roadmap? What are the key inputs?

Ryan Julian: To build a great, intelligent, general-purpose robot, you need the physical robot itself. We’ve talked extensively about how robotics is riding the wave of advancements elsewhere in the tech tree, making it easier to build these robots. Of course, it’s not quite finished yet. There are excellent companies — Boston Dynamics, 1X, Figure, and many others who might be upset if I don’t mention them, plus companies like Apptronik and Unitree — all working to build great robots. But that’s fundamentally an engineering problem, and we can apply the standard playbook of scale, cost reduction, and engineering to make them better.

The key unlock, assuming we have the robot bodies, is the robot brains. We already have a method for creating robot brains — you put a bunch of PhDs in a room and they toil for years creating a fairly limited, single-purpose robot. But that approach doesn’t scale.

To achieve meaningful impact on productivity, we need a robot brain that learns and can quickly learn new tasks. This is why people are excited about robotics foundation models.

How do we create a robotics foundation model? That’s the crucial question. Everything I’m about to say is hypothetical because we haven’t created one yet, but the current thinking is that creating a robotics foundation model shouldn’t be fundamentally different from creating a purely digital foundation model. The strategy is training larger and larger models.

However, the model can’t just be large for its own sake. To train a large model effectively, you need massive amounts of data — data proportionate to the model’s size. In large language models, there appears to be a magical threshold between 5 and 7 billion parameters where intelligence begins to emerge. That’s when you start seeing GPT-2 and GPT-3 behavior. We don’t know what that number is for robotics, but those parameters imply a certain data requirement.

What do we need to create a robotics foundation model? We need vast amounts of diverse data showing robots performing many useful tasks, preferably as much as possible in real-world scenarios. In other words, we need data and diversity at scale.
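Nobody knows the robotics scaling law, but purely as an analogy you can borrow the LLM-world heuristic that compute-optimal training uses on the order of 20 tokens per parameter (the Chinchilla result). Treating robot experience as “tokens” only for intuition, a hypothetical 7-billion-parameter policy would want something like 140 billion tokens’ worth of experience — a scale nothing in robotics approaches today.

```python
# Back-of-envelope only: borrow the LLM "Chinchilla" rule of thumb
# (~20 training tokens per parameter) as a loose analogy for how much robot
# experience a given model size might want. The real robotics number is
# unknown; this is intuition, not a result.
params = 7e9                # hypothetical policy size (parameters)
tokens_per_param = 20       # Chinchilla-style heuristic from LLMs
experience_tokens = params * tokens_per_param
print(f"{experience_tokens:.1e} tokens-equivalent of robot experience")  # ≈ 1.4e+11
```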

This is the biggest problem for embodied AI. How does ChatGPT get its data? How do Claude or Gemini get theirs? Some they purchase, especially recently, but first they ingest essentially the entire internet — billions of images and billions of sentences of text. Most of this content is free or available for download at low cost. While they do buy valuable data, the scale of their purchases is much smaller than the massive, unstructured ingestion of internet information.

There’s no internet of robot data. Frontier models train on billions of image-text pairs, while today’s robotics foundation models with the most data train on tens of thousands of examples — requiring herculean efforts from dozens or hundreds of people.


This creates a major chicken-and-egg problem. If we had this robotics foundation model, it would be practical and economical to deploy robots in various settings, have them learn on the fly, and collect data. In robotics and AI, we call this the data flywheel: you deploy systems in the world, those systems generate data through operation, you use that data to improve your system, which gives you a better system that you can deploy more widely, generating more data and continuous improvement.

We want to spin up this flywheel, but you need to start with a system good enough to justify its existence in the world. This is robotics’ fundamental quandary.

I want to add an important note about scale. Everyone talks about big data and getting as much data as possible, but a consistent finding for both purely digital foundation models and robotics foundation models is that diversity is far more important than scale. If you give me millions of pairs of identical text or millions of demonstrations of a robot doing exactly the same thing in exactly the same place, that won’t help my system learn.

The system needs to see not only lots of data, but data covering many different scenarios. This creates another economic challenge, because while you might consider the economics of deploying 100 robots in a space to perform tasks like package picking...

Jordan Schneider: Right, if we have a robot that can fold laundry, then it can fold laundry. But will folding laundry teach it how to assemble IKEA furniture? Probably not, right?

Ryan Julian: Exactly. Economics favor scale, but we want the opposite — a few examples of many different things. This is the most expensive possible way to organize data collection.
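One way to see why diversity beats raw scale: if you build training batches by sampling uniformly over examples, a million near-identical demos of one task will drown out the handful of demos of everything else. A toy fix is to sample uniformly over tasks first. This sketch uses a made-up data layout and is not any specific lab’s pipeline.

```python
# Toy illustration of "diversity over scale": sample uniformly over *tasks*
# first, then over examples within the chosen task, so one over-represented
# task can't dominate training batches. Hypothetical data layout.
import random
from collections import defaultdict


def diversity_balanced_sample(demos, batch_size):
    """demos: list of dicts like {"task": "fold_shirt", "trajectory": ...}"""
    by_task = defaultdict(list)
    for d in demos:
        by_task[d["task"]].append(d)
    tasks = list(by_task)
    # Uniform over tasks, then uniform within the chosen task.
    return [random.choice(by_task[random.choice(tasks)]) for _ in range(batch_size)]
```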

Jordan Schneider: I have a one-year-old, and watching her build up her physics brain — understanding the different properties of things and watching her fall in various ways, but never the same way twice — has been fascinating. Put a new object in front of her and she learns fast. We have a Peloton, for instance, and she fell once because she put her weight on the Peloton wheel, which moved. She has never done that again.

Ryan Julian: I’m sure she’s a genius.

Jordan Schneider: Human beings are amazing. They’re really good at learning — the ability to acquire language, for example, which robots still can’t match. Maybe because we have ChatGPT, figuring out speech seems like less of a marvel now, but evolution and our neurons enable it even though you come into the world understanding almost nothing. Watching the data ingestion happen in real time has been a real treat. Do people study toddlers for this kind of research?

Ryan Julian: Absolutely. In robot learning research, the junior professor who just had their first kid and now bases all their lectures on watching how their child learns is such a common trope. It’s not just you — but we can genuinely learn from this observation.

First, children aren’t purely blank slates. They do know some things about the world. More importantly, kids are always learning. You might think, “My kid’s only one or two years old,” but imagine one or two years of continuous, waking, HD stereo video with complete information about where your body is in space. You’re listening to your parents speak words, watching parents and other people do things, observing how the world behaves.

This was the inspiration for why, up through about 2022, myself and other researchers were fascinated with using reinforcement learning to teach robots. Reinforcement learning is a set of machine learning tools that allows machines, AIs, and robots to learn through trial and error, much like you described with your one-year-old.

What’s been popular for the last few years has been a turn toward imitation learning, which essentially means showing the robot different ways of doing things repeatedly. Imitation learning has gained favor because of the chicken-and-egg problem: if you’re not very good at tasks, most of what you try and experience won’t teach you much.

If you’re a one-year-old bumbling around the world, that’s acceptable because you have 18, 20, or 30 years to figure things out. I’m 35 and still learning new things. But we have very high expectations for robots to be immediately competent. Additionally, it’s expensive, dangerous, and difficult to allow a robot to flail around the world, breaking things, people, and itself while doing reinforcement learning in real environments. It’s simply not practical.

Having humans demonstrate tasks for robots is somewhat more practical than pure reinforcement learning. But this all comes down to solving the chicken-and-egg problem I mentioned, and nobody really knows the complete solution.

There are several approaches we can take. First, we don’t necessarily have to start from scratch. Some recent exciting results that have generated significant enthusiasm came from teams I’ve worked with, my collaborators, and other labs. We demonstrated that if we start with a state-of-the-art vision-language model and teach it robotics tasks, it can transfer knowledge from the purely digital world — like knowing “What’s the flag of Germany?” — and apply it to robotics.

Imagine you give one of these models data showing how to pick and place objects: picking things off tables, moving them to other locations, putting them down. But suppose it’s never seen a flag before, or specifically the flag of Germany, and it’s never seen a dinosaur, but it has picked up objects of similar size. You can say, “Please pick up the dinosaur and place it on the flag of Germany.” Neither the dinosaur nor the German flag were in your robotics training data, but they were part of the vision-language model’s training.

My collaborators and I, along with other researchers, showed that the system can identify “This is a dinosaur” and use its previous experience picking up objects to grab that toy dinosaur, then move it to the flag on the table that it recognizes as Germany’s flag.

One tactic — don’t start with a blank slate. Begin with something that already has knowledge.
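A hypothetical sketch of what that looks like structurally: a vision-language-action policy reuses a pretrained vision-language backbone for perception and language, and only the action decoder is trained on robot demonstrations. None of these class names correspond to a real released system; the point is where the “dinosaur” and “flag of Germany” knowledge lives.

```python
# Hypothetical interface sketch for a vision-language-action (VLA) policy
# initialized from a pretrained VLM and fine-tuned on pick-and-place demos.
# The instruction can mention concepts never seen in the robot data because
# the vision-language backbone already knows them. VLAPolicy is made up.
class VLAPolicy:
    def __init__(self, pretrained_vlm, action_head):
        self.vlm = pretrained_vlm        # knows about dinosaurs and flags from web data
        self.action_head = action_head   # trained only on robot demonstrations

    def act(self, camera_image, instruction):
        # Fuse image + language into the token stream the VLM was pretrained
        # on, then decode robot actions instead of text.
        tokens = self.vlm.encode(image=camera_image, text=instruction)
        return self.action_head.decode(tokens)

# e.g. policy.act(frame, "Pick up the dinosaur and place it on the flag of Germany")
```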

Another approach — and this explains all those impressive dancing videos you see from China, with robots running and performing acrobatics — involves training robots in simulation using reinforcement learning, provided the physical complexity isn’t too demanding. For tasks like walking (I know I say “just” walking, but it’s actually quite complex) or general body movement, it turns out we can model the physics reasonably well on computers. We can do 99% of the training in simulation, then have robots performing those cool dance routines.
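For the simulation route, the standard recipe can be sketched with off-the-shelf open-source tools. This assumes gymnasium (with the MuJoCo extras) and stable-baselines3 are installed, and uses a stock simulated quadruped-like task as a stand-in for a real robot model; an actual locomotion stack would add domain randomization and a sim-to-real transfer step.

```python
# Sketch of the "train mostly in simulation" recipe with open-source tools:
# a physics-simulated legged body (gymnasium's MuJoCo Ant task as a stand-in)
# trained by trial and error with reinforcement learning, then saved as a
# candidate policy for real hardware. Requires gymnasium[mujoco] and
# stable-baselines3.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Ant-v4")                   # simulated quadruped-like body
model = PPO("MlpPolicy", env, verbose=1)   # trial-and-error happens in simulation
model.learn(total_timesteps=1_000_000)     # cheap and safe compared to hardware
model.save("sim_locomotion_policy")        # candidate policy for the real robot
```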

We might be able to extend this framework to much more challenging physical tasks like pouring tea, manipulating objects, and assembling things. Those physical interactions are far more complex, but you could imagine extending the simulation approach.

Jordan Schneider: Or navigating around Bakhmut or something.

Ryan Julian: Exactly, right. The second approach uses simulation. A third tactic involves getting data from sources that aren’t robots but are similar. This has been a persistent goal in robot learning for years — everyone wants robots to learn from watching YouTube videos.

There are numerous difficult challenges in achieving this, but the basic idea is extracting task information from existing video data, either from a first-person perspective (looking through the human’s eyes) or third-person perspective (watching a human perform tasks). We already have extensive video footage of people doing things.

What I’ve described represents state-of-the-art frontier research. Nobody knows exactly how to accomplish it, but these are some of our hopes. The research community tends to split into camps and companies around which strategy will ultimately succeed.

Then there’s always the “throw a giant pile of money at the problem” strategy, which represents the current gold standard. What we know works right now — and what many people are increasingly willing to fund — is building hundreds or even thousands of robots, deploying them in real environments like factories, laundries, logistics centers, and restaurants. You pay people to remotely control these robots to perform desired tasks, collect that data, and use it to train your robotics foundation model.

The hope is that you don’t run out of money before reaching that magic knee in the curve — the critical threshold we see in every other foundation model where the model becomes large enough and the data becomes sufficiently big and diverse that we suddenly have a model that learns very quickly.

There’s a whole arms race around how to deploy capital quickly enough and in the right way to find the inflection point in that curve.

Jordan Schneider: Is Waymo an example of throwing enough money at the problem to get to the solution?

Ryan Julian: Great example.

Jordan Schneider: How do we categorize that?

Ryan Julian: Waymo and other self-driving cars give people faith that this approach might work. When you step into a Waymo today, you’re being driven by what is, at its core, a robotics foundation model. There’s a single model where camera, lidar, and other sensor information from the car comes in, gets tokenized, decisions are made about what to do next, and actions emerge telling the car where to move.

That’s not the complete story. There are layers upon layers of safety systems, decision-making processes, and other checks and balances within Waymo to ensure the output is sound and won’t harm anyone. But the core process remains: collect data on the task (in this case, moving around a city in a car), use it to train a model, then use that model to produce the information you need.

Self-driving cars have been a long journey, but their success using this technique gives people significant confidence in the approach.

Let me temper your enthusiasm a bit. There’s hope, but here’s why it’s challenging. From a robotics perspective, a self-driving car is absolutely a robot. However, from that same perspective, a self-driving car has an extremely simple job — it performs only one task.

The job of a self-driving car is to transport you, Jordan, and perhaps your companions from point A to point B in a city according to a fairly limited set of traffic rules, on a relatively predictable route. The roads aren’t completely predictable, but they follow consistent patterns. The car must accomplish this without touching anything. That’s it — get from point A to point B without making contact with anything.

The general-purpose robots we’re discussing here derive their value from performing thousands of tasks, or at least hundreds, without requiring extensive training data for each one. This represents one axis of difficulty: we must handle many different tasks rather than just one.

The other challenge is that “don’t touch anything” requirement, which is incredibly convenient because every car drives essentially the same way from a physics perspective.

Jordan Schneider: Other drivers are trying to avoid you — they’re on your side and attempting to avoid collisions.

Ryan Julian: Exactly — just don’t touch anything. Whatever you do, don’t make contact. As soon as you start touching objects, the physics become far more complicated, making it much more difficult for machines to decide what to do.

The usefulness of a general-purpose robot lies in its ability to interact with objects. Unless it’s going to roam around your house or business, providing motivation and telling jokes, it needs to manipulate things to be valuable.

These are the two major leaps we need to make from the self-driving car era to the general robotics era — handling many different tasks and physically interacting with the world.

Jordan Schneider: Who are the companies in China and the rest of the world that folks should be paying attention to?

Ryan Julian: The Chinese space is gigantic, so I can only name a few companies. There are great online resources if you search for “Chinese robotics ecosystem.”

In the West, particularly the US, I would divide the companies really pushing this space into two camps.

The first camp consists of hardware-forward companies that think about building and deploying robots. These tend to be vertically integrated. I call them “vertical-ish” because almost all want to build their own embodied AI, but they approach it from a “build the whole robot, integrate the AI, deploy the robot” perspective.

In this category, you have Figure AI, a vertical humanoid robot builder that also develops its own intelligence. There’s 1X Technologies, which focuses on home robots, at least currently. Boston Dynamics is the famous first mover in the space, focusing on heavy industrial robots with the Atlas platform. Apptronik has partnered with Google DeepMind and focuses on light industrial logistics applications.

Tesla Optimus is probably the most well-known entry in the space, with lots of rhetoric from Elon about how many robots they’ll make, where they’ll deploy them, and how they’ll be in homes. But it’s clear that Tesla’s first value-add will be helping automate Tesla factories. Much of the capital and many prospective customers in this space are actually automakers looking to create better automation for their future workforce.

Apple is also moving into the space with a very early effort to build humanoid robots.

The second camp focuses on robotics foundation models and software. These tend to be “horizontal-ish” — some may have bets on making their own hardware, but their core focus is foundation model AI.

My former employer, Google DeepMind, has a robotics group working on Gemini Robotics. NVIDIA also has a group doing this work, which helps them sell chips.

Among startups, there’s Physical Intelligence, founded by several of my former colleagues at Google DeepMind and based in San Francisco. Skild AI features some CMU researchers. Generalist AI includes some of my former colleagues. I recently learned that Mistral has a robotics group.

A few other notable Western companies — there’s DYNA, which is looking to automate small tasks as quickly as possible. They’re essentially saying, “You’re all getting too complicated — let’s just fold napkins, make sandwiches, and handle other simple tasks.”

There are also groups your audience should be aware of, though we don’t know exactly what they’re doing. Meta and OpenAI certainly have embodied AI efforts that are rapidly growing, but nobody knows their exact plans.

In China, partly because of the trends we discussed and due to significant funding and government encouragement (including Made in China 2025), there’s been an explosion of companies seeking to make humanoid robots specifically.

The most well-known is Unitree with their H1 and G1 robots. But there are also companies like Fourier Intelligence, AgiBot, RobotEra, UBTECH, EngineAI, and Astribot. There’s a whole ecosystem of Chinese companies trying to make excellent humanoid robots, leveraging the Shenzhen and Shanghai-centered manufacturing base and incredible supply chain to produce the hardware.

When Robots Learn

Jordan Schneider: How do people in the field of robotics discuss timelines?

Ryan Julian: It’s as diverse as any other field. Some people are really optimistic, while others are more pessimistic. Generally, it’s correlated with age or time in the field. But I know the question you’re asking: when is it coming?

Let’s ground this discussion quickly. What do robots do today? They sit in factories and do the same thing over and over again with very little variation. They might sort some packages, which requires slightly more variation. Slightly more intelligent robots rove around and inspect facilities — though they don’t touch anything, they just take pictures. Then we have consumer robots. What’s the most famous consumer robot? The Roomba. It has to move around your house in 2D and vacuum things while hopefully not smearing dog poop everywhere.

That’s robots today. What’s happening now and what we’ll see in the next three to five years falls into what I call a bucket of possibilities with current technology. There are no giant technological blockers, but it may not yet be proven economical. We’re still in pilot phases, trying to figure out how to turn this into a product.

The first place you’re going to see more general-purpose robots — maybe in humanoid form factors, maybe slightly less humanoid with wheels and arms — is in logistics, material handling, and light manufacturing roles. For instance, machine tending involves taking a part, placing it into a machine, pressing a button, letting the machine do its thing, then opening the machine and pulling the part out. You may also see some retail and hospitality back-of-house applications.

What I’m talking about here is anywhere a lot of stuff needs to be moved, organized, boxed, unboxed, or sorted. This is an easy problem, but it’s a surprisingly large part of the economy and pops up pretty much everywhere. Half or more of the labor activity in an auto plant is logistics and material feed. This involves stuff getting delivered to the auto plant, moved to the right place, and ending up at a production line where someone picks it up and places it on a new car.

More than half of car manufacturing involves this process, and it’s actually getting worse because people really want customized cars these days. Customizations are where all the profit margin is. Instead of Model T’s running down the line where every car is exactly the same, every car running down the line now requires a different set of parts. A ton of labor goes into organizing and kitting the parts for each car and making sure they end up with the right vehicle.

Ten to twelve percent of the world economy is logistics. Another fifteen to twenty percent is manufacturing. This represents a huge potential impact, and all you’re asking robots to do is move stuff — pick something up and put it somewhere else. You don’t have to assemble it or put bolts in, just move stuff.

Over the next three to five years, you’re going to see pilots starting today and many attempts, both in the West and in China, to put general-purpose robots into material handling and show that this template with robotics foundation models can work in those settings.

Now, if that works — if the capital doesn’t dry up, if researchers don’t get bored and decide to become LLM researchers because someone’s going to give them a billion dollars — then maybe in the next seven to ten years, with some more research breakthroughs, we may see these robots moving into more dexterous and complex manufacturing tasks. Think about placing bolts, assembling components, attaching wings to 747s, putting wiring harnesses together. This is all really difficult.

You could even imagine at this point we’re starting to see maybe basic home tasks: tidying, loading and unloading a dishwasher, cleaning surfaces, vacuuming...

Jordan Schneider: When are we getting robotic massages?

Ryan Julian: Oh man, massage. I don’t know. Do you want a robot to press really hard on you?

Jordan Schneider: You know... no. Maybe that’s on a fifteen-year horizon then?

Ryan Julian: Yeah, that’s the next category. Anything that has a really high bar for safety, interaction with humans, and compliance — healthcare, massage, personal services, home health aid — will require not only orders of magnitude more intelligence than we currently have and more capable physical systems, but you also really start to dive into serious questions of trust, safety, liability, and reliability.


Having a robot roving around your house with your one-year-old kid and ensuring it doesn’t fall over requires a really high level of intelligence and trust. That’s why I say it’s a question mark. We don’t quite know when that might happen. It could be in five years — I could be totally wrong. Technology changes really fast these days, and people are more willing than I usually expect to take on risk. Autopilot and full self-driving are good examples.

One thing the current generation of generalist robotics researchers, startups, and companies are trying to learn from the self-driving car era is this — and it may be a reason for optimism. Self-driving cars face an extreme safety burden: they move multi-ton machines around lots of people and things they could kill or break, with passengers inside who could be killed. The bar is really high — it’s almost aviation-level reliability. The system needs to be incredibly reliable with enormous redundancy, and society, regulators, and governments have to have so much faith that it is safe and represents a positive cost-benefit tradeoff.

This makes it really difficult to thread the needle and make something useful. In practice, it takes you up the difficulty and autonomy curve we talked about and pushes you way up to really high levels of autonomy to be useful. It’s kind of binary — if you’re not autonomous enough, you’re not useful.

But these generalist robots we’re talking about don’t necessarily need to be that high up the autonomy difficulty curve. If they are moderately useful — if they produce more than they cost and save some labor, but not all — and you don’t need to modify your business environment, your home, or your restaurant too much to use them, and you can operate them without large amounts of safety concerns, then you have something viable.

For instance, if you’re going to have a restaurant robot, you probably shouldn’t start with cutting vegetables. Don’t put big knives in the hands of robots. There are lots of other things that happen in a restaurant that don’t involve big knives.

One of the bright spots of the current generalist robotics push and investment is that we believe there’s a much more linear utility-autonomy curve. If we can be half autonomous and only need to use fifty percent of the human labor we did before, that would make a huge difference to many different lives and businesses.

Jordan Schneider: Is that a middle-of-the-road estimate? Is it pessimistic? When will we get humanoid robot armies and machines that can change a diaper?

Ryan Julian: It’s a question of when, not if. We will see lots of general-purpose robots landing, especially in commercial spaces — logistics, manufacturing, maybe even retail back of house, possibly hospitality back of house. The trajectory of AI is very good. The machines are becoming cheaper every day, and there are many repetitive jobs in this world that are hazardous to people. We have difficulty recruiting people for jobs that are not that difficult to automate. Personally, I think that’s baked in.

If, to you, that’s a robot army — if you’re thinking about hundreds of thousands, maybe even millions of robots over the course of ten years working in factories, likely in Asia, possibly in the West — I think we will see it in the next decade.

The big question mark is how advanced we’ll be able to make the AI automation. How complicated are the jobs these machines could do? Because technology has a habit of working really well and advancing really quickly until it doesn’t. I’m not exactly sure where that stopping point will be.

If we’re on the path to AGI, then buckle up, because the robots are getting real good and the AGI is getting really good. Maybe it’ll be gay luxury space communism for everybody, or maybe it’ll be iRobot. But the truth is probably somewhere in between. That’s why I started our discussion by talking about how robots are the ultimate capital good.

If you want to think about what would happen if we had really advanced robots, just think about what would happen if your dishwasher loaded and unloaded itself or the diaper changing table could change your daughter’s diaper.

A good dividing line to think about is that home robots are very difficult because the cost needs to be very low, the capability level needs to be very diverse and very high, and the safety needs to be very high. We will require orders of magnitude more intelligence than we have now to do home robots if they do happen. We’re probably ten-plus years away from really practical home robots. But in the industrial sector — and therefore the military implications we talked about — it’s baked in at this point.

Jordan Schneider: As someone who, confession, has not worked in a warehouse or logistics before, it’s a sector of the economy that a lot of the Washington policymaking community just doesn’t have a grasp on. Automating truckers and automating cars doesn’t take many intellectual leaps, but thinking about the gradations of different types of manual labor that are more or less computationally intensive is a hard thing to wrap your head around if you haven’t seen it in action.

Ryan Julian: This is why, on research teams, we take people to these places. We go on tours of auto factories and logistics centers because your average robotics researcher has no idea what happens in an Amazon warehouse. Not really.

For your listeners who might be interested, there are also incredible resources for this provided by the US Government. O*NET has this ontology of labor with thousands of entries — every physical task that the Department of Labor has identified that anybody does in any job in the United States. It gets very detailed down to cutting vegetables or screwing a bolt.
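If you want to poke at that dataset yourself, the O*NET database is freely downloadable from onetcenter.org and includes a tab-delimited task statements file (the exact filename and columns vary by release). A minimal sketch of loading and skimming it with pandas, under that assumption:

```python
# Minimal sketch: load O*NET's task statements and skim them. Assumes you've
# downloaded the tab-delimited "Task Statements" file from the O*NET database
# release at onetcenter.org; filename and column names may differ by version.
import pandas as pd

tasks = pd.read_csv("Task Statements.txt", sep="\t")
print(len(tasks), "task statements")
print(tasks.columns.tolist())              # inspect whatever columns this release has
# Example: count task statements per occupation code, if that column exists.
if "O*NET-SOC Code" in tasks.columns:
    print(tasks["O*NET-SOC Code"].value_counts().head())
```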


Jordan Schneider: How can people follow this space? What would you recommend folks read or consume?

Ryan Julian: Well, of course you should subscribe to ChinaTalk — lots of great coverage. The SemiAnalysis guys also seem to be getting into it a little bit. Other than that, I would join Twitter or Bluesky — that’s where the rest of the AI community is. That’s the best place to find original, raw content from people doing the work every day.

If you follow a couple of the right accounts and start following who they retweet over time, you will definitely build a feed where, when the coolest new embodied AI announcement comes out, you’ll know in a few minutes.

[Some accounts! Chris Paxton, Ted Xiao, C Zhang, and The Humanoid Hub. You can also check out the General Robots and Learning and Control Substacks, Vincent Vanhoucke on Medium, and IEEE’s robotics coverage.]

Jordan Schneider: Do you have a favorite piece of fiction or movie that explores robot futures?

Ryan Julian: Oh, I really love WALL-E and Big Hero 6. I prefer friendly robots.

Enjoy this deleted scene from WALL-E:

Mood Music:
