Can donors save science?
Renaissance Philanthropy — in my opinion, the most exciting philanthropic venture in the US — is getting a one-year check-in. Kumar Garg first appeared on the show right before I went on paternity leave, and now we’re back for round two. Before founding Renaissance Philanthropy, Kumar worked in the Obama administration’s Office of Science and Technology Policy and spent time at Schmidt Futures.
We discuss…
How Renaissance catalyzed over $200 million in philanthropic funding in its first year,
The goals of the organization and how it has responded to Trump’s S&T funding cuts,
What sets Renaissance apart from traditional philanthropic organizations, and lessons for China-focused research foundations,
AI applications in education, from tutoring to dyslexia screening,
Donor psychology, “portfolio regret,” and how to build trust within a philanthropic network.
Listen on Spotify, iTunes, or your favorite podcast app.

The Hedge Fund Model of Giving
Kumar Garg: I like that this is becoming an annual tradition.
Jordan Schneider: Yeah, we've got to set goals this year, and we can hold you to them in 2026.
Can you start off with the 101 of Renaissance Philanthropy, and explain how the thesis has played out over the past year?
Kumar Garg: I’m grading myself here, so this is a biased view, but it’s been a very strong year. When we were launching the organization, we were trying to do something different.
Most philanthropic organizations exist in a single model — they work for a single donor. That donor has resources, whether they sit in a foundation, in their DAF, or as personal wealth. The organization works for them, asking how much money they want to give and on what topics, then runs their philanthropic giving.
There’s another class of organizations that are basically the people spending the money — researchers running labs and doing high-quality research. The philanthropic system has mostly operated with givers and takers — folks operating these organizations and folks doing high-quality work.
The idea behind Renaissance Philanthropy was to sit in the middle and style ourselves more like an investment fund — more like what happens in the world of finance. The folks who are the holders of capital, who have the money, mostly don’t spend their time trying to directly deploy that money.
If you work at a family office acting as an LP, you might have a team of 10, 20, or 30 people, and you’ve got billions of dollars to deploy. What do you do? You go out there and find intermediaries — private equity funds, hedge funds, venture capital funds, or other experts in particular sectors and areas. You give them the money, and they deploy it on your behalf to help you earn a return.
Philanthropy has mostly operated differently. It’s odd, but it’s historically contingent. The investment world moved toward specialization from the ’70s onward, while philanthropy went in the direction of direct giving. You have really large philanthropic organizations, often well-staffed by experts, that do the giving.
The challenge is that there’s a subset of donors who want to build large organizations, and there’s a large set of donors who don’t. The ones who don’t have been sitting on the sidelines. What ends up happening is that maybe when they retire, they build an organization, or when they die, they bequeath their wealth to a nonprofit or university. That leaves a lot of value on the table.
The idea of Renaissance was, on various science and tech topics, can we do what an investment fund does? We write down a thesis for three years, five years — we want to achieve this goal. We recruit a field leader to run that fund, then treat the donors almost like LPs in a philanthropic fund. We’re not giving them a return back, but they’re putting money to work against that strategy.
A year ago, when I told this story to people — “I’m going to create an organization that does this” — the operative advice was, “Good luck.” You’re going to cover the waterfront across AI, climate, and economic and social mobility. You’re going to take on this massive fundraising goal. That seems like a very hard way to operate. You have no natural advantages — you’re not spending one person’s resources. You have to raise the money and deploy it. It seems doubly hard.
What I was interested in was growing the pie — can we use this model to bring new donors in?
A year in, the early grade is strong. We’ve been able to stand up multiple philanthropic funds. We have a fund using AI to accelerate the pace of math research. We have another fund using AI to deliver public benefits better. We recently launched work on climate emergencies — can we solve for runaway climate risks and increase the technology readiness level of various climate technologies?
We have different funds in various areas. Each has this basic structure — they have a thesis they’re driving against, a field leader running it, and we’re recruiting donor money against that strategy.
What I’m hoping for is that this starts to become — not the only way philanthropic giving happens — but a much more credible path. This allows more donors to be active without necessarily having to take on all the operational load themselves.
Jordan Schneider: You’ve launched this in a particularly precarious time for the future of science and research in America. We’ll get to your takes on the policies in a second. But I’m curious from a donor appetite perspective — what has all the tumult in universities and government funding done for those billionaires sitting on the sidelines, giving just 1.2% of their assets annually to philanthropy?
Kumar Garg: That’s a great question. I don’t have one system-level answer — it’s a frequent question I get about how donors are interacting with the environment. They’re interacting the way most people are — there’s an incredible amount of chaos and news every day, leaving many frozen in place.
It’s relative. Government has pulled back on research funding in the short term, causing significant churn. Industry is also holding back as companies figure out what’s happening with tariffs and everything else. Philanthropy, comparatively, is cross-pressured but hasn’t engaged in the same pullback.
There are donors we interact with who are certainly reformulating their strategies. There are others who, as I mentioned, are interested in compelling ideas and looking for them just like anything else. I haven’t seen an overall pullback — just more of a sense of “Is this idea good in itself, even if government didn’t help at all?”
Jordan Schneider: Can you put the hope of the new model you’re trying to manifest in order-of-magnitude terms against, I don’t know, NIH budgets being cut by a third?
Kumar Garg: There’s no world in which philanthropy fills the gap. If you step back and ask how the US built its lead — well, the US spends on the order of $200 billion a year on R&D. Once you include basic and applied research across DoD and civilian agencies, that’s an order of magnitude more than philanthropy spends on research.
The place where these new models will get traction is that how you organize scientific organizations has suddenly become much more of a jump ball. It used to be that the academic bundle — being at a top university — had everything stacked on top of it. You could get really good talent that way — graduate and postgraduate talent, great students. You build your lab there, do cutting-edge work. Usually, the university gives you flexibility to do many things on top of that. If you were an academic doing well at cutting-edge research, you could do all of it within the four walls of a university.
Some researchers have left universities and built what are basically academic research labs outside the university. You’ve got the work that Patrick Collison is supporting around the Arc Institute, the Flatiron Institute that Simons supports, and the focused research organizations (FROs) that Convergent Research incubates. For a long time, that’s been a very alternative path — rare to take, often requiring you to figure out what happens to your university affiliation and how it changes your career path.
If you’re an ambitious researcher who wants to do big projects, whether you do them within the four walls of a university or in your own nonprofit research lab that partners with universities becomes more of an open question — especially in a world where university funding might fluctuate based on political developments.
I don’t know how that will play out over time, but we’re three months into a deeper shift in how institutional financing will happen. That could have big implications generally. If the federal government doesn’t play its important role in funding research, that’s a net negative. But even if federal funding returns to a healthy level, researchers will still take this as a wake-up call to think about structuring their research organizations to be more resilient against systemic shocks.
This episode is brought to you by ElevenLabs. I’ve been on the hunt for years for the perfect reader app that puts AI audio at the center of its design. Over the past few months, the ElevenReader app has earned a spot on my iPhone’s home screen and now gets about 30 minutes of use every day. I plow through articles using ElevenReader’s beautiful voices and love having Richard Feynman read me AI news stories — as well as, you know, Matilda every once in a while, too.
I’m also a power user of its bookmark feature, which the ElevenReader team added after I requested it on Twitter. ChinaTalk’s newsletter content even comes preloaded in the feed.
Check out the ElevenReader app if you’re looking for the best mobile reader on the market. Oh, and by the way — if you ever need to transcribe anything, ElevenLabs’ Scribe model has transformed our workflow for getting transcripts out to you on the newsletter. It’s crossed the threshold from “95% good” to “99.5% amazing,” saving our production team hours every week. Check it out the next time you need something transcribed.
Jordan Schneider: I observed the EAs being very excited about how many lives they saved with the bed nets they bought. Then you net that out against USAID no longer existing and all the human suffering that’s going to come from that. The correct calculation may have been to spend all your money lobbying Congress to get people to focus on this.
I think both of us are pretty aligned — we’ve done other shows on immigration policy, university funding, and what’s happening to NSF and NIH budgets going forward. But why do Renaissance Philanthropy when Kumar could be spending 100% of his time in D.C. banging on doors and trying to make it 5% more likely that we get an extra $10 billion a year for this stuff?
Kumar Garg: That’s a great question. Being policy-adjacent is generally very high ROI. No matter how you run the numbers, policy advocacy — especially on science and tech topics — punches above its weight, regardless of what you’re doing. It’s probably why I spent time in government as a policy staffer. It’s partly why, no matter what I’m doing, I’m constantly interacting with policymakers and making the case. It’s also why, when funders ask to what extent advocating on behalf of the research community should be part of their work, I’m strongly supportive.
The reason we structured Renaissance this way is that I wanted to specifically think about growing the pie of philanthropic funding because I thought no one was doing it. There are organizations working on policy advocacy. Very few organizations were trying to bring new donors into the mix.
We would be failing as an organization if we weren’t constantly thinking about how our work could impact shaping the debate on the future of R&D funding. We try to be in conversations with both Congress and the administration, as well as policymakers up and down the ladder, to say, “Here’s why this work matters, here’s why the future matters.”
Part of the new models we’re funding — whether it’s things like FROs or AI accelerating science — is to make the case for why investment should happen. Many of the ideas I’ve funded over the years, you can see echoes of in the new Heinrich legislation around accelerating science through AI, where they’re talking about ensuring these AI investments can actually accelerate the pace of science using new models.
Philanthropy, when done well, opens the aperture for what funding could do. Hopefully we’re playing that role. One area I’d like the conversation to reach is moving beyond this dialectic between “science is important” and “science needs to be dismantled because it made mistakes.” I’d like us to reach a place where we recognize there are important things we can do to help reform how we do science. We should bring more discipline to trying out new ideas, bringing in new funding methods and new voices, and reflecting on past mistakes — while also remembering that the investment agenda around science is critical for its utility. Hopefully we can be part of that dialogue.
In some ways, you’re pushing on something I think about all the time — I am a policymaker at heart. The deep utility of that shouldn’t be forgotten in my story.
Jordan Schneider: All right, the answer I’d give you is that this is almost a federalist model of policymaking — as you said, the inventiveness you guys can bring, in form factor and discipline, to doing science and technology research is weird enough that it’s not happening in government. But also, two, three, four years down the line, once you have some really awesome case studies, these are the sorts of things that can then get 10x’d or 100x’d in our gorgeous NSF circa 2027, remade to fully align with the Kumar vision of how change gets made.
With that stance of optimism, let’s talk a little more in detail about some of the projects you guys have stood up. Take us on a little tour, Kumar. Where do you want to start?
Kumar Garg: I’ll go through a couple of the funds and projects we’ve launched. Just to give people a taste of how the model works, let’s start with our work in AI. Our operating theory in AI is that we’re living through a period of huge capability overhang. The core technology is rapidly developing, but the number of people, projects, and overall work actually applying these tools to hard problems in society is really small.
I’ll give an example. We have an AI and education fund specifically focused on how AI can accelerate learning outcomes. If you follow social media, many people write and talk about AI in education, which would give you the sense that a lot of people are working on it. But if you actually dig into the space, the number of technical experts with knowledge of both how education works and how AI works is still shockingly small.
We run something called the Learning Engineering Tools Competition — an annual competition that invites tool developers to present cutting-edge ideas that use AI to advance learning outcomes. We’ve been running it for a couple of years; I started it even before Renaissance and then brought it into Renaissance. It’s the only large-scale ed-tech competition in the world, which still blows my mind. No one is out there systematically asking for ideas from people who want to build AI for education.
We have another part of our AI and education portfolio that specifically thinks about moonshots — what’s a really hard problem in education that AI could solve? We picked middle school math. It’s really important for advancing to future degrees, and students really struggle with it. We asked: can you emulate the results of high-dosage tutoring, which studies from J-PAL and others show can roughly double the rate of learning for students in math? And can you do it for under $1,000 per kid — a price at which you could offer it to every kid?
That’s now running as a program with seven teams, two of which are actually on track to potentially accomplish this goal.
Jordan Schneider: Which is wild, right?
Kumar Garg: When those teams are working on it and we ask them who they’re collecting lessons from, there’s not a big field they can go out to. When they go out and interview the AI labs — the ones that get written about every day — those AI labs talk about education, but they don’t have in-house education teams that can actually help these teams.
The biggest piece I would always say to people is that at the coal face, there’s tons of room to do work, because once you actually start, you realize the number of people working on these problems is shockingly small.
We’re now starting to explore our next moonshot area — should there be something at the intersection of AI and early learning? Can we build a universal screener that estimates whether a child is off track in early language development just by having them speak into a device? There’s a bunch of interesting work happening in this area, but we don’t yet have a way to screen for early learning challenges like dyslexia from a child’s speech alone. That could vastly increase our ability to get them to a speech pathologist, get them back on track, and have them reading by third grade, which is critical to all future learning.
That’s just one track — AI and education. That’s just one compelling thesis.
Jordan Schneider: Obviously AI is going to matter for education — hard to find people who’d argue with that. Talk a little bit about finding the donors and finding the teams. What work did you have to do to stand this up and launch it?
Kumar Garg: What’s been interesting is that building out the team has been hard work, because the number of technical experts who actually know both things — AI and education — is small. We have slowly assembled a team of ML experts with educational backgrounds. We call it a hub model — we’ve created an engineering hub and recruit technical experts into it who specifically have this dual background.
I have somebody on my team, Ralph Abboud. He has a machine learning PhD, and he did his thesis on graph theory. He’s not an education expert, but we brought him onto the team. He has been working with a lot of these educational teams that we brought in. What’s interesting is that his ideas on what kind of language models they should be building are really good. It took him some time to level up on the education side, but now he is one of their highest value contributors, even though he sits on our team and he’s contributing there.
There is a transition where you can build up talent that sits across these two areas, but in AI and education we mostly had to build it ourselves — it was hard to find off the shelf. Now we have a constellation of AI and education experts, some of whom sit on our direct staff and some of whom sit inside the teams we’re betting on. It’s been great. Now we have a field team that can really go after more problems.
On the donor side, we’ve really lucked out. Our core donor for a lot of this work has been the Walton Family Foundation. They have a long history of funding education. What’s been interesting is that they’ve wanted to invest more in what they call their innovation portfolio, but didn’t know how to bridge the technical divide — if we’re going to do more in this area, who are the technical experts who will actually do it? That had kept their investments small and experimental. Their partnership with us has meant they’ve become way more ambitious about how much they want to invest through this technical AI and education lens.
That’s our core thesis — can we be the permission structure for donors to go much bigger on innovation? We’ve seen that in other areas. Slowly, their support is bringing other donors in as well — whether you’ve been a long-standing donor who hasn’t been active on science and tech topics, or you’re a brand-new donor altogether.
Jordan Schneider: What’s the RenPhil management fee?
Kumar Garg: That’s a good question. We build our cost recovery into each fund. Usually the way that works is if we’re operating multiple funds, each fund has money going out the door for actual deployment grants, but then we’re building in our cost for the actual staff operating the funds, whatever services and technical support services we’re providing, the work we’re doing to partner with various funders, as well as our overall studio support. It varies fund to fund, but donors have found it — compared to having to try to do this themselves — much more actionable. For us, we want to build a thriving organization. We don’t want to cut corners. We want to build an organization that can both operate those funds and also be looking for the next ones.
Jordan Schneider: Does anyone complain about that?
Kumar Garg: The way it comes up is there’s a type of donor who already has the answer in their mind. They’re thinking, “I think this needs to happen.” Really, what they’re looking for is an operating partner to just do that — “I want a conference, I want a workshop, I want to fund these three organizations.” Our model is that we’re the product. You’re hiring us to build out the strategy, recruit the team, and deploy. If you already have the answer in your head, we often tell them we’re way too fussy for that model — there are much simpler ways to operate. That’s where the delta comes in. If you already have the answer and you’re just looking for a partner to execute for you, we’re probably not the right fit.
Jordan Schneider: Yeah, you said this on another show — that you’re there to take the cognitive load off donors. The idea being: yes, if I have $10 billion, maybe I’ll allocate $1 billion to investing in stuff I know and think I have some subject-matter expertise in, but I still have to put the other $9 billion somewhere — probably not cash. And yes, I’m comfortable paying a hedge fund or financial advisor a management fee to do that.
Kumar Garg: A big part of it is opt-in. People don’t know what journey they’re on, but what they worry about is: am I going to feel stuck? A lot of folks end up not getting active philanthropically because the decision feels weighted by getting stuck. “Okay, if I hire somebody, and then six months from now, I decide maybe I want to change direction. Now I’m going to have to let someone go.” People hate that.
Or, “I met a researcher. I liked their research. I gave them one grant. But now they’ve reached out and said, ‘There’s so much happening in the world — I’ve lost funding from the government. Can you double the grant?’ I was just giving them a grant because I met them and I thought they were great. But now they’ve sent me a note that they might have to let go of postdocs. Now I’m in this uncomfortable situation. If I say no, I feel I’m hurting them. If I say yes…”
People have all these experiences where they feel uncomfortable with the relationships around their giving. Rather than working through that discomfort, they hold back. One of the things we say to them is that our model is one where we’re the ones making the decisions. We’re going out there, finding researchers, finding projects, developing strategies. You can be as involved as you want. You want to be meeting the researchers? That’s great. You want to be learning from the strategy so you can do direct giving down the road? That’s great. But if you also take six months off and decide, “That was great, I learned for a few months, now I’m off doing something else” — nothing will stop. We’re a fully operational organization that will execute on everything we said we were going to do, whether you’re involved or not.
It just takes the pressure off. You can opt in if you want to learn and be involved, but you can also choose not to. That frees them up to want to learn without the “Am I about to get stuck?” worry. That sounds very psychological, but people forget how hard it is to get going on things — “I’m going to start working out more. I’m going to start doing this.” Starting is hard. We want to make starting easy by letting you provide a lot of value into the system without necessarily having to own all of the execution.
Jordan Schneider: There are a lot of pieces of people’s jobs that it seems AI can increasingly chip away at, enable, or launch. It’s interesting because some of the things you guys are doing — you have these seven playbooks, ways you can tackle problems — I would love to upload those seven to ChatGPT, say, “Here’s my problem in the world,” and have the AI help me pick through which one fits. But talking someone who’s really rich, and who’s feeling uncomfortable about giving money, into donating philanthropically in a serious way for the first time seems like one of the more human things. There’s really going to need to be the friendly Kumar Garg, who now has a nice microphone for his Zoom calls, to — what did Derek Thompson say? — whisper the dulcet tones of comfort and competence in their ear in order to get them on this path. I don’t know, it just seems like a very human thing you’re engaging in on the donor side. I’m curious for any reflections you have on that.
Kumar Garg: We are very curious about how much of our own internal processes we can automate. Why not? We sit next to AI; we should be thinking about it, we should be dogfooding. The place where we’ve already seen it provide some value is what you would consider baseline automations. There’s a lot of grantee reporting that you should be able to automate. We’re definitely interested in scoping: we have a hunch around a thesis in this area — can you do a research report and tell me the relevant things to know? We’ve even used it for, “Hey, we might do an RFP on this topic. Who are some researchers who should apply?” Sometimes we’ve found interesting suggestions for researchers we should affirmatively reach out to.
I will say that we’re still far away from it actually helping on anything we would consider high stakes. As you’re saying, a huge amount of what we’re doing is something that feels like a trust fall. This is an important decision, and having people who take their job very seriously and put their own personal legitimacy behind the work is an important part of it. When we screw up, it’s on us. We stand behind all of the work. People appreciate that these are serious people who stand behind the work they’re putting before them, not some faceless intermediary. Maybe that will change, but it’s an important part. Even on the basic information you should know about various people, the current AI models are not that great.
The place where I — we have this intuition that there should be parts of being a program leader that an AI assistant could handle. Right? You take more and more of the tasks of being a program leader or fund leader and say, “Okay, I want to do a workshop on this topic. Generate me an agenda for how you would run the day.” It takes a bunch of your past workshop flows and generates a sample workshop design. How much of that can we create so that we really get to a point where a program leader or fund leader can operate without much additional support? Obviously we’d need to create some cross-cutting support, which I’m interested in. But the chance that we get all the way to an AI advisor — we’ll have to wait and see.
Jordan Schneider: The trust fall works in multiple directions. You need researchers to give up their PhD programs or leave their current positions to spend half their time with you, while simultaneously needing donors to provide funding. Having a recognizable face with a proven track record and skin in the game on the other side of that equation is something that won’t disappear anytime soon.
Kumar Garg: One thing we debate internally is that much of my workflow relies on tacit knowledge. When I’m talking to somebody about their work, twenty minutes into the conversation, I’ll say, “Tell me more about that. Why is the field stuck on this point?” They start describing it, and I realize that if there were a canonical dataset with specific dimensionality, it might solve the problem. When I ask why that doesn’t exist, they explain it’s locked up somewhere.
Part of me constantly strives to figure out how we can make this process more explicit. When we recruit somebody new to the team, they ask if they can sit in on my calls and watch me work through problems with researchers. There’s something that feels wrong about just saying, “You develop this intuitive feeling for opportunities — just pull on that thread.” The more we can transition from tacit to explicit knowledge, the better. Right now, we operate on an apprenticeship model where people learn by doing and being embedded in these structures, but I don’t think that has to be the endpoint.
Jordan Schneider: Much of what you do involves human matching — putting people in touch with each other. While you could potentially feed all your past calls into an AI system, there’s an emotional and personality matching component that you’re handling. That remains very much a human process that current models aren’t quite ready for yet.
Kumar Garg: The matching capability changes over time, but I think what people really value when I connect them is that I took time out of my day to think the two of them should know each other. That’s the actual signaling value — that my time is precious.
Jordan Schneider: Slight tangent, but if people want to establish trust and rapport, the first thing they should do is spend $150 on a microphone for their Zoom calls. That’s my recommendation for everyone. When I do my calls, I sound the same as I do on my podcast, and people respond positively. You feel like an embodied person rather than a compressed, distant voice through AirPods. It’s advice for anyone who wants to make connections and raise money from billionaires on Zoom.
Kumar Garg: I’ll echo that point, though I haven’t practiced it myself. There’s an old political adage about microphone technology. If you look back at politicians historically, there was a time when microphones couldn’t pick up subtle intonations well — speakers were just projecting loud sound. Once microphones could capture subtle intonations, politicians who excelled at that style of speaking began to dominate.
People point to President Clinton as an example — he was exceptionally skilled at subtle microphone use. I remember reading a paper arguing that this was possible because the technology had improved to support that communication style. Politicians offer good lessons here because ultimately, communication is central to building trust with the electorate. [Here are papers that explore this]
Jordan Schneider: Absolutely. If you listen to old clips of Warren Harding or Teddy Roosevelt speaking, they’re basically screaming into microphones — which was necessary at the time. Teddy Roosevelt was exceptionally good at that style of projection. You needed to be very loud to stand on a soapbox and reach people twenty rows back. Now we have the dulcet tones that modern microphones enable.
Here’s another fun fact, Kumar — the microphone I’m using has been manufactured for sixty years. It’s remarkable that microphone technology for voice pickup has essentially reached its peak — we’ve basically maxed out the capability.
Kumar Garg: I should try to find that paper I mentioned. I wonder if it’s about mobile situations — being in some random union hall where you need to set up a handheld mic in front of a politician. Perhaps that’s why microphone technology improvements became so important. That’s an interesting angle.
Jordan Schneider: I’m curious about that. Alright, shifting topics — Yascha Mounk recently wrote on Substack about attending gatherings, conferences, and dinners where leaders of America’s biggest foundations have been strategizing how to defend democracy. Few were as openly devoted to extreme forms of identitarian ideology as they might have been a few years ago, but the reigning worldview at the top of the philanthropic world assumes little has changed since summer 2020.
The general consensus holds that voters turned to Trump because American democracy failed to deliver for the “historically marginalized,” and the solution supposedly revolves around “mobilizing underrepresented communities.” The most urgent imperative is to “fight for equity” and “listen to the global majority.” I find this perspective fascinating. Kumar, as someone who’s a new entrant to this world, how do you interpret this?
Kumar Garg: Several different dynamics are happening simultaneously. Some philanthropic responses resemble dinner table conversations — people sharing hot takes about why the election unfolded as it did and offering their views on America or the American people. Much of this sounds as random as hosting a dinner party where guests share their political opinions.
There’s also a genuine state of confusion about what’s happening. The first hundred days of the Trump administration have been exceptionally active across a range of unexpected areas. Many people expected it to feel similar to the previous Trump presidency, so they examined their portfolio of issues and anticipated certain outcomes — but that’s not what materialized.
Regarding how much people are actually rethinking their approaches, that’s a valid question. The most immediate reconsideration I’m seeing centers on identifying what we’re missing. This is particularly evident in the science community, which is confronting devastating across-the-board cuts. Researchers are losing funding, university funding is being paused, and graduate students working on topics relevant to competitiveness are having visas revoked.
The community is asking, “We don’t remember this being a major campaign debate topic, so how exactly did we become a political football?” There’s extensive questioning about what we’re missing — whether there was a conversation we weren’t invited to where we were being discussed, and what we’re failing to understand.
While donors with certain political orientations likely won’t change their fundamental positions, the confusion centers less on American domestic politics and more on why certain issues became contentious. Foreign aid is a good example. The extent to which US foreign aid posture and system effectiveness became campaign issues wasn’t apparent during the election cycle. People are asking whether we missed a major debate that suggested the United States should dismantle its leadership on these topics overnight. What policy debate did we miss? That’s where much of the confusion originates — donors being puzzled about the sources of these developments.
Jordan Schneider: There’s an interesting dichotomy between foundations with living leadership and those with deceased benefactors. Gates recently indicated he would spend down his wealth faster than previously planned, presumably responding to recent events. When foundations have active leadership that engages with current events, they can be more responsive.
However, when you have flagship philanthropists who died seventy-five years ago and organizations that have built programs around worldviews that are no longer relevant or don’t meet current demands, pivoting becomes much more difficult. You encounter institutional blockers, boards, and established structures, whereas a living person with decision-making authority can simply redirect resources.
Kumar Garg: That’s definitely part of it. The piece I’d add, which connects to our Renaissance model, is that people underestimate how much philanthropic organizations become tied to their existing programs. This isn’t necessarily negative, but consider the process: you spend two and a half years scoping a program, conducting field research on topics, then executing a national search for program leadership. You recruit and convince someone to relocate for the position, provide coaching for test grants, and they’re six to twelve months into the grant cycle.
If you then decide the world has changed and want to cut the program — after issuing press releases announcing this as a major new strategic direction — it appears chaotic. People develop what I call “portfolio regret.” If they could start fresh today, they’d create different programs than what currently exists.
One argument we make to donors is to structure themselves more like limited partners, deploying money into funds where all the capital remains fresh and available. You avoid the incumbency problem, where team members question every pivot attempt because they have specific responsibilities you hired them to fulfill.
Flexibility requires both mindset and structure. Donors sometimes create substantial built-in costs and barriers to pivoting when they could maintain lightweight, flexible operations if they chose to do so.
Jordan Schneider: That’s fascinating because you’d think giving money away would be straightforward — you should be able to distribute funds however you want. But the emotional sunk cost around philanthropy wasn’t something I had necessarily considered.
Kumar Garg: This creates a situation where people spend enormous amounts of time operating like a duck swimming on water — their feet are moving rapidly beneath the surface while they try to keep the strategy looking consistent above water, all while changing the actual content underneath to pivot to current circumstances. This leads to significant conceptual confusion because you claim to have always had this program, but underneath, the program is completely different because the situation changed.
Part of why I favor the philanthropic fund model is transparency — what’s on the cereal box is what you get. It’s a three-year fund with specific objectives that will begin and end. Maybe it’s not perfectly timely, and that’s fine, but your new program can be timely. The alternative — constantly maintaining broad programs that you’re perpetually reworking underneath — makes evaluation nearly impossible. If I ask whether a program has been successful, people respond that the program has been changing constantly. This makes it extremely difficult to evaluate it as a focused initiative that ran for a specific period with defined goals. Did we achieve those goals? People simply don’t engage in these basic evaluations.
For example, I was speaking with a donor and pointed out that in the investment world, people prominently display their successes — putting “first check into [major company]” in their Twitter bios to demonstrate their betting acumen. I asked, “Who are the ten best program officers in America? Who on the philanthropic side has been the most effective check writer?” They responded, “How would we even know that?” Even with qualitative measures, wouldn’t you want to identify the best check writers?
A fund model, even for philanthropic goals, enables more honest assessment. You can say that fund paid out successfully, that one was moderately effective, and another one failed completely. The person who led that fund can then take that track record to their next position as a legitimate career advancement. We deny people this opportunity when we maintain the fiction that we’ve always had these programs run by different people with slightly different strategies. This obscures rather than clarifies outcomes. We could simply be honest: we executed that fund, now we’re doing something new with a clear beginning and end.
Improving China Research + Why Bother?
Jordan Schneider: It’s remarkable. I’ve spent considerable time on foundation websites researching whether organizations like the Ford Foundation might fund ChinaTalk. They have mission statements about “democratizing equity,” which is admirable — I agree we should advance democracy and create more opportunities for people. However, the problem arises when you’re only a passive recipient of pitches. You’re essentially letting grantees define what success means, and the counterfactual becomes very difficult to assess. The organizations you’re funding would probably exist whether you give them $100,000, $500,000, or nothing at all.
The alternative approach would be starting with specific objectives — “We are trying to achieve X by Y timeline,” then working backward from that goal orientation to identify the people and organizations who can take your money and provide the highest probability of achieving that outcome. This approach is far more strategic, and my frustration isn’t about the politics of how they set their goals — it’s that they need to engage more seriously in the process.
Kumar Garg: Let’s work through this together. Imagine you and I were designing a fund model versus a program model for increasing collective intelligence on US-China relations.
There’s a vague way we could structure this — “This is the US-China program. It will have three tracks — funding scholars studying China, engaging policymakers about those insights, and warehousing data and research publications on these topics.” This resembles how most programs operate: they establish a broad framework with several tracks, then people apply under those categories. But if I asked what constitutes winning or how we’d know this program succeeded, you’d probably say, “Well, people applied and we distributed grants.”
You could execute the same concept with much sharper focus by asking, “What would success look like in three years?”
Jordan Schneider: Exactly. I want ten books written that are so thoughtful and essential to the future of US-China relations and American policy that Ezra Klein would be compelled to feature these authors as guests because the thinking these grants produced is indispensable. Then we work backward from that goal to determine the budget.
We’d estimate the costs — ten books, assume a one-in-five success rate for people with strong proposals to execute effectively, calculate the pipeline requirements, and arrive at a number that gives us 75% confidence of producing those ten books by 2029.
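For the curious, here’s a minimal sketch of that back-of-the-envelope pipeline math — the one-in-five success rate and 75% confidence target come from the conversation above, while the $150,000 grant per author is a purely hypothetical placeholder:

```python
from math import comb

def prob_at_least(n_grants: int, k: int, p: float) -> float:
    """P(at least k successes out of n_grants independent bets, each with prob p)."""
    return sum(comb(n_grants, i) * p**i * (1 - p)**(n_grants - i)
               for i in range(k, n_grants + 1))

def grants_needed(k: int, p: float, confidence: float) -> int:
    """Smallest portfolio size giving at least `confidence` chance of k hits."""
    n = k
    while prob_at_least(n, k, p) < confidence:
        n += 1
    return n

if __name__ == "__main__":
    n = grants_needed(k=10, p=0.2, confidence=0.75)  # ten books, one-in-five odds
    cost_per_author = 150_000                        # hypothetical grant size
    print(f"Fund ~{n} authors for >=10 finished books at 75% confidence "
          f"(~${n * cost_per_author:,} at a hypothetical ${cost_per_author:,} per author).")
```

Run as written, it lands at roughly sixty funded authors — the point being that a precise goal forces the portfolio size and budget out of the math rather than out of vibes.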
Kumar Garg: Exactly. That approach feels like a sound attack strategy built on what resembles a very tight OKR, which you may completely fail to achieve. You might be wrong, but it’s precise. Then you build your strategy around that goal.
You find someone to execute it — let’s call it the Jordan Fund. If you succeed, people ask, “Jordan, how did you pull that off? You wrote down this ambitious goal, built a strategy around it, and executed successfully. You’re clearly skilled at this.” When you pursue your next initiative, you can say, “I ran this fund called the Jordan Fund. We set this audacious goal to produce ten bestsellers on US-China relations, and we achieved it.”
This feels much more tangible as an actionable strategy — something that field leaders can point to as real-world impact, even if it fails. Let’s say you only achieve partial success — you still have concrete lessons. Compare that to “the US-China program makes some awards and does some things.” How do I assess whether that’s working?
Jordan Schneider: It’s remarkable because the market is so powerful — you can’t get away with this approach when running and scaling a business, especially when taking other people’s capital and trying to generate positive returns for them.
What’s curious, Kumar, is that very capitalist people become surprisingly touchy-feely when it comes to philanthropy. There’s an emotional layer where they think, “This is giving, so we shouldn’t apply business mindsets and OKRs.” It feels somehow dirty to them.
Kumar Garg: Here’s what’s important — we need to distinguish between current donors and potential donors. People focus extensively on today’s active donors, but if you examine the statistics on potential giving, current donors might represent only 1-2% of the actual addressable universe.
The question becomes, would we attract an entirely new class of donors if we brought this level of rigor, precision, and targeted approach to philanthropy? This would feel much more familiar to their professional experience.
Why don’t existing donors demand this rigor? I believe there’s significant pent-up interest in this approach. People default to thinking, “Since this isn’t about making money, we’ll substitute a really complicated theory of action — that’s where we’ll apply our intellectual capacity.” I often say that having numerous boxes and slides doesn’t substitute for having a clear attack vector.
Jordan Schneider: Exactly — rigorous thinking. The median nonprofit worker is about five times more likely to be socialist than the average person, so perhaps people more attracted to touchy-feely logic are simply concentrated in current organizations.
Kumar Garg: Sometimes the nonprofit and philanthropy sectors spend too much time engaging in collective mission statements, as if shared purpose alone is sufficient. But we actually have distinct roles to play, including making high-quality decisions about where to deploy finite resources. Because money is limited, you must make decisions strategically and place informed bets.
This may feel reductive, but it’s actually the responsibility inherent in this work. You must be a responsible steward because high-quality decisions produce more good. People sometimes struggle with that reality.
Jordan Schneider: Without high-quality decisions, you end up with USAID getting canceled. That’s our current reality. Organizations that weren’t evidence-based and couldn’t effectively justify their impact had some good projects and some poor ones. They faced criticism from small but vocal movements — organizations like Unlock Aid, whose founder we featured a few years ago and will have on again — arguing for more rigor because there was substantial waste and inefficiency.
If you let these issues fester too long, consequences follow. I don’t want to say universities, the NIH, or the NSF “had it coming,” but one of the best defenses you can have is a tight, well-justified organization that can stand up for itself.
Kumar Garg: I don’t want to engage in victim blaming, and I don’t want to excuse what I consider sometimes bad-faith behavior. However, your point about systemic advantages is valid — caring deeply about systemic impact and bringing that rigor and constant evaluation is useful for the work itself, but also valuable when those political fights emerge. You can say, “Look, we’re building something substantial.”
In some of these cases, who knows what impact rigorous evaluation might have had. We’re living through unusual times, but I believe we’re gaining traction because there’s significant pent-up demand for this approach.
Jordan Schneider: Good. That makes me feel somewhat better, I suppose.
Kumar Garg: What should donors know about China? That’s my question for you.
Jordan Schneider: The original impetus for ChinaTalk was thinking about long-term national strategic competition and competitiveness from an industrial systems and technology perspective — identifying things people could do to nudge outcomes in liberal democracies’ favor. During the Biden era, I observed errors that legislation and executive action could fix with modest improvements — 5% here, 10% there. A sophisticated understanding of what’s happening in China could meaningfully help squeeze that extra 10% out of various decisions.
However, the policy changes we’ve witnessed over recent months regarding long-term strategic competition — how the US relates to allies, approaches to global nuclearization, science and technology funding, and immigration — are much more fundamental. Getting to a better place doesn’t require understanding what made BYD successful, how SMIC is developing its chips, or even China’s new AI policy. These are much more basic issues.
The thesis I operated under during the Biden era was that deeper, more considered understanding of China would lead to smarter policies. That’s now become a sideshow compared to more fundamental questions. If we accept the base case that science is important and immigrants are crucial for better science, then we should pursue those priorities directly. I would choose that approach ten times out of ten.
Returning to the order-of-magnitude questions I asked at the beginning, I would choose a NATO that functions as a genuine alliance ten times out of ten over determining the right tariff level for Chinese electric vehicles or batteries. That’s why I lean toward “be nice to allies” bumper stickers and NSF funding advocacy rather than tightly nuanced “we need to better understand China” approaches when considering ChinaTalk’s decadal competition mission.
Kumar Garg: One thing I’ve been considering, though I don’t have the answer yet, is what new institutions we need. Much of what I care about regarding how science operates in this country has been overturned. The idea that we’ll navigate this period with identical institutions seems unlikely — whether it’s who makes the case for science, who serves as science messengers, or how we conduct science itself.
This raises questions not just about policies, but about institutions themselves. Obviously Renaissance is part of that response, but I have broader concerns. We’ll probably need new institutions because the players on the field will have to change. Systemic change of this magnitude requires that everything else engage in significant adaptive change for us to succeed. That seems unlikely with current structures.
That’s a major meta-question I’ve been asking the team, “We won’t be able to handle everything directly, but what institutions would restore us to better footing? Do we have them? Do we need to create them?”
Jordan Schneider: I’m starting to focus my energy differently because I’m uncertain whether additional ChinaTalk podcasts about the importance of allies will accomplish much. However, one constant you can expect over the next four years is AI development and rapid technological change.
Regardless of controversial Trump policies, the Defense Department will persist, and America will still need to protect itself. America engages in conflicts approximately every three years, so that pattern will likely continue. Perhaps this is just me entering a kind of intellectual monk mode after dynastic change, but I’ve been reading extensive military history and examining periods of rapid technological change — specifically, what it means to deploy these tools more effectively than adversaries.
This doesn’t directly answer your question, but I’m pursuing intellectual journeys rather than policy ones.
Kumar Garg: Here’s what I’d say, which connects to the role you and ChinaTalk are playing. One thing I mentioned to you before we started is that frequently when people reach out to me, they reference hearing a great ChinaTalk episode. You may not have set out to do this, but you’re playing a valuable role in shaping how other people — especially technical professionals — think about problems worth solving and their mental frameworks for our current age.
People are seeking understanding and meaning in this moment. The question isn’t about marginal additional podcasts, but whether you’re providing people with new vectors for their lives and careers. When we first met, you described yourself as a nerd passionate about these topics but uncertain where to channel that energy. You’ve created something quite distinctive.
We might be living in an era of unusually shaped careers, and we need to give people more space for that kind of professional evolution.
Jordan Schneider: That’s fair. Some of this material feels obvious to me but may not be obvious to people who don’t live and breathe these topics daily. It’s strange because I don’t feel like I’m part of some resistance movement — I’m just a guy with opinions on various issues. Some days I wake up feeling helpless, others I feel genuinely empowered. This isn’t a direct response to your question, but that’s my reality.
Kumar Garg: You’re reasoning in public. You’re expanding people’s understanding of how to think through these complex issues and broadening their sense of who engages in this work. What I consistently observe is that people operate in highly siloed environments. They’ll mention their sources, and I immediately know exactly who they’re reading or listening to.
If you can expand their perspective while providing actionable next steps, that’s valuable. Part of my goal is always emphasizing that there are numerous hard, interesting problems to solve across every arena. No arena is the wrong one. People who dismiss politics are missing a crucial point — we’re all living within political systems whether we acknowledge it or not. Don’t dismiss any arena; simply understand that there are multiple dimensions to engage with. There are challenging problems to solve, and nobody benefits if you just remain a passive observer in the cheap seats.
Jordan Schneider: Perhaps the way I justify all my World War I reading is that no one else is doing this specific work. I’m bringing historical insights to current issues as someone who also reads contemporary news and has the freedom to spend ten hours weekly on intellectual journeys exploring topics I consider relevant to today. That seems like a natural conclusion.
Kumar Garg: We’ll do an update in a year. Hopefully, the republic endures.
Jordan Schneider: Kumar, you need to set ambitious goals for yourself — accomplish so much that I have to bring you back in six months.
Kumar Garg: Absolutely. One of our primary goals is becoming more international. We have a partnership with the British government to develop their R&D ecosystem, and we want to expand that model to additional countries. We’re building an organization that continues to internationalize because science and technology are inherently global.
I’m hoping our fund model will attract new donors who have never engaged in philanthropy before. Beyond discussing our approach, I want the actual work to manifest tangible results in the world.
Jordan Schneider: Contact info@renphil.org if you’re wealthy, have innovative ideas, or simply have a technology-related challenge you need help addressing.
Kumar Garg: I’m also available on LinkedIn — please reach out. We consider ourselves fundamentally a talent network, so I’m always eager to connect with people who have compelling ideas.
Jordan Schneider: We should do a little parent corner. We’ll keep this as part of the annual check-in. We talked about slime last year.
Kumar Garg: We did. Here’s something we discussed before we started recording — I asked you about sleep training, and you mentioned being hesitant to push sleep training advice on others. I’m strongly convinced that sleep training is a gift you give your children. We had twins and committed to sleep training. They’re eleven years old now and remain excellent sleepers today. We attribute this directly to that early sleep training.
For any parent listening who’s on the fence and wants random advice from someone they’re hearing — I can’t offer this to everyone, but I usually tell people I know that I’m always happy to be anyone’s texting buddy, providing extra support to get through those terrible first few days when it feels like you’ve made a horrible mistake. On the other end, you have children who can sleep well, which benefits everyone.
Jordan Schneider: I’m with you. I outsourced this decision to my mother — maybe one of the best decisions of my life.
What’s a cute development we’ve observed recently? I bought a ukulele two years ago, thinking it would be nice to play with my kid. What’s been charming is that my daughter is nine months old now, and there was a period where her manual dexterity only allowed her to grab the strings and pull them. But one day she figured out plucking, and now she’s actually plucking the strings. It’s such a cool activation moment for her — realizing “I make the sound now” instead of just dragging this thing around the room.
Kumar Garg: The period from nine months to eighteen months is truly remarkable. You’re approaching walking and first words, then vocabulary takes off exponentially. It’s incredible developmental progress, so I’m very excited for you.
Jordan Schneider: Perfect. Let’s wrap it up there. Kumar, thank you so much for being part of ChinaTalk.
Kumar Garg: Thank you for everything you’re doing. I’m excited to have participated and look forward to future conversations.