Jason Matheny is a delight to speak with, provided you’re up for a lengthy conversation about potential technological and biomedical catastrophe.
Now CEO and president of Rand Corporation, Matheny has built a career out of thinking about such gloomy scenarios. An economist by training with a focus on public health, he dived into the worlds of pharmaceutical development and cultivated meat before turning his attention to national security.
As director of Intelligence Advanced Research Projects Activity, the US intelligence community’s research agency, he pushed for more attention to the dangers of biological weapons and badly designed artificial intelligence. In 2021, Matheny was tapped to be President Biden’s senior adviser on technology and national security issues. And then, in July of last year, he became CEO and president of Rand, the oldest nonprofit think tank in the US, which has shaped government policy on nuclear strategy, the Vietnam War, and the development of the internet.
Matheny talks about threats like AI-enabled bioterrorism in convincing but measured tones, Mr. Doomsday in a casual suit. He’s steering Rand to investigate the daunting risks to US democracy, map out new strategies around climate and energy, and explore paths to “competition without catastrophe” with China. But his longtime concerns about biological weapons and AI remain top of mind.
Onstage with WIRED at the recent Verify cybersecurity conference in Sausalito, California, hosted by the Aspen Institute and Hewlett Foundation, he warned that AI is making it easier to learn how to build biological weapons and other potentially devastating tools. (There’s a reason why he joked that he would pick up the tab at the bar afterward.) The conversation has been edited for length and clarity.
Lauren Goode: To start, we should talk about your role at Rand and what you’re envisioning for the future there. Rand has played a critical role in a lot of US history. It has helped inform, you know, the creation of the internet—
Jason Matheny: We’re still working out the bugs.
Right. We’re going to fix it all tonight. Rand has also influenced nuclear strategy, the Vietnam War, the space race. What do you hope that your tenure at Rand will be defined by?
There are three areas that I really want to help grow. First, we need a framework for thinking about what [technological] competition looks like without a race to the bottom on safety and security. For example, how can we assure competition with China without catastrophe? A second area of focus is thinking about how we can map out a climate and energy strategy for the country in a way that meets our technology requirements, works with the infrastructure we have and are building, and gets the economics right.
And then a third area is understanding the risks to democracy right now, not just in the United States but globally. We’re seeing an erosion of norms in how facts and evidence are treated in policy debates. We have a set of very anxious researchers at Rand who are seeing this decay of norms. I think that’s something that’s happening not just in the United States but globally, alongside a resurgence of variants of autocracy.
One type of risk you’ve been very interested in for a long time is “biorisk.” What’s the worst thing that could possibly happen? Take us through that.
I started out in public health before I worked in national security, working on infectious disease control—malaria and tuberculosis. In 2002, the first virus was synthesized from scratch on a Darpa project, and it was sort of an “oh crap” moment for the biosciences and the public health community, realizing biology is going to become an engineering discipline that could be potentially misused. I was working with veterans of the smallpox eradication campaign, and they thought, “Crap, we just spent decades eradicating a disease that now could be synthesized from scratch.”
The Covid pandemic alone cost the world more than $10 trillion, and yet what we spend on preventing the next pandemic is maybe $2 billion to $3 billion of federal investment.
Another category is intentional biological attacks. Aum Shinrikyo was a doomsday cult in Japan that had a biological weapons program. They believed that they would be fulfilling prophecy by killing everybody on the planet. Fortunately, they were working with 1990s biology, which wasn’t that sophisticated. Unfortunately, they then turned to chemical weapons and launched the Tokyo sarin gas attacks.
The research done by [AI safety and research company] Anthropic has looked at risk assessments to see if these tools could be misused by somebody who didn’t have a strong bio background. Could they basically get graduate-level training from a digital tutor in the form of a large language model? Right now, probably not. But if you map the progress over the last couple of years, the barrier to entry for somebody who wants to carry out a biological attack is eroding.
So … we should remind everyone there’s an open bar tonight.
Unhappy hour. We’ll pick up the tab.
Right now everyone is talking about AI and the possibility of an artificial superintelligence overtaking the human race.
That’s going to take a stiffer drink.
You are an effective altruist, correct?
According to the newspapers, I am.
Is that how you would describe yourself?
I don’t think I’ve ever self-identified as an effective altruist. And my wife, when she read that, she was like, “You are neither effective nor altruistic.” But it is certainly the case that we have effective altruists at Rand who have been very concerned about AI safety. And it is a community of people who have been worried about AI safety longer than many others, in part because a lot of them came from computer science.
So you’re not an effective altruist, you’re saying, but you are someone who’s been very cautious about AI for a long time, like some effective altruists are. What was it that made you think years ago that we needed to be cautious about unleashing AI into the world?
I think it was when I realized that so much of what we depend on protecting us from the misuse of biology is knowledge. [AI] that can make highly specialized knowledge easier to acquire without guardrails is not an unequivocal good. Nuclear knowledge will be created. So will biological weapon knowledge. There will be cyber weapon knowledge. So we have to figure out how to balance the risks and benefits of tools that can create highly specialized knowledge, including knowledge about weapons systems.
It was clear even earlier than 2016 that this was going to happen. James Clapper [former US director of national intelligence] was also worried about this, but so was President Obama. There was an interview in WIRED in October of 2016. [Obama warned that AI could power new cyberattacks and said that he spent “a lot of time worrying” about pandemics. —Editor] I think he was worried about what happens when you can do software engineering much, much faster and focus it on generating malware at scale. You can basically automate a workforce, and now you’ve got effectively a million people who are coding novel malware constantly, and they don’t sleep.
At the same time, it will improve our cybersecurity, because we can also have improvements in security that are amplified a million-fold. So one of the big questions is whether there will be some sort of cyber offense or cyber defense natural advantage as this stuff scales. What does that look like over the long term? I don’t know the answer to that question.
Do you think it’s at all possible that we will enter any kind of AI winter or slowdown at any point? Or is this just hockey-stick growth, as the tech people like to say?
It’s hard to imagine it really significantly slowing down right now. Instead it seems there’s a positive feedback loop where the more investment you put in, the more investment you’re able to put in because you’ve scaled up.
So I don’t think we’ll see an AI winter, but I don’t know. Rand has had some fanciful forecasting experiments in the past. There was a project that we did in the 1950s to forecast what the year 2000 would be like, and there were lots of predictions of flying cars and jet packs, whereas we didn’t get the personal computer right at all. So forecasting out too far ends up being probably no better than a coin flip.
How concerned are you about AI being used in military attacks, such as in drones?
There are a lot of reasons why countries are going to want to make autonomous weapons. One of the reasons we’re seeing is in Ukraine, which is this kind of petri dish of autonomous weapons. The radio jamming that’s used makes it very tempting to want to have autonomous weapons that no longer need to phone home.
But I think cyber [warfare] is the realm where autonomy has the highest benefit-cost ratio, both because of its speed and because of its penetration depth in places that can’t communicate.
But how are you thinking about the moral and ethical implications of autonomous drones that have high error rates?
I think the empirical work that’s been done on error rates has been mixed. [Some analyses] found that autonomous weapons probably have lower miss rates and probably result in fewer civilian casualties, in part because [human] combatants sometimes make bad decisions under stress and under the risk of harm. In some cases, there could be fewer civilian deaths as a result of using autonomous weapons.
But this is an area where it is so hard to know what the future of autonomous weapons is going to look like. Many countries have banned them entirely. Other countries are sort of saying, “Well, let’s wait and see what they look like and what their accuracy and precision are before making decisions.”
I think one of the other questions is whether autonomous weapons are more advantageous to countries that have a strong rule of law than to those that don’t. One reason to be very skeptical of autonomous weapons is that they’re very cheap. If you have very weak human capital but you have lots of money to burn and a supply chain you can access, that characterizes wealthier autocracies more than it does democracies that have a strong investment in human capital. It’s possible that autonomous weapons will be advantageous to autocracies more than democracies.
You’ve indicated that Rand is going to increase its investment in analysis on China, particularly in areas where there are gaps in understanding of its economy, industrial policy, and domestic politics. Why this increased investment?
[The US-China relationship] is one of the most important competitions in the world and also an important area of cooperation. We have to get both right in order for this century to go well.
The US hasn’t faced a strategic competitor with more than two-thirds of our GDP since the War of 1812. So [we need] an accurate assessment of net strengths and net weaknesses in various areas of competition, whether it’s in economic, industrial, military, human capital, education, or talent.
And then where are the areas of mutual benefit where the US and China can collaborate? Non-proliferation, climate, certain kinds of investments, and pandemic preparedness. I think getting that right really matters for the two largest economies in the world.
I recently had the opportunity to talk with Jensen Huang, the CEO of Nvidia, and we talked about US export controls. Just as one example, Nvidia is restricted from shipping its most powerful GPUs to China because of the measures put in place in 2022. How effective is that strategy in the long term?
One piece of math that’s hard to figure out: Even if the US succeeded in preventing the shipment of advanced chips like [Nvidia] H100s to China, can China get those chips through other means? A second question is, can China produce its own chips that, while not as advanced, might still perform sufficiently for the kinds of capabilities that we might worry about?
If you’re a national security decisionmaker [in China] and you’re told, “Hey, we really need this data center to create the arsenal of offensive tools we need. It’s not going to be as cost-effective as using H100s; we’ll have to pay four times more because of a bigger energy bill, and it’ll be slower,” you’re probably going to pay the bill. So the question then becomes, at what point is a decisionmaker no longer willing to pay the bill? Is it 10X the cost? Is it 20X? We don’t know the answer to that question.
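To make the arithmetic behind that question concrete, here is a minimal sketch in Python using entirely hypothetical numbers; the throughput ratios, energy price ratio, and tolerance thresholds are illustrative assumptions, not figures from Matheny or Rand.

```python
# Toy model of the "pay the bill" calculus described above (hypothetical numbers).
# If a restricted chip delivers only a fraction of the useful compute per watt of
# an H100-class part, a fixed workload costs roughly the inverse of that fraction,
# scaled by the relative price of energy.

def cost_multiple(throughput_ratio: float, energy_price_ratio: float = 1.0) -> float:
    """Effective cost multiple of running a fixed workload on less capable chips.

    throughput_ratio: useful compute per watt relative to the advanced chip
        (e.g., 0.25 means one quarter the throughput per watt).
    energy_price_ratio: relative electricity price (1.0 = same price).
    """
    return (1.0 / throughput_ratio) * energy_price_ratio


if __name__ == "__main__":
    # Two hypothetical chips: one quarter and one tenth the throughput per watt.
    for ratio in (0.25, 0.10):
        multiple = cost_multiple(throughput_ratio=ratio)
        for tolerance in (10, 20):
            decision = "pays the bill" if multiple <= tolerance else "balks"
            print(f"{multiple:.0f}x cost vs. {tolerance}x tolerance -> decisionmaker {decision}")
```

Under these made-up numbers, a chip with a quarter of the throughput still looks affordable, while a chip with a tenth of it starts bumping against a 10X tolerance, which is exactly the threshold question Matheny says remains unanswered.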
But certain kinds of operations are no longer possible because of those export controls. That gap between what you can get a Huawei chip to do and what you can get an Nvidia chip to do keeps on growing because [the chip technology is] sort of stuck in China, and the rest of the world will keep on getting more advanced. And that does prevent a certain kind of military efficiency in computing that could be useful for a variety of military operations. And I think New York Times reporter Paul Mozur was the first to break the news that Nvidia chips were powering the Xinjiang data center that’s being used to monitor the Uighur prison camps in real time.
That raises a really hard question: Should those chips be going into a data center that is being used for human rights abuses? Regardless of one’s view of the policy, just doing the math is really important, and that’s mostly what we focus on at Rand.