
Q&A With Jason Matheny, Founding Director of the Center for Security and Emerging Technology

Jason Matheny, founding director of the new Center for Security and Emerging Technology in the School of Foreign Service, explains the unique role Georgetown has to play in bridging the gulf between technology and policy and providing critical insight and analysis to inform the development of Artificial Intelligence (AI) policy.

Q: What is emerging technology? Where will CSET focus first?

“Emerging technologies” is a general term used to refer to rapidly developing technologies such as artificial intelligence (AI), biotechnology and nanotechnology. AI is the use of machines to accomplish cognitive tasks such as perception, reasoning, problem solving and planning, and it's a general-purpose technology with applications ranging from medicine and transportation to manufacturing, law and warfare. CSET decided to focus on AI for our first two years for two reasons. First, AI is an important topic that will have broad effects on security. Second, AI is a topic where the demand for policy analysis has grown much faster than the supply – in the last four years, policymakers have become increasingly interested in how to approach AI policy, but the think tank community has had trouble keeping up. Before CSET branches out to other topics, we want to ensure we’re keeping pace with the needs of policymakers related to AI.

Q: What challenges and opportunities does AI present and what are policymakers struggling with?

There are a range of challenges related to AI, but national security is a critical area of focus. Where does the U.S. sit competitively relative to other countries in its capabilities, in its workforce, in the amount of data we have to train AI, and in the amount of hardware and computing power we have to run it? There are specific application areas relevant to national security, such as cybersecurity, intelligence and systems for analysis and collection, as well as AI embedded in the weapon systems of competing nations, which we have to be aware of and able to develop countermeasures to.

Q: AI is in the news quite a bit lately. How accurately is it presented in the media, especially in terms of capabilities and threats?

There's a lot of hype right now about AI in the popular press. Sometimes the capabilities of current systems are exaggerated — the thing we should worry about most is not how powerful these systems are, but how brittle and fragile they are. Most current AI systems can be broken, spoofed or fooled with undergrad computer science-level effort. These are not sophisticated systems that are going to lead to Skynet – instead, they're pretty primitive systems that are more likely to lead to digital Flubber. So the focus right now should be on understanding the various failure modes of this technology, the ways in which these systems are fragile. How can the United States and allies produce systems that are more robust, that are less likely to break, that don't pose as many security and safety challenges? And how can we compete effectively in a global environment while also cooperating internationally on issues of safety, security and ethics?

Q: What is CSET's approach to this terrain and how is it different from that of other research organizations and think tanks?

We’re the largest center in the U.S. focused on AI and policy. Our staff brings significant technical expertise from industry and includes experts in technology policy, technology law, security policy and foreign languages. We also have experienced analysts accustomed to working with large data sets on publications, patents and the workforce to make sense of global trends.

Q: What are CSET’s initial projects?

We’re focused on developments in AI and computing and how they're likely to affect national and international security. Our work is split across three streams. The first is scientific and industrial competitiveness: which measures of investment flows, publications, data and hardware provide the clearest view of AI capabilities in different countries? The second is talent and knowledge flows. How can strategic trade and workforce policies be designed for global competition in AI? How can companies, universities and governments best protect information from theft and misuse? The third stream involves the interactions of AI with other technologies. What effects will AI have on the future of other strategic technologies, such as cyber, and how can competing nations avoid technological accidents and strategic miscalculation?

Q: Why did you decide Georgetown was the right home for CSET?

Several things drew us to Georgetown – the outstanding students, faculty, fellows and staff, the university's historical commitment to public service, and its extraordinary alumni network, many of whom are directly involved in policy. We were particularly impressed by the strength of the School of Foreign Service, where we're housed, and its Security Studies and Science, Technology and International Affairs programs, which have some of the country's leading security thinkers. It’s the most distinguished school of its type in the world – the faculty, staff and students are really extraordinary, not just in the quality of their intellectual work, but also in their moral commitment to public service. We were also excited about the university's strengths in adjacent areas of policy, ethics and law, and the critical mass that's forming in the Technology and Society Initiative. Finally, we were drawn to the university's location in Washington, DC, which is ideal for engaging directly with the policy community.

Q: How will that engagement play out?

Our offices will be by the Law Center next to the Capitol. This is important, because interacting with lawmakers and policymakers is part of our unique focus and critical to fulfilling our goals – to deliver nonpartisan analysis to the policy community, to help prepare a generation of policymakers, analysts and diplomats to address the security challenges of emerging technologies and to support academic work in security and technology studies that can be used to educate the future workforce.

Q: How does CSET fit into Georgetown's Technology and Society Initiative, which brings together Georgetown’s existing centers and programs that address the societal and governance impacts of new technologies?

It's one of the things that drew us to Georgetown, this historical commitment to thinking about more than just the technology itself, more than just the engineering problems associated with the technology, and considering the social, political, legal and ethical dimensions of the technology. I think Georgetown has a comparative advantage in those areas relative to other schools that sometimes are so focused on the technology itself that they miss the consequences of the technology. I think we complement the other centers involved in the Technology and Society Initiative well, bringing security into ongoing conversations about law, public policy and ethics. We're leveraging the expertise that Georgetown has already attracted and the outstanding faculty, fellows, students and staff of the other centers so that we're able to generate analysis that's more accurate and more relevant.

Q: Tell us about some of the people CSET is bringing to Georgetown.

I'm joined by extraordinary people with backgrounds in AI and computing, law, public policy, international relations, cybersecurity and intelligence analysis. The former job titles of our initial team include the Chan-Zuckerberg Initiative's director of science analytics; the CIA's lead science and technology analyst on China; DeepMind’s principal for research and strategy; the chief judge of the U.S. Court of Appeals for the Armed Forces; a director of cyber policy at the Pentagon; and OpenAI's policy and ethics advisor. We also have a range of former faculty and fellows from MIT, Harvard, Oxford and Yale, as well as Marshall Scholars and Rhodes Scholars. They complement my only skill set, which is random 1980s pop culture trivia.

Q: What opportunities does CSET present for students?

We’re bringing on graduate research fellows with expertise in law, public policy, international relations, a technology field such as AI and computing, or a foreign language. We're also hiring undergraduates with research experience to analyze diverse data, conduct literature reviews and build annotated bibliographies. We have a large number of open positions that we're eager to fill with as many talented students as we can find. CSET presents a great opportunity for students interested in AI and security policy – there are very few academic centers in the world exploring the intersection of AI technologies and policy, and we're the one focused most directly on the security policy side of AI.

Q: What are the practical applications of CSET’s work? How will it make a difference in the world?

As a government official, I had to make high-stakes decisions about emerging technologies, including which technologies the country should develop, which technologies we should be prepared for others to develop and which technologies we need to develop safeguards or countermeasures against. And when I was making those decisions, it was usually with less data and analysis than I would have liked, often because of time pressure, because relevant expertise wasn't accessible to me or because technologists and analysts weren't working in close proximity. So what I want CSET to provide my colleagues in government is the sort of analysis I wish I'd had more of – nonpartisan analysis backed by data that can help inform security decisions about new technologies.