Global Challenges, Global Solutions: A Conversation With Ted Wittenstein

Edward (Ted) Wittenstein YC ’04, YLS ’12 is a former intelligence professional and diplomat. After graduating from Yale College, he served as an intelligence policy analyst for the Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction. He then worked at the Office of the Director of National Intelligence and the U.S. Department of State, before returning to study at Yale Law School. 

Professor Wittenstein is currently a Lecturer in Global Affairs and the Executive Director of the Johnson Center for the Study of American Diplomacy at Yale. He oversees the International Security Studies program at the Jackson School of Global Affairs, including programs related to diplomatic history, grand strategy, global security, and the Schmidt Program on Artificial Intelligence, Emerging Technologies, and National Power.

Can you tell us a bit about your experience breaking into and working in intelligence? How has this translated over into your work at Yale? 

With the Schmidt Program on Artificial Intelligence, Emerging Technologies, and National Power, we are very excited about the opportunity to consider how trends in technology are shaping global affairs. This is something I worked on as an intelligence professional at a time when the intelligence community was grappling with cybersecurity and emerging technologies, developments that have really transformed how global affairs are analyzed.

My formative experiences in terms of intelligence issues came in the aftermath of 9/11 and the invasion of Iraq. I was a Yale undergrad back then, and it was an awakening moment on campus for so many young people. I was not a global affairs-oriented student at that time—I was more STEM-oriented. So after 9/11, when everyone was looking for ways to get involved in public affairs, it became clear that my technical background was relevant to analyzing questions about weapons: nuclear, chemical, biological, and cyber.

I’ve always tried to bring this background into the classroom at Yale—how do you develop technical fluency among policy-oriented students? And the reverse—how do you expose our technical students to why their areas of research and focus are so relevant to the global affairs landscape? 

Can you define Artificial Intelligence?

You cannot define AI. There are multiple definitions, and if you ask a computer scientist, a cognitive scientist, an ethicist, or a lawyer, they will give very different responses.

In my view, we’re talking about highly sophisticated information processing, towards the frontier of autonomy, often involving algorithms, deep learning, and processing of big datasets—but there is no single way to define it. 

This is something we talk about in our Schmidt Program class—what is artificial? Currently, if you’re following advances in synthetic biology, it looks like we’re going to be able to program human cells as computers. Is that artificial? Is that biological? And when we say intelligence for a machine, why are we benchmarking machine performance against human capability? Is that the right framework? How do you understand a machine that’s intelligent in certain functions—one that can beat you at a complex game but can’t do first-grade math? Is that intelligent?

How was Artificial Intelligence part of the equation in intelligence collection during 9/11 and the Iraq War? 

Again, AI is difficult to define, and AI as we currently see it was not at all on people’s minds at the time of 9/11 and the Iraq War. I come from a generation where I remember the first time I saw a computer. I think there was certainly an understanding that technology and global communications needed to be more integral to how you thought about challenges and analyzed national security issues, but it wasn’t really AI. No one was saying, at that time, “we have to automate these functions or perform them at the scale we can now.” I remember a time when people would say that Microsoft Excel was AI because you could automate a spreadsheet to do the sums or averages!

Since then, the technology has advanced significantly, but the human nature of analysis is never going to go away. This is really a question of how you equip humans with tools and partner with machines as a team. Machines can help with your work, but being a human analyst requires a deeper understanding of global affairs, language expertise, functional expertise, cultural expertise—things that a machine is unlikely to replicate or replace.

The current challenge facing the intelligence community is how much global information is already available through open sources. What does that mean for information that might be collected through any kind of clandestine means? For example, information on my phone about the war in Ukraine would never have been accessible to me as an intelligence analyst 15 years ago. I would have had to bury myself in a nondescript compartment to see the imagery analysis and read the field reports. In some ways, a lot of intelligence has been democratized. So, the question is, what is the value of intelligence? What makes something intelligence, as opposed to information? This is a real problem in the digital age. We have too much information, not too little.

How has the democratization of intelligence collection impacted the intelligence space? What role does the private sector play in intelligence analysis?

Well, it means that your network of contacts and expertise needs to be more diverse, including private-sector actors and nonprofit and academic entities. These are individuals and organizations that have a lot of valuable insights.

Let’s just take, for example, AI and emerging technologies—what are the potential risks, areas of malicious use, or vulnerabilities? The expertise on that question likely does not reside in the government but rather in the U.S. private sector. That doesn’t mean the government analyst can’t analyze the question, but they need to have relationships through which they draw on and incorporate that expertise and those insights to inform their thinking about world events.

I don’t think the fundamental concepts are going away. To be an astute analyst, you have to be able to see the world from different vantage points, challenge your own assumptions, and communicate your confidence judgments clearly to policymakers who may not have time in their day or may not want to hear bad or conflicting news.

What are the ways you can envision AI being a threat or disadvantage to us?

AI is like most technology tools in that it is dual use: it has both beneficial and potentially nefarious applications. It is a value-neutral tool, so it is up to the developer to program in ethical restraints and up to the user to be responsible. Ultimately, a determined malicious user will either undermine those constraints or develop a tool that doesn’t have them.

I think the most immediate concern with advances in natural language processing, like ChatGPT and GPT-4, is disinformation. These tools have the ability to generate human-looking text at scale, and also to narrowly target that text to you, based on your social media presence or profile, in a way that might convince you. So, the ability to create very sophisticated disinformation campaigns is a real area of concern.

The automation of weaponry is another area of significant concern. These tools are, again, not in themselves a concern—it depends on the intent of the attacker. For example, if you look at the atrocities and genocide Russia has committed in Ukraine without autonomous weapons, it is reasonable to believe that it would use an autonomous tool indiscriminately, in a way that is not aligned with our values.

In theory, an autonomous weapon might actually enhance your ability to adhere to law-of-war principles. It could limit collateral damage because it might be able to distinguish a military target from a civilian one better than a human could. But other countries might not have that same concern, or might even be purposefully looking to target a school or a hospital. So, this is the future that we’re headed for.

I think that the advances in drones that we’ve seen in the war in Ukraine point toward a rather destabilizing future in autonomous warfare. However, this technology is not truly autonomous in the sense of taking humans fully out of the loop. Humans are still involved; it is still a human making the decision. The challenge is what the U.S. Defense Department calls the question of “meaningful human control.” So yes, you might have your finger on the trigger button, but what about all of the information that has gone into the decision up until that final point? What if all of that information is based on AI imagery, facial recognition, voice recognition, and geolocation? And, as the human, do you have a meaningful, real-time ability to explain, understand, or even question the analysis that underpins your decision? At that point, it may not matter whether you’re on the button or not. I think this is part of where we’re headed in the future of conflict.
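
To make the question of “meaningful human control” concrete, here is a minimal, purely hypothetical sketch in Python of a human-approval gate: the operator sees the machine’s recommendation and the AI-derived evidence behind it before deciding. The class and function names are illustrative placeholders, not any real defense system or API.

    # Hypothetical human-in-the-loop approval gate (illustrative only).
    from dataclasses import dataclass, field

    @dataclass
    class TargetRecommendation:
        target_id: str
        confidence: float                             # model confidence, 0.0-1.0
        evidence: list = field(default_factory=list)  # AI-derived inputs behind the recommendation

    def request_operator_approval(rec: TargetRecommendation) -> bool:
        """Show the operator what the recommendation rests on before acting."""
        print(f"Recommendation for {rec.target_id} (confidence {rec.confidence:.0%})")
        for item in rec.evidence:
            print(f"  derived from: {item}")
        return input("Approve? [y/N] ").strip().lower() == "y"

    rec = TargetRecommendation("site-42", 0.87,
                               ["AI imagery classification", "voice recognition", "geolocation"])
    if request_operator_approval(rec):
        print("Operator approved -- action proceeds.")
    else:
        print("Operator declined -- no action taken.")

Even in this toy setup, the operator’s yes-or-no decision rests entirely on machine-produced evidence, which is the dilemma described above.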

How should the United States minimize the risk of misuse of AI and emerging technologies? 

Understanding foreign advancements in AI and potential malicious uses has to be a top priority for the U.S., which means understanding AI developments taking place in the People’s Republic of China. It’s very important to make sure the U.S. is not vulnerable to strategic surprise on this question. Now, potential misuses can come from all parts of the world. I’m not saying you focus on only one, but I do think China has publicly expressed a desire to use these tools for its own efforts at disinformation and narrative control, both within its own country and globally.

Another challenge is that you have to consistently analyze your own systems to identify their vulnerabilities and how malicious actors might exploit them. Yesterday on campus we hosted Ram Kumar from Microsoft, who is running an AI Red Team to conduct adversarial machine learning research. The goal is to predict how other entities might try to undermine, exploit, or repurpose Microsoft’s tools. The challenge is that this type of research—although defensive in nature—produces insights that can be useful to a malicious actor. When Microsoft and the U.S. engage in that work, we know that it’s defensive, but another country might conduct the same research to look for offensive ways to manipulate or undermine our systems. The difficulty is deciphering offensive from defensive intent. This ambiguity creates a volatile international system that you have to be very aware of.
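
For readers unfamiliar with the term, adversarial machine learning covers techniques like the one sketched below: a minimal fast-gradient-sign-method probe of an image classifier, written in Python with PyTorch. It is a toy illustration with placeholder inputs, not Microsoft’s actual red-team tooling.

    # Minimal adversarial-ML sketch: nudge an input in the direction that
    # most increases the model's loss (fast gradient sign method).
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, label, epsilon=0.03):
        """Return a slightly perturbed input designed to degrade the model's prediction."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        return (x + epsilon * x.grad.sign()).detach()

    # Toy usage with an untrained stand-in classifier and a random "image".
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    image = torch.rand(1, 1, 28, 28)   # placeholder image
    label = torch.tensor([3])          # placeholder true class
    adversarial = fgsm_perturb(model, image, label)
    print("prediction before:", model(image).argmax().item())
    print("prediction after: ", model(adversarial).argmax().item())

A red team runs probes like this against its own models to find weaknesses before an adversary does; the same code in other hands is an attack, which is the intent problem described above.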

How do you program an ethical system into AI? Based on your experience, how does one construct this system of ethics?

There are a number of questions embedded in that question. One is, what is your ethical code or set of constraints? Another is, how would you build it into a machine? Then there is the broader question of AI safety and trustworthiness: even if you build constraints into an AI system, how do you ensure they can’t be manipulated or undermined? It’s very difficult.

If you look at generative AI and large language models like ChatGPT, even with the constraints built in—you can’t ask it to do something illegal or to use hateful or racist language—you can still get it to do those things. It’s not difficult to create so-called “jailbreaks,” or methods of subverting these ethical constraints, and malicious actors are thinking about and trying to do that all the time. Ethics are extremely important, but it’s actually the safety architecture that embeds the ethics and protects them from subversion.
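
As an illustration of why a thin layer of constraints is not a safety architecture, here is a deliberately naive sketch in Python of a keyword-based policy filter wrapped around a stand-in model call. The generate_text function and the blocklist are hypothetical; shallow checks like this are exactly what jailbreak prompts route around, for example by simply rephrasing a blocked request.

    # Naive policy filter around a stand-in text model (illustrative only).
    BLOCKED_PHRASES = {"build a weapon", "write a racist joke"}  # placeholder policy

    def generate_text(prompt: str) -> str:
        # Stand-in for a real language-model call.
        return f"(model output for: {prompt})"

    def guarded_generate(prompt: str) -> str:
        """Refuse prompts that contain a blocked phrase; otherwise pass through."""
        lowered = prompt.lower()
        if any(phrase in lowered for phrase in BLOCKED_PHRASES):
            return "Request refused by policy."
        return generate_text(prompt)

    print(guarded_generate("Explain how large language models are trained."))
    print(guarded_generate("Please build a weapon design for me."))

A rephrased version of the second request would sail straight through this filter, which is why the deeper safety architecture matters more than any surface-level rule.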


I am delighted that Professor Luciano Floridi has joined Yale as the founding director of the new Digital Ethics Center. The Schmidt Program has partnered with Professor Floridi to develop a weekly Digital Ethics Workshop that examines these complex questions.