The Future of Policymaking: The Benefits and Drawbacks of AI in Governance

In January 2023, Massachusetts State Senator Barry Finegold proposed a bill to regulate generative Artificial Intelligence (AI) models. When writing this legislation, Finegold hoped to mitigate the dangers of using AI-generated information by broadly controlling its usage. But ironically, neither Finegold nor his legislative assistants wrote the bill. ChatGPT—one of the very AI tools Finegold was seeking to regulate—did. 

Finegold is one of the first policymakers to use generative AI to produce legislation. These models, which have raised a slew of ethical and academic questions across disciplines, are trained on vast amounts of data and generate responses to user prompts. Finegold’s legislative experiment offers a glimpse into one of the ways future policymaking could use AI. 

Anat Lior YLS ’21, a lecturer at the Jackson School of Global Affairs, researches how AI can be applied in the criminal justice system and in politics more broadly. She notes that AI may perpetuate pre-existing biases, especially against minority groups. Because generative AI models are built on human-made algorithms and datasets, Lior worries that they could reproduce the gender, racial, and socioeconomic biases inherent in society. She emphasizes the need to regulate and vet these technologies before they are widely deployed to the public. 

AI tools have already demonstrated discriminatory practices in circumstances like hiring, financial lending, and housing. Lenders using AI tools have a pattern of dramatically overcharging people of color seeking loans to purchase homes. Their algorithms use data on eviction and criminal histories, metrics that disproportionately penalize these communities. Employers have also begun to entrust hiring decisions to AI, using models that discriminate against people with disabilities. For these reasons, the American Civil Liberties Union (ACLU) has called on the Biden administration to focus on civil rights and equity when designing technology policy.

The ACLU fears that the biases ingrained in generative AI algorithms will be amplified when the technology is used to design policy. For Lior, eliminating biases from AI software is harder than addressing physical or emotional harm caused by that technology. If a piece of technology harms someone, victims can respond by filing lawsuits, mobilizing social media campaigns on their behalf, or exercising other forms of protest. For example, widespread outcry erupted on social media in 2022 after it was revealed that Roomba robot vacuums had taken pictures of people in their homes without their consent. Biases, on the other hand, are tricky to prove and, more importantly, to correct. 

Politicians could exploit AI’s intrinsic biases to pass harmful legislation or achieve a malicious personal end. If a politician wanted to harm marginalized communities, they could argue that AI-generated data or advice supports their legislation. As AI systems improve, people may come to believe that machines are smarter, more objective, and more neutral than humans, leading the public to trust politicians who use AI to justify their policies. “If you use Waze to go to a place you’ve been a thousand times and it takes you a different route, sometimes you think there’s something you don’t know and just follow it,” says Lior. “It’s intuition – we think it’s better than us.”

Could artificial intelligence replace human politicians? In 2017, the first virtual politician, an AI robot named SAM, launched a campaign for New Zealand Prime Minister ahead of the 2020 general election. SAM was created by 49-year-old entrepreneur Nick Gerritsen, who sought to show that an AI politician would be superior to a human one. SAM’s campaign centered on the ways an AI politician could correct problems inherent in human politicians, such as emotional and irrational thinking, human error, and personal bias. The campaign sparked conversation about how AI could lead to the creation of “programmed politicians” and how this would affect political and cultural division. Due to legal and logistical barriers, SAM did not end up running for office. Nevertheless, its introduction served as an interesting thought experiment about the boundaries of artificial intelligence in politics.

Though there are significant concerns regarding AI in politics, Lior believes it could be an asset to policymaking in many fields. Administrators in law and politics could use AI to create legal memos, find evidence in support of particular policy agendas, and explain complicated issues to constituents. 

In economic policymaking, AI plays a role in all five stages of the policy cycle: agenda-setting, formulation, decision-making, implementation, and evaluation. It can help policymakers determine which issues matter most to their constituents. Legislators can also use AI-generated data to build models that predict how potential policy interventions would affect economic outcomes. Relatedly, AI can comb through large amounts of data to reveal how effective economic policies have been. For example, the United Nations Conference on Trade and Development has used artificial neural networks to forecast how trade policies would impact global trade.
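The kind of predictive modeling described above can be illustrated with a toy example. The sketch below fits a simple linear model to entirely hypothetical data (not drawn from any real policy study) relating a policy variable to an economic outcome, then predicts the outcome at a new policy level; real policy models are far more sophisticated, but the fit-then-predict workflow is the same.

```python
# Toy sketch of policy-impact prediction: fit a model to hypothetical
# historical data, then forecast the outcome of a new intervention level.
import numpy as np

def fit_policy_model(policy_levels, outcomes):
    """Fit a linear model: outcome ~ slope * policy_level + intercept."""
    slope, intercept = np.polyfit(policy_levels, outcomes, deg=1)
    return slope, intercept

def predict(slope, intercept, policy_level):
    """Predict the outcome for a proposed policy level."""
    return slope * policy_level + intercept

# Hypothetical data: intervention intensity vs. observed economic outcome.
policy = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
outcome = np.array([1.0, 4.1, 6.9, 10.0, 13.1])

slope, intercept = fit_policy_model(policy, outcome)
forecast = predict(slope, intercept, 5.0)  # forecast for an untried level
```

In practice, analysts would use richer models (and, as in the UNCTAD example, neural networks) with many covariates, but the principle of learning from past policy data to forecast new interventions is the same.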

It is important to evaluate whether policies are working effectively, and AI can assist with that too. By analyzing evidence on how economic policies have performed, AI can give policymakers accurate assessments of a policy’s shortcomings and point toward concrete steps to address them. 

AI and machine learning algorithms have also helped design climate policy. Researchers in the UK, for instance, used machine learning algorithms to assess the effect of carbon tax prices on emissions rates.

Finegold’s use of AI is a testament to the technology’s value despite its dangers. He believes that now is a critical moment to regulate AI and consider its future, but that other politicians are often too busy dealing with the present to do so. “Some people don’t realize how powerful AI is and how much change it can cause for society,” he said. “It’s tough to argue that we need to get ahead of something when there’s not a crisis.”

Finegold also believes that AI can help correct the imbalance in resources available to different American policymakers. “I’m blessed that I have a lot of staff, but many legislators across the country don’t. Now they can draft bills, and that’s pretty powerful. That’s a good thing.”