How AI could take over elections and undermine democracy

Could organizations use AI language models like ChatGPT to get voters to behave in specific ways?

Senator Josh Hawley asked OpenAI CEO Sam Altman this question at a May 16, 2023, US Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate and persuade voters, and to engage in one-on-one interactions with them.

Altman didn’t elaborate, but he may have had something like this scenario in mind. Imagine that soon political technologists develop a machine called Clogger, a political campaign in a black box. Clogger relentlessly pursues just one goal: to maximize the chances that its candidate, the campaign that purchases Clogger Inc.’s services, prevails in an election.

While platforms like Facebook, Twitter and YouTube use forms of artificial intelligence to get users to spend more time on their sites, Clogger’s artificial intelligence would have a different goal: to change people’s voting behavior.

How would Clogger work?

As a political scientist and a legal scholar who study the intersection of technology and democracy, we believe that something like Clogger could use automation to dramatically increase the scale, and potentially the effectiveness, of the behavior-manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers now use your browsing and social media history to target commercial and political ads at you individually, Clogger would pay attention to you, and hundreds of millions of other voters, individually.

It would offer three advances over the current state of the art in algorithmic behavior manipulation. First, its language model would generate messages (texts, social media posts and emails, perhaps including images and videos) tailored to you personally. While advertisers strategically place a relatively small number of ads, language models like ChatGPT can generate countless unique messages for you personally, and millions more for other voters, over the course of a campaign.
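
To make that first advance concrete, here is a minimal sketch of per-voter message tailoring. Everything in it is hypothetical: the VoterProfile fields, the stubbed generate() function standing in for a call to some language model API, and the sample voters are illustrative assumptions, not a description of any real campaign system.

```python
from dataclasses import dataclass

@dataclass
class VoterProfile:
    """Hypothetical per-voter record; a real system would hold far more data."""
    name: str
    interests: list[str]
    channel: str  # e.g. "email", "sms", "social"

def generate(prompt: str) -> str:
    """Stub standing in for a language-model call; no real API is used here."""
    return f"[model output for prompt: {prompt[:60]}...]"

def tailor_message(voter: VoterProfile, goal: str) -> str:
    # Each voter gets a unique prompt, so each voter gets a unique message:
    # this is what would let a campaign scale to millions of individualized texts.
    prompt = (
        f"Write a short {voter.channel} message for {voter.name}, "
        f"who cares about {', '.join(voter.interests)}, "
        f"that advances the goal: {goal}."
    )
    return generate(prompt)

voters = [
    VoterProfile("A. Rivera", ["local schools"], "email"),
    VoterProfile("B. Chen", ["baseball", "jobs"], "sms"),
]
for v in voters:
    print(tailor_message(v, "increase turnout for the candidate"))
```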

Second, Clogger would use a technique called reinforcement learning to generate a succession of messages that become increasingly likely to change your vote. Reinforcement learning is a trial-and-error machine learning approach in which the computer takes actions and gets feedback about which ones work better, in order to learn how to accomplish a goal. Machines that can play Go, chess and many video games better than any human have used reinforcement learning.
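
As a rough illustration of that trial-and-error loop, the sketch below implements an epsilon-greedy multi-armed bandit, one of the simplest reinforcement learning setups. The message variants, the epsilon value and the simulated feedback rates are all invented for the example; a real system would observe feedback from actual recipients rather than a simulation.

```python
import random

MESSAGES = ["variant_a", "variant_b", "variant_c"]  # hypothetical message variants
EPSILON = 0.1  # how often to explore a random variant instead of the best one

counts = {m: 0 for m in MESSAGES}    # times each variant was sent
values = {m: 0.0 for m in MESSAGES}  # running average of observed feedback

def simulated_feedback(message: str) -> float:
    """Stand-in for real-world feedback (clicks, replies, shifts in polls)."""
    true_rates = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.03}
    return 1.0 if random.random() < true_rates[message] else 0.0

for _ in range(10_000):
    # Explore occasionally; otherwise exploit the best-performing variant so far.
    if random.random() < EPSILON:
        choice = random.choice(MESSAGES)
    else:
        choice = max(MESSAGES, key=lambda m: values[m])

    reward = simulated_feedback(choice)

    # Update the running average of feedback for the chosen variant.
    counts[choice] += 1
    values[choice] += (reward - values[choice]) / counts[choice]

print(values)  # the estimate for variant_b should converge toward 0.05
```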

Third, over the course of a campaign, Clogger’s messages could evolve to take into account your responses to the machine’s previous messages and what it has learned about changing other people’s minds. Clogger would be able to carry on dynamic “conversations” with you, and millions of other people, over time. Clogger’s messages would be similar to ads that follow you across different websites and social media.

The nature of AI

Three other features, or bugs, are worth noting.

First, the messages that Clogger sends may or may not be political in content. The machine’s sole goal is to maximize vote share, and it would likely devise strategies to achieve that goal that no human campaigner would have thought of.

One possibility is sending likely opposition voters information about nonpolitical passions they have, in sports or entertainment, to bury the political messaging they receive. Another is sending off-putting messages, such as incontinence advertisements, timed to coincide with opponents’ messaging. And another is manipulating voters’ social media friend groups to give the impression that their social circles support its candidate.

Second, Clogger has no regard for the truth. Indeed, it has no way of knowing what is true or false. Language model “hallucinations” are not a problem for this machine, because its goal is to change your vote, not to provide accurate information.

Third, because it is a black-box type of artificial intelligence, people would have no way of knowing what strategies it uses.

The field of explainable AI aims to open up the black box of many machine learning models so that people can understand how they work.

Clogocracy

If the Republican presidential campaign were to field Clogger in 2024, the Democratic campaign would likely be compelled to respond in kind, perhaps with a similar machine. Call it Dogger. If campaign managers believed these machines to be effective, the presidential contest might well come down to Clogger versus Dogger, and the winner would be the client of the more effective machine.

Political scientists and pundits would have a lot to say about why one or the other AI prevailed, but likely no one would really know. The president will have been elected not because his or her policy proposals or political ideas persuaded more Americans, but because he or she had the more effective AI. The content that won the day would have come from an AI focused solely on victory, with no political ideas of its own, rather than from candidates or parties.

In this very important sense, a machine, rather than a person, would have won the election. The election would no longer be democratic, even though all of the ordinary activities of democracy would have occurred: the speeches, the announcements, the messages, the voting and the counting of votes.

The AI-elected president could then go one of two ways. He or she could use the mantle of election to pursue Republican or Democratic party policies. But because the party’s ideas may have had little to do with why people voted the way they did (Clogger and Dogger don’t care about policy views), the president’s actions wouldn’t necessarily reflect the will of the voters. Voters would have been manipulated by the AI rather than freely choosing their political leaders and policies.

Another path is for the president to pursue the messages, behaviors and policies that the machine predicts will maximize the chances of reelection. Down this road, the president would have no particular platform or agenda beyond retaining power. The president’s actions, guided by Clogger, would be those most likely to manipulate voters rather than serve their genuine interests, or even the president’s own ideology.

Avoiding Clogocracy

It would be possible to avoid AI electoral manipulation if candidates, campaigns and consultants all forswore the use of such political AI. We believe that is unlikely. If politically effective black boxes were developed, the temptation to use them would be almost irresistible. Indeed, political consultants might well see using these tools as required by their professional responsibility to help their candidates win. And once one candidate uses such an effective tool, opponents could hardly be expected to resist by disarming unilaterally.

More robust privacy protection would help. Clogger would depend on access to vast amounts of personal data in order to target people, tailor messages to persuade or manipulate them, and track and retarget them over the course of a campaign. Every bit of information that companies or policymakers deny the machine would make it less effective.

Strong data privacy laws could help prevent AI from being manipulative.

Another solution lies with election commissions, which could try to ban or strictly regulate these machines. There is a fierce debate about whether such “replicant” speech, even if it is political in nature, can be regulated. The U.S. tradition of extreme free speech leads many leading academics to say it cannot.

But there is no reason to automatically extend First Amendment protection to the product of these machines. The nation might well choose to give machines speech rights, but that should be a decision grounded in the challenges of today, not the misplaced assumption that the views of James Madison in 1789 were meant to apply to AI.

European Union regulators are moving in this direction. Policymakers revised the European Parliament’s draft of its Artificial Intelligence Act to designate AI systems for influencing voters in campaigns as high-risk and subject to regulatory scrutiny.

One constitutionally safer, if smaller, step, already adopted in part by European internet regulators and in California, is to prohibit bots from passing themselves off as people. For example, regulation might require that campaign messages come with disclaimers when the content they contain is generated by machines rather than by humans.

It would be like the advertising disclaimer “Paid for by Sam Jones for Congress Committee,” but modified to reflect its AI origin: “This AI-generated ad was paid for by Sam Jones for Congress Committee.” A stronger version could require: “This AI-generated message is being sent to you by Sam Jones for Congress Committee because Clogger has predicted that doing so will increase your chances of voting for Sam Jones by 0.0002%.” At the very least, we believe voters deserve to know when a bot is speaking to them, and they should know why, as well.
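
As a sketch of how such a disclosure rule could be applied mechanically, the snippet below stamps every outgoing message with its origin before it can be sent. The function name and the disclaimer wording are illustrative assumptions, not language from any actual regulation.

```python
def with_disclosure(message: str, sponsor: str, machine_generated: bool) -> str:
    """Append a sponsorship disclaimer, flagging machine-generated content."""
    if machine_generated:
        return f"{message}\n\nThis AI-generated ad was paid for by {sponsor}."
    return f"{message}\n\nPaid for by {sponsor}."

print(with_disclosure("Vote on Tuesday!", "Sam Jones for Congress Committee", True))
```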

The possibility of a system like Clogger shows that the path to collective human disempowerment may not require some superhuman artificial general intelligence. It might just require overeager campaigners and consultants wielding powerful new tools that can effectively push the many buttons of millions of people.


