Survey Shows Concerns About AI Manipulation Influencing 2024 Election
Survey findings and implications for the upcoming election
- Overview of the survey results
- Concerns about AI manipulation in elections
- Public perceptions on AI biases
- Confidence in the accuracy of voting processes
- Expert opinions on the impact of AI misinformation
Seventy-eight percent of American adults expect artificial intelligence (AI) systems to be exploited in ways that could influence the outcome of the 2024 presidential election, according to a recent national survey conducted by the Elon University Poll and the Imagining the Digital Future Center at Elon University. The survey examined several forms of AI manipulation and their potential impact on the democratic process.
The findings from the survey indicate that a significant proportion of Americans harbor concerns about the misuse of AI technologies during the upcoming election:
- 73% believe AI will likely be used to manipulate social media to sway the election results through fake accounts or distortions of campaign-related information.
- 70% think AI-generated fake information, videos, and audio could affect the election outcome.
- 62% suspect that targeted AI interventions might dissuade certain voters from casting their ballots.
- Overall, 78% foresee at least one form of AI abuse impacting the presidential election, with over half expressing concerns about all three manipulation scenarios.
This apprehension about AI's potential influence on elections is reinforced by strong public support for penalties against candidates who maliciously alter media content:
- 93% of respondents favor some form of penalty for candidates who intentionally spread falsified photos, videos, or audio files.
- 46% support removal from office as a suitable punishment, while 36% advocate criminal prosecution of offenders.
Despite AI's many beneficial applications, respondents believe by a margin of nearly 8 to 1 that AI-related activities are more likely to undermine than enhance the electoral process.
The survey also sheds light on the populace's doubts about their ability to discern manipulated media content:
- More than half of the participants lack confidence in their own ability to identify altered or faked audio, video, and photographic material.
- That skepticism extends to others as well: approximately 70% doubt fellow voters' ability to detect manipulated media content.
Lee Rainie, Director of Elon University's Imagining the Digital Future Center, highlighted the challenging information landscape within which the upcoming election will unfold. Rainie emphasized the anticipated proliferation of misinformation, AI-enabled manipulations, and voter-influence tactics, underscoring the need for enhanced media literacy among the electorate to sift through the potential deluge of deceptive campaign content.
Additionally, the survey explored public interactions with large language models (LLMs) and chatbots, such as ChatGPT and Gemini, acknowledging prevalent concerns about the fairness and biases embedded in these AI systems, particularly in addressing political and public policy inquiries.
The study revealed notable partisan disparities in perceptions of AI biases:
- Republicans show greater apprehension about AI biases, with a higher tendency to believe that AI systems are biased against their ideologies.
- Republicans are also more likely than Democrats to suspect biases against men and White individuals within AI systems.
- While Democrats and Republicans diverge on AI bias perceptions, both groups share uncertainties about the fairness of AI responses in politically significant contexts.
Furthermore, the survey indicated varying levels of confidence in the accuracy of voting processes, with notable discrepancies between Democrats and Republicans highlighting divergent views on electoral integrity.
Jason Husser, Professor of Political Science and Director of the Elon University Poll, underscored the potential ramifications of AI-generated misinformation for the 2024 election. Husser noted that AI advancements are reshaping how misinformation spreads and urged vigilant scrutiny of campaign-related information to counteract the potential impact of AI-generated distortions on voter behavior and engagement.
The survey, conducted in April 2024 among 1,020 U.S. adults, offers a detailed portrait of public apprehension about AI manipulation and its implications for the democratic process.