AI has enabled the mass creation of deepfake videos, which poses a threat to democratic processes everywhere.
If left unchallenged, political deepfake videos could have profound impacts on Australian elections.
It’s getting harder to spot a deepfake
Images have stronger persuasive power than text. Unfortunately, Australians are not great at spotting fake videos and images.
The prevalence of deepfakes on social media is particularly concerning, given it is getting harder to identify which videos are real and which are not.
Studies suggest people can accurately identify deepfake facial images only 50% of the time (akin to guessing) and deepfake faces in videos just 24.5% of the time.
AI-based methods for detection are marginally better than humans. However, these methods become less effective when videos are compressed (which is necessary for social media).
As Australia faces its own election, this technology could profoundly impact perceptions of leaders, policies, and electoral processes.
Without action, Australia could become vulnerable to the same AI-driven political disinformation seen in the US.
Deepfakes and disinformation in Australia
When she was home affairs minister, Clare O'Neil warned that technology is undermining the foundations of Australia's democratic system.
Senator David Pocock demonstrated the risks by creating deepfake videos of both Prime Minister Anthony Albanese and Opposition Leader Peter Dutton.
The technology’s reach extends beyond federal politics. For example, scammers successfully impersonated Sunshine Coast Mayor Rosanna Natoli in a fake video call.
We've already seen deepfakes in Australian political videos, albeit in a humorous context. Think, for example, of the deepfake purporting to show Queensland Premier Steven Miles, which was released by his political opponents.
While such videos may seem harmless and are clearly fabricated, experts have raised concerns about the potential misuse of deepfake technology in future.
As deepfake technology advances, there is growing concern about its ability to distort the truth and manipulate public opinion. Research shows political deepfakes create uncertainty and reduce trust in the news.
The risk is amplified by microtargeting – where political actors tailor disinformation to people's vulnerabilities and political views. This can end up amplifying extreme viewpoints and distorting people's political attitudes.
Not everyone can spot a fake
Deepfake content encourages us to make quick judgments based on superficial cues.
Studies suggest some people are less susceptible to deepfakes than others, but older Australians are especially at risk. Research shows deepfake detection accuracy declines by 0.6% with each additional year of age.
Younger Australians who spend more time on social media may be better equipped to spot fake imagery or videos.
But social media algorithms, which reinforce users’ existing beliefs, can create “echo chambers”.
Research shows people are more likely to share (and less likely to check) political deepfake misinformation when it shows their political enemies in a poor light.
With AI tools struggling to keep pace with video-based disinformation, public awareness may be the most reliable defence.
Deepfakes are more than just a technical issue — they represent a fundamental threat to the principles of free and fair elections.
Renee Barnes, Associate Professor of Journalism, University of the Sunshine Coast; Aimee Riedel, Senior Lecturer in Marketing, Griffith University; Lucas Whittaker, Lecturer in Marketing, Swinburne University of Technology; and Rory Mulcahy, Associate Professor of Marketing, University of the Sunshine Coast
This article is republished from The Conversation under a Creative Commons license. Read the original article.