Disinformation and deepfakes played a part in the US election. Australia should expect the same | UniSC | University of the Sunshine Coast, Queensland, Australia


As America takes stock after Donald Trump’s re-election to the presidency, it’s worth highlighting the AI-generated fake photos, videos and audio shared during the campaign.

A slew of fake videos and images shared by Trump and his supporters purported to show his opponent, Kamala Harris, saying or doing things that did not happen in real life.

Of particular concern are deepfake videos, which are edited or generated using artificial intelligence (AI) and depict events that didn’t happen. They may appear to depict real people, but the scenarios are entirely fictitious.

Microsoft warned in late October that:

Russian actors continue to create AI-enhanced deepfake videos about Vice President Harris. In one video, Harris is depicted as allegedly making derogatory comments about former President Donald Trump. In another […] Harris is accused of illegal poaching in Zambia. Finally, another video spreads disinformation about Democratic vice president nominee Tim Walz, gaining more than 5 million views on X in the first 24 hours.

AI has enabled the mass creation of deepfake videos, which poses a threat to democratic processes everywhere.

If left unchallenged, political deepfake videos could have profound impacts on Australian elections.

It’s getting harder to spot a deepfake

Images have stronger persuasive power than text. Unfortunately, Australians are not great at spotting fake videos and images.

The prevalence of deepfakes on social media is particularly concerning, given it is getting harder to identify which videos are real and which are not.

Studies suggest people can accurately identify deepfake facial images only 50% of the time (akin to guessing) and deepfake faces in videos just 24.5% of the time.

AI-based methods for detection are marginally better than humans. However, these methods become less effective when videos are compressed (which is necessary for social media).

As Australia faces its own election, this technology could profoundly impact perceptions of leaders, policies, and electoral processes.

Without action, Australia could become vulnerable to the same AI-driven political disinformation seen in the US.

Deepfakes and disinformation in Australia

When she was home affairs minister, Clare O'Neil warned that technology is undermining the foundations of Australia's democratic system.

Senator David Pocock demonstrated the risks by creating deepfake videos of both Prime Minister Anthony Albanese and Opposition Leader Peter Dutton.

The technology’s reach extends beyond federal politics. For example, scammers successfully impersonated Sunshine Coast Mayor Rosanna Natoli in a fake video call.

We’ve already seen deepfakes in Australian political videos, albeit in a humorous context. Think, for example, of the deepfake purporting to show Queensland Premier Steven Miles, which was released by his political opponents.

While such videos may seem harmless and are clearly fabricated, experts have raised concerns about the potential misuse of deepfake technology in future.

As deepfake technology advances, there is growing concern about its ability to distort the truth and manipulate public opinion. Research shows political deepfakes create uncertainty and reduce trust in the news.

The risk is amplified by microtargeting – where political actors tailor disinformation to people’s vulnerabilities and political views. This can end up amplifying extreme viewpoints and distorting people’s political attitudes.

Not everyone can spot a fake

Deepfake content encourages us to make quick judgments, based on superficial cues.

Studies suggest some people are less susceptible to deepfakes than others, but older Australians are especially at risk. Research shows a 0.6% decrease in deepfake detection accuracy with each year of age.

Younger Australians who spend more time on social media may be better equipped to spot fake imagery or videos.

But social media algorithms, which reinforce users’ existing beliefs, can create “echo chambers”.

Research shows people are more likely to share (and less likely to check) political deepfake misinformation when it shows their political enemies in a poor light.

With AI tools struggling to keep pace with video-based disinformation, public awareness may be the most reliable defence.

Deepfakes are more than just a technical issue — they represent a fundamental threat to the principles of free and fair elections.


Renee Barnes, Associate Professor of Journalism, University of the Sunshine Coast; Aimee Riedel, Senior Lecturer in Marketing, Griffith University; Lucas Whittaker, Lecturer in Marketing, Swinburne University of Technology; and Rory Mulcahy, Associate Professor of Marketing, University of the Sunshine Coast

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Media enquiries: Please contact the Media Team media@usc.edu.au