
Flam: Political deepfakes hijack brains

By F.D. Flam
Published: February 26, 2024, 6:01am

Realistic AI-generated images and voice recordings may be the newest threat to democracy, but they’re part of a longstanding family of deceptions. The way to fight so-called deepfakes isn’t to develop some rumor-busting form of AI or to train the public to spot fake images. A better tactic would be to encourage a few well-known critical thinking methods — refocusing our attention, reconsidering our sources and questioning ourselves.

Some of those critical thinking tools fall under the category of “system 2” or slow thinking as described in the book “Thinking, Fast and Slow.” AI is good at fooling the fast thinking “system 1” — the mode that often jumps to conclusions.

We can start by refocusing attention on policies and performance rather than gossip and rumors. So what if former President Donald Trump stumbled over a word and then blamed AI manipulation? So what if President Joe Biden forgot a date? Neither incident tells you anything about either man’s policy record or priorities.

Obsessing over which images are real or fake may be a waste of time and energy. Research suggests that we're terrible at spotting fakes.

“We are very good at picking up on the wrong things,” said computational neuroscientist Tijl Grootswagers of the University of Western Sydney. People tend to look for flaws when trying to spot fakes, but it’s the real images that are most likely to have flaws.

People may unconsciously be more trusting of deepfake images because they’re more perfect than real ones, he said. Humans tend to trust faces that are symmetrical, so AI-generated images can often look more attractive and trustworthy than the real thing.

Asking voters to simply do more research when confronted with social media images or claims isn’t enough. Social scientists recently made the alarming finding that people were more likely to believe made-up news stories after doing some “research” using Google.

That wasn’t evidence that research is bad for people, or for democracy for that matter. The problem was that many people do a mindless form of research. They look for confirmatory evidence, which, like everything else on the internet, is abundant — however crazy the claim.

Real research involves questioning whether there’s any reason to believe a particular source. Is it a reputable news site? An expert who has earned public trust? Real research also means examining the possibility that what you want to believe might be wrong.

AI has made it cheaper and easier than ever to use social media to promote a fake news site by manufacturing fake people to comment on articles, said Filippo Menczer of the Observatory on Social Media at Indiana University. For years, he’s been studying the proliferation of fake accounts known as bots, which can have influence through the psychological principle of social proof — making it appear that many people like or agree with a person or idea.

If AI makes it impossible to trust what we see on television or on social media, that’s not altogether a bad thing, since much of it was untrustworthy and manipulative long before recent leaps in AI. Decades ago, the advent of TV notoriously made physical attractiveness an important factor for all candidates. There are more important criteria on which to base a vote.

Contemplating policies, questioning sources and second-guessing ourselves requires a slower, more effortful form of human intelligence. But considering what’s at stake, it’s worth it.

F.D. Flam is a Bloomberg Opinion columnist covering science. She is host of the “Follow the Science” podcast.