So, while many professors could help define artificial intelligence, I interviewed Chown because he knows me well enough to dumb down the discussion. And because he co-authored a book this year titled “Meaningful Technologies: How Digital Metaphors Change the Way We Think and Live.”
“AI is impacting your life right now in ways people don’t recognize,” he said. “Anything with a predictive analytic component is being shifted to AI.”
On an individual basis, that influences which stories are directed to you on Facebook, which movies Netflix recommends for you, and which tweets you are most likely to see. You probably knew that, but the most interesting portion of our discussion involved the risks of artificial intelligence.
After all, a recent one-sentence statement from the Center for AI Safety said, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement was signed by more than 350 executives, researchers and engineers working in artificial intelligence.
“It is a bit of hyperbole,” Chown said. “By the way, we’re doing a fine job of self-extinction without AI.”
The danger, as Chown sees it, is allowing computer thinking and smartphones and social media to have too much control.
“We think, ‘It’s AI, so it’s good,’” he said. “But we don’t look at it critically. We don’t understand how AI makes its decisions. If you ask AI a question, the answer seems so right, why would I go check? One of the fundamental aspects of our program at Bowdoin is accountability — who’s accountable? People have agency, and particularly collectively they have a lot of agency.”
And that speaks directly to journalism — to information and misinformation.
“Anytime you get news, you have to stop,” Chown said. “You have to verify and not react right away. We need to have courses on how to find out what’s real; that is a vital skill in our society.”
Of course, that has always been important. But it is imperative at a time when information — and misinformation — can spread so quickly, and when much of that “information” is generated by machines.
“One thing that is happening right now is that the internet is being flooded by stuff written by AI,” Chown said. “That means the next generations of AI are being trained on stuff that they’ve written themselves. That is going to make further improvement super difficult.”
He also mentioned several demonstrations of artificial intelligence programs spewing out information that sounds plausible but is wildly inaccurate — including papers written by ChatGPT.
All of that might increase demand for reliable news sources — if only we are intelligent enough to recognize the need.