Monday, March 4, 2024


From the Newsroom: Can robots learn to do journalism?

By , Columbian Editor

Have you been following all of the stories about artificial intelligence that were produced after the release of a tool called ChatGPT? This AI system has been getting a lot of media attention. Many of the stories I’ve read deal with the concern that lazy or struggling students could use it to write their term papers. The software apparently does a passable job, although using it would be cheating.

But ChatGPT, a product of OpenAI, has many worthwhile applications too, some of which we explored in a Wednesday article by Sarah Wolf and Griffin Reilly. ZoomInfo, the Vancouver-based company that markets software used in business-to-business sales, sees it as useful for setting priorities and improving email pitches to customers. A local marketing agency, GTMA, reports it has been looking into marketing applications that use artificial intelligence.

But will artificial intelligence find a home in journalism? Web Editor Amy Libby has been our point person on this, following the trends and doing the research. Last year, she was part of a working group formed by The Associated Press to study the issue.

Artificial intelligence is already used by some media outlets to generate brief stories about corporate earnings and sports contests. But what else can it do? Amy asked ChatGPT to “write a profile of journalist Amy Libby.” Sure enough, it produced a story about her, including that she had previously worked at The New York Times and was a winner of the Pulitzer Prize in Journalism.

“Craig, you are not paying me enough!” she said when she saw the write-up. Now, don’t get me wrong. Amy is very talented and could use more pay. But the facts are these: She previously worked at the Albany Democrat-Herald in Oregon and has won numerous Society of Professional Journalists awards.

In summary, I don’t think we will be using ChatGPT for any purpose in the near future. But technology changes quickly, and we may revisit this policy. If we do, we will be sure to let our readers know. But for now you can be assured that stories from The Columbian are written by humans.

Art generated using artificial intelligence is another issue. We illustrated the ChatGPT story with an image that was created by an AI program called DALL-E 2, also created by OpenAI. Amy gave it a prompt: “photo of robot typing on a computer in an office,” and with a few editing changes, she soon got what she wanted.

Like the college student and the term paper, using artificial intelligence to generate images conjures up a bunch of ethical questions. Our first question was regarding copyright. Do we own an image a computer draws in response to our prompt? Does OpenAI own it? The answer is, apparently, that no one owns it, though OpenAI claims the copyright. The law hasn’t caught up to the technology in this area.

Our second question: How do we label these? Our robot typist was labeled as “Illustration created using DALL-E 2 AI image generation.” We will continue this clear labeling policy going forward.

We also agreed that AI-generated images will rarely be used, and only when there is a specific purpose, such as illustrating a story on AI. I would also avoid using generated images that look lifelike. For example, we wouldn’t want to use a computer image of “Rattlesnakes infesting Esther Short Park.” (There is an evil part of me that wants to see “King Kong climbs Clark County Courthouse,” but I will restrain myself.)

Artificial intelligence is an interesting tool and may become very useful in journalism. It’s not there yet, but when it arrives, we will let readers know about it.
