Artificial intelligence was once common only on television and in movies. But with the evolution of software programs like ChatGPT, DALL-E and Microsoft’s Bing bot, AI-generated materials aren’t just novel, they’re everywhere.
Some fear it, but many have embraced it. From doctors' offices to classrooms to small businesses, ChatGPT and similar programs are making a splash.
In February, Vancouver-based ZoomInfo announced that the company plans to integrate GPT technology into its go-to-market platform.
“The software world is abuzz over what the future of products like ChatGPT can bring, and we’re thrilled to invent the future of go-to-market with generative (artificial intelligence),” ZoomInfo founder and chief executive Henry Schuck said in a statement to the press.
The software’s integration is expected to help customers with things like prioritizing scheduling, writing or shortening emails and isolating a call to action for sales representatives after a sales call.
ChatGPT’s ability to almost instantly generate entire articles, emails and stories has led many in the worlds of business and education to welcome the technology.
ResumeBuilder.com found in a recent survey of 1,000 American businesses that 49 percent of those surveyed currently use ChatGPT and an additional 30 percent plan to.
Nearly all of the companies that reported using the software said they planned to expand their use of it. They use the AI to write code, create content, provide customer support, summarize meetings or documents and generate task lists.
But most recognize the application’s limits.
A fast evolution
Alexandra Watson is the director of media at Vancouver’s GTMA marketing agency. The agency has been looking into marketing applications with various artificial intelligence software for more than a year.
“The technology of OpenAI’s ChatGPT is considerably more sophisticated than similar tools were just months ago,” said Watson.
Watson said GTMA sees use for the integration of artificial intelligence in search advertising and bulk campaign automations. It could also help brainstorm ideas for content writing in blogging, social media posts and social media advertising headlines and descriptions.
Still, the company isn’t sure whether it will integrate artificial intelligence further into its workflow.
“Although we see some benefits of utilizing (artificial intelligence) technology in our ideation process, there are limitations and concerns with the technology that require us to use it strategically and sparingly,” said Watson. The company is open to integrating it more if it proves promising, however.
Programs like ChatGPT have the ability to quickly process large amounts of data and provide summaries of topic information, she said. Plus, they’re getting better at matching particular tones and writing styles.
But GTMA shares in the concerns and hesitations around the technological advancement.
“It’s important for users to understand, first and foremost, that (artificial intelligence) content technologies like ChatGPT and Bard are taught by a finite data set of information,” said Watson.
These programs, she said, are not updated in real-time, don’t source material from crawling websites and don’t cite references for the information they use.
“While the purpose of these technologies is to synthesize data and provide useful information based on search queries, they are limited by their coding and the restraints of whatever ethics and decision frameworks they were built upon,” Watson added. So, the programs actually reflect the points of view of their programmers.
“They cannot ideate, strategize, or be creative outside of those confines,” she said.
GTMA is also concerned that these technologies could be used to replace uniquely human, creative work. That work can never be replaced by artificial intelligence programs, said Watson.
Another of Watson's concerns is the language bots' failure to cite their sources.
“It is very difficult to fact-check them and give proper credit for prior work where it is due,” said Watson. “That means there is a risk of plagiarism and of spreading false or misleading information if the (artificial intelligence) work is not double-checked before publication.”
Embracing the tool
Programs like ChatGPT aren’t just raising eyebrows in business, but in education, too.
William Luers is a professor of creative media and digital culture at Washington State University Vancouver. He uses ChatGPT and some artificial intelligence image-generation programs in his classes.
“I make it clear that this is not replacing creativity,” said Luers. “It’s an assistant for creative work.”
Clark College web coding professor Bruce Elgort says he, too, views ChatGPT as a classroom resource rather than a cause for concern.
“We’re embracing it, not to cheat, but it’s really just like another tool at Home Depot for us,” Elgort said.
Luers acknowledges concerns in the media and in education around plagiarism and false information. In education, he believes students who are going to plagiarize are going to plagiarize. Assignments that could be easily done with ChatGPT aren’t very good assignments, he added. In the long term, the education system and teachers will need to create assignments that can’t easily be written by a chatbot.
“One of the things I think is going to become quite obvious is that there’s very low creativity in these tools,” said Luers. “In other words, they just repeat stuff.
“Unless you’re pushing it as a human in certain directions, you’re not going to get at all interesting results,” he added.
These technologies are all currently generalists, said Watson.
“The more niche the request, the murkier the data and logic they can bring to an exchange, and the higher the chance of error,” Watson said.
But for students in a creative writing class, language bots could be used for rapid research to help develop worlds or characters’ backstories.
“Obviously, that’s not real storytelling. That’s more like suggesting names and possible characters and then the student can work through those things,” said Luers.
Luers thinks the language bots could be helpful for journalists as well in guided research and building worlds. He also sees use for it in rapidly building websites and working through coding problems.
A coding aid
In coding, Elgort’s students use ChatGPT to help suggest portions of code within a larger application or do fact-checking. It’s comparable to using a spell-check or grammar-check as a writer, he said.
ChatGPT isn’t a new kind of resource for Elgort’s students. In fact, he used a similar resource, Microsoft Copilot, for the same purpose before ChatGPT entered the public lexicon.
“In coding, most of the applications you create are like huge LEGO sets made out of these huge LEGO bricks that are common and readily available. It’s the same thing that goes for the suggestions that ChatGPT and Copilot make,” Elgort explained. “It’s our job to make that finished LEGO product that’s comprised of all these little things. And we still wouldn’t be able to do all that without all the learning we take students through to get there.”
Oftentimes, Elgort said, his students find that artificial intelligence resources like ChatGPT and Copilot are more of a distraction than an assistant. For example, both applications are known to suggest overcomplicated options and to push suggestions too aggressively. In such cases, students often turn the tools off, which serves as another lesson altogether.
Under further evaluation
At the K-12 level in districts in the Vancouver area, ChatGPT has yet to make a legitimate splash. In both Evergreen Public Schools and Vancouver Public Schools, educators are working on scheduling trainings for teachers on what to look for and how to potentially integrate it into the classroom.
Evergreen banned the software on all district-issued Chromebooks in January, citing a need to learn more about the program’s role and application first, since it’s still so new. Vancouver is offering online staff trainings throughout March, with the focus on using ChatGPT and other artificial intelligence resources to facilitate teaching and learning.
Elgort said that, at least in his department, there’s little concern about the tool being used maliciously. He said he looks forward to finding more ways to learn from it.
“I haven’t seen instances of students leaning on it too hard. They know when to turn it off, and they do so all the time,” Elgort said. “Knowing how to regulate the tool from a student perspective, I think that’s awesome.”
“I don’t shy away from these tools at all,” added Luers.
Of course, Luers says, there will be abuses. He points to spam as one example. But the technology also has promise.
“If we treat it as human work, we’re going down the wrong path,” concluded Luers. “If we focus on it as a tool to create very human kinds of media, I think that will open up some possibilities.”