
ChatGPT Is a Stunning AI, but Human Jobs Are Safe (for Now)

Large language models can surprise and delight, but they're not perfect.

Jackson Ryan Former Science Editor
Jackson Ryan was CNET's science editor, and a multiple award-winning one at that. Earlier, he'd been a scientist, but he realized he wasn't very happy sitting at a lab bench all day. Science writing, he realized, was the best job in the world -- it let him tell stories about space, the planet, climate change and the people working at the frontiers of human knowledge. He also owns a lot of ugly Christmas sweaters.

Should you worry about ChatGPT coming for your job? 

Getty Images

If you've spent any time browsing social media feeds over the last week (and who hasn't?), you've probably heard about ChatGPT. The mesmerizing, mind-blowing chatbot, developed by OpenAI and released last week, is a nifty little AI that can spit out highly convincing, human-sounding text in response to user prompts.

You might, for example, ask it to write a plot summary for Knives Out, except Benoit Blanc is actually Foghorn Leghorn (just me?), and it'll spit out something relatively coherent. It can also help fix broken code and write essays so convincing some academics say they'd score an A on college exams.

Its responses have astounded people to such a degree that some have even proclaimed, "Google is dead." Then there are those who think this goes beyond Google: Human jobs are in trouble, too.

The Guardian, for instance, proclaimed "professors, programmers and journalists could all be out of a job in just a few years." Another take, from the Australian Computer Society's flagship publication Information Age, suggested the same. The Telegraph announced the bot could "do your job better than you."

I'd say hold your digital horses. ChatGPT isn't going to put you out of a job just yet.

A great example of why comes from that Information Age story. The publication used ChatGPT to write an entire article about ChatGPT and posted the finished product with a short introduction. The piece is about as simple as you can ask for -- ChatGPT provides a basic recounting of the facts of its existence -- but in "writing" the piece, it also generated fake quotes and attributed them to an OpenAI researcher, John Smith (who is real, apparently).

This underscores the key failing of a large language model like ChatGPT: It doesn't know how to separate fact from fiction. It can't be trained to do so. It's a word organizer, an AI programmed in such a way that it can write coherent sentences.

That's an important distinction, and it essentially prevents ChatGPT (or the underlying large language model it's built on, OpenAI's GPT-3.5) from writing news or speaking on current affairs. (It also isn't trained on up-to-the-minute data, but that's another thing.) It definitely can't do the job of a journalist. To say so diminishes the act of journalism itself.

ChatGPT won't be heading out into the world to talk to Ukrainians about the Russian invasion. It won't be able to read the emotion on Kylian Mbappe's face when he wins the World Cup. It certainly isn't jumping on a ship to Antarctica to write about its experiences. It can't be surprised by a quote, completely out of character, that unwittingly reveals a secret about a CEO's business. Hell, it would have no hope of covering Musk's takeover of Twitter -- it's no arbiter of truth, and it just can't read the room.

It's interesting to see how positive the response to ChatGPT has been. It's absolutely worthy of praise, and the documented improvements OpenAI has made over its last product, GPT-3, are interesting in their own right. But the major reason it's captured so much attention is that it's so readily accessible.

GPT-3 didn't have a slick and easy-to-use online framework and, though publications like the Guardian used it to generate articles, it made only a brief splash online. Developing a chatbot you can interact with, and share screenshots from, completely changes the way the product is used and talked about. That's also contributed to the bot being a little overhyped.

Strangely enough, this is the second AI to cause a stir in recent weeks. 

On Nov. 15, Meta AI released its own artificial intelligence, dubbed Galactica. Like ChatGPT, it's a large language model and was hyped as a way to "organize science." Essentially, it could generate answers to questions like, "What is quantum gravity?" or explain math equations. Much like ChatGPT, you drop in a question, and it provides an answer.

Galactica was trained on more than 48 million scientific papers and abstracts, and it provided convincing-sounding answers. The development team hyped the bot as a way to organize knowledge, noting it could generate Wikipedia articles and scientific papers. 

Problem was, it was mostly pumping out garbage -- nonsensical text that sounded official and even included references to scientific literature, though those were made up. The sheer volume of misinformation it was producing in response to simple prompts, and how insidious that misinformation was, bugged academics and AI researchers, who let their thoughts fly on Twitter. The backlash saw the project shut down by the Meta AI team after two days.

ChatGPT doesn't seem like it's headed in the same direction. It feels like a "smarter" version of Galactica, with a much stronger filter. Where Galactica was offering up ways to build a bomb, for instance, ChatGPT weeds out requests that are discriminatory, offensive or inappropriate. ChatGPT has also been trained to be conversational and admit to its mistakes.

And yet, ChatGPT is still limited the same way all large language models are. Its purpose is to construct sentences or songs or paragraphs or essays by studying billions (trillions?) of words that exist across the web. It then puts those words together, predicting the best way to configure them. 
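To make that idea a little more concrete, here's a deliberately tiny sketch in Python. It's a toy "bigram" model, nothing like the huge neural networks behind ChatGPT in scale or sophistication, and the training snippet and function names are invented purely for illustration, but it shows the basic statistical trick of predicting a likely next word without any notion of whether the result is true.

```python
from collections import defaultdict, Counter
import random

# Invented toy training text -- real large language models learn from
# billions of words scraped from across the web.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
)

# Count which word tends to follow which.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start_word, length=8):
    """Generate text by repeatedly picking a statistically likely next word."""
    output = [start_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        next_words, counts = zip(*candidates.items())
        output.append(random.choices(next_words, weights=counts)[0])
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug the dog ate"
```

The output reads like plausible English because it mimics patterns in the training text, not because the program understands, or checks, anything it says. That, in miniature, is the limitation.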

In doing so, it writes some pretty convincing essay answers, sure. It also writes garbage, just like Galactica. How can you learn from an AI that might not be providing a truthful answer? What kind of jobs might it replace? Will the audience know who or what wrote a piece? And how can you know the AI isn't being truthful, especially if it sounds convincing? The OpenAI team acknowledges the bot's shortcomings, but these are unresolved questions that limit the capabilities of an AI like this today.

So, even though the tiny chatbot is entertaining, as evidenced by this wonderful exchange about a guy who brags about pumpkins, it's hard to see how this AI would put professors, programmers or journalists out of a job. Instead, in the short term, ChatGPT and its underlying model will likely complement what journalists, professors and programmers do. It's a tool, not a replacement. Just like journalists use AI to transcribe long interviews, they might use a ChatGPT-style AI to, let's say, generate a headline idea.

Because that's exactly what we did with this piece. The headline you see on this article was, in part, suggested by ChatGPT. But its suggestions weren't perfect. It suggested using terms like "Human Employment" and "Human Workers." Those felt too official, too... robotic. Emotionless. So, we tweaked its suggestions until we got what you see above.

Does that mean a future iteration of ChatGPT or its underlying AI model (which may be released as early as next year) won't come along and make us irrelevant? 

Maybe! For now, I'm feeling like my job as a journalist is pretty secure.