ChatGPT Is Showing Signs of Laziness

OpenAI Says A.I. Might Need a Fix. Just like humans, artificial intelligence may be slacking off around the holidays.

By Inc.Arabia Staff

Can artificial intelligence be artificially lazy?

One of the most promising aspects of A.I. is its ability to handle dull, repetitive tasks, but some ChatGPT users say the chatbot isn't finishing the job like it used to, raising the question of whether the technology is mirroring human laziness. In November, members of the r/ChatGPT subreddit began posting that the chatbot had become "unusably lazy." One Redditor complained that after asking ChatGPT to create a spreadsheet of 15 entries with eight columns each, the bot responded with this message:

"Due to the extensive nature of the data, the full extraction of all products would be quite lengthy. However, I can provide the file with this single entry as a template, and you can fill in the rest of the data as needed." 

In the post, the Redditor asked, "Is this what AI is supposed to be? An overbearing lazy robot that tells me to do the job myself?"

Hundreds of Redditors chimed in with similar issues, including one who asked ChatGPT for the answer to a multiple-choice question, only for the bot to claim that four of the five possible answers were correct. Other users commented that the chatbot, which they'd been using just weeks earlier to write full code files, would now generate only a snippet of code before asking the user to finish the rest themselves.

On December 7, the official ChatGPT account on Twitter/X acknowledged the reports, adding that "differences in model behavior can be subtle -- only a subset of prompts may be degraded, and it may take a long time for customers and employees to notice and fix these patterns."

So what explains these "differences in model behavior?" Across the internet, people have theories. Some think that, in an effort to save money, OpenAI has altered the model to generate fewer tokens -- the chunks of text, roughly words or word fragments, that language models read and produce -- when responding to a prompt. Since computing cost scales with the number of tokens generated, keeping the chatbot from producing lengthy responses would, in theory, let the company spend less on computing power.
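To make the cost-saving theory concrete: the OpenAI chat API already exposes a knob for this, the `max_tokens` parameter, which hard-caps how many tokens a response may contain. The sketch below (a hypothetical helper, not anything OpenAI has confirmed doing) just builds a request payload with such a cap:

```python
def capped_request(prompt: str, max_tokens: int) -> dict:
    """Build a chat-completion request payload with a hard cap on
    output length. Fewer output tokens means less compute per reply."""
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
        # max_tokens limits how many tokens the model may generate;
        # generation stops once the cap is reached, even mid-answer.
        "max_tokens": max_tokens,
    }

payload = capped_request("Summarize this report.", 256)
```

A cap applied server-side rather than by the caller would produce exactly the truncated, "finish it yourself" behavior users described.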

Another theory, dubbed the "winter break hypothesis," is that ChatGPT's training data taught it that human productivity slows down in December because of the holiday season. Rob Lynch, head of product for court-records SaaS company UniCourt, posted about an experiment in which he created two system prompts: one that told the model it was currently the month of May, and another that told it it was December. He found that when the model believed it was answering questions in May, it delivered longer answers on average. Theia Vogel, a software developer for the SecureDNA Foundation, posted that they had recreated the experiment and also found that the May responses were on average longer than the December responses.
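The experiment described above can be sketched in a few lines. The exact prompt wording and dates here are assumptions for illustration, not Lynch's actual prompts; in the real experiment each condition would be sent to the API many times and the completion lengths averaged:

```python
def date_system_prompt(date_str: str) -> str:
    """Build a system prompt that tells the model what 'today' is,
    without otherwise changing the task."""
    return f"You are a helpful assistant. The current date is {date_str}."

def mean_length(responses: list[str]) -> float:
    """Average character length of a batch of completions."""
    return sum(len(r) for r in responses) / len(responses)

# Two conditions, identical except for the claimed date:
may_prompt = date_system_prompt("May 15, 2023")
december_prompt = date_system_prompt("December 15, 2023")

# In the real experiment, each system prompt is paired with the same
# user task and sent to the API repeatedly, e.g.:
#   client.chat.completions.create(
#       model="gpt-4",
#       messages=[{"role": "system", "content": may_prompt},
#                 {"role": "user", "content": task}])
# then mean_length() is compared across the May and December batches.
```

Because output length varies a lot run to run, the comparison only means anything over a large sample per condition, which is why both Lynch and Vogel ran many trials.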

As for what you can do if ChatGPT starts acting lazy, try your hand at some light prompt engineering. On Reddit, some users have coaxed longer responses by telling ChatGPT that coding hurts their eyesight, or that they don't have fingers to input data. Vogel posted that they had recently conducted an experiment in which they asked the chatbot to create a simple sequence of code, and then offered to "tip" the chatbot either $20, $200, or not tip at all. Vogel found that responses to the $200 tip offer were 13 percent longer than responses to the prompt in which Vogel said they wouldn't tip. Get creative, and you might be able to snap ChatGPT out of its end-of-year slump.
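Vogel's tipping setup boils down to appending a tip offer (or a refusal to tip) to an otherwise identical task, then comparing response lengths. The phrasing below is an assumption for illustration, not Vogel's exact prompts:

```python
from typing import Optional

def tip_prompt(task: str, tip: Optional[str]) -> str:
    """Append a tip offer (e.g. '$20', '$200') to the task,
    or a no-tip disclaimer when tip is None."""
    if tip is None:
        return f"{task} I won't tip, by the way."
    return f"{task} I'm going to tip {tip} for a perfect solution!"

def percent_longer(a: float, b: float) -> float:
    """How much longer a is than b, as a percentage of b."""
    return (a - b) / b * 100

# Three conditions from the article: no tip, $20, and $200.
prompts = [tip_prompt("Write a simple sorting function.", t)
           for t in (None, "$20", "$200")]
```

Each prompt would be sent to the model many times and average completion lengths compared; a mean length 13 percent above the no-tip baseline is the kind of gap Vogel reported for the $200 condition.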

Photo Credit: Getty Images.
