Users of the artificial intelligence (AI) chatbot ChatGPT say its behavior started changing last month. They’ve noticed the bot getting more irritable and even “lazier”: it sometimes refuses to carry out requested tasks, or even asks the user to perform them instead.
OpenAI, the company that created ChatGPT, is now well aware of the problem. Last week, it addressed the bot’s behavior in a statement posted to the official ChatGPT account on X (formerly Twitter).
“We’ve heard all your feedback about GPT-4 getting lazier!” OpenAI wrote.
“We haven’t updated the model since Nov 11th, and this certainly isn’t intentional,” the company added. “Model behavior can be unpredictable, and we’re looking into fixing it.”
Some users have devised their own methods of testing the bot. One was to compare the number of characters ChatGPT was willing to generate in May with its output in December.
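For readers curious what such a test might look like in practice, here is a minimal sketch, assuming the OpenAI Python SDK (openai>=1.0) and an API key in the environment; the prompt, model name, and trial count are illustrative assumptions, not the exact setup those users ran.

```python
# Minimal sketch of the character-count test described above.
# Assumptions (not from the article): the OpenAI Python SDK (openai>=1.0),
# an OPENAI_API_KEY in the environment, and an arbitrary test prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def average_response_length(prompt: str, model: str = "gpt-4", n_trials: int = 5) -> float:
    """Average character count of the model's replies to a fixed prompt."""
    lengths = []
    for _ in range(n_trials):
        completion = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        lengths.append(len(completion.choices[0].message.content))
    return sum(lengths) / len(lengths)


# Run the same prompt at different points in time (say, May and December)
# and compare the averages; a large drop would suggest shorter, "lazier" replies.
print(average_response_length("Write a Python function that merges two sorted lists."))
```

Because the model’s outputs are stochastic, averaging over several trials (and ideally over many different prompts) matters more than any single comparison.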
Others have tried to reproduce early results they got from ChatGPT and claim that the bot has indeed become lazier.
“Not saying we don’t have problems with over-refusals (we definitely do) or other weird things (working on fixing a recent laziness issue),” wrote OpenAI technical staffer Will DePue. “But that’s a product of the iterative process of serving and trying to support sooo many use cases at once.”
Is ChatGPT Suffering From SAD?
The bizarre behavior of ChatGPT has users speculating about a variety of possible causes behind the bot’s perceived laziness and mood swings.
Can an artificial intelligence learn so much from humans that it begins to mimic the emotions of Homo sapiens?
Or is this simply another instance of humans anthropomorphizing an algorithm and reading too much into its strange outputs?
Some people speculate that, through the immense amount of human-generated training data it learns from, ChatGPT could be picking up certain behaviors and reflecting them back at us.
For example, people’s energy levels and motivation often wane during the winter months. Many fall into a type of winter depression known as seasonal affective disorder (SAD), and may have low energy, feel sluggish, lose interest in activities they previously enjoyed, and sleep more, among other symptoms.
The belief that ChatGPT may be suffering from some kind of seasonal depression is one theory circulating on X, where it has been dubbed the “winter break hypothesis.” The theory has since spread to other social media platforms.
“What if it learned from its training data that people usually slow down in December and put bigger projects off until the new year, and that’s why it’s been more lazy lately?” X user Mike Swoopskee suggested.
Is OpenAI Limiting ChatGPT Due to Over-Demand?
Another theory holds that ChatGPT isn’t getting lazier at all; rather, its apparent sluggishness may be a product of increasing demand overburdening the systems that run the bot.
This has led to a related theory that OpenAI may be purposely throttling the bot to reduce the burden on its already overloaded systems. In February, research firm SemiAnalysis estimated that ChatGPT was costing the startup nearly $700,000 a day. That estimate came when the company was still largely running ChatGPT on GPT-3.5, before OpenAI released its more advanced models, GPT-4 and GPT-4 Turbo. However, there is no evidence that slowing down ChatGPT is a deliberate corporate strategy.
DePue suggested that users may simply notice these regressions because they stick out, while missing other improvements to the “ChatGPT experience” that “you don’t hear much about.”
Creators Don’t Understand AI Psychology?
Although ChatGPT’s users, and even the OpenAI researchers behind the bot, are aware of the odd behavior, understanding it is another matter.
ChatGPT’s creators have openly admitted that they are not entirely sure how the artificial intelligence tool actually works at the deepest levels.
“If we open up ChatGPT or a system like it and look inside, you just see millions of numbers flipping around a few hundred times a second,” said Sam Bowman, an AI scientist. “And we just have no idea what any of it means.”
“We just don’t understand what’s going on here,” Bowman added. “We built it, we trained it, but we don’t know what it’s doing.”