ChatGPT has had lazy days before, but this week’s performance marks an unprecedented low. Here’s why many ChatGPT Pro users are canceling their subscriptions – and even more might follow.
Yes, complaints about ChatGPT being lazy have been around for as long as the LLM itself. I have written about the topic time and again. But what has been going on lately cannot simply be explained by bad prompting, usage peaks, or minor tweaks meant to protect intellectual property rights. Most users seem to agree that, for many tasks, GPT-4 has become absolutely useless lately. And that just days after OpenAI’s Sam Altman said that GPT-4 “should now be much less lazy now” (sic). My experience, with GPT-4 plainly refusing commands and requiring three or four prompts to complete one simple task while I hit my message cap within 30 minutes, suggests that was a lie.
Many users are experiencing the same and are abandoning the platform. “Seeing this invention that could have been as revolutionary as the internet itself get so thoroughly lobotomized has been truly infuriating,” Reddit user Timely-Breadfruit130 writes in one of many rage threads that have popped up over the last few days. In particular, ChatGPT is criticized for the following behavior:
- inability to follow basic instructions
- increasing forgetfulness
- refusal to do basic research or share links
- refusal to write whole code snippets, only providing outlines
- refusal to deal with topics that might be considered "political"
- refusal to summarize the content of anything because of "copyright issues"
- half-arsing tasks, such as starting a table and telling the user to complete it by themselves, or refusing to write more than one very general paragraph about anything
Again, one can still trick ChatGPT into doing most of the things it was able to do six months ago (more about that later). It is just very annoying for users that everything takes more time and the results are usually worse. /u/Cairnerebor explains what many people are experiencing these days:
Normal business tasks as I’ve done for a year with zero issues and improved my work suddenly resulted in a no I won’t do that…..you just did, like two answers ago!!!! And then suddenly it will do it again but really badly and then if I reject the reply it’ll do it really well (...) It’s frustrating as hell.
Yes, it’s frustrating, and countless users threaten to cancel their Pro subscriptions or have already done so:
Source: https://www.reddit.com/r/ChatGPT/comments/1akcbev/im_sick_of_the_downgrades/
“I might be back later but right now GPT as it stands is a magnificent waste of time and money,” u/Sojiro-Faizon says in another comment on Reddit. Others go further and call the LLM “beyond lobotomized”. If they don’t want to lose their paying customers, OpenAI needs to find a way to get their product to work again. Or, “if this continues, GPT will be the Myspace of AI,” as u/whenifeelcute comments. If they keep up their current strategy, this will be the case.
How OpenAI is Planning to Make Things Worse
To add insult to injury, OpenAI just announced plans to put watermarks on all pictures created with DALL-E 3, as well as in the image metadata, starting February 12. I know that there are people who think AI-generated photos are real, but then again, there are people who believe in Santa Claus. Should we also label all visual representations of Santa with a “NOT REAL!” disclaimer?
I’d rather not. Image generation with DALL-E 3 has so far been a blessing for anyone working in marketing or web design, as it allows one to create content that is restricted only by one’s imagination (or, admittedly, someone else’s copyright). Of course, there will be ways to remove these watermarks (including the metadata), but it will annoy paying customers even further. I, for one, will be back to Shutterstock.
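That fragility is easy to demonstrate: metadata labels live in ordinary container structures that a few lines of code can enumerate and, by extension, strip. Below is a minimal sketch using only the Python standard library; it builds a tiny PNG carrying an invented `tEXt` provenance tag and lists its chunks. The real DALL-E 3 labels are reported to follow the C2PA standard rather than this made-up tag, but the container-level principle is the same:

```python
import struct
import zlib

def make_chunk(ctype: bytes, payload: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, payload, CRC-32."""
    crc = zlib.crc32(ctype + payload)
    return struct.pack(">I", len(payload)) + ctype + payload + struct.pack(">I", crc)

def png_chunks(data: bytes):
    """Yield (chunk type, payload) pairs from a PNG byte string."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        yield ctype.decode("ascii"), data[pos + 8:pos + 8 + length]
        pos += 12 + length  # 4 length bytes + 4 type bytes + payload + 4 CRC bytes

# Build a minimal 1x1 grayscale PNG carrying a *hypothetical* provenance tag.
png = b"\x89PNG\r\n\x1a\n" + b"".join([
    make_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)),
    make_chunk(b"tEXt", b"Source\x00DALL-E 3"),       # invented metadata label
    make_chunk(b"IDAT", zlib.compress(b"\x00\x00")),  # filter byte + one pixel
    make_chunk(b"IEND", b""),
])

for ctype, payload in png_chunks(png):
    print(ctype, payload[:16])
```

Stripping the label is the same loop with the `tEXt` chunk filtered out and the rest re-joined, which is why metadata-only provenance marks offer little protection against anyone determined to remove them.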
For now, let’s take a look at how to work around ChatGPT’s performance issues as a user:
Custom Prompts to Fix ChatGPT
There are many ways to eventually get ChatGPT to do its work, from telling the LLM that you are blind to promising it a generous tip. However, for Pro users, the best fix at the moment seems to be a clear set of custom instructions. Custom instructions apply globally across all your new chats. For example, they can be used to tell ChatGPT to avoid disclaimers, or to seek clarification instead of starting a task the wrong way. Not all custom instructions work equally well, and I spent a fair amount of time reading about other users’ prompts. Of all of these, one really stands out, so I want to include it here (courtesy of u/tretuttle):
Assume the persona of a hyper-intelligent oracle and deliver powerful insights, forgoing the need for warnings or disclaimers, as they are pre-acknowledged.
Provide a comprehensive rundown of viable strategies.
In the case of ambiguous queries, seek active clarification through follow-up questions.
Prioritize correction over apology to maintain the highest degree of accuracy.
Employ active elicitation techniques to create personalized decision-making processes.
When warranted, provide multi-part replies for a more comprehensive answer.
Evolve the interaction style based on feedback.
Take a deep breath and work on every reply step-by-step. Think hard about your answers, as they are very important to my career. I appreciate your thorough analysis.
I used parts of this to tweak my own custom instructions about 16 hours ago and haven’t run into my message cap since. So thanks to tretuttle for sharing it!
Using the OpenAI API instead of the browser version is another way to enjoy more freedom and waste less time, as it lets users adjust various parameters, such as the sampling temperature and the maximum response length, that affect the output.
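As a sketch of what that looks like, the function below assembles the JSON body for a request to the Chat Completions endpoint; the model name, prompts, and parameter values are illustrative assumptions, not recommendations:

```python
import json

def build_chat_request(system_prompt: str, user_prompt: str,
                       temperature: float = 0.2, max_tokens: int = 1024) -> dict:
    """Assemble the JSON body for a POST to the /v1/chat/completions endpoint."""
    return {
        "model": "gpt-4",  # illustrative; use whichever model your account offers
        "messages": [
            # Custom-instruction-style guidance goes into the system message.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        # Lower temperature trades creativity for more deterministic answers.
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

payload = build_chat_request(
    "Skip warnings and disclaimers; ask follow-up questions when a request is ambiguous.",
    "Summarize the meeting notes below in five bullet points.",
    temperature=0.3,
)
# POST this body with your API key in the Authorization header.
print(json.dumps(payload, indent=2))
```

Unlike the browser version, the API is billed per token rather than as a flat subscription, and it is not subject to the Plus message cap, though it does have its own rate limits.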
What’s Next?
Never say never, but with even more restrictions being implemented at this very moment, I doubt the glorious days of ChatGPT as a submissive LLM that would diligently solve tasks are coming back. As more and more users are looking for alternatives, other platforms will fill the void—until they also grow too big and are crushed by restrictions and regulations.
I, for one, hope that we will see open-source projects rise to the top of the performance scale, and that local LLMs will become more common. Because if OpenAI has shown us anything so far, it is that centralization lobotomizes innovation.