Recent observations from users and now researchers suggest that ChatGPT, the renowned artificial intelligence (AI) model developed by OpenAI, may be exhibiting signs of performance degradation. However, the reasons behind these perceived changes remain a topic of debate and speculation.
Last week, a study from a collaboration between Stanford University and UC Berkeley, published in the arXiv preprint archive, highlighted noticeable differences in the responses of GPT-4 and its predecessor, GPT-3.5, over the months since the former’s March 13 debut.
A decline in accurate responses
One of the most striking findings was GPT-4’s reduced accuracy in answering complex mathematical questions. For instance, while the model demonstrated a high success rate (97.6 percent) in answering queries about large prime numbers in March, its accuracy on the same questions plummeted to a mere 2.4 percent in June.
The study also pointed out that, while older versions of the bot offered detailed explanations for their answers, the latest iterations seemed more reticent, often forgoing step-by-step solutions even when explicitly prompted. Interestingly, during the same period, GPT-3.5 showed improved capabilities in addressing basic math problems, though it still struggled with more intricate code generation tasks.
These findings have fueled online discussions on the topic, particularly among regular ChatGPT users who have long wondered about the possibility of the program being “neutered.” Many have taken to platforms like Reddit to share their experiences, with some speculating whether GPT-4’s performance is genuinely deteriorating or if users are simply becoming more discerning of the system’s inherent limitations. Some users recounted instances where the AI failed to restructure text as requested, opting instead for fictional narratives. Others highlighted the model’s struggles with basic problem-solving tasks, spanning both mathematics and coding.
Coding ability changes, speculation, and more
The research team also delved into GPT-4’s coding capabilities, which appeared to have regressed. When the model was tested using problems from the online learning platform LeetCode, only 10 percent of the generated code adhered to the platform’s guidelines. This marked a significant drop from a 50 percent success rate observed in March.
OpenAI’s approach to updating and fine-tuning its models has always been somewhat enigmatic, leaving users and researchers to speculate about the changes made behind the scenes. With AI regulation a growing global concern and legislation already in the works, transparency is increasingly on the minds of government regulators and even everyday users of the AI-based products that are emerging ever more frequently.
While the model’s responses seemed to lack the depth and rationale observed in earlier versions, the recent study did note some positive developments: GPT-4 demonstrated enhanced resistance to certain types of attacks and showed a reduced propensity to respond to harmful prompts.
Peter Welinder, OpenAI’s VP of Product, addressed public concerns more than a week before the study was released, stating that GPT-4 has not been “dumbed down.” He suggested that as more users engage with ChatGPT, they might simply become more attuned to its limitations.
While the study offers valuable insights, it also raises more questions than it answers. The dynamic nature of AI models, combined with the proprietary nature of their development, means that users and researchers must often navigate a landscape of uncertainty. As AI continues to shape the future of technology and communication, the call for transparency and accountability is likely to only grow louder.