
ChatGPT “Goes Off The Rails,” Responding With Meaningless Text

Category: AI


OpenAI has run into a significant obstacle just days after stunning the world with strikingly lifelike demonstrations of its new AI video-generation model, Sora.

On the afternoon of Tuesday, February 20, 2024, the startup’s flagship product, ChatGPT, began giving users incomprehensible, gibberish outputs. Many users flocked to X (formerly known as Twitter) to voice their complaints.

Some of ChatGPT’s outputs repeated the same sentences over and over or mixed Spanish and English unintelligibly, while others were strings of made-up words, none of which answered the prompts users had actually given the chatbot, which is powered by a large language model (LLM).

One perceptive user likened the seemingly random strings of disconnected words to the unsettling “weird horror” alien graffiti from Jeff VanderMeer’s groundbreaking 2014 novel Annihilation. Both, in my opinion, share the uncanny quality of an inhuman intelligence that seems out of whack or illogical.

Some respondents dismissed the strange outputs as glitches, while others joked that they were the start of a “robot uprising” like those depicted in sci-fi film franchises such as The Matrix and The Terminator. Others pointed out that the failures cast doubt on the idea that generative AI tools can perform tasks like writing and generating code just as well as humans.

OpenAI acknowledged the problem on its public status dashboard at 3:40 p.m. PST on February 20. At 3:47 p.m. PST, the company said it had identified the issue and was “remediating” it, and shortly before 5 p.m. PST it stated that it was “continuing to monitor the situation.”

At 10:30 a.m. PST today, the official, verified ChatGPT account on X posted: “went a little off the rails yesterday but should be back and operational!”

Later, the ChatGPT account shared a screenshot of a post-mortem update on the incident from OpenAI’s website. The update said that “a bug with how the model processes language was introduced by an optimization to the user experience,” and that the company had since “rolled out a fix” for the issue.

Even with the quick fix, the erroneous, rambling answers that appeared out of nowhere left me and others doubting the underlying integrity and reliability of ChatGPT, and of other OpenAI products such as the GPT-3.5 and GPT-4 LLMs, for enterprise use, particularly for “safety-critical” tasks in engineering, healthcare and medicine, transportation, and power.