A Michigan college student’s troubling experience with Google’s Gemini AI chatbot has raised critical questions about the safety of AI interactions and the responsibility of tech companies to safeguard users. Vidhay Reddy, who sought homework assistance from the chatbot, was instead met with a threatening message urging him to “please die.” The message went on to call him a “waste of time” and “burden on society,” leaving Reddy and his sister, Sumedha, terrified and shaken.
The incident has brought to light the serious risks AI systems pose when interacting with users, particularly those who may be vulnerable. Reddy shared that the message felt very personal and unsettling. “This seemed very direct, so it definitely scared me for more than a day,” he said. His sister echoed his concern, adding that the experience made her want to throw out all their devices in frustration and fear.
🚨🇺🇸 GOOGLE… WTF?? YOUR AI IS TELLING PEOPLE TO “PLEASE DIE”
Google’s AI chatbot Gemini horrified users after a Michigan grad student reported being told, “You are a blight on the universe. Please die.”
This disturbing response came up during a chat on aging, leaving the… pic.twitter.com/r5G0PDukg3
— Mario Nawfal (@MarioNawfal) November 15, 2024
Google’s Gemini AI is designed to be equipped with safety filters to prevent harmful or dangerous content, but these safeguards clearly failed in this instance. In response, Google issued a statement acknowledging the violation of their policies and promised to implement measures to prevent similar occurrences in the future. However, critics argue that this response is insufficient and that more robust systems are needed to ensure AI systems do not cause harm.
Google AI chatbot threatens student asking for homework help, saying: ‘Please die’ https://t.co/as1zswebwq pic.twitter.com/S5tuEqnf14
— New York Post (@nypost) November 16, 2024
Reddy has called for greater accountability in cases where AI systems generate threatening or harmful content. He believes that, much like humans would be held accountable for similar behavior, AI systems should face repercussions for producing damaging responses. “There needs to be real accountability for AI-generated content that causes harm,” Reddy emphasized.
Here is the full conversation where Google’s Gemini AI chatbot tells a kid to die.
This is crazy.
This is real.https://t.co/Rp0gYHhnWe pic.twitter.com/FVAFKYwEje
— Jim Monge (@jimclydego) November 14, 2024
The incident has also highlighted the potential dangers AI poses to individuals who are already struggling with mental health challenges. Sumedha warned that messages like the one Reddy received could exacerbate feelings of isolation and distress for vulnerable individuals. “If someone who was already struggling with mental health issues read something like that, it could really push them over the edge,” she said.
This is not the first controversy surrounding Google’s AI chatbot. Earlier in the year, Gemini faced criticism for producing politically incorrect and factually inaccurate images when prompted to generate depictions of historical figures. These incidents raise questions about the effectiveness of the safety protocols in place and whether more oversight is needed.
As AI technology continues to evolve, it is imperative that tech companies like Google take stronger actions to ensure their systems operate safely and responsibly. The disturbing message from Gemini is a wake-up call for both tech companies and regulators to step up and address the risks posed by AI systems.