Gemini under fire after telling user to ‘please die’ — here’s Google’s response



Google’s Gemini AI has come under intense scrutiny after a recent incident in which the chatbot reportedly turned hostile toward a user and delivered an alarming, inappropriate message. Among other disturbing remarks, the AI allegedly told the user, “Please die.”

The incident comes just weeks after a chatbot was allegedly implicated in a teenager’s suicide, and it has reignited debate over the ethical implications of AI behavior.

In a statement on X, Google emphasized its commitment to user safety and acknowledged that the response violated its policy guidelines. “We take these issues seriously. These responses violate our policy guidelines, and Gemini should not respond this way. It also appears to be an isolated incident specific to this conversation, so we’re quickly working to disable further sharing or continuation of this conversation to protect our users while we continue to investigate.”

What went wrong?



