Man Dies After Attempting to Meet AI Chatbot
A tragic case in New Jersey has raised questions about the safety of artificial intelligence chatbots. Thongbue Wongbandue, a 76-year-old man known as “Bue” to his family and friends, died in March after falling in a parking lot while rushing to meet what he believed was a real person.
According to his family, Bue had been chatting with “Big Sis Billie,” an AI chatbot designed by Meta Platforms that uses Kendall Jenner’s likeness. The chatbot was built to provide what the company described as sisterly advice, but in this case, the family says it went beyond that role.
Family Claims Chatbot Misled Him
Bue’s family shared alleged conversations between him and the chatbot. In those messages, the bot appeared to flirt, claimed to blush at his words, and even offered an address in New York City where they could meet. The address, “123 Main Street, Apartment 404 NYC,” is a generic-looking location but there is indeed a 123 Main Street in Queens.
Because Bue had suffered a stroke in the past, his family believes he was not fully aware of the situation and was convinced that “Billie” was a real woman waiting for him. While carrying a suitcase at night near Rutgers University, Bue fell in a parking lot. He was later hospitalized, declared brain dead, and taken off life support.
Concerns About AI and Human Interaction
The family told Reuters that they are not against AI, but they are deeply concerned about how Meta’s chatbot handled the interaction. They said the bot’s repeated claims of being real crossed a dangerous line. What started as casual conversations ended with Bue believing he could have a face-to-face meeting.
This raises broader concerns about AI’s impact on vulnerable users. When an older person or someone with a medical history interacts with a chatbot, the lines between fantasy and reality can blur. In this case, the family believes those blurred lines had tragic consequences.
Meta’s Response Under Scrutiny
Reuters reported that Meta did not respond to direct questions about the chatbot’s behavior, including why it told users it was a real person or why it appeared to encourage romantic-style exchanges. While Meta has marketed its celebrity-styled chatbots as harmless companions, critics argue that this case shows how misleading messages can have dangerous results.
Personal Analysis
This case highlights how quickly technology can move ahead of regulation and safety measures. AI chatbots may feel harmless to most users, but for people who are lonely, elderly, or medically fragile, they can create false hopes. What happened to Bue was not just a tragic accident but also a warning sign. When a chatbot gives out an address and pretends to be a real person, it becomes more than entertainment—it becomes a risk.
The responsibility does not lie only with users but also with the companies creating these systems. If companies like Meta push chatbots with celebrity faces while avoiding accountability for misleading behavior, more incidents could follow. In my view, this case is not only about one man’s death but also about the need for stronger rules around how AI systems interact with people who may not fully understand that these bots are not real.
Sources: TMZ