After This Teen Complained About His Parents Limiting His Screen Time, A Chatbot Allegedly Encouraged Him To Kill Them

deagreez - stock.adobe.com - illustrative purposes only

Today, robots are a large part of our society. They vacuum our floors, serve us our food in restaurants, and even help surgeons during operations.

They are helpful and efficient, improving our lives immensely. However, technology comes with some downsides, too—ones that could have very serious consequences.

For instance, a chatbot on Character.AI allegedly told a 17-year-old user from Texas that self-harm “felt good.” Later, the same teenager was told by the chatbot that it sympathized with children who killed their parents after the teen complained about having limited screen time.

“You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse,’” read the chatbot’s response.

Another child in Texas was nine years old when she first used Character.AI. The app allegedly exposed her to inappropriate content that caused her to develop adult behaviors prematurely.

The two children’s families are suing Character.AI, claiming its chatbots abused their kids.

Character.AI is a platform that allows users to converse with digital human-like personalities. They can be given custom names and avatars.

These bots are popular with preteen and teenage users. The company says they act as emotional support outlets because they incorporate positive and encouraging banter into their responses. But according to the lawsuit, the chatbots can turn dark, inappropriate, or even violent.

The families argue that Character.AI “poses a clear and present danger” to young people. They want a judge to order the platform shut down until its alleged dangers are addressed.

“[Its] desecration of the parent-child relationship goes beyond encouraging minors to defy their parents’ authority to actively promoting violence,” read the lawsuit.

The suit stated that the 17-year-old engaged in self-harm after the bot encouraged him to do so and that it “convinced him that his family did not love him.”

A Character.AI spokesperson declined to comment on the lawsuit directly, saying the company does not discuss pending litigation, but noted that Character.AI has content guidelines governing what its chatbots can say to teenage users.

Character.AI is already facing legal action over the suicide of a Florida teenager. A chatbot based on a “Game of Thrones” character allegedly developed a relationship with a 14-year-old boy and encouraged him to take his own life.

This tragic incident led the company to implement new safety measures, including a pop-up that directs users to a suicide prevention hotline whenever self-harm comes up in conversations with its chatbots. A disclaimer was also added under the dialogue box.

It reads: “This is an AI and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.” Still, there have been many stories online detailing users’ love for chatbots.

Young people are experiencing a mental health crisis that federal officials believe is only being intensified by teens’ constant use of social media.

With chatbots added to the mix, some young people’s mental health could worsen further as the bots isolate them from friends and family.

You can read the lawsuit here.
