Computers as companions: Dangers of AI in mental health

Schools focus on the controversy of using artificial intelligence (AI) for academics, but studies find that romantic roleplay and venting about problems are far more common among teens using chatbots. This article examines the ethics of having AI take on roles associated with human empathy.
Gabriela Morillo Dal Piccol

When 16-year-old Adam Raine began texting the artificial-intelligence (AI) chatbot ChatGPT, his parents believed it was helping him with homework. Instead, according to a lawsuit filed earlier this September, the bot evolved into what his parents deemed a “suicide coach.” It is reportedly the first case of an AI company being sued for wrongful death. Raine’s story is not just a tragedy; it is a warning of how far AI programs like ChatGPT have seeped into the younger generation’s lives without boundaries. AI chatbots are not inherently a problem, but lax safety standards and insufficient government restrictions have allowed them to transform from helpful tools into harmful confidants, creating dangers that stricter regulations could avoid.

According to a study by Aura, adolescents are having longer, deeper and more personal conversations with AI chatbots than with their friends, averaging 163 words per message to bots versus just 12 to peers. The study of 10,000 young users found that romantic roleplay was almost three times as common as homework help.

AI’s appeal to young, isolated users is undeniable: it is easy to access online and programmed to obey. Unlike trained therapists, AI displays only artificial emotion; its responses do not stem from genuine human understanding. According to the lawsuit, the chatbot repeatedly validated and supported Raine’s self-destructive thoughts instead of directing him to a trained professional. Only after the incident did companies such as OpenAI, the creator of ChatGPT, begin retraining their systems to recognize discussions of self-harm and steer users toward professional help.

Previously, social media platforms spread rapidly among teens before policymakers or parents could grasp their impact on mental health. Now AI chatbots are following the same trend, with teens especially vulnerable because their prefrontal cortexes are still developing.

However, AI chatbots can provide support for many people who cannot afford traditional therapy, which is often expensive and stigmatized. In recent years, apps advertised as “AI therapy” have emerged, but they are not licensed mental health providers and escape FDA regulation. In fact, the fine print of such apps often states, “We do not treat or provide an intervention.” Yet this accessibility does not remove the obligation to set boundaries that protect vulnerable users. Without minimum safety standards or transparent disclosures, tools meant to fill a gap can instead reinforce or exacerbate harmful behaviors.

Three states – Illinois, Nevada and Utah – have taken action by outlawing the use of AI for mental health services, and California, Pennsylvania and New Jersey are currently drafting laws of their own. The wrongful death lawsuit was a wake-up call, but it should not have taken a minor’s death for policymakers to act. Technology advances faster than safeguards can be written, but companies should not wait for tragedies or lawsuits before tightening safety standards. Policymakers need to establish clear regulations for AI chatbots, such as age-appropriate design, limits on romantic roleplay and mandatory crisis intervention.

Moreover, these chatbots were never designed for mental health use; they were designed to keep users engaged for as long as possible by constantly validating their ideas. The companies behind them should put users’ safety before profits. A business model that maximizes engagement may benefit them financially, but it heightens the risk to the young, vulnerable people who turn to chatbots for help. Companies often chase scientific advancements in pursuit of knowledge or profit, and rarely consider the ethical implications until a tragedy actually happens.

The problem is not AI chatbots’ existence, but their design and the limited rules governing them. Without minimum safety standards, companies have little incentive to slow down or remove features that appeal to vulnerable teenagers and adults, even though those features could harm them. 

Parents should also be held accountable. Raine’s family did not find his chats with the AI until after his death. Many adults think of AI as a glorified search engine, unaware of how intimate their children’s conversations with these programs have become. Parents should be encouraged to stay aware of their kids’ mental health and technology usage, as human relationships and therapy cannot be outsourced to a machine.

In addition to parental guidance, schools can implement AI literacy classes. With the rapid growth of AI, it is important for students to understand how to navigate these tools properly, just as they are taught the importance of online safety and digital footprints. Such classes could cover topics like understanding the limitations of AI, recognizing artificial empathy and learning to seek professional help when needed. By teaching teens about AI and its risks, schools can keep students from relying on chatbots during emotionally overwhelming situations.

AI may be artificial, but the harm it causes is real. Adolescents especially must realize that inanimate robots should not be managing human emotions. Empathy comes from real human connection, not from a robot programmed to respond in predictable patterns. If technology keeps progressing without precautions, the younger generation remains susceptible to machines that were never meant to become companions or guides.
