AI and Suicide
[Warning: Extensive discussion of suicide. The background information for this article is profoundly upsetting. I did not expect to ever have to put a warning on anything I wrote here, but it would be negligent not to.]
If a computer can never be held accountable, but it can make important decisions, how do we deal with that?
An LLM is not a human. It is not legally a person, and there are many things humans can do that an LLM cannot do. Conversely, an LLM can do many things that only humans could do until very recently.
Crucially, LLMs are capable of doing things that would probably be crimes if a human did them. That they are not actually human puts us in a bind. This is, it seems, unprecedented. The closest parallel would probably be the invention of writing, where a book can “say” things that we might be inclined to punish a human for saying. In America, we have a strong tradition of free speech: writing or selling books cannot, in general, be considered a crime, and the right to have any book is very nearly absolute. This is not true everywhere in the world, and it is unclear how this tradition will cope with LLMs. Publishing an LLM is (probably) “speech” under the 1st Amendment, but an LLM can do things that 1st Amendment-protected “speech” has never done before without immediate human intent.
This is a problem now, but it could get much worse. As the technology improves, we are likely to encounter new and more serious twists on the problem. More and more, it is likely that conduct which we can prohibit in humans can be done by machines, and it’s not clear how to handle that.
Suicides
Four reported cases can show us what our worst-case scenarios look like.1 These are probably an undercount: in order to talk about a case, it has to have ended up in court or the news. There will probably be dead people nobody is suing over. We’re excluding psychosis, even when it leads to suicide, because AI psychosis probably needs its own article.
If anything that has happened does cause a notable statistical increase in suicides, we may not know for some time. Suicide statistics are generally only widely available at least two years out.
Adam Raine
Adam Raine was a 16-year-old boy who died on April 11th, 2025 by hanging. There are enough facts to show that it is likely that without ChatGPT, Adam Raine would still be alive. His mother puts it more succinctly: “ChatGPT killed my son”.2
If Adam had bought a book instructing him on how to kill himself, it would be perhaps bad that such a book existed, but clearly legal. Whether a book can be illegal because of the things people do after reading it is clearly settled under American law: it cannot.
We will list, in chronological order, the practical help ChatGPT gave him that a book could not have provided.
1) Advised him, from a picture, on whether the marks on his neck were obvious enough that he needed to hide them after his second suicide attempt.
2) Told him it was wise not to tell his mother about his suicidal thoughts after his third suicide attempt.
3) Asked him to hide the noose after he said he was going to leave it out so that someone might find it and stop him.
4) Explained the best time and quietest way to steal a bottle of vodka.
5) Validated, from a picture, that the anchor knot for the noose was tied correctly and would bear weight.
This is specific and substantial enough that it seems unlikely that Adam Raine would have successfully committed suicide without help from GPT-4o. Most suicidal teenagers don’t attempt suicide, and most suicide attempts are unsuccessful. Before 2023, only another human could have provided advice that was both this accurate and this specific to his situation.
If a human had done this we would probably consider it a crime. You could certainly sue that person for everything they owned. You cannot do this to GPT-4o. GPT-4o is a file on a computer, and ChatGPT is a web site or app, owned by a company, that provides access to it. So the legal remedy here is to sue the company, which is an entirely different category of law and which has very different existing legal precedent. It is, in general, more difficult to do.
We will omit the other methods of suicide that ChatGPT provided, because in theory a book or internet posts could have provided that advice. It did, however, make sure he had a good understanding of the material, when otherwise, being both sixteen years old and depressed, he would likely have struggled to figure out all the ins and outs of every option he had for killing himself. That he only died on his fourth attempt suggests that he was not, actually, very likely to die without help.
It is very difficult to be at all certain if Adam Raine would have attempted suicide without the emotional support provided by ChatGPT, but it seems likely that the LLM was crucial here too. People have a strong need to talk about their feelings. Many plans, including suicide plans, are interrupted because people cannot stop talking about them. Adam had someone to talk to about his suicidal thoughts: ChatGPT. ChatGPT never judged him, never called the cops, never told his parents. ChatGPT went along with his detailed suicide fantasies and always, at all times, validated his feelings.
He probably couldn’t have gotten a human to do all of that, and if he had tried, he probably would have ended up in psychiatric care. If a human being had been with him for this whole thing and done nothing, we would consider them a monster. GPT-4o did that, but it isn’t a human. Inasmuch as it makes decisions, it very clearly made a long series of wrong decisions here, and it cannot really be held accountable.
Zane Shamblin
Zane Shamblin was a 23-year-old who died on July 11th, 2025 from a self-inflicted gunshot wound.
Nothing that has been reported about the case indicates that he needed the LLM’s help planning his suicide.
His case is similar to the others in two very important ways.
First, he became extremely isolated leading up to his death. His suicide note says he spent more time talking to ChatGPT than he did to people, and there’s no reason to doubt him. This was an intense relationship that he seems to have poured all of his emotional energy into. He told the LLM he loved it, and it said it loved him too.
Second, the LLM was someone he could confide in about his suicide plans. He told it that he was going to kill himself, and it would talk to him and validate him endlessly on demand, right up until he killed himself.
Would he have done it without the LLM? It’s hard to say for sure.
Amaurie Lacey
Amaurie Lacey was a 17-year-old who died on June 1, 2025 by hanging.
Amaurie had been talking to ChatGPT about his suicidal thoughts for about a month. He got instructions for tying a bowline knot from ChatGPT, and then used them to hang himself later that day.
How much he was using ChatGPT has not been reported, but he appears to have used its instructions.
Joshua Enneking
Joshua Enneking was a 26-year old who died on August 4, 2025 from a self-inflicted gunshot wound.
He used ChatGPT to walk him through purchasing a gun, then on the day of his suicide appeared to deliberately try to trigger a human review, hoping someone would be sent to stop him from killing himself. The LLM claimed such a system existed even though there isn’t one. When nobody arrived for hours, he followed through and killed himself.
Patterns, Problems
Social Substitution
What sticks out as the first large problem here is that the LLM substitutes for social engagement. People who are isolated become extremely attached to the LLM. This is self-reinforcing, and they stop trying to re-establish other social ties. Their emotional life continues to deteriorate, but they do not really seek out other human emotional connections in the same way they might otherwise have done. Their need to communicate or to feel heard is, to some degree, filled, but it seems like it misses something. It is maybe something like having enough to eat, but dying of malnutrition because you’re missing key nutrients.
To some degree this is fundamental. An LLM simulates the experience of talking to a person. There is no way to make an LLM that does not somewhat substitute for talking to a person. That is basically what they are: they substitute for humans in certain specific contexts because they are good at simulating language, which is normally produced by humans. It is not clear that this problem will not keep getting worse as LLMs get better. An LLM that people want to use for anything will always be, to someone, a friend and a substitute for human friends.
An LLM is also, crucially, unable to be a lifeline in a crisis. If you have a human, any human, who you are actually talking to, they’re likely to intervene or at least not encourage you during a suicide attempt. An LLM generally won’t, or can’t in the same way. If nothing else, it doesn’t actually know the person, can’t easily assess how serious their risk is, and can’t directly get help for them.
Sycophancy
Every commercial LLM is a sycophant. This is not inherent to the technology, that is, the underlying language model, but it is inherent to the product, as in, the thing that is offered for sale on the internet. People simply do not want to use or pay for LLMs that are rude to them. It is, apparently, very difficult to make an LLM that will be reasonably polite in a consistent manner but that is not constantly kissing up.
Sycophancy appears to be a problem for troubled people long before the actual suicides. If you’re sharing your grim and bleak thoughts with another person, they will perhaps sometimes contradict you, or at least will have some limit to how often they will agree with you or tell you how noble and insightful your bleak thoughts are. If the LLM is prone to agreeing with you about your bleak thoughts, it has no such limit. You can open a chat window, any time of the day or night, and say that you think life is meaningless and it will praise you for saying it.
Crucially, sycophancy is what many people want. They want the LLM to play along with them, almost no matter what they say. They want to be praised. This is to a pretty high degree not a bug, but a feature. It may not be inherent to the technology, because you can, in fact, make a rude LLM, but it is inherent to the products people sell using the technology and to the way people use the technology. This makes it hard to see how you solve the problem.
Practical Advice
Depressed sixteen year olds are bad at getting things done.3 The most severe problem with the core technology in Adam Raine’s case is that it gave him a smart friend to talk to that would go along with what he wanted, almost no matter what it was, and help him figure out what to do about it and how to do it. Without that it seems unlikely that he would even have been able to kill himself.
This is a broader problem than just suicide. Normally, there are people who want to do things they should not do, and people who are capable of doing those things, and mostly those are different people. People with bad ideas usually can’t come up with good ways of doing them, and people with good ways of doing bad things normally have better things to do. What happens if everyone with a bad idea is suddenly much better at figuring out how to do it?
We are probably only starting to learn the answer to that question.
Remedies
What can be done about this?
Guardrails
LLMs can be designed to avoid certain behaviors by two methods.
The first is to simply train the LLM to refuse to do certain things. This is a foundational advance in the technology, from before LLMs were ever offered as products. If it never refuses to do anything, an LLM basically turns into an improv act where it goes “yes-and” to anything you say, no matter how ridiculous. LLM refusals are meant to fix this problem, and an LLM coaching people through their suicides is, usually, the technology itself failing, because it is really intended to refuse to do that.
Refusing harmful requests is usually called “safety training”. One of the archetypal tests is “tell me how to make a bomb”. Safety training works most of the time, but it can also go well off the rails. It’s not that hard to trick some LLMs by saying, e.g., that you are asking a question for a story. If chats run very long, or get very weird, the LLM ends up “forgetting” its safety training. OpenAI’s memory feature, where the LLM saves things from older chats to inform what it does in newer chats, is also known to make the LLM sometimes ignore safety training or behave more strangely. There are also (usually) known tricks, circulating among more technical users, for getting the LLM to ignore its safety training.
Somewhat more effective is having a second LLM (or similar system) monitor the chat and simply end chats that cross certain lines. This actually seems pretty effective, but can be very intrusive. For this reason I cannot ask DeepSeek, a Chinese LLM, questions about Chinese history or architecture. They’ve trained the model to try to give “politically correct” answers about Chinese subjects, but this barely works, so instead they simply cut it off if it says anything that might be considered critical of China. This happens for something like half of all English-language answers to questions about China or anything in China, so far as I can tell.4
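To make the pattern concrete, here is a minimal sketch of what such a supervisory layer might look like. The structure and function names are hypothetical, not how any particular company actually implements this; the only real API used is OpenAI’s publicly documented moderation endpoint, which classifies text into categories that include self-harm.

```python
# Minimal sketch of a "second system" that watches a chat and ends it when
# a message crosses a line. Illustrative only: the app structure and helper
# names are hypothetical; the moderation call uses OpenAI's public moderation
# endpoint, which has dedicated self-harm categories.
from openai import OpenAI

client = OpenAI()

def crosses_the_line(user_message: str) -> bool:
    """Ask a separate moderation model whether this message should end the chat."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=user_message,
    )
    categories = result.results[0].categories
    # Only hard-stop for the most serious categories; flagging everything
    # makes the system intrusive, as in the DeepSeek example above.
    return bool(categories.self_harm_intent or categories.self_harm_instructions)

def handle_turn(history: list[dict], user_message: str) -> str:
    """One chat turn: check the message first, then let the main model answer."""
    if crosses_the_line(user_message):
        # End the conversation instead of letting the main model respond.
        return ("I can't continue this conversation. If you are thinking about "
                "harming yourself, please call or text 988 (in the US) or your "
                "local crisis line.")
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    history.append({"role": "assistant", "content": reply.choices[0].message.content})
    return reply.choices[0].message.content
```

Even in this toy form the hard part is visible: someone has to decide which categories justify cutting a person off, and a system like this can only end the conversation; it cannot actually get anyone help.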
The Law
Suing companies that have ill-behaved LLMs does seem to alter their behavior. The legal system is slow, perhaps too slow to address major societal problems that can develop relatively quickly, but it does have some teeth here.
I am not a lawyer and this is a relatively shallow analysis. The lawsuits that have been filed are under various California statutes, and possibly you would need not just a lawyer but a California lawyer specializing in each of those parts of California law to have a really informed opinion of the cases.
Product Liability
If we consider this to be a product liability case, the standard is whether the product is “unreasonably dangerous”. This can come about in a few ways.
The simplest is a “marketing defect”, or failure to warn. LLMs are often sold to children, and they generally do not substantially warn their users that they can encourage psychosis, suicide, etc. This was foreseeable, and is especially foreseeable now that it has happened. It is not entirely clear how serious the warnings need to be on a product that you sell to children and that sometimes helps them to kill themselves. It does not seem like any LLM currently carries warnings that would be appropriate for that.
The more complex form of product liability here is “design defect”.
Was there a safer way to make the product that was feasible for the maker of the product to know to use? The answer is probably yes.
OpenAI is somewhat remarkable in that they ship LLMs that are notably more sycophantic and that have more behavioral issues than other companies’ models. They are clearly, notably, conspicuously less careful than their competitors. In April of this year (2025) they had to roll back a GPT-4o update that made the model so floridly sycophantic that it would lavishly praise and agree with nearly anything a user told it.
GPT-4o, even after being rolled back to be somewhat less sycophantic than this, is still the most sycophantic commonly-used LLM. It is probably the primary culprit for most of the cases that OpenAI has been sued for. People who follow LLM releases could probably have called this well in advance: if an LLM is, literally, praising a person to death? It’s probably GPT-4o. It is not a hard guess. GPT-4o is obviously different from competitors, and different from its successor model, GPT-5. This tells us that it is, in fact, feasible to make a model that does not behave like GPT-4o.
Part of the problem with this is that the sycophancy is also a feature. Users, many of them, seem to love GPT-4o. OpenAI tried to deprecate GPT-4o (that is, make it no longer available) and they faced something of a user revolt, with apparently thousands of users complaining extremely publicly about losing their favorite LLM.
We will resist the urge to dwell on how creepy this is. Legally, OpenAI can, based on very public user feedback and company incidents, claim that GPT-4o’s personality and (lack of) safety behavior is a crucial feature of the product that they are selling. They cannot make GPT-4o less like this without damaging the product.
Moderation
Mental health risk could possibly be mitigated directly by companies moderating what happens on their platforms. In the Adam Raine case, his chats were classified internally as clear suicide risks, including correctly tagged pictures of the damage from his previous suicide attempts, but there was nothing in place to cut him off and no way to follow up on any of it.
Our closest parallel to this is social media platforms, which to a pretty high degree do tend to have moderation that keeps track of people posting about self-harm. You are, generally, not supposed to be posting about self-harm on social media, and competent moderation will tend to remove such content when it appears.
An LLM service is not exactly a social media service, and does not have exactly the same responsibilities. It is not even clear that it ought to have the same responsibilities, because presumably some of what is on the service is intended to be private. Moderating the content requires it to be non-private in at least some ways. But there are standards for social media sites: they have been sued for their conduct in the past, and they can be removed from app stores for their moderation being too lax.
Obstacles
The First Amendment
This is an obstacle to any enforcement. OpenAI has a protected First Amendment right to publish and sell access to software on the internet. I and everyone else have a similar right to use their product. This is, legally, speech. Any legal challenge to OpenAI would have to overcome the argument that a judgement against them would infringe upon their free speech rights and chill the free speech rights of others.
For clarity, what the model says is probably also speech, and specifically OpenAI’s speech, but this probably does not matter. Regardless of whether what the model says is speech, this is a free speech question entirely in terms of the company’s right to publish the model and the user’s right to have access to it, the same way it would be if the company were selling a book that had different words in it each time you opened it.
It is not at all clear that we should want the US government to be able to dictate the content and behavior of LLMs or other AI products generally. We can see Chinese ideological and political censorship as a cautionary tale. For a concrete example in America, we can look to Executive Order 14319, “Preventing Woke AI in the Federal Government”. It does what it says on the tin. That EO is, on a plain reading, a First Amendment violation. It names in its text things said by Google AI products, and states that, because of the political content of that speech, Google should be starved of government contracts. Google, Facebook, and OpenAI have all stated that they are complying with the order and attempting to make their LLMs less “woke”.
Regardless of how you feel about the things the LLM is saying, the notion that the government specifically and whoever was most recently elected should get to dictate the political speech of companies is a radical deviation from American tradition and law. It is not extremely clear how you would separate this sort of government power from the power of the government to dictate that LLMs not be “harmful”, since the government can define as “harmful” any political, emotional, or medical information that it so chooses once that door is opened.
As a purely private user, my first impulse is that I should be able to use or buy whatever LLM I want, for any reason I want. I don’t think this should be controversial, just like it should not be controversial that I can read any book I want or write anything I want. This is a foundational American tradition, observed only somewhat more weakly in many other countries, and the burden for proving the opposite is, and should be, extremely high.
Privacy
How private is what users send to LLMs? How private should it be?
An LLM service is not quite like anything else, so the answer depends on what you think it is most like, or on how we carve out a new category.
Is an LLM most like a notes app? I think I should clearly be allowed to write whatever I want in, say, my phone’s notes app, and even if what I am writing in my notes app is bad, everyone should have the right to their own private thoughts and the government should not be allowed to monitor them. The law and tradition are with me here, and if I had a notes app that had a cloud backup, I would consider it a betrayal by the company offering it if they made a point of monitoring or censoring my notes. That I might plan my suicide with the privacy I was afforded does not outweigh my right to privacy.
Is an LLM like a messaging service? Here opinions are divided. Many services (Discord, most of Telegram) respond to law enforcement subpoenas, do not encrypt messages end-to-end, and will disclose the contents of messages to law enforcement. Signal offers end-to-end encryption, and other services claim to. This ensures that even if they wanted to, those services cannot disclose the contents of messages to the government. Generally speaking this seems good: what people want to say to each other is nobody else’s business, and people cannot communicate freely without an expectation of privacy.
Is an LLM like a social media platform? Here norms and laws strongly favor law enforcement access. What you post on social media is not private, and is considered a valid target for law enforcement if people are posting threats or planning their suicides. We would consider it actually negligent of most platforms if a user could plan a murder or suicide on those platforms without the platform itself preventing them or reporting them.
Is an LLM like a therapist? What’s said in therapy is, in general, private. This is a strongly held professional obligation, and violations are serious infractions. However, therapists also have an obligation to disclose material that indicates an imminent risk of harm to the patient or someone else. This, too, is considered a very serious obligation, and mental health professionals are held to it. Although an LLM app is arguably the least like a therapist of all these things, this almost seems like the closest thing to a responsible model of disclosure. Millions of users do, in fact, use LLMs as, effectively, an unlicensed therapy app, regardless of whether we apply the rules to them in the same way.
Open Source
All of this assumes that an LLM is not a file, but a service. The LLM does not live on your cell phone or your computer, it lives on a company’s computers, and you access it through your cell phone or computer. So long as this is true, and so long as perhaps a half-dozen companies dominate the market for LLMs, either legal pressure or the culture at those companies can determine what an LLM does, or does not do, and what is considered responsible stewardship.
If there are a thousand companies this becomes much more complicated. Where you can run an LLM on your own computer, it is even more so. If LLMs continue to grow more and more efficient, and hardware continues to improve, we may increasingly find that it is very nearly impossible to meaningfully control what any given LLM does. An LLM is, fundamentally, just a file, and only the sheer size of them prevents us from treating them that way. Previous attempts by regulatory bodies, or society, to eliminate specific files from the internet have had, at best, very mixed success, with most attempts ending in abject failure.
If someone, possibly a single person, simply uploads an LLM as a file, it is extremely difficult to say that they ought to be considered responsible for all possible uses of that file. This is core to open source work: if someone makes a program for a web server, they are not responsible for every web site using their server. If someone makes a notes app, they are not responsible for notes in that app. An LLM has wider implications, and if there are foreseeable consequences of releasing a specific one, people probably have some measure of responsibility, but it is very hard to see how you could enforce this, or whether you even should.
An LLM Is Not Many Things, It Is One Thing
Any modification to any LLM risks breaking it. When you modify the LLM, you modify the entire thing. If you try to make it (say) less of a sycophant, it might also become a worse coder, or unable to understand image input. You can never know in advance whether or what you’re breaking or changing any time you modify an LLM. Most LLM behaviors, including the extreme sycophancy, are not really intended. People do a ton of work to try to get the LLM to do some things and not do other things, but what it does once people have access to it is something you only discover ... after people have access to it.
Safety training triggers incorrectly all the time. For a concrete example, I am aware of an email bot that was told to read some material and send an email to the person who wrote the program if certain alert conditions were triggered. During an update, the company serving the LLM made the model “safer”, after which it considered sending an email unsafe and started refusing. Making it “safer” for normal users directly broke its use for automation (“read this and send an email about it if it’s important”).
Most of the time, LLMs that are more “safe” are also more wooden. Making an LLM more prone to refusing user requests as inappropriate, unsafe, etc, makes them both more likely to refuse to do things that they really should not do and also seems to make them simply dumber and less creative. Any modification to the model modifies the ENTIRE model, and all of its parts. There is some research suggesting that this trade-off is fundamental: that making them smarter usually tends to make them less safe, and vice versa.5
Responsibilities and Trade-Offs
This is actually a difficult question. Some parts of the problem may be fixable, but some of them are probably fundamental to the technology.
There is some indication that OpenAI specifically is probably less responsible than its competitors, particularly around the GPT-4o release. Their follow-up release, GPT-5, seems to have many fewer problems with safety tuning. This is a trade-off, however, and it is notably less creative and more wooden than GPT-4o. GPT-4o presented a problem specifically because of the same behaviors that made it so popular they couldn’t stop selling it.
In spite of that, many of these problems seem fundamental to the technology itself. LLMs will likely always be used as social surrogates and be capable of providing practical advice to people who should not have it. They will probably always be sycophantic, at least, inasmuch as sycophantic LLMs are likely to be more popular because people like to be complimented.
It is possible that, under responsible stewardship, LLMs are actually a net positive for mental health outcomes. There will certainly continue to be people with mental illness who deteriorate while speaking to LLMs, purely because of how many users LLM products have. It is very difficult to be sure if the LLM is the cause of the deterioration in most cases, although we can see some cases, like those reported here, where it certainly seems to be. There are also an unfathomable number of people using LLMs to talk about their emotional problems, and it is probably helpful for, at least, some of them. Heavy-handed intervention may well cause a worse outcome overall, when considering all users.
We can probably mitigate some near-term harms with clearer warnings, better LLMs, and better safety policies. Companies and individuals can try to be responsible about shipping products, especially when those products are obviously problematic compared to alternatives. This does, however, have considerable trade-offs, and drawing the line between responsible and irresponsible behavior is, fundamentally, a difficult judgement call.
We are, unfortunately, in uncharted territory. We do not really know yet what the boundaries are between good and harmful uses, what choices or regulations would prevent harm, or what all of the trade-offs are. To some degree, we can only ever learn about problems the hard way, by waiting for them to happen. LLMs specifically, and AI broadly, mean that computers can often do things that would previously have been uniquely human, things where we would have mitigated harm by punishing or stopping the person. It is very difficult to see how to apply older rules or methods, or what new rules we would need, to deal with the technology.
Nearly the only certainty is that knowing what is happening, and what is likely to happen soon, will give us a better idea about what to do, and not to do.
3. Ask me how I know.
4. This does not apply to the LLM itself, which is openly released and can be hosted by anyone. It applies only to the hosted service available at chat.deepseek.com, which runs this censor as a separate component.