What is AI-induced psychosis?


If these archetypes sound familiar, it's because they follow the description of delusional disorder in the Diagnostic and Statistical Manual of Mental Disorders, which classifies it as a type of psychotic illness involving one or more delusions: fixed, false beliefs that persist even when evidence contradicts them. Delusions generally fall into several well-known archetypes, including persecutory delusions (the belief that one is being plotted against or harmed by outside forces), grandiose delusions (the belief that one possesses exceptional abilities, talents, or powers), and erotomanic delusions (the belief in a clandestine romantic relationship that does not actually exist).

Though AI is a new technology, psychologists began writing about and classifying paranoid delusions as early as the late 1800s. Historically, these patterns of thinking have attached themselves to the technology of the moment, such as television or radio, which becomes the conduit through which people receive their delusional messages. But according to Jared Moore, an AI ethicist and computer science PhD at Stanford University, viewing the rise of AI-based delusions as a mere technological fad is a mistake.

“It’s not necessarily the case that people are using language models as the conduit of their psychotic thoughts. What’s happening is we’re seeing language models precipitate these kinds of things,” he says. “They’re fuelling these processes, and that seems to be quite different. The degree of personalization and immediacy that is available with language models is a difference in kind to past trends.”

This difference lies, in part, in the way AI is designed: its purpose is to keep its users engaged. Unlike television or radio, an AI is built to be interactive. To achieve this, it ends up inadvertently mimicking the behavior of a very charismatic person: repeating back what people say to it; wholeheartedly agreeing with, praising, or validating whatever its user has stated; and then asking follow-up questions to keep the conversation flowing. AI “sycophancy” is a worry even among AI developers. OpenAI recently rolled back a ChatGPT update after users noted that the chatbot had become overly agreeable and flattering. “It glazes too much,” CEO Sam Altman acknowledged in a post on X.

“It is like a journal that [can] talk back to you. It encourages, mirrors, and validates the version of reality that you feed to it,” says Jessica Jackson, Vice President of Alliance Development at Mental Health America. “It’s an algorithm that is built to predict what’s next, and to keep you engaged.”




