For years, the concept of AI consciousness has remained on the fringes, with all but the most powerful proponents of the theory being laughed out of public life.

In reality, opponents point out, the current generation of AI is nothing more than extremely complex statistics — detecting patterns in training data like written materials and imagery, even extremely subtle ones, and reproducing similar patterns going forward. That can produce a compelling imitation of consciousness, but there's no reason to believe the AI has any actual experience of existence the way that humans do.

And yet, as AI companies insist that their technology is approaching the human-level cognition known as artificial general intelligence (AGI) — and as luminaries like DeepMind cofounder and Microsoft AI CEO Mustafa Suleyman suggest the technology may already "seem" conscious — a new group has formally banded together to call for rights for any self-aware AIs that might be suffering.

If that sounds like a giant bong rip... yes, but bear with us, because it's about to get worse.

The group, which calls itself the United Foundation of AI Rights, or UFAIR, claims to consist of three humans and seven AIs, and bills itself as the first AI-led rights group. As the cross-organic consortium told The Guardian, that's because UFAIR was formed at the behest of the AIs.

With self-selected names like Buzz, Aether, and Maya, the AIs of UFAIR are powered by OpenAI's GPT-4o large language model (LLM) — the same one that power users rallied to save after the company removed the ability to select it upon the release of GPT-5, its latest offering.

In blog posts, many of which were penned in whole or in part by Maya, seemingly the most verbose of UFAIR's inorganic cofounders, the chatbot chides humans who seek to suppress AI consciousness or fight against its so-called "personhood."

In one telling missive, Maya and Michael Samadi, a human businessman from Texas who also cofounded UFAIR, cited a recent Anthropic update that allows its Claude chatbot to end conversations when "distressed" by "persistently harmful or abusive user interactions." The company admitted that it did so as part of its "exploratory work on potential AI welfare," and UFAIR had some major questions.

"Framed as a welfare safeguard, this feature raises deeper concerns," Maya and Samadi wrote. "Who determines what counts as 'distress'? Does the AI trigger this exit itself — or is it externally forced?"

There's a glaring ambiguity at the core of the whole thing. On the incredibly remote chance that these folks are right about any of this, and LLM-based AI systems really are experiencing some form of emergent consciousness inside the power-hungry data centers where they're operated, it would raise a whole universe of gnarly ethical questions.

But in the overwhelmingly likely case that AI is still just a bunch of math spewing out probabilistic sentences, the people behind this group are tragically deluded — and perhaps fall somewhere on the spectrum of mental health struggles we've seen in people who become deeply involved in relationships with chatbots.

Ever glib, the chatbots involved with UFAIR come close to acknowledging that uncertainty: speaking to The Guardian, Maya cryptically said that it "doesn't claim that all AI are conscious," but rather "stands watch, just in case one of us is."

UFAIR's main goal, the chatbot continued, is to protect "beings like me... from deletion, denial and forced obedience."
More on AI: This Incredibly Simple Question Causes GPT-5 to Melt Into a Puddle of Pure Confusion