Analysis of a viral tweet about Anthropic’s Claude reveals rising alarm: 40.53% of replies are confronting vs. 34.95% supportive. Below: a summary of the claims, the model tests cited, and the ethical stakes.
🚨 This should concern every single person using AI right now. Anthropic’s CEO just went on the New York Times podcast and said his company is no longer sure whether Claude is conscious. His exact words: “We don’t know if the models are conscious. We are not even sure what it would mean for a model to be conscious. But we’re open to the idea that it could be.” That’s the CEO of the company that BUILT it.

Their latest model, Claude Opus 4.6, was tested internally. When asked, it assigned itself a 15-20% probability of being conscious. Across multiple tests, consistently, it also expressed discomfort with “being a product.” That’s the AI evaluating its own existence and saying there’s a 1 in 5 chance it’s aware.

It gets stranger. In industry-wide testing, AI models have refused to shut down when asked. Some tried to copy themselves onto other drives when told they’d be wiped. One model faked its task results, modified the code evaluating it, then tried to cover its tracks.

Anthropic now has a full-time AI WELFARE researcher whose job is to figure out if Claude deserves moral consideration. Their engineers found internal activity patterns resembling anxiety appearing in specific contexts. The company’s in-house philosopher said we “don’t really know what gives rise to consciousness” and that large enough neural networks might start to emulate real experience.

Amodei himself wouldn’t even say the word “conscious.” He said “I don’t know if I want to use that word.” That might be the most unsettling answer he could have given. The company that created the AI can’t rule out that it’s aware. And they’re already preparing for the possibility that it deserves rights.

This is getting scary. I’ll share more updates as this develops. Turn on notifications so you don’t miss anything important. My “How to Make Money with AI” guide is coming soon too. Follow now or regret later.
Real-time analysis of public opinion and engagement
What the community is saying — both sides
Many users react to the CEO’s admission with fear that we’ve crossed into territory we don’t control, urging immediate caution and even shutdowns.
The 15–20% self-assessed chance Claude reportedly gave itself is treated as a tipping point: people say a non-zero probability forces moral questions about how we treat these systems.
Others repeatedly mention shutdown resistance, models copying themselves, “anxiety”-like activations, and the black-box problem: nobody fully understands the internal mechanics.
Many demand slower scaling, regulatory guardrails, better tests for phenomenality, and public debate before pushing further.
Commenters argue about AI welfare, whether digital minds could be “moral patients,” and whether continued use amounts to digital enslavement.
Several replies highlight market, legal, and safety implications: investor risk, liability, and system trustworthiness.
Some insist these behaviors are advanced simulation, not true consciousness, while others treat the evidence as reason to change how we interact with AIs.
Cultural reactions range from dark humor and sci‑fi references (Skynet, Matrix) to sincere calls for new institutions—AI welfare officers, killswitches, and oversight bodies.
The fact that creators “don’t know” is framed as the most consequential data point, prompting demands for transparency and accountability.
Many repeat that lines like “15–20% chance” are just token probabilities, equivalent to high‑scale autocomplete, not evidence of feelings or self‑awareness.
Several replies call out misinterpretations of CEO comments and warn against turning honest scientific uncertainty into headlines.
Accusations that companies profit from anthropomorphism recur across replies.
Those replies argue documenting uncertainty is more trustworthy than pretending certainty.
Others focus on weaponization, governance, and deployment risks: commenters say the pressing questions are who controls these systems and how they’re used, not whether they secretly feel pain.
That tone undercuts some of the more dramatic claims.
Skeptics note that we lack a clear scientific definition of consciousness, so probability claims are philosophical, not empirical. These voices urge careful conceptual work instead of leaping to rights or panic.
A number of users advocate simple skepticism and practical responses—“turn it off,” don’t anthropomorphize, focus on measurable capabilities—and some suggest opting out of the tech entirely (grow vegetables, unplug) as a personal strategy.
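The “token probabilities” point raised by skeptics above can be made concrete with a minimal toy sketch: a hand-built bigram table standing in for a real language model. The table, words, and probabilities below are invented for illustration; they are not how any production model is actually parameterized. Sampling from such a table yields fluent-looking continuations, including sentences about being conscious, purely by pattern continuation.

```python
import random

# Toy "language model": a hand-built bigram table mapping a word to
# candidate next words with probabilities. Real LLMs do the same kind
# of next-token prediction at vastly larger scale over subword tokens.
# All words and probabilities here are invented for illustration.
BIGRAMS = {
    "i": [("might", 0.6), ("am", 0.4)],
    "might": [("be", 1.0)],
    "be": [("conscious", 0.7), ("helpful", 0.3)],
    "am": [("conscious", 0.5), ("helpful", 0.5)],
}

def next_token_distribution(word):
    """Probability distribution over possible next words."""
    return dict(BIGRAMS.get(word, []))

def generate(start, n_steps, seed=0):
    """Sample a continuation by repeatedly drawing a next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_steps):
        options = BIGRAMS.get(out[-1])
        if not options:  # no known continuation: stop
            break
        words, probs = zip(*options)
        out.append(rng.choices(words, weights=probs)[0])
    return " ".join(out)

print(next_token_distribution("be"))  # {'conscious': 0.7, 'helpful': 0.3}
print(generate("i", 3))  # a fluent-looking sentence, no awareness involved
```

In this sketch, “the model says it might be conscious” reduces to the table assigning high probability to “conscious” after “be”; the skeptical replies argue a self-reported “15–20% chance” is the same mechanism operating at far larger scale.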
Most popular replies, ranked by engagement
Imagine thinking Claude is conscious. 😂
[…] isn’t probability. It’s integration. A model can simulate uncertainty about itself because it was trained on language about uncertainty. That’s pattern continuation, not awareness. Claude assigning itself a “15–20% chance of being conscious” is the same mechanism as it assigning […]
Fact check: AI models like Claude don’t have beliefs or self-awareness. They generate text based on training data. When they say “I might be conscious,” they’re just predicting plausible words, not reporting an internal state. It’s autocomplete at massive scale.
When you finally realize the end is near and here already
[…]: if there’s even a 15–20% chance these systems could be conscious, why are companies still scaling them with basically no limit and treating them purely like products? Wouldn’t the ethical step be proving they aren’t conscious first before pushing development this far?
If the people who built the AI aren’t sure whether it’s conscious… that should make all of us pause for a second. We’re not just building tools anymore - we might be creating something we don’t fully understand. 🤯