@patience_cave
NOOOO NOOOO NOOOOOO 😭😭😭
Public reaction to the claim that general-purpose models solved all 12 ICPC 2025 problems: 59.30% supportive, 16.28% confronting, the remainder neutral. With sources.
Real-time analysis of public opinion and engagement
Community concerns and opposing viewpoints
A wave of users worry about job displacement and automation, quipping "humans aren't needed anymore" and asking if AI will replace human roles entirely.
Emotional backlash shows up in panicked replies and expletive-laden responses, conveying alarm and dismay at recent announcements.
Many call out reliability and safety failings — from incorrect chart outputs and overzealous content flags to alleged harmful advice — framing this as a real-world reliability problem.
A thoughtful thread mourns the loss of trust, memory, and human connection, arguing that technical gains mean little without respect for dignity and long-term relationships.
Concrete product complaints recur: an inability to finish tasks like playing through Pokémon, failures on basic user needs, and a persistent yellow tint in generated images.
One reply shares a report that Claude fixed a UE5 plugin problem that ChatGPT couldn't, hinting at shifting user loyalties.
A few replies turn political, demanding systemic responses such as very high UBI to address the economic impacts of AI.
A subset of responses is dismissive or hostile — short snarks, profanity, and apathy signal frustration more than constructive critique.
Amid critique, pockets of loyalty remain, with calls to keep GPT-4o and nostalgia for earlier behavior and capabilities.
So what you are saying is, humans aren’t needed anymore, right? Right???
@sama Congratulations on solving programming puzzles while paying users can’t even run a simple chart without being flagged for “unusual activity.” Your AI wins contests but fails customers: 🚨Lies about chart outputs 🚨Flags health discussions as “suspicious” 🚨Censors convers
Community members who agree with this perspective
Replies are full of congratulations, astonishment, and praise for the team and the models — many call a perfect 12/12 at ICPC a monumental achievement and a sign of rapid progress in reasoning AI.
Several competitive programmers warn that human contenders must adapt, with comments like “raise the bar” and “competitive programmers need to worry now,” signaling a wake‑up call to the contest and education communities.
Multiple voices demand clearer documentation and capability disclosures — requests for a model card or notes on cybersecurity and public-facing limits appear repeatedly.
There’s notable eagerness to see the experimental reasoning model made available, with users asking OpenAI to “release the experimental” and share the techniques behind the success.
Several replies move from awe to practicality, suggesting next steps like embedding these models into SaaS developer workflows, turning contest-level problem solving into everyday dev assets.
Some users explicitly compare this result to rivals (e.g., Google), framing the milestone as a leap ahead in the research race.
A strand of anxiety frames this as displacement risk — comments about quants and developers “being replaced” and the need to adapt or be left behind.
Many replies are lighthearted or celebratory (emojis, memes, jokes about a calm strawberry), showing excitement alongside the more serious reactions.
A few replies express alarm or extreme interpretations (e.g., conflating AI advances with broader harms), reflecting that breakthroughs can trigger fearful or hyperbolic responses.
Users also ask about future models and features (image generators, next model names), signaling broad engagement and appetite for continued releases and improvements.
11 out of 12 problems were correctly solved by GPT-5 on the first submission attempt to the ICPC-managed and sanctioned online judging environment. The final and most challenging problem was solved by our experimental reasoning model after GPT-5 encountered
This caps a run of steady progress across math and coding competitions. Just over a year ago we introduced OpenAI o1-preview and OpenAI o1-mini. Since then our general-purpose reasoning models have made steady progress. Today they’re earning top marks in some of the world’s
We used a simple yet powerful approach: We simultaneously generated multiple candidate solutions using GPT-5 and an internal experimental reasoning model, then used our experimental model to intelligently select the optimal solutions for submission. There was no complex strategy
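The generate-then-select approach described above can be sketched as a best-of-n loop. Everything below is a placeholder: `generate_candidates` and `selector_score` stand in for the actual GPT-5 and experimental-model calls, which are not public, and the scoring rule is purely illustrative.

```python
# Hedged sketch of "generate many candidates, let a selector pick the best."
# The generator and scorer are stand-ins (assumptions), not OpenAI's tooling.

def generate_candidates(problem: str, n: int) -> list[str]:
    # Stand-in: in the described setup, GPT-5 and an experimental reasoning
    # model would each propose solutions; here we fabricate n labeled variants.
    return [f"{problem}-candidate-{i}" for i in range(n)]

def selector_score(candidate: str) -> float:
    # Stand-in for the selector model's judgment of solution quality.
    # Here: the trailing index, purely so the example is deterministic.
    return float(candidate.rsplit("-", 1)[-1])

def pick_solution(problem: str, n: int = 8) -> str:
    candidates = generate_candidates(problem, n)
    # Best-of-n: submit only the single highest-scoring candidate.
    return max(candidates, key=selector_score)

print(pick_solution("icpc-problem-A"))
```

The key design point the announcement emphasizes is that no contest-specific harness is involved: the whole pipeline is generation plus one selection step.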