@N8Programs
what model ghostwrote this? or did you painstakingly mimic the horrifying n-grams of the original yourself.
A take on a Google paper advocating transformer-only models: no recurrence, full parallelism, scalable encoder-decoder stacks, and faster, simpler AI pipelines.
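For context on the post's core claim, here is a minimal NumPy sketch of scaled dot-product attention, the mechanism the paper is built on. It is an illustration under my own assumptions, not code from the post or the paper: every position attends to every other position in a single matrix product, which is what "no recurrence, full parallelism" refers to.

```python
# Illustrative sketch only: scaled dot-product attention over a whole sequence.
# No recurrence: all positions are processed in one batched matrix product.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays. Returns (seq_len, d) attended values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # all pairwise similarities at once
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of value vectors

# Toy example: 4 tokens with 8-dimensional embeddings, self-attention (Q = K = V).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (4, 8)
```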
Real-time analysis of public opinion and engagement
Community concerns and opposing viewpoints
Repliers keep noting the paper is from 2017, calling the post clickbait and a reheated “new” claim.
Many read it as an algorithm/engagement test and a way to surface bot accounts, citing deliberate rage-bait.
The thread leans into jokes, memes, and sarcasm, with plenty of digs at the LinkedIn-style writing and a few “delete this” reactions.
Some defend RNN/LSTM (with nods to linear transformers, edge use cases, and “images need CNNs”), plus tongue-in-cheek hot takes like “LSTMs FTW.”
Others link proofs that it’s not new.
Meta-parodies compare it to “discovering” Turing, Markov chains, or Galileo to mock the framing, mostly played for laughs.
I hate the linkedin style of writing.
internet trolling at its finest
Community members who agree with this perspective
Replies call the work “revolutionary,” “mind-blowing,” and a “game changer,” with some claiming it could change AI forever or even brush up against AGI.
Many underline that “Attention Is All You Need” became the field’s backbone, praising how its simplicity unlocks scale and the shift from sequential bottlenecks to globally aware computation.
Thoughtful questions ask whether progress comes from refining attention or entirely new paradigms, with practical nods to positional encodings and architecture tuning.
People anticipate faster training, richer models, cleaner design—and speculate about ChatGPT integration and claims like “language is about to be solved.”
High-energy support—buying the newsletter, posting to LinkedIn, 10/10, “banger”—with praise that the breakdown is on to something big.
Calls to read the paper (and more links), sprinkled with memes and playful lines (“refactor my life into a Transformer stack,” “taoism operator”), plus rare sarcasm that doesn’t dent the surging excitement.
Wait till you read this paper
Wow! You're certainly on to something here. This isn't just intriguing—it's potentially revolutionary.
you are absolutely right!