Three years and two months ago to the day, I wrote a story about memes. If you haven’t read it yet, go read that first. Now that Large Language Models have burst onto the scene, it’s a good time to revisit what I wrote half a year before GPT-3 was first released. What came true? What did we miss?

“These people figured things out about memes you wouldn’t believe. You’d be dazzled by what they’d learned about ideas, marketing, psychology, marketing psychology, neurons, biochemistry, vulnerabilities in the human psyche. They knew it all. They were psychological surgeons. Memes were their scalpel.”

“I found pockets of the internet where they’d unlocked the secrets of contagious ideas. Back then, their message boards and chats were only partially hidden.”

“Despite their willingness to share the low-hanging fruit, they didn’t just tell anyone about their most powerful discoveries about humans and how we tick.”

“Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.” (OpenAI, GPT-4 Technical Report, 2023)

“They’d run experiments to see what kind of memes would capture public attention the most effectively.”

  • Ok, no, I think I botched this one

“Every super-contagious meme they investigated ended up doing something really weird to our brains. As crazy as it sounds, being exposed to the memes actually changed the way our brains think.”

“It’s accelerating.”

  • debatable

“One of these days, someone is going to release a new meme: powerful, catchy, so contagiously viral it will infect all of us. When the ultimate mind-virus inevitably fades from our consciousness, everything that we truly care to think or feel is going to disappear with it.”

  • don’t know where to begin with this one