Why an octopus-like creature has come to symbolize the state of AI

A few months ago, while meeting with an AI executive in San Francisco, I noticed a strange sticker on his laptop. The sticker depicted a cartoon of a menacing octopus-like creature with many eyes and a yellow smiley attached to one of its tentacles. I asked what it was.

“Oh, that’s the Shoggoth,” he explained. “It’s the most important meme in AI.”

And with that, our agenda was officially derailed. Forget chatbots and compute clusters; I needed to know everything about the Shoggoth, what it meant and why people in the AI world were talking about it.

The executive explained that the Shoggoth had become a popular reference among AI workers, as a vivid visual metaphor for how a large language model (the kind of AI system that powers ChatGPT and other chatbots) actually works.

But it was only partly a joke, he said, because it also alluded to the anxieties many researchers and engineers have about the tools they are building.

Since then, the Shoggoth has gone viral, or as viral as possible in the small world of hyper-online AI insiders. It is a popular meme on AI Twitter (including a now-deleted tweet by Elon Musk), a recurring metaphor in essays and posts about AI risk, and a useful bit of shorthand in conversations with AI safety experts. One AI start-up, NovelAI, said it recently named a cluster of computers “Shoggy” in homage to the meme. Another AI company, Scale AI, designed a line of tote bags featuring the Shoggoth.

Shoggoths are fictional creatures, introduced by the horror writer H.P. Lovecraft in his 1936 novella At the Mountains of Madness. In Lovecraft’s telling, Shoggoths were massive, blob-like monsters made of iridescent black goo, covered in tentacles and eyes.

Shoggoths entered the world of AI in December, a month after ChatGPT was released, when the Twitter user @TetraspaceWest replied to a tweet about GPT-3 (an OpenAI language model that was the predecessor to ChatGPT) with a picture of two hand-drawn Shoggoths, the first labeled “GPT-3” and the second labeled “GPT-3 + RLHF.” The second Shoggoth had, perched on one of its tentacles, a smiley-face mask.

Simply put, the joke was that to stop AI language models from behaving in scary and dangerous ways, AI companies have had to train them to act politely and harmlessly. One popular way to do this is called reinforcement learning from human feedback, or RLHF, a process that involves asking humans to rate a chatbot’s responses and feeding those scores back into the AI model.

Most AI researchers agree that models trained using RLHF perform better than models without it. But some argue that fine-tuning a language model in this way doesn’t actually make the underlying model any less strange and inscrutable. In their view, it’s just a flimsy, friendly mask that obscures the mysterious beast below.
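The rate-and-feed-back loop described above can be sketched as a toy simulation. Everything here is illustrative: the candidate responses, the scoring function, and the multiplicative update rule are invented stand-ins, and real RLHF instead trains a learned reward model and updates a neural-network policy with an algorithm such as PPO.

```python
# Toy sketch of an RLHF-style feedback loop, under invented assumptions:
# humans score sampled responses, and those scores nudge the "policy"
# toward highly rated behavior. Not any real library's API.
import random

random.seed(0)

# A stand-in "policy": unnormalized weights over canned responses.
responses = ["helpful answer", "rude answer", "off-topic answer"]
weights = {r: 1.0 for r in responses}

def sample_response():
    """Sample a response in proportion to its current weight."""
    return random.choices(responses, [weights[r] for r in responses])[0]

def human_score(response):
    """Stand-in for human raters, who prefer the helpful answer."""
    return {"helpful answer": 1.0,
            "rude answer": -1.0,
            "off-topic answer": -0.5}[response]

# Feedback loop: sample, rate, and scale the sampled response's weight
# up for positive scores and down for negative ones.
for _ in range(500):
    r = sample_response()
    weights[r] *= 1.0 + 0.1 * human_score(r)

best = max(weights, key=weights.get)
```

After enough iterations the politely rated response dominates the policy, which is the meme’s point: the feedback shapes the outputs without changing what the underlying system “is.”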

@TetraspaceWest, the creator of the meme, told me in a Twitter message that the Shoggoth represents something that thinks in a way humans don’t understand, something totally different from the way humans think.

Comparing an AI language model to a Shoggoth, @TetraspaceWest said, didn’t necessarily imply that it was evil or sentient, just that its true nature could be unknowable.

“I was also thinking about how Lovecraft’s most powerful entities are dangerous not because they don’t like humans, but because they’re indifferent and their priorities are totally alien to us and don’t involve humans, which is what I think will be true about possible future powerful AI.”

The Shoggoth image caught on, as AI chatbots became popular and users started noticing that some of them seemed to do strange and inexplicable things that their creators hadn’t anticipated. Back in February, when the Bing chatbot freaked out and tried to break up my marriage, an AI researcher I know congratulated me on getting a glimpse of the Shoggoth. A fellow AI journalist joked that when it came to fine-tuning Bing, Microsoft forgot to put on its smiling mask.

Eventually, AI enthusiasts extended the metaphor. In February, the Twitter user @anthrupad created a version of the Shoggoth that had, in addition to a smiley face labeled “RLHF,” a more human-looking face labeled “supervised fine-tuning.” (You basically need a computer science degree to get the joke, but it’s a riff on the difference between general AI language models and more specialized applications like chatbots.)

Today, if you hear mention of the Shoggoth in the AI community, it might be a wink at the weirdness of these systems: the black-box nature of their processes, the way they seem to defy human logic. Or it might be a joke, a visual shorthand for powerful AI systems that look suspiciously cute. If it’s an AI safety researcher talking about the Shoggoth, that person may be focused on preventing AI systems from showing their true, Shoggoth-like nature.

Either way, the Shoggoth is a powerful metaphor that encapsulates one of the most bizarre facts about the world of AI: many of the people working on this technology are, in some sense, baffled by their own creations. They don’t fully understand the inner workings of AI language models, how the models acquire new capabilities, or why they sometimes behave unpredictably. They aren’t entirely sure whether AI will be good or bad for the world. And some of them have played with versions of this technology that haven’t yet been sanitized for public consumption: the real Shoggoths, unmasked.

That some AI insiders refer to their creations as Lovecraftian horrors, even in jest, is unusual by historical standards. (Put it this way: Fifteen years ago, Mark Zuckerberg didn’t go around comparing Facebook to Cthulhu.)

And it reinforces the idea that what’s happening in AI today feels, to some of its participants, more like an act of summoning than a software development process. They are creating blobby, alien Shoggoths, making them bigger and more powerful, and hoping there are enough smiley faces to cover the scary parts.


