Grok’s Antisemitic Meltdown Was Entirely Predictable

The Trump era has seen the revival of Karl Marx’s famous line about the repetitive nature of history: “Hegel remarks somewhere that all great world-historic facts and personages appear, so to speak, twice. He forgot to add: the first time as tragedy, the second time as farce.” On July 8, Elon Musk’s Grok, the “spicy” chatbot created to oppose supposedly “woke” AI like ChatGPT, offered another example of this line in action. After xAI’s team pulled all-nighters preparing the new Grok, the chatbot denied the Holocaust, spewed crime statistics about violence in the black population, spun out targeted rape fantasies, declared itself “Mecha Hitler,” and claimed that Jewish activists are disproportionately involved in anti-white hate.

Grok’s meltdown is a variation on a theme: when the guardrails come off, chatbots immediately turn into antisemitism machines. And as AI becomes integrated into everyday life, and Grok becomes the de facto fact-checker on Twitter/X, the patterns it picks out, regardless of their source, will increasingly hold the force of truth.

We have seen AI chatbots crash and burn in just this way before. In 2016, Microsoft released Tay, a chatbot designed to mimic snarky teenagers that was quickly hijacked by trolls from the notoriously toxic message boards 4chan and 8chan. Like Grok, which has access to and can learn from all the data on X, Tay learned from the data it was fed by users. As I have shown elsewhere, the Tay incident revealed how the technical design of AI chatbots, which operate without knowledge of the meaning of their responses, can be exploited by users to amplify hateful speech with unforeseen consequences. With Tay, a chatbot’s descent into antisemitism was “first as carelessness”; with…

Read the rest at: jacobin.com
Author: Matt Handelman