It’s Sunday afternoon, and I’m staring at a picture of a tanned woman in a swimsuit that doesn’t quite cover her nipples. This is an unexpected twist in my quest to track down and understand the daily challenges faced by the people who moderate spam on Twitter. But I’m not focusing on the picture. Instead, my eye is drawn to the comment below, where another user has written, “Follow for follow.”
The comment has been sent to me by John, not his real name, who works on Twitter’s secretive “spam project.” According to him, if I were part of that team, my job would be to look at comments like this one and decide whether this account genuinely wants the woman in the swimsuit to follow them back, or whether they are trying to manipulate Twitter’s system and should be labeled as spam.
In the legal battle between Elon Musk and Twitter, the term “spam bots” has become key. Yet how Twitter finds spam and how it differentiates bots from humans remains vague. Musk has claimed that even Twitter CEO Parag Agrawal couldn’t explain the criteria used to define a bot. And the billionaire himself resorted to using a controversial free tool to detect and estimate the number of fake accounts on the platform. But legal documents allude to a mysterious group of moderators who are making calls on what is and isn’t spam on Twitter on a daily basis. To better understand the platform’s spam problem, I’ve made it my mission to track that team down.
If bots are apparently easy to spot on Twitter, the people the social network employs to shut them down are not. In all, finding Twitter’s secretive bot squad takes me two months and dozens of interviews with people living in six countries across three continents. Along the way, I read hundreds of pages of legal documents, run into a series of dead ends, and get stonewalled by more people than I care to count.
I start my search for Twitter’s bot team in France. I know that four NGOs there have recently taken Twitter to court to try to force the company to reveal how it polices hate speech, its budget for moderation, and the number of moderators on its French team. In January, a court agreed with the NGOs and ordered Twitter to hand over the documents.
In August, I get in touch, optimistically assuming Twitter must have turned over the documents by now, which would cut my search short. But when I reach the NGOs involved in the case, they claim the platform has ignored the court. “To this day, we have seen nothing from Twitter,” says David Malazoué, president of SOS Homophobia, one of the four groups involved in the trial. There is a feeling in France that Twitter has not been dedicating enough resources to effectively moderate hate speech posted by both human and spam accounts—but the NGOs can’t prove it. Most of Twitter’s European operations are run out of the Irish capital, Dublin, which is beyond France’s reach, Malazoué explains. “So we’re kind of stuck.” Twitter declines to comment on the case.