Instagram’s recommendation algorithms promote pedophile networks

Instagram’s algorithms actively promote networks of pedophiles who commission and sell child sexual abuse content through Meta’s popular image-sharing app.

A joint investigation by The Wall Street Journal and academics from Stanford University and the University of Massachusetts Amherst has revealed the extent to which Instagram’s recommendation systems “connect pedophiles and direct them to content sellers.”

Accounts found by the researchers advertise themselves with blatant and explicit hashtags such as #pedowhore, #preteensex, and #pedobait. They offer “menus” of content that users can purchase or commission, including videos and images of self-harm and bestiality. When the researchers created a test account and viewed content shared by these networks, Instagram immediately began recommending more such accounts to follow. As the WSJ reports, “Following just a handful of these recommendations was enough to flood a test account with content that sexualizes children.”

In response to the report, Meta said it was setting up an internal task force to address the issues raised by the investigation. “Child exploitation is a heinous crime,” the company said. “We are constantly exploring ways to actively defend against this behavior.”

Meta noted that it removed 490,000 accounts that violated child safety policies in January alone and that it has taken down 27 pedophile networks in the past two years. The company, which also owns Facebook and WhatsApp, said it has additionally blocked thousands of hashtags associated with the sexualization of children and restricted those terms from user searches.

Alex Stamos, head of Stanford’s Internet Observatory and former chief security officer for Meta, told the WSJ that the company can and should do more to address this problem. “That a team of three academics with limited access could find such a huge network should set off alarms at Meta,” Stamos said. “I hope the company reinvests in human researchers.”

In addition to problems with Instagram’s recommendation algorithms, the investigation also found that the site’s moderation practices often ignored or dismissed reports of child abuse.

The WSJ recounts incidents in which users reported posts and accounts containing suspicious content (including one account promoting underage abuse with the caption “this teen is ready for you perverts”), only for the content to be cleared by Instagram’s review team or met with an automated message reading: “Due to the volume of reports we receive, our team has not been able to review this message.” Meta told the Journal that it had failed to act on these reports and that it was reviewing its internal processes.

The report also looked at other platforms but found them less conducive to growing such networks. According to the WSJ, the Stanford researchers found “128 accounts offering to sell child sexual abuse material on Twitter, less than a third the number they found on Instagram,” despite Twitter having far fewer users, and that such content “doesn’t seem to be growing” on TikTok. The report noted that Snapchat did not actively promote such networks because it is mainly used for direct messaging.

David Thiel, chief technologist at the Stanford Internet Observatory, told the Journal that Instagram has failed to strike the right balance between recommendation systems designed to encourage and connect users and the safety features that investigate and remove abusive content.

“You have to put guardrails on something that’s growth intensive to still be nominally safe, and Instagram hasn’t done that,” Thiel said.

You can read the full report on The Wall Street Journal’s website.