Instagram’s algorithm ‘connects paedophiles’ flogging sick pictures of children

Instagram's recommendation algorithm connects a "vast" network of paedophiles flogging sick child abuse images, according to a Wall Street Journal exposé.

The WSJ worked with researchers from Stanford University and the University of Massachusetts Amherst in the US to investigate child sex abuse images on the social media site.

"Instagram doesn’t merely host these activities. Its algorithms promote them," the WSJ reported.


"Instagram connects pedophiles and guides them to content sellers via recommendation systems that excel at linking those who share niche interests."

The investigation found accounts advertising the content with paedophile-themed hashtags.

Test accounts were used to view a page associated with the content, then "were immediately hit with ‘suggested for you’ recommendations of purported child-sex-content sellers and buyers, as well as accounts linking to off-platform content trading sites".

The report added: "Following just a handful of these recommendations was enough to flood a test account with content that sexualizes children."

Selling accounts were found to offer "menus" for sick content including self-harm images and bestiality. Children were also advertised as available for in-person meet-ups, the report stated.

The Stanford Internet Observatory research team found 405 sellers of what they called "self-generated" child sex abuse material – accounts purportedly run by children themselves.

Researchers cited data showing that 112 of those accounts collectively had 22,000 unique followers.

The WSJ report also stated that the selling accounts "generally don’t publish it openly", instead linking to other websites where the content is traded.

A spokesperson for Meta – the company that owns Instagram and Facebook – said it set up an internal task force to investigate the issues raised by the report.

The spokesperson acknowledged that in some instances the company had failed to act on reports of child sexual abuse, citing a software error which Meta says has since been fixed.

"Child exploitation is a horrific crime. We work aggressively to fight it on and off our platforms, and to support law enforcement in its efforts to arrest and prosecute the criminals behind it," the firm said.

"Predators constantly change their tactics in their pursuit to harm children, and that’s why we have strict policies and technology to prevent them from finding or interacting with teens on our apps and hire specialist teams who focus on understanding their evolving behaviors so we can eliminate abusive networks."

Meta said it took down 490,000 accounts in January alone for violating child-safety policies, and "dismantled 27 abusive networks" between 2020 and 2022.


Meta said that in the fourth quarter of 2022 its technology removed more than 34 million pieces of child sexual exploitation content from Facebook and Instagram, and claimed 98% of this content was removed before users reported it.

Meta also said it had now blocked thousands of hashtags associated with the sexualisation of children.

The WSJ investigation also looked at other social media platforms. Snapchat and TikTok were found not to facilitate networks of paedophiles.

On Twitter, researchers found 128 accounts offering to sell child sex abuse material. The report stated that Twitter removed accounts more quickly and did not recommend the content to the same degree as Instagram did.

If you or somebody you know has been affected by this story, contact Victim Support for free, confidential advice on 08 08 16 89 111 or visit their website, www.victimsupport.org.uk.

