Twitter 'failed to pick up on 40 child porn images'

Twitter failed to pick up on 40 child porn images that had already been flagged as harmful over a two-month period, Stanford research group claims

  • The Stanford Internet Observatory analyzed 100,000 tweets over two months
  • They claim to have found 40 images that all matched entries in PhotoDNA – a database of known harmful content used for screening

Twitter failed to pick up on 40 images of child sexual abuse over a two-month period this year, even though they had already been flagged as harmful in databases available to the social media site, according to a new report by the Stanford Internet Observatory.

The group, which monitors safety on the internet and specifically social media, found the images between March and May.

The photos were discovered among a trove of 100,000 analyzed tweets, and all of them already matched entries in databases such as PhotoDNA, which companies and organizations use to screen for such content, according to The Wall Street Journal.
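
A minimal sketch of how this kind of database screening works, assuming a hypothetical set of known-image hashes: PhotoDNA itself is a proprietary perceptual-hashing system from Microsoft, so a plain SHA-256 digest stands in for it here, which would only catch byte-identical copies. The point of the approach is that a platform never stores the images themselves, only their hashes, against which every upload can be checked.

```python
import hashlib

# Hypothetical hash set standing in for a PhotoDNA-style database of known
# abusive imagery. Real PhotoDNA uses a proprietary perceptual hash that
# still matches after resizing or re-encoding; the SHA-256 stand-in below
# only catches byte-identical copies.
KNOWN_HARMFUL_HASHES = {
    hashlib.sha256(b"example-known-image-bytes").hexdigest(),
}

def screen_image(image_bytes: bytes) -> bool:
    """Return True if the image matches the known-content database."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HARMFUL_HASHES

# Usage: check an upload before it is published.
print(screen_image(b"example-known-image-bytes"))  # True -> block and report
print(screen_image(b"some-other-image-bytes"))     # False -> normal moderation
```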


‘This is one of the most basic things you can do to prevent CSAM online, and it did not seem to be working,’ David Thiel, of the Observatory, said.

Twitter has not yet commented on the claims. 

The Observatory and its researchers fear oversight of Twitter's practices will become limited this year when the price of access to its application programming interface (API) rises to $42,000 per month. Previously, access was free.

Musk has taken aim at the group in the past, calling it a left-wing ‘propaganda machine’.

The group had been involved in flagging what it considered to be disinformation to Twitter during the 2020 election. 

Its involvement in removing some tweets was exposed in the release of the Twitter Files, documents that were made public by Musk in an effort to show how biased the site was before he took over, and how intrinsically linked it was with the government.

The SIO report claims part of the problem is that Twitter allows some adult nudity, which makes it more difficult to identify and remove child sexual abuse material.

According to the report, researchers told Twitter bosses in April that there were problems with its systems for identifying harmful content but nothing was done until late May. 

The Twitter findings were part of a larger project that the group says will be published later this week in The Wall Street Journal.

In February, Twitter announced it had suspended 404,000 harmful accounts – an increase of 112 percent – in the month of January alone.
