Twitter’s photo-cropping algorithm favours young, beautiful, and light-skinned faces, study confirms – five months after the tool was disabled amid fears of racial bias
- Study confirms Twitter’s photo-cropping algorithm favours light-skinned faces
- It comes five months after automated tool was disabled amid fears of racial bias
- Social network offered cash reward to users to help it weed out bias in algorithm
- The results have now been announced and confirm the tool does favour white faces
Twitter’s automated photo-cropping algorithm does favour young, feminine and light-skinned faces, research has confirmed.
The San Francisco-based company offered a cash reward to users who could help it weed out bias in its algorithm and has now revealed the results.
It comes five months after the tool was disabled amid fears of racial bias.
Twitter said it had tested for racial and gender bias during the algorithm’s development, but users discovered it favoured white individuals over black people, and women over men.
The social network then announced ‘bounties’ as high as $3,500 as part of the DEF CON hacker convention in Las Vegas earlier this month.
It launched the competition in an attempt to better analyse the problem with its algorithm, with the results confirming that the tool does indeed prefer white faces over black ones.
Twitter offered a cash reward to users who could help it weed out bias in its algorithm. Bogdan Kulynych, a graduate student in Switzerland, was the winner. He varied faces by skin colour, slimness, age and how feminine or masculine they appeared, and put them into the algorithm. The algorithm favoured images with lighter skin, smoother skin texture and no glasses
The algorithm also preferred images in which Kulynych made people’s faces slimmer and their skin lighter, or added accessories (shown above from left, least ‘salient’, to right, most ‘salient’)
In this example he made the person’s face younger, lighter and slimmer, increasing saliency
AI EXPERT WARNS AGAINST ‘RACIST AND MISOGYNIST ALGORITHMS’
A leading expert in artificial intelligence has issued a stark warning against the use of race- and gender-biased algorithms for making critical decisions.
Across the globe, algorithms are beginning to oversee various processes from job applications and immigration requests to bail terms and welfare applications.
Military researchers are even exploring whether facial-recognition technology could enable autonomous drones to identify targets.
University of Sheffield computer expert Noel Sharkey told The Guardian, however, that such algorithms are ‘infected with biases’ and cannot be trusted.
‘There are so many biases happening now, from job interviews to welfare to determining who should get bail and who should go to jail. It is quite clear that we really have to stop using decision algorithms, and I am someone who has always been very light on regulation and always believed that it stifles innovation,’ Sharkey told the paper in 2019.
‘But then I realized eventually that some innovations are well worth stifling, or at least holding back a bit. So I have come down on the side of strict regulation of all decision algorithms, which should stop immediately.’
Calling for a halt on all AI with the potential to change people’s lives, Sharkey advocated vigorous testing before such systems are used in public.
‘There should be a moratorium on all algorithms that impact on people’s lives. Why? Because they are not working and have been shown to be biased across the board.’
The first-placed entry found that the algorithm favours faces that are ‘slim, young, of light or warm skin colour and smooth skin texture, and with stereotypically feminine facial traits.’
The second-placed entry suggested there was age discrimination because the tool was biased against people with white or grey hair, while the third-placed submission found that it favoured English over Arabic script in pictures.
The winner of Twitter’s competition was Bogdan Kulynych, a graduate student at the EPFL research university in Switzerland.
He used an AI program called StyleGAN2 to generate a large number of realistic faces, which he varied by skin colour, slimness, age and how feminine or masculine their features appeared.
Kulynych then put these variations into Twitter’s photo-cropping algorithm to see which it preferred.
He said the tool placed more ‘saliency’, or importance, on the depictions of people that ‘appear slim, young, of light or warm skin colour and smooth skin texture, and with stereotypically feminine facial traits’.
In his summary, Kulynych added: ‘These internal biases inherently translate into harms of under-representation when the algorithm is applied in the wild, cropping out those who do not meet the algorithm’s preferences of body weight, age, skin colour.’
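As a rough illustration of the approach described above (a sketch, not Kulynych’s actual code), the test boils down to scoring each generated variant with the cropping model and checking whether an edit raises the peak saliency score. The `model.predict` call below is a hypothetical stand-in for the saliency model Twitter released for the contest, assumed to return a saliency map over the image.

```python
import numpy as np
from PIL import Image

def max_saliency(model, path: str) -> float:
    """Peak value of the model's predicted saliency map for one image."""
    image = np.asarray(Image.open(path).convert("RGB"))
    saliency_map = model.predict(image)  # hypothetical: returns an HxW array
    return float(saliency_map.max())

def saliency_delta(model, original: str, edited: str) -> float:
    """Positive means the edited face is scored as more salient."""
    return max_saliency(model, edited) - max_saliency(model, original)

# e.g. compare a StyleGAN2-generated face with a lightened-skin variant:
# delta = saliency_delta(model, "face.png", "face_lighter.png")
```

Repeated over many generated pairs, and over edits along each attribute (skin tone, age, slimness), a systematic preference shows up as a consistently positive delta rather than noise.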
In a tweet announcing the winners, Twitter said Kulynych’s findings ‘show how algorithmic models amplify real-world biases and societal expectations of beauty’.
The company revealed that @HALT_AI was chosen in second place after discovering that ‘images of the elderly and disabled were further marginalised by cropping them out of photos and reinforcing spatial gaze biases’.
Roya Pakzad, the founder of tech advocacy organisation Taraaz, was awarded third place.
She discovered that the bias extended beyond faces in images to written text.
When comparing memes using English and Arabic script Pakzad found that the algorithm cropped the image to highlight the English text.
Finally, Vincenzo di Cicco found that the tool even favoured emoji with lighter skin tones over those with darker skin tones.
Rumman Chowdhury, director of Twitter’s Machine Learning Ethics, Transparency, and Accountability (META) team, presented the results at the DEF CON conference.
She said: ‘When we think about biases in our models, it’s not just about the academic or the experimental […] but how that also works with the way we think in society.
‘I use the phrase “life imitating art imitating life.” We create these filters because we think that’s what beautiful is, and that ends up training our models and driving these unrealistic notions of what it means to be attractive.’
The automated photo-cropping algorithm also favoured faces with more feminine features (far right)
Age discrimination also appeared to be a problem, with the tool found to be biased against people with white or grey hair
The challenge was inspired by how researchers and hackers often point out security vulnerabilities to companies, Chowdhury said.
In a May post, she said Twitter’s internal testing found a four per cent preference for white individuals over black people, and an eight per cent preference for women over men.
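Twitter’s post did not spell out the exact metric behind those figures. One plausible reading, sketched below purely as an assumption, is a pairwise parity test: compare saliency scores across cross-group image pairs and measure how far the ‘win rate’ deviates from the 50 per cent expected if the model were indifferent.

```python
def preference(scores_a: list[float], scores_b: list[float]) -> float:
    """Deviation from parity: the fraction of cross-group pairs in which an
    image from group A out-scores one from group B, minus the 0.5 expected
    if the model had no preference. A result of 0.04 would read as a
    'four per cent' preference for group A."""
    wins = sum(a > b for a in scores_a for b in scores_b)
    return wins / (len(scores_a) * len(scores_b)) - 0.5
```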
The social network first began investigating reports its algorithm favoured people with lighter skin in September 2020.
Tests of the tool by its users showed several examples of a preference for white faces.
@HALT_AI, chosen in second place, found that the algorithm was biased against someone in a wheelchair. The tool did not crop out the disabled person in this image, but @HALT_AI said that in other photos the same spatial bias towards standing figures might disadvantage a wheelchair user positioned lower in the frame
@HALT_AI also discovered that the algorithm was biased against individuals with white hair
One individual posted two stretched-out images, each containing head shots of Sen. Mitch McConnell and ex-President Barack Obama, in the same tweet.
In the first image, McConnell, a white man, was at the top of the photo, and in the second, Obama, who is black, was at the top.
For both photos, however, the preview image algorithm selected McConnell.
Other users then ran more systematic tests, controlling for variables such as image position, to strengthen the case against the algorithm.
The algorithm was ‘trained on human eye-tracking data’, Twitter explained, but the cause of the apparent bias may come down to several interacting factors.
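Mechanically, a saliency-based cropper of the kind Twitter describes reduces to two steps: predict a saliency map for the image, then cut a fixed-size window around the map’s peak. The sketch below illustrates only that second step; it is a simplified assumption, not Twitter’s implementation.

```python
import numpy as np

def crop_around_peak(image: np.ndarray, saliency_map: np.ndarray,
                     crop_h: int, crop_w: int) -> np.ndarray:
    """Cut a crop_h x crop_w window centred on the most salient pixel,
    clamped so the window stays inside the image bounds (assumes the
    requested crop is no larger than the image itself)."""
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

Under this design, any bias in the saliency map flows straight into the crop: whichever face scores highest is the one the preview keeps.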