‘Orwellian’ tech firm used by UK police illegally stored MILLIONS of images of Britons: AI facial recognition company is fined £7.5M after collecting global database of 20billion social media and online images without consent
- ‘Orwellian’ tech firm Clearview AI is fined £7.5million by UK’s privacy watchdog
- ICO said US-based company illegally collected 20billion images of people
- It is likely that millions of these images will be of British residents
- A number of UK police forces ‘have used live facial recognition technology’
A controversial AI facial recognition company whose ‘Orwellian spying technology’ has been tried by police forces in Britain has been fined more than £7.5million by the UK’s privacy watchdog after illegally collecting ‘millions’ of images of Britons from the internet.
The Information Commissioner’s Office (ICO) said that Clearview AI has collected more than 20 billion images of people’s faces globally to create an international online database for facial recognition.
It has ordered the controversial firm to stop obtaining and using the personal data of UK residents, and to delete the data that it has already collected.
Scotland Yard, the National Crime Agency, Northamptonshire Police, North Yorkshire Police, Suffolk Constabulary and Surrey Police are among the forces alleged to have used or trialled Clearview AI’s technology as of February 2020, according to documents reviewed by BuzzFeed News.
Privacy campaigners hailed the ICO’s ‘important’ enforcement action, but warned ‘it may be difficult to enforce’ and called on MPs to ‘impose an immediate ban on excessive face surveillance’.
Clearview AI customers can upload an image of a person to the company’s app, which checks it against the database and returns a list of images similar to the photo provided.
The ICO said the company has broken UK data protection laws by failing to use information of UK residents in a fair and transparent way, failing to have a lawful reason for collecting that information, and failing to have a process in place to stop the data being retained indefinitely.
How does it work?
Facial recognition software works by matching real-time images to a previous photograph of a person.
Each face has approximately 80 unique nodal points across the eyes, nose, cheeks and mouth which distinguish one person from another.
A digital video camera measures the distance between various points on the human face, such as the width of the nose, depth of the eye sockets, distance between the eyes and shape of the jawline.
This produces a unique numerical code that can then be matched against similar photos from across the internet – including from social media giants such as Facebook, Twitter and YouTube. (A simplified sketch of this matching step follows this box.)
Clearview AI fills its database by scouring sources like Facebook, YouTube, Venmo and millions of other sites, according to the company.
Has it been used in the UK before?
Yes.
The Met has used the technology several times since 2016, including at Notting Hill Carnival in 2016 and 2017, Remembrance Day in 2017, and Port of Hull docks, assisting Humberside Police, in 2018.
The force has also undertaken several trials in and around London.
Why is it controversial?
Campaigners say it breaches human rights.
Liberty says scanning and storing biometric data ‘as we go about our lives is a gross violation of privacy’.
Big Brother Watch says ‘the notion… of turning citizens into walking ID cards is chilling’.
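To make the matching step described in the box above concrete, here is a minimal illustrative sketch in Python. It is not Clearview AI’s actual code: the measurement names, the sample values and the plain Euclidean distance are assumptions for illustration, and real systems derive their vectors from a neural network rather than hand-picked geometry.

```python
# Illustrative sketch only, not Clearview AI's code. Measurement names and
# values are hypothetical; production systems derive vectors from a neural
# network rather than hand-picked facial geometry.
import math

FEATURES = ["eye_distance", "nose_width", "eye_socket_depth", "jaw_width"]

def face_vector(measurements):
    """Turn a set of facial measurements into a fixed-order numeric vector."""
    return [measurements[name] for name in FEATURES]

def face_distance(a, b):
    """Euclidean distance between two face vectors: smaller = more alike."""
    return math.dist(a, b)

probe = face_vector({"eye_distance": 6.2, "nose_width": 3.4,
                     "eye_socket_depth": 2.1, "jaw_width": 11.8})
candidate = face_vector({"eye_distance": 6.1, "nose_width": 3.5,
                         "eye_socket_depth": 2.0, "jaw_width": 11.9})

# A distance near zero suggests the two photos show the same person.
print(f"distance = {face_distance(probe, candidate):.2f}")
```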
The fine and enforcement order comes after a joint investigation with the ICO’s Australian counterpart.
Information Commissioner John Edwards said: ‘Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20 billion images.
‘The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable.
‘That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.
‘People expect that their personal information will be respected, regardless of where in the world their data is being used. That is why global companies need international enforcement.
‘Working with colleagues around the world helped us take this action and protect people from such intrusive activity.
‘This international co-operation is essential to protect people’s privacy rights in 2022. That means working with regulators in other countries, as we did in this case with our Australian colleagues.
‘And it means working with regulators in Europe, which is why I am meeting them in Brussels this week so we can collaborate to tackle global privacy harms.’
Big Brother Watch director Silkie Carlo told MailOnline: ‘This important enforcement action by the ICO should be another nail in the coffin for facial recognition in the UK.
‘Clearview AI has hoarded multiple photos of each and every one of us from the internet and made it available to the highest bidder.
‘The use of facial recognition on billions of photos will end anonymity as we know it. Already, several police forces, banking firms and a university in the UK have used this Orwellian spying tech.
‘Facial recognition used as a mass surveillance tool like this has a serious, irreversible impact on all of our privacy.
‘The ICO’s order for Clearview AI to delete all UK images is extremely welcome, but it may be difficult to enforce. Parliament must now take action on facial recognition and impose an immediate ban on excessive face surveillance.’
MailOnline has contacted Clearview AI for comment.
In February 2020, the NCA said: ‘The NCA deploys numerous specialist capabilities to track down online offenders who cause serious harm to members of the public, but for operational reasons we do not routinely confirm or deny the use of specific investigative tools or techniques.’
Clearview AI says it will soon have 100 BILLION photos in its database to ensure ‘almost everyone in the world will be identifiable’ and wants to expand beyond law enforcement
A controversial AI company has announced it aims to put an image of nearly every human face in its facial recognition database, making it possible for ‘almost everyone in the world [to] be identifiable.’
In its latest report to investors in December, facial recognition firm Clearview AI said it is collecting 100 billion photos of human faces for the unprecedented campaign, all of which will be stored in its dedicated database.
The collection of images – approximately 14 photos for each of the 7 billion people on the planet, scraped from social media and other sources – would vastly expand the company’s surveillance system, already the most elaborate of its kind.
The American company’s technology has already been used by myriad law enforcement and government agencies around the world, helping police make thousands of arrests in various criminal investigations.
A North Yorkshire Police spokesperson said: ‘North Yorkshire Police has never used Clearview AI in a live environment but used its image search facility, as part of a trial, to help identify online offenders who cause serious harm to the public and to safeguard vulnerable victims of crime whose images are already online.
‘As soon as we became aware of the data breach at Clearview AI last year, we terminated the trial and took appropriate steps to request that any data relating to North Yorkshire Police was removed from their systems.
‘No arrests were made through the use of Clearview.’
A Suffolk Constabulary spokesperson said: ‘The constabulary has no imminent plans to implement such technology.
‘However, we remain open-minded to the use of technology to support policing activities and will review the outcomes of any trials conducted before making a final decision on viability.’
A Surrey Police spokesperson said: ‘Surrey Police has not procured the services of Clearview AI but we have used the technology on a small number of occasions on a trial basis.’
MailOnline has contacted the Metropolitan Police for further information. In February 2020, the force appears to have declined to comment when approached by BuzzFeed News.
European countries including the UK, France, Italy, Greece and Austria have all condemned Clearview AI’s method of extracting information from public websites, saying it violates privacy policies.
In March 2020, Clearview AI was sued by the American Civil Liberties Union, which contended the company illegally stockpiled images of three billion people scraped from internet sites without their knowledge or permission.
Clearview AI was founded in 2016 by Hoan Ton-That, an Australian tech entrepreneur and one-time model, and Richard Schwartz, an aide to Rudy Giuliani when he was mayor of New York.
It is backed financially by Peter Thiel, a venture capitalist who co-founded PayPal and was an early investor in Facebook.
Mr Ton-That describes his company as ‘creating the next generation of image search technology’, and in January 2020 the New York Times reported that Clearview AI had assembled a database of three billion images, culled from social media sites.
The paper published an exposé of the company in which Ton-That described how he had come up with a ‘state-of-the-art neural net’ to convert all the images into mathematical formulas, or vectors, based on facial geometry – taking measurements such as how far apart a person’s eyes are.
Clearview AI created a directory of the images, so that when a user uploads a photo of a face into its system, it converts the face into a vector.
The app then shows all the scraped photos stored in that vector’s ‘neighborhood’, along with the links to the sites from which those images came.
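To illustrate that ‘neighborhood’ lookup, here is a hedged Python sketch: it ranks stored face vectors by distance to the probe and returns the nearest photos with the links to their source sites. The vectors, URLs and the linear scan are hypothetical simplifications; over a database of billions of images, a real system would use an approximate nearest-neighbour index rather than a full scan.

```python
# Hedged sketch of the 'neighborhood' lookup described above, not Clearview
# AI's implementation. Vectors and URLs are hypothetical toy data.
import math

# Toy database: (face vector, source the photo was scraped from).
DATABASE = [
    ([6.1, 3.5, 2.0, 11.9], "https://example.com/profile/alice"),
    ([7.0, 2.9, 2.4, 10.2], "https://example.com/profile/bob"),
    ([6.3, 3.3, 2.2, 11.7], "https://example.com/photo/123"),
]

def neighbours(probe, k=2):
    """Return the k stored entries whose vectors lie closest to the probe,
    nearest first: the scraped photos in the probe's 'neighborhood'."""
    ranked = sorted(DATABASE, key=lambda entry: math.dist(probe, entry[0]))
    return ranked[:k]

for vector, url in neighbours([6.2, 3.4, 2.1, 11.8]):
    print(url)  # link back to the site each candidate photo came from
```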
Amid the backlash from the Times article, Clearview AI insisted that it had created a valuable policing tool, which it said was not available to the public.
‘Clearview exists to help law enforcement agencies solve the toughest cases, and our technology comes with strict guidelines and safeguards to ensure investigators use it for its intended purpose only,’ the company said.
Clearview AI insisted the app had ‘built-in safeguards to ensure these trained professionals only use it for its intended purpose’.
China’s watching YOU: Beijing-made CCTV cameras can recognise faces, eavesdrop on conversation and judge a person’s mood. Worst of all? There are tens of thousands of them lining Britain’s streets, writes ROSS CLARK
By ROSS CLARK for the DAILY MAIL
What if UK streets were plastered with Russian-made CCTV cameras, many employing sophisticated technology such as facial-recognition software — and virtually all hooked up to the internet?
Imagine their manufacturers — able to access them remotely — had been ordered by the Kremlin to make all the data recorded available to it, with the result that the FSB (the Russian secret police) as well as the military had the opportunity to spy on our streets, citizens, police stations, universities and hospitals.
Perhaps the cameras were being used to monitor the comings and goings at government departments, too, where ministers make vital decisions about, say, supplying weaponry to Ukraine. The Russian state could also be tracking dissidents and other opponents of the Ukraine war around our streets. In-built microphones would allow conversations to be monitored. Fortunately, Russia doesn’t have much of an electronics industry.
But China does. And while we are not engaged in armed conflict with China, it is deeply worrying how surveillance equipment designed and made in a country run by a dictatorship with an appalling human rights record has been allowed to embed itself in our security networks.
For anyone who sighed with relief in 2020 when Boris Johnson made his welcome but belated decision to ban China’s telecoms giant Huawei from further participation in constructing the UK’s 5G network, I’m afraid to say the threat has not gone away.
Last month, Fraser Sampson, the Biometrics and Surveillance Camera Commissioner, wrote to Cabinet Minister Michael Gove to warn him about the dominance of Chinese CCTV equipment in Britain.
He said he had ‘become increasingly concerned at the security risks presented by some state-controlled surveillance systems covering our public spaces’.
Two Chinese firms have become huge players in our CCTV market: Hikvision, which has revenues of £7.5 billion, and Dahua, whose revenues are £3 billion. While both are private companies, each has major shareholders with connections to the Chinese Communist Party.
Yet security concerns don’t seem to have been in the minds of the Government departments, police forces and councils who have ordered Hikvision and Dahua cameras by the hundreds.
Many have advanced features, even if they are not always used: microphones, the capacity for facial and gender recognition and distinguishing between people of different racial groups.
Some cameras can analyse behaviour — detecting, for example, if a fight might be breaking out. Others can even judge moods, track via heat-sensing and learn patterns of behaviour, so as to highlight any unusual activity.
The campaign group Big Brother Watch sent 4,500 freedom of information (FoI) requests to public bodies asking whether they had Hikvision or Dahua cameras employed on their premises.
Of the 1,300 which responded, 800 confirmed that they did, including nearly three-quarters of councils, 60 per cent of schools, half of NHS trusts and universities and nearly a third of police forces.
Just one Government ministry, the Department for Work and Pensions, admitted to having CCTV cameras made by the companies. We know, however, that the Department of Health has Hikvision cameras because it was on one that the then Health Secretary Matt Hancock was caught in an embrace in his office with his lover last summer.
(It ought to be stressed that Hancock is not believed to have been caught out by a data leak from a CCTV system, but by someone photographing a monitor. Nor is there any evidence that anyone at Hikvision, Dahua or any Chinese authority has wrongfully accessed images from CCTV cameras installed in Britain.)
Regardless, his successor, Sajid Javid, has since banished Hikvision cameras from the DoH. This could prove to be a wise move: security flaws have been detected in Chinese-made cameras which could be used to access images and data remotely and without the permission of their owners.
Last year, the Italian state broadcaster Rai revealed that data collected from a Hikvision camera installed on its premises appeared to be being sent to a server in China — apparently due to a ‘glitch’. Rai also revealed that 100 cameras at Rome’s main airport had tried to connect with unknown computers multiple times.
Computer experts in the U.S. have already hacked into Hikvision cameras and posted their live feeds online — allowing anyone to see into people’s homes without the owners of the cameras being aware. Conor Healy of U.S. computer security website IPVM directed me to a website featuring a map of several hundred Hikvision cameras in the U.S. and UK.
Hover over the map and you see a live feed of car parks, streets, shops, gardens and, in at least one instance, what appears to be someone’s home office.
‘All cameras have vulnerabilities,’ he says. ‘What makes some Chinese cameras different is that there are more security flaws. Chinese law requires that companies report vulnerabilities to the government within two days. It is inevitable that the Chinese government could have the opportunity to make use of them.’
Even laying aside the security issues, do we want our public authorities buying surveillance equipment from companies which supply the cameras to suppress the freedoms of Chinese Uyghurs? Hikvision and Dahua cameras have been spotted in detention camps in Xinjiang province by BBC reporters among others.
In his letter to Gove, Sampson said he had asked Hikvision whether it accepted that human rights abuses were taking place, and to clarify its involvement in the camps. ‘More than eight months later they have yet to answer those questions,’ he added.
The U.S. has already banned Hikvision and Dahua from selling surveillance equipment in the country and last July the Commons Foreign Affairs Committee demanded that the UK Government do the same.
It isn’t just public bodies, either, who are using the cameras. Big Brother Watch reported that they are rife in the private sector, too, with 164,000 Hikvision cameras and 14,000 Dahua cameras used in shops and other spaces open to the public.
The naivety with which we have allowed Chinese-made security cameras to become embedded in Britain mirrors that which nearly allowed Huawei into our 5G network.
At first the Government brushed aside concerns about using Huawei, in spite of warnings by the U.S. (and our intelligence partners in Australia, Canada and New Zealand) that Chinese-made equipment was a potential security risk. But it changed its mind two years ago. All existing Huawei equipment must be removed by 2027.
The Government has realised, too, the potential security risk of allowing China’s state nuclear group, CGN, to become involved in the project to build a new nuclear power station at Sizewell in Suffolk. It is now looking at going ahead with the project without Chinese involvement.
It would also be considered the height of foolishness for any government to order military equipment from China. Imagine, in any future conflict with China, requiring spare parts from the enemy!
Moreover, military equipment is known to be engineered with ‘kill switches’ that could prevent it from being used in the event of the country of manufacture going to war with the buyer.
French-made Exocet missiles sold to Argentina are believed to have had such devices, although, as the Defence Select Committee revealed this month, the Mitterrand government does not appear to have responded to UK requests to share the technology required to render the missiles inoperable, leading to fatal attacks on British ships during the Falklands War.
In an era when cyber warfare will become increasingly important, we have to appreciate that security equipment used in civilian settings carries a risk if it is ordered from countries with potentially hostile governments, which might see access to our CCTV systems as valuable in a war.
Yes, foreign investment in the UK is vital and our markets should be open to world trade. But we must ask whether permitting a fleet of potential spying machines into our public institutions is a price worth paying.