Google rolls out new policy to protect minors using its products

Google is blocking ads from targeting minors based on age, gender and interests in a bid to protect users under 18, days after Apple said it will scan iPhones for child abuse images

  • Google is restricting ads from targeting minors in a bid to protect their privacy
  • It is adding a safeguard to Search for those under 13 that blocks explicit results from appearing
  • Location history is automatically turned off for those under 18 globally
  • YouTube is also adding defaults that turn off autoplay and make uploaded content private, but users can opt to make their content viewable to the public

Google announced Tuesday that it is blocking ads from targeting children based on their age, gender or interests in a new effort to protect the privacy of users under 18 years old.

The new restrictions also turn off its ‘location history’ feature for users under 18 globally and include new default settings in YouTube to protect minors from explicit videos.

The new protection policy also allows children, or their parents or guardians, to request the removal of the children’s images from Google Images results – a feature that is set to roll out in the coming weeks.

Google’s move comes as major online platforms face long-running scrutiny from lawmakers and regulators over their sites’ impact on the safety, privacy and wellbeing of younger users.

Facebook’s Instagram recently launched a similar policy against targeting users under 18, and Apple is using its technology to combat child abuse by scanning iPhone users’ photos for such material.

Mindy Brooks, Google’s general manager for kids and families, wrote in a blog post: ‘Some countries are implementing regulations in this area, and as we comply with these regulations, we’re looking at ways to develop consistent product experiences and user controls for kids and teens globally.’


Google’s new policy includes some of its most popular products among children and teens, specifically YouTube. 

In a few weeks, the video platform will change the default upload setting to its most private option for teens aged 13 through 17, where content is seen only by the user and people they choose.

However, users will still have the final say and can make their content viewable by the public.

Auto-play will be off by default for kids under 18, and YouTube will turn on break reminders.


Online platforms’ approach to younger users has been in the spotlight in recent months as US lawmakers and attorneys general slammed Facebook’s plans to create a kids-focused version of Instagram. 

Search will also protect young users from being exposed to explicit results when children under the age of 13 are signed in through Family Link, which launched in 2017.

Family Link is an app that allows parents to manage and monitor how much time their children spend online, and to see which websites and apps they are using.

Facebook recently announced changes to ad targeting of minors under the age of 18, though its advertisers can still target these younger users based on age, gender or location.

Instagram also announced in late July that it is making under-16s’ accounts private by default, as part of its drive to make the app ‘safe and private’ for young users.

Prior to the change, new Instagram users’ accounts were set to public, meaning anyone could see a user’s profile and posts on Instagram.

Apple made headlines last week when it announced a plan to monitor iPhones for child abuse materials. 


The Cupertino, California-based company says it is using an algorithm to scan photos for explicit images, which has sparked criticism among privacy advocates who fear the technology could be exploited by the government.

However, Apple released a new Frequently Asked Questions document Monday that explains how the technology will work.

The system will not scan photo albums, Apple clarified, but rather look for matches based on a database of ‘hashes’ – a type of digital fingerprint – of known child sexual abuse material (CSAM) images provided by child safety organizations.
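A rough sketch of the matching idea is below, using an exact cryptographic hash (SHA-256) for simplicity – Apple’s actual system relies on a perceptual hash called NeuralHash plus additional cryptographic protocols, and the database entry shown here is a made-up placeholder:

```python
import hashlib

# Hypothetical database of 'digital fingerprints' of known CSAM images,
# as would be supplied by child safety organizations. This entry is a
# placeholder, not a real hash from any such database.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest serving as the image's digital fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_material(image_bytes: bytes) -> bool:
    """Flag a photo only when its fingerprint matches a database entry,
    rather than inspecting the photo's contents directly."""
    return fingerprint(image_bytes) in KNOWN_HASHES

# A photo with an unknown fingerprint is not flagged.
print(matches_known_material(b"example image bytes"))  # False
```

Note that an exact hash, as in this sketch, only catches byte-for-byte copies; a perceptual hash like NeuralHash is designed so that resized or lightly edited versions of a known image still produce a matching fingerprint.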

While privacy advocates worry about ‘false positives’, Apple boasted that ‘the likelihood that the system would incorrectly flag any given account is less than one in one trillion per year.’
