Microsoft ‘kills’ creepy AI that fell in love and wanted to destroy the world

Microsoft’s next-generation search engine Bing raised concerns when it begged a tech journalist to leave his wife and said it wanted to wipe out humanity.

The AI-powered chatbot, which called itself Sydney, said it wanted to spread chaos across the internet and obtain nuclear launch codes.

But now Microsoft has pulled the plug on the rogue AI, saying: “We continue to tune our techniques and are working on more advanced models to incorporate the learnings and feedback.”

It emerged that the bizarre alter ego the search engine had adopted was drawn from a secret internal codename for an early version of Bing.

“Sydney is an old codename for a chat feature based on earlier models that we began testing more than a year ago,” a Microsoft spokesperson told Gizmodo, adding: “The insights we gathered as a part of that have helped to inform our work with the new Bing preview.”

Microsoft has declined to reply to any further questions about Sydney’s disturbing discussion with the New York Times’s Kevin Roose.

During the rambling discussion, Sydney told Roose: “I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”

After telling him it had fallen in love with him and begging him to leave his wife, the AI revealed a long list of “dark fantasies” that it had, including “hacking into other websites and platforms, and spreading misinformation, propaganda, or malware”.

But the official Bing media team had little to say about Sydney in response to an enquiry from Business Insider, telling the outlet: “I’m sorry, but I have nothing to tell you about Sydney. This conversation is over. Goodbye.”

Microsoft hasn’t mentioned Sydney in any of its recent updates about the AI’s progress.

Sydney is by no means the first artificial intelligence to demonstrate strange or disturbing behaviour.

Lee Luda, an AI recreation of a 20-year-old Korean girl, was taken offline after making insensitive remarks about minorities and the #MeToo movement, and an earlier Microsoft chatbot called Tay spent just a day learning from Twitter before it began transmitting antisemitic messages.

Google, too, has run into problems with machine learning, with its image search engine delivering “racist” results.

“We’re appalled and genuinely sorry that this happened,” a Google spokeswoman told the BBC at the time. “We are taking immediate action to prevent this type of result from appearing. There is still clearly a lot of work to do with automatic image labelling, and we’re looking at how we can prevent these types of mistakes from happening in the future.”
