Once relegated to the margins of society, conspiracy theories have in recent months moved into the mainstream, seemingly fuelled in part by the coronavirus pandemic. Despite their potential to spread panic and even cause harm, debunked conspiracy theories are easily spread among like-minded people on social media.
But University of California (UC) researchers have now developed an artificial intelligence tool able to distinguish between an emerging narrative and an unfounded conspiracy theory.
Timothy Tangherlini, a UC professor of Danish Literature and Culture, wrote: “It turns out that it’s possible to distinguish between conspiracy theories and true conspiracies by using machine learning tools to graph the elements and connections of a narrative.
“These tools could form the basis of an early warning system to alert authorities to online narratives that pose a threat in the real world.”
UC’s culture analytics group, co-led by Vwani Roychowdhury, a UC professor of Electrical and Computer Engineering, has designed an automated approach to determining when conversations on social media show the classic hallmarks of conspiracy theories.
These methods were successfully tested on recent notorious examples, including Pizzagate and the conspiracy theories surrounding the coronavirus pandemic and vaccination, and plans are already in place to study QAnon next.
Because almost all of these conspiracy theories unfold online, the researchers were able to trace how a series of often disjointed rumours and story fragments coalesced into a more comprehensive narrative.
Professor Tangherlini wrote: “For our work, Pizzagate presented the perfect subject.”
Pizzagate was the entirely unsubstantiated conspiracy theory linking prominent politicians to satanic paedophile trafficking, which culminated in a man entering a Washington pizza parlour armed with an AR-15 rifle in 2016.
Using machine learning, the researchers analysed 17,498 posts made between April 2016 and February 2018 on the Reddit and 4chan forums where Pizzagate was discussed.
Machine learning algorithms parse swathes of data to work out what categories of things the data contains, and then identify which category each particular element belongs to.
The technique treats each post as a fragment of a hidden story and attempts to unpick the narrative by identifying the key people, places and events, separating the major elements from the minor ones, and mapping how their connections form the complete narrative.
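In broad strokes, this resembles building a graph whose nodes are the actors and objects mentioned in the posts and whose edges record how often they appear together. The following minimal sketch illustrates that idea with the networkx library; the posts and actor names are invented, and the simple co-mention matching stands in for the far more sophisticated entity and relationship extraction the UC team actually used.

```python
# Illustrative sketch only, not the researchers' pipeline: link actors that
# are mentioned together in the same post to form a narrative graph.
from itertools import combinations
import networkx as nx

# Hypothetical posts and a fixed actor list, so the sketch is self-contained.
posts = [
    "Person A emailed Person B about the restaurant",
    "Person B owns the restaurant downtown",
    "Person C claims the emails mention a secret code",
]
actors = ["Person A", "Person B", "Person C", "restaurant", "emails"]

graph = nx.Graph()
for post in posts:
    mentioned = [a for a in actors if a.lower() in post.lower()]
    # Each pair of co-mentioned actors gets an edge; repeated co-mentions
    # increase the edge weight, marking stronger connections in the narrative.
    for u, v in combinations(mentioned, 2):
        weight = graph.get_edge_data(u, v, {}).get("weight", 0) + 1
        graph.add_edge(u, v, weight=weight)

print(graph.edges(data=True))
```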
The fabricated Pizzagate narrative was then compared with an actual conspiracy, Bridgegate, a political scandal involving Republican Governor Chris Christie’s administration.
Two distinguishing features of a conspiracy theory’s narrative framework became clear when the two collections were compared.
First, while the narrative graph for Bridgegate took approximately seven years to develop, Pizzagate’s appeared fully formed and stable within a month.
Second, Bridgegate’s narrative network held together when elements were removed, whereas Pizzagate’s quickly fell apart.
Professor Tangherlini wrote: “When we removed the people, places, things and relationships that came directly from the interpretations of the WikiLeaks emails, the graph fell apart into what in reality were the unconnected domains of politics, casual dining, the private lives of the Podestas and the odd world of satanism.”
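The robustness test described here can be pictured as deleting the nodes contributed by one source and checking whether what remains stays connected. Below is a hedged sketch of that idea, again assuming networkx and using invented node names loosely based on the domains named in the quote; a conspiracy theory’s graph splinters into disconnected pieces, while an actual conspiracy’s graph tends to hold together.

```python
# Illustrative sketch: remove a linchpin node and count how many disconnected
# components the narrative graph breaks into. Node names are invented.
import networkx as nx

graph = nx.Graph()
graph.add_edges_from([
    ("politics", "emails"), ("emails", "casual dining"),
    ("emails", "satanism"), ("casual dining", "satanism"),
    ("politics", "campaign"),
])

print("components before:", nx.number_connected_components(graph))  # prints 1

# Delete the element that came from a single interpreted source.
graph.remove_node("emails")
print("components after:", nx.number_connected_components(graph))   # prints 2
```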
However, the researchers admit their work raises “clear ethical challenges”, as their methods could be used to exacerbate the problem, for instance by generating additional posts for a conspiracy theory discussion.
More concerning still, such a tool could be used to seed an entirely new conspiracy theory.
Fortunately, they add that an “early warning system” capable of tracking emerging conspiracy theories could one day help authorities react to the real-world actions these narratives trigger.
Professor Tangherlini added: “Perhaps with such a system in place, the arresting officer in the Pizzagate case would not have been baffled by the gunman’s response when asked why he’d shown up at a pizza parlour armed with an AR-15 rifle.”