The deep fakes ‘propaganda war’ and its impact on the Ukraine conflict

A cyber security expert has explained how deep fakes are being used in a “propaganda war” that could influence the course of the conflict in Ukraine. A deep fake is an attempt to impersonate somebody or present material “as if it had come from another source”, usually in video format.

Less than a month after Russia invaded Ukraine, a deep fake video shared on Twitter showed a simulation of Ukrainian President Volodymyr Zelensky appearing to call for his troops to put down their weapons.

While it was quickly ridiculed as Ukrainians spotted its flaws without much hesitation, it raised the alarm over the potential for the technology to evolve and start playing a greater role in the conflict.

Toby Lewis, Global Head of Threat Analysis, told Express.co.uk: “The classic thing we see with deep fakes is the impersonation of foreign leaders, celebrities or high-profile individuals and getting them to carry a message that isn’t true.

“The supporting technology behind this is artificial intelligence (AI), which is used to edit or modify a genuine video clip, change an audio message or create something new entirely.

“At the start of the Ukraine conflict, there was a lot more speculation about how this would be wrapped around Zelensky, given his position as a wartime leader. A lot of people were looking to see whether he would retreat or go into hiding if Ukraine became overwhelmed by the Russian invasion.

“That is why, in the early months – March, April, May and June – we saw a series of deep fake videos of Zelensky showing him talking about laying down arms, asking his troops to surrender, very much trying to push this message out.”

But a year into the invasion, the Ukrainian military has become well-equipped to deal with threats of this nature, Mr Lewis said.

He explained: “It [the deep fake] was designed to rely on poor communications amongst the Ukrainian military – that they would take this video distributed on Telegram or Twitter or Instagram as gospel. But as time has gone on, the military communications platforms of the Ukrainian military are robust enough for them to determine that these videos do not come from authoritative sources.”
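One way the “authoritative source” problem can be tackled in practice is with digital signatures: the legitimate publisher signs each video, and anything unsigned or failing verification is treated as suspect. The sketch below is a minimal illustration of that general idea – not something the article confirms the Ukrainian military uses – written in Python against the pyca/cryptography library’s Ed25519 primitives.

```python
# Sketch: answering "did this video come from an authoritative source?"
# cryptographically. Assumes the pyca/cryptography package; in a real
# system the public key would be distributed through trusted channels.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The authoritative source generates a key pair once and publishes the
# public key; the private key never leaves their infrastructure.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...raw bytes of the official video file..."
signature = private_key.sign(video_bytes)  # distributed alongside the video

# Any recipient can check the file against the published public key.
try:
    public_key.verify(signature, video_bytes)
    print("Valid: file is unmodified and was signed by the key holder.")
except InvalidSignature:
    print("Invalid: modified in transit, or not from the claimed source.")
```

A deep fake pushed out over Telegram or Twitter would carry no valid signature, which is one reason distribution channel matters as much as visual quality.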

But he warned there is still some scope for the Russians’ use of the technology to advance as it can still be “incredibly hard to determine the source of a video”.

Mr Lewis said: “You have got these intersecting circles of state-directed and state-aligned [deep fakes].

“Unless you’ve got some super secret intelligence that Vladimir Putin picked up the phone and provided orders, it is really hard to say whether a video that has come from a hacking group like Killnet, for example, was anything more than nationalist fervour – a group seemingly operating in a way that is aligned with state interests, but that is not to say it has been directed by the state with a certain agenda.

“These sorts of videos are used in a propaganda war and that is fundamentally what it boils down to. It is ‘how can we use messaging to shape the minds of individuals’ – by presenting someone in a bad light, for instance. There is clearly going to be an interest among state actors in shaping that narrative, but equally there could be relatively amateur approaches by state-aligned groups.”

Shaping the minds of individuals is a key strategy Putin has used to win public support for his invasion of Ukraine among Russians, taking advantage of technology and available media to pump out misinformation about his neighbouring country.


According to the RAND Corporation, Russian state propaganda is “rapid, continuous, and repetitive, and it lacks commitment to consistency”. It is distributed in “text, video, audio, and still imagery” formats “via the Internet, social media, satellite television, and traditional radio and television broadcasting”.

But it is not limited to Russian state actors churning out misinformation. Ultimately, there is a danger that anyone with the relevant skills and knowledge can use AI in this nefarious way.

Mr Lewis said: “These technologies aren’t super Government weapons. They are on Google and they are freely available for the public to play around with.”

However, there are currently some flaws in the deep fake technology which are easy to spot.

Mr Lewis said: “There are small details of the human character that are really hard to replicate – little twitches of the arm, mannerisms, those sorts of things which we as human beings insert somewhat randomly when we talk. Can AI truly represent that? To the people who know those individuals, they will know that that is not how that individual speaks or acts.”
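One well-documented example of such a tell: early deep fakes often failed to reproduce natural eye blinking, because the photos used for training overwhelmingly show open eyes. The sketch below – a rough heuristic, not a production detector – uses OpenCV’s bundled Haar cascades to estimate a clip’s blink rate; the 0.1 blinks-per-second cutoff is an illustrative assumption.

```python
# Minimal sketch: estimate a clip's blink rate and flag unnaturally low
# values. Assumes opencv-python; the threshold below is illustrative.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def blink_rate(video_path: str) -> float:
    """Blinks per second, counting runs of frames where a face is visible
    but no open eyes are detected (a crude proxy for a blink)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
    frames, blinks, prev_closed = 0, 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue  # nobody in frame; skip
        x, y, w, h = faces[0]
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        closed = len(eyes) == 0
        if closed and not prev_closed:
            blinks += 1  # count each closed-eye run once
        prev_closed = closed
    cap.release()
    return blinks / (frames / fps) if frames else 0.0

# People on camera blink roughly every 2-10 seconds, so a rate far below
# ~0.1 blinks/sec is a weak but useful signal worth a closer look.
# print("suspicious" if blink_rate("clip.mp4") < 0.1 else "plausible")
```

Detectors like this are brittle – newer generators have learned to blink – which is why such checks are one signal among many rather than a verdict.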

But there is the potential for deep fakes, particularly of Zelensky, to become more sophisticated. Mr Lewis said: “The more data you have feeding a machine learning model – the more videos and examples of Zelensky there are on TV – the easier it will become to replicate how he acts and talks.”
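That data-scaling point reflects how the classic open-source face-swap tools are structured: a single shared encoder learns general facial geometry from footage of both people, while a separate decoder per identity learns to reconstruct each specific face, so every additional clip of the target sharpens the swap. Below is a toy PyTorch sketch of that shared-encoder, two-decoder design, with random tensors standing in for aligned face crops; all sizes are illustrative.

```python
# Toy sketch of the shared-encoder / per-identity-decoder design used by
# classic face-swap tools. Random tensors stand in for aligned face crops.
import torch
import torch.nn as nn

class Encoder(nn.Module):  # shared: learns generic facial structure
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):  # one per identity: learns one specific face
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, z):
        return self.net(z)

enc, dec_a, dec_b = Encoder(), Decoder(), Decoder()
opt = torch.optim.Adam(
    [*enc.parameters(), *dec_a.parameters(), *dec_b.parameters()], lr=1e-3)
loss_fn = nn.MSELoss()

faces_a = torch.rand(8, 3, 64, 64)  # stand-ins for person A's face crops
faces_b = torch.rand(8, 3, 64, 64)  # more real footage -> more batches here

for step in range(100):  # each decoder learns to rebuild its own identity
    loss = (loss_fn(dec_a(enc(faces_a)), faces_a)
            + loss_fn(dec_b(enc(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap itself: encode person A's expression and pose, then render it
# through person B's decoder -- B's face wearing A's expression.
fake = dec_b(enc(faces_a))
```

The more footage of the target that feeds `faces_b`, the better that identity’s decoder gets, which is exactly the risk Mr Lewis describes for a leader who appears on television daily.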

But could a convincing deep fake ever trigger a real military response? Mr Lewis said: “I would like to think there are a series of fail-safes. What is the triggering point and what level of verification goes into it? Would a simple video by itself be enough to warrant that? Probably not.

“This comes back to the checks and balances that one would hope any decent democracy has. Yes, in the modern era, you can get deep fakes and contentious messages pumped out on social media. But I would like to hope that, even in the high tensions of war, any decision around weapons of mass destruction rests on a lot more than a tweet or WhatsApp message that someone does not like.”
