How will the PLA use GenAI?

Xinhua virtual news anchor (Image credit: Xinhua)

A research institute run by China’s Ministry of Science and Technology revealed earlier this year that Chinese organisations had launched 79 large language models (LLMs) since 2020.

The scale of this R&D effort comes as no surprise, given Beijing’s stated ambition to make China the world’s leading AI power by 2030. And given China’s longstanding innovation strategy for the People’s Liberation Army (PLA), it should be taken as a given that the PLA will be experimenting with generative AI.

Under its policy of “intelligentisation” of both the PLA and warfare itself, artificial intelligence and unmanned systems have become an increasing focus over the past four years. Many analysts believe this to be a holistic strategy that aims to use AI and automation across all aspects of warfare, including information, propaganda, cyber and psychological operations. Indeed, Chinese researchers writing about intelligentised warfare have often referred to a future ability to affect the enemy’s human cognition.

If influencing the minds of its adversaries is one of China’s ambitions, then generative AI must certainly be a focus for development and experimentation. GenAI’s potential for deepfake audio and video alone makes it a key technology. We have not yet seen deepfakes used at scale by adversaries, but the potential is there, and so are the early commercial equivalents.

UK-based startup Synthesia allows customers to produce HD and UHD videos by choosing from a library of synthetic human avatars, created from real-life models, with voice-over audio generated via text-to-speech. The platform also goes one step further: for an additional fee, customers can create video avatars in their own likeness, voiced by clones of their own voices, and then personalise these videos at scale. For example, a marketing team could create personalised videos from its CEO for each of its top 1,000 customers.

Last year, deepfake disinformation videos were spotted for the first time being distributed by pro-China bot accounts on Facebook and Twitter. The videos showed fake news bulletins, on a fake television station, delivered by synthetic human avatars. According to U.S. media reports, the messages were part of a Chinese state-aligned disinformation campaign.

Chinese software firms have their own synthetic video technology. China’s state news agency Xinhua trialled its first virtual television newsreader as far back as 2018. Meanwhile, creating virtual human avatars for advertising and business communications is already a fast-growing business in China.

We can expect China to develop deepfakes and other generative AI creations for information, propaganda and even psychological warfare. But as one of the world’s top two investors in AI, we should expect much more from it too.

by Carrington Malin