Could you spot artificially generated news? How AI feeds 'fake news'

If you have spent a few minutes on TikTok or X in the last few days, chances are you have come across disturbing reports of a serious fire inside the famous Louvre Museum in Paris. Alternatively, you may have seen photographs of the even more beloved Eiffel Tower, again strictly in flames. Although those posts included pictures and videos, both stories are untrue: they are examples of the so-called 'fake news' that has infested the vast world of the Internet for years now.

But if these stories actually present untrue information, fabricated to attract attention and sow concern, how can the accompanying images look so real?

Artificial intelligence makes headlines every day, whether it is image-generating software that makes disrespectful use of other people's art or viral videos of politicians pronouncing words they never remotely thought of; the power of this technology is objectively undeniable. We are now confronted with the considerable image-generation and manipulation capabilities of artificial intelligence: a single, carefully descriptive prompt is enough to achieve verisimilitude. After that, it takes little to spread fake news: a post on one social network, another on a different platform, a few keywords to attract attention, and that's it. The story writes itself. Humans are fuelled by curiosity and surprise; being the first to discover a piece of news inevitably leads to its dissemination, word of mouth works without any external input, and if the information is truly shocking, it spreads at unimaginable speed. The concept of 'going viral', in fact, revolves around this tendency.

A question arises: how much of the news we come across on a daily basis, whether positive or negative, is completely false?

Fortunately, the current situation is still more than manageable. Although it may raise considerable concern for the future, there is no need to panic at the moment. On average, an artificially generated news story is debunked within a few days; the longest time span documented so far is just over two weeks, and that was a single isolated case in March 2023.

The news stories mentioned here tend to report implausible information, or facts that can be disproved with a simple Internet search. A story such as a fire in a building of enormous cultural value like the Louvre, or in a historical monument like the Eiffel Tower, would certainly be published by various media outlets and would travel around the world in under ten minutes. A timely search following the 'discovery' of such news usually leads to Reddit, more precisely to the subreddit r/Midjourney, and to the original post of the image used as proof. Midjourney, in fact, is currently the artificial intelligence software most widely used to generate plausible photographs; it goes without saying, therefore, that results obtained through it are often disseminated as alleged evidence.

The 'fake news' in question usually features stories linked to the world of politics and famous figures: whether it is the Pope or the British Royal Family, no one is spared. During the first months of 2023, the Pontiff became the protagonist of a series of articles concerning his clothing: a particularly realistic photograph showing him wearing a jacket very similar to those produced by the famous brand Balenciaga went viral. The photo was debunked the same day it gained attention; thanks to its generally innocent and far from harmful nature, however, both online newspapers and television news programmes reported on it in the following days.

Not long after, other particularly plausible photographs began to circulate: they portrayed Prince Harry and his brother William during the coronation of King Charles III, held in May. The two appeared to be immortalised in a series of brotherly embraces, with serene smiles and seemingly warm attitudes towards each other. It took twelve days to disprove this series of photographs.

Politics, of course, always seems to be at the centre of these events. In the first half of 2023, a video featuring the Democratic former US Secretary of State Hillary Clinton went viral. In it, the diplomat appeared to openly endorse the presidential candidacy of Republican Florida Governor Ron DeSantis. The verisimilitude of the video was reinforced by its overlay graphics: the MSNBC logo and the caption 'Hillary Clinton endorses DeSantis'. A spokesperson for the channel stated that the clip was not part of any segment they had filmed, and a closer review of the video highlighted the errors made by the artificial intelligence during generation, showing that the movement of the mouth did not match the words spoken and that the diplomat's voice was oddly monotone.

Remember the accident involving the Titan, the submersible operated by the company OceanGate? The tragedy, which occurred shortly before the summer of 2023, became fodder for artificial intelligence. Within days of the vessel's disappearance in the Atlantic Ocean, images of its supposed debris began circulating on X, usually accompanied by a shot of shoes on the seabed. Within three days the photos were shown to be false; the accompanying shot was genuine, but not even remotely related to the OceanGate affair.

Misinformation has always existed. Unclear facts, misrepresentations and lies have always been part of the journalistic field; this is nothing new, and it is almost inevitable when it comes to the dissemination of information and knowledge. In the face of recent developments in technology and the ever-increasing spread of artificial intelligence, it is therefore crucial to act as one always has: pay more attention to what one reads and hears, and train personal judgement and common sense. It is never wise to trust something without researching it yourself.

About the Author

Yako

Columnist (He/Them)

Content creator for cosplay, gaming and animation. With a degree in foreign languages and a great passion for Oriental culture, he writes about copyright to protect the work of artists and young minds. A cosplayer since 2015, Yako is an advocate of gender identity and of developing one's creativity through personal inclinations, be it role-playing, cosplay or writing.