
Fake news has even invaded the war

Posted October 12, 2024, 09:28

Updated October 12, 2024, 09:28


On Sunday (local time), as the week was coming to a close, a five-second video caused a frenzy on social media platforms TikTok and X in the U.S. With the hashtag #Beirut and no further explanation, the video showed the downtown area of Lebanon's capital engulfed in flames. The night sky was filled with thick, ashen smoke, and residential apartment complexes were on the verge of collapse. Beirut was, quite literally, devastated.

Reactions were varied, but no one doubted who was behind the catastrophe. It seemed certain that it was the work of Israel, which had launched a ground operation by deploying troops in southern Lebanon on Oct. 1 and had been continuing airstrikes around Beirut since last month. The situation escalated when the video was posted on the official X account of the Council on American-Islamic Relations (CAIR), the largest Muslim organization in the U.S. with over 200,000 followers, fueling the anger of those opposing the war in the Middle East. According to CBS News, the video surpassed 10 million views within hours.

However, the next day, the "Beirut engulfed in flames" video was revealed to be fake. It had been created by a self-proclaimed "AI artist" using widely available AI production tools. Although the Israeli Air Force had indeed bombed the southern suburbs of Beirut at the time, both the actual location and the scale of the damage differed significantly from what the video showed. Even though the episode unfolded over the course of just one day, its impact was substantial. Al Jazeera, the Arab news network, reported that the false AI video nearly threw not only Lebanon but the entire Middle East into chaos.

That AI could sow confusion even in life-or-death situations like war was somewhat predictable. Concerns have been raised since OpenAI, the developer of ChatGPT, unveiled an AI video program under the project name "Sora" this past February. The sight of an extinct mammoth walking across a snowy plain, generated in an instant from just a few lines of text, was shocking in many ways. The New York Times described Sora's creations as "photorealistic" and expressed concern that it would be difficult to distinguish real from fake without warning labels.

AI video technology has made even greater strides in less than a year. On Oct. 4, Meta, the big tech company that owns Instagram and Facebook, unveiled another AI program called "Movie Gen." While previous AI tools could only generate visuals, Movie Gen goes further by creating sound. In Meta's demonstration video, as a snake slithers through the jungle, the rustling of grass can be heard. The New York Times tested a similar video and found that adding sound took less than 10 minutes. Although Meta said it would include the label "Generated by AI" on all Movie Gen videos, the NYT's investigation revealed that the label can be removed.

Let's revisit the controversial Beirut video. According to an analysis conducted by AI experts at the request of the British newspaper The Guardian, the video contained numerous flaws, such as fire spreading unnaturally between buildings. When CBS asked a CAIR spokesperson why the group had posted a video that even basic verification would have exposed as fake, the spokesperson admitted it was a clear and simple mistake but added, "The 'essence' that Israel committed crimes killing over 2,200 people in Lebanon remains unchanged." Meanwhile, many people on social media continue to share the video, still believing it to be real. The blurring of the line between reality and the virtual world is not the only problem; trust and ethics are eroding as well. Whatever that "essence" may be, the boundaries of integrity and morality are crumbling alongside it.