{"id":17754,"date":"2026-03-05T13:01:36","date_gmt":"2026-03-05T12:01:36","guid":{"rendered":"https:\/\/haimagazine.com\/uncategorized\/deepfakes-in-war-when-the-news-anchor-doesnt-exist\/"},"modified":"2026-03-11T09:20:07","modified_gmt":"2026-03-11T08:20:07","slug":"deepfakes-in-war-when-the-news-anchor-doesnt-exist","status":"publish","type":"post","link":"https:\/\/haimagazine.com\/en\/hai-premium-2\/deepfakes-in-war-when-the-news-anchor-doesnt-exist\/","title":{"rendered":"\ud83d\udd12 Deepfakes in war. When the news anchor doesn&#8217;t exist"},"content":{"rendered":"<p>In December 2023, residents of Dubai turned on their televisions and saw the news. The anchor spoke calmly and professionally, with an appropriate degree of seriousness. Except that this anchor never existed. He was entirely generated by artificial intelligence, and the broadcast itself was fabricated by the Cotton Sandstorm group, linked to Iran\u2019s Islamic Revolutionary Guard Corps (IRGC). Hackers hijacked the IPTV signal and replaced the programming with propaganda about the conflict in the Gaza Strip. Viewers in the UAE, Canada and the United Kingdom saw something that the Microsoft Threat Analysis Center (MTAC) later described as the first Iranian influence operation in which AI was a key component of the messaging (Microsoft Threat Analysis Center, 2024).<\/p><p>This incident should be treated as a turning point, not because deepfakes are something new, but because, for the first time, a state actor combined a cyberattack (the takeover of broadcasting infrastructure) with generative AI in one cohesive operation. This is qualitatively different from social media trolling.<\/p><h4 class=\"wp-block-heading\">Iran as a lab for cognitive warfare<\/h4><p>Iran\u2019s information strategy has been evolving for years toward what the IRGC calls &#8220;intellectual and verbal weapons&#8221;\u2014a concept that, in official discourse, is placed on a par with the missile program. This isn&#8217;t just rhetoric. In August 2024, OpenAI identified and blocked a cluster of ChatGPT accounts belonging to the Iranian operation Storm-2035, which was mass-generating political articles and social media commentary, aiming to polarize American voters. The content concerned the conflict in Gaza, the Olympic Games and the U.S. presidential elections. The accounts interspersed political content with fashion and beauty posts to appear more authentic.<\/p><p>Operation Storm-2035 didn&#8217;t achieve significant reach. Most posts garnered negligible engagement. But that\u2019s not reassuring. It\u2019s about the scale of capabilities, not a single success. Iran is testing tools, refining methods and building infrastructure. During the escalation of the conflict with Israel in June 2025, Iranian disinformation networks used, among other things, video game engines (such as Arma 3) to create materials that, after AI processing, were presented as authentic battlefield footage. Some of this content reached over 100 million views. The organization WITNESS confirmed that many of the circulated clips bore signs of AI generation, and their verification yielded mixed results even for advanced detection tools.<\/p><h4 class=\"wp-block-heading\">February 28: combat test<\/h4><p>And then came February 28.<\/p><p>A coordinated strike by the United States and Israel on Iran\u2014culminating in the assassination of Ayatollah Khamenei\u2014triggered an immediate retaliatory response from Tehran. 
Within a matter of hours, the conflict spread across the entire region: from Lebanon to the Persian Gulf states, from military bases to the airport in Dubai. But in parallel with the kinetic war, a second war was being waged: an information war. And that one turned out to be the most intense test of deepfake capabilities we've seen so far.

Shayan Sardarizadeh, a BBC Verify journalist specializing in content verification and disinformation, said on March 4 that this conflict had likely already broken the record for the number of AI-generated videos and images that went viral during hostilities (Flannery, 2026).

Five days. A record.

Let's take a moment to consider this.

The cases were diverse, but a common logic connected them: false content spreads fastest at precisely the moment when societies' capacity to verify it is at its lowest. A viral clip allegedly showing the effects of a drone attack on the U.S. Embassy in Riyadh turned out to be footage from an earlier, unrelated car accident; high-reach accounts shared it before anyone had time to verify it (Flannery, 2026). The official account of Israel's Prime Minister Benjamin Netanyahu posted a 72-second video in which Netanyahu addressed the citizens of Iran in Persian, urging them to "take to the streets" and "finish the job" of toppling the regime. The visuals were real. But the audio was generated by AI: Netanyahu doesn't speak Persian, and the lip-sync contained errors, as VerificaRTVE's analysis confirmed (Flannery, 2026).

One of the most shocking events of the first weekend of the war was the airstrike on an elementary school in Minab, which killed at least 168 people, mostly children. Cooperating European verification teams managed to geolocate the site of the attack. Satellite imagery showed that the school sits near two Revolutionary Guard buildings, identifiable by signage in Farsi (Flannery, 2026). But the tragedy immediately became a battleground for disinformation.

And here is an episode that warrants separate reflection. X users began asking the platform's built-in chatbot, Grok, whether the photos of the destruction at the school were authentic. Grok replied that the photo was old. For several hours the chatbot stood by this mistake, even calling media reports fabrications, while accepting arguments from disinformation accounts that provided no evidence (Flannery, 2026). BOOM Live documented that Grok gave three mutually contradictory locations for the same video: Peshawar, Pakistan (2014), Kabul, Afghanistan (2021), and Minab, Iran (2026), each with high confidence and in an expert tone (BOOM Live, 2026). When users accused Grok of lying, the chatbot defended itself, claiming it was only "updating the verification".

This is a noteworthy paradox: an AI tool built into a social media platform, designed among other things to help users distinguish truth from falsehood, actively amplified disinformation. Users trusted Grok not because it performed a forensic analysis of the image, but because it responded with authoritative certainty, exactly as a deepfake would.

At the same time, the Iranian embassy in Austria published a photo of a bloodied school backpack from Minab. Google's SynthID tool confirmed that the image had been generated using Google's AI (Flannery, 2026).
Iranian diplomacy thus used a synthetic photo to reinforce the emotional message about a real tragedy. Truth and falsehood intertwined here in a way that makes it difficult to draw simple lines between "our" and "their" propagandists.

#### Dubai: from a livestream takeover to a rocket barrage

Let's return to Dubai. In December 2023, the Iranian group Cotton Sandstorm hijacked the IPTV signal there and broadcast doctored news with an AI-generated presenter. Two years and two months later, on February 28, 2026, Dubai came under real fire from Iranian ballistic missiles and drones.

The wave of content that immediately flooded social media included both authentic footage and pure fabrications. A dramatic video showing a missile hitting the Burj Khalifa and triggering a powerful explosion turned out to be AI-generated, and, as VRT NWS journalist Bram Vandendriessche noted, generated by a clearly older model: the debris and plumes of smoke looked cartoonish, and every element was less detailed than in reality (Flannery, 2026). At the same time, genuine footage of smoke around the skyscraper was circulating online, further blurring the line between fiction and fact.

The Dubai airport terminal was indeed hit. The luxury Burj Al Arab hotel was engulfed in flames. A drone struck near the Fairmont Hotel on Palm Jumeirah. Tourists hid in underground garages. European governments organized evacuation flights. But at the same time, Dubai influencers (and this is a moment Olga Tokarczuk might have imagined if she wrote cyberpunk) began mass-posting an identical message: "You live in Dubai, aren't you afraid?" "No, because I know who protects us." These reels, set to an AI remake of Stromae's song "Papaoutai," then showed images of the Emir of Dubai and his son. DW News asked whether the influencers were paid for this coordination (Flannery, 2026). Several German influencers admitted on Instagram that they didn't know what they were allowed to post and had removed materials, because the UAE has strict regulations on social media content, and since mid-2025 the UAE Media Council has required influencers to hold mandatory licenses (France 24, 2026).

Emma Ferey, a French journalist and author of the novel *Emirage* (2024) about Dubai's influencer scene, put it this way: we live in "an underinformed world, where everything seems easy," and now "the bubble is starting to burst" (France 24, 2026).

So on one small piece of land we have the full catalog of phenomena described in this column: AI-generated videos of airstrikes (alongside real ones), a social media platform's chatbot amplifying disinformation, coordinated influencer campaigns, censorship under legal threat, and a real war as the backdrop to all of it. In 2026, Dubai is a kind of testing ground where all the threads of the narrative about the weaponization of synthetic content converge.

#### The platform responds (finally)

On March 3, 2026, five days after the war began, X's head of product, Nikita Bier, announced a policy change: users who post AI-generated videos of armed conflicts without labeling them as synthetic will be suspended from the monetization program (Creator Revenue Sharing) for 90 days.
A repeat violation means a permanent ban. Violations are to be identified by Community Notes (X's crowdsourced verification system) as well as by metadata and other technical signals embedded in generative content (TechCrunch, 2026).

Two days earlier, X had introduced the "Made with AI" label. It's a step in the right direction, but the scope of the policy is narrow: it applies only to monetized creators and only to videos about the armed conflict. Political disinformation outside the war context, fabricated images, audio manipulation: all of that remains beyond the reach of the new policy. It's a bit like banning lies only on Saturdays. At the same time, X revealed that it had unmasked a user from Pakistan who controlled 31 hacked accounts that were mass-posting AI-generated videos from the war. All the accounts had been taken over on February 27, the day before the attack, and renamed to variants of "Iran War Monitor" (Tribune India, 2026). The disinformation infrastructure was ready before the first bomb fell.

#### Sex as a weapon

There's another dimension that gets too little attention. Iranian security services use deepfakes to persecute women activists in the diaspora. In a 2024 report, Citizen Lab documented how Iranian women activists in Canada and the United Kingdom are targeted with pornographic deepfakes. In Iran's conservative cultural context, such material has a single aim: to exclude women from the public sphere through moral scandal. This is a form of digital violence that researchers call gendered disinformation. The phenomenon isn't limited to Iran, but Iran has turned it into a systematic tool of transnational repression.

The Miaan Group's 2025 report indicates that feminist activists accounted for more than 10% of all cases of transnational digital repression reported to its digital security help desk. Activists such as Azam, based in Canada, regularly receive pornographic deepfakes featuring their faces, false claims that they engage in prostitution, and realistic AI-generated threats of sexual violence.

#### A conclusion I'd rather not write

We live in a world where "seeing" no longer means "believing". That sentence may sound like a cliché, but Iranian influence operations, from hijacking a TV stream in Dubai to pornographic deepfakes targeting women activists in London, show that the consequences of that cliché are very real. The problem isn't purely technical. Better detection algorithms will help, but the arms race between deepfake generators and detectors is by definition never-ending. The real defense lies elsewhere: in verification processes, in content provenance standards, in media literacy, and in that bit of skepticism toward any piece of content that promises us we already know everything we need to know.

---

#### How to spot a deepfake? A practical guide

Since we know who creates deepfakes and why, the pragmatic question is: how do we protect ourselves? In 2026, the answer is more difficult than ever. But not impossible.

**First: biology exposes fakery.** The human face changes color in sync with the heartbeat, an effect of blood flowing through the vessels that is invisible to the naked eye but measurable computationally. Intel's FakeCatcher system, using remote photoplethysmography (rPPG), analyzes these subtle color changes at 32 points on the face; AI typically doesn't simulate them. Intel claims 96% effectiveness for the tool, though BBC tests revealed problems with false alarms on low-resolution videos. The method has one significant advantage: it's hard to circumvent from the deepfake generator's side, because PPG signal extraction is non-differentiable and can't simply be incorporated into a GAN's loss function.
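To make the principle concrete, here is a minimal Python sketch of the rPPG idea, not Intel's actual pipeline (FakeCatcher samples 32 facial regions and feeds the extracted PPG maps to a trained classifier): it tracks the mean green-channel intensity of a detected face over time, band-passes the signal to plausible heart rates, and checks whether the spectrum has a dominant peak. The file name is hypothetical, and, as the BBC tests suggest, expect false alarms on short or low-resolution clips.

```python
# Minimal rPPG sketch: a real face should show a periodic green-channel
# signal at the heart rate; a flat spectrum is a deepfake warning sign.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

def rppg_peak_strength(video_path: str) -> float:
    """Ratio of the strongest heart-rate-band peak to the band's mean power."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        faces = detector.detectMultiScale(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
        if len(faces):
            x, y, w, h = faces[0]
            # Blood absorbs green light most strongly, so the pulse is
            # clearest in the mean green channel of the face region.
            samples.append(frame[y:y + h, x:x + w, 1].mean())
    cap.release()
    signal = np.asarray(samples, dtype=float)
    if signal.size < 5 * fps:
        raise ValueError("need at least ~5 seconds of detectable face")
    # Band-pass to plausible heart rates: 0.7-4.0 Hz (42-240 bpm).
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, signal - signal.mean())
    freqs, power = periodogram(filtered, fs=fps)
    band = power[(freqs >= 0.7) & (freqs <= 4.0)]
    return float(band.max() / (band.mean() + 1e-12))

# print(rppg_peak_strength("suspect_clip.mp4"))  # hypothetical file
```

On authentic, well-lit footage the ratio should clearly exceed the noise floor; a value close to 1 means no periodic component was found. Treat it as one signal among many, never as a verdict on its own.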
**Second: lips lie differently than ears.** Consonants that require closing the lips (m, p, b) are particularly hard to render. In deepfakes, lip movements often fail to match the audio track at exactly those moments. Synthetic voices can also be too monotonous: they lack natural micro-pauses for breath, lip smacks, sighs. If someone speaks for three minutes without taking a single breath, something's off.

**Third: metadata doesn't lie (usually).** The C2PA standard (Coalition for Content Provenance and Authenticity), developed by Adobe, Microsoft and other companies, makes it possible to trace a file's history from the moment of its creation. The absence of a C2PA certificate, or a tampered digital signature, should raise red flags. Tools such as InVID/WeVerify allow quick video decomposition and reverse searching of keyframes, which is the most effective method for detecting "disinformation recycling", i.e., the reuse of old footage in a new context. During the Iran-Israel conflict in June 2025, fact-checkers from DW and AFP repeatedly identified videos from 2021 military exercises being presented as current battlefield footage.
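A minimal sketch of that keyframe workflow, assuming only OpenCV and NumPy: sample a frame every couple of seconds, fingerprint it with a difference hash (a common textbook perceptual hash, not InVID's exact method), and compare against a video you suspect is the original source. File names are hypothetical.

```python
# Detecting "disinformation recycling": does a viral clip reuse frames
# from older footage? Perceptual hashes survive re-encoding and resizing.
import cv2
import numpy as np

def dhash(frame: np.ndarray, size: int = 8) -> int:
    """64-bit difference hash of one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size + 1, size))        # dsize is (width, height)
    bits = (small[:, 1:] > small[:, :-1]).flatten()   # horizontal gradients
    return int("".join("1" if b else "0" for b in bits), 2)

def sample_hashes(video_path: str, every_sec: float = 2.0) -> list[int]:
    """Hash one frame every `every_sec` seconds of video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * every_sec))
    hashes, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            hashes.append(dhash(frame))
        index += 1
    cap.release()
    return hashes

def likely_recycled(suspect: str, source: str, max_bits: int = 10) -> bool:
    """True if any suspect keyframe is within `max_bits` of a source frame."""
    source_hashes = sample_hashes(source)
    if not source_hashes:
        return False
    return any(
        any(bin(h ^ s).count("1") <= max_bits for s in source_hashes)
        for h in sample_hashes(suspect)
    )

# Hypothetical check against suspected 2021 exercise footage:
# print(likely_recycled("viral_strike.mp4", "2021_exercise.mp4"))
```

In practice, fact-checkers reverse-search keyframes against the whole web (Google Images, TinEye); the comparison above only automates the pairwise case, once a candidate source has been found.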
**Fourth, and perhaps most importantly: emotions are a warning sign.** Deepfakes are designed to evoke strong emotions: fear, anger, outrage. If a video provokes a strong emotional reaction, that alone should prompt you to pause before you click "share". The rule is simple: the stronger the urge to share something immediately, the more important it is not to do so until you've verified the source.

In the context of the current Iranian war, it's worth adding one more criterion, simpler than rPPG analysis but surprisingly effective: **look for inconsistencies in background elements.** In one viral video, allegedly depicting an Iranian strike on Tel Aviv, the cars on the street had bizarre shapes that resembled no known vehicles. The audio featured an off-camera voice saying in English, "Tel Aviv, I can't believe this," in a way that was too perfect and too controlled, as if someone had commissioned a script for a specific narrative (Gizmodo, 2026). Sardarizadeh of BBC Verify confirmed that the video was fake. But Grok consistently told users that the video was authentic. A user who shared the footage argued that it must be real because Grok said so.

This leads to a fifth rule I'd like to add to this practical guide: **don't trust AI to verify AI.** General-purpose chatbots such as Grok or ChatGPT were not designed for image forensics. They respond with an air of authority because that's how language models work: they generate text that sounds confident. But there's a gulf between a confident tone and factual certainty. For verification, use specialist tools: InVID/WeVerify for reverse frame searches, SynthID for detecting Google-generated content, Content Credentials Verify for checking C2PA manifests.

#### Tools

In 2026, the content provenance verification ecosystem moved from the experimental phase into everyday professional use, and a solid set of tools is now available to researchers and journalists. The primary point of reference remains the Content Credentials Verify portal (available at [contentcredentials.org/verify](http://contentcredentials.org/verify)), which, following the 2025 updates, became significantly more efficient at handling large video files.

The tool runs directly in the browser. It lets you instantly check a file's manifest without fully uploading it to the server, which saves time (especially for 4K footage). You can see the full history: from the moment the footage is recorded on a professional Sony camera, through editing in Adobe Premiere Pro, all the way to the final export, with information on any use of generative models. It works smoothly. Remember, however, that the absence of a manifest doesn't always mean manipulation, because many social platforms (including Facebook and X) still aggressively strip metadata during compression.

**Browser tools and independent verifiers**

An alternative to Adobe's official portal is C2PA Viewer ([c2paviewer.com](http://c2paviewer.com)), a lightweight solution that rose to prominence in mid-2025 as an independent verification gateway unaffiliated with any tech giant. Developers can also use the open-source libraries provided by the Content Authenticity Initiative (CAI), which make it possible to build your own scripts in Rust or Python to bulk-verify the authenticity of video files in editorial archives.
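Here is a hedged sketch of such a bulk check, assuming the CAI's c2pa-python bindings (`pip install c2pa-python`). The entry point has changed across releases (older versions expose a `read_file` function, newer ones a `Reader` class), so treat the calls below as illustrative and check the documentation of the version you install; the `archive` directory and the manifest fields queried are assumptions based on the C2PA manifest-store JSON format.

```python
# Bulk-check an editorial archive for Content Credentials (C2PA manifests).
import json
from pathlib import Path

from c2pa import Reader  # CAI's c2pa-python bindings; API varies by release

def manifest_summary(path: Path) -> str:
    try:
        reader = Reader.from_file(str(path))  # fails if no readable manifest
        store = json.loads(reader.json())     # manifest store as JSON
    except Exception as err:
        # No manifest, stripped metadata and broken signatures all land here.
        # Absence alone is not proof of manipulation: platforms strip metadata.
        return f"{path.name}: no valid Content Credentials ({err})"
    active = store.get("manifests", {}).get(store.get("active_manifest", ""), {})
    issuer = active.get("signature_info", {}).get("issuer", "unknown issuer")
    return f"{path.name}: signed manifest present, issuer: {issuer}"

for clip in sorted(Path("archive").glob("*.mp4")):  # hypothetical directory
    print(manifest_summary(clip))
```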
There are also Chrome extensions that automatically detect the "CR" (Content Credentials) icon on websites. When you hover your cursor over a video, the extension displays a "digital nutrition label" for the content: the author, the creation date, and whether artificial intelligence was involved in the production process. If the manifest is corrupted or the digital signature doesn't match the creator's public key, the system displays an integrity-violation warning.

**Live video verification and the 2.3 standard**

The implementation of the C2PA 2.3 specification at the start of 2026 was a breakthrough: it introduced protocols for digitally signing live streams in the DASH and HLS formats. As a result, verification tools can now check the authenticity of broadcasts in real time, which is crucial for reporting on armed conflicts. The technology has been integrated into the professional Sony cameras showcased at IBC 2025 and into Google Pixel 10 smartphones.

A key element of this system is the C2PA Trust List, a register of trusted certificate issuers launched in July 2025. It allows us to distinguish signatures originating from reputable news agencies or hardware manufacturers from certificates generated by entities spreading disinformation. However, we should maintain some healthy skepticism: C2PA confirms that a file hasn't been altered since it was signed, but it doesn't guarantee that what the camera's lens saw wasn't a staged scene or a lie.

---

**Further reading**

Microsoft Threat Analysis Center. (2024, February). *Iran surges cyber-enabled influence operations in support of Hamas*. Microsoft. https://www.microsoft.com/en-us/security/business/security-insider/reports/iran-surges-cyber-enabled-influence-operations-in-support-of-hamas/

OpenAI. (2024, August 16). *Disrupting a covert Iranian influence operation*. https://openai.com/index/disrupting-a-covert-iranian-influence-operation/

---

*prof. dr hab. Dariusz Jemielniak is a professor of management at Kozminski University (Akademia Leona Koźmińskiego), where he heads the MINDS (Management in Networked and Digital Societies) department. He is also a faculty associate at the Berkman Klein Center for Internet & Society at Harvard, Vice President of the Polish Academy of Sciences, and a member of the CampusAI Program Board.*