How A.I. is Reshaping Political Propaganda

In the late hours of February 25, Donald Trump posted a surreal, thirty-three-second video to Truth Social, depicting Gaza not as the war-ravaged territory the world knows, but as a gleaming coastal paradise: “Gaza 2025 What’s Next?”

The clip, a hyperreal fantasy featuring beachfront casinos, towering golden statues of Trump, and Elon Musk dining on hummus, was unmistakably the work of artificial intelligence. But what began as a satirical experiment by two Israeli American filmmakers quickly took on a more sinister dimension when Trump himself shared it—without context, without attribution, and seemingly as an endorsement of a dystopian vision that had been conjured in mere hours by A.I. tools.

A.I. is no longer just a tool for deepfakes or synthetic voices; it has become an engine for a new brand of political misinformation, one that is potent precisely because it blurs the line between fiction and reality. Unlike the crude Photoshop edits of past election cycles or the obviously doctored memes that once circulated in niche forums, today’s A.I.-generated propaganda has an unnerving smoothness. The software powering these fabrications—systems like Arcana Labs, Sora, and Runway—allows for the effortless production of hyperrealistic imagery and video, placing sophisticated misinformation within reach of any political actor willing to wield it.

The Trump campaign and its allies have leaned heavily into this new digital frontier. In mid-February, the official White House account on X (formerly Twitter) posted a video titled “ASMR: Illegal Alien Deportation Flight,” transforming images of immigrants being forced onto planes into a grotesque mockery of autonomous-sensory-meridian-response (ASMR) videos. On February 19, in response to New York’s congestion pricing, the campaign released a faux Time magazine cover depicting Trump as a crowned king ruling over a cityscape. These pieces of content are not merely attempts at satire; they are algorithmically engineered to flood the information ecosystem, replacing fact with spectacle.

During his first term, Trump wielded social media as an unfiltered megaphone. But now, with Truth Social as his personal fiefdom and Elon Musk reshaping X into a playground for the right, Trump’s digital reach has become both more insular and more extreme. Unlike his past Twitter tirades, which were subject to fact-checking and rebuttal in mainstream discourse, today’s A.I.-fuelled propaganda operates in a self-reinforcing loop. His base consumes content designed explicitly for them, a closed-circuit system where misinformation is no longer a mistake but the entire point.

The implications extend far beyond Trump. Jonathan Yunger, co-founder of Arcana, the A.I. studio whose software was used to create “Trump Gaza,” insists that his company is focused on realism, not deception. Yet realism is precisely what makes these tools so dangerous. “We’re constantly fine-tuning and making models for realism,” Yunger explained to The New Yorker. “It’s clear that this wasn’t real.” But was it? The mere fact that the President of the United States disseminated the video gave it a veneer of legitimacy, a stamp of authority that made it impossible to dismiss as mere internet ephemera.

The world has been here before, in a sense. When Orson Welles adapted H.G. Wells’s The War of the Worlds for radio in 1938, some listeners mistook the fictional broadcast for reality. But today’s audience isn’t just an unprepared public stumbling into a new medium—it is a politically charged electorate being actively shaped by it. A.I. misinformation doesn’t just mislead; it manufactures an entirely alternate reality.

Solo Avital, one of the filmmakers behind the “Trump Gaza” clip, initially saw A.I. as a creative tool, a way to push the boundaries of visual storytelling. But after witnessing his work co-opted and repurposed, he has begun to view it differently. “The wildest idea that you ever imagined could be visualized right now,” Avital recently told The New Yorker. “Imagine someone posting an A.I. clip of Trump announcing that he’s launched a nuclear attack on Russia. If it looks real enough, and if the right people share it—what happens next?”

The question is no longer whether A.I. will be used to manipulate reality, but how effectively it can be countered.

If “Trump Gaza” was made in eight hours, imagine what could be achieved in eight days, or eight months, by a campaign with resources and intent. As the Trump presidency unfolds, the battle over truth itself is becoming one that is fought not just in speeches or debates, but in pixels and algorithms. And in a world where the artificial can, with every passing year, be made indistinguishable from the real, the greatest danger may not be that people believe the lies, but that they stop believing anything at all.
