Mar 3

How Journalists Unmask Deepfakes

Originally reported by The Verge

The reliability of online images and videos is increasingly compromised by the proliferation of AI-generated content, widespread misinformation, and even clips sourced from video games.

Following the recent joint military strike by the US and Israel on Iran last Saturday, the internet was inundated with images and videos purporting to document the conflict. Investigations revealed that many of these visuals were either outdated, depicted entirely unrelated events, were synthetically generated or manipulated by AI, or, strikingly, originated from military-themed video games such as War Thunder.

As misinformation rapidly proliferates, a growing number of individuals are turning to reputable digital investigative bodies for reliable information. Esteemed organizations such as The New York Times, Indicator, and Bellingcat employ rigorous verification protocols to ensure they do not disseminate synthetic or deceptive material. "Audiences can turn to trusted, independent news organizations that take the time and effort to authenticate visuals and clearly explain sourcing," stated Charlie Stadtlander, executive director for media relations and communications at The Times, in an interview with The Verge. While media authentication methods are not entirely infallible, the standards are exceptionally high, with experts leveraging years of experience in combating disinformation.

This intricate verification process presents significant challenges, particularly given the scarcity of dependable deepfake detection instruments. However, by understanding the techniques employed by these experts, individuals can better safeguard themselves amidst major news events dominating digital platforms. Below are some of the strategies they utilize.

In January, when unverified images of Venezuelan leader Nicolás Maduro rapidly spread across social media following his alleged abduction by the US, The Times' Visual Investigations team promptly initiated their scrutiny. They meticulously examined the visuals for "visual inconsistencies that would suggest they were not authentic," highlighting an instance where an aircraft featured unusually shaped windows.

While these inconsistencies were not sufficient to conclusively prove the images were fabricated, The Times' photography director, Meaghan Looram, explained in the article: "But even the remote chance that the images were not genuine — coupled with the fact they came from unknown sources, and details like Mr. Maduro’s clothing being different between the two images — was strong enough to disqualify them from publication."

While the rudimentary method of identifying AI-generated deepfakes by counting a subject's fingers is largely outdated, subtle indicators often persist. A common technique involves examining backgrounds, architecture, and peripheral figures for any unexplained anomalies.

Interestingly, one image of Maduro that The Times did publish, depicting the Venezuelan leader in custody, originated from President Donald Trump’s Truth Social account. It is crucial to note that this does not imply Trump, or any government official, serves as an inherently reliable source, given his history of disseminating AI-generated falsehoods online and the general difficulty of verifying the integrity of official government communications. Moreover, this particular image raised authenticity concerns due to its inferior quality and unusually cropped dimensions.

"In this case, the president’s Truth Social post itself was newsworthy, even if we had no surefire way to confirm that the image was authentic," Looram clarified. Significantly, the image was not published in isolation but appeared on The Times’ homepage as part of a complete screenshot of Trump’s original post. This contextual presentation was deliberate: "Displaying it in context means that, if the image proves to be inauthentic in some way, we will not have presented it as a legitimate news photo, but rather as a communication from the President."

Identifying potential red flags does not require prior familiarity with the individual or organization sharing the content. A straightforward technique involves examining the account's age; if it is relatively new, or an older account with a sudden surge of recent activity, it warrants suspicion. Jeremy Carrasco, creator of ShowtoolsAI and Riddance, terms this the "Account Age Paradox," positing that since the technology for highly convincing deepfakes is a recent development, accounts promoting such content were likely established concurrently with the release of these AI models, whereas older fabricated content is generally simpler to detect.
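As a rough illustration, the "Account Age Paradox" check described above can be sketched in a few lines of Python. The thresholds here (90 days for "new," a burst of 50+ posts from a near-dormant account) are illustrative assumptions, not values from Carrasco:

```python
from datetime import datetime, timedelta

def account_looks_suspicious(created, post_dates, now=None, recent_days=30):
    """Heuristic sketch of the 'Account Age Paradox' check.

    Flags accounts that are very new, or older accounts whose posting
    activity suddenly surged in the last `recent_days`. All thresholds
    are illustrative assumptions, not a published standard.
    """
    if now is None:
        now = datetime.utcnow()
    # A brand-new account sharing dramatic footage warrants suspicion
    if now - created < timedelta(days=90):
        return True
    # Otherwise look for a dormant history followed by a sudden burst
    cutoff = now - timedelta(days=recent_days)
    recent = sum(1 for d in post_dates if d >= cutoff)
    older = len(post_dates) - recent
    return older < 5 and recent > 50
```

A heuristic like this only flags accounts for closer inspection; it proves nothing on its own about the content they share.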

Frequently, fabricated news can be swiftly debunked by cross-referencing whether the same photos or videos have been published elsewhere. This can be achieved through manual searches for related topics online or by utilizing search engine functionalities such as Google’s reverse image search tool. Often, the original source material proves to be considerably older and entirely unrelated to the current context in which it is being shared, as exemplified by a post claiming to depict missiles striking an Israeli nuclear facility, which was, in fact, footage from Ukraine dating back to 2017.
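Reverse image search engines typically find near-duplicates by comparing perceptual hashes rather than exact bytes. The toy "average hash" below shows the core idea; it assumes the caller has already reduced the image to a small grayscale grid (real systems resize to something like 8×8 first, which this sketch omits):

```python
def average_hash(pixels):
    """Toy perceptual hash ('aHash') over a small grayscale grid.

    Assumes `pixels` is a list of rows of 0-255 brightness values that
    the caller has already downscaled from the full image.
    """
    flat = [v for row in pixels for v in row]
    avg = sum(flat) / len(flat)
    # Each pixel becomes one bit: 1 if brighter than average, else 0
    return sum(1 << i for i, v in enumerate(flat) if v > avg)

def hamming_distance(h1, h2):
    # Number of differing bits; a small distance means the images are
    # likely the same picture, even after recompression or resizing
    return bin(h1 ^ h2).count("1")
```

Because the hash depends only on coarse brightness structure, a recompressed or lightly edited copy of an old photo still matches its original, which is how a 2017 clip from Ukraine can be traced even when reposted under a new caption.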

The open-source intelligence (OSINT) platform Bellingcat employs a comprehensive methodology that combines visual inspections, extensive cross-referencing, and specialized software tools. These include Google and Yandex for reverse image searches and ExifTool for extracting image metadata. Such investigations are inherently time-intensive, and the increasing accessibility of generative AI tools is presenting significant challenges to maintaining pace.
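ExifTool reads the full metadata record, but even a minimal check can be informative: a JPEG fresh from a camera carries an Exif APP1 segment, while screenshots, social-media re-encodes, and many AI-generated files have it stripped. The sketch below only detects whether that segment is present; it is not a substitute for ExifTool's actual parsing:

```python
def has_exif(data: bytes) -> bool:
    """Check whether JPEG bytes contain an Exif APP1 segment.

    Absence of Exif doesn't prove manipulation (platforms routinely strip
    metadata), but its presence can anchor a photo's provenance.
    """
    # JPEG files start with the SOI marker 0xFFD8
    if not data.startswith(b"\xff\xd8"):
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        length = int.from_bytes(data[i + 2:i + 4], "big")
        # APP1 (0xE1) segments beginning with "Exif\x00\x00" hold camera metadata
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length  # skip marker bytes plus segment payload
    return False
```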

"The flood of convincing fakes has sped things up and given bad actors a handy ‘it could be AI’ excuse to dismiss real footage," Bellingcat creative director Eliot Higgins informed The Verge. He added, "Our methods still hold because we focus on provenance and context, not just pixels, but the noise level is way higher now."

To verify the purported location of a photo or video, one can leverage satellite imagery or applications like Google Maps for cross-referencing. Distinct markers such as flags, logos, and specific equipment can further aid in determining the precise time period and geographical origin, a technique The Times successfully employed in 2022 to authenticate footage from the Russia-Ukraine conflict. The publication’s Investigations Team also possesses the capability to estimate the time of day a photograph was captured by analyzing shadows using tools like SunCalc, and may even utilize footage from nearby CCTV and security cameras to corroborate the visual evidence.
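The shadow technique works because sun elevation fixes shadow length: knowing the latitude and date, an investigator can check whether shadows in a photo match the claimed time of day. The simplified calculation below approximates what tools like SunCalc compute; it ignores the equation of time, longitude correction, and atmospheric refraction, so treat the results as rough:

```python
import math

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation (degrees) at a latitude and local solar time.

    Simplified model: ignores equation of time, longitude, and refraction.
    """
    # Solar declination (degrees), simple cosine approximation
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)  # degrees; 0 at solar noon
    lat, d, h = (math.radians(x) for x in (lat_deg, decl, hour_angle))
    sin_el = math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
    return math.degrees(math.asin(sin_el))

def shadow_length_ratio(elevation_deg):
    # Shadow length divided by object height; a low sun casts long shadows
    return 1.0 / math.tan(math.radians(elevation_deg))
```

For example, a shadow as long as the object that cast it implies the sun sat at about 45 degrees; if the post claims the photo was taken at local noon near the equator, that mismatch is a red flag.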

Merely differentiating between authentic photographs and entirely synthetic images is no longer sufficient. A more complex question arises: how much editing or manipulation can a photograph undergo before it ceases to be considered genuinely real? While a universally accepted definition remains elusive, Higgins offers his personal perspective, defining a photo as "a real moment captured by light on a sensor or film."

"It’s evidence of what actually existed in that time and place. Minor tweaks like cropping or contrast are fine and always have been, but once you add, remove, or fabricate elements (especially with AI), it’s no longer a photo, it’s digital art or propaganda," Higgins elaborated. He underscored this point by adding, "Authenticity lives in honest provenance, not perfect pixels; that’s why real ground-truth images still matter more than any fake ever will."

"The average person needs to understand that the current information environment is tilted towards manipulation and deception."

Craig Silverman, a prominent fake news expert and cofounder of the open-source intelligence (OSINT) platform Indicator, emphasizes the enduring importance of vigilance for every online user. "The average person needs to understand that the current information environment is tilted towards manipulation and deception. This requires you to scroll with an awareness of how easily images, video, and text can be manipulated," Silverman conveyed to The Verge. He further highlighted the issue: "Add in the fact that major social platforms have largely failed to live up to their promises to label AI-generated content, and you get a chaotic, deception-filled, digital landscape that overwhelms and misinforms."

Ordinary individuals can contribute significantly to curbing the spread of misinformation by exercising caution and pausing before sharing emotionally charged or viral content online. It is noteworthy that many of the sophisticated verification tools utilized by trusted newsrooms are freely accessible to the public. If personal investigation is not feasible, cross-referencing any questionable posts with multiple independent sources is a crucial step.

"Remember that it takes time for information to develop, especially when it comes to fast-moving conflicts and other news stories," Silverman advised. He concluded, "Awareness and patience are critical, and they don’t require tools or expertise. But you do have to practice."

#AI #News #Tech
Editorial Staff, Editor

The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.
