For three full days, the internet was absolutely certain it was looking at a real photograph. It was shared by celebrities with millions of followers. Journalists described it without questioning it. Comment sections across every major platform were full of people reacting to what they believed was a genuine captured moment. And then someone looked more closely — at the fingers, at the shadows, at the impossible geometry of a coffee cup held by a hand that didn't quite work — and the entire edifice collapsed.

The image was fake. Every pixel of it was generated by an AI program from a text prompt typed by a person sitting at a laptop. No camera. No subject. No real moment. Just a machine that had learned — from billions of real photographs — exactly how to make something look completely indistinguishable from reality. And in 2026, that machine has gotten extraordinarily good at its job.

This is the full story of AI-generated images that fool the world: the landmark cases, the real numbers, what makes them so convincing, and — most importantly — how you can protect yourself from being next.

Key numbers at a glance:
- 8M deepfakes shared online in 2025
- +1,500% growth since 2023 (500K → 8M)
- 38% of people can't correctly identify AI images
- 700M images made with GPT-4o in its first week

Why Does This Keep Happening? The Science of Visual Trust

Why do people keep falling for AI-generated images?

A peer-reviewed study from Swansea University published in November 2025 confirmed that AI can now produce images of real, familiar people that are genuinely indistinguishable from real photographs. Separately, a Cornell University study found only 62% of people correctly identify AI-generated images — and, strikingly, up to 87% of respondents have at some point mistakenly flagged a real photo as AI. Our visual processing systems were built to trust what we see, and AI exploits that trust at the level of perception itself.

The human brain processes visual information faster than any other sensory input. When you see a face you recognise — a famous figure, a world leader, a celebrity — your brain doesn't run a forensic analysis of the image. It recognises the pattern, confirms its prior knowledge, and moves on within milliseconds. AI image generators exploit precisely this: they don't need to produce a perfect photograph, only an image that clears the brain's initial pattern-recognition threshold before any slower, more analytical processing kicks in.

In 2023, the most convincing AI images still had reliable tells — hands with the wrong number of fingers, text that dissolved into garbled characters, eyelids that merged into glasses. In 2026, most of those tells have been addressed. GPT-4o's image generation system, released in March 2025, was specifically engineered to handle text within images, multiple distinct objects in complex scenes, and the subtle imperfections of real-world photography. In its first week, more than 130 million users created over 700 million images with it. The baseline quality of AI-generated images jumped overnight — and the baseline for misinformation jumped with it.

"The sudden prominence of AI-generated content in fact-checked misinformation claims suggests a rapidly changing landscape." — Google Research paper analysing 136,000 fact-checks, 2024
🎭 The "Balenciaga Pope" — March 2023 · First mass-level AI misinformation case · Midjourney · Global viral

Case 1 — The Image That Started Everything

What was the most famous AI-generated image that fooled everyone?

The "Balenciaga Pope" — an AI-generated image of Pope Francis wearing a white designer puffer jacket, created by Pablo Xavier, a 31-year-old Chicago construction worker, using Midjourney in March 2023. The image went viral on Reddit and Twitter, fooled celebrities including Chrissy Teigen, and was later described by BuzzFeed reporter Ryan Broderick as "the first real mass-level AI misinformation case."
- 25K+ retweets before debunking
- Created with Midjourney
- ~72 hours before the majority realised it was fake
- Described as the first "mass-level" AI misinformation event

On a Friday afternoon in March 2023, Pablo Xavier — a construction worker from Chicago who had turned to Midjourney after his brother's death as a way to process grief — typed a prompt roughly equivalent to "Pope Francis wearing a Balenciaga puffer coat in Rome." What came out was something he described as "perfect." He posted it to a Facebook group called AI Art Universe and then to Reddit.

The image spread at a velocity that even Xavier didn't anticipate. "I was just blown away," he later told BuzzFeed. "I didn't want it to blow up like that." By the time the image was widely understood to be fake, it had been shared tens of thousands of times. Celebrity model and author Chrissy Teigen tweeted to her 12.9 million followers: "I thought the pope's puffer jacket was real and didn't give it a second thought. No way am I surviving the future of technology."

The reason so many people were fooled was specific and analysable: Pope Francis was already known for occasionally surprising fashion choices. The low-stakes nature of the image — a religious figure in a funny coat — meant people's guard was down. The AI had produced accurate facial detail, realistic fabric texture, and convincing street-level lighting. The tells — a warped hand, glasses merging into shadow, a cross held aloft without a visible chain — required close inspection that scrolling at speed doesn't permit.

It was subsequently described by internet culture analysts as "the first real mass-level AI misinformation event" — a moment where AI-generated imagery crossed from a novelty into something with genuine cultural and informational consequences.

💥 The Fake Pentagon Explosion — May 2023 · Stock market impact · Financial consequences · AI misinformation

Case 2 — When a Fake Image Moved the Stock Market

Can AI-generated images affect the stock market?

Yes — and it already has. In May 2023, an AI-generated image of an explosion near the Pentagon went viral, was picked up by financial news aggregators as a real event, and caused a brief but measurable dip in the U.S. S&P 500 before being debunked. It was the first documented case of AI-generated imagery causing direct, quantifiable financial market movement.
- The S&P 500 registered a brief dip
- Picked up by financial news platforms as real
- Minutes between viral spread and market reaction
- The original creator was never identified

Two months after the Pope image established that AI could fool social media at scale, a significantly more consequential case demonstrated that the stakes extended far beyond celebrity embarrassment. An AI-generated image depicting what appeared to be an explosion near the Pentagon, the U.S. military headquarters just outside Washington D.C., began spreading rapidly on Twitter. Within minutes, financial news aggregators — systems designed to scan social media for breaking news and translate it into market signals — processed the image as real information.

The result was a brief but documented dip in the S&P 500. The image was debunked within the hour, and the market recovered. But the incident established something that financial regulators, national security analysts, and technology ethicists have been grappling with ever since: a sufficiently convincing AI image, posted at the right time, can move real-world systems with real-world consequences before any human fact-checker has time to respond.
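How could automated systems be fooled so quickly? The pipelines involved are proprietary, but the failure mode is easy to illustrate. Below is a deliberately simplified, hypothetical Python sketch of a keyword-and-velocity news scanner; every name, keyword list, weight, and threshold is invented for illustration. Real systems are far more sophisticated, but anything that scores posts on wording and spread rather than verifying the image itself shares the same blind spot.

```python
from dataclasses import dataclass

# Hypothetical breaking-news scanner. Nothing below ever
# verifies the *image* itself -- that is the blind spot.
PANIC_KEYWORDS = {"explosion", "pentagon", "attack", "fire"}

@dataclass
class Post:
    text: str        # caption accompanying the image
    reposts: int     # proxy for spread velocity
    has_image: bool  # an attached image *raises* confidence here

def risk_score(post: Post) -> float:
    """Score a post 0..1 on 'breaking news' signals only."""
    words = {w.strip(".,!?:").lower() for w in post.text.split()}
    keyword_hits = len(words & PANIC_KEYWORDS) / len(PANIC_KEYWORDS)
    velocity = min(post.reposts / 10_000, 1.0)
    imagery = 0.2 if post.has_image else 0.0  # a photo reads as "evidence"
    return min(keyword_hits * 0.5 + velocity * 0.3 + imagery, 1.0)

post = Post("BREAKING: explosion reported near the Pentagon",
            reposts=14_000, has_image=True)
if risk_score(post) > 0.6:
    print("SELL SIGNAL emitted: no human review, no image check")
```

Note the ironic detail: in this toy model, an attached image *increases* the confidence score, because for most of the history of news, a photograph was corroborating evidence.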

🔥 Hollywood Sign Wildfire Images — January 2025 · California wildfires · AI disaster misinformation · Platform spread

Case 3 — AI Fakes During a Real Disaster

- January 2025, during active California wildfires
- Spread primarily on X and Instagram
- Authorities issued official public reassurance
- Real fires were already devastating nearby areas

In January 2025, as California was experiencing some of its most destructive wildfires in years, a series of AI-generated images depicting the iconic Hollywood Sign engulfed in flames began circulating widely on X and Instagram. The images were photorealistic. The fires were real — they just hadn't reached the Sign. But in the panic of an active disaster, with real smoke visible from Los Angeles streets and real evacuation orders in effect across the region, millions of people shared the images believing them to be genuine documentation of ongoing destruction.

The situation reached a point where California authorities had to issue public statements specifically reassuring residents that the Hollywood Sign was unscathed — an extraordinary state of affairs in which official emergency communication resources were diverted to debunking AI-generated social media content while a real emergency was simultaneously unfolding. The incident was cited by journalism researchers as one of the clearest examples yet of AI misinformation actively interfering with emergency response communication.

The Deepfake Numbers: How Fast This Is Growing

The scale of the AI-generated image problem in 2026 cannot be understood without the raw numbers. What was a contained and largely harmless novelty in 2022 has become a quantifiable and rapidly expanding dimension of the global information environment.

Deepfakes shared online, by year:
- 2023: 500,000
- 2024: ~2,000,000
- 2025: 8,000,000
- 2026 (projected): 70M+
⚠️ What These Numbers Actually Mean

8 million deepfakes in 2025 — confirmed by the UK government — represents a 1,500% increase from 2023's 500,000. The 900% annual growth rate documented by researchers, if sustained, would produce over 70 million deepfake images and videos shared in 2026 alone. At that volume, the assumption that fake images are an edge case in your daily information consumption is no longer tenable.
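For readers who want to check the arithmetic: a percentage increase is (new - old) / old * 100, and sustaining a +900% annual rate means multiplying by 10 each year. The short snippet below simply reproduces the article's figures.

```python
# Reproducing the growth arithmetic quoted above.
deepfakes_2023 = 500_000
deepfakes_2025 = 8_000_000

# Percentage increase = (new - old) / old * 100
increase = (deepfakes_2025 - deepfakes_2023) / deepfakes_2023 * 100
print(f"2023 -> 2025: +{increase:,.0f}%")         # +1,500%

# A sustained +900% annual rate is a 10x multiplier per year,
# which is how 8 million in 2025 becomes "over 70 million" in 2026.
projected_2026 = deepfakes_2025 * (1 + 900 / 100)
print(f"2026 projection: {projected_2026:,.0f}")  # 80,000,000
```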

How to Spot an AI-Generated Image in 2026 — The Complete Checklist

How do you tell if an image is AI-generated?

In 2026, the most reliable visual indicators of AI generation are: distorted hands and fingers, text within the image that appears garbled or incorrect, overly uniform and "perfect" lighting, glasses or jewellery that merge unnaturally into skin or shadows, background elements that are blurred in ways that don't match a real camera lens, and ears that look smudged or asymmetrical. Free detection tools like Illuminarty and Sightengine can assist, but the checklist below remains your most reliable first-pass defence.
✋ Check the hands and fingers first
AI models still struggle with the precise geometry of human hands. Look for extra fingers, merged digits, or hands that appear to grasp objects without actually touching them.

📝 Read any text in the image
Signs, labels, newspaper headlines, or text on clothing in AI images often appears garbled, reversed, or in a language that doesn't match the context. Real photographs have legible, correctly oriented text.

👓 Look at glasses, jewellery, and chains
AI frequently fails at accessories — glasses may merge into facial shadows, earrings may not match between sides, and necklace chains often disappear or reappear implausibly.

💡 Is the lighting too perfect?
Real photographs have directional light with consistent shadows. AI images often apply uniform, slightly surreal lighting that makes subjects look "waxy" or too evenly lit for their environment.

🌫️ Check the background blur
AI-generated background blur doesn't always follow the rules of real camera optics. Objects at the same depth may be rendered at different blur levels, and depth-of-field transitions can appear unnatural.

👂 Look closely at ears and hair edges
Ears are a consistent weak point in AI portrait generation — they may appear smudged, asymmetrical, or merge into the hair or face. The boundary where hair meets background is another telltale area.

🔍 Run a reverse image search
Google Reverse Image Search and TinEye can identify if an image has been previously published in a different context. AI images created specifically for misinformation often won't return results — which itself can be a signal.

🛠️ Use a detection tool as a second opinion
Illuminarty, Sightengine, and AI or Not all offer free AI image detection. None are 100% accurate — but combined with your own visual checklist, they add a meaningful layer of verification (a hedged example request is sketched just after this checklist).
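To make the "second opinion" step concrete, here is a sketch of what calling a detection service looks like in practice. It is modelled on Sightengine's publicly documented image-check API, but the endpoint, the `models=genai` parameter, and the response fields shown are assumptions that may not match the current version of the service; treat it as a template and consult the provider's documentation. Illuminarty and AI or Not expose broadly similar HTTP APIs.

```python
import requests  # pip install requests

# Hedged sketch: endpoint, parameters, and response fields are
# assumptions modelled on Sightengine's documented API and may
# differ from the current version. Credentials are placeholders.
API_URL = "https://api.sightengine.com/1.0/check.json"

def ai_likelihood(image_path: str, api_user: str, api_secret: str) -> float:
    """Return the service's 0..1 score that the image is AI-generated."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            API_URL,
            files={"media": f},
            data={"models": "genai",       # AI-generation detection model
                  "api_user": api_user,
                  "api_secret": api_secret},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json().get("type", {}).get("ai_generated", 0.0)

score = ai_likelihood("suspect.jpg", "YOUR_USER", "YOUR_SECRET")
print(f"AI-generated likelihood: {score:.0%}")
```

Whatever tool you use, treat the returned score as one signal to combine with the visual checklist above, never as a verdict on its own.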

What Happens Next — The 2026 Outlook

The UK government, in a statement published in February 2026 alongside the launch of a deepfake detection challenge involving INTERPOL and members of the Five Eyes security community, confirmed that 8 million deepfakes were shared in 2025 — and declared the problem one of its most pressing national security and public safety concerns. The UK has already criminalised the creation of non-consensual intimate deepfakes and is moving to ban "nudification tools" outright.

On the technology side, the C2PA (Coalition for Content Provenance and Authenticity) standard — a technical framework that embeds authentication data directly into images at the point of capture — is being adopted by major camera manufacturers including Sony, Canon, and Nikon. When fully implemented, images taken with C2PA-compliant cameras will carry a visible "Content Credential" pin that can be verified. It won't solve the problem of AI images created entirely from text prompts — but it will at least create a verified category of confirmed-real photography.
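Full verification of a Content Credential requires the open-source C2PA SDKs, but even checking whether a file carries a manifest at all can be done with nothing more than the raw bytes. The minimal Python sketch below scans a JPEG for APP11 (0xFFEB) marker segments, which is where the C2PA specification embeds its JUMBF manifest store. It is a presence check only, written for illustration: it performs no cryptographic validation, and a missing segment doesn't prove an image is fake, only that it carries no credential to inspect.

```python
import struct

def has_app11_jumbf(path: str) -> bool:
    """Crude presence check: does this JPEG contain any APP11 segment?

    C2PA Content Credentials are embedded in JPEGs inside APP11
    (0xFFEB) marker segments as JUMBF boxes. Finding one means a
    manifest *may* be present; it says nothing about whether it
    verifies cryptographically.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":       # not a JPEG (missing SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                      # lost sync with the marker stream
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):
            break                      # end of image / start of scan data
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        if marker == 0xEB:             # APP11: where JUMBF/C2PA lives
            return True
        i += 2 + length                # skip marker bytes plus segment
    return False

print(has_app11_jumbf("photo.jpg"))
```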

In the meantime, the gap between what AI can generate and what the average person can detect continues to widen. The three-day deception that opened this piece is not an outlier. It is, increasingly, the norm. The question facing everyone who consumes information through images in 2026 is not whether they might be fooled by an AI-generated image — it's whether they have the habits, the tools, and the discipline to slow down enough to check.