2025 was the year I could no longer believe my eyes

The internet is a fire hose of fake shit. And I’m not sure what to do about it.
I’ve been writing about fake photos on the internet since 2013, when a photo of President Teddy Roosevelt riding a moose went viral and I fired off a quick blog post debunking it. The image was a pre-Photoshop fake, created as a humorous curiosity a hundred years ago. It was just one of those old pictures that made you pause and wonder, “wait, could that be real?”
The following year, 2014, I debunked many of the fake photos that were circulating, and I’ve been writing year-end roundups of fake viral photos ever since. I often enjoy fact-checking photos that go viral because it feels like I’m getting a better handle on what’s true in the world and sharing that with people who want the same. But that feeling has changed recently.
This year, I was overwhelmed. Looking back at 2025, there were simply too many fake photos and videos to deal with, which made rounding up the biggest fakes of the past year feel like a futile effort. What do you even do when a significant percentage of the posts on platforms like Instagram, Facebook, and X are AI-generated garbage? This year felt like a tipping point on that question. Most of the photos and videos in my social media feeds were fake. I have never seen it so bad.
What examples would you even include in an article about fake photos and videos in 2025? In 2023, images like the Pope in an AI-generated puffer jacket seemed to be everywhere. It was early days for photorealistic AI image generation, and the fake Pope spread to every corner of the internet. Here in 2025, it’s hard to focus on a select number of photos and videos because there are simply so many of them. People are drowning in fake content, and many don’t even realize it. They just flick a finger and scroll on to the next photo or video.
Even the experts can’t always tell anymore. Jeremy Carrasco has been doing amazing work this year on TikTok, YouTube, and Instagram, teaching people how to spot AI videos. But there was at least one video in 2025 that he flagged as fake which turned out to be real.
This is where we are. Even the people best equipped to spot these fakes aren’t sure about them anymore.
Video after video and photo after photo on social media is just AI slop chasing engagement. There’s a financial incentive for people to post fake videos, whether it’s pets being rescued from fires, dogs choosing their owners, or animals being dug out from under the snow. These videos can be compelling if you don’t know they’re fake, and content creators get paid when they rack up millions of views. But they’re not very compelling once you know they’re not real.
What are the stakes when you’re watching a very short video of fake people doing fake things? Movies and TV shows give us a reason to care about the characters on screen before they’re thrown into a conflict they must overcome. None of that dramatic tension exists when you’re watching AI characters. So why should anyone care?
There was a time when I thought some people had no excuse for falling for bad photoshops. I don’t have that attitude anymore. The tools are so advanced that anyone can fall for fake photos. You really shouldn’t feel bad if you got fooled by something AI-generated in 2025.
The top image in this article was created with Google Gemini using Nano Banana Pro, which was released last month. I wrote nothing but: “Create a photoreal image of a man looking directly into the camera. Make it as realistic as possible.” If I wanted to, I could have refined the picture with all kinds of specific prompts to set the scene in a certain place with certain people. But I made this image in 10 seconds to show how easy photorealism is to achieve now.
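For what it’s worth, you don’t even need the consumer app. Here’s a minimal sketch of the same one-line prompt going through Google’s google-genai Python SDK; the model ID and output filename below are assumptions (“Nano Banana Pro” is a nickname, not an API string), so treat this as illustrative rather than a recipe.

```python
# Minimal sketch: generating a photoreal image from a one-line prompt with
# Google's google-genai Python SDK. The model ID is an assumption; check
# Google's current model list before running.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # key from Google AI Studio

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # assumed image-capable model ID
    contents=(
        "Create a photoreal image of a man looking directly into the camera. "
        "Make it as realistic as possible."
    ),
)

# Generated images come back as inline data parts alongside any text parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("photoreal_man.png", "wb") as f:
            f.write(part.inline_data.data)
```

That’s the whole workflow: one short prompt in, photorealistic bytes out, no editing skill required.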
As someone who has spent over a decade applying skepticism to images online, I can no longer rely on visual clues alone or easily trace the source of a given image. Google has finally built its invisible watermark detector into the Gemini app, but that only gets us so far. If Gemini doesn’t flag an image as AI-generated, that doesn’t mean it’s real; it just means Google had no hand in creating it. There are tons of other AI image generators out there.
Powerful people like Donald Trump take full advantage of this confusion. The president insisted several times in 2025 that real things were fake, like when he said the BBC used AI to make him say something he didn’t (they didn’t use AI), or when he suggested that footage of furniture being hauled out of the White House in September, a month before the East Wing was demolished, wasn’t real. Trump relies on what’s known as the liar’s dividend: when the information environment is this muddied, people can insist that real things are fake and get away with it.
Elon Musk, the billionaire owner of X, has said he envisions a world where most of what people consume every day is AI-generated. And while it’s too early to tell if that prediction will come true, he’s working hard to make it happen.
Musk seems to think this version of the future is what people really want: synthetic content tailored to your desires, even if it pulls you further away from anything human.
As he told Joe Rogan: “Most of what people consume in five or six years, maybe sooner than that, will be AI-generated content. So music, videos…”
[Embedded Bluesky post from Matt Novak (@paleofuture.bsky.social), November 8, 2025]
In a way, it feels like we’re already halfway there. And when so much is fake, maybe the most useful thing left to do is point out the things that are real.



