Exploitation

By Gordon Hopkins
We live in a world in which technology advances at an ever-increasing pace. Thanks to new tech warping our society in ways both predictable and not, the world has changed more in the last 150 years than in the entire history of humankind before that. In fact, you could say the same thing about the last 50 years. Indeed, you could probably say the same thing about the last 20 years.
As usual, the government is having a hard time keeping up. Eventually, perhaps the human race will figure out how to govern a world in which AI (Artificial Intelligence) can seemingly give everyone everything they most desire, for good or ill. Mostly, we should be worried about the ill, of course.
Until that day comes, and it won’t be anytime soon, those wishing to exploit this new tech that is rapidly taking over every aspect of our lives will do so with ravenous and unfettered avarice.
Unfortunately, there are others, altogether more innocent, that will also be exploited.
“Deepfake” is, essentially, the term for AI-generated media that inserts someone’s face or likeness into an image or video where it never was. I recently saw a video of Christopher Reeve as Superman battling Lou Ferrigno as the Hulk. I’ve seen videos of “Star Wars” characters on the bridge of the starship Enterprise. Things like that are blatant copyright infringement but, otherwise, pretty harmless.
Other deepfakes are less so.
Before the internet age, you would have to cut out a photo with scissors and paste it onto another photo: literally “cut and paste.” Now, of course, you can just upload a photo and ask AI to turn it into a video. If you peruse social media at all, then you’ve probably seen ads for AI sites that do exactly that.
There are other AI sites that can do the same thing, only naked.
That this technology was going to be used for porn was inevitable. There is no excuse for not being prepared. Images of real women and, I am very sorry to say, even children are being turned into pornography.
It is the most odious kind of exploitation, and yet the developers of AI and the government that is supposed to protect the vulnerable insist on acting surprised: “How could we possibly know such a terrible thing could happen?”
How could you NOT know? Everybody else did.
Unfortunately, as noted above, the government is always late to the game when it comes to things like this. Even when the government finally gets around to doing something, it often gets it wrong.
The “Take It Down” Act of 2025 is a federal law making it illegal to knowingly publish non-consensual intimate imagery (aka: porn), including AI-generated deepfakes, and mandating that internet platforms remove such content within 48 hours of a victim’s request. Sounds good, right? Unfortunately, the law makes the same mistake Congress made decades ago with its notice-and-takedown system for online copyright infringement, which covered file-sharing sites and platforms publishing books and songs and movies and anything else that can be copyrighted.
The “Take It Down” Act, just like the laws that are supposed to protect against internet piracy, puts the onus on the victim. The victim has to contact the platform and ask that the offending material, whether it be a story someone wrote or a song someone produced or a fake AI-generated nude, be removed.
Obviously, that depends first and foremost on the victim even being aware that they are being victimized. By the time that happens, assuming it ever does, those fake images could have been seen by hundreds, thousands, even millions of people.
And copied by millions of people. As politicians and celebrities keep finding out when something horrible or stupid they wrote on social media years ago comes back to bite them, once something is on the internet, it’s there forever.
AI developers argue that the government shouldn’t be regulating AI, that such regulation “stifles innovation.”
That same argument has been used for decades. I’m sure the caveman who first discovered fire said something similar when his fellow cavemen warned him not to set fire to the forests.
Well, if those tech giants didn’t want the government to police them, then they should have been policing themselves. Clearly, that didn’t happen and it ain’t gonna happen.
In May of this year, X (formerly known as Twitter), owned by AI guru Elon Musk (the man who claims with a straight face that his AI deepfake-infested platform is somehow the most accurate source of news in the world), sued the state of Minnesota over a law banning the creation of deepfakes intended to influence elections, arguing that the law violated free speech. In August, Musk won a similar lawsuit against the state of California over its deepfake ban.
The most aggravating part about this whole awful situation is that it was easily foreseen. We were warned about this exact thing coming to pass years ago. Yet, nobody who could have made a difference listened until AFTER it became a crisis and many, many innocent people got hurt. And are still being hurt.


