Deepfake is a technique for human image synthesis based on artificial intelligence. It is used to combine and superimpose existing images and videos onto source images or videos using a machine-learning technique known as a generative adversarial network. This new technological feat may be the single greatest threat to human lives to date. Second only to an A.I. robot takeover, but that's for another day.
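To give a feel for how that adversarial setup works, here's a minimal sketch of the standard GAN objective. The numbers and function names below are purely illustrative, not from any real deepfake system (which would use deep convolutional networks on face images): a generator makes fakes, a discriminator scores how likely each sample is real, and the two optimize opposing losses.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: the discriminator wants d_real -> 1, d_fake -> 0."""
    return -(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def generator_loss(d_fake):
    """The generator wants its fakes scored as real: d_fake -> 1."""
    return -np.mean(np.log(d_fake))

# Hypothetical discriminator scores at two points in training.
# Early on, the discriminator easily spots the fakes (scores near 0)...
early_fake_scores = np.array([0.05, 0.10, 0.08])
# ...but once the generator improves, fakes score close to real samples.
late_fake_scores = np.array([0.60, 0.70, 0.65])
real_scores = np.array([0.95, 0.90, 0.92])

print(generator_loss(early_fake_scores))   # high: the generator is losing
print(generator_loss(late_fake_scores))    # lower: the discriminator is being fooled
print(discriminator_loss(real_scores, early_fake_scores))
print(discriminator_loss(real_scores, late_fake_scores))
```

The tug-of-war is visible in the losses: as the fake scores rise, the generator's loss falls and the discriminator's loss rises. Trained at scale, this pressure is exactly what makes the fakes so hard to tell from the real thing.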
I’ve been looking more and more into this deepfake stuff as of late, not out of interest in seeing how the technology has grown over the last few years since I last talked about it, but mostly out of fear of it being misused. I started researching deepfakes to be able to identify them. What I learned was that it is damn near impossible to do so in the first place. In an age where fake news and misinformation are rampant, couple deepfakes with things like A.I. voice reproduction and you have yourself a recipe for disaster.
What spurred this sudden interest was that Twitch streamer Atrioc was seemingly caught looking at NSFW deepfakes of fellow streamers, including Pokimane and Maya Higa. That’s the reality we live in. People can take your face and your voice and produce content you may not necessarily want to be a part of, and even profit from it. It was then I realized just how dangerous this new technology had become, and why I felt there needed to be some sort of legislation to control it. Or just outright ban it.
Not to mention how we live in a society that loves to find someone guilty before any of the evidence has come out. A simple deepfake, if executed correctly, could destroy someone’s life.
I mean, think about it this way: how will we be able to prove a video is deepfaked in the next 5-10 years? The tech will be near-perfect by then, and distinguishing the real from the fake will be damn near impossible. Imagine this: someone claims to have leaked video of a prominent figure, in which that figure allegedly says they’re planning on invading a country or assassinating someone. Or someone simply creates a deepfake that shows you with a child doing inappropriate things, and the video gets sent to your future employer. How do we even defend against something like that? You get fired for even being in the proximity of controversy.
I mean, look at how people abuse swatting today. Look how people try to cause controversy on TikTok with their gym videos. People are willing to do anything now for attention, and that includes destroying their fellow man in the process. This type of behavior is notorious in the gaming community. A SWAT team does not wait to verify the call before acting, because it cannot. It has to treat every threat or potential threat as real until it has been confirmed to be fake. There have even been lives lost because of this infamous trend. So if a deepfake video showed a regular guy or girl saying they’re going to go shoot up a school, or that they are in possession of a bomb, would you wait to see if the video is fake, or would you react first and investigate later?
Now imagine what chaos deepfake technology can cause in this day and age, where people don’t even go beyond reading the title of an article and readily share bogus articles and blogs. It would be anarchy! Now the question remains: how do we create laws around it? Is it even possible?
Well, those are just my thoughts and opinions on this deepfake stuff. We gotta stay more vigilant if we are to stay safe.