Artificial Intelligence (AI) is growing by leaps and bounds and touches almost every aspect of our lives, from AI-controlled autopilots in planes to the spam filter in your email and the shopping suggestions on websites.
One of the main strengths of AI lies in its ability to train itself. This is being used to our advantage in technologies such as autonomous vehicles and medical diagnostics. However, the same technology can easily be misused to create fear and doubt among the public, with far-reaching effects on society. One of the most prominent examples of such misuse is the propagation of Deepfakes.
A Deepfake is a video manipulated using AI, often so realistic that it takes an expert to tell it apart from the genuine article. Deepfake is to video what Photoshop is to still images.
The “Deep” comes from deep learning, a Machine Learning (ML) technique built on artificial neural networks loosely inspired by the human brain. The idea is to train the AI using a GAN (Generative Adversarial Network). Essentially, two ML models are pitted against each other: one creates the Deepfake while the other tries to catch it out. Once the fake video has improved to the point where the second model can no longer detect the forgery, the Deepfake is ready to be used. The GAN technique was invented in 2014 by Ian Goodfellow, then a PhD student.
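To make the adversarial idea concrete, here is a minimal, illustrative training-loop sketch in PyTorch. It is only a toy version of the two-model contest described above, not a real Deepfake pipeline: the network sizes, learning rates and data dimensions are placeholder assumptions, and actual face-swap tools train far larger image models.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64   # illustrative sizes, not taken from any real tool

# Generator: turns random noise into a fake sample (a stand-in for a fake frame)
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake)
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_batch):
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real samples from generated ones
    noise = torch.randn(batch, latent_dim)
    fake_batch = G(noise).detach()           # freeze G while updating D
    d_loss = loss_fn(D(real_batch), real_labels) + loss_fn(D(fake_batch), fake_labels)
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # 2) Train the generator to fool the discriminator
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(D(G(noise)), real_labels)  # G is rewarded when D says "real"
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()

# Example: one step on a dummy batch of "real" data
d_l, g_l = train_step(torch.randn(32, data_dim))
```

The key point of the design is the alternating update: the discriminator learns to separate real from generated samples, while the generator learns to produce samples the discriminator accepts as real, which is exactly the contest the article describes.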
The ability to create tampered videos used to be limited to a select few in the entertainment industry or to sophisticated government organizations. Deepfake has placed that ability in the hands of anyone so inclined. All they need is a free software app such as Faceswap and a few photos of the target, downloaded from social media or elsewhere, and within a few hours they can create a Deepfake video. Producing a quality video does require some skill, perseverance and access to real photos or footage of the person, but these are freely available on social media and across the internet, especially for celebrities and politicians.
Initially, its use was limited to pasting famous faces onto porn videos or to personal revenge porn. However, it has steadily progressed from targeting individuals to being used for political gain by opponents, or even rogue nations, as part of computational propaganda. Marco Rubio, the Republican senator from Florida and 2016 presidential candidate, dramatically called Deepfakes the modern equivalent of nuclear weapons.
Politically motivated individuals, groups or even states use Deepfakes to launch coordinated campaigns, driving a point home and influencing public opinion by posting disturbing videos online. The story is then spread by thousands of associates, or even automated bots, to make it go viral. By the time it is verified and taken down, if ever, a section of society has already been permanently influenced.
In today’s connected world of social media, just as it is hard to suppress the truth, it is easy to spread lies. A Deepfake can create artificial panic among the public or in the financial markets. It may sway votes and even cost an election if, say, a video is released just before polling day, too late to be validated and called out before ballots are cast. The US is fighting hard to prevent exactly this in the November 2020 Presidential election. Meanwhile, less developed countries, or those with tighter government controls, are able to feed and propagate lies to citizens who have no means of knowing otherwise.
A fired Facebook data scientist’s internal memo, published by BuzzFeed News, spoke of finding “multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry, and caused international news on multiple occasions. I have personally made decisions that affected national presidents without oversight and taken action to enforce against so many prominent politicians globally that I’ve lost count.”
The memo details how the social network knew that leaders of countries around the world were using its platform to manipulate voters, and failed to act, ignoring global political manipulation. It illustrates the challenge facing individuals, nations (especially democracies) and the social media platforms themselves. Current efforts to control the problem are uncoordinated and slapdash.
A couple of years back, BuzzFeed produced a Deepfake video featuring former US President Barack Obama and comedian Jordan Peele, made with After Effects CC and FakeApp, to raise public awareness of the technology. Peele, looking like President Obama, delivered a startling public announcement: “You are entering an era in which our enemies could make it look like we’re saying anything at any point in time, even if they would never say those things. So, for instance, like you can have me say things, like, I don’t know… Killmonger was right.” Seconds later, he appeared side by side with Obama, revealing that the video was fake.
Deepfakes, per se, are a technological advancement and have many positive uses in industries such as gaming, fashion and entertainment. They may one day let us play games with our own avatars, and they can be used for dubbing in films. David Beckham helped launch a global appeal to end malaria with a Deepfake awareness video in which he spoke in nine languages.
Will the entertainment industry reach a stage where actors’ faces are the only real thing, with their bodies and voices entirely synthetic? Maybe. However, there are moral and legal downsides, such as using the likeness of actors who have died, without their consent.
The state of California, realizing the severity of the impact, has made them illegal, but Deepfakes remain legal elsewhere. In the absence of official direction, many platforms are taking a stand against them on their own: they do not want to encourage the behaviour and would rather avoid the moral dilemma altogether.
We are attempting to regulate modern technology with outdated laws. Any new legislation will have to weigh the benefits of this AI technology against freedom of expression, IP rights and privacy laws; it cannot be a blanket ban. With the technology and its uses still evolving, it will take time to research the advantages and implications before sound laws can be framed.
Meanwhile, we need to be careful and critical about every video we watch, especially one that seems too radical or sensational. A bad Deepfake often gives itself away through a lack of eye blinking, wrong shadows or unnatural movements, but a good one takes much more sophisticated analysis to detect.
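For readers who want to experiment, below is a rough sketch of one of the crude checks mentioned above: counting eye blinks in a clip. It uses the well-known eye-aspect-ratio heuristic with the dlib and OpenCV libraries; the threshold value is an assumed placeholder, the 68-point landmark model must be downloaded separately, and a low blink count is at best a weak hint, never proof of forgery.

```python
# Crude blink-count check for a video clip (eye-aspect-ratio heuristic).
# Requires: opencv-python, dlib, scipy, and the standard dlib 68-point
# landmark model file "shape_predictor_68_face_landmarks.dat" (external download).
import cv2
import dlib
from scipy.spatial import distance

EAR_BLINK_THRESHOLD = 0.21   # assumed cut-off: below this the eye is treated as closed

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    # pts: six (x, y) landmarks around one eye, in dlib's 68-point ordering
    vertical = distance.euclidean(pts[1], pts[5]) + distance.euclidean(pts[2], pts[4])
    horizontal = distance.euclidean(pts[0], pts[3])
    return vertical / (2.0 * horizontal)

def count_blinks(video_path):
    """Return (blink_count, frames_with_a_face) for a video file."""
    cap = cv2.VideoCapture(video_path)
    blinks, face_frames, eye_closed = 0, 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            continue
        face_frames += 1
        shape = predictor(gray, faces[0])
        left = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
        right = [(shape.part(i).x, shape.part(i).y) for i in range(42, 48)]
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        if ear < EAR_BLINK_THRESHOLD:
            eye_closed = True
        elif eye_closed:          # eye has re-opened: count one blink
            blinks += 1
            eye_closed = False
    cap.release()
    return blinks, face_frames
```

A real person on camera typically blinks several times a minute, so a long talking-head clip with almost no detected blinks is worth a second, more careful look.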
It is worth checking the authenticity of a video before believing it or posting it on social media. In this world of fake news and Deepfake videos, our vigilance is the only armour we have to protect ourselves and the society we live in.