Imagine scrolling through your phone on a calm Sunday morning, only to see a video of yourself saying things you’ve never said and doing things you’ve never done. It’s you—your face, your voice—yet it’s all fake, generated using AI.
As AI grows more capable with each new version, it doesn't just learn from us: it mimics us, deceives us, and, at times, erodes our ability to tell what's real from what's fake. Are we prepared for the shadows AI can cast, or is the line between real and unreal already fading?
Deepfakes are increasingly being used for purposes beyond entertainment, including spreading misinformation, taking revenge, and manipulating public opinion. The surge of deepfake videos featuring various public figures has grabbed widespread attention on social media. AI-generated videos of popular Bollywood actors, entrepreneurs, cricketers, and politicians have been widely circulated online.
Among these were two videos featuring India’s biggest Bollywood stars, Aamir Khan and Ranveer Singh, criticizing the incumbent Indian Prime Minister and urging people to vote for the opposition party in the country’s general elections in 2024.
In response, Aamir and Ranveer lodged complaints with the police, and FIRs were registered. Ranveer shared the news with his followers on X (formerly Twitter), writing, "Deepfake se bacho dostonnnn" ("Beware of deepfakes, friends").
Similarly, a video of the renowned South Indian actress Rashmika Mandanna circulated on social media, in which her face was superimposed on another woman's body, dressed in a black workout onesie inside an elevator. The actress took to her Instagram story to address the situation, writing, "I feel really hurt to share this and have to talk about the deepfake video of me being spread online. Something like this is honestly extremely scary not only for me but also for each one of us who today is vulnerable to so much harm because of how technology is being misused."
This is not a matter of just a handful of incidents; many such stories go unheard. Even the Prime Minister of India fell victim to a deepfake video during the Navratri season in 2023, in which he appeared to perform a garba dance. In a speech in New Delhi in November 2023, he cautioned the public about the issue, warned of the serious consequences such content can pose, and asked the media to help educate people.
According to a study released in April 2024 by the cybersecurity company McAfee, nearly one in four Americans (23%) came across a political deepfake they initially believed was real, while 66% are more worried about deepfakes than they were a year ago. More than half of respondents (53%) believe AI has made online scams harder to detect, and 72% of American social media users fail to identify AI-generated content.
In 2023, 43% of respondents reported coming across deepfake content, 26% encountered deepfake scams, and 9% were direct victims, as per the study. Additionally, 31% claimed to have encountered AI voice scams, and 40% saw deepfake content of celebrities that they believed was real. McAfee suggests verifying sources before further sharing information, looking for visual distortions, and using scam protection tools to fight against these serious emerging threats.
The Voices spoke to advocate Puneet Chaturvedi, a senior counsel at the Bombay High Court, to explore the severe repercussions of generative AI and to understand how a victim of AI-generated visuals can navigate the legal framework in India.
Advocate Puneet Chaturvedi explains that deepfakes are a form of digital manipulation in which a person's appearance and voice are altered with computer technology to deceive viewers or listeners. This malpractice, often used to spread misinformation, can harm someone's image, reputation, or career. Increasingly, deepfakes are used to further political agendas and harm others; they are also used to morph the images of actors.
Deepfakes are now used in many more areas and for many more purposes, including acting, art, blackmail, entertainment, fraud, internet hoaxes, pornography, politics, social media, sock puppets (fabricated online personas), and even war.
While India has no dedicated statute, law, rule, or regulation on artificial intelligence or deepfakes, Chaturvedi assures victims that this does not mean the law is toothless or that there is no remedy against this social menace. He explains that until the Parliament of India enacts a dedicated statute, remedies can be found in the existing laws within the Indian legal framework.
These include the Indian Penal Code (IPC), now replaced by the Bharatiya Nyaya Sanhita, which tackles offences such as fraud, forgery, criminal intimidation, and defamation. It also addresses crimes involving the outraging of modesty, conspiracy, and the creation of animosity through the spreading of rumours.
Chaturvedi adds that the Information Technology Act, 2000 extends legal protection by providing measures against the spread of misinformation and the morphing of data. The Indecent Representation of Women (Prohibition) Act, 1986 also prohibits the indecent representation of women in any form, including in films, television, advertisements, publications, writings, paintings, figures, or in any other manner.
In the absence of any dedicated statute on artificial intelligence or deepfakes, Chaturvedi notes that the proposed Digital India Act, which is expected to replace the Information Technology Act, 2000, may include provisions governing AI and deepfakes.
However, he advises individuals who fall victim, or are at risk of falling victim, to seek help from law enforcement agencies immediately. He also recommends pursuing punitive action and relief through the traditional criminal justice system against the misuse of artificial intelligence and deepfakes.
Chaturvedi also stresses the need for public awareness campaigns and educational initiatives to prevent people from falling into these traps. He believes that even young students should be taught in schools about the dangers and legal consequences of deepfake technology.
The right to freedom of speech and expression is not an absolute right. Chaturvedi believes that nuisances and illegalities should not be tolerated under the pretext of freedom of expression and that the said right is subject to a list of reasonable restrictions.
So, even if an individual falls victim to such an AI-generated nuisance, he says, one only has to approach the legal system, and the rest will be taken care of under the best-suited existing laws.
Copyeditor: Vibhuti Landge