‘Rising Misuse of Deep Fakes is a Cause for Concern’

Rohit Srivastwa, a Pune-based cyber security expert, says the rise of deep fakes presents a myriad of risks from misinformation to manipulation. His views:

It is 2024, Artificial Intelligence is spreading like wildfire, and as technology continually pushes boundaries, the emergence of deep fakes has sparked both fascination and fear.

Before we let fear get the better of us, we need to understand what deep fakes are all about. Deep fakes are synthetic media, more often videos than images, created or altered using advanced artificial intelligence techniques. These manipulations are so seamless that they can convincingly depict people saying or doing things they never actually did. A number of them have already been circulating on social media across the globe.

So how exactly are these created? At the heart of deep fakes lies AI that involves training computer algorithms to recognize and generate patterns. To create a deep fake video, for example, algorithms are fed large amounts of data, including footage of the target person. The AI then learns to mimic that person’s facial expressions, mannerisms, and voice, enabling it to superimpose their likeness onto another person in a video.
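
To make the idea concrete, the classic face-swap architecture uses one shared encoder (which keeps only identity-independent information such as pose and expression) and a separate decoder per person. The toy sketch below illustrates only that division of labour; a "face" here is just an (expression, identity) pair, and every name in it is illustrative, not a real deepfake framework.

```python
# Toy sketch of the shared-encoder / per-identity-decoder idea behind
# classic face swaps. Faces are modeled as (expression, identity) pairs;
# all names are illustrative, not a real library API.

def encode(face):
    expression, _identity = face
    return expression              # shared latent: expression only, identity dropped

def make_decoder(identity):
    def decode(latent):
        return (latent, identity)  # re-render the expression in this person's likeness
    return decode

decode_as_b = make_decoder("person_B")

def swap(frame_of_a):
    # A's frame goes through the shared encoder, then B's decoder:
    # the output is B's face performing A's expression.
    return decode_as_b(encode(frame_of_a))
```

In a real system the encoder and decoders are deep neural networks trained on thousands of frames of each person, but the flow is the same: strip the identity out, then paint a different one back in.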

The rise of deep fakes presents a myriad of risks, from misinformation and defamation to political manipulation and fraud. In an era where trust in media and institutions is already fragile, the spread of fabricated content can further erode public trust and sow discord. Imagine the consequences of a deep fake video depicting a world leader declaring war or confessing to a crime – the potential for chaos is staggering.
Leaders and policymakers worldwide have sounded the alarm on the dangers of deep fakes, including Indian Prime Minister Narendra Modi. In a digital age where information warfare is increasingly prevalent, the spread of maliciously manipulated content poses a grave threat to democracy, national security, and individual privacy. Modi’s warning underscores the urgent need for proactive measures to combat this insidious trend.

While the proliferation of deep fakes may seem daunting, the tech community is not without recourse. Several initiatives and tools have emerged to detect and mitigate the impact of synthetic media. These include:

1) Deep Fake Detection Tools: Researchers and tech companies are developing algorithms capable of identifying telltale signs of manipulation in videos and images, such as unnatural facial movements or inconsistencies in audiovisual data.

2) Media Literacy and Education: Empowering individuals with the skills to critically evaluate information can serve as a potent defence against the spread of deep fakes. By promoting media literacy and digital hygiene practices, we can mitigate the impact of misinformation and foster a more discerning public.

As we confront this new frontier of synthetic media powered by AI, it is imperative that we remain vigilant, adaptable, and committed to safeguarding the integrity of information and preserving trust in our digital interactions.

By embracing technological innovation while upholding ethical standards and promoting digital literacy, we can navigate the age of deep fakes, and the many similar challenges to come, with resilience and integrity, ensuring that our digital future remains one of empowerment, transparency, and trust. Stay informed, stay vigilant. It is only then that we can shape a future where truth triumphs over deception.

(The narrator is a serial entrepreneur and recipient of the Microsoft Most Valuable Professional award for enterprise security. He advises law enforcement agencies of various countries on cyber security)

As told to Deepa Gupta

For more details visit us: https://lokmarg.com/

‘Deep Fakes Threat Must Be Fought With Tech & Legal Devices’

Shakti Singh Tanwar, a cyber security & tech expert, says the use of Deep Fakes throws up possibilities that are both fascinating and scary. His views:

In the realm of multimedia graphics, technological advancements have ushered in a new era, one that blurs the lines between reality and fiction. Deep Fakes, a portmanteau of “deep learning” (processing data the way a human brain does) and “fake”, have emerged as a cutting-edge technique in the field of artificial intelligence (AI) and machine learning. As a multimedia graphics expert, I find myself grappling with the implications of this technology, understanding how it is executed, and acknowledging the inherent dangers that come with it.

So, what is a Deep Fake? It is fabricated media (audio/video) created using deep learning models. A model is trained on the voice and mannerisms of an individual to generate fake video or audio of the person concerned. Depending on how well the model is trained, it can be difficult to distinguish fake videos from real ones. One recent example that has been in the news was a fake video of Rashmika Mandanna.

Deep Fakes involve the use of deep learning algorithms to create realistic and often convincing manipulations of audio and visual content. By leveraging powerful neural networks, these algorithms can seamlessly replace faces, voices, or even entire scenarios in videos.

The complex process typically involves training the AI model on vast datasets of images and videos, allowing it to learn the subtle nuances of facial expressions, voice tones, and other distinctive features.

The danger lies not only in the potential for misuse but also in the sophistication of the technology, making it increasingly difficult to distinguish between the authentic and the manipulated content. This has prompted global leaders, including Indian Prime Minister Narendra Modi, to issue warnings about the risks associated with Deep Fakes. Deep Fakes pose a threat to the foundations of trust and authenticity in an increasingly digitalized society.

The creation of Deep Fakes requires a deep understanding of AI, machine learning, and multimedia graphics. Advanced tools, such as generative adversarial networks (GANs), are employed to refine the realism of manipulated content by pitting two neural networks against each other—one generating Deep Fakes, and the other discerning real from fake.
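
The adversarial loop described above can be shown with a deliberately tiny sketch: the "generator" is a single number trying to imitate some real one-dimensional samples, and the "discriminator" is a one-parameter logistic scorer. This is not a real GAN implementation, only the gist of two models pulling against each other; all values here are made up for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Real "data": one-dimensional samples the generator must learn to imitate.
reals = [4.0, 5.0, 6.0]

w, b = 0.0, 0.0    # discriminator parameters: D(x) = sigmoid(w*x + b)
g = 0.0            # generator "output": a single learnable number
lr_d, lr_g = 0.05, 0.5

for _ in range(400):
    # Discriminator step: raise D on real samples, lower it on the fake.
    grad_w = sum((1 - sigmoid(w * x + b)) * x for x in reals) / len(reals)
    grad_b = sum((1 - sigmoid(w * x + b)) for x in reals) / len(reals)
    d_fake = sigmoid(w * g + b)
    grad_w -= d_fake * g
    grad_b -= d_fake
    w += lr_d * grad_w
    b += lr_d * grad_b

    # Generator step: nudge g so the discriminator scores it as "real".
    d_fake = sigmoid(w * g + b)
    g += lr_g * w * (1 - d_fake)
```

After the loop, the generator's output has been dragged from 0 toward the region of the real samples purely by trying to fool the discriminator; in an actual GAN, the same tug-of-war happens over millions of pixel-level parameters instead of one number.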

Artificial Intelligence has been the buzzword for some time now. More than 750 startups have begun working on AI in the last year in the US alone. Given the powers AI possesses, the number is expected to grow. But with great power comes great responsibility.
Some key aspects involved in deep fakes are generating realistic images, videos and audio, face swapping, etc. It is very easy to superimpose one face onto another body, something done so far with Photoshop for fun. But Photoshop results were not convincing, and one could only manipulate still images. With deep fakes we can manipulate full-length videos as well.

Misuse of Deep Fakes is illegal and has huge consequences. In today’s world, which relies heavily on social media tools for information, it is easy to spread hoaxes and panic in society. Many people have recently raised their voices against the misuse of Deep Fakes, including Prime Minister Modi and Bollywood star Amitabh Bachchan.

There have been efforts to develop tools and techniques that can detect Deep Fakes. Some approaches and tools to identify deep fakes are: 1) Microsoft Video Authenticator (not available for general use); 2) Sensity’s deepfake detection platform; 3) Deepware Scanner.

Combatting the misuse of Deep Fake technology demands a multi-faceted approach. One avenue involves developing sophisticated detection tools that can analyze videos and identify anomalies that betray the presence of manipulation. These tools may leverage AI algorithms themselves to scrutinize content for inconsistencies or tampering. Researchers are continually refining these tools to keep pace with the evolving sophistication of Deep Fake technology.

From a regulatory perspective, there is a growing need for laws and policies that address the ethical implications of Deep Fakes. Striking a balance between innovation and safeguarding against malicious use requires a collaborative effort between governments, technology developers, and the wider public. 

My perspective on Deep Fakes is one rooted in both fascination and concern. The power of AI to manipulate audio and visual content opens a realm of creative possibilities, but the potential for misuse demands a vigilant response from the technological community. By developing advanced detection tools, promoting media literacy, and establishing ethical guidelines, we can work towards harnessing the potential of Deep Fake technology responsibly and preserving the integrity of our digital reality.

As told to Deepti Sharma

‘Govt Must Wake Up And Take Note Of Deep Fake Menace’

Nikhil Kumar, a tech-savvy professional and CEO of digital platform Simply Cue, says the menace of Deep Fakes and the misuse of AI require a coordinated effort from the state and cyber experts. His views:

A couple of short videos recently caused outrage among netizens when they showed Bollywood celebrities Rashmika Mandanna, Katrina Kaif and Kajol Devgan in skimpy outfits in their private space. It turned out that these clips were Deep Fakes, meaning someone mischievously placed the faces of these film stars over someone else’s by using digital tools and encoders. These tools used Artificial Intelligence for digital imposition in such a way that most viewers could not detect the manipulation.

Welcome to the latest social menace. How far these deep fakes can be misused for crime is yet to be evaluated, but they can certainly cause untold humiliation to a public figure by tarnishing her or his image, even if temporarily. Besides, it is a breach of an individual’s right to privacy. Malicious elements can easily mine publicly available images or data and create misinformation. Such fake content can then be disseminated further via open social media platforms to cause disharmony and chaos.

Clearly, we have a big problem at hand. Recently, even Prime Minister Narendra Modi expressed his concern over Deep Fake videos and other misleading content that gullible audiences consume as true. Thus, the first requirement will be to create a uniform standard that can first detect such content and then block it from being shared or spread on public channels like YouTube, Facebook, X, etc. For this to happen, both the state and the social media giants should be on the same page, as it will require both legal and digital firewalls.
The next step should be social. The netizens and social media users must be made aware that they should not become unpaid mules of such disinformation and must use discretion before sharing any content which is dubious in nature.

Amid the possible measures to counter the menace, various tools and technologies have been developed over the recent past. These tools mainly rely on computer vision techniques, analyzing facial inconsistencies or using AI algorithms to identify anomalies in audio and visual content. Examples include Microsoft’s Video Authenticator and Intel’s FakeCatcher.
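
One simple flavour of such anomaly detection is temporal consistency checking: a tracked facial landmark should move smoothly from frame to frame, and a splice or swap often produces a sudden jump. The sketch below is a minimal, hypothetical illustration of that idea on a one-dimensional landmark track; production detectors are far more sophisticated, and this is not the method used by any named tool above.

```python
# Flag frames whose landmark position jumps far more than is typical
# for the clip. "signal" is one tracked coordinate per frame; a frame
# is flagged when its frame-to-frame change exceeds k times the
# median change across the clip.

def flag_temporal_anomalies(signal, k=4.0):
    deltas = [abs(b - a) for a, b in zip(signal, signal[1:])]
    median = sorted(deltas)[len(deltas) // 2]
    threshold = k * max(median, 1e-9)   # guard against an all-zero track
    return [i + 1 for i, d in enumerate(deltas) if d > threshold]
```

Feeding in a smooth track with one injected glitch flags the glitched frame (and the snap-back after it), which is exactly the kind of inconsistency a manipulated segment can introduce.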

Eventually, as Deep Fake technology evolves, so will the detection methods, creating an ongoing challenge in staying ahead of deceptive techniques. So for a long time to come this will remain a cat-and-mouse game. The stress should be on penalizing such actions and bringing in new laws to deal effectively with such misdeeds.

As responsible citizens of our country, we also need to contribute our bit: remain aware of such things and stay informed about Deep Fakes. As we are in an era of depending on social media (WhatsApp, Twitter, Facebook, Instagram, etc) for our newsfeed and information, we need to be able to separate the grain from the chaff. The best source of authentic information is a newspaper, as it is printed in hard copy and the information published in it cannot be erased or changed. Those hooked to digital portals should use only trustworthy names for their news and other information.

As told to Rajat Rai