‘Rising Misuse of Deep Fakes is a Cause for Concern’

Rohit Srivastwa, a Pune-based cyber security expert, says the rise of deep fakes presents a myriad of risks, from misinformation to manipulation. His views:

It is 2024, Artificial Intelligence is spreading like wildfire, and as technology continually pushes boundaries, the emergence of deep fakes has sparked both fascination and fear.

Before we let fear get the better of us, we need to understand what deep fakes are all about. Deep fakes refer to synthetic media, more often videos than images, that are created or altered using advanced artificial intelligence techniques. These manipulations are so seamless that they can convincingly depict people saying or doing things they never actually did. We have already seen a spate of them on social media across the globe.

So how exactly are these created? At the heart of deep fakes lies machine learning, which involves training computer algorithms to recognize and generate patterns. To create a deep fake video, for example, the algorithms are fed large amounts of data, including footage of the target person. The AI then learns to mimic that person’s facial expressions, mannerisms, and voice, enabling it to superimpose their likeness onto another person in a video.
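For technically minded readers, here is a minimal, purely illustrative Python (PyTorch) sketch of the shared-encoder, two-decoder idea behind many face-swap systems. It trains on random stand-in tensors rather than real footage, and names such as FaceSwapToy are invented for this example; it shows the principle only, not a working deep fake pipeline.

```python
# Toy sketch of the face-swap principle: one shared encoder, two decoders.
# All data here are random stand-ins; this is NOT a usable deep fake system.
import torch
import torch.nn as nn

class FaceSwapToy(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # A single encoder learns a shared representation of "a face".
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        # Two decoders: one reconstructs person A, the other person B.
        self.decoder_a = self._make_decoder(latent_dim)
        self.decoder_b = self._make_decoder(latent_dim)

    def _make_decoder(self, latent_dim):
        return nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x, identity):
        z = self.encoder(x)
        return self.decoder_a(z) if identity == "a" else self.decoder_b(z)

model = FaceSwapToy()
faces_a = torch.rand(8, 3, 64, 64)  # stand-in face crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in face crops of person B
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Each decoder is trained only to reconstruct its own person.
for _ in range(2):  # a couple of toy training steps
    loss = loss_fn(model(faces_a, "a"), faces_a) + loss_fn(model(faces_b, "b"), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At "swap" time, encoding A's face and decoding with B's decoder renders
# B's likeness with A's expression and pose.
swapped = model(faces_a, "b")
print(swapped.shape)  # torch.Size([8, 3, 64, 64])
```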

The rise of deep fakes presents a myriad of risks, from misinformation and defamation to political manipulation and fraud. In an era where trust in media and institutions is already fragile, the spread of fabricated content can further erode public trust and sow discord. Imagine the consequences of a deep fake video depicting a world leader declaring war or confessing to a crime – the potential for chaos is staggering.


Leaders and policymakers worldwide, including Indian Prime Minister Narendra Modi, have sounded the alarm on the dangers of deep fakes. In a digital age where information warfare is increasingly prevalent, the spread of maliciously manipulated content poses a grave threat to democracy, national security, and individual privacy. Modi’s warning underscores the urgent need for proactive measures to combat this insidious trend.

While the proliferation of deep fakes may seem daunting, the tech community is not without recourse. Several initiatives and tools have emerged to detect and mitigate the impact of synthetic media. These include:

1) Deep Fake Detection Tools: Researchers and tech companies are developing algorithms capable of identifying telltale signs of manipulation in videos and images, such as unnatural facial movements or inconsistencies in audiovisual data (a toy illustration of this idea follows this list).

2) Media Literacy and Education: Empowering individuals with the skills to critically evaluate information can serve as a potent defence against the spread of deep fakes. By promoting media literacy and digital hygiene practices, we can mitigate the impact of misinformation and foster a more discerning public.
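As a rough illustration of the first point, the Python sketch below measures how much the detected face region changes from frame to frame and flags abrupt jumps, one crude form of the temporal inconsistency that real detectors look for in far more sophisticated ways. The video file name and the three-sigma threshold are placeholder assumptions, and a heuristic this simple is easily fooled; it only shows the shape of the approach.

```python
# Crude illustrative heuristic, not a real deep fake detector:
# flag frames where the face region changes unusually sharply.
import cv2
import numpy as np

# OpenCV ships this pretrained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def face_change_scores(path, size=(128, 128)):
    """Mean absolute change of the (first) detected face between frames."""
    cap = cv2.VideoCapture(path)
    prev, scores = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        crop = cv2.resize(gray[y:y + h, x:x + w], size).astype(np.float32)
        if prev is not None:
            scores.append(float(np.mean(np.abs(crop - prev))))
        prev = crop
    cap.release()
    return scores

scores = face_change_scores("sample.mp4")  # placeholder file name
if scores:
    mean, std = np.mean(scores), np.std(scores)
    # Flag frames whose change is far above the video's own baseline.
    suspicious = [i for i, s in enumerate(scores) if s > mean + 3 * std]
    print("Frames with unusually abrupt facial change:", suspicious)
```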

As we confront this new frontier of synthetic media powered by AI, it is imperative that we remain vigilant, adaptable, and committed to safeguarding the integrity of information and preserving trust in our digital interactions.

By embracing technological innovation while upholding ethical standards and promoting digital literacy, we can navigate the age of deep fakes, and whatever comes after it, with resilience and integrity, ensuring that our digital future remains one of empowerment, transparency, and trust. Stay informed, stay vigilant. It is only then that we can shape a future where truth triumphs over deception.

(The narrator is a serial entrepreneur and recipient of the Microsoft Most Valuable Professional award for enterprise security. He advises law enforcement agencies of various countries on cyber security)

As told to Deepa Gupta


‘Govt Must Wake Up And Take Note Of Deep Fake Menace’

Nikhil Kumar, a tech-savvy professional and CEO of the digital platform Simply Cue, says the menace of Deep Fakes and the misuse of AI require a coordinated effort from the state and cyber experts

A couple of short videos recently caused outrage among netizens when they showed Bollywood celebrities Rashmika Mandanna, Katrina Kaif and Kajol Devgan in skimpy outfits in private settings. It turned out that these clips were Deep Fakes, meaning someone had mischievously superimposed the faces of these film stars onto other people using digital tools and encoders. These tools use Artificial Intelligence technology for digital superimposition in such a way that most viewers cannot detect the manipulation.

Welcome to the latest social menace. How far these Deep Fakes can be misused for crime is yet to be evaluated, but they can certainly cause untold humiliation to a public figure by tarnishing her or his image, even if only temporarily. Besides, it is a breach of an individual’s right to privacy. Malicious elements can easily mine publicly available images or data and create misinformation. Such fake content can then be disseminated widely via open social media platforms to cause disharmony and chaos.

Clearly, we have a big problem at hand. Recently, even Prime Minister Narendra Modi expressed his concern over Deep Fake videos and other misleading content that gullible audiences consume as true. Thus, the first requirement will be a uniform standard that can first detect such content and then prevent it from being shared or spread on public channels like YouTube, Facebook, X etc. For this to happen, both the state and the social media giants must be on the same page, as it will require both legal and digital firewalls.


The next step should be social. Netizens and social media users must be made aware that they should not become unpaid mules for such disinformation, and must use discretion before sharing any content that is dubious in nature.

Among the possible measures to counter the menace are various tools and technologies developed over the recent past. These tools rely largely on computer vision techniques, analyzing facial inconsistencies or using AI algorithms to identify anomalies in audio and visual content. Examples include Microsoft’s Video Authenticator and Intel’s FakeCatcher.
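Without claiming to reproduce how Video Authenticator or FakeCatcher work internally, the small Python sketch below illustrates one family of signals such tools can draw on: unusual energy in the high-frequency part of an image’s spectrum, which some generated images exhibit. The file name and the cut-off value are placeholders, and a single number like this is at best a weak, spoofable hint rather than a verdict.

```python
# Toy illustration only: fraction of an image's spectral energy that sits
# in high frequencies, a crude proxy for the artefacts some detectors use.
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path, cutoff=0.25):
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    # Power spectrum, shifted so the zero frequency sits at the centre.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalised distance of each frequency bin from the spectrum centre.
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (min(h, w) / 2)
    return spectrum[radius > cutoff].sum() / spectrum.sum()

# Comparing a suspect frame against known-authentic footage of the same
# person (file name is a placeholder) gives, at best, a rough signal.
print(high_freq_energy_ratio("suspect_frame.png"))
```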

Eventually, as Deep Fake technology evolves, so will the detection methods, creating an ongoing challenge in staying ahead of deceptive techniques. So, for a long time to come, this will remain a cat-and-mouse game. The stress should be on penalizing such actions and bringing in new laws to deal effectively with such misdeeds.

As responsible citizens, we also need to contribute our bit by staying aware and informed about Deep Fakes. As we now depend on social media (WhatsApp, Twitter, Facebook, Instagram, etc) for our newsfeed and information, we need to be able to separate the grain from the chaff. The best source of authentic information is a newspaper, as it is printed in hard copy and the information published in it cannot be erased or changed. Those hooked to digital portals should rely only on trustworthy names for their news and other information.

As told to Rajat Rai
