AI-powered tools, deepfakes pose misinformation challenge to internet users

Artificial intelligence, deepfakes and social media: little understood by the layperson, the combination of the three poses a formidable challenge to the millions of internet users who try to filter the real from the fake every day.

The fight against misinformation has always been challenging, and it is made even harder as advances in artificial intelligence tools make deepfakes more difficult to detect across social media platforms. AI’s ability to generate fake news at scale, faster than it can be stopped, has worrying consequences.

“In India’s ever-changing information ecosystem, deepfakes have emerged as the new frontier of disinformation, making it difficult for people to distinguish false information from real information,” Syed Nazakat, founder and CEO of DataLEADS, a company that runs information literacy and infodemic management programmes, told PTI.

Read | Using artificial intelligence to improve services, surveys and monitoring: Parliamentary group

India is already battling a flood of misinformation in several Indian languages. This will only get worse as AI bots and tools drive deepfakes across the internet.

“The next generation of AI models, called generative AI, such as DALL-E, ChatGPT and Meta’s Make-A-Video, don’t need source content to convert. Instead, they can generate images, text or video from prompts. These are still in the early stages of development, but one can see the potential for harm, since there would be no original content to use as evidence,” added Azahar Machwe, who works as an enterprise architect for AI at BT.

What are deepfakes?

Deepfakes are photos and videos in which one person’s face has been realistically replaced with someone else’s. Internet users can access many such artificial intelligence tools on their smartphones at little or no cost.

In its simplest form, artificial intelligence can be described as the use of computers to do something that would otherwise require human intelligence. A notable example is the ongoing competition between OpenAI’s ChatGPT (backed by Microsoft) and Google’s Bard.

While both AI tools automatically produce human-level writing, the difference is that Bard uses Google’s Language Model for Dialogue Applications (LaMDA) and can provide responses based on real-time, current information pulled from the internet. ChatGPT uses the Generative Pre-trained Transformer 3 (GPT-3) model, which was trained on data available up to the end of 2021.

Recent examples

Two composite videos and a digitally altered screenshot of a Hindi-language newspaper report were shared on social media platforms including Twitter and Facebook last week, highlighting the unintended consequences of AI tools being used to create altered photos and doctored videos with misleading or false claims.

A synthetic video is any video generated using AI without cameras, actors, and other physical elements.

Read | Is Machine Learning Coming Back?

A video of Microsoft co-founder Bill Gates being cornered during an interview with reporters was shared as real and later found to have been edited. A digitally altered video of US President Joe Biden calling for national conscription (mandatory recruitment of individuals into the armed forces) to fight the war in Ukraine has been shared as authentic. In another instance, an edited photo to look like a report in a Hindi-language newspaper was widely circulated to spread misinformation about migrant workers in Tamil Nadu.

All three instances (two composite videos and a digitally altered screenshot of a Hindi-language newspaper report) were shared on social media platforms by thousands of internet users who believed they were real.

These instances escalated into stories on social and mainstream media, highlighting the unintended consequences of artificial intelligence tools being used to create doctored photos and videos with misleading or false claims.

PTI’s fact-checking team investigated the three claims and debunked them as “deepfakes” and “digital alterations” created with artificial intelligence tools readily available on the internet.

Artificial Intelligence and Fake News

A few years ago, the introduction of artificial intelligence in journalism sparked the promise of revolutionary change in how news is generated and disseminated. AI was also seen as an effective way to curb the spread of fake news and misinformation.

“One weakness of deepfakes is that they require some original content to work with. For example, the Bill Gates video had the original audio overlaid with fake audio. Such videos are relatively easy to debunk if the original content can be identified, but that takes time, effort and the ability to search for the original content,” Azahar told PTI.

Read | How artificial intelligence could disrupt the world more than electricity or the internet

He thinks the deepfakes recently shared on social media were easy to detect, but worries that debunking such synthetic videos will become far more challenging in the days ahead.

“Converting raw video can introduce flaws (such as lighting/shading mismatches) that AI models can be trained to detect. These generated videos are often of lower quality to hide these flaws from algorithms (and humans),” he explained.

According to him, fake news spreads in many forms, and deepfakes created with very basic artificial intelligence tools are relatively easy to debunk.

“But it’s impossible to have 100% accuracy. For example, Intel’s detector promises 96% accuracy, which means 4 out of every 100 deepfakes would still slip through,” he added.
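The arithmetic behind that 96% figure is worth spelling out, because small error rates become large absolute numbers at internet scale. A minimal back-of-the-envelope sketch (illustrative only, assuming the 4% miss rate applies uniformly):

```python
# Illustration of what a 96% detection accuracy implies at scale.
# The 4% figure comes from the accuracy quoted above; the volumes
# below are hypothetical, chosen only to show how misses grow.
accuracy = 0.96
miss_rate = 1 - accuracy

for total_deepfakes in (100, 10_000, 1_000_000):
    missed = round(total_deepfakes * miss_rate)
    print(f"Out of {total_deepfakes:,} deepfakes, about {missed:,} slip through")
```

At a million deepfakes, a 96%-accurate detector still lets roughly 40,000 through, which is why detection alone is not considered a complete answer.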

The road ahead

Most social media platforms claim to reduce the spread of misinformation at the source by building fake-news detection algorithms based on linguistic patterns and crowdsourcing. The aim is to stop misinformation from propagating at all, rather than catching and removing it after the fact.
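To make "detection based on linguistic patterns" concrete, here is a deliberately toy sketch. Real platform detectors are machine-learning models trained on large corpora, not keyword rules; the patterns and function below are invented for illustration only.

```python
import re

# Toy illustration only: real detectors are trained ML models, not
# keyword rules. This sketch counts how many linguistic patterns
# commonly associated with sensational posts appear in a text.
SENSATIONAL_PATTERNS = [
    r"\bshocking\b",
    r"\byou won'?t believe\b",
    r"\bshare before .* delete",
    r"!!+",
]

def sensationalism_score(text: str) -> int:
    """Count distinct sensational patterns found in the text."""
    return sum(
        bool(re.search(pattern, text, re.IGNORECASE))
        for pattern in SENSATIONAL_PATTERNS
    )

print(sensationalism_score("SHOCKING video!!! Share before they delete it"))  # 3
print(sensationalism_score("The weather is mild today"))  # 0
```

A production system would replace the hand-written patterns with learned features and combine the score with crowdsourced signals, but the flagging pipeline has the same shape: score a post, then throttle or review anything above a threshold.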

While the deepfake examples highlight the potential threat of AI in generating fake news, AI and machine learning also give journalism several tools that ease its work, from automated content generation to speech-recognition transcription.

“AI continues to help journalists focus on developing high-quality content, as the technology ensures timely and rapid distribution. Humans in the loop will need to check the consistency and accuracy of content shared in any format: text, images, video, audio and so on,” Azahar said.

Deepfakes should be clearly labelled as “synthetically generated”, especially in India, which had over 700 million smartphone users (aged two years and above) in 2021. A recent Nielsen report stated that rural India had more than 425 million internet users, 44% more than the 295 million using the internet in urban India.

Read | A God in a machine? The rise of artificial intelligence could spawn a new religion

“Humans tend to join ‘echo chambers’ of like-minded people. We need to instil media literacy and critical thinking through basic education to raise awareness and build proactive ways for people to protect themselves from misinformation.

“We need a multi-pronged, cross-sectoral approach in India to prepare people of all ages for today’s and tomorrow’s complex digital environment to be vigilant against deepfakes and disinformation,” Nazakat said.

For a large country like India, the changing information landscape creates greater demand for information literacy skills in all languages. Every educational institution should prioritize information literacy in the next decade, he added.
