Bad actors spreading malware via “AI tools” popularity: Meta 


A report by Meta’s security team disclosed that bad actors are exploiting the popularity of the AI chatbot ChatGPT to lure new victims into their illegal activities.

Meta is the parent company of Facebook, WhatsApp, and Instagram. In 2021, the company rebranded from Facebook to Meta and announced that it would focus on its Virtual Reality (VR) based metaverse project. Last week, Meta CEO Mark Zuckerberg confirmed that the company has also shifted its focus toward Artificial Intelligence (AI) tools.

In its recently published report, Meta’s security team confirmed that it has found 10 malware families posing as ChatGPT, the popular AI chatbot, or related AI tools. In short, bad actors are now using AI chatbot branding, AI tool names, and ChatGPT-themed websites to draw visitors to malicious pages and spread phishing links.

“Since March alone, our security analysts have found around 10 malware families posing as ChatGPT and similar tools to compromise accounts across the internet,” the Meta report read.

The Meta team also noted that some bad actors are creating malicious code, malicious browser extensions, and other malicious tools, some of which are available on official web stores. The actors behind these tools claim to offer AI-related services.

Guy Rosen, Meta’s chief security officer, recently told Reuters that, for bad actors, ChatGPT is the new cryptocurrency.

The report is an example of how some people are using AI tools to build malicious tools and websites, yet many others are using the same tools to improve the internet experience for users.

Recently, a crypto Twitter user shared his story of successfully creating and launching a new meme coin with the help of ChatGPT.

Read also: A guy created a $40M market cap holding meme coin with ChatGPT