Hackers are luring people into handing over their information with the promise of ChatGPT access.
Meta warned on Wednesday that hackers are exploiting interest in generative artificial intelligence (AI) to trick people into installing dangerous code on their devices.
In a news briefing, the social media giant’s chief information security officer, Guy Rosen, said, “Over the past month, security analysts have found malicious software that pretends to be ChatGPT or other AI tools.”
Rosen added, “The latest wave of malware campaigns has taken note of generative AI technology, which has been capturing everyone’s attention and imagination.”
Rosen noted that the company, which owns WhatsApp, Facebook, and Instagram, regularly shares its findings with its industry peers and the wider cyber defence community.
Rosen said, “Meta has seen threat actors trying to sell internet browser add-ons that claim to offer generative AI capabilities but are actually designed to infect devices with malicious software.”
Hackers can easily snare victims with such “click bait,” luring people into clicking dangerous links or installing programmes that can steal their data and credentials.
Rosen said, “We’ve seen this with other popular topics, like crypto scams, which are fuelled by the huge interest in digital currency.” He added, “From a bad actor’s perspective, ChatGPT is the new crypto.”
According to the company’s security team, Meta has found and blocked more than a thousand web addresses advertised as offering ChatGPT-like tools but actually serving as hacker traps.
Rosen said Meta has not yet seen hackers use generative AI for anything more than bait, but the company is preparing for the day it is turned into a weapon.
“Generative AI holds great promise, and bad actors know it,” he said. “This means we should all be very vigilant to stay safe.”
The tech giant is also working on ways to use ChatGPT for its own defence, including against hacking attacks and deceptive online influence campaigns.
Nathaniel Gleicher, Meta’s head of security policy, said in the briefing, “We already have teams thinking about how [generative AI] could be misused and what protections we need to put in place to counter that.”