Meta's security team is warning users about fake ChatGPT malware that can compromise user accounts and take over business pages. Malware operators and spammers are creating fake versions of AI chatbots like ChatGPT, Bing, and Bard to trick users into trying them. Meta security analysts have found around 10 forms of malware posing as AI chatbot-related tools since March. Some of these fake chatbots come as web browser extensions and toolbars, and some are even available through official web stores that Meta did not name. The Washington Post reported last month on how these fake ChatGPT scams have been spreading through Facebook ads.
Some of the fake ChatGPT malware even has working AI built in to appear more convincing, and Meta has blocked over 1,000 unique links to it so far. The company has also explained how scammers gain access to accounts: by hijacking logged-in sessions and stealing the tokens that keep them active, attackers can maintain access without ever needing a password. This method is similar to the one used in the attack that brought down Linus Tech Tips.
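To make the session-hijacking risk concrete, here is a minimal sketch of why a stolen session cookie is so valuable: the server trusts the cookie alone, so replaying it bypasses the password and any two-factor prompt. This is an illustration, not Meta's or the attackers' actual code, and the URL and cookie name are hypothetical placeholders.

```python
# Illustration only: why a stolen session cookie grants account access.
# Assumes a hypothetical site whose server identifies logged-in users
# solely by a "session_id" cookie; neither the endpoint nor the cookie
# name refers to a real Facebook/Meta API.
import requests

# Cookie value the malware would have exfiltrated from a victim's browser.
stolen_cookies = {"session_id": "abc123deadbeef"}

# Replaying the cookie: the server sees a valid logged-in session and
# never asks for a password or a two-factor code.
resp = requests.get(
    "https://social.example.com/business/settings",
    cookies=stolen_cookies,
    timeout=10,
)
print(resp.status_code)  # 200 would mean the attacker is "logged in"
```

This is also why revoking all active sessions (logging out everywhere) is a standard first step after this kind of compromise: it invalidates the stolen token on the server side.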
Meta is providing a new support flow for businesses that have been hacked or shut down on Facebook. Business pages are typically compromised when individual users with access to them are targeted by malware. To address this, Meta is deploying new Meta work accounts that support organizations' existing, more secure single sign-on credential services, so access to a business page no longer hinges on an employee's personal Facebook login. The hope is that malware like the fake ChatGPT tools will have a harder time attacking businesses once they migrate to the new work accounts.
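For a sense of what the single sign-on model buys, here is a minimal sketch of how a relying service might validate an SSO-issued identity token using the open-source PyJWT library. The issuer URL, JWKS endpoint, and audience below are hypothetical, and this is a generic OpenID Connect-style check, not Meta's actual implementation.

```python
# Sketch: verifying an SSO identity token (OpenID Connect style) with PyJWT.
# The identity provider URL and audience are hypothetical placeholders.
import jwt
from jwt import PyJWKClient

ISSUER = "https://idp.example-corp.com"  # the organization's identity provider
AUDIENCE = "business-tools"              # this service's client ID at the IdP

def validate_sso_token(token: str) -> dict:
    """Return the token's claims if the signature, issuer, audience,
    and expiry all check out; raise jwt.PyJWTError otherwise."""
    # Fetch the IdP's published public keys and pick the one that
    # signed this token (matched via the token's "kid" header).
    jwks = PyJWKClient(f"{ISSUER}/.well-known/jwks.json")
    signing_key = jwks.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
```

Because the credential check happens against the organization's identity provider rather than inside an employee's browser session, a stolen Facebook session cookie on its own becomes far less useful to an attacker.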