Over 100,000 ChatGPT accounts stolen via info-stealing malware

According to Bleeping Computer, more than 101,000 ChatGPT user accounts have been stolen by information-stealing malware over the past year, based on dark web marketplace data.

Cyberintelligence firm Group-IB reports identifying more than 100,000 info-stealer logs containing ChatGPT credentials on various underground websites, with activity peaking in May 2023, when threat actors posted 26,800 new ChatGPT credential pairs. Asia-Pacific was the most targeted region, with almost 41,000 compromised accounts between June 2022 and May 2023; Europe had nearly 17,000, and North America ranked fifth with 4,700. Information stealers are a category of malware that targets account data stored in applications such as email clients, web browsers, instant messengers, gaming services, and cryptocurrency wallets.

Users may not realize that their ChatGPT accounts can hold a great deal of sensitive information that cybercriminals seek. By default, ChatGPT stores every input request, and that history can be viewed by anyone with access to the account. Info stealers are also becoming more prominent in ChatGPT compromises and are even offered through malware-as-a-service schemes. They scour a compromised system for valuable digital assets such as cryptocurrency wallet records, access credentials and passwords, and saved browser logins.

Because a regular user on the free tier has no option to enable 2FA/MFA, the service is all the more vulnerable. It is therefore wise to disable the chat-saving feature unless absolutely necessary, and to sign in through the single sign-on provider you trust most (currently Google, Microsoft, or Apple), which does use 2FA. The more data chatbots are fed, the more attractive they become to threat actors, so think carefully about what information you enter into cloud-based chatbots and other services.
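One practical way to act on that advice is to scrub obvious secrets from text before it ever reaches a chatbot prompt. The minimal sketch below is an illustration only: the pattern names and regexes are assumptions for a few common secret formats, not a complete or reliable secret-detection list.

```python
import re

# Illustrative patterns only -- an assumption for this sketch,
# not an exhaustive catalogue of sensitive-data formats.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key ID shape
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def redact(text: str) -> str:
    """Replace likely secrets with labelled placeholders before
    the text is pasted into a cloud-based chatbot."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Contact bob@example.com, key AKIAABCDEFGHIJKLMNOP"))
# -> Contact [REDACTED EMAIL], key [REDACTED AWS_KEY]
```

A pre-processing step like this does not make chat history safe to store, but it reduces what an attacker gains if the account's saved conversations are later exposed.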