A generative AI (GenAI) tool called GhostGPT is being sold to cybercriminals to help them write malware code and phishing emails, Abnormal Security reported in a blog post Thursday.
GhostGPT is marketed as an “uncensored AI” and is likely a wrapper for a jailbroken version of ChatGPT or an open-source GenAI model, the Abnormal Security researchers wrote.
It offers several features attractive to cybercriminals, including a “strict no-logs policy” promising that no records of conversations are kept, and convenient access via a Telegram bot.
“While its promotional materials mention ‘cybersecurity’ as a possible use, this claim is hard to believe, given its availability on cybercrime forums and its focus on BEC [business email compromise] scams,” the Abnormal blog stated. “Such disclaimers seem like a weak attempt to dodge legal accountability – nothing new in the cybercrime world.”
The researchers tested GhostGPT’s capabilities by asking it to write a phishing email impersonating Docusign, and the chatbot responded with a template for a convincing email directing the recipient to click a link to review a document.