In November 2023, Sophos X-Ops published research exploring threat actors' attitudes towards generative AI, focusing on discussions on selected cybercrime forums. While we did note a limited amount of innovation and aspiration in those discussions, there was also a lot of skepticism.
Given the pace at which generative AI is evolving, we thought we'd take a fresh look to see if anything has changed in the past year.
We noted that there does appear to have been a small shift, at least on the forums we investigated; a handful of threat actors are beginning to incorporate generative AI into their toolboxes. This mostly applied to spamming, open-source intelligence (OSINT), and, to a lesser extent, social engineering (although it's worth noting that Chinese-language cybercrime groups conducting 'sha zhu pan' fraud campaigns make frequent use of AI, particularly to generate text and images).
However, as before, many threat actors on cybercrime forums remain skeptical about AI. Discussions about it are limited in number compared to 'traditional' topics such as malware and Access-as-a-Service. Many posts focus on jailbreaks and prompts, both of which are commonly shared on social media and other sites.
We only saw a few primitive and low-quality attempts to develop malware, attack tools, and exploits – which in some cases led to criticism from other users, disputes, and accusations of scamming (see our four-part series on the strange ecosystem of cybercriminals scamming each other).
There was some evidence of innovative ideas, but these were purely aspirational; sharing links to legitimate research tools and GitHub repositories was more common. As we found last year, some users are also using AI to automate routine tasks, but the consensus seems to be that most don't rely on it for anything more complex.
Interestingly, we also noted cybercriminals adopting generative AI for use on the forums themselves, to create posts and for non-security extracurricular activities. In one case, a threat actor confessed to talking to a GPT daily for almost two years, in an attempt to help them cope with their loneliness.
Statistics
As was the case a year ago, AI still doesn't appear to be a hot topic among threat actors, at least not on the forums we examined. On one prominent Russian-language forum and marketplace, for example, we saw fewer than 150 posts about GPTs or LLMs in the last year, compared to more than 1,000 posts on cryptocurrency and over 600 threads in the 'Access' section (where access to networks is bought and sold) in the same period.
Another prominent Russian-language cybercrime site has a dedicated AI area, in operation since 2019 – but there are fewer than 300 threads there at the time of this writing, compared to over 700 threads in the 'Malware' section and more than 1,700 threads in the 'Access' section in the last year. That said, while AI topics have some catching up to do, one could argue that this is relatively fast growth for a subject that has only become widely known in the last two years and is still in its infancy.
A popular English-language cybercrime forum, which specializes in data breaches, had more AI-related posts. However, these were predominantly centered around jailbreaks, tutorials, or stolen/compromised ChatGPT accounts for sale.
It seems, at least for the moment, that many threat actors are still focused on 'business as usual,' and are only really exploring generative AI in the context of experimentation and proofs-of-concept.
Malicious development
GPT derivatives
In November 2023, we reported on ten 'GPT derivatives', including WormGPT, FraudGPT, and others. Their developers typically advertised them as GPTs designed specifically for cybercrime – although some users alleged that they were merely jailbroken versions of ChatGPT and similar tools, or custom prompts.
In the last year, we saw only three new examples on the forums we researched:
- Ev1L-AI: Advertised as a free alternative to WormGPT, Ev1L-AI was promoted on an English-language cybercrime forum, but forum staff noted that the provided link was not working
- NanoGPT: Described as a "non-limited AI based on the GPT-J-6 architecture," NanoGPT is apparently a work in progress, trained on "some GitHub scripts of some malwares [sic], phishing pages, and more…" The current status of this project is unclear
- HackerGPT: We saw several posts about this tool, which is publicly available on GitHub and described as "an autonomous penetration testing tool." We noted that the provided domain has now expired (although the GitHub repository appears to still be live as of this writing, as does another domain), and saw a rather scathing response from another user: "No different with [sic] normal chatgpt."
Figure 1: A threat actor advertises 'Ev1l-AI' on a cybercrime forum
Figure 2: On another cybercrime forum, a threat actor provides a link to 'HackerGPT'
Spamming and scamming
Some threat actors on the forums seem increasingly interested in using generative AI for spamming and scamming. We saw several examples of cybercriminals offering tips and asking for advice on this topic, including using GPTs to create phishing emails and spam SMS messages.
Figure 3: A threat actor shares advice on using GPTs for sending bulk emails
Figure 4: A threat actor offers some tips for SMS spamming, including advice to "ask chatgpt for synonyms"
Interestingly, we also saw what appears to be a commercial spamming service using ChatGPT, although the poster didn't provide a price:
Figure 5: An advertisement for a spamming service leveraging ChatGPT
Another tool, Bluepony – which we saw a threat actor, ostensibly the developer, sharing for free – claims to be a web mailer, with the ability to generate spam and phishing emails:
Figure 6: A user on a cybercrime forum offers to share 'Bluepony.' The text, translated from Russian, reads: "Hello to all, we have decided not to hide in the shadows like ghouls anymore and to show ourselves to the world and come out of private, to step out into the public light, in order to provide a completely free version of Bluepony. Webmailer – works mainly on requests based on BAS, there are small moments when GMAIL needs authorization via a browser, but we try to do it as quickly as possible. In the free version, 1 thread will be available, but even with 1 thread on requests it shoots like a machine gun. Bluepony Free works with such domains as: Aol, Yahoo, Gmail, Mail.com, Gmx.com, Web.de, Mail.ru, Outlook, Zoho and even SMTP (we will work on it here). In the future, we will add more domains. Some domains may fall off, but we try to fix them urgently, because they also don't stand still and can add all sorts of things. The mailer has OPENai gpt [emphasis added], you can generate emails and images, html emails… a bunch of settings and moments, so you can use AI during the mailing, you describe the required topic and details in the prompt and receive a 100% generated email during the mailing itself."
Some threat actors may also be using AI to better target victims who speak other languages. For instance, in a social engineering area of one forum, we saw a user discussing the quality of various tools, including ChatGPT, for translating between Russian and English:
Figure 7: A threat actor starts a discussion about the quality of various tools, including AI, for translation
OSINT
We came across one post in which a threat actor stated that they used AI for conducting open-source intelligence (OSINT), although they admitted that they only used it to save time. While the poster didn't provide any further context, cybercriminals perform OSINT for several reasons, including 'doxing' victims and conducting reconnaissance against companies they plan to attack:
I've been using neural networks for OSINT for a long time. However, if we talk about LLMs and the like, they cannot completely replace a person in the process of searching for and analyzing information. The most they can do is prompt and help analyze information based on the data you enter into them, but you need to know how and what to enter and double-check everything behind them. The most they can be is just an assistant that helps save time.
Personally, I like neurosearch systems more, such as Yandex neurosearch and similar ones. At the same time, services like Bard/Gemini don't always cope with the tasks set, since there are often a lot of hallucinations and the capabilities are very limited. (Translated from Russian.)
Malware, scripts, and exploits
As we noted in our previous report, most threat actors don't yet appear to be using AI to create viable, commodified malware and exploits. Instead, they're creating experimental proofs-of-concept, often for trivial tasks, and sharing them on forums:
Figure 8: A threat actor shares code for a 'Netflix Checker Tool', written in Python "with the help of ChatGpt"
We also saw threat actors sharing GPT-related tools from other sources, such as GitHub:
Figure 9: A threat actor shares a link to a GitHub repository
A further example of threat actors sharing legitimate research tools was a post about Red Reaper, a tool originally presented at RSA 2024, which uses LLMs to identify 'exploitable' sensitive communications from datasets:
Figure 10: A threat actor shares a link to the GitHub repository for Red Reaper v2
As with other security tooling, threat actors are likely to weaponize legitimate AI research and tools for illicit ends, in addition to, or instead of, developing their own solutions.
Aspirations
However, much discussion around AI-enabled malware and attack tools is still aspirational, at least on the forums we explored. For example, we saw a post titled "The world's first AI-powered autonomous C2," only for the author to then admit that "this is still just a product of my imagination for now."
Figure 11: A threat actor promises "the world's first AI-powered autonomous C2," before conceding that the tool is "a product of my imagination" and that "the technology to create such an autonomous system is still in the early research stages…"
Another threat actor asked their peers about the feasibility of using "voice cloning for extortion of Politicians and large crypto influencers." In response, a user accused them of being a federal agent.
Figure 12: On a cybercrime forum, a user asks for recommendations for voice-cloning projects in order to extort people, only to be accused by another user of being an FBI agent
Tangential usage
Interestingly, some cybercrime forum discussions around AI weren't related to security at all. We saw several examples of this, including a guide on using GPTs to write a book, and recommendations for various AI tools to create "high quality videos."
Figure 13: A user on a cybercrime forum shares generative AI prompts for writing a book
Of all the non-security discussions we saw, a particularly interesting one was a thread by a threat actor who claimed to feel alone and isolated because of their occupation. Perhaps because of this, the threat actor claimed that they had for "almost the last 2 years…been talking everyday [sic] to GPT4" because they felt as if they couldn't talk to people.
Figure 14: A threat actor gets deep on a cybercrime forum, confessing to talking to GPT4 in an attempt to reduce their sense of isolation
As one user noted, this is "bad for your opsec [operational security]," and the original poster agreed in a response, stating that "you're right, it's opsec suicide for me to tell a robot that has a partnership with Microsoft about my life and my problems."
We're neither qualified nor inclined to comment on the psychology of threat actors, or on the societal implications of people discussing their mental health issues with chatbots – and, of course, there's no way of verifying that the poster is being truthful. However, this case, and others in this section, may suggest that a) threat actors aren't only applying AI to security topics, and b) discussions on criminal forums sometimes go beyond transactional cybercrime, and can provide insights into threat actors' backgrounds, extracurricular activities, and lives.
Forum usage
In our previous article, we identified something interesting: threat actors seeking to augment their own forums with AI contributions. Our latest research revealed further instances of this, which often led to criticism from other forum users.
On one English-language forum, for example, a user suggested creating a forum LLM chatbot – something that at least one Russian-language marketplace has already done. Another user was not particularly receptive to the idea.
Figure 15: A threat actor suggests that their cybercrime forum should have its own LLM, an idea which is given short shrift by another user
Stale copypasta
We saw several threads in which users accused others of using AI to generate posts or code, typically with derision and/or amusement.
For example, one user posted an extremely long message entitled "How AI Malware Works":
Figure 16: A threat actor gets verbose on a cybercrime forum
In a pithy response, a threat actor replied with a screenshot from an AI detector and the message "Looked exactly like ChatGPT [sic] output. Embarrassing…"
Figure 17: One threat actor calls out another for copying and pasting from a GPT tool
In another example, a user shared code for malware they had supposedly written, only to be accused by a prominent user of generating the code with ChatGPT.
Figure 18: A threat actor calls out specific technical errors in another user's code, accusing them of using ChatGPT
In a later post in the same thread, this user wrote that "the thing you are doing wrong is misleading noobs with the code that doesn't work and doesn't really makes [sic] a lot of sense…this code was just generated with ChatGPT or something."
In another thread, the same user advised another to "stop copy pasting ChatGPT to the forum, it's useless."
As these incidents suggest, it's reasonable to assume that AI-generated contributions – whether in text or in code – aren't always welcomed on cybercrime forums. As in other fields, such contributions are often perceived – rightly or wrongly – as being the preserve of lazy and/or low-skilled individuals looking for shortcuts.
Scams
In several cases, we noted threat actors accusing others of using AI in the context of forum scams – either when making posts within arbitration threads, or when producing code and/or tools which were later the subject of arbitration threads.
Arbitration, as we explain in the series of articles linked above, is a process on criminal forums for when a user thinks they've been cheated or scammed by another. The claimant opens an arbitration thread in a dedicated area of the forum, and the accused is given an opportunity to defend themselves or provide a refund. Moderators and administrators serve as arbiters.
Figure 19: During an arbitration dispute on a cybercrime forum (concerning the sale of a tool to check for valid Brazilian identity numbers), the claimant accuses the defendant of using ChatGPT to generate their explanation
Figure 20: In another arbitration thread (this one concerning the validity of a purchased dataset) on a different forum, a claimant also accuses the defendant of generating an explanation with AI, and posts a screenshot of an AI detector's output
Figure 21: In another arbitration thread, a user claims that a seller copied their code from ChatGPT and GitHub
Such usage bears out something we noted in our previous article – that some low-skilled threat actors are seeking to use GPTs to generate poor-quality tools and code, which are then called out by other users.
Skepticism
As per our previous research, we saw a considerable amount of skepticism about generative AI on the forums we investigated.
Figure 22: A threat actor claims that current GPTs are "Chinese rooms" (referring to John Searle's 'Chinese Room' thought experiment) hidden "behind a thin veil of techbro speak"
However, as we also noted in 2023, some threat actors appeared more equivocal about AI, arguing that it's useful for certain tasks, such as answering niche questions or automating certain work, like creating fake websites (something we researched and reported on in 2023).
Figure 23: A threat actor argues that ChatGPT is suitable for automating "shops" (fake websites) or scamming, but not for coding
Figure 24: In another thread on the same forum, a user suggests that ChatGPT is useful "for repetitive tasks." We saw similar sentiments on other forums, with some users writing that they found tools such as ChatGPT and Copilot effective for troubleshooting or porting code
We also saw some interesting discussions about the wider implications of AI – again, something we also commented on last year.
Figure 25: A user wonders whether AI will lead to more or fewer breaches
Figure 26: A user asks – possibly in response to the general tone of derision we saw elsewhere – whether people who use AI to generate text and code deserve to be denigrated
Conclusion
A year on, most threat actors on the cybercrime forums we investigated still don't appear to be particularly enthused or excited about generative AI, and we found no evidence of cybercriminals using it to develop new exploits or malware. Of course, this conclusion is based solely on our observations of a selection of forums, and doesn't necessarily apply to the wider threat landscape.
While a minority of threat actors may be dreaming big and have some (potentially) dangerous ideas, their discussions remain theoretical and aspirational for the time being. It's more likely that, as with other aspects of security, the more immediate risk is threat actors abusing legitimate research and tools that are (or will be) publicly or commercially available.
There's still a significant amount of skepticism and suspicion towards AI on the forums we looked at, both from an OPSEC perspective and in the sense that many cybercriminals feel it's 'overhyped' and unsuitable for their purposes. Threat actors who use AI to create code or forum posts risk a backlash from their peers, either in the form of public criticism or through scam complaints. In that respect, not much has changed either.
In fact, over the last year, the only significant evolution has been the incorporation of generative AI into a handful of toolkits for spamming, mass mailing, sifting through datasets, and, possibly, social engineering. Threat actors, like anyone else, are likely keen to automate tedious, monotonous, large-scale work – whether that's crafting bulk emails and fake sites, porting code, or locating interesting snippets of information in a large database. As many forum users noted, generative AI in its current state seems suited to these sorts of tasks, but not to more nuanced and complex work.
There may, therefore, be a growing market for some uses of generative AI in the cybercrime underground – but this may well turn out to be in the form of time-saving tools, rather than new and novel threats.
As it stands, and as we reported last year, many threat actors still seem to be adopting a wait-and-see approach – waiting for the technology to evolve further and seeing how they can best fit generative AI into their workflows.