AI tools have rapidly become part of everyday life, powering everything from content creation and software development to research and business workflows.
Platforms such as ChatGPT, Claude, Microsoft Copilot, Perplexity and many others are now widely used by individuals and organizations alike, often assisting with tasks that involve internal documents, research material, software code, or other potentially sensitive information.
In many organizations, these tools are already embedded into daily workflows, making them not only convenient but also operationally critical.
As reliance on these services continues to grow, so does their value, not only for legitimate users but also within the cybercrime ecosystem. Access to advanced AI models can significantly reduce effort, improve output quality, and accelerate tasks that previously required expertise or time.
An analysis by Flare analysts of hundreds of posts collected from fraud-oriented online communities reveals a growing underground market centered on premium AI platform access.
Paid accounts are sold in Telegram groups
Rather than isolated cases of account misuse, the data points to a recurring pattern in which access to AI platforms is repeatedly advertised and redistributed through resale-style listings. Many of these listings promote discounted subscriptions, bundled access to multiple AI tools, or usage models that claim to remove typical platform limitations.
This may reflect a broader trend in underground markets, where access to digital services is bundled, repackaged, and resold to a wider buyer base.
How Do Threat Actors Obtain AI Accounts?