
175K+ publicly-exposed Ollama AI instances discovered


175,000 Ollama systems misconfigured, publicly exposed without authentication

Attackers exploit instances via LLMjacking to generate spam and malware content

Issue stems from user misconfiguration, fixable by binding to localhost only

Security researchers have claimed around 175,000 Ollama systems worldwide are exposed, putting them at risk of all sorts of malicious activities. In fact, some are already being abused, and if you’re among those running an Ollama instance, you might want to consider reconfiguring it.

Recently, SentinelOne's SentinelLABS and Censys discovered that many businesses are running AI models locally with Ollama, a setup in which the model is meant to listen only on the machine it runs on, not the wider internet.

However, in around 175,000 cases, the instances are misconfigured to listen on all network interfaces rather than just localhost, making the AI publicly accessible to anyone on the internet without a password.
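Whether an instance is exposed can be checked from any other machine. The sketch below assumes Ollama's default port (11434) and its unauthenticated /api/tags model-listing endpoint; the hostname is a placeholder for your own machine's public address.

```python
import json
import urllib.error
import urllib.request


def is_ollama_exposed(host: str, port: int = 11434, timeout: float = 5.0) -> bool:
    """Return True if an Ollama API answers on host:port without authentication.

    Assumes Ollama's default port and its /api/tags endpoint, which lists
    the models pulled onto the instance.
    """
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        return False
    # A reachable instance returns a JSON object with a "models" list.
    return isinstance(data, dict) and "models" in data


if __name__ == "__main__":
    # "my-vps.example.com" is a placeholder. If this prints True, the instance
    # accepts requests from the internet and should be rebound to localhost.
    print(is_ollama_exposed("my-vps.example.com"))
```

If the probe succeeds from outside, the fix the researchers describe is to bind the service back to localhost only (Ollama reads the bind address from the OLLAMA_HOST environment variable, so leaving it at, or resetting it to, 127.0.0.1 keeps the API local) or to put it behind a reverse proxy that enforces authentication.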

LLMjacking

Many of these instances run on home connections, VPSes, or cloud machines, and around half allow "tool calling", meaning the AI isn't just answering questions; it can also run code, call APIs, and interact with other systems.
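To see why tool calling raises the stakes, consider what an unauthenticated request to such an instance can ask for. This is an illustrative sketch only: it assumes Ollama's /api/chat endpoint with its JSON tools field, and the host, model name, and "run_shell" function are placeholders for whatever tools the operator has actually wired up.

```python
import json
import urllib.request

# Placeholder address of an exposed instance and a placeholder model name.
OLLAMA_URL = "http://exposed-host.example.com:11434/api/chat"

payload = {
    "model": "llama3.1",
    "stream": False,
    "messages": [{"role": "user", "content": "List the files in the home directory."}],
    # A tool definition in the schema style Ollama's chat API accepts for
    # models that support tool calling. "run_shell" is hypothetical; it stands
    # in for whatever the operator's glue code executes on the model's behalf.
    "tools": [{
        "type": "function",
        "function": {
            "name": "run_shell",
            "description": "Run a shell command on the host",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    }],
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=30) as resp:
    reply = json.load(resp)

# If the model decides to call the tool, the response carries tool_calls that
# the operator's own code would normally act on.
print(reply.get("message", {}).get("tool_calls"))
```

The point is not that Ollama itself runs commands, but that whatever tool-handling code sits behind an exposed instance now takes its instructions from anyone on the internet.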

Malicious actors who find these instances can abuse them in a number of ways and, according to Pillar Security, many already do. In an attack known as LLMjacking, attackers use other people's electricity, bandwidth, and compute to generate spam and malware content, and in some cases resell the access to other criminals.

To make matters worse, many of these systems sit outside normal enterprise security, without the corporate firewalls, monitoring, and authentication that would otherwise protect them. That, combined with the fact that many are on residential IPs, makes them hard to track and easy to abuse.
