Tech News

Broker-Side SQL Filtering with RabbitMQ Streams


RabbitMQ 4.2 introduces SQL filter expressions for streams, enabling powerful broker-side message filtering.

In our benchmarks, combining SQL filters with Bloom filters achieved filtering rates of more than 4 million messages per second in highly selective scenarios with high ingress rates. Only the messages your consumers actually care about leave the broker, greatly reducing network traffic and client-side processing overhead.

High-throughput event streams often deliver large volumes of data to consumers, much of which may not be relevant to them. In real systems there may be tens of thousands of subjects (event types, tenants, regions, SKUs, etc.), making a dedicated stream per subject impractical or unscalable.

RabbitMQ Streams address this with broker-side filtering.

Bloom filters skip entire chunks that don’t contain values of interest, while SQL Filter Expressions evaluate precise per-message predicates so only matching messages cross the network. This reduces network traffic, lowers client CPU and memory use, and keeps application code simpler.
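To build intuition for this two-stage design, here is a toy Python sketch (not RabbitMQ's implementation): each chunk carries a small Bloom filter over its messages' filter values, so whole chunks can be skipped cheaply, and a precise per-message predicate (standing in for a SQL filter expression) then selects individual messages. The field names and values are made up for illustration.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=64, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, value):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{value}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, value):
        for p in self._positions(value):
            self.bits |= 1 << p

    def might_contain(self, value):
        # May return a false positive, never a false negative.
        return all(self.bits >> p & 1 for p in self._positions(value))

# Each "chunk" of the stream carries a Bloom filter over its filter values.
chunks = []
for batch in ([{"region": "eu", "price": 120}, {"region": "eu", "price": 80}],
              [{"region": "us", "price": 300}],
              [{"region": "eu", "price": 250}]):
    bf = BloomFilter()
    for msg in batch:
        bf.add(msg["region"])
    chunks.append((bf, batch))

# Stage 1: skip chunks whose Bloom filter cannot contain "eu".
# Stage 2: apply a precise per-message predicate, standing in for a
# SQL filter expression like: region = 'eu' AND price > 100
matches = [msg
           for bf, batch in chunks if bf.might_contain("eu")
           for msg in batch
           if msg["region"] == "eu" and msg["price"] > 100]
print(matches)  # [{'region': 'eu', 'price': 120}, {'region': 'eu', 'price': 250}]
```

Even if the Bloom filter occasionally lets a non-matching chunk through (a false positive), the per-message predicate still guarantees that only matching messages are delivered, which is why the combination is both fast and exact.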

Demand for broker-side filtering is longstanding: Kafka users have requested it for years (see KAFKA-6020), but Kafka still lacks this capability. RabbitMQ's Bloom + SQL filtering makes selective consumption practical at scale today.

Let’s walk through a hands-on example.

To run this example in your environment:

Start RabbitMQ with a single scheduler thread:

docker run -it --rm --name rabbitmq -p 5672:5672 -e ERL_AFLAGS="+S 1" rabbitmq:4.2.0-beta.3
