Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

Filtered by: clickhouse

Load Test GlassFlow for ClickHouse: Real-Time Dedup at Scale

By Ashish Bagri, Co-founder & CTO of GlassFlow. TL;DR: We tested GlassFlow on a real-world deduplication pipeline with Kafka and ClickHouse. It handled 55,000 records/sec published to Kafka and processed 9,000+ records/sec on a MacBook Pro, with latency under 0.12 ms. No crashes, no message loss, no reordering. Even with 20M records and 12 concurrent publishers, it remained robust. Want to try it yourself? The full test setup…
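
The teaser only sketches the setup: high-rate publishers push duplicate-laden records into Kafka for GlassFlow to deduplicate before they land in ClickHouse. Purely as an illustration of that kind of load generator, here is a minimal Python publisher; the topic name, broker address, record schema, and duplication strategy are assumptions for this sketch, not GlassFlow's actual test harness.

```python
# Minimal sketch of a duplicate-heavy Kafka publisher for a dedup load test.
# Topic name, broker address, schema, and duplication ratio are assumptions,
# not part of the original GlassFlow test setup.
import json
import random
import uuid

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Reuse a small pool of IDs so the stream contains plenty of duplicates.
event_ids = [str(uuid.uuid4()) for _ in range(10_000)]

for _ in range(1_000_000):
    record = {
        "event_id": random.choice(event_ids),   # deduplication key
        "user_id": random.randint(1, 50_000),
        "amount": round(random.uniform(1, 500), 2),
    }
    producer.send("orders", value=record)

producer.flush()
```

Running several copies of a script like this in parallel approximates the "12 concurrent publishers" scenario described above.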

Scaling our observability platform by embracing wide events and replacing OTel

TL;DR: Observability at scale: our internal system grew from 19 PiB to 100 PB of uncompressed logs and from ~40 trillion to 500 trillion rows. Efficiency breakthrough: we absorbed a 20× surge in event volume using under 10% of the CPU previously needed. OTel pitfalls: the required parsing and marshalling of events in OpenTelemetry proved a bottleneck and didn't scale; our custom pipeline addressed this. Introducing HyperDX: a ClickHouse-native observability UI for seamless exploration, correlation…
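
The summary mentions replacing the OpenTelemetry parsing/marshalling path with a custom pipeline that writes wide events straight into ClickHouse. As a rough illustration only, the sketch below inserts wide, map-typed events with the clickhouse-connect client; the table name, schema, and columns are assumptions for this example, not the schema described in the post.

```python
# Minimal sketch of writing "wide events" directly into ClickHouse,
# skipping an OTel collector hop. Table layout and column names are
# illustrative assumptions, not the post's actual schema.
from datetime import datetime, timezone

import clickhouse_connect  # pip install clickhouse-connect

client = clickhouse_connect.get_client(host="localhost")

client.command("""
    CREATE TABLE IF NOT EXISTS wide_events (
        timestamp   DateTime64(3),
        service     LowCardinality(String),
        level       LowCardinality(String),
        message     String,
        attributes  Map(String, String)
    )
    ENGINE = MergeTree
    ORDER BY (service, timestamp)
""")

rows = [
    [datetime.now(timezone.utc), "checkout", "info", "order placed",
     {"order_id": "o-123", "region": "eu-west-1"}],
]
client.insert(
    "wide_events", rows,
    column_names=["timestamp", "service", "level", "message", "attributes"],
)
```

Keeping every attribute on one wide row, rather than normalizing it through an intermediate event model, is the design choice the post credits for the CPU savings.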
