
Unlocking Real-Time Supply Chain Analytics with GPU Technology: Q&A with Meher Siddhartha Errabolu


As supply chains generate ever-larger datasets and demand faster decisions, traditional central processing unit (CPU)-based systems are approaching their limits. To meet real-time requirements at scale, developers are turning to accelerated computing powered by graphics processing units (GPUs). These massively parallel processors are reshaping how data is accessed, analyzed, and operationalized across the enterprise supply chain.

One expert at the forefront of this transformation is Meher Siddhartha Errabolu, currently a technical architect at Blue Yonder, a world leader in supply chain management solutions. Siddhartha has over 20 years of experience in enterprise application development, from building microservices that handle millions of daily transactions to designing extract, transform, and load (ETL) frameworks using functional programming principles. Today, he leads GPU-based initiatives in real-time supply chain computing, helping organizations move beyond batch-based analytics into truly responsive systems.

Q: Why are GPUs growing in enterprise supply chain applications, and how do they differ from traditional CPUs?

Errabolu: Supply chain systems are increasingly data-intensive. Large retailers manage millions of stock keeping units (SKUs) across thousands of locations, generating hundreds of billions of data points when you factor in time-series data, forecast metrics, and historical inventory states. Even the latest CPUs, with up to 24 cores, handle instructions largely sequentially. That design works well for general-purpose tasks but doesn't scale when instant decisions are needed across large datasets.

GPUs, in contrast, come with up to 21,760 cores (the latest Nvidia GeForce RTX 50 Series), enabling thousands of simultaneous operations. This parallelism makes GPUs ideal for environments where multiple calculations must be performed quickly and concurrently. In our work at Blue Yonder, we are successfully testing systems that use GPU acceleration to process 364 billion records in near real time, something that would be unworkable using CPUs alone. While GPU efficiency gains are not always linear, we routinely observe 100-fold performance improvements over traditional systems.
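To make the contrast concrete, here is a minimal sketch of the data-parallel style Errabolu describes, assuming a CUDA-capable GPU and the CuPy library; the article does not name Blue Yonder's actual stack, and the data sizes and metric are illustrative only.

    # Minimal sketch of data-parallel GPU computation, assuming a
    # CUDA-capable GPU and the CuPy library (sizes are illustrative).
    import cupy as cp

    n = 10_000_000                                # illustrative record count
    on_hand = cp.random.randint(0, 500, size=n)   # units on hand per record
    demand = cp.random.randint(0, 500, size=n)    # forecast demand per record

    # One vectorized expression fans out across thousands of GPU cores,
    # each handling a slice of the records concurrently; a CPU loop
    # would walk the same records one (or a few dozen) at a time.
    shortfall = cp.maximum(demand - on_hand, 0)

    total_shortfall = int(shortfall.sum())  # reduce on device, copy scalar back
    print(total_shortfall)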

Q: How are supply chain workloads adapted to leverage GPU parallelism?

Errabolu: The key is to structure workloads using two complementary approaches: task-level and data-level parallelism. With task-level parallelism, multiple functions run on the same data. For example, users might simultaneously calculate inventory accuracy and storage utilization rate for a single product. Data-level parallelism applies the same function across multiple datasets, for example, running the same metric calculation across 10,000 SKUs at once.
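Both styles can be sketched in a few lines. Again assuming CuPy, the example below issues two hypothetical metrics over the same data on separate CUDA streams (task-level parallelism) and applies one metric across all SKUs at once (data-level parallelism); the metric formulas are stand-ins, not Blue Yonder's definitions.

    # Illustrative sketch of both parallelism styles with CuPy; the
    # metric formulas here are stand-in assumptions.
    import cupy as cp

    num_skus, num_weeks = 10_000, 104
    counted = cp.random.rand(num_skus, num_weeks) * 100        # physical counts
    recorded = counted + cp.random.randn(num_skus, num_weeks)  # book stock
    capacity = cp.full((num_skus, num_weeks), 120.0)           # storage capacity

    # Task-level parallelism: two different metrics over the same data,
    # issued on separate CUDA streams so the kernels can overlap.
    s1, s2 = cp.cuda.Stream(), cp.cuda.Stream()
    with s1:
        accuracy = 1 - cp.abs(counted - recorded) / cp.maximum(recorded, 1e-9)
    with s2:
        utilization = counted / capacity

    # Data-level parallelism: the same calculation applied to all
    # 10,000 SKUs at once instead of looping over them in Python.
    mean_accuracy_per_sku = accuracy.mean(axis=1)

    cp.cuda.Device().synchronize()  # wait for both streams before reading results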

To put this in perspective, one use case involved one million products spread across 3,500 stores, with two years of weekly data. That alone comes to over 364 billion records (1,000,000 products × 3,500 stores × 104 weeks), not including derived metrics. By applying parallelism, we reduced what would typically require hours into sub-second computations. This sets the stage for real-time responsiveness, which is becoming a baseline expectation in many enterprise settings.

Q: Where do traditional big data platforms fall short, and how does GPU architecture address that gap?

Errabolu: Big data platforms are often optimized for batch analytics. Systems extract, process, and review data overnight. While that model works for long-term planning, it doesn’t support real-time decisions. In contrast, GPU-backed microservices can return results in less than 50 milliseconds.
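As an illustration of how such a service might be wired up, here is a hypothetical sketch using FastAPI and CuPy (neither is named in the interview): the dataset is staged in GPU memory once at startup, so each request only launches a small kernel and copies a compact result back to the host.

    # Hypothetical GPU-backed microservice endpoint; FastAPI and CuPy
    # are assumed for illustration, and the route and data are made up.
    import cupy as cp
    from fastapi import FastAPI

    app = FastAPI()

    # Staged once at startup: a (sku, week) demand matrix kept resident
    # in GPU memory so requests never pay the host-to-device transfer.
    DEMAND = cp.random.rand(1_000_000, 104).astype(cp.float32)

    @app.get("/skus/{sku_id}/trend")
    def demand_trend(sku_id: int):
        row = DEMAND[sku_id]  # slice stays on the GPU
        # Four-week moving average computed as a small GPU convolution.
        kernel = cp.ones(4, dtype=cp.float32) / 4
        trend = cp.convolve(row, kernel, mode="valid")
        return {"sku": sku_id, "trend": cp.asnumpy(trend).tolist()}

Run under any ASGI server, each request here touches only one row of device-resident memory, which is how response times in the tens of milliseconds become plausible at this scale.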
