Samsung has announced its own SOCAMM2 LPDDR5X-based memory module designed specifically for AI data center platforms. The module is positioned to bring the power-efficiency and bandwidth advantages of LPDDR5X to servers without the long-standing trade-off of permanent soldering, while aligning the form factor with an emerging JEDEC standard for accelerated and AI-focused systems.
Samsung says it is already working with Nvidia on accelerated infrastructure built around the module, and it positions SOCAMM2 as a natural response to rising memory power costs, density constraints, and serviceability concerns in large-scale deployments.
At a high level, SOCAMM2 is aimed at a specific and growing class of systems where CPUs or CPU-GPU superchips are paired with large pools of system memory that must deliver high bandwidth at lower power than conventional server DIMMs can provide, and all within a smaller footprint. As inference workloads expand and AI servers transition to sustained, always-on operation, memory power efficiency can’t continue to be viewed as a secondary optimization; it is a material contributor to rack-level operating cost. SOCAMM2 is a reflection of this.
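To put that rack-level contribution in rough numbers, here is a minimal back-of-envelope sketch in Python. Every input (per-module wattage, module and server counts, electricity price, PUE) is an illustrative assumption, not a figure from Samsung's announcement; the point is only how per-module watts compound across a rack.

```python
# Back-of-envelope: annual electricity cost of one rack's server memory.
# Every input below is an illustrative assumption, not a vendor figure.

def annual_memory_cost(watts_per_module: float,
                       modules_per_server: int,
                       servers_per_rack: int,
                       usd_per_kwh: float = 0.10,
                       pue: float = 1.4) -> float:
    """Yearly electricity cost (USD) of a rack's memory, with cooling
    overhead folded in via a PUE multiplier."""
    memory_watts = watts_per_module * modules_per_server * servers_per_rack
    kwh_per_year = memory_watts * pue * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

# Hypothetical scenario: 20 servers per rack, 16 modules each, with the
# LPDDR5X module assumed to draw roughly half the power of a DDR5 RDIMM.
ddr5_cost = annual_memory_cost(10.0, 16, 20)
lpddr_cost = annual_memory_cost(5.0, 16, 20)

print(f"DDR5 RDIMM memory:     ~${ddr5_cost:,.0f} per rack per year")
print(f"LPDDR5X module memory: ~${lpddr_cost:,.0f} per rack per year")
print(f"Difference:            ~${ddr5_cost - lpddr_cost:,.0f} per rack per year")
```

Under these assumed inputs the gap is on the order of a couple of thousand dollars per rack per year; multiplied across thousands of always-on racks, memory power becomes exactly the kind of line item the paragraph above describes.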
Why LPDDR is moving into the data center
LPDDR has long been associated with smartphones, an ideal application for its low-voltage operation and aggressive power management. In servers, however, its adoption has been limited by one practical issue more than any other: LPDDR is typically soldered directly to the board, which complicates upgrades, repairs, and hardware reuse at scale. That makes it a difficult sell for hyperscalers and other potential adopters who expect to refresh memory independently of the rest of the platform.
SOCAMM2 is Samsung’s attempt to address this mismatch. The module uses LPDDR5X devices, but packages them in a detachable, compression-attached form factor designed for server deployments. Samsung says SOCAMM2 delivers twice the bandwidth of DDR5 RDIMMs, along with reduced power consumption and a more compact footprint that can ease board routing and cooling in dense systems. The company also emphasizes serviceability, arguing that modular LPDDR allows memory to be replaced or upgraded without scrapping entire boards, reducing downtime and total cost of ownership over a system’s lifetime.
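As a rough illustration of where a bandwidth multiple like that comes from, the sketch below simply multiplies transfer rate by bus width. The speed grades and the 128-bit module interface are assumptions drawn from publicly discussed SOCAMM-class designs, not confirmed SOCAMM2 specifications, and the exact multiple over an RDIMM depends on which grades are compared.

```python
# Peak theoretical bandwidth = transfer rate (MT/s) x bus width (bytes).
# Speed grades and the 128-bit module width are assumptions, not
# confirmed SOCAMM2 specifications.

def peak_bandwidth_gbs(mt_per_s: int, bus_bits: int) -> float:
    """Peak bandwidth in GB/s for a given transfer rate and bus width."""
    return mt_per_s * (bus_bits / 8) / 1000

rdimm = peak_bandwidth_gbs(6400, 64)     # DDR5-6400 RDIMM, 64-bit data bus
socamm = peak_bandwidth_gbs(8533, 128)   # LPDDR5X-8533, assumed 128-bit module

print(f"DDR5 RDIMM:     {rdimm:.1f} GB/s")
print(f"LPDDR5X module: {socamm:.1f} GB/s ({socamm / rdimm:.2f}x)")
```

With these particular grades the ratio works out closer to 2.7x; Samsung's "twice the bandwidth" claim presumably assumes a specific pairing of DDR5 and LPDDR5X speeds that the company has not detailed.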
Samsung’s SOCAMM2 is expected to comply with the JEDEC JESD328 standard for compression-attached memory modules under the CAMM2 umbrella. The standard aims to make LPDDR-based memory modules interchangeable and vendor-agnostic, much as standard RDIMMs are today, while preserving the signal integrity needed to run LPDDR5X at very high data rates. As AI racks consume increasingly large memory pools, DDR5 will continue to incur power and thermal penalties that scale poorly with capacity. SOCAMM2 offers a way to raise effective bandwidth while cutting energy consumption, provided it can be integrated into platforms that support modular components.
SOCAMM2 versus RDIMM
Understanding where SOCAMM2 fits requires looking at the full memory hierarchy in AI systems. At the top sits HBM, tightly coupled into the same package as GPUs or accelerators to deliver extreme bandwidth, at the cost of high prices and constrained capacity. HBM is indispensable for training and high-throughput inference, but it is not a general-purpose memory solution. Below that, traditional DDR5 DIMMs provide large, relatively inexpensive capacity for CPUs, but with higher power draw and lower bandwidth per pin.
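For orientation, the short sketch below summarizes that hierarchy as described here; the bandwidth figures are coarse, widely cited ballparks for current parts, not vendor-published specifications for any specific product.

```python
# Rough sketch of the AI-server memory hierarchy described above.
# Bandwidth figures are coarse public ballparks, purely illustrative.

MEMORY_TIERS = [
    # (tier, attachment point, ballpark bandwidth, typical role)
    ("HBM", "GPU/accelerator package", "~1 TB/s per stack",
     "training and high-throughput inference"),
    ("SOCAMM2 (LPDDR5X)", "CPU / CPU-GPU superchip board", "~100+ GB/s per module",
     "large, power-efficient system memory"),
    ("DDR5 RDIMM", "CPU memory channels", "~50 GB/s per DIMM",
     "bulk, relatively inexpensive capacity"),
]

for tier, attach, bandwidth, role in MEMORY_TIERS:
    print(f"{tier:18} | {attach:30} | {bandwidth:21} | {role}")
```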