
Storage Solution Puts Real-Time Performance at Scale within Reach

Modern applications require compute in storage to deliver real-time results.



The IT Press Tour had the opportunity to meet with executives from Pavilion Data.


Today, business insights and decision-making rely on modern, data-intensive applications. These are not back-office applications. These applications are used to detect risk and fraud, recognize images, provide real-time big data analytics, perform 3D volumetric capture, process genomes, and more.


These new applications are pushing what's possible in business and in the world by taking in orders of magnitude more data, crunching it, and providing real-time insights on which real businesses and human lives depend.


Compared to traditional applications, modern, data-intensive applications use 100x more data, tens of PBs, and must be high-performance, real-time, parallel, and scalable. They are built on new, advanced, parallel application toolchain stacks.


There have been massive advancements in compute, memory, and network technology to enable modern applications. In the last decade, server CPU performance has increased 800x, memory 100x, and network speed 40x.


But there is still a wide gap between how the rest of the infrastructure has evolved, in terms of performance, scale, density, and capacity, and how storage has. Over the same period, storage throughput has increased only 10x and latency has improved only 2x. Data performance is the new bottleneck.


The Pavilion Data team shared their vision on two fronts. They see storage systems gaining some compute alongside AI/ML GPU systems, with some code moving into the storage layer. This is not about running the entire application inside storage, but about running certain tasks that are better suited to run inside storage than at the compute layer, as sketched below. This accelerates the move toward computational storage. They also envision AI/ML accelerators sitting closer to the data, since data has the most gravity, which minimizes moving large datasets.
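
To make the pushdown idea concrete, here is a minimal sketch in Python of the general pattern. The StorageNode class and its push_down() method are hypothetical stand-ins, not Pavilion's API; the point is only that the same filter produces the same result either way, but offloading it means the filtered records, rather than the raw dataset, are what cross the network.

# Minimal sketch of the computational-storage idea: instead of pulling a full
# dataset across the network and filtering it on the compute node, the filter
# runs where the data lives and only the (much smaller) result moves.
# StorageNode and push_down() are hypothetical, for illustration only.

from typing import Callable, Iterable, List


class StorageNode:
    """Stands in for a storage target that can execute simple tasks in place."""

    def __init__(self, records: List[dict]):
        self._records = records

    def read_all(self) -> List[dict]:
        # Traditional path: ship every record to the compute layer.
        return list(self._records)

    def push_down(self, task: Callable[[Iterable[dict]], List[dict]]) -> List[dict]:
        # Computational-storage path: run the task next to the data and
        # return only its output.
        return task(self._records)


def flag_risky(records: Iterable[dict]) -> List[dict]:
    """Toy risk/fraud filter: keep only high-value transactions."""
    return [r for r in records if r["amount"] > 9000]


if __name__ == "__main__":
    node = StorageNode([{"id": i, "amount": i * 100} for i in range(1, 200)])

    # Traditional: all 199 records cross the network, then get filtered.
    host_side = flag_risky(node.read_all())

    # Offloaded: only the matching records cross the network.
    offloaded = node.push_down(flag_risky)

    assert host_side == offloaded
    print(f"records moved without offload: {len(node.read_all())}, "
          f"with offload: {len(offloaded)}")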


Pavilion is building a platform that is field-programmable and capable of making the transition from being just a data storage appliance to an information management appliance to an inference generation appliance.


To accomplish this, they are building a network-centric architecture, which they call the hyper-parallel platform. It is built around an ultra-low-latency, high-bandwidth network based on a PCIe fabric. That network can deliver 6.1 TB/s of bandwidth and shuttle data through it in roughly 1,670 nanoseconds.


On top of this sits distributed shared memory. It can durably store small amounts of data at extremely low latency, with byte-addressable, memory-like semantics. It also connects devices to one another so they can communicate using memory-oriented semantics.
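
As a rough illustration of what byte-addressable, memory-like access to durable storage means, the sketch below uses Python's standard mmap module over a local file. It is not Pavilion's distributed shared memory, which spans devices over the PCIe fabric; it only shows the semantics of updating and persisting a few bytes in place rather than rewriting a whole block or object.

# A minimal sketch of byte-addressable, memory-like access to durable storage,
# using a plain local file and mmap purely to illustrate the semantics.
# The file name and region size are arbitrary for the example.

import mmap
import os

PATH = "shared_region.bin"   # hypothetical backing file for the example
SIZE = 4096                  # one small, durable region

# Create and size the backing file once.
with open(PATH, "wb") as f:
    f.write(b"\x00" * SIZE)

with open(PATH, "r+b") as f:
    region = mmap.mmap(f.fileno(), SIZE)

    # Byte-addressable write: update a few bytes at an arbitrary offset
    # instead of rewriting a whole block or object.
    region[128:140] = b"hello, dsm!\n"

    # Byte-addressable read of just those bytes.
    print(region[128:140].decode())

    # Flushing makes the small update durable.
    region.flush()
    region.close()

os.remove(PATH)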


The Pavilion HyperParallel Data Platform provides unprecedented scale, with multiple PBs of data under a single global namespace, and performance 10x to 80x faster than "best-in-class" alternatives. It offers multi-protocol flexibility with class-leading density and a compact space and power footprint.


