The Storage Problem Quietly Slowing AI Infrastructure: How Novodisq Is Rethinking the Economics
- ctsmithiii
Novodisq is targeting the power and space constraints quietly slowing AI infrastructure. Here's how their storage economics could reshape spending.

Covered at the 66th IT Press Tour—January 2026
Most of the AI infrastructure conversation focuses on GPUs. And for good reason—training large models is GPU-intensive and expensive. But there's a second constraint that doesn't get nearly enough attention: the storage and power underneath.
Data center rack space in key regions is booked out months in advance. Power capacity is the number-one barrier to new AI deployments in parts of Europe and North America. And the layer of storage that handles warm data—always accessible, always on, constantly streaming—is one of the least optimized parts of the entire stack.
That's the problem Novodisq is going after.
The Warm Storage Gap
Global data is growing 20 to 30 percent annually. Data center capacity is not growing at the same rate. The gap is widening, and it's being felt most acutely in the middle tier—storage that isn't cold enough to archive but isn't hot enough to justify high-performance flash arrays.
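For a sense of how quickly that divergence compounds, here's a minimal sketch. The data-growth figure is the one cited above; the capacity-growth rate is an assumed placeholder, since the supply side isn't quantified here.

```python
# Back-of-envelope sketch of how a demand/capacity gap compounds.
# The ~25% data growth rate is the midpoint of the 20-30% cited above;
# the capacity growth rate is an assumed placeholder, not a reported figure.

data_growth = 0.25        # cited range: 20-30% per year
capacity_growth = 0.10    # assumed for illustration only

demand = capacity = 1.0   # normalize both to 1.0 today
for year in range(1, 6):
    demand *= 1 + data_growth
    capacity *= 1 + capacity_growth
    print(f"Year {year}: data {demand:.2f}x, capacity {capacity:.2f}x, "
          f"shortfall factor {demand / capacity:.2f}x")
```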
This is where most enterprise data actually lives. Backup staging, caching, streaming, AI training datasets, genomics samples, video surveillance footage. It's always on, and it needs to be accessible quickly when someone needs it. Historically, this workload has been served by spinning hard drives. That's a lot of power, a lot of physical space, and a lot of heat.
Novodisq's flagship product, Novoblade™, is designed to replace that layer.
The Numbers That Matter
Here's what Novoblade delivers in its full configuration: 11.5 petabytes of storage in a 2U rack-mountable unit, consuming 1200 watts at full capacity. Compare that to the traditional storage systems serving the same workload: roughly one-tenth the power draw per gigabyte, and less than one-tenth the rack footprint.
Novodisq positions the cost of warm storage on Novoblade at roughly the price of cold storage tiers like AWS Glacier. But the data stays warm. It's accessible in seconds, not hours.
The economics stack up in three ways. First, density reduces the cost per terabyte of rack rental—you're fitting orders of magnitude more storage into the same physical space. Second, the system uses solid-state technology with no moving parts, which means fewer failures and less maintenance overhead. And third, the power budget is 5 to 10 percent of what comparable traditional storage systems consume.
Put those three together, and the total cost of ownership story changes significantly—even if the per-terabyte purchase price is comparable to alternatives.
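As a quick sanity check on how those figures compose, here's a back-of-envelope sketch. The Novoblade numbers are the ones quoted above; the "traditional" lines simply apply the roughly ten-times advantage the company claims, rather than an independently measured baseline.

```python
# Density math for the headline Novoblade figures quoted above:
# 11.5 PB in 2U at 1200 W. The "traditional" estimates below just apply
# the ~10x advantage claimed in the article, not a measured baseline.

capacity_pb = 11.5
rack_units = 2
power_watts = 1200

watts_per_pb = power_watts / capacity_pb        # ~104 W per petabyte
rack_units_per_pb = rack_units / capacity_pb    # ~0.17 U per petabyte

print(f"Novoblade power density: {watts_per_pb:.0f} W/PB")
print(f"Novoblade rack density:  {rack_units_per_pb:.2f} U/PB")

claimed_advantage = 10  # "roughly one-tenth" power and footprint
print(f"Implied traditional power for 11.5 PB: "
      f"~{power_watts * claimed_advantage / 1000:.0f} kW")
print(f"Implied traditional space for 11.5 PB: "
      f"~{rack_units * claimed_advantage} U (about half a rack)")
```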
A Real-World Proof Point
Novodisq shared a case study from an earlier version of their technology that illustrates how this works in practice. An IT service provider had an on-premises VM cluster and a tape backup library. Backup and restore times were a persistent operational problem.
They deployed a Novodisq system as a staging layer between the production environment and tape. Each night, backups were written to the Novodisq unit. The following day, they were copied off to tape. But the staging layer kept three to four weeks of backups online and immediately accessible.
The result: restore times dropped from hours to minutes. The FPGA on the system handled RAID, checksumming, and encryption in hardware—no CPU overhead. And the whole unit consumed less than 300 watts.
That's not a hypothetical. It's a deployment they ran.
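For readers less familiar with the pattern, here is a minimal sketch of that kind of disk-to-disk-to-tape staging policy, written in Python for illustration. The three-to-four-week retention window comes from the case study; the mount point, directory layout, and tape command are hypothetical placeholders, not details Novodisq shared.

```python
# Minimal sketch of a disk-to-disk-to-tape staging policy, assuming a
# POSIX-mounted staging volume. Paths and the tape command are hypothetical;
# only the 3-4 week retention window comes from the case study.

import shutil
import subprocess
from datetime import datetime, timedelta
from pathlib import Path

STAGING = Path("/mnt/novodisq/backups")   # assumed mount point
RETENTION = timedelta(weeks=4)            # keep ~3-4 weeks online

def stage_nightly_backup(source: Path) -> Path:
    """Write tonight's backup to the fast staging tier."""
    dest = STAGING / datetime.now().strftime("%Y-%m-%d")
    shutil.copytree(source, dest)
    return dest

def copy_to_tape(staged: Path) -> None:
    """Next day: stream the staged backup to the tape library (placeholder command)."""
    subprocess.run(["tar", "-cf", "/dev/nst0", str(staged)], check=True)

def prune_staging() -> None:
    """Drop staged copies older than the retention window; restores inside
    the window come straight off the staging tier instead of tape."""
    cutoff = datetime.now() - RETENTION
    for day_dir in STAGING.iterdir():
        if datetime.strptime(day_dir.name, "%Y-%m-%d") < cutoff:
            shutil.rmtree(day_dir)
```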
Who This Is Built For
Novodisq's target customers fall into a few clear categories.
Genomics and healthcare organizations are a natural fit. Each human genome sample is roughly one terabyte, and tens of thousands are generated daily worldwide. That data is never deleted—legal requirements and the potential to reprocess samples with improved algorithms mean it needs to be retained indefinitely. Right now, a lot of it sits on expensive cloud storage like S3, rarely accessed. Novodisq offers local storage with on-board FPGA processing during ingestion.
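A rough sizing exercise shows why that workload maps neatly onto dense 2U units. The per-sample size and unit capacity are from the figures above; the 30,000-samples-per-day value is an assumed point within the "tens of thousands" range.

```python
# Rough genomics sizing using the figures quoted above: ~1 TB per genome,
# tens of thousands of samples per day worldwide, 11.5 PB per 2U unit.
# The 30,000/day figure is an assumed point in that range.

tb_per_sample = 1.0
samples_per_day = 30_000            # assumed; "tens of thousands"
unit_capacity_tb = 11_500           # 11.5 PB Novoblade in 2U

daily_pb = tb_per_sample * samples_per_day / 1000        # ~30 PB/day
yearly_pb = daily_pb * 365                               # ~11,000 PB/year
units_per_year = yearly_pb * 1000 / unit_capacity_tb     # 2U units needed

print(f"Worldwide ingest: ~{daily_pb:.0f} PB/day, ~{yearly_pb:,.0f} PB/year")
print(f"2U units to keep one year of that online: ~{units_per_year:,.0f}")
```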
Organizations with data sovereignty requirements are another key segment. Nations and enterprises are increasingly demanding control over where their data lives and who can access it. Novodisq's system is a self-contained, high-availability unit that can be deployed on-premises in a warehouse or co-location facility. The company has confirmed it can manufacture in Europe or the US if customers require it for supply chain security or regulatory compliance.
Managed service providers and system integrators are also in scope. The multi-tenant architecture—air-gapped customers within a single 2U chassis—makes it practical for shared infrastructure deployments.
Where the Company Is Now
Novodisq is a seed-stage startup. Two New Zealand venture capital firms provided early funding. The founding team includes Graham Gaylard, CEO with over 30 years in IT services and storage, and Robbie Litchfield, lead software engineer. Manufacturing is currently in Christchurch, New Zealand.
They're in active discussions with three to four pilot customers across different verticals. Some of the organizations they're in conversation with represent potential contracts worth $100 million per year, according to CFO Douglas Paul. Year one is focused on 60 to 250 TB trial deployments with early adopters. Year two is land-and-expand with MSPs and accelerated manufacturing.
The technology partnerships are already in place. AMD supplies the Versal SoC. Micron supplies NAND with pre-booked inventory to hedge against price volatility. Microchip provides NVMe controllers. On the integration side, names like CoreWeave, Digital Realty, and Stack Infrastructure are on the partner roadmap.
What to Watch
The AI infrastructure buildout is running into physical limits. Power and space constraints aren't going away—they're getting tighter. Novodisq is a small team with a specific answer to a specific problem: make warm storage dramatically more efficient so that the same data center footprint can handle more.
They plan to be at FMS 2026. The pilot deployments over the next 12 months will tell the real story. But the engineering is solid, the economics are compelling, and the market need is only growing.

