
Is This The Next Tech Decacorn?

While storage isn't sexy, it may produce the next decacorn.


The 44th IT Press Tour had the opportunity to meet with Jeff Denworth, Chief Marketing Officer and Co-founder, and Kirstin Bordner, Global Director of Corporate Communications and Thought Leadership, at Vast Data.


Through its first three years, Vast has outperformed storage high achievers like Isilon, Data Domain, Nutanix, and Pure Storage, surpassing what those companies achieved in their first three years with far fewer employees.


It has also outperformed what Silicon Valley darlings Nvidia, Netflix, Snowflake, and ServiceNow achieved in their first three years.


Data Driving Growth


Snowflake was born during the machine learning era, and the average Snowflake customer has 45TB of capacity. Vast is leading the deep learning era, and their average customer has 12PB of capacity, roughly 266x that of Snowflake customers.
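As a quick back-of-the-envelope check of that multiple, using only the two capacity figures above:

```python
# Back-of-the-envelope check of the capacity multiple cited above.
snowflake_avg_tb = 45           # average Snowflake customer capacity, in TB
vast_avg_tb = 12 * 1000         # average Vast customer capacity: 12PB, in TB

ratio = vast_avg_tb / snowflake_avg_tb
print(f"Vast's average customer holds ~{ratio:.0f}x more")  # ~267x (266.7 before rounding)
```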


Platforms must evolve with data and the needs of their customers. During the analytics era, compute was handled by VMware; data by Vertica, SAP, Oracle, and SAS; and storage by EMC and NetApp.


During the machine learning era, compute was handled by Amazon EKS and Anthos; data by Snowflake and Databricks; and storage by Hadoop and Delta Lake.


In the deep learning era, solutions must work at a much larger scale, and Vast is positioned to provide the solution for compute, data, and storage. They are doing so with their DASE (Disaggregated Shared-Everything) architecture: thousands of containers, fully parallel data services, and composable infrastructure, all integrated over a web-scale commodity data fabric. Storage needs are met with exabytes of hyperscale, low-cost flash.


The result is simplicity, linear scalability and resilience, significant cost-efficiency, and the ability to consolidate and enable insights.


Evolution of the Business


In its first two years, Vast focused on financial services, life sciences, HPC sites and universities, web and SaaS companies, and media companies. Today, they have expanded their horizontal focus as increased regulation creates greater need for data management and security features.


They have also been successfully expanding their business with existing customers: landing an initial $1 million use case, followed by a $3 million expansion, and then $12 million in new use cases.


Vast has received a 100% recommendation rate on Gartner Peer Insights and has been identified as the enterprise storage vendor to watch.


Evolution of Industry Needs


The needs around big data and analytics are changing as data volumes grow. A previous focus on data protection has evolved into rapid data restoration. A focus on Hadoop clusters has morphed into fast S3 data lakes and abstraction. And a focus on machine learning is moving to deep learning and inference at scale.


Consequently, Vast has evolved to meet these changing needs: scalable, all-flash backup and restore without a legacy flash tax; scalable, resilient, and fast S3 without HDD economics; and the simplest path to scalable deep learning.


Storage tiering does not work for AI: hard drive access can result in 50x slower I/O for AI applications. Vast is innovating the platform to address this. 30TB drives reduce the $/GB by more than 20%; the Ceres platform reduces hardware costs by approximately 12%; storage class memory from Kioxia reduces costs and improves Vast's supply chain; and dual sourcing reduces both cost and risk.
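As a rough illustration of why tiering breaks down for AI, here is a minimal sketch of the random-read penalty; the latency figures are generic assumptions for illustration, not numbers published by Vast:

```python
# Illustrative only: generic device latencies, not Vast-published figures.
# Small random reads dominate AI training-data access, so per-read latency
# is what separates an HDD tier from flash.
HDD_READ_S = 0.008       # ~8ms per random read (seek + rotation), assumed
FLASH_READ_S = 0.0001    # ~100us per random read, assumed

reads = 1_000_000        # hypothetical shuffle of one million small reads
hdd_time = reads * HDD_READ_S
flash_time = reads * FLASH_READ_S

print(f"HDD:   {hdd_time:,.0f}s")                          # 8,000s
print(f"Flash: {flash_time:,.0f}s")                        # 100s
print(f"Penalty: ~{hdd_time / flash_time:.0f}x slower")    # ~80x, same order as the 50x cited
```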


Vast universal storage uses 91% less power and requires 83% less space than Pure FlashBlade and Dell PowerScale Archive (A300), at a lower cost.


Jeff Denworth shared Vast's hardware during a tour.


Data Reduction


Vast uses 8KB block sizes, which are less susceptible to noise.


Hard disk drive deduplication systems were not designed for fast restores, a critical failing in the ransomware age. Flash, on the other hand, makes no compromise on restore performance, and random access enables new data reduction opportunities. Vast's data structure uses variable-length chunks of about 32KB. This makes Vast an insurance policy for backup, recovery, and restoration.


Vast introduced adaptive chunking in April. Once the chunks are formed, they are sent to a write pipeline, where they are deduplicated, matched for similarity, and whatever remains is compressed.
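As a minimal sketch of how such a write pipeline can work (an illustrative reconstruction, not Vast's actual implementation): content-defined chunking via a simple rolling hash targeting the ~32KB variable-length size described above, exact deduplication by SHA-256 fingerprint, and compression of whatever survives. The similarity-matching step is omitted for brevity, and the chunking parameters are assumptions.

```python
import hashlib
import zlib

# A minimal, illustrative write pipeline -- NOT Vast's implementation.
# Chunk boundaries are content-defined with a simple rolling hash so the
# average chunk lands near the ~32KB variable-length size mentioned above.

AVG_CHUNK = 32 * 1024            # target average chunk size (assumption)
BOUNDARY_MASK = AVG_CHUNK - 1    # cut a chunk when hash & mask == 0

def chunks(data: bytes):
    """Yield variable-length, content-defined chunks."""
    rolling, start = 0, 0
    for i, byte in enumerate(data):
        rolling = (rolling * 31 + byte) & 0xFFFFFFFF
        # enforce a minimum size so chunks don't degenerate
        if (rolling & BOUNDARY_MASK) == 0 and i - start >= AVG_CHUNK // 4:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]

def write_pipeline(data: bytes, store: dict) -> int:
    """Dedup chunks by SHA-256 fingerprint, compress only what survives."""
    bytes_written = 0
    for chunk in chunks(data):
        fingerprint = hashlib.sha256(chunk).digest()
        if fingerprint not in store:              # exact dedup
            store[fingerprint] = zlib.compress(chunk)
            bytes_written += len(store[fingerprint])
    return bytes_written

store = {}
payload = b"example data " * 100_000   # highly redundant sample input
print(f"{len(payload):,} bytes in -> {write_pipeline(payload, store):,} bytes stored")
```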

Vast compared their solution with adaptive chunking to a Dell Data Domain storage system and achieved the following results:

  • 70% better data compression of Commvault backup files of some SQL Servers

  • 50% better data compression of some unstructured data

  • 30% better data compression of some lab virtual machines

Vast is creating a data reduction layer with a single-copy dictionary across 10,000 controllers, which will result in savings of thousands of dollars per controller. Adaptive chunking will provide 2x more reduction than the leaders and significantly more than that compared to other file storage. New data-aware compression will lead to an additional 25% cost reduction ($100K/PB).
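If the 25% figure and the $100K/PB savings refer to the same baseline, they imply an effective cost of about $400K per PB (roughly $0.40/GB) before the reduction; that baseline is our inference, not a number stated by Vast. A quick check:

```python
# Inferring the baseline implied by the 25% / $100K-per-PB figures above.
# The resulting baseline is an inference, not a Vast-published number.
savings_per_pb = 100_000        # $100K saved per PB
reduction = 0.25                # 25% cost reduction

baseline_per_pb = savings_per_pb / reduction
print(f"Implied baseline: ${baseline_per_pb:,.0f}/PB "
      f"(~${baseline_per_pb / 1_000_000:.2f}/GB)")  # $400,000/PB, ~$0.40/GB
```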
