100TB SSD: What It Is, Why It Matters, and What’s Next

In the world of data storage, we tend to take progress for granted. A few years ago, a 1TB SSD felt extravagant, 2TB felt like luxury, and 4TB seemed like the upper limit for most consumers. Yet behind the scenes—far beyond gaming PCs and creator laptops—storage innovators have been quietly pushing boundaries most people never hear about.

That’s where the 100TB SSD comes in.

To many, the phrase sounds almost absurd. A hundred terabytes? On a single SSD? And not an array, not a cloud instance, not a hyperscale solution—just one physical drive?

It existed. It was real. And it changed far more about the storage industry than most people realize.

In this blog, we’re diving deep into the legendary 100TB ExaDrive, why it was created, what problems it solved, how the market reacted, and how the industry has changed since then. We’ll also explore whether 100TB is still the upper limit—or just a stepping stone to something even more massive.

Let’s explore the biggest SSD the world had ever seen at its launch, and the impact it made on everything from data centers to long-term archival storage.

What Was the 100TB ExaDrive Actually?

The 100TB ExaDrive wasn’t a consumer product you could buy off Amazon or drop into a gaming rig. It wasn’t meant to be.

It was built by Nimbus Data, a company known for pushing the limits of enterprise flash systems. When they launched the 100TB ExaDrive in 2018, it immediately became the world’s largest-capacity SSD.

Physically, it was a standard 3.5-inch SATA drive. You could hold it in one hand, like a hard drive. But inside, it was essentially a small flash-storage datacenter packed into a single enclosure.

Inside the ExaDrive were:

  • Hundreds of NAND flash packages
  • A highly efficient controller system
  • Power-loss protection circuitry
  • Enterprise-grade DRAM caching
  • Onboard microcontrollers to handle wear leveling and load distribution

The drive was designed around one key mission: high-density flash storage for data centers, with the lowest possible power draw and the highest possible reliability.

It wasn’t meant to win speed benchmarks.
It wasn’t meant to flex PCIe bandwidth.
It wasn’t meant to compete with NVMe performance monsters.

Its purpose was simple: Replace hard drives at enormous capacities while using less power and lasting longer.

That’s what made it revolutionary.

Why a 100TB SSD Mattered

At first glance, the average consumer might shrug. “Who needs that much?” people joked online. But for the industries that actually manage petabytes of data, the 100TB milestone mattered on multiple levels.

It moved SSDs into a tier previously occupied only by multi-drive HDD arrays.

This wasn’t just about hitting a cool, round number. It had deeper implications:

1. It Proved NAND Could Scale Far Beyond Expectations

In 2018, mainstream SSDs maxed out around 4TB. Even enterprise drives rarely exceeded 16TB.

Nimbus Data simply leapt ahead—and showed the world that NAND flash scalability was nowhere near its ceiling.

2. It Challenged HDDs at Their Own Game

For decades, hard drives dominated when it came to capacity.

Flash was for speed.
Hard disks were for size.

But suddenly, a single SSD could hold as much data as ten high-capacity HDDs while consuming far less power.

It forced the industry to rethink assumptions.

3. It Made “Flash for Everything” a Realistic Vision

Data centers long dreamed about replacing spinning disks entirely. Hard drives require more power, generate more heat, and degrade faster under load.

The 100TB ExaDrive showed that flash could finally cross the threshold where it could be used for bulk storage—not just performance layers.

4. It Pushed the Economics of Flash Storage Forward

Once a physical milestone is crossed, manufacturers begin optimizing for it. The 100TB ExaDrive applied pressure on competitors and NAND suppliers to keep improving yields, packaging density, and layer count.

It was a symbolic and practical win for the entire flash ecosystem.

Rack-Unit Density: Using Space More Efficiently

If you don’t work with servers or large-scale data storage, the phrase “rack unit density” might sound abstract. In a datacenter, however, it’s everything.

Each rack has:

  • Limited height
  • Limited cooling
  • Limited power delivery
  • Limited weight capacity

The goal is simple: fit as much usable storage as possible into a small space, while staying within power and cooling limits.

Traditionally, the best density came from packing racks full of 10TB or 12TB hard drives. But HDDs bring heat, vibration, and mechanical failure points. They need airflow. They consume more power per terabyte.

A single 100TB SATA SSD suddenly changed the math.

Instead of:

  • 10 × 10TB drives = 100TB

…you could theoretically have:

  • 1 × 100TB drive = 100TB

And dozens of those drives could be packed into a rack with far less strain on the power budget.

That meant:

  • Reduced cooling requirements
  • Lower airflow needs
  • Less mechanical risk
  • Simpler cabling
  • Higher storage density per rack unit (U)

Even though the ExaDrive wasn’t fast in NVMe terms, density was its true superpower.
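
To make that concrete, here is a back-of-envelope comparison of a hypothetical 4U drive shelf filled with nearline HDDs versus 100TB-class SSDs. Every figure below (drive wattage, slot count) is an illustrative assumption, not a vendor spec:

```python
# Back-of-envelope rack-density comparison: nearline HDDs vs 100TB-class SSDs.
# Every figure below is an illustrative assumption, not a vendor spec.

HDD_TB, HDD_WATTS = 12, 8      # assumed high-capacity nearline HDD
SSD_TB, SSD_WATTS = 100, 14    # assumed 100TB-class SATA SSD
SLOTS_PER_4U = 60              # assumed dense 4U JBOD chassis

def shelf(tb_per_drive, watts_per_drive):
    capacity = SLOTS_PER_4U * tb_per_drive     # TB per 4U shelf
    power = SLOTS_PER_4U * watts_per_drive     # W per 4U shelf
    return capacity, power, power / capacity   # (TB, W, W/TB)

print("HDD shelf (TB, W, W/TB):", shelf(HDD_TB, HDD_WATTS))  # (720, 480, ~0.67)
print("SSD shelf (TB, W, W/TB):", shelf(SSD_TB, SSD_WATTS))  # (6000, 840, 0.14)
```

Even though each SSD draws more watts than a single HDD under these assumptions, the watts-per-terabyte figure drops to roughly a fifth, because every slot holds about eight times the capacity.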

HDD Replacement for Nearline / Archive Storage

One of the most significant use cases for the 100TB ExaDrive was “nearline” storage.

Nearline refers to data that isn’t actively used every second but must remain accessible—usually:

  • Backups
  • Historical logs
  • Machine learning datasets
  • Compliance archives
  • Video libraries
  • Surveillance footage
  • Scientific instrument output

Traditionally, nearline storage has been dominated by HDDs. They’re slow but cheap, and they can hold vast amounts of data.

But the 100TB SSD had an advantage HDDs could never compete with:

Zero mechanical latency.

Even slow SATA SSDs outperform HDDs in random access performance by orders of magnitude. For workloads that involve mixed reads and writes or unpredictable access patterns, SSD-based archives can massively outperform hard-drive-based systems.
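
A quick calculation shows why. The numbers below are ballpark single-request latencies for a 7,200rpm nearline HDD and a SATA SSD, not benchmarks of any specific drive:

```python
# Why "zero mechanical latency" matters for random access.
# Ballpark single-request latencies, not benchmarks of any specific drive.

hdd_latency = 0.004 + 0.004   # ~4ms average seek + ~4ms rotational (7,200rpm)
ssd_latency = 0.0001          # ~100 microseconds random read on a SATA SSD

hdd_iops = 1 / hdd_latency    # ~125 random reads per second
ssd_iops = 1 / ssd_latency    # ~10,000 per second, before any parallelism

print(f"HDD ~{hdd_iops:.0f} IOPS vs SSD ~{ssd_iops:.0f} IOPS "
      f"(~{ssd_iops / hdd_iops:.0f}x)")
```

And because SSDs can service many requests in parallel across their flash channels, the real-world gap is usually far wider than this single-request estimate suggests.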

Additionally:

  • SSDs have lower failure rates
  • SSDs use less power per terabyte
  • SSDs weigh less
  • SSDs generate less noise and heat
  • SSDs offer more predictable performance

This made the ExaDrive a perfect candidate for organizations wanting high-density archival storage without the physical drawbacks of massive HDD arrays.

NAND Scale-Out: Proof of Concept

Before the ExaDrive, storage companies had talked about ultra-dense flash systems.

Nimbus Data was the first to actually produce one at scale.

This made the ExaDrive an important proof of concept—not only for itself, but for the future of NAND flash development.

It demonstrated:

1. Vertical NAND scaling works

As NAND layer counts climbed (32 → 64 → 96 → 128 → 176 → 232 → 300+), single-package density exploded.

The ExaDrive proved companies could use these high-density modules reliably in large quantities.
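
A rough sketch of the arithmetic: assuming 2018-era multi-die packages in the neighborhood of 512GB each (an assumption for illustration, not a published spec), a 100TB drive needs on the order of 200 of them, matching the “hundreds of NAND flash packages” described earlier:

```python
# Rough arithmetic: NAND packages needed for a 100TB drive.
# Per-package capacity is an assumption for illustration only.

TARGET_TB = 100
TB_PER_PACKAGE = 0.5   # assume ~512GB per multi-die package (2018-era flash)

packages = TARGET_TB / TB_PER_PACKAGE
print(f"~{packages:.0f} NAND packages")   # ~200, i.e. "hundreds of packages"
```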

2. Controllers could handle massive flash arrays

Managing 100TB of NAND isn’t trivial.

Wear leveling, garbage collection, and error correction all become dramatically harder at that scale.

Nimbus showed it was feasible.

3. Power efficiency doesn’t need to collapse at high capacities

The ExaDrive consumed surprisingly little power for its size—far less than a comparable HDD array.

This helped the industry trust large-scale SSD-based archives.

4. Thermal management is solvable

100TB of flash can generate heat. Nimbus engineered the enclosure to dissipate it efficiently without active cooling.

All of this laid crucial groundwork for the high-capacity enterprise SSDs that exist today.

Price and Availability

Here’s the part that shocked people the most:

The 100TB ExaDrive cost as much as a small car.

Its launch price was around:

$40,000 (market estimates varied)

This wasn’t a consumer toy. It was designed for:

  • Cloud providers
  • AI research centers
  • Video production archives
  • Universities
  • National labs
  • Government facilities
  • Hyperscale enterprise users

Availability was technically open, but supply was limited due to the sheer number of NAND modules required to assemble each drive.

Production was low-volume.
Sales were selective.
Lead times could be weeks to months.

This wasn’t a product meant for the mass market. It was a specialized solution for organizations running petabyte-scale workloads.

What’s Changed Since 2018 — and Is 100TB Still the Max?

A lot has changed since the ExaDrive’s debut.

While the 100TB size was unprecedented in 2018, today’s storage landscape has shifted in several important ways.

1. NVMe Has Taken Over Enterprise Storage

U.2, U.3, M.2, and EDSFF drives dominate modern servers.

SATA drives like the ExaDrive, while still useful, have become a niche choice for ultra-high-capacity, lower-performance roles.

2. NAND Layer Counts Have Exploded

In 2018, we were around 64-96 layers.
Today, 200+ layer NAND is common, with manufacturers pushing toward 300+ layers.

This means single NAND packages can store far more data.

3. Enterprise Drives Have Reached 60–120TB Class

Multiple manufacturers now offer:

  • 60TB NVMe drives
  • 64TB enterprise SSDs
  • 100TB-class drives in development
  • Specialized QLC-based ultra-capacity drives for cloud providers

While not all are widely available or affordable, they exist.

4. Cloud hyperscalers use custom SSDs

Companies like:

  • Amazon
  • Google
  • Meta
  • Microsoft

…now rely heavily on custom-designed SSDs with capacities far beyond anything sold at retail.

Exact specifications are rarely publicized, but hyperscale custom NVMe drives often approach or exceed the 50–100TB range.

5. QLC NAND has become mainstream for archival roles

Quad-Level Cell flash stores more bits per cell, enabling high capacity at lower cost. It’s less durable than SLC or TLC, but perfect for read-heavy archival tasks.

The ExaDrive itself used enterprise-grade NAND, but modern ultra-dense drives often rely on QLC to push capacities even further.
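
The capacity math behind QLC is simple: more bits per cell means more gigabits from the same silicon. A minimal illustration, using an assumed 128Gb-at-one-bit-per-cell die purely for scale:

```python
# Capacity scaling from bits per cell: same die, same number of cells.
# Endurance and latency trade-offs are not shown here.

BASE_DIE_GBIT = 128   # assumed die size if each cell stored a single bit

for name, bits in {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}.items():
    print(f"{name}: {bits} bit(s)/cell -> {BASE_DIE_GBIT * bits}Gb per die")
```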

6. 100TB is no longer the absolute upper bound

We’ve already seen roadmaps, prototypes, and hyperscale deployments that exceed 100TB per drive.

While not mainstream yet, the ceiling is moving upward:

  • 128TB
  • 150TB
  • 200TB+ (future)

These capacities are coming—sooner than many expect. The 100TB ExaDrive simply broke the barrier first.

Additional Factors Shaping the Future of Ultra-Large SSDs

To give you a complete picture, here are more key trends that influence high-capacity SSD design today:

1. EDSFF (Enterprise & Datacenter SSD Form Factors)

New shapes like:

  • E1.S
  • E1.L
  • E3.S
  • E3.L

…allow significantly more NAND to be packed in than traditional 2.5-inch or 3.5-inch drives.

2. PCIe 5.0 and 6.0 Bandwidth

Higher interface bandwidth enables massive drives to actually use their flash efficiently and avoid bottlenecks.
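
For a sense of scale, here is the approximate per-direction bandwidth of a typical x4 SSD link. Only line encoding is accounted for; packet and protocol overhead would shave off a bit more in practice:

```python
# Approximate per-direction bandwidth of an x4 SSD link.
# Only line encoding is accounted for; packet/protocol overhead is ignored.

def x4_bandwidth_gb_s(gigatransfers, encoding_efficiency):
    bytes_per_lane = gigatransfers * 1e9 * encoding_efficiency / 8
    return 4 * bytes_per_lane / 1e9

print(f"PCIe 5.0 x4: ~{x4_bandwidth_gb_s(32, 128/130):.1f} GB/s")  # ~15.8
print(f"PCIe 6.0 x4: ~{x4_bandwidth_gb_s(64, 1.0):.1f} GB/s")      # ~32 (raw)
```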

3. Zoned Storage (ZNS)

Zoned namespaces allow the host system to manage data placement more intelligently, reducing write amplification and optimizing performance—great for huge QLC SSDs.
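
Here is a minimal conceptual sketch of the zone model. This is not a real NVMe interface, just the core rule: writes within a zone must be sequential, and space is reclaimed by resetting whole zones:

```python
# Minimal conceptual model of a ZNS zone -- not a real NVMe interface.
# The host may only write sequentially at the zone's write pointer, and
# space is reclaimed by resetting (erasing) the entire zone at once.

class Zone:
    def __init__(self, size_blocks: int):
        self.size = size_blocks
        self.write_pointer = 0

    def append(self, n_blocks: int) -> None:
        """Sequential-only write at the current write pointer."""
        if self.write_pointer + n_blocks > self.size:
            raise IOError("zone full: reset it before writing again")
        self.write_pointer += n_blocks

    def reset(self) -> None:
        """Erase the whole zone -- the only way to reclaim its space."""
        self.write_pointer = 0

zone = Zone(size_blocks=1024)
zone.append(256)   # OK: sequential write
zone.append(256)   # OK: continues at the write pointer
zone.reset()       # reclaim the zone wholesale; no per-block overwrites
```

That sequential-only discipline maps naturally onto how NAND blocks are erased, which is why zoned storage reduces write amplification so effectively on large QLC drives.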

4. Computational Storage

Some ultra-large SSDs now include onboard processors that perform compression, encryption, and filtering on the drive itself, reducing CPU load and network traffic.
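
A toy illustration of the idea, with Python’s zlib standing in for a drive’s onboard compression engine (an assumption purely for demonstration):

```python
# Toy illustration of computational storage: compress/filter on the drive,
# ship only the result. zlib stands in for an onboard compression engine.

import zlib

raw = b"sensor_reading,ok\n" * 100_000   # ~1.8MB of repetitive log data
shipped = zlib.compress(raw, level=6)    # imagine this runs on the drive

print(f"host receives {len(shipped):,} bytes instead of {len(raw):,} "
      f"(~{len(raw) / len(shipped):.0f}x less traffic)")
```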

5. AI and Machine Learning Needs

Modern AI training datasets can hit petabyte scale easily. High-density NVMe storage is becoming essential to keep data close to compute resources.

6. Energy Efficiency Mandates

Governments and cloud providers are pushing for greener data centers. High-density SSDs help achieve lower watts per terabyte.

Practical Takeaway: What Does a 100TB SSD Mean for the Average Person?

Even though you won’t be installing a 100TB ExaDrive in your gaming setup anytime soon, the impact absolutely trickles down.

Here’s what it means for everyday users:

1. Consumer SSD Prices Keep Dropping

High-end enterprise storage development drives:

  • Better yields
  • Cheaper NAND
  • More efficient controllers

That eventually lowers prices for mainstream drives.

2. Larger consumer SSDs become more common

We’ve already seen:

  • 4TB becoming normal
  • 8TB entering mainstream pricing
  • 16TB consumer SSDs beginning to appear

Expect this trend to continue.

3. Cloud storage gets cheaper and faster

Google Drive, Dropbox, iCloud, and OneDrive all benefit from the economics of large-scale flash.

As high-density SSDs become more economical, cloud storage becomes:

  • Cheaper to operate
  • Faster to access
  • More reliable

4. Gaming, content creation, and video editing improve

Many AAA games today exceed 100GB, and ultra-fast NVMe storage is becoming a baseline requirement for modern engines.

Larger SSDs at lower prices smooth the experience.

5. Home servers and NAS systems become more capable

SSDs in NAS boxes used to be unthinkably expensive. Now they’re increasingly common, especially for caching, metadata storage, and high-speed datasets.

Is 100TB the New Normal? Not Yet — But It’s the Beginning

The ExaDrive 100TB was more than a product.

It was a statement.

It proved that NAND could scale into territory once reserved exclusively for nearline HDDs. It forced the industry to rethink capacity, density, power efficiency, and reliability.

Today, we’re seeing the ripple effects everywhere:

  • Enterprise SSDs are getting larger.
  • Hard drives are losing ground outside bulk cold storage.
  • Cloud providers are embracing massive flash tiers.
  • NAND technology continues to evolve at incredible speed.

The 100TB barrier has been broken—and we’re headed toward even more staggering numbers.

Final Thoughts

The 100TB ExaDrive was one of those rare products that seemed almost mythological when it launched. A drive so massive that many people questioned whether it was real. A drive meant not for the public, but for the ever-expanding backbone of the world’s data infrastructure.

It didn’t become mainstream, but it didn’t need to.

Its existence reshaped expectations, accelerated innovation, and helped redefine what “large storage” even meant.

Today, 100TB is no longer unimaginable. It’s simply a milestone—the first major step toward a future where flash replaces nearly all spinning disks, where AI workloads demand petabytes of fast-access data, and where the limits of storage density continue to be pushed upward year after year.

The story of the largest SSD isn’t just about capacity.
It’s about ambition.
Optimization.
And the relentless march of technology.

And if the trends of the last five years are any indication, 100TB SSDs are only the beginning.
