Cracking AI’s storage bottleneck and supercharging inference on the edge
Technology

By Editorial Board | Published July 7, 2025 | Last updated: July 7, 2025, 1:52 pm

As AI applications increasingly permeate enterprise operations, from improving patient care through advanced medical imaging to powering complex fraud detection models and even aiding wildlife conservation, a critical bottleneck often emerges: data storage.

During VentureBeat's Transform 2025, Greg Matson, head of products and marketing at Solidigm, and Roger Cummings, CEO of PEAK:AIO, spoke with Michael Stewart, managing partner at M12, about how innovations in storage technology enable enterprise AI use cases in healthcare.

The MONAI framework is a breakthrough in medical imaging, making it faster, safer, and more secure to build. Advances in storage technology are what allow researchers to build on top of this framework and to iterate and innovate quickly. PEAK:AIO partnered with Solidigm to integrate power-efficient, performant, high-capacity storage, which enabled MONAI to store more than two million full-body CT scans on a single node within their IT environment.
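For readers unfamiliar with MONAI, the sketch below illustrates what a typical MONAI data pipeline for CT volumes looks like. It is an illustration only, not taken from the PEAK:AIO/Solidigm deployment; the file path, voxel spacing, and intensity window are hypothetical placeholders.

```python
# Minimal MONAI-style CT loading pipeline (illustrative sketch; paths and
# parameter values are hypothetical, not from the deployment described above).
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Spacingd, ScaleIntensityRanged
)
from monai.data import Dataset, DataLoader

# Hypothetical file list; in practice this would index millions of scans
# sitting on high-capacity local solid-state storage.
files = [{"image": "/nvme/ct/scan_000001.nii.gz"}]

transforms = Compose([
    LoadImaged(keys="image"),           # read the NIfTI volume from disk
    EnsureChannelFirstd(keys="image"),  # add a channel dimension
    Spacingd(keys="image", pixdim=(1.5, 1.5, 2.0), mode="bilinear"),  # resample voxels
    ScaleIntensityRanged(keys="image", a_min=-1000, a_max=1000,
                         b_min=0.0, b_max=1.0, clip=True),            # window HU values
])

dataset = Dataset(data=files, transform=transforms)
loader = DataLoader(dataset, batch_size=1, num_workers=4)

for batch in loader:
    print(batch["image"].shape)  # e.g. torch.Size([1, 1, D, H, W])
```

With storage fast and dense enough to hold the full scan archive on one node, a loop like this reads directly from local solid state rather than pulling volumes over the network.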

“As enterprise AI infrastructure evolves rapidly, storage hardware increasingly needs to be tailored to specific use cases, depending on where they are in the AI data pipeline,” Matson said. “The type of use case we talked about with MONAI, an edge-use case, as well as the feeding of a training cluster, are well served by very high-capacity solid-state storage solutions, but the actual inference and model training need something different. That’s a very high-performance, very high I/O-per-second requirement from the SSD. For us, RAG is bifurcating the types of products that we make and the types of integrations we have to make with the software.”

Improving AI inference at the edge

For peak performance at the edge, it's essential to scale storage down to a single node in order to bring inference closer to the data. What's key is removing memory bottlenecks. That can be done by making memory part of the AI infrastructure, so it can scale along with data and metadata. The proximity of data to compute dramatically improves time to insight.
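As a rough illustration of the "keep data next to compute" idea (an assumption-laden sketch, not something the panelists described), an edge node can memory-map a large feature store held on local NVMe so that only the rows a query touches are ever paged into RAM:

```python
import numpy as np

# Hypothetical feature store pre-written to local NVMe as a raw float32 array.
# Memory-mapping lets the OS page data in on demand, so the working set stays
# close to compute without loading the whole file (here ~8 GB) into memory.
store = np.memmap("/nvme/features.f32", dtype=np.float32, mode="r",
                  shape=(2_000_000, 1024))  # assumed shape: 2M rows x 1024 dims

def lookup(row_ids):
    # Only the requested rows are faulted in from the SSD.
    return np.asarray(store[row_ids])

batch = lookup([42, 1337, 99999])
print(batch.shape)  # (3, 1024)
```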

“You see all the huge deployments, the big green field data centers for AI, using very specific hardware designs to be able to bring the data as close as possible to the GPUs,” Matson said. “They’ve been building out their data centers with very high-capacity solid-state storage, to bring petabyte-level storage, very accessible at very high speeds, to the GPUs. Now, that same technology is happening in a microcosm at the edge and in the enterprise.”

It's becoming important for buyers of AI systems to make sure they're getting the most performance out of their system by running it entirely on solid state. That makes it possible to bring huge amounts of data, and enables incredible processing power, in a small system at the edge.

The future of AI hardware

“It’s imperative that we provide solutions that are open, scalable, and at memory speed, using some of the latest and greatest technology out there to do that,” Cummings said. “That’s our goal as a company, to provide that openness, that speed, and the scale that organizations need. I think you’re going to see the economies match that as well.”

For the overall training and inference data pipeline, and within inference itself, hardware needs will keep growing, whether it's a very high-speed SSD or a very high-capacity solution that's power efficient.

“I would say it’s going to move even further toward very high-capacity, whether it’s a one-petabyte SSD out a couple of years from now that runs at very low power and that can basically replace four times as many hard drives, or a very high-performance product that’s almost near memory speeds,” Matson said. “You’ll see that the big GPU vendors are looking at how to define the next storage architecture, so that it can help augment, very closely, the HBM in the system. What was a general-purpose SSD in cloud computing is now bifurcating into capacity and performance. We’ll keep doing that further out in both directions over the next five or 10 years.”
