Rapt AI and AMD work to make GPU utilization more efficient
Technology


Last updated: March 26, 2025 7:42 pm
Editorial Board | Published March 26, 2025

Rapt AI, a provider of AI-powered workload automation for GPUs and AI accelerators, has teamed up with AMD to strengthen AI infrastructure.

The long-term strategic collaboration aims to improve AI inference and training workload management and performance on AMD Instinct GPUs, offering customers a scalable and cost-effective solution for deploying AI applications.

As AI adoption accelerates, organizations are grappling with resource allocation, performance bottlenecks, and complex GPU management.

By integrating Rapt’s intelligent workload automation platform with AMD Instinct MI300X, MI325X and upcoming MI350 series GPUs, the collaboration delivers a scalable, high-performance, and cost-effective solution that enables customers to maximize AI inference and training efficiency across on-premises and multi-cloud infrastructures.

A more efficient solution

AMD Instinct MI325X GPU.

Charlie Leeming, CEO of Rapt AI, said in a press briefing, “The AI models we are seeing today are so large and most importantly are so dynamic and unpredictable. The older tools for optimizing don’t really fit at all. We observed these dynamics. Enterprises are throwing lots of money. Hiring a new set of talent in AI. It’s one of these disruptive technologies. We have a scenario where CFOs and CIOs are asking where is the return. In some cases, there is tens of millions, hundreds of millions or billions of dollars spend on GPU-related infrastructure.”

Leeming said Anil Ravindranath, CTO of Rapt AI, saw the solution, which involved deploying monitors to enable observation of the infrastructure.

“We feel we have the right solution at the right time. We came out of stealth last fall. We are in a growing number of Fortune 100 companies. Two are running the code among cloud service providers,” Leeming said.

And he said, “We do have strategic partners but our conversations with AMD went extremely well. They are building tremendous GPUs, AI accelerators. We are known for putting the maximum amount of workload on GPUs. Inference is taking off. It’s in production stage now. AI workloads are exploding. Their data scientists are running as fast as they can. They are panicking, they need tools, they need efficiency, they need automation. It’s screaming for the right solution. Inefficiencies — 30% GPU underutilization. Customers do want flexibility. Large customers are asking if you support AMD.”

Improvements that would take nine hours can be done in three minutes, he said. Ravindranath said in a press briefing that the Rapt AI platform enables up to 10 times the model run capacity at the same AI compute spending level, up to 90% cost savings, and zero humans in the loop with no code changes. For productivity, this means no more waiting for compute or time spent tuning infrastructure.
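As a rough back-of-the-envelope check on how those two figures relate, running ten times the workload on an unchanged budget works out to about a tenth of the cost per model run. The dollar amounts in the sketch below are assumed for illustration and do not come from the companies.

```python
# Illustrative arithmetic only; the monthly spend and run counts are assumed.
monthly_spend = 100_000                 # dollars of GPU spend, assumed
runs_before, runs_after = 100, 1_000    # "up to 10x model run capacity" claim

cost_per_run_before = monthly_spend / runs_before   # $1,000 per run
cost_per_run_after = monthly_spend / runs_after     # $100 per run

savings = 1 - cost_per_run_after / cost_per_run_before
print(f"Per-run cost reduction: {savings:.0%}")     # -> 90%
```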

Leeming said other approaches have been around for a while and haven’t cut it. Run AI, a rival, overlaps somewhat competitively. He said his company observes in minutes instead of hours and then optimizes the infrastructure. Ravindranath said Run AI is more like a scheduler, while Rapt AI positions itself for unpredictable workloads and deals with them.

“We run the model and figure it out, and that’s a huge benefit for inference workloads. It should just automatically run,” Ravindranath said.

The benefits: lower costs, better GPU utilization

AMD Instinct MI300X GPU.

The companies said that AMD Instinct GPUs, with their industry-leading memory capacity, combined with Rapt’s intelligent resource optimization, help ensure maximum GPU utilization for AI workloads, helping lower total cost of ownership (TCO).

Rapt’s platform streamlines GPU management, eliminating the need for data scientists to spend valuable time on trial-and-error infrastructure configurations. By automatically optimizing resource allocation for their specific workloads, it empowers them to focus on innovation rather than infrastructure. It seamlessly supports diverse GPU environments (AMD and others, whether in the cloud, on premises, or both) through a single instance, helping ensure maximum infrastructure flexibility.
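To make the idea of automatic, memory-aware resource allocation concrete, here is a minimal hypothetical sketch that packs jobs onto GPUs by memory footprint with a first-fit-decreasing heuristic. It is not Rapt’s algorithm or API; the job names, memory figures, and the 192 GB per-GPU capacity (roughly MI300X-class) are assumptions for illustration only.

```python
# Hypothetical illustration only: assign AI jobs to GPUs by memory footprint.
# This is NOT Rapt AI's algorithm; it only sketches the kind of density
# optimization described in the article.
from dataclasses import dataclass, field

@dataclass
class Gpu:
    name: str
    capacity_gb: float                       # usable GPU memory (assumed figure)
    jobs: list = field(default_factory=list)
    used_gb: float = 0.0

    def fits(self, mem_gb: float) -> bool:
        return self.used_gb + mem_gb <= self.capacity_gb

    def assign(self, job: str, mem_gb: float) -> None:
        self.jobs.append(job)
        self.used_gb += mem_gb

def pack(jobs: dict[str, float], gpus: list[Gpu]) -> list[Gpu]:
    """Place each job on the first GPU with enough free memory, largest first."""
    for job, mem_gb in sorted(jobs.items(), key=lambda kv: kv[1], reverse=True):
        target = next((g for g in gpus if g.fits(mem_gb)), None)
        if target is None:
            raise RuntimeError(f"No GPU has {mem_gb} GB free for job {job}")
        target.assign(job, mem_gb)
    return gpus

if __name__ == "__main__":
    # Assumed figures: two 192 GB accelerators and four inference jobs.
    gpus = [Gpu("gpu0", 192.0), Gpu("gpu1", 192.0)]
    jobs = {"llm-70b": 140.0, "llm-8b": 18.0, "reranker": 12.0, "embedder": 6.0}
    for g in pack(jobs, gpus):
        print(f"{g.name}: {g.jobs} -> {g.used_gb:.0f}/{g.capacity_gb:.0f} GB")
```

A production system of the kind described would also weigh compute occupancy, interconnect topology, and live telemetry rather than a single static memory number.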

The combined solution intelligently optimizes job density and resource allocation on AMD Instinct GPUs, resulting in better inference performance and scalability for production AI deployments. Rapt’s auto-scaling capabilities further help ensure efficient resource use based on demand, reducing latency and maximizing cost efficiency.
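The demand-based auto-scaling described here can be pictured with a similarly hedged sketch: choose an inference replica count from the current request backlog and clamp it to configured bounds. The threshold and limits below are assumed values, not Rapt’s actual policy.

```python
# Hypothetical illustration only: pick an inference replica count from backlog.
# The target backlog per replica and the min/max bounds are assumed values.
def desired_replicas(queue_depth: int,
                     target_backlog_per_replica: int = 8,
                     min_replicas: int = 1,
                     max_replicas: int = 16) -> int:
    """Return how many inference replicas to run for the current backlog."""
    wanted = -(-queue_depth // target_backlog_per_replica)  # ceiling division
    return max(min_replicas, min(max_replicas, wanted))

if __name__ == "__main__":
    # A backlog of 50 queued requests maps to 7 replicas under these settings.
    print(desired_replicas(queue_depth=50))
```

Scaling back down as the backlog drains is what frees GPUs for other jobs, which is where the latency and cost-efficiency benefits cited above would come from.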

Rapt’s platform works out of the box with AMD Instinct GPUs, helping ensure quick performance benefits. Ongoing collaboration between Rapt and AMD will drive further optimizations in areas such as GPU scheduling, memory utilization, and more, helping ensure customers are equipped with a future-ready AI infrastructure.

“At AMD, we are committed to delivering high-performance, scalable AI solutions that empower organizations to unlock the full potential of their AI workloads,” said Negin Oliver, corporate vice president of business development for the data center GPU business at AMD, in a statement. “Our collaboration with Rapt AI combines the cutting-edge capabilities of AMD Instinct GPUs with Rapt’s intelligent workload automation, enabling customers to achieve greater efficiency, flexibility, and cost savings across their AI infrastructure.”
