Data security firm Fortanix Inc. announced a new joint solution with NVIDIA: a turnkey platform that allows organizations to deploy agentic AI within their own data centers or sovereign environments, backed by NVIDIA's confidential computing GPUs.
“Our goal is to make AI trustworthy by securing every layer—from the chip to the model to the data," said Fortanix CEO and co-founder Anand Kashyap, in a recent video call interview with VentureBeat. "Confidential computing gives you that end-to-end trust so you can confidently use AI with sensitive or regulated information.”
The solution arrives at a pivotal moment for industries such as healthcare, finance, and government, sectors eager to embrace AI but constrained by strict privacy and regulatory requirements.
Fortanix's new platform, powered by NVIDIA Confidential Computing, enables enterprises to build and run AI systems on sensitive data without sacrificing security or control.
“Enterprises in finance, healthcare and government want to harness the power of AI, but compromising on trust, compliance, or control creates insurmountable risk,” said Anuj Jaiswal, chief product officer at Fortanix, in a press release. “We’re giving enterprises a sovereign, on-prem platform for AI agents—one that proves what’s running, protects what matters, and gets them to production faster.”
Secure AI, Verified from Chip to Model
At the heart of the Fortanix–NVIDIA collaboration is a confidential AI pipeline that keeps data, models, and workflows protected throughout their lifecycle.
The system combines Fortanix Data Security Manager (DSM) and Fortanix Confidential Computing Manager (CCM), integrated directly with NVIDIA's GPU architecture.
“You can think of DSM as the vault that holds your keys, and CCM as the gatekeeper that verifies who’s allowed to use them," Kashyap said. "DSM enforces policy, CCM enforces trust.”
DSM serves as a FIPS 140-2 Level 3 hardware security module that manages encryption keys and enforces strict access controls.
CCM, launched alongside this announcement, verifies the trustworthiness of AI workloads and infrastructure using composite attestation, a process that validates both CPUs and GPUs before permitting access to sensitive data.
Only when a workload is verified by CCM does DSM release the cryptographic keys needed to decrypt and process data.
“The Confidential Computing Manager checks that the workload, the CPU, and the GPU are running in a trusted state," explained Kashyap. "It issues a certificate that DSM validates before releasing the key. That ensures the right workload is running on the right hardware before any sensitive data is decrypted.”
This “attestation-gated” model creates what Fortanix describes as a provable chain of trust extending from the hardware chip to the application layer.
It's an approach aimed squarely at industries where confidentiality and compliance are non-negotiable.
From Pilot to Production Without the Security Trade-Off
According to Kashyap, the partnership marks a step forward from traditional data encryption and key management toward securing entire AI workloads.
Kashyap explained that enterprises can deploy the Fortanix–NVIDIA solution incrementally, using a lift-and-shift model to migrate existing AI workloads into a confidential environment.
“We offer two form factors: SaaS with zero footprint, and self-managed. Self-managed can be a virtual appliance or a 1U physical FIPS 140-2 Level 3 appliance," he noted. "The smallest deployment is a three-node cluster, with larger clusters of 20–30 nodes or more.”
Customers already running AI models, whether open-source or proprietary, can move them onto NVIDIA's Hopper or Blackwell GPU architectures with minimal reconfiguration.
For organizations building out new AI infrastructure, Fortanix's Armet AI platform provides orchestration, observability, and built-in guardrails to speed time to production.
“The result is that enterprises can move from pilot projects to trusted, production-ready AI in days rather than months,” Jaiswal said.
Compliance by Design
Compliance remains a key driver behind the new platform's design. Fortanix's DSM enforces role-based access control, detailed audit logging, and secure key custody, elements that help enterprises demonstrate compliance with stringent data protection regulations.
These controls are essential for regulated industries such as banking, healthcare, and government contracting.
The company emphasizes that the solution is built for both confidentiality and sovereignty.
For governments and enterprises that must retain local control over their AI environments, the system supports fully on-premises or air-gapped deployment options.
Fortanix and NVIDIA have jointly integrated these technologies into the NVIDIA AI Factory Reference Design for Government, a blueprint for building secure national or enterprise-level AI systems.
Future-Proofed for a Post-Quantum Era
In addition to current encryption standards such as AES, Fortanix supports post-quantum cryptography (PQC) within its DSM product.
As global research in quantum computing accelerates, PQC algorithms are expected to become a critical component of secure computing frameworks.
“We don’t invent cryptography; we implement what’s proven,” Kashyap said. “But we also make sure our customers are ready for the post-quantum era when it arrives.”
Real-World Flexibility
While the platform is designed for on-premises and sovereign use cases, Kashyap emphasized that it can also run in major cloud environments that already support confidential computing.
Enterprises operating across multiple regions can maintain consistent key management and encryption controls, either through centralized key hosting or replicated key clusters.
This flexibility allows organizations to shift AI workloads between data centers or cloud regions, whether for performance optimization, redundancy, or regulatory reasons, without losing control of their sensitive information.
Fortanix converts usage into “credits,” which correspond to the number of AI instances running within a factory environment. The structure allows enterprises to scale incrementally as their AI initiatives grow.
Fortanix will showcase the joint platform at NVIDIA GTC, held October 27–29, 2025, at the Walter E. Washington Convention Center in Washington, D.C. Visitors can find Fortanix at booth I-7 for live demonstrations and discussions on securing AI workloads in highly regulated environments.
About Fortanix
Fortanix Inc. was founded in 2016 in Mountain View, California, by Anand Kashyap and Ambuj Kumar, both former Intel engineers who worked on trusted execution and encryption technologies. The company was created to commercialize confidential computing, then an emerging concept, by extending the protection of encrypted data beyond storage and transmission to data in active use, according to TechCrunch and the company's own About page.
Kashyap, who previously served as a senior security architect at Intel and VMware, and Kumar, a former engineering lead at Intel, drew on years of work in trusted hardware and virtualization systems. Their shared insight into the gap between research-grade cryptography and enterprise adoption drove them to found Fortanix, according to Forbes and Crunchbase.
Today, Fortanix is recognized as a global leader in confidential computing and data security, offering solutions that protect data across its lifecycle: at rest, in transit, and in use.
Fortanix serves enterprises and governments worldwide, with deployments ranging from cloud-native services to high-security, air-gapped systems.
"Historically we provided encryption and key-management capabilities," Kashyap said. "Now we're going further to secure the workload itself, especially AI, so a complete AI pipeline can run protected with confidential computing. That applies whether the AI runs in the cloud or in a sovereign environment handling sensitive or regulated data."

