CISOs know exactly the place their AI nightmare unfolds quickest. It’s inference, the weak stage the place dwell fashions meet real-world knowledge, leaving enterprises uncovered to immediate injection, knowledge leaks, and mannequin jailbreaks.
Databricks Ventures and Noma Safety are confronting these inference-stage threats head-on. Backed by a contemporary $32 million Sequence A spherical led by Ballistic Ventures and Glilot Capital, with robust assist from Databricks Ventures, the partnership goals to deal with the essential safety gaps which have hindered enterprise AI deployments.
“The number one reason enterprises hesitate to deploy AI at scale fully is security,” mentioned Niv Braun, CEO of Noma Safety, in an unique interview with VentureBeat. “With Databricks, we’re embedding real-time threat analytics, advanced inference-layer protections, and proactive AI red teaming directly into enterprise workflows. Our joint approach enables organizations to accelerate their AI ambitions safely and confidently finally,” Braun mentioned.
Securing AI inference calls for real-time analytics and runtime protection, Gartner finds
Conventional cybersecurity prioritizes perimeter defenses, leaving AI inference vulnerabilities dangerously missed. Andrew Ferguson, Vice President at Databricks Ventures, highlighted this essential safety hole in an unique interview with VentureBeat, emphasizing buyer urgency relating to inference-layer safety. “Our customers clearly indicated that securing AI inference in real-time is crucial, and Noma uniquely delivers that capability,” Ferguson mentioned. “Noma directly addresses the inference security gap with continuous monitoring and precise runtime controls.”
Braun expanded on this essential want. “We built our runtime protection specifically for increasingly complex AI interactions,” Braun defined. “Real-time threat analytics at the inference stage ensure enterprises maintain robust runtime defenses, minimizing unauthorized data exposure and adversarial model manipulation.”
Gartner’s current evaluation confirms that enterprise demand for superior AI Belief, Danger, and Safety Administration (TRiSM) capabilities is surging. Gartner predicts that via 2026, over 80% of unauthorized AI incidents will consequence from inside misuse slightly than exterior threats, reinforcing the urgency for built-in governance and real-time AI safety.
Gartner’s AI TRiSM framework illustrates complete safety layers important for managing enterprise AI threat successfully. (Supply: Gartner)
Noma’s proactive pink teaming goals to make sure AI integrity from the outset
Noma’s proactive pink teaming strategy is strategically central to figuring out vulnerabilities lengthy earlier than AI fashions attain manufacturing, Braun informed VentureBeat. By simulating subtle adversarial assaults throughout pre-production testing, Noma exposes and addresses dangers early, considerably enhancing the robustness of runtime safety.
Throughout his interview with VentureBeat, Braun elaborated on the strategic worth of proactive pink teaming: “Red teaming is essential. We proactively uncover vulnerabilities pre-production, ensuring AI integrity from day one.”
“Reducing time to production without compromising security requires avoiding over-engineering. We design testing methodologies that directly inform runtime protections, helping enterprises move securely and efficiently from testing to deployment”, Braun suggested.
Braun elaborated additional on the complexity of recent AI interactions and the depth required in proactive pink teaming strategies. He burdened that this course of should evolve alongside more and more subtle AI fashions, significantly these of the generative kind: “Our runtime protection was specifically built to handle increasingly complex AI interactions,” Braun defined. “Each detector we employ integrates multiple security layers, including advanced NLP models and language-modeling capabilities, ensuring we provide comprehensive security at every inference step.”
The pink staff workouts not solely validate the fashions but in addition strengthen enterprise confidence in deploying superior AI methods safely at scale, straight aligning with the expectations of main enterprise Chief Info Safety Officers (CISOs).
How Databricks and Noma Block Crucial AI Inference Threats
Securing AI inference from rising threats has turn into a prime precedence for CISOs as enterprises scale their AI mannequin pipelines. “The number one reason enterprises hesitate to deploy AI at scale fully is security,” emphasised Braun. Ferguson echoed this urgency, noting, “Our customers have clearly indicated securing AI inference in real-time is critical, and Noma uniquely delivers on that need.”
Collectively, Databricks and Noma provide built-in, real-time safety in opposition to subtle threats, together with immediate injection, knowledge leaks, and mannequin jailbreaks, whereas aligning intently with requirements equivalent to Databricks’ DASF 2.0 and OWASP pointers for strong governance and compliance.
The desk under summarizes key AI inference threats and the way the Databricks-Noma partnership mitigates them:
Risk VectorDescriptionPotential ImpactNoma-Databricks MitigationPrompt InjectionMalicious inputs are overriding mannequin directions.Unauthorized knowledge publicity and dangerous content material technology.Immediate scanning with multilayered detectors (Noma); Enter validation through DASF 2.0 (Databricks).Delicate Knowledge LeakageAccidental publicity of confidential knowledge.Compliance breaches, lack of mental property.Actual-time delicate knowledge detection and masking (Noma); Unity Catalog governance and encryption (Databricks).Mannequin JailbreakingBypassing embedded security mechanisms in AI fashions.Technology of inappropriate or malicious outputs.Runtime jailbreak detection and enforcement (Noma); MLflow mannequin governance (Databricks).Agent Instrument ExploitationMisuse of built-in AI agent functionalities.Unauthorized system entry and privilege escalation.Actual-time monitoring of agent interactions (Noma); Managed deployment environments (Databricks).Agent Reminiscence PoisoningInjection of false knowledge into persistent agent reminiscence.Compromised decision-making, misinformation.AI-SPM integrity checks and reminiscence safety (Noma); Delta Lake knowledge versioning (Databricks).Oblique Immediate InjectionEmbedding malicious directions in trusted inputs.Agent hijacking, unauthorized activity execution.Actual-time enter scanning for malicious patterns (Noma); Safe knowledge ingestion pipelines (Databricks).
How Databricks Lakehouse structure helps AI governance and safety
Databricks’ Lakehouse structure combines the structured governance capabilities of conventional knowledge warehouses with the scalability of information lakes, centralizing analytics, machine studying, and AI workloads inside a single, ruled surroundings.
By embedding governance straight into the information lifecycle, Lakehouse structure addresses compliance and safety dangers, significantly throughout the inference and runtime phases, aligning intently with business frameworks equivalent to OWASP and MITRE ATLAS.
Throughout our interview, Braun highlighted the platform’s alignment with the stringent regulatory calls for he’s seeing in gross sales cycles and with present clients. “We automatically map our security controls onto widely adopted frameworks like OWASP and MITRE ATLAS. This allows our customers to confidently comply with critical regulations such as the EU AI Act and ISO 42001. Governance isn’t just about checking boxes. It’s about embedding transparency and compliance directly into operational workflows”.

Databricks Lakehouse integrates governance and analytics to securely handle AI workloads. (Supply: Gartner)
How Databricks and Noma plan to safe enterprise AI at scale
Enterprise AI adoption is accelerating, however as deployments broaden, so do safety dangers, particularly on the mannequin inference stage.
The partnership between Databricks and Noma Safety addresses this straight by offering built-in governance and real-time menace detection, with a give attention to securing AI workflows from growth via manufacturing.
Ferguson defined the rationale behind this mixed strategy clearly: “Enterprise AI requires comprehensive security at every stage, especially at runtime. Our partnership with Noma integrates proactive threat analytics directly into AI operations, giving enterprises the security coverage they need to scale their AI deployments confidently”.
Every day insights on enterprise use circumstances with VB Every day
If you wish to impress your boss, VB Every day has you coated. We provide the inside scoop on what corporations are doing with generative AI, from regulatory shifts to sensible deployments, so you possibly can share insights for max ROI.
An error occured.


