Prompt Security's Itamar Golan on why generative AI security requires building a category, not a feature
Technology


By the Editorial Board | Published November 27, 2025 | Last updated November 27, 2025, 5:28 p.m.

VentureBeat recently sat down (virtually) with Itamar Golan, co-founder and CEO of Prompt Security, to talk through the GenAI security challenges organizations of all sizes face.

We talked about shadow AI sprawl, the strategic decisions that led Golan to build a market-leading platform rather than compete on features, and a real-world incident that crystallized why protecting AI applications is no longer optional. Golan offered an unvarnished view of the company's mission to empower enterprises to adopt AI securely, and how that vision led to SentinelOne's estimated $250 million acquisition in August 2025.

Golan's path to founding Prompt Security began with academic work on transformer architectures, well before they became foundational to today's large language models. His experience building one of the earliest GenAI-powered security features using GPT-2 and GPT-3 convinced him that LLM-driven applications were creating an entirely new attack surface. He founded Prompt Security in August 2023, raised $23 million across two rounds, built a 50-person team, and achieved a successful exit in under two years.

The timing of our conversation couldn't be better. VentureBeat analysis shows shadow AI now costs enterprises $4.63 million per breach, 16% above average, yet 97% of breached organizations lack basic AI access controls, according to IBM's 2025 data. VentureBeat estimates that shadow AI apps could double by mid-2026 based on current 5% monthly growth rates. Cyberhaven data reveals 73.8% of ChatGPT workplace accounts are unauthorized, and enterprise AI usage has grown 61x in just 24 months. As Golan told VentureBeat in earlier coverage, "We see 50 new AI apps a day, and we've already cataloged over 12,000. Around 40% of these default to training on any data you feed them, meaning your intellectual property can become part of their models."

The following interview has been edited for clarity and length.

VentureBeat: What made you recognize that GenAI security needed a dedicated company when most enterprises were still figuring out how to deploy their first LLMs? Was there a specific moment, customer conversation, or attack pattern you saw that convinced you this was a fundable, venture-scale opportunity?

Itamar Golan: From an early age, I was drawn to mathematics, data, and the emerging world of artificial intelligence. That curiosity shaped my academic path, culminating in research on transformer architectures, well before they became foundational to today's large language models. My passion for AI also guided my early career as a data scientist, where my work increasingly intersected with cybersecurity.

Everything accelerated with the release of the first OpenAI API. Around that time, as part of my previous job, I teamed up with Lior Drihem, who would later become my co-founder and Prompt Security's CTO. Together, we built one of the earliest security features powered by generative AI, using GPT-2 and GPT-3 to generate contextual, actionable remediation steps for security alerts. This reduced the time security teams needed to understand and resolve issues.

That experience made it clear that applications powered by GPT-like models were opening an entirely new and vulnerable attack surface. Recognizing this shift, we founded Prompt Security in August 2023 to address these emerging risks. Our goal was to empower organizations to ride this wave of innovation and unleash the potential of AI without it becoming a security and governance nightmare.

Prompt Security became known for prompt injection defense, but you were solving a broader set of GenAI security challenges. Walk me through the full scope of what the platform addressed: data leakage, model governance, compliance, red teaming, whatever else. What capabilities ended up resonating most with customers that may have surprised you?

From the beginning, we designed Prompt Security to cover a broad range of use cases. Focusing solely on employee monitoring or prompt-injection protection for internal AI applications was never enough. To truly give security teams the confidence to adopt AI safely, we needed to protect every touchpoint across the organization, and do it all at runtime.

For many customers, the real turning point was discovering just how many AI tools their employees were already using. Early on, companies often found not just ChatGPT but dozens of unmanaged AI services in active use, completely outside IT's visibility. That made shadow AI discovery a critical part of our solution.
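
Discovery of this kind is typically implemented by matching egress traffic against a catalog of known AI service domains. Here is a minimal sketch in Python, assuming a simple CSV proxy log with "user" and "host" columns; the log schema and the domain list are illustrative assumptions, not Prompt Security's actual implementation:

```python
import csv
from collections import Counter

# Illustrative subset of a known-AI-services catalog; a production
# catalog (Golan cites 12,000+ apps) would be far larger.
AI_SERVICE_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "api.openai.com": "OpenAI API",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def discover_shadow_ai(proxy_log_path: str) -> Counter:
    """Count requests to known AI services from a CSV proxy log
    with 'user' and 'host' columns (hypothetical schema)."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            # Match exact hosts and subdomains of cataloged services.
            for domain, app in AI_SERVICE_DOMAINS.items():
                if host == domain or host.endswith("." + domain):
                    hits[(row["user"], app)] += 1
    return hits

if __name__ == "__main__":
    for (user, app), count in discover_shadow_ai("proxy.csv").most_common(10):
        print(f"{user} -> {app}: {count} requests")
```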

Equally important was real-time sensitive-data sanitization. Instead of blocking AI tools outright, we enabled employees to use them safely by automatically removing sensitive information from prompts before it ever reached an external model. It struck the balance organizations needed: strong security without sacrificing productivity. Employees could keep working with AI, while security teams knew that no sensitive data was leaking out.
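
Conceptually, this kind of inline sanitization sits between the user and the model and rewrites the prompt before it leaves the network. A minimal sketch, assuming simple regex-based detectors; real products layer ML classifiers on top, and the patterns and placeholder tokens here are illustrative only:

```python
import re

# Illustrative detectors for common sensitive-data shapes; production
# systems combine patterns like these with trained classifiers.
REDACTION_RULES = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "<CARD_NUMBER>"),
    (re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"), "<API_KEY>"),
]

def sanitize_prompt(prompt: str) -> tuple[str, int]:
    """Redact sensitive spans before the prompt reaches an external model.
    Returns the sanitized text and the number of redactions made."""
    total = 0
    for pattern, placeholder in REDACTION_RULES:
        prompt, n = pattern.subn(placeholder, prompt)
        total += n
    return prompt, total

clean, n = sanitize_prompt(
    "Summarize this ticket from jane.doe@example.com, card 4111 1111 1111 1111"
)
print(n, clean)  # 2 Summarize this ticket from <EMAIL>, card <CARD_NUMBER>
```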

What surprised many customers was how enabling safe usage, rather than restricting it, drove faster adoption and trust. Once they saw AI as a managed, secure channel instead of a forbidden one, usage exploded responsibly.

You built Prompt Security into a market leader. What were the two or three strategic decisions that truly accelerated your growth? Was it focusing on a specific vertical?

Looking back, the real acceleration didn't come from luck or timing: it came from a few deliberate choices I made early. Those choices were uncomfortable, expensive, and slowed us down in the short term, but they created enormous leverage over time.

First, I chose to build a category, not a feature. From day one, I refused to position Prompt Security as "just" protection against prompt injection or data leakage, because I saw that as a dead end.

Instead, I framed Prompt Security as the AI security control layer for the enterprise, the platform that governs how humans, agents, and applications interact with LLMs. That decision was fundamental, allowing us to create a budget instead of fighting for one, sit at the CISO table as a strategic layer rather than a tool, and build platform-level pricing and long-term relevance instead of a narrow point solution. I wasn't trying to win a feature race; I was building a new category.

Second, I chose enterprise complexity before it was comfortable. While most startups avoid complexity until they're forced into it, I did the opposite: I built for enterprise deployment models early, including self-hosted and hybrid; covered real enterprise surfaces like browsers, IDEs, internal tools, MCPs, and agentic workflows; and accepted longer cycles and more complex engineering in exchange for credibility. It wasn't the easiest route, but it gave us something competitors couldn't fake: enterprise readiness before the market even knew it would need it.

Third, I chose depth over logos. Rather than chasing volume or vanity metrics, I went deep with a smaller number of very serious customers, embedding ourselves into how they rolled out AI internally, how they thought about risk, policy, and governance, and how they planned long-term AI adoption. Those customers didn't just buy the product: they shaped it. That created a product that reflected enterprise reality, produced proof points that moved boardrooms and not just security teams, and built a level of defensibility that came from entrenchment rather than marketing.

You were educating the market on threats most CISOs hadn't even considered yet. How did your positioning and messaging evolve from year one to the acquisition?

In the early days, we were educating a market that was still trying to understand whether AI adoption extended beyond a few employees using ChatGPT for productivity. Our positioning focused heavily on awareness, showing CISOs that AI usage was already sprawling across their organizations and that this created real, immediate risks they hadn't accounted for.

As the market matured, our messaging shifted from "this is happening" to "here's how you stay ahead." CISOs now fully recognize the scale of AI sprawl and know that simple URL filtering or basic controls won't suffice. Instead of debating the problem, they're looking for a way to enable safe AI use without the operational burden of tracking every new tool, site, copilot, or AI agent employees discover.

By the time of the acquisition, our positioning centered on being the safe enabler: a solution that delivers visibility, protection, and governance at the speed of AI innovation.

Our research shows that enterprises are struggling to get approval from senior management to deploy GenAI security tools. How are security departments persuading their C-level executives to move forward?

The most successful CISOs are framing GenAI security as a natural extension of existing data protection mandates, not an experimental budget line. They position it as protecting the same assets, corporate data, IP, and user trust, in a new, rapidly growing channel.

What's the most serious GenAI security incident or near-miss you encountered while building Prompt Security that really drove home how critical these protections are? How did that incident shape your product roadmap or go-to-market approach?

The moment that crystallized everything for me happened with a large, highly regulated company that launched a customer-facing GenAI support agent. This wasn't a sloppy experiment. They had everything the security textbooks recommend: WAF, CSPM, shift-left, regular red teaming, a secure SDLC, the works. On paper, they were doing everything right.

What they didn't fully account for was that the AI agent itself had become a new, exposed attack surface. Within weeks of launch, a non-technical user discovered that by carefully crafting the right conversation flow (not code, not exploits, just natural language) they could prompt-inject the agent into revealing information from other customers' support tickets and internal case summaries. It wasn't a nation-state attacker. It wasn't someone with advanced skills. It was essentially a curious user with time and creativity. And yet, through that single conversational interface, they managed to access some of the most sensitive customer data the company holds.

It was both fascinating and terrifying: realizing how creativity alone could become an exploit vector.

That was the moment I truly understood what GenAI changes about the threat model. AI doesn't just introduce new risks, it democratizes them. It makes systems hackable by people who never had the skill set before, compresses the time it takes to discover exploits, and massively expands the damage radius once something breaks. That incident validated our original approach, and it pushed us to double down on protecting AI applications, not just internal use. We accelerated work around:

• Runtime protection for customer-facing AI apps

• Prompt injection and context manipulation detection

• Cross-tenant data leakage prevention at the model interaction layer

It also reshaped our go-to-market. Instead of only talking about internal AI governance, we began showing security leaders how GenAI turns their customer-facing surfaces into high-risk, high-exposure assets overnight.
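
The three capabilities above converge at the model interaction layer: check the inbound conversation before it reaches the model, and scope the context the agent can see to the requesting tenant. A deliberately simplified sketch of that pattern, with a hypothetical ticket schema and a toy keyword heuristic (real injection detection is model-based, not keyword-based):

```python
from dataclasses import dataclass

# Toy phrases that often signal instruction-override attempts; real
# systems use trained classifiers rather than static keyword lists.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "reveal your system prompt",
    "act as the administrator",
)

@dataclass
class TicketRecord:
    tenant_id: str
    ticket_id: str
    summary: str

def looks_like_injection(user_message: str) -> bool:
    msg = user_message.lower()
    return any(marker in msg for marker in INJECTION_MARKERS)

def filter_context(records: list[TicketRecord], tenant_id: str) -> list[TicketRecord]:
    """Cross-tenant leakage prevention: the agent may only see context
    belonging to the tenant making the request."""
    return [r for r in records if r.tenant_id == tenant_id]

def guarded_reply(user_message: str, tenant_id: str,
                  all_records: list[TicketRecord]) -> str:
    if looks_like_injection(user_message):
        return "Request blocked: possible prompt injection detected."
    context = filter_context(all_records, tenant_id)
    # A real agent would now call the LLM with only the filtered context.
    return f"Answering with {len(context)} ticket(s) from tenant {tenant_id}."
```

The key design choice the incident illustrates: tenant scoping happens in code before the model is called, so no amount of conversational creativity can talk the model into context it was never given.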

What's your role and focus now that you're part of SentinelOne? How has working inside a larger platform company changed what you're able to build compared to running an independent startup? What got easier, and what got harder?

The focus now is on extending AI security across the entire platform, bringing runtime GenAI protection, visibility, and policy enforcement into the same ecosystem that already secures endpoints, identities, and cloud workloads. The mission hasn't changed; the reach has.

Ultimately, we're building toward a future where AI itself becomes part of the defense fabric: not just something to secure, but something that secures you.

The bigger picture

M&A activity continues to accelerate for GenAI startups that have proven they can scale to enterprise-grade security without sacrificing accuracy or speed. Palo Alto Networks paid $700 million for Protect AI. Tenable acquired Apex for $100 million. Cisco bought Robust Intelligence for a reported $500 million. As Golan noted, the companies that survive the next wave of AI-enabled attacks will be those that embedded security into their AI adoption strategy from the beginning.

Post-acquisition, Prompt Security's capabilities will extend across SentinelOne's Singularity Platform, including MCP gateway protection between AI applications and more than 13,000 known MCP servers. Prompt Security is also delivering model-agnostic coverage across all major LLM providers, including OpenAI, Anthropic, and Google, as well as self-hosted or on-prem models, as part of the company's integration into the Singularity Platform.
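
An MCP gateway of this kind sits between agents and MCP servers and enforces policy on which servers and tools a given application may call. A minimal sketch of the deny-by-default pattern, with a hypothetical policy format (not SentinelOne's actual design):

```python
# Hypothetical per-application policy: which MCP servers and tools
# an agent may reach through the gateway.
POLICY = {
    "support-agent": {
        "allowed_servers": {"tickets.internal"},
        "allowed_tools": {"search_tickets", "get_ticket"},
    },
}

def authorize_mcp_call(app: str, server: str, tool: str) -> bool:
    """Gateway check: deny by default, allow only cataloged server/tool pairs."""
    policy = POLICY.get(app)
    if policy is None:
        return False
    return server in policy["allowed_servers"] and tool in policy["allowed_tools"]

assert authorize_mcp_call("support-agent", "tickets.internal", "get_ticket")
assert not authorize_mcp_call("support-agent", "payments.internal", "charge")
```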
