Wells Fargo has quietly accomplished what most enterprises are still dreaming about: building a large-scale, production-ready generative AI system that actually works. In 2024 alone, the bank's AI-powered assistant, Fargo, handled 245.4 million interactions – more than doubling its original projections – and it did so without ever exposing sensitive customer data to a language model.
Fargo helps customers with everyday banking needs via voice or text, handling requests such as paying bills, transferring funds, providing transaction details, and answering questions about account activity. The assistant has proven to be a sticky tool for users, averaging multiple interactions per session.
The system works through a privacy-first pipeline. A customer interacts via the app, where speech is transcribed locally with a speech-to-text model. That text is then scrubbed and tokenized by Wells Fargo's internal systems, including a small language model (SLM) for personally identifiable information (PII) detection. Only then is a call made to Google's Flash 2.0 model to extract the user's intent and relevant entities. No sensitive data ever reaches the model.
"The orchestration layer talks to the model," Wells Fargo CIO Chintan Mehta said in an interview with VentureBeat. "We're the filters in front and behind."
The only thing the model does, he explained, is determine the intent and entity based on the phrase a user submits, such as identifying that a request involves a savings account. "All the computations and detokenization, everything is on our end," Mehta said. "Our APIs… none of them pass through the LLM. All of them are just sitting orthogonal to it."
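In rough pseudocode, the pattern Mehta describes – filter in front, filter behind – looks something like the sketch below. It is a minimal illustration, not Wells Fargo's implementation: the regex-based tokenizer stands in for the bank's internal SLM-based PII detection, and the function names are invented.

```python
import re
import uuid

class TokenVault:
    """Illustrative stand-in for Wells Fargo's internal tokenization systems."""

    def __init__(self):
        self._vault: dict[str, str] = {}

    def tokenize(self, text: str) -> str:
        # Swap anything resembling an account number for an opaque token.
        # The real system uses an SLM for PII detection; a regex stands in here.
        def _swap(match: re.Match) -> str:
            token = f"<ACCT_{uuid.uuid4().hex[:8]}>"
            self._vault[token] = match.group(0)
            return token

        return re.sub(r"\b\d{8,17}\b", _swap, text)

    def detokenize(self, text: str) -> str:
        # Runs only inside the bank's boundary, never on the LLM side.
        for token, value in self._vault.items():
            text = text.replace(token, value)
        return text

def call_external_llm(scrubbed_text: str) -> dict:
    # Placeholder for the Gemini Flash 2.0 call: it sees only scrubbed text
    # and returns nothing but an intent label and entity references.
    return {"intent": "transfer_funds", "entities": ["savings_account"]}

def handle_utterance(utterance: str) -> dict:
    vault = TokenVault()
    scrubbed = vault.tokenize(utterance)   # filter in front
    parsed = call_external_llm(scrubbed)   # only scrubbed text leaves
    # Filter behind: detokenization and all computation stay internal.
    return {"intent": parsed["intent"], "request": vault.detokenize(scrubbed)}
```

The point of the design is that the external model is stateless with respect to customer data: it can name the intent, but it never holds anything it could leak.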
Wells Fargo's internal stats show a dramatic ramp: from 21.3 million interactions in 2023 to more than 245 million in 2024, with over 336 million cumulative interactions since launch. Spanish-language adoption has also surged, accounting for more than 80% of usage since its September 2023 rollout.
This architecture reflects a broader strategic shift. Mehta said the bank's approach is grounded in building "compound systems," where an orchestration layer determines which model to use based on the task. Gemini Flash 2.0 powers Fargo, but smaller models like Llama are used elsewhere internally, and OpenAI models can be tapped as needed.
"We're poly-model and poly-cloud," he said, noting that while the bank leans heavily on Google's cloud today, it also uses Microsoft's Azure.
Mehta says model-agnosticism is essential now that the performance delta between the top models is tiny. He added that some models still excel in specific areas – Claude Sonnet 3.7 and OpenAI's o3-mini-high for coding, OpenAI's o3 for deep research, and so on – but in his view, the more important question is how they are orchestrated into pipelines.
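At its simplest, that kind of orchestration reduces to a routing table. The sketch below is hypothetical: the model names come from the examples above, but the mapping and the route_task() helper are invented for illustration, not Wells Fargo's actual logic.

```python
# Hypothetical task-to-model routing table. The model names are drawn from the
# article; the mapping itself is illustrative, not Wells Fargo's actual config.
ROUTES = {
    "intent_extraction": "gemini-2.0-flash",   # powers Fargo
    "coding":            "claude-3-7-sonnet",  # or o3-mini-high
    "deep_research":     "o3",
    "internal_batch":    "llama",              # smaller internal workloads
}

DEFAULT_MODEL = "gemini-2.0-flash"

def route_task(task_type: str) -> str:
    """Pick a model for a task; fall back to a default for unknown task types."""
    return ROUTES.get(task_type, DEFAULT_MODEL)

# Example: route_task("coding") -> "claude-3-7-sonnet"
```

Because the routing decision lives in the orchestration layer rather than the application, swapping a model in or out becomes a configuration change – which is what makes the "poly-model" posture practical.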
Context window size remains one area where he sees meaningful separation. Mehta praised Gemini 2.5 Pro's 1M-token capacity as a clear edge for tasks like retrieval-augmented generation (RAG), where pre-processing unstructured data can add delay. "Gemini has absolutely killed it when it comes to that," he said. For many use cases, he said, the overhead of preprocessing data before deploying a model often outweighs the benefit.
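The tradeoff Mehta is pointing at can be framed as a simple decision: if the material fits the context window, skip the chunk-embed-retrieve preprocessing entirely. A minimal sketch, with hypothetical long_context_call() and rag_call() helpers standing in for real pipelines:

```python
GEMINI_25_PRO_CONTEXT = 1_000_000  # the 1M-token window Mehta cites

def long_context_call(document: str, question: str) -> str:
    return "answer"  # stub: stuff the whole document into one prompt

def rag_call(document: str, question: str) -> str:
    return "answer"  # stub: chunk, embed, retrieve, then prompt

def answer(document: str, question: str, doc_tokens: int) -> str:
    # If everything fits in the window, skip preprocessing and its latency.
    if doc_tokens < GEMINI_25_PRO_CONTEXT:
        return long_context_call(document, question)
    return rag_call(document, question)
```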
Fargo's design shows how large-context models can enable fast, compliant, high-volume automation – even without human intervention. And that's a sharp contrast to competitors. At Citi, for example, analytics chief Promiti Dutta said last year that the risks of external-facing large language models (LLMs) were still too high. In a talk hosted by VentureBeat, she described a system where support agents don't speak directly to customers, due to concerns about hallucinations and data sensitivity.
Wells Fargo addresses those concerns through its orchestration design. Rather than relying on a human in the loop, it uses layered safeguards and internal logic to keep LLMs out of any data-sensitive path.
Agentic moves and multi-agent design
Wells Fargo is also moving toward more autonomous systems. Mehta described a recent project to re-underwrite 15 years of archived loan documents. The bank used a network of interacting agents, some of which are built on open source frameworks like LangGraph. Each agent had a specific role in the process, which included retrieving documents from the archive, extracting their contents, matching the data to systems of record, and then continuing down the pipeline to perform calculations – all tasks that traditionally require human analysts. A human reviews the final output, but most of the work ran autonomously.
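In LangGraph terms, a pipeline like the one Mehta describes is a graph of nodes that each update a shared state. The sketch below shows only the wiring pattern; the node bodies are stubs, and the state fields and document ID are invented for illustration.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class LoanState(TypedDict, total=False):
    doc_id: str
    raw_text: str
    record: dict
    result: dict

# Each node is one "agent" with a specific role; the bodies here are stubs.
def retrieve(state: LoanState) -> dict:
    return {"raw_text": f"archived contents of {state['doc_id']}"}

def extract(state: LoanState) -> dict:
    return {"record": {"fields_from": state["raw_text"]}}  # match to systems of record

def calculate(state: LoanState) -> dict:
    return {"result": {"underwriting_score": 0.0}}  # stub re-underwriting math

graph = StateGraph(LoanState)
graph.add_node("retrieve", retrieve)
graph.add_node("extract", extract)
graph.add_node("calculate", calculate)
graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "extract")
graph.add_edge("extract", "calculate")
graph.add_edge("calculate", END)

pipeline = graph.compile()
final = pipeline.invoke({"doc_id": "loan-archive-00042"})
# `final` holds the autonomous run's output; a human reviews it at the end.
```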
The bank is also evaluating reasoning models for internal use, where Mehta said differentiation still exists. While most models now handle everyday tasks well, reasoning remains an area where some models clearly perform better than others – and they do it in different ways.
Why latency (and pricing) matter
At Wayfair, CTO Fiona Tan said Gemini 2.5 Pro has shown strong promise, particularly in the area of speed. "In some cases, Gemini 2.5 came back faster than Claude or OpenAI," she said, referencing recent experiments by her team.
Tan said that lower latency opens the door to real-time customer applications. Today, Wayfair uses LLMs mostly for internal-facing apps – including merchandising and capital planning – but faster inference might let the company extend LLMs to customer-facing products like the Q&A tool on product detail pages.
Tan also noted improvements in Gemini's coding performance. "It seems pretty comparable now to Claude 3.7," she said. The team has begun evaluating the model through products like Cursor and Code Assist, where developers have the flexibility to choose.
Google has since announced aggressive pricing for Gemini 2.5 Pro: $1.24 per million input tokens and $10 per million output tokens. Tan said that pricing, plus SKU flexibility for reasoning tasks, makes Gemini a strong option going forward.
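To put those rates in concrete terms, a quick back-of-envelope calculation (the token counts below are invented for illustration):

```python
# Gemini 2.5 Pro rates quoted above, expressed per token.
INPUT_RATE = 1.24 / 1_000_000    # dollars per input token
OUTPUT_RATE = 10.00 / 1_000_000  # dollars per output token

def call_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 200,000-token document plus a 2,000-token answer:
print(f"${call_cost(200_000, 2_000):.3f}")  # $0.268
```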
The broader signal for Google Cloud Next
Wells Fargo's and Wayfair's stories land at an opportune moment for Google, which is hosting its annual Google Cloud Next conference this week in Las Vegas. While OpenAI and Anthropic have dominated the AI discourse in recent months, enterprise deployments may quietly swing back in Google's favor.
At the conference, Google is expected to highlight a wave of agentic AI initiatives, including new capabilities and tooling to make autonomous agents more useful in enterprise workflows. Already at last year's Cloud Next event, CEO Thomas Kurian predicted that agents would be designed to help users "achieve specific goals" and "connect with other agents" to complete tasks — themes that echo many of the orchestration and autonomy principles Mehta described.
Wells Fargo's Mehta emphasized that the real bottleneck for AI adoption won't be model performance or GPU availability. "I think this is powerful. I have zero doubt about that," he said of generative AI's promise to deliver value for enterprise apps. But he warned that the hype cycle may be running ahead of practical value. "We have to be very thoughtful about not getting caught up with shiny objects."
His bigger concern? Power. "The constraint isn't going to be the chips," Mehta said. "It's going to be power generation and distribution. That's the real bottleneck."