Canadian AI startup Cohere, cofounded by one of the authors of the original transformer paper that kickstarted the large language model (LLM) revolution back in 2017, today unveiled Command A, its latest generative AI model designed for enterprise applications.
As the successor to Command-R, which debuted in March 2024, and Command R+ after it, Command A builds on Cohere's focus on retrieval-augmented generation (RAG), external tool use and enterprise AI efficiency, particularly with regard to compute and the speed at which it serves up answers.
That should make it an attractive option for enterprises looking to gain an AI advantage without breaking the bank, and for applications where prompt responses are needed, such as finance, health, medicine, science and law.
With faster speeds, lower hardware requirements and expanded multilingual capabilities, Command A positions itself as a strong alternative to models such as GPT-4o and DeepSeek-V3, traditional LLMs rather than the new reasoning models that have taken the AI industry by storm lately.
Unlike its predecessor, which supported a context length of 128,000 tokens (the amount of information the LLM can handle in a single input/output exchange, roughly equivalent to a 300-page novel), Command A doubles the context length to 256,000 tokens (equivalent to about 600 pages of text) while improving overall efficiency and enterprise readiness.
It also comes on the heels of Cohere for AI, the company's non-profit subsidiary, releasing an open-source (for research only) multilingual vision model called Aya Vision earlier this month.
A step up from Command-R
When Command-R launched in early 2024, it introduced key innovations like optimized RAG performance, better knowledge retrieval and lower-cost AI deployments.
It gained traction with enterprises, integrating into business solutions from companies like Oracle, Notion, Scale AI, Accenture and McKinsey, though a November 2024 report from Menlo Ventures surveying enterprise adoption put Cohere's market share among enterprises at a slim 3%, far below OpenAI (34%), Anthropic (24%) and even smaller startups like Mistral (5%).
Now, in a bid to become a bigger enterprise draw, Command A pushes these capabilities even further. According to Cohere, it:
Matches or outperforms OpenAI's GPT-4o and DeepSeek-V3 on business, STEM and coding tasks
Operates on just two GPUs (A100s or H100s), a major efficiency improvement compared to models that require as many as 32 GPUs
Achieves faster token generation, producing 156 tokens per second, 1.75x faster than GPT-4o and 2.4x faster than DeepSeek-V3
Reduces latency, with a 6,500ms time-to-first-token, compared to 7,460ms for GPT-4o and 14,740ms for DeepSeek-V3
Strengthens multilingual AI capabilities, with improved Arabic dialect matching and expanded support for 23 global languages.
Cohere notes in its online developer documentation: “Command A is Chatty. By default, the model is interactive and optimized for conversation, meaning it is verbose and uses markdown to highlight code. To override this behavior, developers should use a preamble which asks the model to simply provide the answer and to not use markdown or code block markers.”
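In practice, that override is just a short system message. Below is a minimal sketch using Cohere's Python SDK; the model identifier is an assumption for illustration, and the preamble wording paraphrases the documentation's advice rather than quoting an official template.

```python
# Minimal sketch: overriding Command A's "chatty" default with a system preamble,
# using Cohere's Python SDK (pip install cohere).
import cohere

co = cohere.ClientV2(api_key="YOUR_API_KEY")

response = co.chat(
    model="command-a-03-2025",  # assumed identifier; check Cohere's docs for the current name
    messages=[
        # Preamble asking the model to answer plainly, without markdown or code block markers.
        {"role": "system", "content": "Simply provide the answer. Do not use markdown or code block markers."},
        {"role": "user", "content": "List three enterprise uses of retrieval-augmented generation."},
    ],
)

print(response.message.content[0].text)
```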
Built for the enterprise
Cohere has continued its enterprise-first strategy with Command A, ensuring that it integrates seamlessly into business environments. Key features include:
Advanced retrieval-augmented generation (RAG): Enables verifiable, high-accuracy responses for enterprise applications (see the sketch after this list)
Agentic tool use: Supports complex workflows by integrating with enterprise tools
North AI platform integration: Works with Cohere's North AI platform, allowing businesses to automate tasks using secure, enterprise-grade AI agents
Scalability and cost efficiency: Private deployments are up to 50% cheaper than API-based access.
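To make the RAG point above concrete, here is a minimal sketch of grounded question answering with Cohere's Python SDK. The document snippets, the query and the model identifier are invented for illustration, and the exact request shape should be checked against Cohere's current SDK documentation.

```python
# Minimal RAG sketch: pass source documents alongside the question so the model
# can ground its answer and return citations. Document contents, query and model
# name below are hypothetical; verify the request format against Cohere's SDK docs.
import cohere

co = cohere.ClientV2(api_key="YOUR_API_KEY")

documents = [
    {"data": {"title": "Q3 finance memo", "snippet": "Cloud spend rose 12% quarter over quarter."}},
    {"data": {"title": "IT policy", "snippet": "All production workloads must run in the EU region."}},
]

response = co.chat(
    model="command-a-03-2025",  # assumed identifier
    messages=[{"role": "user", "content": "Summarize our cloud spend trend and where workloads must run."}],
    documents=documents,
)

print(response.message.content[0].text)
# Citations map spans of the answer back to the supplied documents,
# which is what makes the response verifiable.
print(response.message.citations)
```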
Multilingual and highly performant in Arabic
A standout feature of Command A is its ability to generate accurate responses across 23 of the most spoken languages around the world, including improved handling of Arabic dialects. Supported languages (according to the developer documentation on Cohere's website) are:
English
French
Spanish
Italian
German
Portuguese
Japanese
Korean
Chinese
Arabic
Russian
Polish
Turkish
Vietnamese
Dutch
Czech
Indonesian
Ukrainian
Romanian
Greek
Hindi
Hebrew
Persian
In benchmark evaluations:
Command A scored 98.2% accuracy in responding in Arabic to English prompts, higher than both DeepSeek-V3 (94.9%) and GPT-4o (92.2%).
It significantly outperformed competitors in dialect consistency, achieving an ADI2 score of 24.7, compared to 15.9 (GPT-4o) and 15.7 (DeepSeek-V3).
Built for speed and efficiency
Speed is a critical factor in enterprise AI deployment, and Command A has been engineered to deliver results faster than many of its competitors.
Token streaming speed for 100K-context requests: 73 tokens/sec (compared to GPT-4o at 38 tokens/sec and DeepSeek-V3 at 32 tokens/sec)
Faster first-token generation: Reduces response time significantly compared to other large-scale models
Pricing and availability
Command A is now available on the Cohere platform, and with open weights for research use only on Hugging Face under a Creative Commons Attribution Non-Commercial 4.0 International (CC BY-NC 4.0) license, with broader cloud provider support coming soon.
Input tokens: $2.50 per million
Output tokens: $10.00 per million
Private and on-prem deployments are available upon request.
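For a rough sense of what those rates mean in practice, a back-of-the-envelope estimate (with invented workload numbers) looks like this:

```python
# Back-of-the-envelope API cost estimate at the list prices above:
# $2.50 per million input tokens, $10.00 per million output tokens.
# The workload figures are hypothetical, for illustration only.
INPUT_PRICE_PER_M = 2.50
OUTPUT_PRICE_PER_M = 10.00

input_tokens = 5_000_000    # e.g., documents fed in for RAG over a month
output_tokens = 1_000_000   # e.g., generated answers over the same period

cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
     + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
print(f"Estimated monthly API cost: ${cost:.2f}")  # -> $22.50
```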
Industry reactions
Several AI researchers and Cohere team members have shared their enthusiasm for Command A.
Dwaraknath Ganesan, who works on pretraining at Cohere, commented on X: “Extremely excited to reveal what we have been working on for the last few months! Command A is amazing. Can be deployed on just 2 H100 GPUs! 256K context length, expanded multilingual support, agentic tool use… very proud of this one.”
Pierre Richemond, AI researcher at Cohere, added: “Command A is our new GPT-4o/DeepSeek v3 level, open-weights 111B model sporting a 256K context length that has been optimized for efficiency in enterprise use cases.”
Building on the foundation of Command-R, Cohere's Command A represents the next step in scalable, cost-efficient enterprise AI.
With faster speeds, a larger context window, improved multilingual handling and lower deployment costs, it offers businesses a powerful alternative to existing AI models.