Beyond benchmarks: How DeepSeek-R1 and o1 perform on real-world tasks
Technology


Last updated: January 31, 2025 8:27 pm
By the Editorial Board | Published January 31, 2025

DeepSeek-R1 has certainly created a lot of excitement and concern, especially for OpenAI's rival model o1. So, we put them to the test in a side-by-side comparison on a few simple data analysis and market research tasks.

To put the models on equal footing, we used Perplexity Pro Search, which now supports both o1 and R1. Our goal was to look beyond benchmarks and see whether the models can actually perform ad hoc tasks that require gathering information from the web, picking out the right pieces of data and performing simple jobs that would otherwise take substantial manual effort.

Both models are impressive but make mistakes when the prompts lack specificity. o1 is slightly better at reasoning tasks, but R1's transparency gives it an edge in cases (and there will be quite a few) where it makes mistakes.

Here is a breakdown of some of our experiments, with links to the Perplexity pages where you can review the results yourself.

Calculating returns on investments from the web

Our first test gauged whether the models could calculate return on investment (ROI). We considered a scenario where a user has invested $140 in the Magnificent Seven (Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, Tesla) on the first day of every month from January to December 2024. We asked the model to calculate the value of the portfolio on the current date.

To accomplish this task, the model would have to pull Mag 7 price information for the first day of each month, split the monthly investment evenly across the stocks ($20 per stock), sum them up and calculate the portfolio value according to the price of the stocks on the current date.
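For reference, the calculation the models were asked to perform is straightforward dollar-cost averaging. A minimal sketch, using made-up prices rather than real market data:

```python
# Dollar-cost-averaging portfolio value, as described in the task.
# All prices below are hypothetical placeholders; real first-of-month
# quotes would come from a source such as Yahoo! Finance.

MONTHLY_INVESTMENT = 140.0
PER_STOCK = MONTHLY_INVESTMENT / 7  # $20 per stock per month

def portfolio_value(monthly_prices, current_prices):
    """monthly_prices: {ticker: [price on the 1st of each month]}
    current_prices: {ticker: latest price}"""
    total = 0.0
    for ticker, prices in monthly_prices.items():
        # Shares accumulated by buying a fixed $20 slice each month
        shares = sum(PER_STOCK / p for p in prices)
        total += shares * current_prices[ticker]
    return total

# Toy example: one stock, two months of hypothetical prices.
# $20/100 + $20/125 = 0.36 shares; 0.36 * $150 = $54.00
print(round(portfolio_value({"AAPL": [100.0, 125.0]}, {"AAPL": 150.0}), 2))
```

The same loop, extended to all seven tickers and twelve months, is all the arithmetic the task required.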

On this task, both models failed. o1 returned a list of stock prices for January 2024 and January 2025, along with a formula to calculate the portfolio value. However, it failed to compute the correct values and basically said that there would be no ROI. R1, on the other hand, made the mistake of only investing in January 2024 and calculating the returns for January 2025.

o1's reasoning trace doesn't provide enough information

However, what was interesting was the models' reasoning process. While o1 did not provide much detail on how it had reached its results, R1's reasoning trace showed that it did not have the right information because Perplexity's retrieval engine had failed to obtain the monthly data for stock prices (many retrieval-augmented generation applications fail not because the model lacks ability, but because of bad retrieval). This proved to be an important bit of feedback that led us to the next experiment.

The R1 reasoning trace shows that it is missing information

Reasoning over file content

We decided to run the same experiment as before, but instead of prompting the model to retrieve the information from the web, we provided it in a text file. For this, we copy-pasted monthly data for each stock from Yahoo! Finance into a text file and gave it to the model. The file contained the name of each stock plus the HTML table with the price for the first day of each month from January to December 2024 and the last recorded price. The data was left uncleaned, both to reduce the manual effort and to test whether the model could pick the right parts from the data.

Again, both models failed to provide the right answer. o1 appeared to have extracted the data from the file, but suggested the calculation be done manually in a tool like Excel. Its reasoning trace was very vague and did not contain any useful information for troubleshooting the model. R1 also failed to provide an answer, but its reasoning trace contained a lot of useful information.

For example, it was clear that the model had correctly parsed the HTML data for each stock and was able to extract the right information. It had also been able to do the month-by-month calculation of investments, sum them and calculate the final value according to the latest stock price in the table. However, that final value remained in its reasoning chain and did not make it into the final answer. The model had also been confounded by a row in the Nvidia table that marked the company's 10:1 stock split on June 10, 2024, and ended up miscalculating the final value of the portfolio.
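The stock-split row that tripped up R1 is a common hazard when feeding raw price-history tables to a model: event rows carry no closing price. A minimal sketch of the cleaning step that would have avoided it, using hypothetical rows shaped like Yahoo! Finance's history table:

```python
# Hypothetical rows in the (date, value) shape of a Yahoo! Finance
# history table. Event rows such as Nvidia's 10:1 split on June 10,
# 2024 hold text instead of a closing price and must be skipped.

rows = [
    ("Jun 3, 2024",  "115.00"),             # hypothetical price row
    ("Jun 10, 2024", "10:1 Stock Split"),   # event row, no price
    ("Jul 1, 2024",  "124.30"),             # hypothetical price row
]

def monthly_prices(rows):
    prices = []
    for _date, value in rows:
        try:
            prices.append(float(value))  # keep rows with numeric prices
        except ValueError:
            continue                     # skip split/dividend event rows
    return prices

print(monthly_prices(rows))  # [115.0, 124.3]
```

A human analyst filters such rows out by reflex; the model had to infer the same rule from context, and didn't.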

R1 hid the results in its reasoning trace, along with information about where it went wrong

Again, the real differentiator was not the result itself, but the ability to investigate how the model arrived at its response. In this case, R1 provided a better experience, allowing us to understand the model's limitations and how we can reformulate our prompt and format our data to get better results in the future.

Comparing data over the web

Another experiment we carried out required the model to compare the stats of four leading NBA centers and determine which one had the best improvement in field goal percentage (FG%) from the 2022/2023 to the 2023/2024 season. This task required the model to do multi-step reasoning over different data points. The catch in the prompt was that it included Victor Wembanyama, who only entered the league as a rookie in 2023.
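The comparison itself is a one-liner once the two seasons' figures are gathered; the hard part is the retrieval and the rookie edge case. A sketch with hypothetical players and FG% figures (not real stats):

```python
# Season-over-season FG% improvement, with hypothetical figures.
# {player: (2022/23 FG%, 2023/24 FG%)}; a rookie with no prior NBA
# season would have no first value and cannot enter the comparison.

fg_pct = {
    "Player A": (55.3, 61.1),  # hypothetical
    "Player B": (58.2, 59.0),  # hypothetical
}

def best_improvement(stats):
    # Pick the player whose FG% rose the most between the two seasons
    return max(stats, key=lambda p: stats[p][1] - stats[p][0])

print(best_improvement(fg_pct))  # Player A
```

Note that the function presumes both seasons exist for every player; handling a rookie requires exactly the kind of implicit qualification rule the models missed.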

The retrieval for this prompt was much easier, since player stats are widely reported on the web and are usually included in their Wikipedia and NBA profiles. Both models answered correctly (it's Giannis, in case you were curious), although depending on the sources they used, their figures differed a bit. However, neither realized that Wemby didn't qualify for the comparison, and both gathered other stats from his time in the European league.

In its answer, R1 provided a better breakdown of the results, with a comparison table and links to the sources it used. The added context enabled us to correct the prompt. When we modified the prompt to specify that we were looking for FG% from NBA seasons, the model correctly ruled out Wemby from the results.

Adding a simple phrase to the prompt made all the difference in the result.

This is something that a human would implicitly know. Be as specific as you can in your prompt, and try to include information that a human would implicitly assume.

Final verdict

Reasoning models are powerful tools, but they still have a ways to go before they can be fully trusted with tasks, especially as other components of large language model (LLM) applications continue to evolve. In our experiments, both o1 and R1 still made basic mistakes. Despite showing impressive results, they still need a bit of hand-holding to produce accurate answers.

Ideally, a reasoning model should be able to tell the user when it lacks the information needed for a task. Failing that, its reasoning trace should at least guide users to understand errors and correct their prompts, increasing the accuracy and stability of the model's responses. In this regard, R1 had the upper hand. Hopefully, future reasoning models, including OpenAI's upcoming o3 series, will provide users with more visibility and control.
