Researchers at Nvidia have developed a new approach that flips the script on how large language models (LLMs) learn to reason.
The method, called reinforcement learning pre-training (RLP), integrates RL into the initial training phase rather than saving it for the end.
This approach encourages the model to “think for itself before predicting what comes next, thus teaching an independent thinking behavior earlier in the pretraining,” the researchers state in their paper.
By learning to reason on plain text without the need for external verifiers, models trained with RLP show significant improvements in learning complex downstream reasoning tasks, hinting at a future of more capable and adaptable AI for real-world tasks.
The standard LLM training cycle
Typically, large language models are first pre-trained on vast amounts of text using a "next-token prediction" objective, where they are given a string of text and asked to repeatedly guess what the next word (or token) will be. In this phase, they learn grammar, facts, and basic associations.
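In code, that objective is just cross-entropy between the model's predicted distribution and the token that actually comes next. The snippet below is a generic illustration of this standard setup, not code from the Nvidia paper, and it assumes a hypothetical model that maps token IDs to next-token logits:

```python
# Generic sketch of the next-token prediction objective used in standard
# pre-training (illustrative only, not from the RLP paper).
import torch
import torch.nn.functional as F

def next_token_loss(model, token_ids):
    """Cross-entropy between the model's predictions and the actual next tokens."""
    logits = model(token_ids[:, :-1])         # predict token t+1 from tokens up to t
    targets = token_ids[:, 1:]                # the tokens that actually follow
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # flatten all (batch, position) pairs
        targets.reshape(-1),
    )
```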
In the later post-training phase, models usually learn complex reasoning abilities such as chain-of-thought (CoT), where a model lays out its reasoning step by step. This stage typically involves supervised fine-tuning (SFT) or reinforcement learning from human feedback (RLHF), which require specialized, curated datasets.
The paper’s authors argue this sequential process does not match human comprehension, which is “not a linear token-by-token process, but rather a parallel integration of input with prior knowledge.” Current pre-training methods lack this mechanism, hindering a model's ability to develop deep reasoning from the start.
How reinforcement learning pre-training works
RLP reframes this process by treating CoT generation as an action the model takes before predicting the next token. At each step, the model first generates an internal "thought" or reasoning chain. It then predicts the next word in the text, using the original context augmented with its new thought.
The model receives a reward based on how much its thought improved the accuracy of its prediction compared to a baseline that did not generate a thought (pure next-token prediction). This reward signal is calculated automatically based on the change in probability, eliminating the need for external verifiers or human-labeled data.
The reward is positive only when the generated thought helps the model better predict the next token. By rewarding thoughts based on their predictive benefit, RLP effectively teaches the model how to think usefully on the same massive, unstructured datasets used for standard pre-training.
This continuous feedback loop allows the model to learn when a simple predictive guess is sufficient and when it needs to engage in deeper reasoning. As the researchers put it, “RLP is designed to shape thinking in base models by rewarding only those thoughts that measurably help next-token prediction.”
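Conceptually, the reward is the gain in log-likelihood that the thought buys the model on the true next token. The sketch below illustrates that idea in simplified form; it is not the authors' implementation, and it assumes a hypothetical model that returns per-position logits and a thought that has already been sampled:

```python
# Simplified illustration of an RLP-style reward: how much does a sampled
# thought improve the model's prediction of the true next token?
# (Illustrative only, not the implementation from the Nvidia paper.)
import torch
import torch.nn.functional as F

def log_prob_of_next_token(model, input_ids, next_token_id):
    """Log-probability the model assigns to the true next token."""
    logits = model(input_ids)                           # shape: (1, seq_len, vocab_size)
    log_probs = F.log_softmax(logits[:, -1, :], dim=-1) # distribution over the next token
    return log_probs[0, next_token_id]

def rlp_style_reward(model, context_ids, thought_ids, next_token_id):
    """Reward = log p(next | context + thought) - log p(next | context)."""
    # Baseline: pure next-token prediction from the raw context.
    baseline_logp = log_prob_of_next_token(model, context_ids, next_token_id)

    # With thought: the same prediction from the context augmented by the thought.
    augmented_ids = torch.cat([context_ids, thought_ids], dim=-1)
    thought_logp = log_prob_of_next_token(model, augmented_ids, next_token_id)

    # Positive only when the thought made the true next token more likely.
    return thought_logp - baseline_logp
```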
This foundational approach, however, does not make later fine-tuning stages obsolete. According to Bryan Catanzaro, VP of applied deep learning research at Nvidia and a co-author of the paper, RLP is designed to complement, not replace, these crucial steps. "RLP isn’t meant to replace the later post-training stages like supervised fine-tuning or reinforcement learning from human feedback," Catanzaro told VentureBeat. "Those stages remain crucial for refining model behavior… It’s really designed to amplify the effectiveness of those later phases by giving the model a head start."
RLP in action
In experiments with Qwen3-1.7B and Nemotron-Nano-12B, Nvidia’s team tested RLP across a suite of math and science reasoning benchmarks. The results show that models enhanced with RLP consistently outperformed their conventionally trained counterparts, with particularly strong gains in reasoning-heavy tasks.
For an enterprise, this improved reasoning could translate to more reliable outputs in multi-step workflows like financial analysis or legal document summarization.
"RLP encourages the model during pretraining to think before it predicts, helping the model internalize a more coherent reasoning style," mentioned Catanzaro. "This might assist scale back delicate logical errors, particularly in longer workflows.”
While stressing that RLP-trained models will still need the standard guardrails such as verification layers, human oversight, and consistency checks, Catanzaro said that "RLP gives you a stronger baseline."
Importantly, the benefits of RLP compound instead of disappearing during subsequent fine-tuning stages (catastrophic forgetting is a common problem in LLM training, where later training stages cause the model to forget its previously learned skills and knowledge). The RLP-trained model achieved an overall score that was 7-8% higher than baselines after an identical post-training regimen. The researchers conclude that RLP “establishes robust reasoning foundations that are not washed out by downstream alignment but instead compound with post-training.”
The efficiency of the technique is a key finding. On the Qwen3-1.7B model, RLP improved performance by 17% over standard continuous pre-training and also beat a similar technique called Reinforcement Pretraining via prefix-matching rewards (RPT). This advantage held even when the baseline model was trained with 35 times more data to match the computational cost, confirming that the gains come from the method itself, not just from more processing.
Moreover, RLP demonstrates impressive scalability and flexibility, successfully extracting a reasoning signal from general-purpose web data, not just curated datasets. When applied to the hybrid Mamba-Transformer model Nemotron-Nano-12B, RLP achieved a 35% relative improvement over a heavily trained baseline while using only a tiny fraction of the data.
While these results point toward a more efficient path for building powerful models, Catanzaro frames the innovation as a fundamental shift in the learning process itself, rather than an immediate solution to high training costs.
"This research is exciting because it offers a shift in how models absorb information during pretraining leading to a smarter learning process," he defined. "It wouldn’t replace large-scale pretraining, but offer another creative method in building the best possible models."
A new foundation for AI training
Ultimately, RLP points toward a future where pre-training is no longer a monolithic process of next-token prediction. Instead, the next generation of models could be built on a hybrid of objectives, creating AI that learns to think more robustly from day one. Catanzaro offers a powerful analogy to frame this shift:
"Next-token prediction teaches a model what the world looks like; reinforcement-style objectives like RLP can teach it how to think about what it’s seeing," he mentioned. "The combination of these two objectives could help models develop deeper, more structured thinking much earlier in training… Tools like RLP can build on top of that foundation, making learning more active, curious, and even more efficient."
There is still a lot to learn about the dynamics of reinforcement learning in the pre-training phase, but what seems clear is that “introducing exploration earlier in training opens a new axis for scaling — not just in size, but in how models learn to reason,” Catanzaro said.

