
Hierarchical Reasoning Model (HRM) by Sapient delivers up to 100x faster AI reasoning with just 27M parameters—efficient, brain-inspired, and open-source.
Imagine an AI that thinks like a human, solving complex puzzles as fast as you might crack a tricky riddle, all while using a fraction of the data and energy of today’s massive models. That’s the promise of HRM, a groundbreaking innovation from Sapient Intelligence, a Singapore-based research group. Released as open-source code on GitHub in July 2025, this new reasoning model is shaking up the AI world with its brain-inspired approach. While its impressive results on benchmarks like ARC-AGI are self-reported and await independent verification or trained checkpoints, HRM’s potential is undeniable. Let’s dive into what makes HRM special, how it works, and why it could change the future of AI.
What Is the Hierarchical Reasoning Model?
HRM is a new kind of AI designed to think more like a human brain. Unlike large language models (LLMs) that rely on step-by-step “chain-of-thought” (CoT) prompting—a slow process where AI spells out every reasoning step—HRM splits its thinking into two parts. A “high-level” module handles big-picture planning, like sketching out a strategy, while a “low-level” module tackles quick, detailed calculations, like filling in the blanks. This dual setup, inspired by how our brains process information at different speeds, lets HRM solve problems up to 100 times faster than LLMs.
What’s wild is that HRM needs way less data to shine. While LLMs guzzle billions of examples to get good, HRM masters complex tasks—like solving expert-level Sudoku or navigating 30×30 mazes—with just 1,000 training examples. With only 27 million parameters (tiny compared to LLMs’ billions), HRM is lean, efficient, and built to run on everyday devices like phones or small robots.
Why Is This a Big Deal?
HRM’s speed, size, and efficiency make it a game-changer. Here’s why it stands out:
Blazing Speed: By processing tasks in parallel, HRM solves problems like logistics or math puzzles in a flash. Imagine a delivery app rerouting drivers in seconds or a scientist analyzing data instantly.
Tiny but Powerful: With just 27 million parameters, HRM punches above its weight, outperforming giants like Claude 3.7 (with billions of parameters) on the ARC-AGI benchmark, scoring 40.3% compared to Claude’s 21.2%. It’s like a lightweight boxer knocking out a heavyweight.
Energy Saver: Big AI models eat up electricity, driving up costs and environmental concerns. HRM’s compact design uses far less power, making it greener and cheaper for businesses and researchers.
Data Efficiency: Needing only 1,000 examples to excel means HRM can tackle problems where data is scarce, like rare disease diagnostics or niche scientific research.
On the ARC-AGI benchmark—a test of abstract reasoning where AI must solve novel puzzles from just a few examples—HRM reportedly hit 40.3% accuracy on ARC-AGI-1, blowing past models like OpenAI’s o3-mini-high (34.5%) and DeepSeek R1. It also aced tasks like Sudoku-Extreme and Maze-Hard, where CoT-based models scored a flat 0%. These results are self-reported, as Sapient hasn’t yet submitted to the official ARC-AGI leaderboard or released trained checkpoints for others to verify, but the open-source code on GitHub lets researchers explore its potential.
How Does HRM Work? A Deeper Look
HRM’s magic lies in its brain-like architecture, rooted in three principles: hierarchical processing, temporal separation, and recurrent connectivity. Let’s break it down:
Hierarchical Processing: HRM has two modules that work together like a CEO and a team of workers. The high-level module (H-module) thinks strategically, forming abstract plans—like deciding the steps to solve a maze. The low-level module (L-module) handles fast, detailed computations, like calculating the exact path. This split lets HRM tackle complex tasks without getting bogged down.
Temporal Separation: The H-module works slowly, focusing on long-term reasoning, while the L-module operates quickly, handling immediate calculations. This mimics how our brains switch between deep thought and quick reactions, making the model efficient and stable.
Recurrent Connectivity: HRM uses recurrent neural networks, meaning it loops information back into itself, allowing it to refine its thinking over time. This helps it solve tasks that require backtracking, like Sudoku, where you might need to try multiple solutions.
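The interplay of these three principles can be sketched in plain NumPy. This is an illustrative toy, not Sapient’s actual architecture: the hidden size, random weights, cycle counts, and tanh update rule are all invented for clarity—only the nested slow/fast recurrent structure mirrors the idea described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(state, inp, W):
    # One recurrent update: mix the current state with its input and squash.
    return np.tanh(W @ np.concatenate([state, inp]))

d = 8                                   # hidden size (illustrative)
W_h = rng.normal(0, 0.3, (d, 2 * d))    # high-level (H-module) weights
W_l = rng.normal(0, 0.3, (d, 2 * d))    # low-level (L-module) weights
x = rng.normal(size=d)                  # stand-in for an encoded puzzle input

z_h = np.zeros(d)                       # slow state: the abstract plan
z_l = np.zeros(d)                       # fast state: detailed computation

for cycle in range(4):                  # a few slow "planning" cycles
    for t in range(8):                  # many fast steps per cycle
        # L-module refines details under the current plan and input.
        z_l = step(z_l, z_h + x, W_l)
    # H-module updates once per cycle, using the L-module's result.
    z_h = step(z_h, z_l, W_h)

print(z_h.shape)                        # final plan state feeds an output head
```

The key design point is the timescale split: the inner loop (L-module) runs many cheap steps between each single update of the outer loop (H-module), so detailed computation and abstract planning evolve at different speeds within one forward pass.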
Technically, HRM avoids common AI pitfalls like “vanishing gradients” (where learning stalls) and “early convergence” (where models stop improving too soon). It uses a technique called Adaptive Computation Time (ACT), paired with deep Q-learning, to dynamically decide how much computation each task needs. This is stabilized by bounded network parameters, weight decay, and post-normalization layers, ensuring training stays smooth without the instability often seen in Q-learning.
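The halt-or-continue control flow behind ACT can be caricatured in a few lines. This is a hedged sketch under stated assumptions: the `q_head` below is a dummy scoring rule standing in for the small network that HRM trains with deep Q-learning, and the segment update is a placeholder—only the loop structure (run a reasoning segment, score halting vs. continuing, stop when halting wins or the budget runs out) reflects the mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

def q_head(state):
    # Stand-in for a learned Q-head scoring (halt, continue).
    # In the real model these values come from a trained layer.
    return float(state.mean()), float(state.std())

state = rng.normal(size=16)   # stand-in for the model's reasoning state
max_segments = 8              # upper bound on compute per input
segments_used = 0

for seg in range(1, max_segments + 1):
    # One "reasoning segment": a placeholder state update.
    state = np.tanh(state + 0.1 * rng.normal(size=16))
    segments_used = seg
    q_halt, q_continue = q_head(state)
    if q_halt >= q_continue:  # stop early when halting scores higher
        break

print(segments_used)
```

Easy inputs would trigger an early halt while hard ones consume the full budget, which is how ACT lets a single model spend variable computation per task.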
For example, on ARC-AGI tasks, HRM processes 30×30 grids (900 tokens) and generalizes rules from just 2–3 input-output pairs. In Sudoku-Extreme, it achieves near-perfect accuracy by systematically exploring solutions, unlike LLMs that struggle with the task’s long-term planning demands. The open-source code, available at github.com/sapientinc/HRM, includes scripts like build_sudoku_dataset.py and pretrain.py for training on datasets like Sudoku-Extreme (1,000 examples) or ARC-AGI-2 (1,120 examples), though trained checkpoints aren’t yet public.
What Could This Mean for You?
HRM’s breakthroughs could ripple across industries:
Healthcare: Its ability to reason with limited data could help doctors diagnose rare diseases, where patient data is sparse but accuracy is critical. Sapient is already partnering with medical institutions to explore this.
Climate Forecasting: Sapient reports 97% accuracy on subseasonal-to-seasonal weather predictions, which could improve disaster preparedness and save resources.
Robotics: Its lightweight design makes it ideal as an on-device “decision brain” for robots, enabling real-time actions in dynamic environments like warehouses or homes.
Business: Companies could use HRM to optimize supply chains or predict market trends without needing massive datasets or supercomputers.
Are There Any Downsides?
HRM isn’t perfect. Its “black box” nature—reasoning internally rather than spelling out steps—makes it harder to understand how it reaches answers. This could be a problem in fields like medicine or law, where transparency is key. Also, HRM excels at logical, structured tasks like puzzles but isn’t built for creative jobs like writing stories or generating art, where LLMs still dominate. Some experts note that the “100x faster” claim might be task-specific, and the self-reported 40.3% ARC-AGI score needs independent confirmation, as no official leaderboard entry or trained checkpoints are available yet.
There’s also a debate about data usage. Some Reddit users argue the model’s training on 1,000 task-specific examples might be a form of “brute force” learning, though its ability to generalize to new tasks suggests otherwise. Sapient’s team is working on stronger ARC-AGI scores, so more clarity may come soon.
Why This Matters for the Future
HRM shows that AI doesn’t need to be big to be brilliant. By mimicking the human brain, it offers a path to faster, greener, and more accessible AI that can solve real-world problems without breaking the bank. Its open-source release on GitHub invites researchers to build on it, though the lack of trained checkpoints means you’ll need to train it yourself to test its claims. As industries demand AI that works with limited data and resources, HRM could lead the way, especially in areas like healthcare, climate, and robotics.
This breakthrough also sparks a bigger question: are we closer to artificial general intelligence (AGI), where machines match or surpass human reasoning? Sapient’s CEO, Guan Wang, believes HRM’s brain-inspired design is a step toward AGI, as it “thinks and reasons like a person, not just crunches probabilities.” While it’s not there yet, HRM’s ability to tackle novel tasks with minimal data is a promising sign.
The Hierarchical Reasoning Model is a bold leap toward smarter, more efficient AI. With just 27 million parameters and 1,000 training examples, it outshines massive models on complex reasoning tasks, from ARC-AGI puzzles to expert-level Sudoku. Its brain-inspired architecture, combining hierarchical processing, temporal separation, and recurrent connectivity, offers a fresh approach to AI that’s fast, lean, and powerful. While its self-reported results await independent verification and trained checkpoints, the open-source code on GitHub invites the world to explore its potential. As Sapient Intelligence pushes for even better scores, HRM could redefine how we build AI, making it more human-like and ready to tackle the world’s toughest challenges.