In a bold step that could reshape the future of artificial intelligence, Meta Platforms Inc.—the parent company of Facebook, Instagram, and WhatsApp—is preparing to launch a dedicated research lab in 2025 focused on developing artificial superintelligence (ASI). This new unit, backed by CEO Mark Zuckerberg himself, reflects Meta’s intent to move beyond artificial general intelligence and enter territory where machines could exceed human intelligence in virtually every domain.
With this initiative, Meta is not just joining the AI race—it’s attempting to redefine it.
A Lab Designed for the Next Leap in Intelligence
Internally referred to as the “superintelligence group,” the upcoming lab will include around 50 handpicked researchers and engineers. Unlike conventional AI labs aiming for AGI (AI that can reason at a human level), Meta has set its sights even higher.
The lab’s core focus will be to develop systems that push past the limitations of current models and explore capabilities that have, until now, only existed in theory or science fiction.
This renewed focus comes at a time when the AI space is more competitive than ever, and Zuckerberg’s direct oversight highlights how seriously Meta is treating the opportunity—and the threat—posed by rivals like OpenAI, Google, and Anthropic.
Scale AI: The Critical Partner in Meta’s Strategy
To support this vision, Meta is reportedly in advanced talks to invest upwards of $10 billion in Scale AI, a company known for its work in building high-quality training datasets for advanced models. Founded by Alexandr Wang, Scale AI has emerged as a key player in the AI supply chain—quietly powering many of the breakthroughs behind the scenes.
Wang is expected to take on a leadership role within Meta’s AI efforts, offering both technical depth and operational scale. For Meta, this partnership provides access to a vital pipeline: the kind of curated, high-volume data necessary to train systems that might one day rival or surpass human capabilities.
The Promise and the Peril of Superintelligent AI
If successful, Meta’s superintelligence lab could unlock major breakthroughs across industries. Potential applications include accelerating drug discovery, optimizing global energy systems, and solving previously intractable problems in climate science or materials engineering.
But the pursuit of superintelligence comes with serious questions—especially around safety, ethics, and control.
Some experts, including noted computer scientist Geoffrey Hinton, have warned that superintelligent systems could become unpredictable and even dangerous if not properly managed. Estimates of how likely and how imminent such risks are vary widely, but concerns about existential risk are being taken more seriously within the research community.
Inside Meta, views are mixed. While Chief AI Scientist Yann LeCun remains optimistic that systems can be designed with sufficient control and alignment, the company acknowledges that this is new ground. Responsible development and ongoing research into AI safety frameworks will be essential.
A New Chapter in the Global AI Competition
Meta’s decision to build a dedicated ASI lab puts it in direct competition with the world’s most advanced AI organizations:
- OpenAI, supported by Microsoft, is advancing conversational AI and AGI through its GPT model family.
- Google DeepMind is evolving its Gemini platform with increasingly complex multi-modal capabilities.
- Anthropic, founded by former OpenAI researchers, is focused on building AI that is steerable and aligned with human intent.
What sets Meta apart is its access to social data, immersive hardware platforms like Quest and Ray-Ban smart glasses, and its increasing investment in infrastructure. If its Scale AI partnership materializes, Meta could secure a significant advantage in the one area that many say will determine the outcome of the AI race: training data.
Can Meta Earn Public Trust in the ASI Era?
While the technical ambitions are bold, Meta will need to be equally serious about trust and governance. Past scrutiny over privacy and data practices means the company will likely face a higher burden of proof than some of its peers.
Public acceptance of superintelligent AI will depend on transparent communication, international collaboration, and clearly defined safety measures. The lab will need to address concerns not just about what the technology can do—but who it serves, and how it is governed.
Looking Ahead
Meta’s superintelligence lab isn’t just another research group—it’s a strategic shift in how the company views its role in shaping the future of technology. With significant resources, partnerships, and talent behind it, the lab could position Meta at the forefront of AI’s next phase.
But this is still uncharted territory. Whether Meta succeeds in building something beyond AGI or instead finds itself navigating complex ethical and regulatory terrain, the company’s move will likely influence how the world approaches artificial superintelligence in the years to come.