
Grok 3 Completes Pretraining with 10X More Compute than Grok 2: xAI's Most Ambitious Project Yet


Elon Musk revealed that Grok 3’s pretraining phase has concluded, using computing resources that dwarf those of its predecessor by an order of magnitude. As Musk put it: “And Grok 3 is coming soon. Pretraining is now complete with 10X more compute than Grok 2.”

The scale of Grok 3’s training infrastructure marks a watershed moment in AI development. While Grok 2 leveraged 16k H100 GPUs from Oracle’s cloud services, Grok 3’s training potentially required up to 160k H100 GPUs – a staggering increase that raises questions about both the model’s capabilities and the sustainability of such intensive computing requirements.
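
The 160k figure follows directly from the reported numbers. A minimal back-of-the-envelope sketch, assuming (hypothetically) that the 10X compute increase comes entirely from GPU count rather than longer training runs or higher per-GPU utilization:

```python
# Back-of-the-envelope estimate of the implied Grok 3 GPU count.
# Assumption (not confirmed by xAI): same GPU type, utilization,
# and wall-clock training duration as Grok 2, so 10X compute maps
# one-for-one to 10X GPUs.

grok2_gpus = 16_000          # reported H100 count used for Grok 2 (Oracle cloud)
compute_multiplier = 10      # "10X more compute than Grok 2"

implied_grok3_gpus = grok2_gpus * compute_multiplier
print(f"Implied Grok 3 GPU count: {implied_grok3_gpus:,}")  # 160,000
```

In practice some of the extra compute could also come from longer training or efficiency gains, so 160k GPUs is an upper-bound reading of the claim rather than a confirmed cluster size.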

xAI’s previously announced 100k-GPU cluster, constructed in just 122 days (see “Inside the World’s Largest AI Computing Facility | 100k NVIDIA GPUs”), served as the backbone for this massive training operation. The infrastructure feat suggests careful planning and execution, though the sheer scale of the operation inevitably presented unique technical hurdles.

Despite the completed pretraining milestone, the full release timeline remains unclear. Initially targeted for late 2024, Grok 3’s debut appears to have shifted, though Elon’s announcement signals significant progress in the development pipeline.

The unprecedented scale of Grok 3’s training resources sets new benchmarks for the AI industry. No previous AI model has demanded such extensive computational power, raising questions about the future direction of large language model development and the resources required to compete at the highest level.
