Revolutionising AI Model Training Time
Cerebras Systems and G42, a UAE-based technology holding group, have jointly launched Condor Galaxy, an innovative network of nine interconnected supercomputers. This cutting-edge initiative aims to revolutionise AI computing by significantly reducing AI model training time.
A Leap Towards Advancements in AI
The first supercomputer, Condor Galaxy 1 (CG-1), delivers a staggering 4 exaFLOPs of AI compute across 54 million cores. The partners plan to deploy two more AI supercomputers, CG-2 and CG-3, in the United States by early 2024, bringing the combined capacity to 12 exaFLOPs. The interconnected network is set to drive groundbreaking advancements in AI on a global scale.
Addressing Societal Challenges
Condor Galaxy’s purpose goes beyond raw AI capability. G42 and Cerebras envision utilising the supercomputing network to tackle pressing challenges in healthcare, energy, climate action, and more.
Simplified Access to High-Performance AI Compute
Located in Santa Clara, California, CG-1 is offered as a cloud service, giving customers easy access to high-performance AI compute without the complexity of managing physical systems or distributing models across them.
Streamlined AI Model Training
CG-1 is optimised for large language models and generative AI, streamlining the training process with native support for long sequence lengths and no need for complex distributed programming frameworks.
Condor Galaxy’s Ambitious Expansion
The ambitious expansion plan includes two more AI supercomputers, CG-2 and CG-3, forming a distributed AI supercomputer with a total compute power of 12 exaFLOPs and 162 million cores. The project aims to add six more Condor Galaxy supercomputers in 2024, reaching an astounding total compute power of 36 exaFLOPs.
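The headline totals follow from straightforward multiplication of CG-1's published per-system figures, assuming each subsequent Condor Galaxy system matches CG-1's specification. A quick sanity check:

```python
# Back-of-the-envelope check of the scale figures quoted above.
# Per-system figures are CG-1's published specs; treating all nine
# systems as identical is an assumption for illustration.
per_system_exaflops = 4
per_system_cores = 54_000_000

# First phase: CG-1 through CG-3 as a distributed AI supercomputer
phase_one_exaflops = 3 * per_system_exaflops   # 12 exaFLOPs
phase_one_cores = 3 * per_system_cores         # 162 million cores

# Full network: all nine Condor Galaxy systems
full_network_exaflops = 9 * per_system_exaflops  # 36 exaFLOPs

print(phase_one_exaflops, phase_one_cores, full_network_exaflops)
```

The arithmetic matches the article's figures: 12 exaFLOPs and 162 million cores for the first three systems, and 36 exaFLOPs for the full nine-system network.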
Inspired by the Vastness of Space
The name “Condor Galaxy” draws inspiration from NGC 6872, a galaxy renowned for its immense size and located some 212 million light-years from Earth. The name symbolises the supercomputing network’s vast potential in AI computing.