OpenAI to build gigawatt Stargate data center in the UAE



OpenAI today announced plans to build a large artificial intelligence data center in the United Arab Emirates. 

The facility is expected to consume 1 gigawatt of power at full capacity. One gigawatt corresponds to the electricity usage of about 700,000 homes. According to Reuters, the data center is expected to house about 100,000 graphics processing units.
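As a quick sanity check on that comparison, the implied average draw per home can be computed directly (the ~1.4 kW figure below is an implication of the article's numbers, not a stated fact):

```python
# The 1 GW ~ 700,000 homes comparison implies an average draw per home of:
facility_watts = 1_000_000_000   # 1 gigawatt at full capacity
homes = 700_000
watts_per_home = facility_watts / homes
print(f"{watts_per_home / 1000:.2f} kW per home")  # about 1.43 kW average draw
```

That works out to roughly 1.4 kW of continuous draw per household, in line with typical U.S. residential averages.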

OpenAI is building the facility as part of a program called OpenAI for Countries that it debuted earlier this month. The initiative is modeled after Stargate, the ChatGPT developer's push to establish a network of AI data centers in the U.S. at a cost of up to $500 billion. OpenAI will help countries that join the program build their own AI infrastructure.

The ChatGPT developer is partnering with Nvidia Corp., Oracle Corp., Cisco Systems Inc., SoftBank Group Corp. and Emirati AI company G42 to build the upcoming UAE data center. The facility will be located on a 10-square-mile site in Abu Dhabi. The campus is expected to have 5 gigawatts' worth of data center capacity once it becomes fully operational.

When OpenAI’s facility comes online in 2026, it will reportedly have an initial capacity of 200 megawatts, or one-fifth of a gigawatt. The facility is expected to be equipped with Nvidia’s GB300 appliances, its most advanced AI systems for data centers.

The GB300 is based on the Blackwell Ultra B300, an upgraded version of Nvidia's Blackwell B200 chip. The company says the Blackwell Ultra can perform some inference tasks 50% faster than the original B200. It also includes significantly more onboard memory. The larger memory pool allows AI models to keep more of their data on the chip, which reduces the need to access slower off-chip RAM and thereby boosts performance.

The original Blackwell B200 comprises two compute dies linked by a high-speed interconnect. According to Nvidia, Blackwell chips use a method called micro-tensor scaling to compress relatively large units of data into four-bit values. Shrinking datasets reduces the amount of time required to process them, which speeds up calculations.
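The general idea behind block-scaled low-precision formats can be illustrated with a toy example. The sketch below is illustrative only, not Nvidia's actual micro-tensor scaling implementation or FP4 format: each block of values shares a single scale factor, so the values themselves can be stored as 4-bit integers.

```python
# Toy illustration of block-scaled 4-bit quantization -- the general idea
# behind formats like Nvidia's micro-tensor scaling. This is a sketch,
# NOT Nvidia's actual FP4 encoding.

def quantize_block(values, levels=7):
    """Map a block of floats to signed 4-bit integers plus one shared scale."""
    scale = max(abs(v) for v in values) / levels or 1.0
    q = [max(-8, min(7, round(v / scale))) for v in values]  # fits in 4 bits
    return q, scale

def dequantize_block(q, scale):
    """Recover approximate floats from the 4-bit codes and the block scale."""
    return [v * scale for v in q]

block = [0.12, -0.9, 0.45, 0.3, -0.05, 0.7, -0.33, 0.8]
q, scale = quantize_block(block)
approx = dequantize_block(q, scale)
# Each value now occupies 4 bits, with one float scale stored per block,
# cutting memory and bandwidth per value at the cost of rounding error.
```

Because only one scale factor is stored per block, storage and memory bandwidth per value shrink substantially compared with 16- or 32-bit floats, which is the effect the compression scheme exploits.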

The GB300 system that OpenAI's UAE data center will use combines 72 Blackwell Ultra B300 chips with 36 of Nvidia's Grace central processing units. Those CPUs collectively have 2,592 cores. The GPUs and CPUs are linked by an Nvidia technology called NVLink-C2C that can shuttle data between chips at a rate of 900 gigabytes per second.
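The core count follows from the Grace CPU's per-chip specification (the 72-cores-per-CPU figure comes from Nvidia's Grace documentation, not from the article itself):

```python
# 36 Grace CPUs x 72 Arm cores each (per Nvidia's published Grace specs)
grace_cpus = 36
cores_per_cpu = 72
total_cores = grace_cpus * cores_per_cpu
print(total_cores)  # 2592
```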

OpenAI hopes to launch similar projects in other markets. The company disclosed today that Chief Strategy Officer Jason Kwon will launch a roadshow in the Asia Pacific region to promote the OpenAI for Countries program. The ChatGPT developer previously stated that its initial goal for the initiative is to launch AI projects in 10 countries or regions.

Image: OpenAI
