Elon Musk bought thousands of GPUs for a new artificial intelligence project at Twitter
A recent report revealed that the company has procured around 10,000 GPUs and recruited AI talent from DeepMind for a project involving a large language model (LLM).
As Business Insider points out, this project is still in its early stages, but the acquisition of a significant amount of computing power suggests Musk’s commitment to moving forward with it. Although the exact purpose of the generative AI project is unclear, speculated applications include improving search functionality and generating targeted advertising content.
The exact hardware Twitter acquired has not been specified. The company is known to have spent tens of millions of dollars on these compute GPUs, despite its ongoing financial woes. The processing units are expected to be deployed in one of Twitter’s two remaining data centers, with Atlanta being the most likely destination. Musk shut down Twitter’s main data center in Sacramento at the end of December, significantly reducing the company’s computing capacity.
Twitter is also hiring additional engineers for the generative AI project. Earlier this year, the company hired Igor Babuschkin and Manuel Kroiss, AI research engineers from DeepMind, a subsidiary of Alphabet. Musk has been actively seeking talent in the AI industry to compete with OpenAI’s ChatGPT since at least February.
OpenAI, now a benchmark among AI companies, used Nvidia’s A100 GPUs to train its ChatGPT bot and continues to use those machines to run it. Nvidia has since released the A100’s successor, the H100 compute GPU, which is several times faster at roughly the same power consumption. Twitter is likely using Nvidia’s Hopper H100 or similar hardware for its AI project, though this is purely speculation.
Given that the company has yet to determine what its AI project will be used for, it is hard to estimate how many Hopper GPUs it may need. However, large companies that buy hardware by the thousands of units typically negotiate special pricing. Meanwhile, when purchased individually from retailers like CDW, Nvidia’s H100 boards can cost upwards of $10,000 per unit, which gives an idea of how much the company could have spent on hardware for its AI initiative.
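A rough back-of-envelope calculation puts these numbers in perspective. The sketch below uses only the figures mentioned in this article (roughly 10,000 GPUs and about $10,000 per H100 board at retail); the bulk-discount percentages are purely hypothetical assumptions for illustration, not reported figures.

```python
# Back-of-envelope estimate of Twitter's possible GPU spend.
# Figures from the article: ~10,000 GPUs, ~$10,000 per H100 at retail.
# The discount range below is a hypothetical assumption, not a reported number.

retail_price = 10_000   # USD per H100 board at retail (CDW-style pricing)
gpu_count = 10_000      # GPUs reportedly procured

retail_total = retail_price * gpu_count
print(f"Retail total: ${retail_total:,}")  # → Retail total: $100,000,000

# Large buyers negotiate bulk pricing; assume a 20-50% discount band.
for discount in (0.20, 0.50):
    discounted = retail_total * (1 - discount)
    print(f"At {discount:.0%} bulk discount: ${discounted:,.0f}")
```

At retail the bill would approach $100 million, while an assumed bulk discount brings it down into the tens of millions, which is consistent with the spending reported above.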