We live in tremendously interesting times. It has been a long time since we witnessed the emergence of a technology as promising, and at the same time as worrying, as artificial intelligence. Bill Gates, to cite just one example, speaks of the beginning of a “new era” and even points to a revolution that will affect a wide range of industries. Elon Musk, for his part, calls for a pause in the development of the most powerful systems.
In recent months we have seen NVIDIA become a key player in the world of artificial intelligence. The US manufacturer’s chips, which offer great performance, computational density and scalability, were used to train famous models such as DALL-E and GPT-4. But NVIDIA is not the only company in the sector. Google also has its own chips, and claims they are better than NVIDIA’s.
Google also makes hardware for artificial intelligence
The Mountain View giant introduced its own processing units designed for artificial intelligence data centers in 2016. We are talking about the Google TPUs (Cloud Tensor Processing Units), which have been improving over the years, are already used in 90% of the company’s machine learning workloads, and are crucial to the operation of the search engine and YouTube.
Also, like Microsoft’s Azure (which is powered by NVIDIA hardware), Google Cloud allows outside companies and organizations to train their AI models using its cloud infrastructure. In other words, instead of setting up their own data centers with highly expensive and hard-to-find hardware, they rent the necessary computing power from these companies.
In fact, as explained in an official publication, the popular text-to-image generator Midjourney, whose most recent version has surprised us with its precision and realism, was trained using the Google Cloud infrastructure. In other words, the chips from the company led by Sundar Pichai have played a leading role in one of the most famous AI models of the moment.
So are Google’s chips really any good? According to a scientific article published this Wednesday by the company, the Google TPUv4, launched in 2021, is up to 1.7 times faster and 1.9 times more energy-efficient than the NVIDIA A100, launched in 2020. According to the authors of the article, the comparisons correspond to training tasks on AI models of the same size.
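As a rough sketch of what those ratios would imply in practice: the baseline A100 figures below are invented purely for illustration; only the 1.7× speed and 1.9× energy-efficiency multipliers come from the paper Google cites.

```python
# Illustrative only: the A100 baseline numbers are made up for this example.
# Only the 1.7x speedup and 1.9x efficiency ratios come from Google's paper.

def tpu_v4_estimates(a100_train_hours, a100_energy_kwh,
                     speedup=1.7, efficiency=1.9):
    """Estimate (hours, kWh) for the same training job on TPUv4,
    given hypothetical A100 figures and the reported ratios."""
    return a100_train_hours / speedup, a100_energy_kwh / efficiency

# Hypothetical job: 1,000 GPU-hours and 50,000 kWh on A100 hardware.
hours, kwh = tpu_v4_estimates(a100_train_hours=1000, a100_energy_kwh=50000)
print(f"~{hours:.0f} h, ~{kwh:.0f} kWh")  # → ~588 h, ~26316 kWh
```

Under those (invented) baseline figures, the reported ratios would cut training time by roughly 40% and energy use by nearly half, which is why such comparisons matter so much to cloud operators.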
It should be noted that while most data centers today are powered by NVIDIA A100 chips, companies are already migrating to the NVIDIA H100, which improves on its predecessor’s performance. The Mountain View company has not said whether it is working on a new version of its TPUs, although presumably it is doing so in order not to be left behind in the artificial intelligence race.