Google asked its employees to try Bard and some destroyed it, according to Bloomberg


Do you remember what you did on November 30, 2022? Perhaps it was a day like any other, with nothing extraordinary worth remembering months later. Google can’t say the same. That day, without warning, OpenAI released something called ChatGPT.

Some business moves can set off earthquakes in the technology industry, and their aftershocks can set alarm bells ringing among competitors. We say “can” because it doesn’t always happen (just recall the slow, reactive response of then mobile phone leaders Nokia and BlackBerry to the arrival of the iPhone in 2007).

More than a “code red” at Google

The launch of the conversational chatbot from the company led by Sam Altman has sparked more than just a “code red” at Google. The Mountain View company, which currently leads the search market, finds itself for the first time in years facing a substantial threat that has forced it to take drastic measures. 

One of these measures has been the initial launch of Bard (for now available only in a few countries, by invitation), a direct competitor to ChatGPT and to its improved, internet-connected incarnation, Bing Chat. The problem? Many Google employees believe the launch was rushed.

According to internal information seen by Bloomberg, Sundar Pichai’s employees were asked to test the artificial intelligence chatbot, powered by LaMDA (Language Model for Dialogue Applications), prior to its deployment. Many of the comments were damning, but the company went ahead with its plan anyway.

Negative messages between Google teams have not gone unnoticed. “Bard is worse than useless: please don’t release it,” one employee wrote in February of this year after evaluating the tool. “It is a pathological liar,” noted another, referring to its tendency to invent information and give inaccurate answers.

The atmosphere of tension and unease among certain company employees grew after a series of management decisions that followed the launch of ChatGPT, according to the US outlet. For years, Google had been working on strengthening the ethics team behind its artificial intelligence initiatives.

The mission of this group of experts was to help develop products and services aligned with the company’s principles and with high safety standards. Objectives of this kind required not only substantial resources, which Google agreed to provide in 2021, but also time. The latter became a problem after OpenAI’s move.

Although the company founded by Larry Page and Sergey Brin is no newcomer to artificial intelligence, many of its most ambitious projects never left the laboratory. In 2022 the pace of development accelerated considerably, pushing certain ethical questions into the background.

Google’s internal processes establish that before a product reaches the market it must achieve a high score in certain categories. In Bard’s case this changed: “child safety”, for example, still requires a score of 100, but “fairness” can now be cleared for release with 80 or 85 points.

All of the above, as noted, comes from internal information seen by Bloomberg. Publicly, Google’s chief executive has been optimistic about the possibilities of the search chatbot, although he has acknowledged that all models have problems with hallucinations and has insisted that safety is a priority.
