Google is in damage-control mode after its new AI chatbot, Bard, generated alarmingly inaccurate and bizarre responses that raised concerns about the technology's reliability. The tech giant has confirmed it is "taking swift action" to manually remove problematic outputs from the conversational AI as it scrambles to address the issue.
"We are aware of some of the mistaken responses surfacing from Bard that do not meet our high expectations," a Google spokesperson said in a statement. "We're actively working to identify ways to learn from these mistakes and improve."
The trouble started when tech writer Dmitri Batshurin published examples of inaccurate responses from Bard about the James Webb Space Telescope. In one egregious instance highlighted by Batshurin, the AI system claimed the telescope was used to take "the very first pictures of a planet outside our own solar system." That claim is false: astronomers captured the first direct image of an exoplanet in 2004 using the European Southern Observatory's Very Large Telescope, nearly two decades before Webb launched.
"It seems like Bard is making up information in areas it isn't knowledgeable about," Batshurin wrote, adding that the responses were "concerning."
Google has not provided details on the specific fixes or processes being used to scrub improper outputs from Bard. However, Jean-Baptiste Su, VP and principal engineer at Google's AI research division, tweeted that such responses are "not acceptable" and that Bard's training data and techniques are being analyzed.
The incident highlights the challenges tech companies face in policing the outputs of large language models like Bard, which are trained on vast datasets and can generate human-sounding passages on nearly any topic, including fabricated information presented as fact.
"While remarkable, AI systems making up 'facts' is one of the biggest dangers I worry about," said Vinay Prabhu, chief scientist at Anthropic, in a tweet thread analyzing the Bard errors. "This underscores how much work is needed on truthfulness, factuality, uncertainty estimation."
The embarrassing missteps come at a critical juncture as Google races to launch Bard and remain competitive against OpenAI's wildly popular ChatGPT and Microsoft's AI advancements. Google touts Bard as capable of "enriching" online experiences by delivering "fresh, high-quality responses" derived from its web knowledge.
Yet the Bard blunders raise fresh credibility concerns about AI's role in search just as Google moves to integrate the technology into its core product. According to a recent Pew Research Center survey, only 24% of U.S. adults trust search engines like Google "most of the time" to provide accurate information.
Google will likely rethink its approach after Bard's shaky debut. As AI pioneer Geoffrey Hinton warned, "It's hard to make these systems avoid just making things up based on plausible correlations in the training data."