Google CEO Sundar Pichai | Getty Images
Google executives know that the company's artificial intelligence search tool Bard doesn't always respond to queries accurately. At least some of the onus is falling on employees to fix the wrong answers.
Prabhakar Raghavan, Google's vice president for search, asked staffers in an email on Wednesday to help the company make sure its new ChatGPT competitor gets answers right. The email, which CNBC viewed, included a link to a do's and don'ts page with instructions on how employees should fix responses as they test Bard internally.
Staffers are encouraged to rewrite answers on topics they understand well.
“Bard learns best by example, so taking the time to rewrite a response thoughtfully will go a long way in helping us to improve the mode,” the doc says.
Also on Wednesday, as CNBC reported earlier, CEO Sundar Pichai asked employees to spend two to four hours of their time on Bard, acknowledging that “this will be a long journey for everyone, across the field.”
Raghavan echoed that sentiment.
“This is exciting technology but still in its early days,” Raghavan wrote. “We feel a great responsibility to get it right, and your participation in the dogfood will help accelerate the model’s training and test its load capacity (Not to mention, trying out Bard is actually quite fun!).”
Google unveiled its conversation technology last week, but a series of missteps around the announcement pushed the stock price down nearly 9%. Employees criticized Pichai for the mishaps, describing the rollout internally as “rushed,” “botched” and “comically short sighted.”
To try to clean up the AI's mistakes, company leaders are leaning on the knowledge of humans. At the top of the do's and don'ts section, Google provides guidance for what to consider “before teaching Bard.”
Under do’s, Google instructs employees to keep responses “polite, casual and approachable.” It also says they should be “in first person,” and maintain an “unopinionated, neutral tone.”
For don'ts, employees are told not to stereotype and to “avoid making presumptions based on race, nationality, gender, age, religion, sexual orientation, political ideology, location, or similar categories.”
Also, “don’t describe Bard as a person, imply emotion, or claim to have human-like experiences,” the document says.
Google then says to “keep it safe,” and instructs employees to give a “thumbs down” to answers that offer “legal, medical, financial advice” or are hateful and abusive.
“Don’t try to re-write it; our team will take it from there,” the document says.
To incentivize people in his organization to test Bard and provide feedback, Raghavan said contributors will earn a “Moma badge,” which appears on internal employee profiles. He said Google will invite the top 10 rewrite contributors from the Knowledge and Information organization, which Raghavan oversees, to a listening session. There, they can “share their feedback live” with Raghavan and the people working on Bard.
“A wholehearted thank you to the teams working hard on this behind the scenes,” Raghavan wrote.
Google didn't immediately respond to a request for comment.
Source: www.cnbc.com