Google unveiled Bard, an AI-powered chatbot, in an effort to compete with the popular AI chatbot ChatGPT. However, a few days after its release, Bard came under fire for inappropriate responses and factual errors. Google is now drawing on human expertise to improve the chatbot's responses and has instructed its staff to correct the bot's errors.
According to a report by CNBC, Google's vice president of search, Prabhakar Raghavan, sent an email asking staff members to assist with Bard and rework its responses. The email also reportedly links to a page with guidance for employees on how to interact with Bard.
Google enlists the assistance of its staff
“Bard learns best by example, so taking the time to carefully create an answer will go a long way in helping us to enhance the model,” the document states.
Despite being an “exciting technology,” Bard is still in its early stages, according to Raghavan in the document. He adds, “Your assistance in the dogfood programme will hasten the model’s training and test its load capability (not to mention that using Bard is actually pretty enjoyable!). We take our obligation to get it right very seriously.”
Do’s and Don’ts
Regarding the dos and don’ts, Google has instructed its staff to make sure that Bard’s responses are “polite, easygoing, and friendly.”
It further states that the responses must be given in the first person and in a tone that is impartial and unprejudiced. It appears that Google is attempting to make Bard's responses more comparable to ChatGPT's, since the AI chatbot's main goal is to respond in a human-like manner while remaining impartial.
The don’t list seems to have more items. According to company policy, employees should “avoid making assumptions based on ethnicity, nationality, gender, age, religion, sexual orientation, political ideology, location, or similar factors.” They are also instructed not to “imply emotion, or claim to have human-like experiences,” refer to Bard as a person, or characterise it as such.
Staff are also expected to reject a response and flag it to the search team if they see Bard providing “legal, medical, or financial advice” or coming up with vile and offensive responses.
Employee incentive programmes
Google has also announced rewards for staff members who choose to contribute to Bard's development. Those who assist in correcting the chatbot's errors will be awarded a “Moma Badge,” which will show up on their internal profile, and Raghavan will invite the top 10 contributors to a special listening session.