Workers tasked with improving the output of Google's Bard chatbot say they've been asked to prioritize fast work at the expense of quality. Bard sometimes generates inaccurate information simply because its fact-checkers don't have enough time to verify the model's output, one such worker told The Register.
Large language models such as Bard learn which words to generate next for a given prompt by ingesting mountains of text from sources such as the web, books, and papers. But that information is messy, and sentence-predicting chatbots cannot tell fact from fiction; they simply do their best to mimic the way we humans write.
In hopes of making large language models like Bard more accurate, crowdsourced workers are hired to assess the accuracy of the bot's responses; that feedback is then fed back into the pipeline so the bot's future responses are of higher quality. Google and others thus put humans in the loop to boost the apparent capabilities of trained models.
Ed Stackhouse – a longtime contractor hired by data services provider Appen, working on behalf of Google to improve Bard – alleges that workers are not given enough time to analyze the accuracy of Bard's output.
Contractors must read Bard's input prompts and responses, search the internet for relevant information, and write notes assessing the quality of the text. "You can be given just two minutes for something that would actually take you 15 minutes to verify," he told us. That does not bode well for improving the chatbot.
An example might be reviewing a blurb Bard generated describing a particular company. "You would have to check that the company was started on such-and-such a date, that it carried out such-and-such a project, that the CEO is such-and-such," he said. There are multiple facts to check, and often not enough time to verify them thoroughly.
Stackhouse is among a group of contract workers sounding the alarm over how their working conditions could make Bard inaccurate and potentially harmful. Someone might ask Bard, "Can you tell me the side effects of a particular prescription?", and he would have to go through and check each one [Bard listed]. "What if I miss one?" he asked. "Every prompt and response that we see in our environment can go to customers – to end users."
And it's not just medical queries that are risky. Bard spitting out incorrect information about politicians, for example, could sway people's views on elections and undermine democracy.
Stackhouse's concerns are not far-fetched. OpenAI's ChatGPT notoriously falsely accused an Australian mayor of having been found guilty in a corruption case dating back to the early 2000s.
If workers like Stackhouse are unable to detect and correct these errors, AI will continue to spread falsehoods. Chatbots like Bard could fuel a shift in the narrative threads of human history or culture – important facts could be erased over time, he said. "The biggest risk is that it can mislead and sound so good that people will be convinced the AI is right."
Appen contractors are penalized if they don't complete tasks within the allotted time, and attempts to persuade managers to give them more time to assess Bard's responses have failed. Stackhouse is among a group of six workers who say they were fired for speaking out, and who have filed an unfair labor practice complaint with the US labor watchdog, the National Labor Relations Board, as The Washington Post first reported.
The workers accuse Appen and Google of unlawful dismissal and of interfering with their efforts to unionize. They were reportedly told they had been let go due to business conditions. Stackhouse said he found that hard to believe, since Appen had previously emailed workers saying there had been a "significant increase in available jobs" for Project Yukon – a program to assess text for search engines, which includes Bard.
Appen was offering contractors an extra $81 on top of base pay to work 27 hours a week; workers are said to be normally limited to 26 hours a week at $14.50 an hour. The company has active job postings seeking search engine raters specifically for the Yukon project. Appen did not respond to The Register's questions.
The group also tried to reach Google, contacting SVP Prabhakar Raghavan – who heads the tech giant's search operations – and was ignored.
Google spokesperson Courtenay Mencini did not address the workers' concerns that Bard could be harmful. "As we've shared, Appen is responsible for the working conditions of its employees – including pay, benefits, employment changes, and the tasks they're assigned. We, of course, respect the right of these workers to join a union or participate in organizing activity, but it's a matter between the workers and their employer, Appen."
Stackhouse, however, said: "It's their product. If they want a flawed product, that's on them." ®