Google, one of the biggest supporters of AI, warns its employees against chatbots, ETHRWorldSEA


Written by Jeffrey Dastin and Anna Tong

SAN FRANCISCO – Alphabet Inc is cautioning employees about how they use chatbots, including its own Bard, at the same time as it markets the software around the world, four people familiar with the matter told Reuters.

The Google parent has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing a long-standing policy on safeguarding information.

Chatbots, including Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer countless prompts. Human reviewers may read the chats, and researchers have found that similar AI models can reproduce data they absorbed during training, creating a leak risk.

Alphabet also alerted its engineers to avoid direct use of computer code that chatbots can generate, some of the people said.

Asked for comment, the company said Bard can make undesired code suggestions, but that it helps programmers nonetheless. Google also said it aimed to be transparent about the limitations of its technology.

The concerns show how Google wants to avoid business harm from software it launched in competition with ChatGPT. At stake in Google’s race against ChatGPT’s backers OpenAI and Microsoft Corp are billions of dollars of investment and still-untold advertising and cloud revenue from new AI programs.

Google’s caution also reflects what is becoming a security standard for corporations: warning personnel about using publicly available chatbots.

A growing number of businesses around the world have set up guardrails on AI chatbots, among them Samsung, Amazon.com and Deutsche Bank, the companies told Reuters. Apple, which did not respond to requests for comment, reportedly has as well.

Some 43% of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents, including from top US-based companies, conducted by the networking site Fishbowl.

By February, Google had asked employees testing Bard before its launch not to give it internal information, Insider reported. Now Google is rolling out Bard to more than 180 countries and in 40 languages as a springboard for creativity, and its warnings extend to the tool’s code suggestions.

Google told Reuters it has had detailed conversations with Ireland’s Data Protection Commission as it addresses regulators’ questions, following Politico’s report on Tuesday that the company was postponing Bard’s EU launch this week pending more information about the chatbot’s impact on privacy.

Concerns About Sensitive Information

Such technology can draft emails, documents, even software itself, promising to vastly speed up tasks. Included in this content, however, can be misinformation, sensitive data or even copyrighted passages from a “Harry Potter” novel.

Google’s privacy notice, updated on June 1, also states: “Don’t include confidential or sensitive information in your Bard conversations.”

Some companies have developed software to address such concerns. For instance, Cloudflare, which defends websites against cyberattacks and offers other cloud services, is marketing a capability for businesses to tag and restrict some data from flowing externally.

Google and Microsoft also offer conversational tools to business customers that come with a higher price tag but refrain from absorbing data into public AI models. The default setting in Bard and ChatGPT is to save users’ conversation history, which users can opt to delete.

Yusuf Mehdi, Microsoft’s consumer chief marketing officer, said it “makes sense” that companies would not want their employees to use public chatbots for work.

“Companies are taking a duly conservative standpoint,” said Mehdi, explaining how Microsoft’s free Bing chatbot compares with its enterprise software. “There, our policies are much more strict.”

Microsoft declined to say whether it has a blanket ban on employees entering confidential information into public AI programs, including its own, though a different executive there told Reuters he had personally restricted his use.

Typing confidential matters into chatbots was like “turning a flock of PhD students loose on all of your private records,” said Matthew Prince, CEO of Cloudflare.

(Reporting by Jeffrey Dastin and Anna Tong in San Francisco; Editing by Kenneth Li and Nick Zieminski)

  • Posted June 20, 2023 7:00 AM EST
