
Google Said to Have Warned Employees Against Using Confidential Information on AI Chatbot

Alphabet is cautioning employees about how they use chatbots, including its own Bard, even as it markets the program around the world, four people familiar with the matter told Reuters.

Google’s parent company has advised employees not to enter confidential materials into AI chatbots, the people said and the company confirmed, citing its longstanding policy on safeguarding information.

Chatbots, among them Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and respond to countless prompts. Human reviewers may read the chats, and researchers have found that similar AI models can reproduce the data they absorbed during training, creating a leak risk.

Alphabet also warned its engineers to avoid directly using computer code that chatbots can generate, some of the people said.

Asked for comment, the company said Bard can make undesired code suggestions but that it still helps programmers. Google also said it aims to be transparent about the limitations of its technology.

The concerns show how Google wants to avoid business harm from the software it launched to compete with ChatGPT. At stake in Google’s race against ChatGPT’s backers, OpenAI and Microsoft, are billions of dollars of investment, not to mention the advertising and cloud revenue from new AI programs.

Google’s caution also reflects what is becoming a security standard for corporations: warning employees about the use of publicly available chat programs.

A growing number of businesses around the world are setting up guardrails on AI chatbots, the companies told Reuters, among them Samsung, Amazon.com and Deutsche Bank. Apple, which did not return a request for comment, is reported to have done so as well.


Some 43 percent of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents, including from top US-based companies, conducted by the networking site Fishbowl.

In February, Google asked employees testing Bard ahead of its launch not to give it internal information, Insider reported. Now Google is rolling out Bard to more than 180 countries and in 40 languages as a springboard for creativity, and its warnings extend to its code suggestions.

Google told Reuters it had held detailed conversations with Ireland’s Data Protection Commission and was addressing the regulator’s questions, following a Politico report on Tuesday that the company had postponed Bard’s EU launch this week pending more information on the chatbot’s impact on privacy.

Concerns about sensitive information

Such technology can draft emails, documents and even software, promising to significantly speed up tasks. However, the content it produces may include false information, sensitive data or even copyrighted passages from a Harry Potter novel.

Google’s privacy notice updated on June 1 also states: “Do not include confidential or sensitive information in your Bard chats.”

Several companies have developed software to address such concerns. For instance, Cloudflare, which protects websites from cyberattacks and offers other cloud services, is marketing a capability that lets businesses tag data and restrict it from flowing externally.

Google and Microsoft also offer conversational tools to enterprise customers at a higher price point that do not absorb data into public AI models. The default setting in both Bard and ChatGPT is to save users’ conversation history, which users can choose to delete.


Yusuf Mehdi, Microsoft’s consumer chief marketing officer, said it “makes sense” that companies would not want their employees to use public chatbots for work.

“Companies are taking a duly conservative standpoint,” Mehdi said, explaining how Microsoft’s free Bing chatbot compares with its enterprise software. “There, our policies are much more strict.”

Microsoft declined to comment on whether it has an outright ban on employees entering confidential information into public AI programs, including its own, though another executive there told Reuters he had personally limited use.

Matthew Prince, CEO of Cloudflare, said that typing confidential matters into a chatbot was like “turning a bunch of PhD students loose in all of your private records.”

© Thomson Reuters 2023

