A Reuters/Ipsos poll indicated that many workers in the United States are using ChatGPT to assist with simple tasks, despite concerns that have led companies such as Microsoft and Google to restrict its use.
Companies around the world are weighing how best to make use of ChatGPT, a chatbot program that uses generative AI to hold conversations with users and respond to a wide range of prompts. However, security firms and businesses have raised concerns that it could lead to leaks of intellectual property and strategy.
Anecdotal examples of people using ChatGPT to help with their day-to-day work include drafting emails, summarising documents and doing preliminary research.
Some 28% of respondents to the online poll on artificial intelligence (AI) between July 11 and 17 said they regularly use ChatGPT at work, while only 22% said their employers explicitly allowed such external tools.
The Reuters/Ipsos poll of 2,625 adults across the United States had a credibility interval, a measure of precision, of about 2 percentage points.
Some 10% of those polled said their bosses explicitly banned external AI tools, while about 25% did not know if their company permitted use of the technology.
ChatGPT became the fastest-growing app in history after its launch in November. It has created both excitement and alarm, bringing its developer OpenAI into conflict with regulators, particularly in Europe, where the company’s mass data collection has drawn criticism from privacy watchdogs.
Human reviewers from other companies may read any of the generated chats, and researchers have found that similar artificial intelligence (AI) models can reproduce data they absorbed during training, creating a potential risk for proprietary information.
“People do not understand how the data is used when they use generative AI services,” said Ben King, VP of customer trust at corporate security firm Okta (OKTA.O).
“For businesses this is critical, because users don’t have a contract with many AIs – because they are a free service – so corporates won’t have run the risk through their usual assessment process,” King said.
OpenAI declined to comment when asked about the implications of individual employees using ChatGPT, but highlighted a recent company blog post assuring corporate partners that their data would not be used to train the chatbot further, unless they gave explicit permission.
When people use Google’s Bard, it collects data such as text, location, and other usage information. The company allows users to delete past activity from their accounts and to request that content fed into the AI be removed. Alphabet-owned Google (GOOGL.O) declined to comment when asked for further detail.
Microsoft (MSFT.O) did not immediately respond to a request for comment.