OpenAI’s Paper Analyzes ChatGPT’s Security Risks

July 6, 2023

With ChatGPT becoming more prevalent, it is essential to consider the reliability of the information it provides. A recent paper published by OpenAI, the developer of ChatGPT, describes the various safety, privacy and cybersecurity concerns associated with its AI tool, as well as the actions the company has taken to mitigate potential harm. The paper states that ChatGPT tends to “hallucinate,” meaning it produces content that is nonsensical or untruthful relative to its sources. For example, when asked to provide a legal argument, ChatGPT supplied complete citations to case law that did not exist. Even when given specific documents to draw information from, hallucinations in its answers could not be ruled out.

The more ChatGPT is used, the more trust users will place in the information it provides, even as the risk of hallucinations increases. This opens the possibility of hallucinated “facts,” including wholly fabricated legal precedents, being treated as truth. Closely linked to these risks is the issue of overreliance: by giving users complex and detailed answers, ChatGPT becomes ever more believable and gains authority in the process. Finally, ChatGPT can generate harmful content that can affect individuals in the real world. These challenges highlight the need to ensure that any business deployment of ChatGPT or other generative AI services comes with adequate controls over data preparation, prompts and the screening of ChatGPT’s responses.
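To illustrate what such response screening might look like in practice, below is a minimal Python sketch, not taken from the OpenAI paper, that flags case-law-style citations in a generated answer and routes them for human verification before the answer is relied on. The get_model_response function and the citation pattern are assumptions for illustration, not part of any real API.

```python
import re

# Rough pattern for US case-law citations such as "410 U.S. 113" or
# "123 F.3d 456" (illustrative only; real verification would query
# an authoritative legal database).
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][\w.]*\s+\d{1,4}\b")


def get_model_response(prompt: str) -> str:
    """Placeholder for a call to a generative AI service (hypothetical)."""
    raise NotImplementedError("Connect this to your model provider.")


def screen_response(prompt: str) -> dict:
    """Return the model's answer plus any citations that need human review."""
    answer = get_model_response(prompt)
    # Hallucinated precedents look exactly like real ones, so every
    # citation is routed to a human reviewer rather than trusted as-is.
    return {
        "answer": answer,
        "citations_to_verify": CITATION_PATTERN.findall(answer),
    }
```

The point of the sketch is the design choice, not the regex: a deployment treats every citation the model emits as unverified until a person (or a lookup against a real citator) confirms it exists.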
