What DeepSeek Can Teach Legal Teams About Creating Stronger GenAI Policies

Leonard J. Dietzen, III and Derek Dzwonkowski

April 2, 2025

Leonard J. Dietzen III is a partner at RumbergerKirk in Tallahassee, Florida, who concentrates his practice on all aspects of employment law for both private and public sector employers. He can be reached at [email protected].
Derek Dzwonkowski is an associate at RumbergerKirk in Tallahassee, Florida, who concentrates his practice on labor and employment law. He can be reached at [email protected].

The instant popularity of China’s DeepSeek-V3 generative artificial intelligence model underscores why companies should craft stronger GenAI policies that minimize the risks of employees exposing sensitive data, violating compliance regulations, and harming their companies’ brand images.

As noted in an October 2024 Today’s General Counsel story, GenAI tools can compromise trade secrets, data security, and regulatory compliance. Companies using any GenAI technology risk their confidential information becoming part of the data that trains the model itself, potentially leading to unauthorized access, misuse, or government surveillance.

DeepSeek is even riskier than many other platforms because the company and its servers are based in China, where data privacy laws were crafted to protect state surveillance, according to an interview with Georgetown University law professor Mark Jia in online magazine ChinaFile. 

“China’s privacy laws are meant to preserve a broad ‘exceptional zone’ for state surveillance in areas like intelligence collection, law enforcement, and domestic stability maintenance,” Jia said.

DeepSeek is open about its location in its online privacy policy: “We store the information we collect in secure servers located in the People’s Republic of China,” the company writes. “Where required, we will use appropriate safeguards for transferring personal information outside of certain countries, including for one or more of the purposes as set out in this Policy, we will do so in accordance with the requirements of applicable data protection laws.”

Many U.S. companies must adhere to stringent data protection laws, such as the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR) in Europe. But DeepSeek’s privacy policy “is clearly not compliant with GDPR,” according to the Center for Strategic and International Studies, and does not afford users any data protection. Therefore, companies using the tool may inadvertently violate legal requirements, leading to penalties, lawsuits, or reputational harm.

Additionally, under the 2022 CHIPS and Science Act, the Biden Administration flagged China as a “country of concern” to prevent the law’s technology incentives from being used “for malign purposes by adversarial countries against the United States.” While this designation could change under the Trump Administration, the United States’ fraught relationship with China suggests a heightened risk of exposing data to an AI system based there.

Crafting a Policy with These Risks in Mind

Until companies can fully assess the security and compliance measures of foreign-owned AI tools like DeepSeek, they should limit or prohibit employee use of such platforms. That said, the advent of a player like DeepSeek and the fast-evolving AI landscape remind us why AI policies require continuous monitoring and improvement. Key considerations that any policy must address include:

  • Ethical guidelines, which establish standards for fairness, transparency, and accountability, are based on the possibility that an AI system could generate biased answers from biased data. Consider how a 2014 Amazon AI project to review applications for computer programming job openings “taught itself that male candidates were preferable” because its information was based on resumes submitted to the company over 10 years when men dominated the industry, according to Reuters. An AI system based in another country, especially one not politically aligned with the U.S., could experience a similar problem. Ensure AI governance and policies address these ethical concerns by evaluating vendor policies and practices and considering best practices that can reduce the risk of bias, such as keeping a “human in the loop” for quality assurance.
  • Compliance and legal requirements are of particular concern given differing regulatory obligations governing AI use in China versus those in the U.S. (especially in California) and Europe. Companies should get clearance from compliance experts before using DeepSeek or any other new platform.
  • Data privacy and security measures are another area for companies to watch closely when using GenAI products like DeepSeek. A typical AI policy includes implementing protocols to safeguard sensitive data, including storage and access controls, but as noted above, DeepSeek’s data is stored on Chinese servers, where its security is unclear and raises red flags. Policies must specify which standards (the GDPR, California’s CCPA, or another jurisdiction’s rules) a company commits to meeting before it allows new technologies and products on company servers.
  • IT and cybersecurity professionals should be tapped to review the risks associated with using any AI platform, especially something like DeepSeek. Neil Sahota, an AI advisor to the United Nations, writes in Forbes that companies are investing in “AI security systems capable of detecting and neutralizing AI intrusions” alongside legal strategies and intellectual property management.
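The internal monitoring these considerations call for can start with something as simple as scanning proxy or firewall logs for traffic to unapproved GenAI endpoints. Below is a minimal sketch; the log format (timestamp, user, destination host) and the domain list are illustrative assumptions only, and a real deployment would use the organization’s actual log schema and a maintained list of restricted services:

```python
# Flag proxy-log entries whose destination host matches an unapproved
# GenAI domain. The log format and domain list here are hypothetical.

UNAPPROVED_AI_DOMAINS = {"deepseek.com"}  # illustrative example list

def flag_unapproved(log_lines):
    """Return (user, host) pairs for requests to unapproved AI domains.

    Each log line is assumed to look like: '<timestamp> <user> <host>'.
    """
    flagged = []
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines
        _, user, host = parts
        # Match the listed domain itself or any subdomain of it.
        if any(host == d or host.endswith("." + d)
               for d in UNAPPROVED_AI_DOMAINS):
            flagged.append((user, host))
    return flagged

logs = [
    "2025-04-01T09:00 alice chat.deepseek.com",
    "2025-04-01T09:05 bob api.example.com",
]
print(flag_unapproved(logs))  # → [('alice', 'chat.deepseek.com')]
```

A script like this only surfaces usage for review; the policy itself should spell out how flagged activity is escalated and communicated back to employees.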


Ongoing Training of Employees Is Key

As new products emerge rapidly, some more dangerous than others, companies need to react quickly to update employees on the risks of new technologies. Employees need to understand best practices and protocols for working with any AI system and avoiding exposure of sensitive information, especially critical trade secrets, to other AI users.

Companies must consistently monitor employee AI usage and regularly update both their policies and the frequency of their training as the AI industry develops. Tasks should include auditing AI usage, drafting new best practices and compliance requirements, and establishing feedback loops to identify potential issues before they become liabilities.

Conclusion

The security and privacy issues raised by DeepSeek explain why several state and federal agencies have recently banned their employees from using the product. Companies face similar concerns and should likewise limit or prohibit employee use of foreign-owned AI tools like DeepSeek until they can fully assess those tools’ security and compliance measures.

The emergence of DeepSeek provides valuable lessons for legal teams. Crafting stronger GenAI policies and taking a precautionary approach helps prevent unintentional data exposure and regulatory violations. Companies should establish clear guidelines on which AI tools are approved for workplace use and enforce these policies through internal monitoring and regular communication.
