How Legal Ops Can Overcome Challenges to Develop Sound AI Policy

June 21, 2024

It’s critical for companies to develop a proactive and comprehensive AI policy, as legal experts discussed at the recent AI Strategy Summit in New York.

Panelists delving into the topic of artificial intelligence policy included Benesch Law Partner Aslam Rawoof, Exos General Counsel Marc Mandel, and Verizon Managing Associate Josh Dubin.

Benesch published a recap of the panel discussion on its website. Here are some key takeaways about the challenges of AI governance:

Employee-driven AI adoption: AI tools are entering organizations through multiple, sometimes unmonitored, channels, and employees are adopting them faster than formal policies can keep pace. Organizations therefore need clear, accessible guidelines to ensure safe AI use.

Organizational awareness: Developing AI policies requires understanding use cases and conducting audits to assess tool adoption. Additionally, AI needs to be demystified and common misunderstandings addressed through education.

Policy development strategies: Companies should integrate AI guidance into their existing policies. Those policies should be written simply and reviewed quarterly to keep pace with technological change. To mitigate potential AI-driven discrimination risks, diversity, equity, and inclusion (DEI) committees should be included in the review process.

Employee data and confidentiality: Policies should cover the use of employee data, not just consumer data. It will be critical to educate employees on the appropriate use of AI tools, such as avoiding the sharing of sensitive information.

Leadership and education: Using short educational formats like webcasts and AI resource channels can enhance understanding and compliance. Active leadership involvement is vital to balancing legal and technical considerations.

Monitoring and oversight: While monitoring tools can assist in overseeing AI usage, human oversight remains necessary. The goal of monitoring should be to guide proper use, promoting AI as a beneficial tool within defined parameters.

Developing practical AI policies: Begin with employee surveys and focus groups to gauge current AI usage. It will be essential to understand relevant laws and compliance obligations.

By referencing established frameworks like the National Institute of Standards and Technology (NIST) AI Risk Management Framework, involving diverse stakeholders, and continuously adapting their policies to match AI’s rapid progression, organizations can foster innovation while maintaining robust compliance.

