How to Set Standards So AI Tools Don’t Replace Human Legal Judgment
By Noga Rosenthal
March 25, 2026
Noga Rosenthal is a seasoned privacy compliance and data ethics professional specializing in the technology sector. She has developed and managed global privacy programs for companies such as Xaxis, Epsilon, and Ampersand. Rosenthal serves as a trustee for the Practicing Law Institute and as an adjunct professor at Fordham Law School.
Many general counsel have started noticing a pattern. Their teams are using AI tools, but not always well. Some lawyers are copying and pasting AI-generated answers directly into advice for their executives, or sending those answers up the chain, without applying their own judgment. Others are running contracts through AI redlining tools and accepting the output without stopping to ask whether those redlines reflect the company’s risk tolerance or the reality of the deal. On the other end of the spectrum, some lawyers are avoiding AI entirely, even when it would help them work faster or learn more.
What these patterns have in common is not an adoption problem. Most in-house teams have moved past experimentation to approved tools. This is a behavioral issue. If a lawyer simply copies and pastes an AI-generated answer and shares it with a business partner, it won’t be long before the business team asks why it needs the lawyer. A legal department does not get value from AI because lawyers can generate longer answers, broader markups, or more summaries in record time. It gets value when AI helps lawyers think more critically, work more efficiently, and deliver well-reasoned end products that reflect their judgment. To prevent that erosion of the lawyer’s value, legal leaders need to set behavioral standards for how their teams use these tools.
A subtler risk of AI in a legal department is what it does to the development of junior lawyers. Learning to think like a lawyer requires wrestling with hard problems, sitting with uncertainty, and building the instinct to know when something is wrong before you can fully articulate why. If junior attorneys default to AI for first-pass analysis, they may get to the right answer without ever learning how to get there on their own. General counsel should set an expectation that AI is used to test thinking, not to replace the effort of developing it, and that the standard for good work is still a lawyer who can explain their reasoning.
The need for a policy based on judgment, verification, and calibration
General counsel should address these risks directly with a clear set of expectations for how lawyers should and should not use AI. Closing that behavioral gap starts with standards in three areas: judgment, verification, and calibration.
First, attorneys should not copy and paste AI-generated content into advice, notes, or agreements without reviewing and revising it. Nor should they assume that more edits or more redlines to a contract reflect better legal work. These failures do not come from bad tools. They come from treating AI as a substitute for thinking rather than a tool to improve it.
Second, attorneys should always verify any response from an AI tool. Legal teams should assume that anything generated by AI may be incomplete, overbroad, or wrong in a material way. That means checking cases, statutes, contract provisions, citations, quotes, and factual assertions before relying on them. Legal teams will often find that a legal AI tool changes its output or stance when an attorney asks the question differently or adds nuance. Attorneys should push back on an AI-generated answer within the tool whenever their instincts tell them the response is “off.”
The over-redlining risk
The third principle is calibration. One of the most common problems in legal AI use is over-redlining. AI tools tend to identify every arguable issue and mark every possible change. That may be useful as an issue-spotting exercise, but it is not the same as giving business-ready legal advice. Good in-house lawyers know the difference and can provide a practical markup. Sometimes the right instruction is to redline comprehensively. Other times the right instruction is to mark only the real risk points and keep the deal moving. Legal teams need to understand that AI does not decide this threshold on its own. A good lawyer does, by calibrating the tool.
Consider a common scenario where the opposing side has said it does not accept redlines and has the market power to hold that line. In that case, the attorney can use AI to highlight the business risks, such as the absence of a termination-for-convenience right, and have the businessperson sign off on those risks. However, it’s a waste of everyone’s time if the attorney tries to redline that agreement using their standard vendor playbook within an AI tool.
Tied to this, legal teams should always calibrate their AI tool by giving it context about the company’s industry, risk tolerance, and ownership structure. A hospital or a bank may take a more conservative stance on a contract than a start-up. Lawyers should also calibrate based on who will receive the work product. A summary for a time-pressed CEO should be concise and focused on key points, while a memo for the deal team may need more detail on specific terms and risks. The user needs to instruct the AI tool accordingly so that the output matches the audience and purpose.
Use AI before escalating, but bring your own judgment
Legal teams should also not rely on AI to make judgment calls that belong to the lawyer. For example, an AI tool may suggest removing an indemnity cap or broadening a limitation of liability clause based on general legal risk. Whether that change makes sense depends on the company’s risk tolerance, the commercial context, and the importance of the deal. That is a judgment call the lawyer must make. The AI tool cannot make it.
Once guardrails are clear, legal teams should operate under defined standards for when they are expected to use AI in their work. These are not suggestions; they are baseline expectations for how legal professionals should perform.
Lawyers should be expected to use AI to pressure-test issues or research questions before escalating them. They should ask the AI follow-up questions, explore different approaches, and identify possible paths. They can then escalate the issue to their manager, but they must include a point of view. “Here is the issue, here is what I checked, here is my recommendation, and here is where I want input” should be the standard.
AI can be a great second check on legal analysis
That leads to one of the best uses of AI in a legal department, which is as a second check on someone’s own legal analysis. Used properly, AI can help lawyers test a conclusion they have already reached. “I think the answer is X. Am I missing anything?” is a strong prompt. “What is the decision tree for this policy?” is another. A flawed decision tree may reveal that the underlying policy itself needs revision. Also helpful is asking the AI to “Give me the strongest counterargument to this interpretation.” Those uses are valuable because they sharpen the lawyer’s reasoning rather than replace it.
AI is also well-suited to drafting. It can produce first drafts of clauses, fallback language, templates, amendments, issue lists, and post-signing summaries. For in-house teams handling a high volume of commercial work, the technology can save substantial time. A deal summary generated by AI immediately after signing is another good example. An attorney can verify the information quickly while the details of the deal are fresh. The summary can then be used to ensure ongoing contract compliance.
The same is true for templates and amendments. Many legal teams waste time recreating standard forms from scratch or making repetitive edits that AI can handle well on the first pass. Used with supervision, AI can shorten that cycle and let lawyers spend more time on business judgment, negotiation strategy, and stakeholder advice.
A practical checklist for internal rules:
Do:
- Verify every legal citation, factual statement, quote, and contract reference.
- Bring your own recommendation, not just an AI-generated output.
- Use AI to ask first-pass questions before escalating.
- Use AI to teach yourself about basic legal concepts such as various standard contractual provisions, then use it to test yourself on those concepts.
- Use AI to challenge your initial answer and identify gaps.
- Use AI to draft clauses, templates, amendments, and first-pass redlines.
- Use AI to prepare post-close summaries for contract compliance while the details are still fresh.
Don’t:
- Paste AI content into advice, notes, or agreements without review.
- Assume that more edits mean better legal work.
- Use AI to avoid judgment calls that belong to the lawyer.
- Input sensitive or privileged information except through approved workflows.
Conclusion
For general counsel, the management issue is not whether lawyers are using AI. They already are. The management issue is whether the department has taught them to use it like a disciplined legal team rather than like an autocomplete engine. That discipline requires structure, not just culture. AI use should be governed like any other enterprise risk, with a defined process for approving tools, clear data boundaries that protect privileged and sensitive information, and explicit accountability for how the technology is used. Someone in the department needs to own the approved-tool list, and lawyers need to know what falls outside approved workflows.
A good legal AI policy should leave the team with one standard: use AI to accelerate your work, challenge your thinking, and improve your draft. AI weakens your legal function and your skills if it replaces your judgment. But it can become your competitive edge if it helps sharpen that judgment.