Most AI advice for legal teams is built for big departments. That misses the point for the lawyer who is the entire department, or one of two or three. The constraints are different. The leverage is different. Even the questions are different. That was the focus of a How to Contract webinar hosted by Laura Frederick and featuring Michelle Fleming, Chief Legal Officer at a managed IT services company, and Ben Kiekel, Senior Counsel at Aptos Labs, where he heads the commercial legal department. The conversation was practical, opinionated, and specific to the working life of a lawyer who has no second chair to pass things to.

Strategy #1 – Start With Your Bottlenecks, Not With Features

Anyone who has demoed a legal AI tool knows the trap. The vendor walks you through a feature list. You start matching features to things you might want to do. By the time you finish the demo, you have spent an hour on a tool that may or may not solve a problem you actually have.

Michelle said her first reaction to AI in 2022 was the same one most lawyers have. She thought about features. She thought about what the product did. That framing is backwards for a small team. The right question is where you are slow. Where you are the bottleneck. Where the work piles up. Once you know that, you go looking for tools that fit those bottlenecks instead of trying to retrofit your work to a tool's feature set.

This reframe matters more than it sounds. Small teams have no slack. Spending two weeks setting up a tool that addresses something that was not your real constraint is a real cost, not a small one. The bottleneck question forces honesty about where the time actually goes.

Ben pushed the same point from the other side. The risk for a solo lawyer is not that AI is too hard to use. It is that AI is too fun to use. He warned about the trap of going deep on small tweaks because the tool is new and exciting, when those tweaks were never the bottleneck. His fix is time-boxing. Thirty minutes of play, then thirty minutes of implementation, with goals attached. Without the box, the play time eats the implementation time, and nothing in the workflow actually changes.

The combined point is simple. Diagnose first, then deploy. Tool selection is a downstream decision. The upstream decision is knowing what is actually slowing you down.

Strategy #2 – Build With the Business Team, Not For Them

Most legal AI projects fail in the same way. Legal builds something in a silo, hands it to the business, and the business resents it. The intake form gets ignored. The new memo template sits unused. The shared drive fills with documents nobody reads.

Ben's fix is to build with the business from the start. He has had real success collaborating with the BD team on a shared intake bot, where the business team has a conversation with the AI rather than going through a back-and-forth with legal. The business prefers that interaction because it skips the cycle of legal asking clarifying questions. The AI then produces a memo that legal can actually use without having to rework it. The reason it works is buy-in. The business helped build the thing, so they use the thing.

Michelle made the same point about jointly trained AI versus a static memo template. You can put a new memo template in front of a business team and they will follow it begrudgingly. Train an AI on the five things legal needs to see in every memo, and the business team gets a tool that nudges them toward those answers without legal having to chase them. If exclusivity is one of the questions and the memo does not address it, that is two or three days of follow-up legal does not have to do.

The deeper point is about the relationship, not the tool. Small legal teams cannot afford a corrosive dynamic with the business. AI built collaboratively becomes a place where legal and the business meet. AI built in a silo becomes another thing legal imposes that the business works around.

Strategy #3 – Treat AI as an Extension of Your Judgment, Not a Replacement

The discipline that has to sit underneath every AI workflow is simple to state and hard to hold. Use the tool as an extension of yourself. Do not send it into battle without review. Ben put it that way, and the framing is worth keeping in front of you because the tool will work hard to make you forget it.

The reason this discipline matters is that AI is built to please. It will give you advice until your bill runs out, as Ben put it, and the advice it gives is what it thinks you want to hear. That is a different thing from correct advice. Treat every output that way and you start catching the problems that matter.

Michelle was direct about her own AI use. AI does roughly 60 to 70 percent of the hands-on work. The remaining 30 to 40 percent is judgment, and that part does not get delegated. The split is useful because it tells you what to look for when you review AI output. You are checking the thing the AI cannot do, not rereading the thing it did fine.

Ben told a story about an IP assignment clause where the AI randomly switched the party names in one section. He compared it to the meme of the AI saying yes, the infected kidney was on the left side, sorry about that. The errors that matter most are not typos. They are the catastrophic, hard-to-catch errors that look fine on a casual read. That is what burns you, and it is why a casual read is not a review.

One workflow that helps: run a draft through one model, then paste the output into a different model and ask it to find errors. Models seem more aggressive when they are reviewing another model's work than when they are reviewing their own. Ben uses Claude as his primary writing tool and has Gemini or another model audit the output. He also uses a related trick to keep the AI honest. Ask the AI to grade its own faithfulness to the source document, section by section. Sometimes it catches its own hallucinations. Sometimes it rubber-stamps. The rubber-stamping is the warning that you cannot fully outsource the audit.

Michelle put the broader caution simply. AI lacks context. It does not know that this is the third escalation to a particular party. It does not know that a clause was already negotiated and locked. So it defaults to reasonable assumptions that may not fit your situation. The fix is to give it the context out loud, in the prompt. Be more pedantic than you would be with a human, to the point you would be embarrassed to talk to a real person that way.

Strategy #4 – Use AI to Buy Yourself Time on the Things That Matter

Some of the highest-leverage AI uses for small teams are not flashy. They are about getting the right partial answer in front of the business team fast, so legal's full review does not become a blocker.

Michelle's example is one a lot of solo lawyers will recognize. A 50-page agreement lands in the morning. A meeting with the business team is set for two hours later. Pre-AI, the choice was skim it badly or postpone the meeting. With AI, neither happens. She runs the contract through an AI tool with a prompt asking it to surface the financial terms, term and termination, and the other items the business will want to talk about. She walks into the meeting with a working summary the business can react to. The business gets something to chew on. Legal keeps the meeting. The full review still happens, but on her own time.

Ben's parallel example is the final sanity check. After a contract is otherwise done, run it through AI one more time and ask it to check for logic errors, typos, and missed cross-references. Keep the prompt narrow. He uses something close to "this is a final draft, check it for logic errors and spelling, do not engage with the substance." That last instruction matters because if you let the AI opine on substance at the sanity check stage, it will rewrite things you already negotiated. Scoped properly, the sanity check catches the section number that did not get updated when paragraphs got moved, or the cross-reference that points to a deleted clause. Those are the errors that survive a tired final read.

The pattern in both examples is the same. AI is at its best when you scope it tightly. The prompt does the work. A loose prompt produces output that needs as much editing as it saved.

Strategy #5 – Layer Personas, Examples, and Context Into Your Prompts

Better prompting is the highest-leverage skill on the list. Michelle and Ben both kept coming back to a few patterns that consistently produce better first-draft output.

The first is personas. Michelle takes the CFO's bio and tells the AI to review a contract through that persona. The output anticipates what the CFO will flag, which speeds up the CFO's actual review and pre-empts the back-and-forth. She has also created a manager persona, a "commercial legal guru" she named Jenny, kind but firm, who plays the role of the senior colleague a small team does not have. That is the water cooler chat the solo lawyer does not get to have, recreated in the prompt. Ben said the same idea applies for the venting and commiseration that happens in big-team kitchens. Set up a persona who knows your job and knows you, and use it for the moments when a solo lawyer would otherwise be talking to nobody.

The second is examples over descriptions. If you ask AI for a 12-month limitation of liability, you get one of a thousand generic versions. If you give it a clause you wrote for another client and ask it to adapt that clause to this agreement's context, you get something much closer to what you would have drafted yourself. Show, do not tell. Ben extended the pattern by feeding the AI his existing forms as part of the context. Once the AI sees the granularity and structure you actually use, it produces output that looks like yours.

The third is context, repeated relentlessly. Risk profile. Business model. Writing style. Drafting conventions. The things you would tell a new associate on day one are the things the AI needs to see before every task. Michelle keeps a profile she updates as her business evolves, and feeds that profile to AI for context-heavy work. The discipline of writing down what you know is itself useful. The lawyer who has built a context document has done the harder work of articulating their own preferences.

Strategy #6 – Use AI to Scale Into the Work You Used to Outsource

Solo and small-team lawyers have always faced the same trade-off. You can be deep in a few areas and outsource the rest, or you can be wide and shallow across everything. AI bends that trade-off.

Michelle's framing is the cleanest version of the trade-off. She is not an export compliance expert. With AI, she can get confident on a niche topic fast enough to make a real judgment call. Either the matter is something she can handle internally with AI to fact-check her thinking, or it is genuinely outside what she should touch. AI does not eliminate the need for outside counsel. It clarifies which matters need outside counsel and which do not. When she does send something out, she can send a memo with her own analysis and ask outside counsel to vet it, instead of paying outside counsel to start from a blank page. The bill comes down because the work she is buying is narrower.

The same logic applies to the "someday list," the projects every solo lawyer has parked indefinitely. Updating the code of conduct. Refreshing the code of ethics. Mapping a new state privacy law against your contract templates. AI compresses the time these projects take from weeks to days, which is what makes them actually happen instead of staying on the list.

Ben pulled a useful frame from Michelle that is worth repeating. The traditional lawyer move on a complex agreement is to disappear, read everything in detail, then come back days later with a comprehensive response. That looks productive. It is the wrong move. The better move is the quick AI-assisted first pass that surfaces the high points, sent to the business team within hours of the request. The substantive review still happens. But the business sees the touchpoint up front, the relationship stays collaborative, and the questions that need business input start getting answered while legal is still working. Visibility is part of how a one-person team scales without burning out.

Strategy #7 – Set Real Guardrails Before You Build or Adopt Anything Risky

The risk conversation around AI is where the small-team context bites hardest. A lawyer at a Fortune 500 has IT and security to call. A solo lawyer often does not.

Michelle was honest about her own caution. She has learned the hard way to keep AI inside platforms that have their own agent capability and to keep firewalls between systems. A while back she let Gemini reformat a spreadsheet without making a copy first. The spreadsheet got destroyed. She had to revert to a prior version and lost work. The lesson stuck. She does not let AI loose on data she has not backed up.

The agent question came up directly, and both speakers were candid that they have not built their own agents yet, mostly because they do not have the technical depth to limit the blast radius if something goes wrong. Ben's advice on agents was straightforward. Use them surgically. Run a redline. Do a discrete task. Do not turn the AI loose to comprehensively update an agreement, because you lose track of which changes were yours and which came from the AI. He floated treating the AI's output as if it were an outside counsel draft. Run a redline against your version, and review the AI's edits the same way you would review another lawyer's. That preserves the reviewability that goes away when the AI just rewrites in place.

On consent and confidentiality, Michelle's view is pragmatic. We do not ask permission to use Outlook or SharePoint. The same logic applies to AI tools, with the same underlying duty to vet the security, training behavior, and data handling of the tool before you use it. The right move is usually to strip client-identifying details out of anything you feed an AI, especially in a law firm context where a single data breach across multiple clients is a different problem than a breach inside an in-house team. Replace the counterparty's name with a placeholder. Replace the financial terms with generics. The output you get back is still useful, and the worst-case downside shrinks meaningfully.

A small-team lawyer who does not have IT support is not stuck. The move is to ask. Ask the technical people you do have. Ask peers. Get educated before you build, not after. The lawyer's own risk-management instinct should apply to AI too.

Subscribe to Stay in the Loop

Whether you caught this webinar live or you are reading the recap, our weekly newsletter keeps you current on upcoming How to Contract events and recaps of the ones that already aired. Subscribe now so the practical insights land in your inbox without you having to track us down.
