
TL;DR: Laura Frederick, Michelle Fleming, and Ben Kiekel discussed how solo and small legal teams can use AI to get more done without adding headcount. They covered identifying your bottlenecks before picking a tool, using AI to get past the blank page, building personas to simulate stakeholders and missing colleagues, avoiding over-engineered solutions, running final sanity checks on every agreement, and using AI to triage outside counsel spend and scale across practice areas you would otherwise have to outsource.
Solo and small legal teams carry a different kind of workload. You are expected to be an individual contributor and a strategic advisor at the same time, often covering contracts, compliance, privacy, and ethics with no one to delegate to. That was the focus of a How to Contract webinar hosted by Laura Frederick and featuring Michelle Fleming, Chief Legal Officer of Bell TechLogix, a managed IT services company, with 20 years of experience as a legal team of one, and Ben Kiekel, Senior Counsel and head of commercial legal at Aptos Labs. Michelle and Ben shared specific AI strategies and workflows that make a real difference when your legal department is you or just a few of you.
What AI Leverage Looks Like for Small Teams
Everyone talks about AI making legal teams more efficient. But what does that actually mean when you are the entire department?
Michelle estimated that AI now handles roughly 60 to 70 percent of her hands-on daily work, with the remainder being her judgment. That is a significant shift from late 2022, when generative AI first appeared and she had no time to learn a new tool. The leverage comes from freeing up time for strategic work that solo lawyers rarely get to touch because they are buried in the day-to-day.
The key insight, Michelle said, was that she had started in the wrong place. She used to look at AI tools and ask what features they offered. She now thinks that is backwards. Instead she asks where she is slow, where she is the bottleneck, and what tasks a tool could handle. That reframe changes everything. It means you are fitting the tool to your workflow instead of reorganizing your workflow around somebody else's product.
Ben drew a useful line between play time and implementation time. It is easy to spend hours tweaking prompts on things that are fun but are not actually bottlenecks. His recommendation was to time-box it. Spend 30 minutes exploring and 30 minutes implementing, then set specific goals for what you want to automate or augment. That discipline matters a lot when your time is the only resource you have.
Strategy #1 – Use AI to Get Past the Blank Page
Anyone who has had to create a contract template, policy, or playbook from nothing knows how much time is lost staring at a blank page. You draft, you rewrite, you second-guess the wording. AI eliminates a lot of that.
Michelle said AI gets her about 80 percent of the way to a workable first draft. The other 20 percent is her judgment. She also uses it when an existing template does not fit the deal and she would otherwise be pulling pieces from three or four other agreements to cobble something together. AI handles that assembly faster and often produces a more coherent starting point.
Michelle also uses AI to capture institutional knowledge. She highlighted how we all carry risk tolerances and preferred wording in our heads, but those can vary depending on how much is going on or how tired we are. Building that knowledge into a playbook or a set of prompts makes the output more consistent. It does not just help you scale. It makes your work stronger.
You do have to find the right use cases. Ben described one built around business memos, the quick summaries his team receives from the original requester. They often arrive in rough form, maybe just bullet points, since they are written for the business approver and not for legal. They are over-broad in ways we do not need and missing details we do. Ben uses AI to parse the memo and map it to the structure of the relevant agreement. That instantly reveals gaps. You can see the three things you need answers on before you can draft, without spending hours manually refactoring the memo into something workable. That is a meaningful time save, and it gets the right questions to the business team earlier.
Strategy #2 – Build Personas to Fill the Gaps on Your Team
On a larger legal team, you have colleagues to bounce ideas off. You have a manager who helps you think through a tricky issue. Solo lawyers do not have any of that. AI personas are one way to fill those gaps.
Michelle builds a CFO persona and layers it on top of her playbook review, so the AI checks the contract against her default language and also runs it through the CFO's likely concerns. The result anticipates the CFO's questions before they come up, which speeds up his review time and reduces back-and-forth. That layering concept is worth paying attention to. Stacking a playbook review with a persona review produces better output than either one alone.
Ben took it in a different direction. He created a fictional manager persona, a "commercial legal guru" who is kind but firm. He uses it daily for the kind of brainstorming, venting, and gut-checking that lawyers on bigger teams get at the water cooler. That sounding board does not exist naturally on a small team, and Ben said it genuinely helps him get back on track. Laura mentioned that some teams build a persona for the counterparty and run the draft past it to surface likely objections before the negotiation even starts.
Strategy #3 – Keep Your Judgment in the Driver’s Seat
One important theme of the discussion was that AI is an extension of your judgment, not a replacement for it. The speakers shared stories that illustrate why.
Ben shared a recent example where AI generated an IP assignment clause and randomly swapped the party names in one section. Think about that for a moment. It assigned IP to the wrong party. That is the worst possible error the tool could make, and it is also the hardest to catch on a quick read. Ben said he treats AI output as if it is out to embarrass him at the worst possible moment. He goes section by section and avoids generating entire long-form documents in a single pass.
Michelle told a story about asking AI to convert a scanned PDF into a Word document. The tool reproduced some sections accurately but then inserted entire pages of language that did not exist in the original. When she asked where the language came from, the AI said, "You're right. I made that up." She also pointed out something important about context. AI does not know this is the third communication you have sent to a party. It does not know the tone is escalating. It substitutes what it thinks is reasonable, and that default does not always fit your situation.
Michelle uses a self-grading technique where she has the AI grade its own faithfulness to the source document, section by section. It catches problems most of the time. But even that fails occasionally. She has had AI give itself a passing grade on sections it completely fabricated. The lesson is that no single check is sufficient. You need multiple layers of verification, and your own judgment has to be the final one.
Strategy #4 – Build Workflows That Catch Errors Instead of Creating Them
The difference between AI that helps and AI that creates more work usually comes down to the workflow around it.
Ben warned against trying to build an end-to-end solution all at once. He has seen teams try to handle intake, routing, and redlining in a single AI workflow, often inspired by a YouTube video. AI makes it feel easy to design that kind of system in theory, but the implementation is far more complex. Build in stages. Automate one step at a time. That advice is especially important for small teams that cannot afford a failed implementation.
One workflow that both speakers use is playing AI models off each other. Ben generates a draft in one tool and then sends it to a different one for review. The second model tends to be aggressive about catching errors the first one missed. Asking a model to review its own work is less reliable. Ben noted that the models seem to have a competitive streak when they know the output came from a rival.
Ben also shared his simplest and most actionable workflow. He runs every agreement through AI at the very end for a final sanity check. He keeps the prompt narrow and tells the tool it is a final draft and to check for logic errors, typos, and section numbering problems without engaging with the substance. Michelle added that this final check also catches provisions that were acceptable earlier in the negotiation but no longer work in light of changes made later in the deal. A contract is a series of variables expressed in words, as Ben put it. AI is good at catching when the logic does not balance.
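A narrow final-check prompt might look something like this. This is an illustrative sketch in the spirit of what Ben described, not his exact wording:

```
This is a final draft. Do not comment on the substance of any provision.
Check only for:
1. Logic errors: obligations that contradict each other, defined terms
   used inconsistently, or party names that appear swapped.
2. Typos and grammatical errors.
3. Section numbering and cross-reference problems.
List each issue with its section number. If a section is clean, skip it.
```

Keeping the scope this tight is the point. A broad "review this contract" prompt invites the tool back into substance, which is exactly what you do not want at the final pass.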
Both speakers also found that when AI output goes off the rails, it is better to kill the conversation and start fresh than to keep correcting in the same thread. Trying to fix a bad output with a chain of corrections often sends the tool into a tailspin. It fixes one thing and breaks everything it got right. Ben saves the bad output, opens a new conversation, passes in his own rough take, and asks the tool to refactor with clearer guidance. Laura agreed and said she catches herself arguing with the AI when she should just delete the whole thing and start over.
Strategy #5 – Scale Yourself Without Adding Headcount
When you are the legal department, you are not just the contracts lawyer. You are also the data protection officer, the ethics hotline, and the person everyone calls about export compliance, labor law, and whatever else comes up. Before AI, you might go deep in a couple of areas and outsource the rest. That dynamic has changed for many small and solo legal departments.
Michelle uses AI to get confident faster in niche areas she would have previously sent straight to outside counsel. She is not relying on AI as a final answer. She uses it to triage. Can she handle this issue herself and confirm her analysis with AI, or does she need outside counsel? And when she does send work out, she prepares a memo for outside counsel to review and refine instead of paying them to start from scratch. That changes the economics of outside counsel significantly.
She also uses AI to clear out her someday list. Updating the code of conduct, refreshing the code of ethics, building out policies that never rise to urgent but always need doing. AI helps her move through those faster and actually get them done instead of carrying them indefinitely.
One of the most practical ideas from the session was Michelle's approach to incoming contracts. When a 50-page agreement arrives with a call scheduled two hours later, she uses AI to extract the material commercial terms, financial provisions, and term and termination language. She shares those key issues with the business team immediately so they can start working while she finishes her full review. Ben loved this idea and said it transforms how stakeholders perceive legal. Instead of "it's with legal" meaning a black hole, it means "legal gave us some things to chew on while they finish their review." That kind of visibility scales you as one person because stakeholders always know what you are doing and where things stand.
Top 10 Takeaways for Small Legal Teams
Here are the top 10 takeaways from this webinar.
Start with your bottlenecks, not the tool's features. It is tempting to look at an AI product and ask what it does. The better questions are: Where are you slow? Where are you the bottleneck? What could a tool handle? That reframe matters because it keeps you focused on workflows that actually save time instead of features that sound impressive but do not address your biggest constraints. Michelle spent 20 years as a legal team of one. She learned this the hard way.
Use AI to kill the blank page. Creating a template, policy, or playbook from scratch used to eat hours. AI gets you roughly 80 percent of the way to a workable draft. Your judgment handles the last 20. That shift alone frees up significant time for solo lawyers who previously had to build everything from nothing or Frankenstein together pieces from other agreements.
Capture institutional knowledge before it walks out the door or varies with your mood. We all carry risk tolerances and preferred wording in our heads. But those can shift depending on how tired we are, how much is going on, or whether we are rushing to hit a deadline. Building that knowledge into an AI playbook or a set of prompts makes your output more consistent. It does not just help you move faster. It makes the work product stronger and more reliable.
Build personas to simulate the colleagues and stakeholders you do not have. Michelle layers a CFO persona on top of her playbook review so the output anticipates executive-level questions before they come up. Ben created a fictional manager for the brainstorming and gut-checking we get from colleagues on larger teams. These are not gimmicks. They fill real gaps in the solo lawyer's workflow and produce noticeably better output than either a playbook or a prompt alone.
Treat AI like it is out to embarrass you at the worst possible moment. Ben's IP assignment clause that swapped the party names is the cautionary tale here. That kind of error is catastrophic and nearly invisible on a quick read. Go section by section. Do not generate long-form documents in a single pass. And never assume the output is correct just because 95 percent of it looks right. The remaining 5 percent is where the damage lives.
Play AI models off each other for better error detection. Generate a draft in one model and send it to a different one for review. The second model tends to catch errors the first one missed. Asking a model to spot its own mistakes is far less reliable. This is a low-effort, high-value addition to any AI workflow. It does not replace your own review, but it adds a useful layer.
Run a final sanity check on every agreement before it goes out. This was Ben's simplest and most immediately actionable suggestion. Tell the tool it is a final draft and ask it to check for logic errors, typos, and section numbering problems without engaging with the substance. Keep the scope narrow. A contract is a series of variables expressed in words. AI is good at catching when the logic does not balance. Michelle pointed out that this step also catches provisions that made sense earlier but no longer work given changes made later in the negotiation.
Use AI to extract key issues from incoming contracts before your full review. When a long agreement arrives with a call scheduled hours later, you can use AI to pull out the material commercial terms and share them with the business team immediately. They start working. You keep reviewing. It is genuinely responsive, not just the appearance of it. And it changes the dynamic from "it's with legal" as a black hole to "legal gave us some things to chew on." That kind of visibility is how you scale as one person.
Build in stages, not end-to-end. It is tempting to build a single AI workflow that handles intake, routing, and redlining all at once. AI makes that feel achievable in theory, but the implementation is far more complex than it looks. Over-engineering is one of the most common mistakes Ben sees small teams make. Break the problem into steps. Automate one at a time. Get that step working reliably before adding the next one.
Use AI to triage your outside counsel spend. Solo lawyers cover practice areas they would never handle on a bigger team. AI lets you get confident faster in niche areas and decide whether you can handle something yourself or need to send it out. And when you do send work to outside counsel, you can prepare a memo for them to review and refine instead of paying them to start from scratch. That changes the cost equation significantly. Michelle described using AI this way for questions about privacy laws, and it let her redirect outside counsel spend toward genuinely complex issues rather than basic research.
What These Insights Mean for Your Contracts
Here are four practical strategies you can apply to your AI workflow today.
Audit your bottlenecks before you pick a tool. Spend a week tracking where you lose time. Blank-page drafting? Chasing business teams for missing deal details? NDA volume? Compliance mapping? Pick the biggest time sink and point your AI effort there first. You will see faster returns than trying to automate everything at once.
Add a final AI sanity check to your signing workflow. Before any agreement goes out the door, run it through AI with a narrow prompt focused on logic errors, typos, and cross-reference problems. Keep the scope on mechanics, not substance. It takes minutes to set up and catches things we miss when reading under pressure.
Build at least one persona this week. Pick a stakeholder whose questions you frequently anticipate. Your CFO, your head of sales, a tough counterparty. Create a persona with their background and priorities, and run your next contract review through both your standard playbook and that persona's lens. The output will anticipate questions you would have fielded later, saving a round of back-and-forth.
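A starter persona can be short. The example below is an illustrative sketch, not language from the webinar; swap in your own stakeholder's background and priorities:

```
You are the CFO of a mid-sized managed IT services company. You care most
about payment terms, limitation of liability caps, auto-renewal, and
anything that affects revenue recognition. Review the attached draft and
list the questions you would raise before approving it, ordered by
financial impact.
```

Run your next review through both your standard playbook and this lens. As Michelle found, the layered output is better than either pass alone.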
Share key issues from incoming contracts before your full review is done. The next time a long agreement arrives with a deadline, use AI to extract the material commercial terms and send them to the business team right away. Let them start working on the business questions while you finish your review. It changes how your stakeholders experience your responsiveness and keeps you from becoming the bottleneck.
Subscribe to Stay in the Loop
These webinar recaps are one of the most popular things we publish, and for good reason. The practical insights do not come from theory. They come from lawyers doing the work. Our weekly newsletter delivers recaps like this one along with links to upcoming How to Contract events. Subscribe now so you do not miss the next one.

