TL;DR AI agents are handling more first-pass legal work, but the time saved gets absorbed by new responsibilities like prompting, oversight, and tool maintenance. The panel pushed back on the idea that faster means better and walked through governance gaps around tech competence, agent ownership, and confidentiality. Their practical advice was to start with the problem instead of the tool, treat orphan agents as a real risk worth tracking, and recognize that small teams may have an edge over enterprise legal departments.

Agentic AI is reshaping in-house legal work faster than most teams can keep up with. Three lawyers came together on this webinar to make sense of what's actually changing. Natalie Kim, VP of Legal at Omnidian, hosted the conversation. Colin Levy, GC at Malbek, and Danish Butt of Swiftwater and Company joined her. They worked through how AI agents are showing up in daily legal tasks, the new responsibilities accumulating on lawyer desks, and the harder questions about judgment, governance, and ownership that no one has fully answered yet.

How Agentic AI Is Changing Daily Legal Work

The panel opened with what's happening on the ground. AI now handles first-pass contract review, basic drafting, and quick research faster than lawyers could a few years ago. But the time saved rarely stays saved. New work fills the gap. Lawyers now write prompts, review outputs, build internal tools, and maintain agents after launch.

Colin pushed back hard on the idea that faster means better. Input quality drives output quality, and a confident-sounding answer from a lazy prompt can do real damage. Natalie flagged the rise of “AI slop,” the output that piles up when nobody slows down to read it carefully. As Colin put it, “If the output you get is fast, but not any better than anything you would have come up with on your own slower, then there is no real benefit from having used AI to make that decision or perform that task.”

Danish framed the deployment question around trust and stakes. High-trust, high-stakes work still needs a human. Lower-stakes, repeatable work fits agents well. He also pushed back on the old “department of no” mindset that some legal teams default to. The more progressive legal departments are embedding with the business and taking a risk-adjusted approach instead.

Governance, Ethics, and the Build vs. Buy Question

The second half turned to governance and ethics. The panel walked through tech competence and confidentiality duties under the rules of professional responsibility. Colin tied this directly to the duty of competence and the duty of confidentiality, both of which now require lawyers to understand what AI tools do and how they handle data.

The build vs. buy debate got real attention. Natalie pointed out that legal is starting to look more like engineering, with lawyers designing agents, codifying playbooks into systems, and managing tools through full lifecycles. Danish warned against tool FOMO. Start with the problem, not the tool. Identify what's actually broken before you evaluate whether ChatGPT, Claude, Gemini, Copilot, or a vertical solution fits the need.

The ownership question generated some of the most useful discussion. Natalie raised the concept of orphan agents, where an employee builds an agent and then leaves the company, and nobody knows who owns it anymore. The agent keeps running, sometimes poorly, with no clear maintenance plan. Natalie closed with a point worth sitting with. Solo lawyers and small teams may actually have an edge here, because they move faster than enterprise legal departments stuck in procurement.

  1. Agentic AI creates invisible work. Lawyers now spend hours prompting, checking, and reconciling outputs. Nobody documents this work or assigns ownership. As Natalie put it, “that invisible layer of labor is accumulating quietly and it's fragmenting the most important and scarce resource of the legal team,” which is lawyer attention. She also noted that almost none of this work is written down or formally owned.

  2. Trust and stakes drive the call. High-stakes work still needs a human in the loop. That includes bet-the-company matters, negotiation strategy, and anything relationship-driven. Lower-stakes, repeatable work fits agents well. Danish framed it as a sliding scale that varies by organization, and noted there is no simple answer to where the line falls.

  3. Garbage in, garbage out is real. A lazy prompt produces a confident but unreliable answer. Lawyers should treat prompt writing as a real skill and read every output carefully, even when it sounds good. Colin made the point that intention is the underlying key. Be deliberate about what tool you use, how you use it, and what output you actually want.

  4. Speed does not equal leverage. Cycle time is easy to measure, but quality matters more. Colin warned that teams that optimize purely for speed metrics often skip careful review of what the AI produced. The harder-to-measure quality metrics are usually the ones that actually drive value.

  5. Lawyers now build tools. The job no longer stops at advice. Colin made the point that a larger part of being a lawyer now involves building things, whether agents or interactive resources. Natalie shared an NDA agent she built that took her sales team's wait time from two weeks down to five minutes. The four hours she spent on a Friday afternoon designing the guardrails paid back many times over in distractions she avoided going forward.

  6. Agents need ongoing care. Natalie noted she puts in quarterly reminders to make sure her NDA agent is still doing its job. Playbooks change, models update, and agents that worked at launch can drift over time. Danish reinforced the point that AI tools are unique because they keep evolving after deployment. New model releases, outages, and feature changes all affect how an agent behaves.

  7. Tech competence is now an ethics issue. Colin pointed to the ethical duty of tech competence, which now requires lawyers to understand what tools exist and what each one actually does. He tied this to the duty of confidentiality too, since lawyers need to be mindful about what data they share with these tools and how the tools handle it.

  8. Orphan agents create real risk. Natalie raised the problem directly. An employee builds an agent, leaves the company, and the agent keeps running with no clear owner. Her team uses a legal ops person to systematize this and is building an agent list to keep track. Even a simple inventory beats nothing.

  9. Start with the problem, not the tool. Danish warned that FOMO drives bad tool decisions. Start with the problem first. Then look at whether ChatGPT, Claude, Gemini, or Copilot solves it. He compared buying the wrong tool to buying the biggest house and not knowing what to do with all the rooms.

  10. Small teams move faster. Natalie made the point that you don't need a $5,000 AI tool. A $20 Cowork subscription can get you bespoke tools tailored to your practice without going through enterprise procurement. She sees solo lawyers and small teams as best placed to build a tech stack suited to their actual needs.

What This Means for Your Team

The panel's discussion adds up to a few practical moves any in-house legal team can make this quarter.

  1. Build an agent inventory. Natalie's team is already doing this with a legal ops person systematizing the work. Even a simple list of agents, owners, and last-reviewed dates beats letting orphan agents run unsupervised.

  2. Name the invisible work. Natalie pointed out that almost none of the new AI-related work is written down or formally owned. Pick a recurring legal team meeting and dedicate 15 minutes to listing the new tasks lawyers are absorbing. Prompting, reviewing outputs, building tools, training colleagues. Once it's named, you can decide what to do about it.

  3. Get deliberate about prompt quality. Colin's point about intention applies here. Treat prompts as work product worth thinking about. A short, low-quality prompt produces a confident but useless answer, and the lawyer is still on the hook for the output.

  4. Start small with builds. Natalie's NDA agent example is the model. Pick one painful, repetitive task and spend a Friday afternoon building an agent for it. Use whatever AI platform your company already pays for. The four hours up front can save you from being distracted a thousand times later.
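The agent inventory from the first move above doesn't need special software. A spreadsheet works, but for teams that want something scriptable, here is a minimal sketch in Python. Everything in it is an illustrative assumption, not from the webinar: the agent names, owners, review dates, and the 90-day review window are all invented.

```python
from datetime import date, timedelta

# Hypothetical inventory rows: agent name, current owner, last-reviewed date.
# An owner of None marks an orphan agent with no one responsible for it.
inventory = [
    {"agent": "NDA intake", "owner": "natalie", "last_reviewed": date(2025, 1, 10)},
    {"agent": "Vendor triage", "owner": None, "last_reviewed": date(2024, 6, 1)},
]

def needs_attention(inventory, today, max_age_days=90):
    """Return agents that are orphaned or overdue for review.

    Flags a row when it has no owner, or when its last review is older
    than max_age_days (90 days here, matching a quarterly check-in).
    """
    stale_cutoff = today - timedelta(days=max_age_days)
    return [
        row["agent"]
        for row in inventory
        if row["owner"] is None or row["last_reviewed"] < stale_cutoff
    ]
```

Run against the sample rows with `needs_attention(inventory, date(2025, 3, 1))`, only the ownerless "Vendor triage" agent is flagged; as more time passes without a review, the others join the list. A quarterly calendar reminder to run a check like this is the whole governance loop Natalie described.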

Subscribe to Stay in the Loop

Our weekly newsletter includes links to upcoming webinars, plus recaps like this one of past sessions. Subscribe now to get the insights even if you didn't attend.
