TL;DR Laura Frederick and Laurie Ehrlich worked through why AI data use is harder to negotiate than traditional SaaS, the reverse engineering and anti-competitive problems created by everyone converging on the same models, why suspension rights and audit rights need rethinking when prompts contain trade secrets, and how to draw lines around customer data, usage data, and outputs before vendors quietly absorb them all into their training sets.
AI data use provisions sit at the center of almost every commercial AI contract right now, and most lawyers are still figuring out how to negotiate them. That was the focus of a How to Contract webinar hosted by Laura Frederick and featuring Laurie Ehrlich, a commercial contracts attorney whose career has run from big law and litigation to running commercial contracts at Datadog, to AI legal tech startup Dioptra, which Icertis acquired in late 2025. The conversation worked through three provisions that come up in nearly every AI deal and where the standard language quietly fails the customer, the vendor, or both.
Why AI Data Use Is Harder Than Traditional SaaS
Two forces make AI contracts different from anything we have negotiated before. The first is speed. Outsourcing contracts and SaaS deals had failure modes you could see coming, and a ten-year contract just needed flexibility to adjust. AI is not that. The product can be different tomorrow than it is today, and we are headed toward products that change within an hour. The second force is fear. Laurie said it directly: we do not know the risks of today, we do not know the risks of the future, we do not know what the product will be in the future, and we do not know what our own needs will be. That is hard to negotiate against.
Anyone who has ever felt the pull of the bar oath knows what that combination produces. We are trained to protect clients and serve them, and when we cannot do that with confidence, the anxiety is real. The point is worth making out loud because it explains why so many AI provisions get accepted in their default form. Lawyers freeze, the deal moves, and the language nobody fully understood yesterday becomes the contract that governs tomorrow.
The Anthropic and OpenAI Department of Defense contracts make the broader point. Anthropic publicly carved out two prohibited uses, mass surveillance and autonomous weapons that can kill people, even where the underlying use would be legal. OpenAI signed with similar language. We do not have the full text of either contract, but Laurie made an observation worth holding onto. The headline disagreement may not have been about the limitation itself. It may have been about how to police the limitation. That distinction shows up everywhere in AI contracts, and it is the through line for the three provisions below.
Strategy #1 – Track the Hyperlinked Terms Like a Moving Target
Almost every AI contract incorporates the vendor's terms of use or acceptable use policy by hyperlink, and almost none of them give the customer notice when those terms change. This is not a new problem. It is the same SaaS issue we have lived with for years. The difference is that AI products are iterating so fast that the hyperlinked terms may genuinely look different in three months than they do today, and the vendor needs that flexibility to keep up with what the product actually does.
Laurie made the point that the speed of product development is the reason these hyperlink incorporations exist in the first place. Vendors cannot keep amending the contract every time something ships. So the pull is real on both sides. The customer wants stable obligations. The vendor needs to iterate.
The practical move is to refuse to be bound by changes you do not see. If the vendor wants to incorporate the AUP by reference, push for notice every time it changes. Without that, you can be in violation of obligations you did not know existed, and the suspension and termination triggers downstream of those obligations become much scarier. Refusing to track a moving target you cannot see is not unreasonable. It is table stakes.
Strategy #2 – Pressure-Test What "Competitive" and "Benchmarking" Actually Mean
The sample AI provision Laura put up on screen prohibited any use of the services for "competitive purposes," including benchmarking and competitive analysis, whether internal or with a third party, without prior written consent. That language is everywhere. It sounds reasonable until you sit on the customer side and realize what it forbids.
Customers run RFPs. They run POCs. They run casual demos of competing tools. They share notes with peer general counsel networks about what worked and what did not. Laurie pointed out that all of those normal procurement behaviors are arguably benchmarking. A blanket prohibition turns ordinary diligence into a contract breach.
The customer move here is to limit the prohibition to public benchmarking and comparative analysis. Internal evaluation needs to be carved out. The peer networks where in-house counsel actually share intelligence about products need to keep functioning. The real risk for the vendor is not any individual customer running a side-by-side comparison. The real risk, as Laurie noted, is the procurement-side analysts and the Gartner-type firms that build their businesses on comparative data. That is what the clause should target.
The harder problem is the word "competitive" itself. Every AI vendor is converging on agentic workflows, agentic chat, and document plug-ins. Anthropic's Word plug-in competes directly with what every AI CLM vendor offers. Vendors are intentionally entering each other's markets. So a customer who was clearly not competitive at signing may become arguably competitive a year in, simply because the vendor moved into the customer's space. The customer needs language that ties "competitive" to the customer's actual products at the time of signing, not a moving definition controlled by where the vendor decides to expand.
The gotcha problem is also worth flagging. Most violations of these clauses will never be discovered. The risk shows up in litigation, where the other side starts pulling logs in discovery and uses any benchmarking or comparison as additional breadcrumbs to support a bigger claim. That is exactly the scenario the customer should be drafting against.
Strategy #3 – Build Notice, Cure, and Realistic Audit Rights into Suspension
Suspension is the single scariest remedy in any AI commercial contract. If your business depends on a product hour to hour, a suspension is just as damaging as a termination. The default vendor language is brutal. The sample provision Laura read out gave the vendor the right to suspend without prior written notice and to terminate immediately upon written notice, with no liability to the customer for any losses. That is the customer signing up to lose its product on the vendor's reasonable determination, with no chance to fix anything.
The balanced version of this provision has three pieces. First, notice. The vendor tells the customer what behavior is the problem and gives the customer a chance to stop it. Second, suspension only if the issue is not resolved. Third, a narrow carve-out for genuine emergencies, which Laurie defined as situations where not acting immediately would put one or both parties at significant immediate risk, like an unauthorized user logged into the system. Both sides actually want that user stopped. Even where emergency suspension applies, the customer needs the right to investigate, cooperate, and unwind the suspension once the issue is resolved.
The audit problem is where AI contracts diverge from SaaS. With a traditional SaaS audit, the vendor checked usage to confirm license compliance and that was the end of it. With AI tools embedded in computers and taking action on sensitive systems, an audit means the vendor potentially seeing prompts that contain confidential information, PII, or trade secrets. Laurie's point about the Department of Defense was that the agency would never let an LLM vendor walk into top secret applications to verify compliance. That same logic applies to any customer running AI on sensitive data. The vendor's audit right needs to be scoped so the vendor can verify what it needs to verify without seeing what it must not see.
There is also a hard truth for smaller customers. A hundred-person company negotiating against a major AI vendor is probably not going to win significant changes to suspension language. Laura's framing on that is the right one. When you cannot get the language you want, your job shifts. Counsel the team on the residual risk. Build operational backstops. Make sure you have a backup platform. Set up processes to download prompts and other data on a regular cadence. The job of contract counsel is not just to revise words. It is to help clients understand the risk that survives whatever language you actually got.
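The prompt-download backstop can be made concrete. A minimal sketch follows, assuming a hypothetical export call (real vendors expose prompt history differently, via an API export or an admin console download), that archives a dated snapshot so the data survives a sudden suspension:

```python
import datetime
import json
import pathlib


def fetch_prompt_log():
    """Hypothetical stand-in for a vendor export call. The real
    mechanism varies by vendor; this placeholder just returns
    records in a generic shape."""
    return [{"user": "a@example.com", "prompt": "summarize the MSA"}]


def snapshot_prompts(out_dir, fetch=fetch_prompt_log):
    """Write a UTC-timestamped JSONL snapshot of prompt history.
    Run this on a regular cadence (cron, scheduled CI job) so a
    suspension does not cut off access to your own data."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = out / f"prompts-{stamp}.jsonl"
    with path.open("w", encoding="utf-8") as f:
        for record in fetch():
            f.write(json.dumps(record) + "\n")
    return path
```

The design point is the cadence, not the code: whatever the export mechanism, the archive has to exist before the suspension happens, because afterward the vendor controls access.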
Strategy #4 – Define Customer Data Broadly Before Conceding on Usage Data
The data provisions in AI contracts are where the most negotiation happens and where the most risk hides. The traditional SaaS framework split data into three buckets. Vendor data, which the vendor brought to the relationship. Customer data, which the customer uploaded or stored. And usage data, which described how the customer interacted with the product. Those buckets used to be clean. Click trails, user counts, error logs. They are not clean anymore.
The reason is the prompt. A prompt to an AI tool is an input the customer provided, which sounds like customer data. It is also a record of what the customer did with the product, which sounds like usage data. The same string of text falls into both buckets at once. That is a real definitional problem and not just a drafting one. Vendors legitimately need to see what failed. If a tool is supposed to output a table when asked, and it returns no table, the vendor cannot debug without knowing what the prompt was. But the customer's prompt may contain trade secrets, strategic positioning, confidential information, or PII. The customer cannot just hand that over.
The practical move is to fight the definition battle first. Before agreeing to anything about what the vendor can do with usage data, get customer data defined as broadly as possible. Inputs, outputs, anything stored, anything generated through the customer's use of the service. If the prompt is captured inside customer data, the usage data bucket becomes much narrower and the downstream rights the vendor takes against usage data become much less dangerous. If you concede a narrow customer data definition first, you have already lost the negotiation on usage data before it starts.
The vendor side of this is real too. Vendors need usage data to operate the product. They need it to fix broken features, monitor performance, and improve the service. The win condition is not denying that. It is making sure that the things the customer cares about, the prompts that contain its IP, never leave the customer data bucket and never become fuel for purposes the customer did not agree to.
The output question deserves a separate note. Most vendors start from "the output is yours" and that is the right starting point for the customer. But the customer does not own generic concepts that happen to appear in an output. A coding tool's output may contain general coding practices the customer cannot claim to own. A drafting tool's output may contain provisions like "no termination for convenience" that nobody owns. Laurie's framing was helpful. The customer owns the specific output for its specific use, not the underlying generic concepts. That is worth saying explicitly in the contract so the ownership claim survives the inevitable argument later.
Strategy #5 – Do Not Concede Aggregation and De-Identification Without Hard Limits
The second sentence of the sample provision was the bigger problem. It let the vendor use customer data and outputs in aggregated or de-identified form for any lawful purpose, including product development and training of vendor models. That is the sentence that should make the customer-side lawyer pause.
The fundamental issue is that you cannot really aggregate and de-identify prompt and output data the way you can aggregate clickstream data. LLMs have repeatedly produced outputs that closely resemble their training inputs. If the technology cannot reliably anonymize the data at the model level, the contract clause that promises anonymization is writing a check that cannot be cashed. As Laurie said, this kind of language is enough on its own to make her want to walk away from a vendor.
There are narrower versions that can work. If the data is genuinely homogenous, like resumes flowing through a sorting tool, there are ways to extract specific facts without using the underlying records to train a model. A market risk tool that reports how often a particular provision shows up in deals as a percentage, without exposing any actual contract language, is doing something different from training. That is a query against a data set, not absorption of the data set into a model. The contract should distinguish between the two.
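The difference between querying a data set and absorbing it can be shown in miniature. A sketch, with hypothetical contract text, of the market-risk-tool pattern: only the aggregate statistic crosses the function boundary, and the raw records never feed anything downstream:

```python
import re


def provision_frequency(contracts, pattern):
    """Return how often a provision appears across a deal set, as a
    percentage. The raw contract text stays inside this function;
    only the aggregate number is exposed, and nothing here feeds a
    training pipeline."""
    if not contracts:
        return 0.0
    hits = sum(1 for text in contracts if re.search(pattern, text, re.IGNORECASE))
    return round(100.0 * hits / len(contracts), 1)


# Hypothetical deal set for illustration only.
deals = [
    "... no termination for convenience ...",
    "... termination for convenience permitted on 30 days notice ...",
    "... standard indemnification, no unusual provisions ...",
]
rate = provision_frequency(deals, r"termination for convenience")
```

A training pipeline, by contrast, would ingest the strings in `deals` themselves, which is exactly the absorption the contract language should name and restrict separately.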
The customer move here is to refuse generic "aggregated or de-identified" language for any purpose. If the vendor wants to train, that needs to be its own provision, with its own limits, its own opt-out, and its own consequences if the training accidentally surfaces customer data. If the vendor only needs to query the data set to derive specific aggregate signals, the contract should say that and only that.
The downstream point matters too. As a vendor of AI tools to your own customers, the data use provisions in your upstream contracts directly determine what you can promise your customers. Laurie made this concrete. Without a zero data retention agreement with the model provider, her business could not operate, because she could not in turn make the commitments her own downstream customers required. The data use provision is not just about your risk. It is about the risk you are taking on for everyone you serve.
Subscribe to Stay in the Loop
Whether you joined this webinar live or are catching up after the fact, our weekly newsletter is the easiest way to stay current on upcoming How to Contract events and recaps like this one. Subscribe now so the practical takeaways land in your inbox even when you cannot attend live.