TL;DR Laura Frederick, Kay Lee, and Carly Penner worked through three AI-drafted provisions that touch the third-party foundation model layer of AI product contracts. They covered why this layer keeps getting short shrift, how to think about feedback and user interaction data, what to do about prompt disclosure restrictions to model providers, and how to handle training data provenance and disclosure. The takeaway throughout was that customers and vendors are both at the mercy of model provider terms they cannot really negotiate, and that contract language has to do real work between the two parties because the upstream provider often will not.

The conversation that does not happen often enough in AI product contracting is the one about the third-party foundation model provider sitting behind the vendor. Most contracts focus on the customer-vendor relationship and treat the model provider as an invisible layer. That layer does not stay invisible when something goes wrong. In a recent How to Contract webinar, Laura Frederick was joined by Kay Lee, Head of Commercial at Recharge, and Carly Penner, Assistant General Counsel, Global Commercial, at Forter, an e-commerce and fraud detection company, to work through three AI-drafted provisions and pressure test what good language actually looks like. Kay took the customer perspective and Carly took the vendor perspective, though both see both sides in their day jobs and weighed in across the table.

Foundation Model Terms: Why They Are Overlooked

When we are negotiating an AI product contract, the natural focus is the relationship in front of us. We worry about how our vendor will handle our data and what their obligations look like. The foundation model provider sitting upstream is operating under its own terms of service, usually click-through. We may never see those terms, and the restrictions we impose on our vendor do not automatically flow upstream to the model provider.

Kay made the point that this is partly a deal-velocity problem. The lawyer working a contract wants to get the deal through, so the upstream layer falls off the radar. Vendors building on top of foundation models often agreed to the provider's terms long before the customer in front of them showed up, when they didn’t know what restrictions a downstream customer was going to need. The risk profile changes dramatically depending on the customer's industry. Government, financial services, and healthcare buyers carry regulatory and compliance exposure that does not stop at the vendor's edge.

Carly highlighted that vendors cannot really negotiate with the foundation model provider, so when a customer asks for IP indemnity that the vendor never got upstream, the vendor has to decide, deal by deal, whether to absorb that gap. Some vendors try to disclaim the upstream provider quietly, hoping the customer is not sophisticated enough to push. That is a short-term move with long-term costs. The better path is to treat the gap as something to quantify and price into the deal, not something to hide. At the very least, it is a risk that should be highlighted for the business team deciding if and on what terms to proceed.

That is worth pausing on. The mismatch between what a vendor commits to its customers and what its own upstream providers commit to it is not new. We have lived with that gap for a long time in subcontractor and vendor chains. What is different here is the magnitude. AI risk is not bounded the way a typical operational risk is. Output liability, IP exposure, and regulatory enforcement can all reach scales that a normal sub-vendor mismatch does not.

Big Picture Advice: Make Sure You Are Checking AI Requirements Before the Contract Reaches Legal

One preliminary issue the speakers covered was making sure that the company is making some decisions before the terms reach Legal. Carly described an AI governance policy at her company that sets a threshold for new vendors. Before anything reaches the standard vendor flow, a preliminary set of security and legal questions has to be answered, and an answer outside the policy stops the process. She made the point that this is not exotic. Any team can run a spreadsheet of preliminary questions even if they do not yet have a formal program. The point is to catch the AI red flags and resolve them before commercial counsel ends up reviewing a contract that should never have gotten that far.

Kay described the same pattern at her company. The security and privacy assessment is done as part of the data-use review, and vendors get sorted into review tiers based on how the answers come back. The benefit is that big deal-breaker issues surface before lawyers are deep in redlines. The takeaway is that the contract language we are about to discuss only works if the right contracts are reaching us. Build the front door first, even if it is just a standing set of questions: which foundation model provider sits behind the product, whether customer data will be used for training or product improvement, what the provider's terms actually permit, and whether the data involved carries regulatory exposure.

Drafting Issues With Feedback and User Interaction AI Data Provisions (Provision Review #1)

The first AI-drafted provision the panel worked through was about feedback and user interaction data. Laura framed why this category matters. Customers are not just inputting data into an AI tool. They are interacting with it, and that interaction data carries real value and potentially competitive information.

Carly opened with the definitional problem. Interaction data is hard to restrict because it is hard to define. Does it include metadata, query phrasing, anonymized data, aggregated data? These terms mean different things to different people, and a contract that uses them without nailing them down is doing very little work. On top of that, the vendor is again at the whim of the model provider's terms, which makes it tempting to stay vague rather than dig into something the vendor cannot fully control.
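To see how much work the definition has to do, here is one way it might carve the territory. This is an illustrative sketch, not language from the webinar, and every inclusion and exclusion in it is a placeholder the parties would need to negotiate:

"'User Interaction Data' means data generated by Customer's use of the Services, including Customer Prompts, query phrasing, session metadata, and usage logs, but excluding aggregated or de-identified data that cannot reasonably be linked to Customer or to any identifiable individual."

- Illustrative Sketch (Not Language From the Webinar)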

Here’s the AI-drafted provision the speakers reviewed:

"Vendor shall implement reasonable technical and administrative controls to limit Foundation Model Provider's use of Customer Data, including Feedback Data and User Interaction Data submitted through the Services, to purposes directly related to providing the Services to Customer. Vendor shall include contractual restrictions in its agreement with Foundation Model Provider that prohibit use of Customer Data for model training, benchmarking, or product improvement without Customer's prior written consent, except as may be required by Foundation Model Provider's standard terms and conditions in effect as of the Effective Date."

- AI-Generated Slop Provision (Do Not Use)

The provision Laura pulled up was the kind of thing that looks fine on a quick read. Carly's reaction was the right one. The purpose limitation, uses "directly related to providing the Services," is loose enough to swallow whatever it touches, and the clause then defers to the foundation model provider's standard terms on top of that. Kay called that standard-terms catchall a trick, because it lets the model provider's terms govern what looks like a vendor-customer commitment. She suggested having the vendor represent that the model provider's then-current terms do not permit any use outside what the contract allows, and putting a notification obligation on the vendor if upstream terms change. Ten or fifteen days' notice gives the customer time to renegotiate or walk.
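What Kay's suggestion might look like on the page, sketched here rather than taken from the webinar, with the termination mechanics layered on as an assumption alongside her ten-to-fifteen-day notice point:

"Vendor represents that Foundation Model Provider's terms in effect as of the Effective Date do not permit any use of Customer Data beyond the uses expressly permitted under this Agreement. Vendor shall notify Customer within fifteen (15) days of any change to Foundation Model Provider's terms affecting the use, retention, or disclosure of Customer Data, and Customer may terminate the affected Order Form if the change is inconsistent with this Agreement."

- Illustrative Sketch (Not Language From the Webinar)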

Another suggested edit targeted the language listing a few narrow activities the vendor would not undertake "without Customer's prior written consent." That kind of term sounds protective until you ask when anyone is ever going to grant that consent. Instead, replace the vague restriction-with-loophole structure with an affirmative list of permitted uses and a closed list of prohibited uses, leaving the gray middle to specific carve-outs.
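A bare-bones sketch of that permitted-and-prohibited structure. The specific uses listed here are placeholders, not language anyone on the panel drafted:

"Vendor may use Customer Data solely to (a) generate Outputs in response to Customer Prompts, (b) provide support requested by Customer, and (c) comply with applicable law. Vendor shall not use Customer Data to train, fine-tune, benchmark, or evaluate any model, or to develop or improve any product or service. Any use not described in this Section requires a written amendment signed by both parties."

- Illustrative Sketch (Not Language From the Webinar)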

Laura pulled out the underlying theme. When you see a list of restrictions, pair it with a list of permissions. When you see a list of permissions, pair it with a list of restrictions. AI product contracts are not the right place for an open-ended definition with narrow exclusions, because nobody knows yet how this technology will be used or constrained over the contract term. The discipline of being specific on both sides forces you to think about what you actually need rather than leaning on language that says nothing.

One more drafting note worth carrying forward. Carly flagged the verb "ensure" when it appears in a clause requiring the vendor to make a third party do something. As a vendor you cannot ensure that a foundation model provider does anything. You can take reasonable steps, you can have policies designed to drive an outcome, and you can require contractual flow-down where you have leverage. What you cannot honestly promise is the outcome itself, so the drafting should say what the vendor will actually do.
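One hedged way to recast an "ensure" obligation along those lines, offered as a sketch rather than panel-approved language:

"Vendor shall take commercially reasonable steps, including imposing contractual flow-down obligations where Vendor has the right to do so, designed to cause Foundation Model Provider to use Customer Data only as permitted under this Agreement."

- Illustrative Sketch (Not Language From the Webinar)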

Drafting Issues With Vendor Restrictions on Disclosing Customer Prompts to Foundation Model Providers (Provision Review #2)

The second AI-drafted provision was about restrictions on the vendor's disclosure of customer prompts to the foundation model provider. Every AI product built on a foundation model has to disclose customer prompts to that provider at some point. The drafting question is what restrictions you put around it.

Kay highlighted that a confidentiality clause names what is confidential and limits who can see it. A prompt disclosure restriction is different in kind, because the disclosure is the necessary evil that lets the service work. The interesting question is not whether the prompt gets to the model provider. It is what happens to the prompt after the inference is complete. Shift the analysis away from confidentiality concepts and into data flow and retention concepts. Treat it like a data processing question. Map the flow, get specificity on retention, and require updates the way you would with subprocessor changes under a privacy program.
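Translated into contract mechanics, that framing might produce something like the sketch below. The subprocessor-style notice mechanism follows Kay's analogy; the details are assumptions:

"Vendor shall provide Customer with a written description of the flow of Customer Prompts to and from each Foundation Model Provider, including what data is transmitted, where it is processed, and how long it is retained at each step. Vendor shall notify Customer of any material change to that data flow, including any change of Foundation Model Provider, in the same manner as subprocessor changes are handled under the parties' data processing terms."

- Illustrative Sketch (Not Language From the Webinar)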

Here’s the AI-drafted prompt disclosure provision the speakers reviewed:

"Vendor shall not disclose Customer Prompts or Outputs to any third party except as necessary to provide the Services, including disclosure to Foundation Model Providers engaged by Vendor in connection with the Services. Vendor shall ensure that any Foundation Model Provider receiving Customer Prompts processes such data solely for the purpose of generating Outputs in response to Customer Prompts and shall not retain Customer Prompts beyond the period reasonably necessary to complete the applicable inference request, subject to Foundation Model Provider's applicable data retention policies."

- AI-Generated Slop Provision (Do Not Use)

Carly highlighted several issues. The first sentence was structurally unclear about whether disclosure to the foundation model provider was actually permitted, which on its own is a problem in a clause whose entire job is to permit a specific disclosure. The "except as necessary to provide the Services" carveout repeated the same trick from the first provision and would let through any use the vendor chose to characterize as necessary. The clause forced the vendor to "ensure" that the foundation model provider followed a narrow purpose limitation, a commitment the vendor cannot make. And the restriction limiting use to "generating Outputs in response to Customer Prompts" is the kind of clause a vendor cannot accept because it puts the vendor in breach on day one. Foundation model providers legitimately use prompt data for debugging, abuse detection, log retention, safety reviews, and compliance. Pretending otherwise just creates breach risk on the vendor's side without giving the customer real protection.

Carly's advice was to avoid agreeing to something you are violating on day one, to be honest about what the vendor can really do, and to make the vendor's commitments enforceable rather than aspirational.

The retention piece in this clause also deserves attention. The provision deferred to the foundation model provider's applicable data retention policies, which makes the customer's protections only as strong as a third party's policy that the customer never agreed to. Kay's suggested change was to either strike that deference outright or add an affirmative clause stating that the model provider's retention policy does not limit the contract's restrictions, plus a vendor commitment not to agree to upstream terms inconsistent with what the vendor owes the customer. The contract is between the vendor and the customer, so the customer's enforcement rights have to live there, not float upstream to a provider the customer cannot reach.
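One way to put that pair of commitments on paper, again as a sketch rather than language the panel drafted:

"Foundation Model Provider's data retention policies shall not limit or modify Vendor's obligations under this Agreement. Vendor shall not enter into, or agree to any modification of, any agreement with a Foundation Model Provider that is inconsistent with Vendor's obligations to Customer under this Agreement."

- Illustrative Sketch (Not Language From the Webinar)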

Drafting Issues With Training Data Provenance Documentation and Disclosure Provisions (Provision Review #3)

The third AI-drafted provision dealt with training data provenance documentation and disclosure obligations.

Carly walked through what training data provenance is supposed to do. It tells you what data was used, where it came from, and under what rights. In theory, that lets you assess upfront whether you have personal information exposure, unlicensed copyrighted content exposure, or other initial red flags. In practice, the information is incomplete. There is no causal link between the training data input and a specific output, which matters when a customer comes back claiming an output was caused by a particular training set. The vendor cannot prove what is in the foundation model's training data because the vendor did not assemble it. The foundation model provider may have bought parts of the training data from someone else, and significant pieces of the answer get withheld under IP and trade secret protections. Provenance disclosure can help on the big-flag risks, but it does not eliminate the risk and it cannot pretend to.

Here’s the AI-drafted training data provenance provision the speakers reviewed:

"Upon Customer's reasonable written request, Vendor shall provide Customer with a general description of the categories of data used to train the Foundation Model underlying the Services, to the extent such information is reasonably available to Vendor and not subject to confidentiality restrictions imposed by Foundation Model Provider. Vendor makes no representation or warranty regarding the accuracy, completeness, or fitness for purpose of the Foundation Model's training data, and Customer's sole remedy for any claim arising from training data provenance shall be termination of the applicable Order Form."

- AI-Generated Slop Provision (Do Not Use)

The AI-drafted clause limited disclosure to a "general description of the categories of data," made it conditional on what was reasonably available and not subject to confidentiality restrictions, and made termination of the order form the customer's sole remedy. Carly's reaction as a vendor was that this is the version she would want if she had to agree to something. A sole remedy of termination is unusually vendor-friendly. She would tighten the request frequency to once every twelve months, which lines up with how audit provisions usually work, and she would broaden the carveout from confidentiality alone to confidentiality, security, and IP, plus a fallback for information the foundation model provider itself may not have.
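Carly's vendor-side tightening might read something like this sketch, with the mechanics assumed rather than quoted from the webinar:

"Customer may make such a request no more than once in any twelve (12) month period. Vendor's obligation extends only to information reasonably available to Vendor and not subject to confidentiality, security, or intellectual property restrictions imposed by Foundation Model Provider, and Vendor shall have no obligation to provide information that Foundation Model Provider has not itself disclosed or may not itself possess."

- Illustrative Sketch (Not Language From the Webinar)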

Kay's reaction as the customer was the opposite. A sole remedy of termination of the order form would eliminate damages, and this is a clause where that matters. Damages can run far broader than in a typical privacy breach because the exposure spans IP rights, regulatory compliance, and operational use of the model output. She suggested either pushing for an uncapped damages position on this issue or finding leverage in the indemnification carve-outs to keep it outside the limitation of liability. She also flagged that "general description" is a check-the-box standard. It needs to be specific enough to let the customer make a real assessment of training data exposure given the customer's regulatory profile.

Laura raised a useful point on the vendor's behalf. Sole-remedy clauses that strip out damages too aggressively can be unenforceable under US law, because contracts have to leave a reasonable remedy in place. A vendor that gets too clever with damages exclusions can find the whole limitation thrown out, which leaves the vendor without any cap at all. The drafting move Carly suggested was to add an acknowledgment that foundation model providers are not fully transparent about training data and that the parties are aligned on that limitation. That language documents the gap, which both supports enforceability and gives the parties a shared understanding to work with.
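That acknowledgment might read something like the following sketch, which tracks Carly's suggestion but was not drafted on the webinar:

"The parties acknowledge that Foundation Model Providers do not make complete information about their training data available, that Vendor's disclosure obligations under this Section are limited accordingly, and that the allocation of risk in this Agreement reflects that limitation."

- Illustrative Sketch (Not Language From the Webinar)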

Kay made the point that "to the extent such information is reasonably available to Vendor" is a vendor cushion that a customer should not accept without specifics. Reasonable to whom? The customer's redline is to put a concrete obligation on the vendor, so that simply pinging the model provider's support desk does not satisfy "reasonable." Identify the steps the vendor has to take, acknowledge the IP and confidentiality limits where they apply, and require the vendor to do real work before claiming the information is unavailable.
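A sketch of what those identified steps could look like, with the specific steps as placeholders a customer would tailor to the deal:

"Information shall not be deemed unavailable to Vendor unless Vendor has (a) submitted a written request to Foundation Model Provider identifying the information sought, (b) reviewed Foundation Model Provider's published documentation regarding its training data, and (c) provided Customer with a written summary of the steps taken and the responses received, subject in each case to the intellectual property and confidentiality limits described above."

- Illustrative Sketch (Not Language From the Webinar)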

Subscribe to Stay in the Loop

If this kind of practical AI contracting recap is useful, our weekly newsletter is built for that. Each issue points to upcoming How to Contract webinars and to writeups like this one of past sessions. Subscribe now so the next one lands in your inbox.
