AI now sits at the heart of how modern businesses operate. It drafts documents, designs visuals, analyses data, forecasts sales, and writes marketing copy before you’ve finished your coffee. But for all its productivity gains, it also brings a new breed of contractual risk.
When an AI tool produces something infringing, biased, or just plain wrong, the familiar questions arrive: who owns it, who’s liable, and who pays the bill when it goes public?
Traditional contracts – written long before the era of prompts and models – often can’t keep up.
Know Who the “Author” Is Before the AI Does
Under section 9(3) of the Copyright, Designs and Patents Act 1988, the “author” of a computer-generated work is the person by whom the arrangements necessary for its creation are undertaken. Logical enough, though drafted at a time when the cleverest thing a computer did was play solitaire.
In today’s landscape, “arrangements” can mean anything from prompting and training to integrating AI into workflows. That could involve internal teams, third-party developers, SaaS providers, or external partners. To stay ahead:
Spell out who owns the AI outputs and who can adapt, license, or monetise them.
Deal with moral rights early. Will they be waived, shared, or retained?
Address joint ownership when several contributors shape the input or model.
Read the fine print. Many AI platforms limit commercial use, require attribution, or change their licence terms faster than a patch update. Ignoring that can turn into a copyright claim overnight.
Consider adding a clause to address model drift – when an AI vendor retrains its model and your previously safe outputs start resembling someone else’s IP.
For businesses co-developing AI systems or datasets with partners, Joint Development Agreements (JDAs) should clarify ownership, licensing, and commercialisation rights before any code or content is created.
Managing Brand and Reputational Risks
AI can supercharge performance or damage reputations. Whether it’s a chatbot with attitude, an AI that fabricates customer data, or an automated design tool that borrows too much “inspiration”, brand damage can be swift.
Regulators have made it clear that automation does not dilute accountability. If an AI system misleads consumers, publishes false information, or mishandles data, the business deploying it remains responsible.
Contracts should therefore:
Require human review of AI outputs before publication or implementation.
Include brand-protection clauses and indemnities for reputational harm.
Set out crisis-management obligations: who acts, who apologises, who fixes it.
The Advertising Standards Authority (ASA) continues to enforce the CAP Code across industries. Misleading AI-generated materials, from social posts to automated product recommendations, can breach advertising and consumer protection law regardless of intent.
Transparency: The New Differentiator
AI-driven operations rely on vast datasets. With the UK GDPR and the Data (Use and Access) Act 2025 (DUAA) tightening controls on AI-driven data sharing and model transparency, businesses can’t hide behind “the algorithm did it.”
Articles 13–15 and 22 of the UK GDPR still apply, demanding transparency whenever personal data drives automated decision-making. The ICO’s 2025 AI Guidance underscores key principles: document your logic, minimise your data, and keep a human in the loop.
Contracts should require that:
Vendors comply with current ICO AI Guidance.
Data minimisation, pseudonymisation, and deletion protocols are implemented.
Special category data is excluded from AI profiling unless strictly necessary.
Humans review all automated outputs with material impact.
Beyond compliance, transparency is now a selling point. Customers, investors, and regulators increasingly prefer companies that can explain how their AI works and why it’s fair.
Future-Proofing AI Contracts
The biggest contracting mistake is treating the agreement as static. AI evolves monthly, not annually, and your paperwork should keep pace.
Build in:
Annual AI risk and compliance reviews.
Automatic reassessment when vendors retrain or replace models.
Internal governance connecting legal, technical, and data teams.
You wouldn’t let a financial forecast go unreviewed for a year, so don’t let your AI contracts gather dust.
The CMA’s 2024 Foundation Models Report also urges companies to monitor supplier transparency and competition risks in AI procurement. It’s another reason to keep clauses flexible and forward-looking.
The Bottom Line
AI has redrawn the commercial landscape. To keep innovation lawful, ethical, and profitable, contracts must evolve in step. The savviest businesses aren’t just using AI; they’re contracting for it – allocating ownership, defining liability, protecting data, and planning exits before the system misbehaves.
At Glaisyers ETL, our Creative, Digital & Marketing team works with businesses across sectors, from tech start-ups to global brands, to future-proof their contracts and manage AI-driven risk. We help ensure your innovation stays bold, compliant, and commercially sound.
Because in the age of AI, the most innovative strategy is still a well-drafted clause.