This article is written by Ishika Jangir, a first-year LL.M. student at Amity University, Kolkata.

Consensus ad idem, or a meeting of the minds between two or more legally competent individuals or corporations, is the foundation of traditional contract law. This model is disrupted by AI systems that use machine learning (ML) and natural language processing (NLP) to automate the contracting process, frequently with little to no human involvement.
The application of AI spans the entire contract lifecycle:
• Pre-Formation (Drafting & Negotiation): AI systems, such as those employed by large financial institutions, examine previous contracts to draft new clauses, evaluate risk, and even negotiate terms independently within predetermined bounds.
• Creation (Smart Contracts): Blockchain-based Smart Contracts are self-executing programs that automate the fulfilment of contractual duties (e.g., releasing payment upon delivery verification; see the sketch after this list).
• Post-Formation (Management & Enforcement): AI tools track important dates, monitor compliance, and identify possible violations by examining performance data and regulatory changes.
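To make the "self-executing" idea in the second point concrete, below is a minimal Python sketch that simulates the escrow logic a blockchain smart contract might encode. It is an illustration only: real smart contracts are typically written in an on-chain language such as Solidity, and every class, method, and party name here is hypothetical.

```python
# Minimal simulation of a smart contract's escrow logic (illustrative only).
# Real smart contracts run on a blockchain; this sketch only mirrors the
# "self-executing" rule: payment is released automatically once delivery
# is verified, with no further human intermediary.

class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: float):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.funded = False
        self.delivered = False
        self.paid_out = False

    def deposit(self) -> None:
        """The buyer locks the payment into escrow when the contract is formed."""
        self.funded = True
        self._try_release()

    def confirm_delivery(self, oracle_says_delivered: bool) -> None:
        """An external data feed (an 'oracle') attests that the goods arrived."""
        self.delivered = oracle_says_delivered
        self._try_release()

    def _try_release(self) -> None:
        # The coded terms execute themselves: once both conditions hold,
        # funds move to the seller without any further human assent.
        if self.funded and self.delivered and not self.paid_out:
            self.paid_out = True
            print(f"Released {self.amount} to {self.seller}")


contract = EscrowContract(buyer="Acme Ltd", seller="Widgets Inc", amount=5000.0)
contract.deposit()
contract.confirm_delivery(oracle_says_delivered=True)  # triggers automatic payout
```

The legal point the sketch captures is that performance is a deterministic consequence of coded conditions, which is precisely why remedies such as rectification become difficult once a mistaken term has been deployed.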
This automation yields immense benefits—increased speed, reduced cost, and enhanced accuracy in complex document review—but forces a critical re-evaluation of centuries-old legal doctrines.
II. The Challenge to Contract Formation Fundamentals
The core elements of a valid contract (offer, acceptance, consideration, and intention to create legal relations) are complicated when one party is an algorithm.
A. The Requirement of Intention and Capacity
The element of intention presents the biggest obstacle. Can an AI without consciousness or free will genuinely “intend” to create legal relations? Globally, the current judicial consensus tends to view AI only as a tool or conduit that represents a human principal.
• The Principal-Agent Framework: The person or organisation that programmed the AI or established its operating parameters is responsible for its actions. The AI is merely an instrument of expression; legal capacity remains with the human principal. This is in line with current legislation on automated transactions and electronic contracts.
• The Autonomous AI Problem: More sophisticated Generative AI (GAI), or a future Artificial General Intelligence (AGI) that can “learn” and make decisions beyond its original programming, presents a greater challenge. Does the human principal still have the necessary intention when an AI independently changes a contract term? Under current law, human oversight and final human assent remain necessary for a contract to be legally sound.
B. Offer and Acceptance in Automated Transactions
AI systems can generate offers and acceptances at a speed no human could match, but the underlying concepts are not new. Courts have long sanctioned automated contracting: vending machines, for example, were held to make standing offers that a customer accepts by using the machine, so that the contract forms without any contemporaneous human act on the machine owner’s side.
Courts will likely examine these automated contracts by asking whether a human being can truly understand the terms and, therefore, whether the document constitutes a genuine agreement. The doctrine of unconscionability is relevant here. If an AI system exploits a large informational advantage to produce terms so exceedingly advantageous to the party it represents that they are unfair, a court can intervene. The principles on unconscionability laid down in Central Inland Water Transport Corp. v. Brojo Nath Ganguly (1986) will likely serve as a starting framework for analysing AI-drafted contracts where algorithmic bias has influenced the outcome or where there is a gross disparity in information.
III. Case Laws and Emerging Legal Precedents
Because AI is still a nascent area, dedicated contract law precedents are only beginning to emerge, though several existing cases offer analogous guidance.
A. AI-Generated Content and Veracity (Hallucination Risk)
One of the most immediate practical risks of GAI is the generation of false or “hallucinated” information.
- Although Mata v. Avianca, Inc. (2023, US District Court, S.D.N.Y.) is not a contract case, it is the most frequently cited example of AI misuse in the court system. A lawyer filed a brief containing citations to cases fabricated by ChatGPT and faced disciplinary action; the court found that he had violated his ethical duty by failing to verify the accuracy of the machine-generated citations. The ruling underscores that human verification of AI-generated content is part of the duty to properly represent one’s client, a principle that applies with equal force to AI-drafted contracts.
B. Validity of AI-Drafted Contracts
In an important development in 2025, the Karnataka High Court (India) reportedly ruled on the enforceability of a contract drafted by an AI platform in a dispute between a software vendor and a fintech startup.
- The court reportedly opined that the intent of the human contracting parties takes precedence over the method by which the agreement was created. Because the human parties had agreed to and understood the terms of the contract, it was held valid and enforceable under the essential elements of the Indian Contract Act, 1872. The ruling reflects the principle that AI serves as a drafting assistant, not a separate legal entity with which contracting parties form their agreements.
IV. The Critical Issue of Liability for Errors
When an AI-generated contract contains a flaw, whether a missed regulatory requirement, a biased term, or a factual error, determining who is responsible is the most critical challenge for lawmakers.
Liability can potentially fall on three parties:
- The End-User/Principal: The person or business that instructed the AI and signed the final contract. This party usually bears the risk under current agency law, particularly if it neglected its duty of human oversight and review.
- The AI Developer/Integrator: The company that developed or provided the AI software. If the software did not function as intended, it might be held accountable for breach of warranty; if the design was seriously flawed or the training data was intentionally biased, it might face a negligence claim in tort.
- The AI Itself (Future Consideration): Some scholars suggest giving sophisticated AI “electronic personhood,” which may entail liability. However, since an AI lacks assets with which to satisfy damages, such liability would be largely symbolic, and the idea remains theoretical.
The current trend is towards fault-based methods that place emphasis on the people behind an incident: whoever had the opportunity or the control to prevent the error will be held liable. Did the appropriate party fail to review a legal clause created by AI? Or did the AI developer fail to constrain the system to operate within legal bounds? Future regulatory frameworks, such as the EU AI Act, take a risk-based approach and will impose higher liability standards on those operating “high-risk” artificial intelligence systems.
V. Conclusion: Embracing the Future with Guardrails
As AI enters the field of contract law, it represents an evolution, not a revolution. AI can transform contract law by providing new capabilities, but this transformation will only succeed if AI is used within human-established legal principles.
To guard against automation eroding the fundamental principles of fairness, intent, and accountability, the legal profession must focus on developing algorithmic literacy.
The path forward requires:
- Mandatory Human-in-the-Loop: Explicit legal requirements for human evaluation and final approval of all significant contract terms drafted by AI (a rough sketch follows this list).
- Transparency and Explainability: Contractual provisions requiring AI suppliers to provide a degree of explainability (XAI) for how their system reached a crucial conclusion or generated a particular clause.
- Adaptation of Legal Concepts: Jurisdictions must formally address how current doctrines such as mistake, fraud, and unconscionability apply to AI-mediated agreements.
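As a rough illustration of the first point above, the sketch below shows one way a contract pipeline could refuse to finalise AI-drafted terms until a named human reviewer has approved them. The data structure and function names are hypothetical, not drawn from any real system.

```python
# Hypothetical "human-in-the-loop" gate for AI-drafted contract terms.
# DraftClause and require_human_approval are illustrative names only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftClause:
    text: str
    ai_generated: bool
    approved_by: Optional[str] = None  # name of the human reviewer, if any

def require_human_approval(clauses: list[DraftClause]) -> None:
    """Block finalisation while any AI-drafted clause lacks recorded human assent."""
    unapproved = [c for c in clauses if c.ai_generated and c.approved_by is None]
    if unapproved:
        raise PermissionError(
            f"{len(unapproved)} AI-drafted clause(s) await human review; "
            "the contract cannot be finalised."
        )

clauses = [
    DraftClause("Payment is due within 30 days of invoice.", ai_generated=True),
    DraftClause("Governing law: India.", ai_generated=False),
]
clauses[0].approved_by = "A. Reviewer"  # a human signs off on the AI draft
require_human_approval(clauses)         # passes only once every AI clause is approved
```

The design point is that approval is recorded as data, producing the audit trail of final human assent that the doctrines discussed in Section II presuppose.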
By integrating AI responsibly, ensuring it acts as a sophisticated tool for efficient execution rather than a replacement for human judgment and ethical oversight, contract law can successfully navigate the algorithmic age.
FAQs:
- Who is liable if an AI system breaches a contract?
Ans: Liability typically falls on the human or company that employed the AI. If the breach resulted from a flaw in the AI software itself (e.g., a critical miscalculation), the end-user may have a breach of contract claim against the AI developer/vendor, depending on the warranty and liability clauses in their service agreement.
- What is an AI “hallucination” in contract law?
Ans: An AI hallucination is when a Generative AI tool produces fabricated legal facts or citations, or drafts a clause that is completely non-existent, irrelevant, or illegal under the governing law. The risk is that a human may unknowingly include this false content in a final, signed contract.
- Are Smart Contracts legal in the traditional sense?
Ans: Smart Contracts are considered legally valid agreements where the code constitutes the automatic execution of the terms agreed upon by the human parties. Their enforceability hinges on the human agreement to the underlying code/terms, but legal challenges remain regarding remedies like rectification if the code contains a mistake.
References:
- Transformative Impact of AI on Contract Law: A Comprehensive Study
- The Indian Contract Act, 1872
- Contract Law and Artificial Intelligence (AI), Lexibal


