Redefining Confidentiality Contracts as Risk Mitigants for AI-Driven Disclosure
Abstract
This paper examines the evolving landscape of confidentiality risks arising from the widespread adoption of artificial intelligence (AI) tools in corporate environments. As organisations increasingly leverage AI, traditional confidentiality mechanisms, such as non-disclosure agreements (NDAs) and standard confidentiality provisions in substantive contracts, are proving inadequate to address the unique challenges posed by AI systems. Legal counsel therefore serve as the first line of defence when drafting or negotiating confidentiality provisions in any contract. This paper identifies four critical areas for legal teams to address in a practical manner: (1) redefining confidential information to encompass AI-specific categories of information; (2) establishing clear contractual parameters for the use, training, and storage of confidential information within AI models; (3) regulating the role of third-party AI tools as sub-processors; and (4) strengthening contractual remedies, indemnities, and exit protocols. The analysis highlights the need for a risk-based, pragmatic approach to drafting confidentiality provisions that balances operational efficiency with robust safeguards. It concludes that effective AI governance requires flexible, tailored contractual frameworks that anticipate the complexities of AI-driven data processing and mitigate the risk of unauthorised disclosure or misuse of sensitive information.





