Large language model AI systems are revolutionizing how businesses operate. From automating customer service to drafting contracts, AI tools promise efficiency and innovation. But this rapid adoption has a darker side: cybercriminals leverage the same technology to perpetrate sophisticated fraud schemes, exposing businesses to financial loss and legal liability. For example, the Federal Trade Commission (FTC) reported that impersonation scams, perpetrated by fraudsters posing as trusted executives or vendors, were among the top reported frauds in 2024, costing U.S. businesses and consumers a staggering $2.95 billion. Increasingly, these scams use AI-generated ‘deepfake’ audio, video and images to deceive employees into authorizing wire transfers or sharing sensitive data. Such attacks are even easier to execute in a remote or hybrid workforce, where in-person interaction is rare.
The AI Governance Gap
As businesses struggle to keep up with these growing threats, threat actors see opportunities to leverage the technology for nefarious ends. IBM’s 2025 Cost of a Data Breach Report underscores the urgency: 13 percent of the organizations surveyed by IBM reported breaches involving AI models or applications, and 97 percent of those lacked proper controls governing internal use of, and access to, AI systems. Even more concerning, 63 percent of all surveyed organizations had no AI governance policy whatsoever. The result? Higher breach costs and operational disruption. AI governance isn’t optional—it’s foundational. And it requires collaboration across the enterprise. Legal, HR, IT and the C-suite must work together to craft policies that address both the promise and peril of AI.
Regulation: Sleepy, but Not Quite Headless
With federal regulators focused on home-grown AI dominance (or unfocused because of a government shutdown), businesses can look overseas or to states tiptoeing into regulating these systems to see where AI regulation may be headed. The European Union’s AI Act, which is slowly going into effect, establishes a risk-based framework for AI systems, imposing stricter requirements on ‘high-risk’ AI applications. Colorado’s AI law, effective June 2026, adopts a similar risk-based approach, focusing on consumer protection and transparency. Other states, like California, with its recent passage of SB 53, are focusing on regulating the safety mechanisms employed by the developers of AI systems. These laws signal that more regulation is coming. To comply, companies should intentionally assess and mitigate AI-related risks. Failure to do so can expose the company to operational and reputational harm, regulatory penalties and costly litigation.
This lack of comprehensive AI regulation serves to underscore the need for internal AI governance at the organizational level. Below are five practical steps companies can take to help reduce risk and avoid liability.
Develop a Comprehensive AI Governance Policy – Define how AI tools can be used internally and by vendors. Address data privacy, intellectual property and security obligations; align your policy with emerging risk-based regulatory frameworks. Audit for unsanctioned ‘shadow AI’ use by your employees and vendors.
Strengthen Vendor Contracts – Require vendors to disclose their use of AI and implement security measures. Include indemnification clauses for AI-related breaches and mandate compliance with your governance standards.
Train Employees to Spot AI-Driven Threats – Deepfake technology can mimic voices and faces convincingly. Train staff to verify unusual requests through secondary channels and discourage reliance on email or chat for high-risk approvals.
Implement Multi-Factor Authentication and Access Controls – Many AI-related breaches occur because of weak identity management. Adopt phishing-resistant authentication methods, like device-based multi-factor authentication, and limit access to sensitive systems.
Plan for Incident Response and Litigation – Even the best defenses can fail. Maintain an updated incident response plan and rehearse it. Document your governance efforts—these records can be critical in limiting liability in litigation over who bears responsibility for a breach.
Why This Matters for Business Leaders
AI-enabled fraud is here. Regulators, investors and courts will expect companies to demonstrate proactive risk management. A robust AI governance framework protects your bottom line and positions your organization as a trustworthy partner in an era of digital uncertainty. If an incident occurs, litigation may follow, including complex questions of contract interpretation, negligence and regulatory compliance. Planning ahead for potential litigation can help you preserve evidence, manage communications and position your company for a favorable outcome.
For questions or more information on how this could affect your business, readers are encouraged to contact Reinhart Shareholder Michael Gentry or another member of Reinhart’s Artificial Intelligence Group.
Reinhart Boerner Van Deuren is a full-service, business-oriented law firm with offices in Milwaukee, Madison, Waukesha and Wausau, Wisconsin; Chicago and Rockford, Illinois; Minneapolis, Minnesota; Denver, Colorado; and Phoenix, Arizona. With more than 200 attorneys, the firm serves clients throughout the United States and internationally with a combination of legal advice, industry understanding and superior client service.
Michael J. Gentry is a shareholder in Reinhart’s Labor and Employment Practice and a member of the firm’s Artificial Intelligence Group. He regularly represents clients in litigation over technology- and workforce-related issues, including disputes involving wire fraud, computer crimes and claims under the Computer Fraud and Abuse Act. He also tailors policies and employee-facing agreements to help clients meet their employment, proprietary data, security and artificial intelligence needs.