
AI Disclosure Risk: What Public Companies Must Know

How statements about artificial intelligence in SEC filings and investor communications can trigger regulatory scrutiny, securities litigation, and D&O exposure.
Capital Markets Mar 19, 2026


Executive Summary

AI disclosure risk has become a material governance issue for public companies. As companies race to signal AI adoption to investors, regulators and plaintiffs’ attorneys are scrutinizing whether those claims hold up. The SEC has made AI-related disclosures an examination priority, and securities class actions citing AI misstatements grew sharply in 2024. Meanwhile, D&O insurers are starting to ask harder questions about how boards oversee AI strategy.

The core problem is a credibility gap: companies often talk about AI in broad, optimistic terms, while the underlying reality is more limited. When that gap becomes public, the legal and reputational fallout can be significant. This article examines why that gap exists, what happens when it closes, and how boards and executives can get ahead of it.


Why Companies Talk About AI — And Why It Creates Risk

Investor enthusiasm for AI has been extraordinary. Announcements tied to AI adoption have driven sharp stock price moves, and analysts routinely ask about AI strategy on earnings calls. That attention creates pressure: boards and executives feel compelled to demonstrate they are not falling behind on what many consider the most consequential technology shift in a generation.

As a result, AI references in 10-Ks, proxy statements, and investor presentations have proliferated. Companies describe AI as central to their operations, their competitive positioning, and their growth outlook. Some of those descriptions are well-grounded. Others are aspirational at best, misleading at worst.

Regulators noticed. Former SEC Chair Gary Gensler, who left office in January 2025, repeatedly warned that companies must be honest about AI's actual role and not exaggerate its capabilities. His speeches drew a direct comparison to "greenwashing" in the ESG context, popularizing the term "AI washing" for companies that overstate their AI use. The SEC under subsequent leadership has maintained that existing disclosure principles are sufficient to address AI claims: no safe harbor, and no new carve-outs.


Regulatory Scrutiny: What the SEC Is Watching

The SEC’s Division of Corporation Finance has been reviewing AI-related disclosures through its standard comment letter process. Companies have received comments questioning vague AI references, asking for specifics on how AI is actually used, and pushing back on forward-looking AI claims that lack adequate risk disclosure.

Under existing rules, companies must disclose material business developments, risks, and management discussion items that could affect financial performance. AI initiatives can fall into all three categories. If a company promotes its AI capabilities as a revenue driver but fails to disclose that the technology is still in development, or that it relies on third-party tools rather than proprietary systems, regulators may view the omission as material.

The SEC has also signaled that it views AI disclosure requirements as consistent with the existing principles-based framework — not a gap requiring new rulemaking. That means the same standards that apply to any material business claim apply to AI: claims must be accurate, supported, and balanced with appropriate risk disclosure. Vague or boilerplate AI language is unlikely to satisfy that standard.


Key Point

The SEC has declined to create AI-specific disclosure rules, which means existing fraud and omission standards apply in full. There is no “AI safe harbor.”


Securities Litigation: When AI Claims Lead to Lawsuits

Securities class actions involving AI misstatements have increased materially. One industry analysis tracked 53 AI-related class actions filed between early 2020 and mid-2025, making it the fastest-growing category of event-driven securities litigation during that period. The pattern is consistent: a company makes bold AI claims, valuation rises, reality disappoints, and plaintiffs allege fraud.

Typical allegations include overstating the maturity or capability of an AI product, concealing the extent to which human labor substitutes for claimed automation, and making material omissions about competitive or operational limitations. Courts have shown willingness to find these claims actionable. In one case, a CEO’s public statements about AI capabilities were found to be misleading because they materially overstated the technology’s actual performance.

The damages potential in these cases is amplified by the market dynamics of AI hype. If a company’s stock price rises significantly on AI announcements, and then falls sharply when the reality is revealed, the alleged damages figure can be large — making these cases attractive to plaintiffs’ firms and harder to dismiss early.
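The scale dynamic described above can be sketched with back-of-the-envelope arithmetic. The function and all figures below are hypothetical illustrations, not a legal damages model; real securities damages analyses rely on event studies and trading models, not a simple price-drop multiplication:

```python
# Rough illustration of why AI stock-drop cases can carry large damages claims.
# Hypothetical sketch only: actual damages methodology (event studies,
# loss causation analysis) is far more involved than this arithmetic.

def rough_stock_drop_damages(price_before: float, price_after: float,
                             damaged_shares: int) -> float:
    """Price decline attributed to the corrective disclosure,
    multiplied by the number of allegedly damaged shares."""
    per_share_inflation = price_before - price_after
    return per_share_inflation * damaged_shares

# Hypothetical: stock runs to $40 on AI announcements, falls to $25 after
# a corrective disclosure; 50M shares allegedly damaged in the class period.
print(rough_stock_drop_damages(40.0, 25.0, 50_000_000))  # 750000000.0
```

Even with modest per-share inflation, a large float produces a nine-figure headline damages number, which is part of what makes these cases attractive to plaintiffs' firms.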


Real-World Cases

The following cases illustrate how AI disclosure risk materializes in practice.


Presto Automation (SEC Settlement, 2025)

Presto, a restaurant technology company, marketed its drive-through ordering system as AI-powered. The SEC found that the product relied heavily on a third-party speech recognition tool and that most orders required human intervention to complete. Presto settled the enforcement action and agreed to improve its disclosures. Notably, the SEC cited the company's cooperation and remedial efforts in declining to impose a civil penalty; cooperation mitigated the outcome, but it did not prevent a public enforcement action.


Joonko (SEC and DOJ, 2024)

The founder of Joonko, a diversity-hiring platform, was charged by both the SEC (civil) and the DOJ (criminal) after allegedly misrepresenting the company’s AI capabilities to investors. The platform was marketed as AI-driven, but much of the work was performed manually. The case underscores that AI washing is not just a disclosure compliance issue — it can cross into criminal fraud territory for individuals.


Nate, Inc. (DOJ Criminal Charges, 2025)

The former CEO of Nate, an e-commerce app, was criminally charged with wire fraud after marketing the product as powered by AI automation. Federal prosecutors alleged the app’s actual automation rate was effectively zero — offshore workers processed transactions that were presented to investors as AI-handled. The case is notable for the severity of the charges and the directness of the alleged fraud.


Broader Pattern

Companies across healthcare, security technology, and enterprise software have faced civil suits making similar allegations: that AI-related investor statements were materially misleading, and that stock price declines following corrective disclosures caused investor losses. In several cases, short-seller reports served as the corrective disclosure that triggered litigation.


Board and Management Oversight

Boards are increasingly aware that AI strategy carries disclosure risk, but formal governance structures have not kept pace with the rhetoric. An ISS Governance analysis found that only around 8 to 9 percent of U.S. public companies disclose any board-level AI oversight framework, a significant gap given how prominently AI features in investor communications.

Among larger companies, the picture is somewhat better. Harvard Law School’s corporate governance survey found that nearly half of leading companies now reference AI specifically in board risk oversight discussions. Disclosure of director AI experience also increased sharply — from 26 percent to 44 percent of companies in one year, per the same survey. Committee-level AI oversight has been assigned in roughly 40 percent of surveyed firms, often to audit or technology committees.

The gap between these numbers and the broader public company universe reflects a governance problem. Companies that talk about AI in investor communications but lack internal review processes for those statements are exposed on multiple fronts: regulatory, litigation, and insurance.

Boards should consider whether AI strategy and related disclosures are subject to adequate review by management and legal counsel before they reach investors. That means assigning clear responsibility, building in review processes for AI-related statements in filings and earnings materials, and making sure directors have sufficient context to ask meaningful questions.


Practical Disclosure Guidance

How companies describe AI to investors matters. The following practices reflect both regulatory expectations and the lessons of recent enforcement actions.


Be specific about what "AI" means in your business. Generic references to AI are not meaningful disclosure. Describe what the technology does, where it is deployed, and at what stage of maturity. A product in beta testing is not an operational AI system.

Support claims with evidence. Statements about AI capabilities should be grounded in testing, data, or third-party validation. If regulators or plaintiffs ask what the basis was for a claim, the answer needs to exist.

Disclose limitations alongside capabilities. Risk factors should be tailored to your actual AI initiatives, not boilerplate. If performance depends on data quality, vendor relationships, or regulatory approvals, say so.

Maintain consistency across channels. Discrepancies between what executives say on earnings calls and what appears in SEC filings are red flags for both regulators and plaintiffs.

Update disclosures as facts change. If an AI initiative fails a pilot, a vendor relationship changes, or regulatory requirements shift, evaluate whether updated disclosure is warranted. Proactive transparency is almost always better than reactive correction.

Involve legal and technical review. AI disclosures should not be drafted by marketing teams alone. Legal counsel and, where appropriate, technical advisors should review AI-related statements before they reach investors.


The D&O Insurance Perspective

Standard D&O policies cover claims arising from alleged misstatements in public disclosures, which means AI-related securities suits and SEC investigations generally trigger coverage — subject to policy terms, exclusions, and retention. Defense costs for early-stage AI litigation have, in many cases, been covered without major coverage disputes.

That said, the underwriting environment is shifting. Companies with significant AI exposure, particularly those in industries where AI claims drive valuation, may face more detailed application questions and closer scrutiny at renewal. Underwriters are asking how AI is developed and deployed, how disclosures are reviewed internally, and what board-level oversight exists. These are not theoretical questions — they reflect the same governance gaps that regulators and plaintiffs are focused on.

There is also a systemic risk dimension that insurers are monitoring. If AI hype becomes broadly embedded in a sector and then corrects sharply, multiple policyholders could face suits simultaneously. That kind of correlated loss scenario is a concern for carriers writing significant AI-exposed D&O books.

The practical implication for companies is straightforward: the same governance and disclosure discipline that reduces regulatory and litigation risk also supports favorable D&O underwriting. Documented internal review processes, clear board oversight, and accurate filings are not just compliance measures — they are risk management tools that affect insurance outcomes.


What to Watch Going Forward

The regulatory landscape around AI disclosure is unsettled. The SEC’s Investor Advisory Committee has recommended that the agency issue formal guidance requiring companies to define AI, describe board oversight, and explain how AI affects operations. The Commission has so far declined to issue AI-specific rules, maintaining that existing principles apply. That position could change, and companies should monitor SEC speeches, interpretive releases, and comment letter trends for signals.

At the state level, Colorado’s AI Act (SB 205, signed 2024) addresses algorithmic accountability in certain contexts. California’s legislative activity on AI has been more fragmented — Governor Newsom vetoed SB 1047 in 2024, though other AI-related bills have passed. These state-level developments may eventually shape what companies must disclose about AI systems, particularly in regulated industries.

Internationally, the EU AI Act entered into force in August 2024, with phased implementation through 2027. U.S. companies with EU operations or customers will face disclosure and compliance obligations under that framework that may need to be reflected in SEC filings.

The trajectory is clear: AI disclosure expectations will become more specific, not less. Companies that build rigorous internal review processes now — rather than waiting for mandated standards — will be better positioned when those standards arrive.


FAQ

What is AI disclosure risk, and why does it matter for public companies?

AI disclosure risk is the legal and regulatory exposure that results from making inaccurate or unsupported statements about artificial intelligence in SEC filings, earnings calls, or investor materials. Because AI claims can move stock prices, misstatements in this area attract both SEC scrutiny and securities class actions. The risk is not hypothetical — enforcement actions and litigation are already happening.


How are regulators responding to AI claims?

The SEC has made AI disclosures an examination priority and has brought enforcement actions against companies and individuals for AI washing — falsely claiming AI capabilities that did not exist or were materially overstated. The DOJ has also pursued criminal charges in cases involving fraudulent AI representations to investors. Both agencies treat AI claims under the same legal standards as any other material business statement.


What does responsible AI disclosure look like in practice?

Responsible AI disclosure is specific, supported, and balanced. It describes what the technology actually does today, not what it might do in the future. It acknowledges limitations alongside capabilities and is consistent across all investor communications. It is reviewed by legal counsel before publication and updated when material facts change.


Does D&O insurance cover AI-related claims?

Generally, yes — D&O policies cover claims arising from alleged misstatements in public disclosures, including AI-related statements. Defense costs for AI litigation have largely been covered under existing policies. However, underwriters are increasingly scrutinizing AI exposure at renewal, and companies with weak governance or a history of aggressive AI claims may face higher retentions or premiums. Strong internal disclosure controls support both legal compliance and favorable insurance outcomes.


Conclusion

AI disclosure risk sits at the intersection of investor relations, legal compliance, and corporate governance. The companies that manage it well are not necessarily those that talk about AI least — they are the ones that talk about it accurately. That requires internal discipline: clear processes for vetting AI-related statements, board-level awareness of what is being said to investors, and legal review before claims reach the market.

As enforcement activity increases and litigation continues to develop, the cost of getting this wrong will rise. The standards being applied today — by the SEC, by plaintiffs’ firms, and by D&O underwriters — reflect a simple expectation: that what companies say about AI is true, supported, and complete. Meeting that standard is not a competitive disadvantage. It is the baseline.


Sources

Reuters (May 2025). “AI washing: regulatory and private actions to stop overstating claims.” https://www.reuters.com/legal/legalindustry/ai-washing-regulatory-private-actions-stop-overstating-claims-2025-05-30/

SEC.gov. Gary Gensler remarks on AI washing. https://www.sec.gov/newsroom/speeches-statements/sec-chair-gary-gensler-ai-washing

SEC.gov. Erik Gerding, “The State of Disclosure Review” (June 2024). https://www.sec.gov/newsroom/speeches-statements/gerding-statement-state-disclosure-review-062424

Harvard Law School Forum on Corporate Governance (January 2025). “SEC Comment Letter Trend: AI-Related Disclosures.” https://corpgov.law.harvard.edu/2025/01/16/sec-comment-letter-trend-ai-related-disclosures/

Global Investigations Review (2026). “US enforcement agencies intensify scrutiny of AI washing.” https://globalinvestigationsreview.com/review/the-investigations-review-of-the-americas/2026/article/us-enforcement-agencies-intensify-scrutiny-of-ai-washing

Risk & Insurance. “AI Litigation and Its Impact on D&O Insurance.” https://riskandinsurance.com/ai-litigation-and-its-impact-on-do-insurance/

U.S. DOJ, SDNY (2025). “Tech CEO Charged in Artificial Intelligence Investment Fraud Scheme.” https://www.justice.gov/usao-sdny/pr/tech-ceo-charged-artificial-intelligence-investment-fraud-scheme

Harvard Law School Forum on Corporate Governance (October 2025). “Cyber and AI Oversight Disclosures: What Companies Shared in 2025.” https://corpgov.law.harvard.edu/2025/10/28/cyber-and-ai-oversight-disclosures-what-companies-shared-in-2025/

ISS Governance Insights. “Mind the Governance Gap: The State of Board Oversight and AI Policy in U.S. Companies.” https://insights.issgovernance.com/posts/mind-the-governance-gap-the-state-of-board-oversight-and-ai-policy-in-u-s-companies/

Nelson Mullins (2025). “SEC Advisory Committee Joins Public Companies in Seeking AI-Related Disclosure Guidance.” https://www.nelsonmullins.com/insights/blogs/corporate-governance-insights/all/sec-advisory-committee-joins-public-companies-in-seeking-ai-related-disclosure-guidance-from-the-commission
