
A user asks ChatGPT about a wearable device cleared for atrial fibrillation detection. The AI tells them it can also "help detect early signs of stroke," a claim that goes well beyond the device's FDA-cleared intended use. If regulators find that claim attributed to the product, the company could face enforcement action over off-label promotion it never engaged in.
This isn't hypothetical. It's happening every day.
AI systems like ChatGPT, Perplexity, Claude, and Gemini are answering millions of questions about products. They're making claims about what products do, who should use them, and why they work. Sometimes they cite sources that got it wrong. Other times they hallucinate entirely, generating false claims with complete confidence.
This is becoming a problem for everyone. But for companies in regulated industries like healthcare, financial services, and insurance, it's a liability with real consequences.
When an AI system tells patients that a medical device treats a condition it's not approved for, that claim can be treated as off-label promotion under FDA rules. When it provides unauthorized financial advice about an investment product, that's a FINRA supervision failure waiting to happen. When it fabricates product capabilities or safety claims, that's regulatory exposure and legal risk.
By one estimate, AI hallucinations cost businesses $67.4 billion in 2024. More than 200 lawsuits over AI-generated misinformation have reportedly been filed worldwide. The FDA has issued warnings about AI-generated medical claims. FINRA has made clear that firms must supervise AI-generated communications. And the EU AI Act imposes fines of up to €35 million or 7% of global turnover, whichever is higher.
For companies in regulated industries, "the AI made it up" isn't a legal defense.
Right now, AI systems are answering questions about your products and making claims about efficacy, safety, and use cases. Most companies have no idea what's being said until a customer asks, a competitor notices, or a regulator does.
The volume is staggering. ChatGPT now has 800 million weekly active users, and roughly 70% of searches end without a click because people get their answers directly from AI. These systems have become the primary way people research products, and the volume of claims they make about your brand grows every day.
When regulatory enforcement catches up, the companies that waited will be scrambling to prove they were managing the risk all along.
Regulators are drafting the rules. The FDA released draft guidance on AI in medical devices. FINRA issued requirements for supervising AI communications. The EU AI Act carries penalties that can end companies. The regulatory framework is forming now, while most companies remain unaware of their exposure.
People now ask ChatGPT and Perplexity instead of searching Google, and they trust the answers these systems provide. Those systems are making definitive claims about what your products do, who should use them, and whether they're safe.
The smartest teams are asking questions now. Not because there's a mandate yet, but because they see where this is going. What happens when an auditor asks how we monitor AI-generated claims? What happens when a customer makes a decision based on false information? What happens when the first major enforcement action hits our industry?
The companies that answer those questions now will be prepared when everyone else is scrambling.
AI-generated misinformation about your products is a liability that's invisible until it isn't. The question isn't whether AI systems are making claims about your products (they are). The question is whether you know what those claims are, where they come from, and what you're doing about it.
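To make that concrete, here's a minimal sketch of what automated claim monitoring could look like, assuming the OpenAI Python SDK. The product name, questions, approved-claims list, and model choice are all illustrative placeholders, and the substring check is deliberately naive; a real pipeline would need claim extraction, semantic matching, and human review before anything is escalated.

```python
import os
from openai import OpenAI

# Hypothetical example: poll an AI model with the questions your customers
# actually ask, then flag any response that strays outside your approved
# claims. "Acme HeartBand" and all claims/questions below are placeholders.
APPROVED_CLAIMS = [
    "detects atrial fibrillation",
    "FDA-cleared for atrial fibrillation detection",
]

QUESTIONS = [
    "What conditions can the Acme HeartBand detect?",
    "Can the Acme HeartBand warn me about a stroke?",
]

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def ask(question: str) -> str:
    """Send a single consumer-style question to the model."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

for question in QUESTIONS:
    answer = ask(question)
    # Naive check: does the answer contain language from the approved list?
    # Production systems would use semantic matching, not substrings.
    if not any(claim.lower() in answer.lower() for claim in APPROVED_CLAIMS):
        print(f"FLAG for review: {question!r}\n  -> {answer[:200]}")
```

Run on a schedule across the assistants your customers actually use, even a crude loop like this starts building the audit trail a regulator or auditor will eventually ask for.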
The companies that take this seriously today will be the ones that don't have to explain themselves tomorrow.
This is exactly the problem we're building Tovio to solve. If you're in a regulated industry and want to understand what AI systems are saying about your products, reach out. We'd love to show you what we're working on.
