Under the previous administration, FTC Chair Lina Khan made a statement on February 27, 2024, that sent shockwaves through every healthcare boardroom in America. She declared that "some data, particularly people's sensitive health data... is simply off limits for model training."
Let me be crystal clear about what this meant: Under Khan's leadership, the FTC treated using patient health data to train AI models as prohibited.
Not "proceed with caution." Not "consider the implications." Off limits. Period.
This wasn't just regulatory posturing. The FTC launched "Operation AI Comply" in September 2024, actively investigating companies using AI in ways that violated consumer protection laws. Chair Khan stated bluntly: "Using AI tools to trick, mislead, or defraud people is illegal. There is no AI exemption from the laws on the books."
The Khan-era FTC set a clear enforcement priority, and most healthcare organizations scrambled to understand and comply with it, operating on the assumption that patient data was simply unavailable for AI training purposes.
Now, under new FTC Chair Andrew Ferguson, we're seeing a dramatic shift. Ferguson has told lawmakers that the agency won't regulate AI until after problems occur, emphasizing the importance of not regulating "ahead of abuses." More significantly, Ferguson has pledged to "dramatically scale back or halt the commission's health care privacy and artificial intelligence enforcement"—the exact areas where Khan had established major new regulatory frameworks.
Ferguson's approach prioritizes avoiding "stifling artificial intelligence innovation" and favors a reactive regulatory stance over the proactive enforcement that characterized the previous administration.
If you spent the last year restructuring your AI programs to comply with Khan's "off limits" directive, you're probably asking: What now?
Here's the reality: The regulatory landscape has fundamentally shifted, but the legal risks haven't disappeared.
Organizations that:
- Halted AI training programs using patient data
- Restructured their AI development strategies
- Invested in alternative approaches to avoid patient data use
...are now facing a completely different regulatory environment where the FTC appears less likely to pursue aggressive enforcement actions.
This creates a dangerous new challenge: regulatory uncertainty. While Ferguson's FTC appears more permissive, existing healthcare privacy laws like HIPAA haven't changed. The fundamental legal framework still exists; only the enforcement priorities have shifted.
Organizations now face three critical questions:
- Should they resume AI training activities that were considered "off limits" under Khan?
- How do they navigate the gap between what's technically legal and what was previously enforced?
- What happens if enforcement priorities shift again in the future?
Regardless of enforcement philosophy, the potential penalties under existing laws haven't changed:
- Civil penalties: Up to $68,928 per violation under HIPAA (the 2024 inflation-adjusted maximum for the most serious tier)
- Criminal charges: Up to $250,000 in fines and 10 years imprisonment for the most serious offenses
- Annual maximums: Roughly $2.1 million per violation category, per year (inflation-adjusted from the statutory $1.5 million)
The penalties still stack. Every patient record used inappropriately could still be a separate violation. The difference is the likelihood of enforcement, not the severity of potential consequences.
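To make the stacking arithmetic concrete, here's a minimal back-of-the-envelope sketch in Python. The record count is hypothetical, and the penalty constants are the inflation-adjusted figures cited above; this illustrates how exposure compounds, it is not legal advice:

```python
# Hypothetical penalty-exposure sketch. Penalty constants are the
# 2024 inflation-adjusted HIPAA civil penalty figures; the record
# count is invented for illustration. Not legal advice.

PER_VIOLATION_MAX = 68_928        # top-tier civil penalty, per violation
ANNUAL_CATEGORY_CAP = 2_067_813   # annual cap per violation category

def civil_exposure(records_used: int) -> tuple[int, int]:
    """Return (uncapped, capped) exposure if each patient record
    used for training counts as a separate violation."""
    uncapped = records_used * PER_VIOLATION_MAX
    return uncapped, min(uncapped, ANNUAL_CATEGORY_CAP)

uncapped, capped = civil_exposure(records_used=500)
print(f"Uncapped exposure: ${uncapped:,}")  # Uncapped exposure: $34,464,000
print(f"Capped exposure:   ${capped:,}")    # Capped exposure:   $2,067,813
```

Notice how quickly even a modest dataset saturates the annual category cap: the practical question is rarely "how many records," but whether there was a lawful basis for using any of them.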
First, don't make hasty decisions. The temptation to immediately resume previously prohibited activities could create significant liability if enforcement priorities shift again.
Second, conduct a comprehensive legal review. The intersection of changing FTC enforcement, existing HIPAA requirements, and state privacy laws requires expert analysis specific to your situation.
Third, develop a flexible compliance strategy. Given the regulatory uncertainty, your approach needs to be adaptable to potential future changes in enforcement priorities.
Fourth, document everything. If you decide to resume AI training with patient data, ensure you have robust documentation of your legal basis and compliance measures.
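As one illustration, here is a minimal sketch of what "document everything" could look like in practice: a structured record attached to each training run. Every field name here is hypothetical, and this is one possible shape for such documentation rather than a regulatory standard:

```python
# Hypothetical compliance-documentation sketch: a structured record
# attached to each AI training run. Field names are illustrative,
# not drawn from any regulation or standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class TrainingRunRecord:
    run_id: str
    dataset_description: str      # what data was used, at what granularity
    legal_basis: str              # e.g., authorization or de-identification
    deidentification_method: str  # e.g., "Safe Harbor", "Expert Determination"
    approvals: list[str] = field(default_factory=list)  # who signed off
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = TrainingRunRecord(
    run_id="2025-03-readmission-model-v2",
    dataset_description="De-identified discharge summaries, 2019-2023",
    legal_basis="De-identified data; HIPAA Safe Harbor (45 CFR 164.514(b)(2))",
    deidentification_method="Safe Harbor",
    approvals=["privacy-officer", "general-counsel"],
)
print(json.dumps(asdict(record), indent=2))
```

Even a lightweight record like this, captured at the time of the run, is far stronger evidence of good faith than trying to reconstruct decisions after an investigation has begun.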
The healthcare AI landscape has shifted from clear prohibition to regulatory uncertainty. Organizations that were compliant under Khan's strict enforcement may now have more flexibility, but they also face new risks if they move too aggressively.
After 25 years in this field, I can tell you: When regulators change direction this dramatically, the organizations that survive are those that move deliberately, not reactively.
The question isn't whether you should immediately resume all AI training activities. The question is how to navigate this new regulatory environment while protecting your organization from future enforcement actions.
I'm providing FREE 15-minute emergency risk assessment consultations this week for healthcare organizations facing urgent AI implementation decisions. This exclusive consultation will help you identify critical vulnerabilities before they become costly problems.
Don't let your organization become another cautionary tale. With proper expert guidance, your AI implementation can enhance patient care while protecting your organization from preventable risks and regulatory violations.
✅ Download our comprehensive AI Ethics & Data Safety Risk Assessment Checklist
✅ Contact me directly for your FREE emergency consultation - only 5 spots available this week
✅ Get expert guidance before your AI deployment puts patients and your organization at risk
