
Major AI Risks in Healthcare: What Growing MSOs Need to Know About Securing Patient Data

Your providers are saving 2-3 hours per day with AI documentation. Burnout’s dropping. Patient satisfaction is climbing. Everything looks great on the surface.

But here’s what’s keeping security experts up at night: that same AI tool recording patient conversations is creating vulnerabilities across your entire MSO network that you probably haven’t even considered.

Do you think your practice is protected? Think again.


We’re talking about real risks here. Audio recordings floating through unsecured channels. AI-generated notes becoming permanent medical records without proper validation. Vendors using your patient data to train their models. And your own providers secretly using ChatGPT because they don’t realize the compliance nightmare they’re creating.

These aren’t theoretical problems. The bottom line is this: if you’re scaling an MSO with AI tools, you need to understand these four critical security risks before they become your next breach notification.

Why AI documentation tools are everywhere in healthcare now

Healthcare practices everywhere are jumping on the AI bandwagon. Voice-based ambient listening tools are “proliferating all over the place,” according to the experts from our webinar, “How To Optimize Cybersecurity As A Healthcare Company.” Major vendors offer AI solutions integrated with EMRs, standalone ambient scribes, and virtual assistants promising to revolutionize clinical documentation.

The business case is rock solid. Providers spend less time typing and more time actually talking with patients. Documentation happens during the visit instead of three hours later at home when details get fuzzy.

For MSOs managing multiple sites, these tools promise something even better: operational consistency. Deploy one documentation platform across every location and you’ve standardized quality without endless training sessions at each new acquisition. Smart move, right?

Not so fast. When you standardize, you’re also standardizing risk. One security gap in your AI setup means that gap exists at every single site.

The four AI security risks keeping MSO leaders up at night


1. Audio data in transit and at rest

Think about what happens when your AI tool captures a patient-provider conversation. That audio exists in two vulnerable states: while it’s traveling to processing servers and while it sits on vendor systems.

Tim Grelling put it plainly in the webinar: healthcare organizations need to worry about “every step of the way” where that data goes. This includes how secure the transmission is, where exactly the audio travels, what the vendor’s storage practices look like, and their data retention policies.

Here’s where it gets worse for MSOs. One unsecured transmission point affects every site using that tool. One vendor with weak encryption exposes patient conversations from all your locations, not just one.

2. AI-generated content as part of the medical record

Audio transcription is just the start. Today’s AI tools generate clinical summaries, suggest diagnoses, and draft treatment plans. When this becomes part of the permanent medical record, you’ve got documentation with serious legal, clinical, and compliance implications.

Start asking yourself these questions:

  • Is the AI catching clinical nuance or introducing errors through misinterpretation?
  • How do you validate AI documentation before it becomes permanent?
  • If AI documents something that never happened (yes, this is a real thing called hallucination), who takes the liability?
  • What’s your process for finding and fixing AI errors already in records?

For MSOs handling thousands of daily patient encounters across multiple sites, the potential for documentation errors grows exponentially without proper validation and monitoring.

3. Third-party data sharing and model training

Here’s something that might surprise you. Jeffery Daigrepont highlighted during the webinar that many practices need help “parsing through those business associate agreements” because vendors often bury concerning language about data usage.

You need straight answers from every AI vendor about:

  • How long patient data is kept and where it’s stored
  • Whether they use your data to train their AI (and if yes, how they de-identify it)
  • Which subcontractors see your data and what their security standards are
  • Where servers are physically located and if data ever goes international
  • Whether they create aggregated datasets from your patients’ information

Many vendors claim the right to use “de-identified” data to improve their systems. But de-identification standards vary wildly. What one vendor considers safe might not meet your compliance standards or what your patients expect.

This gets extra complicated for MSOs. After acquisitions, you might have different AI tools across your platform, each with different data practices. That patchwork creates risk you can’t easily track.

4. Is HIPAA compliance enough for AI documentation tools?

Traditional HIPAA frameworks cover static data storage and transmission. But AI throws in complications that existing regulations never anticipated.

Take consent, for example. If a provider tells a patient “I’m using AI to help with documentation,” is that enough? Tim emphasized in the webinar that verbal consent won’t cut it anymore. You need “documented, written consent that clearly explains what’s happening with patient data.”

Then there’s real-time processing. Traditional HIPAA audits look at stored data, but how do you audit data being actively processed by AI algorithms during a visit?

Jeffery’s advice was direct: “Really parse through those Business Associate Agreements, understand where the liability is being shared, and make sure that you’re protected from a documentation standpoint.”

For PE-backed MSOs with board-level oversight requirements, this ambiguity is a ticking time bomb.

The shadow AI problem: When providers use ChatGPT


During the webinar, one expert noted with a knowing laugh: “Not that anybody, I’m sure, has ever put any patient data in ChatGPT. But maybe some of our employees have had that idea.”

Let’s be honest. Your providers are already doing this.

Common scenarios driving unauthorized AI use

Providers turn to public AI for what seem like innocent reasons:

  • Quick help with phrasing a differential diagnosis
  • Running a patient summary past AI to check clinical reasoning
  • Translating discharge instructions for Spanish-speaking patients
  • Drafting appeal letters or prior authorizations

Why does this happen more in scaling organizations?

Shadow AI use accelerates during growth phases for several reasons:

New providers from academic settings: Recent hires come from environments where AI use is normalized and encouraged, unaware of HIPAA implications in practice settings.

Inconsistent training: Rapid onboarding across acquisitions creates gaps in policy communication. Not every provider receives the same guidance about approved versus prohibited tools.

Pressure to maintain throughput: As you scale, productivity pressures increase. Providers look for any tool that helps them keep up with volume, sometimes taking shortcuts that create risk.

Generational differences: Younger physicians view AI as a natural workflow tool, not recognizing the compliance violations they’re creating by using consumer-grade platforms.

The risk matrix you can’t ignore

Data retention and model training

When providers put PHI into public AI tools, several things happen that you can’t undo.

Data leaves your control permanently. Many AI platforms keep conversations to improve their models. Your patient data could be training the next generation of public AI. You can’t “un-share” data once it’s processed.

ChatGPT and similar tools don’t have Business Associate Agreements. They’re not HIPAA-compliant. If a breach happens through unauthorized AI use, you carry 100% of the liability with zero recourse.

Imagine a malpractice case where it comes out that the physician used ChatGPT for the treatment plan. Even if the AI was right, the unauthorized data sharing creates separate legal exposure that complicates your defense.

Where does vendor responsibility end and yours begin?

The Business Associate Agreement divides security responsibilities, but the reality is messier than most MSO leaders realize.

What vendors are responsible for

Your AI vendors must handle:

  • Data encryption during transmission and storage
  • Access controls limiting who in their organization sees patient data
  • Security infrastructure including firewalls and intrusion detection
  • Breach notification within specified timeframes (usually 24-72 hours)
  • Subcontractor management ensuring their partners also comply with HIPAA

What remains YOUR responsibility (and liability)

You still own:

  • Choosing vendors with adequate security (if you pick a vendor with known gaps, that’s on you)
  • How the AI tool deploys in your environment
  • Creating and enforcing policies around AI usage
  • Ongoing monitoring to ensure proper use
  • Getting and managing patient consent
  • Patient notification and regulatory reporting when breaches occur

The grey areas create the most risk. A provider uses the tool correctly but accidentally includes another patient’s information. The AI generates inaccurate documentation that affects care. Your IT team’s integration creates an exposure point. Who’s liable in each case?

Building a security framework for AI documentation tools


Moving from awareness to action requires systematic implementation across your organization.

Conduct comprehensive AI vendor assessments

Before deploying any AI tool, evaluate:

Security certifications: Does the vendor maintain SOC 2 Type II, HITRUST, or similar certifications? These aren’t just checkboxes. They demonstrate ongoing commitment to security controls and regular third-party audits.

Data handling practices: Where is data processed and stored? Does the vendor use subcontractors? Are there any geographic or jurisdictional issues with data storage?

Breach history: Has the vendor experienced security incidents? How did they respond? Transparency about past incidents often indicates mature security practices.

Insurance and indemnification: What cyber insurance does the vendor carry? What indemnification provisions exist in the BAA if a breach occurs?

Vendor security roadmap: How does the vendor plan to evolve security practices as threats change? AI vendors should demonstrate forward-thinking security investment, not just current compliance.

For PE-backed MSOs, involve your board’s IT or security committee in this evaluation process. Their oversight strengthens your defensibility if problems emerge later.

Create clear policies for AI usage

Your AI usage policy should address:

Scope of approved tools: Specifically list which AI platforms are approved for clinical use at your organization. Include the name, vendor, and approved use cases for each tool.

Prohibited activities: Clear examples of what providers cannot do, such as “Never input patient identifiers or clinical information into ChatGPT, Claude, or other public AI tools.”

Acceptable use cases: Scenarios where AI can be used, like “Approved: AI clinical decision support built into our EHR system.”

Don’t rely on policy alone. Instead, implement these technical controls: 

Network filtering: Block access to unapproved AI platforms from your clinical networks at the firewall level.

Data Loss Prevention (DLP): Monitor for PHI being copied into web forms or chat interfaces, with real-time alerts when violations occur.

User education: Regular training that explains the “why” behind the rules, not just the “what,” helping providers understand how shadow AI use threatens patient privacy.

Reporting mechanisms: Make it easy for staff to ask, “Is this tool safe to use?” before they start using it, removing barriers to compliance.
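To make the network-filtering and DLP controls above concrete, here is a minimal sketch in Python. Everything in it is illustrative: the CSV proxy-log export (columns: timestamp, site, user, domain, payload), the blocklist of consumer AI domains, and the PHI patterns are all assumptions, and this is not a substitute for a commercial DLP product or firewall policy.

```python
import csv
import re

# Hypothetical blocklist of consumer AI domains prohibited by policy (extend to match yours).
UNAPPROVED_AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

# Rough PHI indicators (MRN-style numbers, SSN-like numbers, dates of birth).
# A commercial DLP product uses far richer detection; these patterns only illustrate the idea.
PHI_PATTERNS = [
    re.compile(r"\bMRN[:\s]*\d{6,}\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like
    re.compile(r"\bDOB[:\s]*\d{1,2}/\d{1,2}/\d{2,4}\b", re.IGNORECASE),
]


def scan_proxy_log(path: str) -> list[dict]:
    """Flag requests to unapproved AI domains, noting possible PHI in the payload.

    Assumes a hypothetical CSV export with columns: timestamp, site, user, domain, payload.
    """
    alerts = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] not in UNAPPROVED_AI_DOMAINS:
                continue
            possible_phi = any(p.search(row.get("payload", "")) for p in PHI_PATTERNS)
            alerts.append({**row, "possible_phi": possible_phi})
    return alerts


if __name__ == "__main__":
    for alert in scan_proxy_log("proxy_export.csv"):
        severity = "PHI SUSPECTED" if alert["possible_phi"] else "policy violation"
        print(f"[{severity}] {alert['timestamp']} {alert['site']}/{alert['user']} -> {alert['domain']}")
```

The point of even a rough sketch like this is the workflow it implies: unapproved destinations are flagged automatically, and anything that looks like PHI headed to them gets escalated rather than discovered months later.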

Standardize vendor management across your platform

As you add sites through acquisition, you inherit different vendor relationships. Your consolidation strategy should map all AI tools currently in use, evaluate which should become platform standards, phase out redundant or risky tools, and negotiate enterprise BAAs for better protection.

Build continuous monitoring into operations

Shift from annual assessments to continuous oversight:

Quarterly vendor security questionnaires: Regular check-ins ensure vendors maintain security controls and notify you of changes in data handling practices.

Monthly review of AI tool usage logs: Identify unauthorized AI usage before it becomes a breach, tracking which tools are being accessed and by whom.

Real-time alerts for shadow AI detection: Automated systems that flag when providers access prohibited AI platforms or attempt to upload files to unauthorized services.

Board-ready dashboards: Clear metrics showing AI risk across your platform, documented evidence of systematic oversight, and measurable progress on risk reduction over time.

For CFOs reporting to PE sponsors, these dashboards provide the audit-ready visibility that transforms AI from IT project to strategic investment.
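As a rough illustration of the rollup that could feed such a dashboard, the Python sketch below aggregates flagged shadow-AI events into per-site monthly metrics. The event records are hypothetical (for example, the output of the log-scanning sketch shown earlier), not an actual Focus Solutions report format.

```python
from collections import Counter
from datetime import date

# Hypothetical flagged events, e.g. produced by the log-scanning sketch shown earlier.
events = [
    {"site": "North Clinic", "domain": "chatgpt.com", "possible_phi": True},
    {"site": "North Clinic", "domain": "claude.ai", "possible_phi": False},
    {"site": "East Clinic", "domain": "chatgpt.com", "possible_phi": False},
]


def board_summary(events: list[dict]) -> str:
    """Roll flagged shadow-AI events up into a per-site summary for board reporting."""
    totals = Counter(e["site"] for e in events)
    phi_flags = Counter(e["site"] for e in events if e["possible_phi"])
    lines = [f"Shadow AI summary - {date.today():%B %Y}"]
    for site, count in totals.most_common():
        lines.append(f"  {site}: {count} unapproved AI events, {phi_flags[site]} with possible PHI")
    return "\n".join(lines)


print(board_summary(events))
```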

This visibility transforms AI from a blind spot into a calculated risk you can actually manage.

Create an incident response plan specific to AI tools

Scenario planning should cover:

  • An audio data breach at an AI vendor
  • Discovery that a provider has been using unapproved AI tools with patient data
  • A vendor changing its data handling practices without proper notice
  • AI-generated documentation containing errors that affect patient care

Response procedures must address:

  • Who do you notify and when (internally and externally)?
  • How do you assess scope across multiple sites quickly?
  • What’s your patient notification strategy if PHI was exposed?
  • How do you document the incident for OCR investigations?

Why MSOs need unified IT, security, and data management

group of people holding hands

The complexity of securing AI tools highlights a larger truth—you can’t bolt security onto operations. It needs to be integrated from the start.

This is particularly challenging for growing MSOs because:

Fragmented IT across acquisitions: Each acquired site comes with its own IT systems, vendors, and security postures. Bringing these into a unified security framework while maintaining operations is resource-intensive.

Competing priorities: Clinical operations, provider satisfaction, and patient care always feel more urgent than security assessments—until a breach occurs.

Resource constraints: Smaller MSOs often lack dedicated security staff, forcing IT generalists to handle security alongside everything else.

The solution isn’t just hiring more IT staff. It’s implementing a unified approach where IT infrastructure, network security, and data management work together as integrated systems rather than separate projects.

What unified security looks like in practice

Single source of truth for security monitoring. One view of your security status across all locations and vendors. No more juggling multiple reports or wondering which site has which vulnerabilities.

Standardized vendor assessments that actually get done. Every new AI tool gets the same security review, whether it’s requested by your flagship location or your newest acquisition. Same questions, same standards, same protection.

Real-time threat detection across your entire network. When something suspicious happens at one site, you know immediately if other locations are at risk. No waiting for Monday’s report to find out Friday’s attack spread to three other clinics.

Finding problems before they find you. Regular vulnerability scanning, security assessments, and monitoring catch issues while they’re still small. You fix the broken window before someone climbs through it.

For MSOs backed by private equity, this approach gives you something concrete to show the board. Not promises or policies, but actual security metrics and documented improvements over time.

The Focus Solutions approach: Your AI security partner

At Focus Solutions, we understand that AI security isn’t a one-time project. It’s an ongoing operational requirement that needs to scale with your growth.

Our approach combines three integrated services:

Managed IT: We ensure your infrastructure can support secure AI implementation across all sites, with consistent configuration and monitoring.

Managed Security: We assess AI vendor security, implement technical controls to prevent shadow AI usage, and provide ongoing monitoring for unauthorized data sharing.

Managed Data: We help you understand data flows across AI tools, vendor relationships, and your platform, creating the visibility you need to manage risk proactively.

This unified approach means you’re not coordinating between multiple vendors or reconciling conflicting advice. One partner, one strategy, one security framework that grows with you.

Turn AI from risk into a strategic advantage

AI documentation tools offer real innovation that can improve provider satisfaction and clinical quality. But only if you implement them with security frameworks that match the technology’s sophistication.

The MSOs that will thrive aren’t avoiding AI. They’re deploying it strategically with risk management built in from day one.

Your risk assessment needs to cover all AI tools across your platform, vendor security practices, patient consent alignment, technical controls preventing shadow AI, and monitoring that scales with growth.

Schedule a risk assessment with Focus Solutions. 

As the Unified Partner for growing healthcare organizations, we help you implement AI strategically, balancing innovation with the security frameworks that protect your platform, your patients, and your growth trajectory.
