Your AI Journey, Your Pace: AI Solutions Built for Flexibility |
agentic ai systems

Why HITRUST Certification Isn’t Enough for Agentic AI Systems: Insights from Artera’s SVP of Technical Operations

Written By: Darin Moore, SVP of Technical Operations, Artera

As the SVP of Technical Operations at Artera, my mission is to uphold the highest standards of security while fostering a culture deeply rooted in data protection. Given the dynamic nature and rapid change of the agentic AI landscape, we have a unique opportunity today to ensure that our security protocols remain agile and resilient in the face of new challenges. If this past year has taught us anything, it’s that as AI agents become more advanced and independent, the risks of data breaches, hallucinations and leaks can escalate quickly. 

So, what does this mean from a security standpoint? Data security today – in this new era of agentic AI – requires a fundamental shift in strategy, and can no longer rely on static, point-in-time assessments. Instead, it demands continuous monitoring, multi-layered security frameworks and the integration of human oversight with AI-powered validation. 

Healthcare providers seeking agentic AI solutions need partners who truly understand this and have built robust security systems designed specifically with agentic AI in mind. 

Why HITRUST Alone Falls Short in the AI Era of Healthcare

Traditional frameworks like HITRUST are a solid starting point for protecting healthcare data, but they just can’t keep up with how fast agentic AI systems evolve. While HITRUST shows a commitment to safeguarding PHI, securing agentic AI requires a whole new approach.

Here’s the thing: agentic AI doesn’t play by the same rules. These systems are constantly learning, adapting and making decisions on their own. What worked yesterday might not work today, and something secure this morning could have vulnerabilities by the afternoon. A one-time security assessment just doesn’t cut it anymore – we have to be vigilantly guarding the way that AI is using our data. 

It gets trickier when you factor in how AI models get updated, retrained or tweaked between security reviews. Every change can bring new risks or behaviors that weren’t there before. Traditional frameworks simply don’t have the flexibility to keep up with these rapid changes, leaving organizations open to threats that didn’t even exist during their last compliance check.

Beyond Compliance: A Comprehensive Multi-Pillar Approach to Security

Just relying on HITRUST isn’t enough anymore. Working with vendors with multiple certifications gives you stronger, layered protection. That’s why leading health tech companies are choosing a mix of certifications to handle the dynamic nature of AI security.

I like to think of it as a jigsaw puzzle—each certification is a piece that shows how committed an organization is to keeping its systems safe and secure. Here are my “cliffnotes” on the different certifications we prioritize at Artera:

  • HITRUST: the foundational layer for healthcare; demonstrates a commitment to safeguarding PHI 
  • SOC 2 Type 2: third-party audit that highlights strong internal controls around data and systems – it’s a key signal of operational maturity for the business as a whole
  • ISO 27001: general framework that provides the foundation for information security management systems in place
  • ISO 27017: certification that specifically addresses cloud service security
  • ISO 27018: certification that focuses on personally identifiable information (PII) protection in an organization’s environment
  • ISO 27701: certification that covers privacy management and an organization’s commitment to keeping any privacy-related information confidential 

As you can see, each certification plays a different role. When these pieces come together, they create a multi-pillar approach to security. 

At Artera, we’re not just meeting these standards—we’re also pursuing FedRAMP High authorization, which is the Federal Risk and Authorization Management Program’s most rigorous security baseline for cloud services handling highly sensitive government data (in fact, Artera recently achieved “in process” FedRAMP High designation). 

So why does this matter? Pursuing FedRAMP High status reflects our commitment to the highest level of security protocols, elevating our approach to data protection and enhancing our understanding of the evolving security landscape. 

Security Considerations for Evaluating Agentic AI Partners

So, what security certifications should health system leaders focus on in this rapidly evolving agentic AI landscape? What questions should they ask their potential partners? Where should they focus their time? 

Beyond those certifications listed above, health system leaders should focus on three fundamental areas when assessing potential agentic AI vendors: data containment, spillage prevention and hallucination mitigation. 

These represent the most significant risks unique to AI systems, and require specialized approaches that traditional security frameworks don’t address.

What It IsWhy It’s ImportantReal-World Example: One Way Artera is Addressing It
Data ContainmentInvolves ensuring that PHI and PII remain within secure, controlled environments, rather than being exposed to publicly accessible large language models (LLMs).
Safeguarding patient privacy and confidentiality is absolutely critical, given the high value of medical data and severe consequences related to data breaches. 
DLP & Employee Training: Our robust Data Loss Prevention (DLP) measures are the first line of defense, but the human element is just as crucial. Together, our technology and a well-trained staff create a secure environment where sensitive data stays separate from AI processing.
Spillage PreventionAddresses the risk of information crossing between different patient sessions or unauthorized data access. 
Breaches of PHI can violate HIPAA, leading to hefty fines, legal fees, and increased regulatory scrutiny.
Model Context Protocol: creates strict boundaries around what information each AI agent can access and process (conversations with one patient never inadvertently access another patient’s data). 
Hallucination Mitigation
Reduces or eliminates the generation of false, misleading, or nonsensical information by artificial intelligence models, particularly large language models (LLMs).
Healthcare applications cannot tolerate made-up information, whether it’s appointment times, medication dosages or treatment recommendations.
Judge LLMs: simulate conversations with AI agents in real-world scenarios, identifying security issues or inappropriate behavior. Test agents, analyze interactions and score performance to ensure accuracy.

In addition to the preventive measures mentioned, continuous monitoring and real-time alerts are essential while agents are active. 

Building a Culture of Security, Not Just Compliance

While no system is ever 100% secure, we can do a lot to protect ourselves by using every available safeguard and holding ourselves accountable. The goal is to keep both internal and external threats from compromising our systems. Just as important is having a clear audit trail so we can handle any issues that come up. Above all, we need to protect the healthcare data with all we’ve got. This includes fostering a culture of security and continuous improvement. 

At Artera, I’m proud to say that security isn’t just a checkbox or a compliance exercise. It’s a core business principle and vital investment. Over the past few years, I’ve witnessed a remarkable cultural shift within our organization. Security has become a collective effort embedded in everything we do.

I’ve observed a growing interest in security across teams, functions, and employees. Colleagues are asking insightful questions, actively expanding their knowledge, and sharing valuable security insights throughout the company. What stands out most is the heightened curiosity and engagement. It’s both inspiring and encouraging to witness this level of commitment.

Preparing for the Future of Agentic AI Security

As AI continues to play a bigger role in healthcare, keeping systems secure is only going to get more complicated and more important. The organizations that prioritize strong security partnerships now will be better positioned to take full advantage of AI’s benefits while keeping patients’ trust intact.

When choosing an agentic AI partner, it’s a good idea to focus on vendors who not only have solid security measures in place today but are also committed to staying ahead of future challenges. I encourage providers to look for vendors who stay on top of AI security trends, invest in research and innovation, and can quickly adapt to new threats with effective solutions.


Today’s healthcare market is saturated with AI agent solutions, making vendor evaluation difficult for healthcare providers amidst similar claims and significant costs.

To simplify your evaluation, we’ve identified the top five factors that distinguish Artera’s AI agents today. Whether you’re new to AI agents or well into your research for a partner, we hope this distillation proves valuable.


Artera’s blog posts and press releases are for informational purposes only and are not legal advice. Artera assumes no responsibility for the accuracy, completeness, or timeliness of blogs and non-legally required press releases. Claims for damages arising from decisions based on this release are expressly disclaimed, to the extent permitted by law.

Related Posts

Patient expectations for seamless digital experiences are rising, while administrative burdens strain staff. The solution: intelligent automation that maintains the...
As a health system executive, you’re likely at a crossroads that could define your company’s competitive advantage for the next...
By: Cassie Pena, Senior Director, Product Management, and Simon Williams, Manager, Integration Engineering Organizations are quickly adopting AI agents for...
Connect with Us