Connecting AI Risk to Real-Time Data Decisions


Enkrypt AI + NetApp: Bringing AI Risk Enforcement to the Data Layer
Enterprise AI adoption is reshaping how organizations access and use data. AI training pipelines and autonomous agents are increasingly reading directly from enterprise file systems and object stores, often using broad service identities and operating at machine scale.
In many environments, this access bypasses the application control layers that security teams historically relied on. When sensitive data is ingested into a model or embedded into a vector index, the exposure occurs immediately and cannot simply be reversed.
At NetApp, we believe security must operate where the data lives. Access decisions should reflect context and intent rather than rely solely on static access controls. Routing enforcement through centralized inspection layers places the burden on every application and pipeline in the environment to integrate correctly. In an AI-driven enterprise operating at machine scale, that model is no longer reliable.
Enkrypt AI and NetApp are collaborating to address this shift directly.
Enkrypt AI provides deep visibility into AI system risk, evaluating model behavior, unsafe prompts, policy violations, and governance posture. NetApp is investing in a data-centric security architecture that brings together data sensitivity, identity, activity, and lineage into a unified security graph.
Together, these capabilities help organizations evaluate AI activity in context and automatically approve or block data access based on real-time signals, allowing teams to move faster while reducing the risk of data leakage or compliance violations.
Unified AI and Data Risk Visibility
AI risk and data risk are typically evaluated in parallel but rarely in combination.
Security teams assess model behavior and AI governance posture, while data protection teams track sensitive datasets and regulatory exposure. Without a unified view, organizations lack clear visibility into where AI workloads intersect with their most sensitive information.
In a forward-looking collaboration, AI workload posture from Enkrypt AI can be correlated with NetApp’s data context, including:

- Data sensitivity and classification
- Identity and access context
- Data activity
- Data lineage
These signals can be brought together within a unified security graph, enabling organizations to understand precisely which AI systems are interacting with regulated or high-value datasets across hybrid environments.
This consolidated view shows teams where AI workloads intersect their most sensitive information, so governance and enforcement effort can be prioritized where risk is highest.
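The unified security graph described above can be sketched in a few lines. This is a minimal, hypothetical model, not a real NetApp or Enkrypt AI API; all class and field names are illustrative. Nodes are AI workloads and datasets, edges are observed access events, and a simple query surfaces where high-risk workloads touch regulated data.

```python
# Hypothetical sketch of a unified security graph. All names are
# illustrative assumptions, not actual product APIs.
from dataclasses import dataclass


@dataclass(frozen=True)
class Workload:
    name: str
    risk: str            # e.g. "low" | "high", as reported by AI risk tooling


@dataclass(frozen=True)
class Dataset:
    name: str
    sensitivity: str     # e.g. "public" | "regulated", from data classification


class SecurityGraph:
    """Nodes are workloads and datasets; edges are observed access events."""

    def __init__(self) -> None:
        self.edges: list[tuple[Workload, Dataset]] = []

    def record_access(self, w: Workload, d: Dataset) -> None:
        self.edges.append((w, d))

    def high_risk_intersections(self) -> list[tuple[str, str]]:
        # Which high-risk AI workloads are touching regulated data?
        return [(w.name, d.name) for w, d in self.edges
                if w.risk == "high" and d.sensitivity == "regulated"]


graph = SecurityGraph()
graph.record_access(Workload("train-job-7", "high"), Dataset("claims-2024", "regulated"))
graph.record_access(Workload("rag-agent", "low"), Dataset("docs-public", "public"))
print(graph.high_risk_intersections())  # [('train-job-7', 'claims-2024')]
```

The point of the graph shape is that the intersection query above is a traversal, not a join across separate AI-security and data-security inventories.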
Turning Context into Action
Visibility alone does not reduce risk. Enforcement must happen at the moment data is accessed.
When AI workload posture from Enkrypt AI is correlated with NetApp’s data sensitivity and regulatory attributes, the combined context can inform real-time access evaluation.
For example, Enkrypt AI may identify an AI training workload as high risk based on governance policy, model configuration, or observed usage behavior. If that workload attempts to bulk-read a dataset containing regulated or high-value information stored on NetApp-managed infrastructure, the access decision should reflect more than static permissions.
In a forward-looking architecture, the request would be evaluated using the full context available in the security graph, including:

- AI workload posture
- Data sensitivity and regulatory attributes
- Data lineage
- Identity context
- Observed access behavior
If the combined context violates defined policy intent, enforcement can occur at the storage I/O layer before the data is consumed.
This is where AI governance moves beyond advisory insight and becomes enforceable control at the data layer.
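A decision function of this kind can be sketched briefly. The example below is an assumption-laden illustration, not product behavior: the request fields, risk labels, and policy rule are all hypothetical. It shows the key idea that static permissions alone would allow the request, while the combined context triggers denial.

```python
# Hypothetical access-time decision combining AI workload posture with
# data attributes. Field names, labels, and the rule are illustrative only.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    workload_risk: str      # "low" | "high", from AI risk evaluation
    data_sensitivity: str   # "public" | "regulated", from data classification
    operation: str          # "read" | "bulk_read"


def evaluate(req: AccessRequest) -> str:
    """Return a decision from combined context rather than static ACLs."""
    if (req.workload_risk == "high"
            and req.data_sensitivity == "regulated"
            and req.operation == "bulk_read"):
        return "deny"   # high-risk workload bulk-reading regulated data
    return "allow"


print(evaluate(AccessRequest("high", "regulated", "bulk_read")))  # deny
print(evaluate(AccessRequest("low", "regulated", "read")))        # allow
```

In this sketch the denial happens before any bytes are served, which is the analogue of enforcing at the storage I/O layer rather than auditing after ingestion.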
Continuous Enforcement in Dynamic AI Environments
AI environments do not operate in static cycles, and governance cannot either.
Access decisions must reflect the live intersection of AI workload posture, data sensitivity, lineage, identity context, and observed access behavior. Policies cannot rely solely on conditions that existed when they were originally written.
As models retrain, pipelines evolve, and datasets are copied or repurposed, risk evaluation must occur continuously at the time of access. Traditional offline scans quickly become outdated and often miss changes in workload behavior or data context.
In a forward-looking collaboration, AI risk intelligence from Enkrypt AI would feed directly into NetApp’s data-centric security architecture. Access-time decisions could adapt dynamically as AI posture or data characteristics evolve.
By applying continuously updated context at the storage layer, enforcement remains consistent even as workloads shift across hybrid environments. Governance intent no longer depends on periodic reviews or manual recalibration; it is upheld each time sensitive data is accessed.
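The difference between a cached grant and continuous evaluation can be shown in a toy example. Everything here is a hypothetical sketch: `posture_feed` stands in for a live AI risk signal, and the rule is illustrative. Because the decision reads current posture at access time, a risk change flips the outcome with no policy rewrite.

```python
# Hypothetical sketch: each access re-reads current workload posture,
# so updated risk signals change the decision without recalibration.
posture_feed = {"train-job-7": "low"}  # stand-in for a live risk feed


def decide(workload: str, data_sensitivity: str) -> str:
    # Look up *current* posture at access time rather than a cached grant.
    risk = posture_feed.get(workload, "unknown")
    if risk != "low" and data_sensitivity == "regulated":
        return "deny"
    return "allow"


print(decide("train-job-7", "regulated"))  # allow
posture_feed["train-job-7"] = "high"       # risk signal updates mid-lifecycle
print(decide("train-job-7", "regulated"))  # deny
```

Note that unknown workloads fail closed for regulated data, which mirrors the intent that stale or missing context should not default to access.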
Governing AI at the Point of Data
AI is now embedded in core business workflows. Models retrain continuously, agents operate autonomously, and non-human identities interact with enterprise data across hybrid environments.
The question facing security leaders is no longer whether to adopt AI. The challenge is how to maintain control as adoption scales.
Sustainable AI governance requires more than monitoring models or classifying data independently. It requires aligning AI risk intelligence with the layer where data is accessed.
When decisions reflect both workload posture and data sensitivity, and are enforced at the point of data access, governance intent holds even as systems evolve.
As NetApp and Enkrypt AI continue to explore this integration, the goal is to connect AI risk insights with data-layer decisioning. The result is an architecture where innovation and control are not trade-offs but complementary outcomes.

