Safeguarding AI with Zero Trust Architecture and Data-Centric Security

In this article, we will explore the three key pillars of safeguarding AI, and how two powerful approaches, Zero Trust Architecture (ZTA) and Data-Centric Security, can be applied to protect AI systems.

First, let us break down what it means to safeguard AI.

An AI system is a machine-based system that infers, from the input it receives, how to generate outputs. This involves using algorithms and computational models, such as Machine Learning (ML) models and Large Language Models (LLMs), to analyze data and identify patterns in order to produce content, predictions, recommendations, or decisions.

With the exponential growth of data collection capabilities, we are witnessing the creation of massive datasets used to train AI systems. Attackers who gain unauthorized access to these systems can exploit their vulnerabilities, leading to the leakage of personal, financial, or business data. Techniques like prompt injection and inference attacks can manipulate AI models into revealing confidential data, either by embedding malicious prompts or by inferring details from the model’s responses. Moreover, some AI-generated outputs contain sensitive information, such as intellectual property, that needs to be secured.

In addition to the risk of unauthorized data access, ML and LLMs face the threat of being compromised if the models themselves or the training data are altered. By modifying either the models or the training data, an AI system can be manipulated to produce invalid or even harmful results. This type of manipulation may be hard to detect due to the complexity of such models and the opacity of decision-making in AI systems.

Therefore, safeguarding AI means protecting the AI system, the data used by the system, and the outputs produced by the system. ZTA and Data-Centric Security are fitting approaches to address these requirements: they continuously verify all users and data interactions and ensure that data is secured at rest, in transit, and in use, regardless of where it is stored or processed.

1. Controlling Access to AI Systems and User Actions with ABAC

Attribute-Based Access Control (ABAC) is a dynamic method of managing access and authorizations to controlled resources. Unlike traditional access control, which relies on static roles, ABAC grants permissions based on attributes, which can include user roles, environmental conditions, and the context of the access request. This granularity is crucial in a ZTA environment, where trust is never assumed and every access request is verified.

Implementing ABAC involves defining policies that specify which attributes must be present for access to be granted. For example, only users with a specific security clearance level, on a secure network during working hours, can access the AI system. Once granted access, any user actions that deviate from authorized tasks, such as tampering with training data, would trigger ABAC policies, blocking the user from making improper changes.
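
To make this concrete, here is a minimal Python sketch of how such a policy check might be evaluated. The attribute names (clearance_level, network_zone, action) and the threshold values are illustrative assumptions, not the API of any particular policy engine:

```python
from dataclasses import dataclass
from datetime import datetime, time

# Hypothetical request attributes; a real ABAC engine would pull these from
# identity providers, device posture checks, and network context at runtime.
@dataclass
class AccessRequest:
    clearance_level: int   # user attribute
    network_zone: str      # environmental attribute, e.g. "corporate" or "public"
    timestamp: datetime    # environmental attribute
    action: str            # requested action, e.g. "query_model"

# Illustrative policy: clearance of 3 or higher, on the corporate network,
# during working hours, and only for the actions listed below.
ALLOWED_ACTIONS = {"query_model", "view_outputs"}

def is_access_granted(req: AccessRequest) -> bool:
    within_hours = time(8, 0) <= req.timestamp.time() <= time(18, 0)
    return (
        req.clearance_level >= 3
        and req.network_zone == "corporate"
        and within_hours
        and req.action in ALLOWED_ACTIONS
    )

# Even a cleared user on the corporate network is blocked from tampering
# with training data, because that action is not among the permitted ones.
request = AccessRequest(3, "corporate", datetime(2024, 5, 6, 10, 30), "modify_training_data")
print(is_access_granted(request))  # False
```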

2. Safeguarding ML and LLM Models and Business Data

As AI systems ingest vast amounts of information, it becomes essential to secure the sensitive data used in AI training and operations. One effective method is to use data obfuscation and segregation to safeguard data at rest and data in use. By segregating and obfuscating data, organizations can enforce stricter data-centric controls. Data segregation involves categorizing data based on its sensitivity and characteristics so that access can be granted accordingly. Data obfuscation transforms data into a format that is unreadable without the proper authorization, ensuring that even if data is intercepted, it cannot be understood. Data masking hides specific parts of a record, such as personal identifiers, while preserving the data’s usability for analysis and training purposes.
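
As a simple illustration, the Python sketch below pseudonymizes a record identifier and masks a name while leaving the fields the model actually needs untouched. The record layout and field names are hypothetical, and a production system would manage salts and secrets far more carefully:

```python
import hashlib

# Hypothetical training record; the layout and field names are illustrative.
record = {"patient_id": "P-48213", "name": "Jane Doe", "age": 54, "diagnosis_code": "E11.9"}

def pseudonymize(value: str, salt: str = "per-dataset-secret") -> str:
    """Replace an identifier with a salted hash so records remain linkable
    for training while the original identifier is not exposed."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def mask_name(name: str) -> str:
    """Keep only initials so the field stays present but non-identifying."""
    return "".join(part[0] + "." for part in name.split())

obfuscated = {
    "patient_id": pseudonymize(record["patient_id"]),   # obfuscated identifier
    "name": mask_name(record["name"]),                   # masked personal identifier
    "age": record["age"],                                # retained: useful for the model
    "diagnosis_code": record["diagnosis_code"],          # retained: useful for the model
}
print(obfuscated)
```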

Protecting ML and LLM models, along with the data they use, involves encrypting data at rest to prevent malicious users from altering data files and compromising AI system outputs. By encrypting data at rest and only decrypting it upon access requests from an authorized AI system, the integrity of the AI models, their underlying data sets, and the system outputs can be maintained.
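
The sketch below shows the basic idea using the Fernet recipe from the Python cryptography package (assuming it is installed). In a real deployment the key would be held in a KMS or HSM and released only to an authorized AI service rather than generated inline:

```python
from cryptography.fernet import Fernet

# For illustration only: in practice the key lives in a KMS or HSM and is
# released only to an authorized AI service, not generated inline like this.
key = Fernet.generate_key()
cipher = Fernet(key)

training_batch = b'{"features": [0.12, 0.87], "label": 1}'

# Data at rest is stored only in encrypted form.
encrypted_blob = cipher.encrypt(training_batch)

# Decryption happens only when an authorized system requests the data.
# Fernet tokens are authenticated, so any tampering with the stored blob
# raises InvalidToken on decryption, providing a basic integrity check too.
plaintext = cipher.decrypt(encrypted_blob)
assert plaintext == training_batch
```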

For example, in a healthcare AI system, patient data from training datasets can be segregated, obfuscated or masked to protect privacy while still allowing the AI model to learn from the data. These techniques ensure sensitive information is always safeguarded, whether the data is at rest, in transit, or in use.

3. Protecting AI Outputs with ABAC and DRM

AI system outputs can be as valuable and sensitive as their inputs and processes. Protecting these results requires controlling access through ABAC and applying persistent file protection with Enterprise Digital Rights Management (DRM).

Using ABAC to manage access to AI outputs ensures that only authorized users can view or manipulate the results. For example, an AI system generating financial reports should only make those reports accessible to individuals with the appropriate financial and managerial roles.

Additionally, DRM technologies can be applied to persistently protect the files containing AI outputs, such as sensitive reports and proprietary models. DRM can control how a file is used, including restrictions on copying, sharing, and printing, thus preventing unauthorized distribution and use. By integrating DRM with ABAC, organizations can ensure that the protection travels with the data, maintaining security regardless of where the file goes.
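
The following sketch is a greatly simplified illustration of that combination: an output file carries both an ABAC-style role requirement and a set of usage rights that travel with it. Real Enterprise DRM enforces this with encryption and a policy server; the class and field names here are purely illustrative:

```python
from dataclasses import dataclass, field

# Greatly simplified illustration: the protected output bundles its payload
# with the roles allowed to open it and the usage rights granted on it.
@dataclass
class ProtectedOutput:
    payload: bytes
    allowed_roles: set = field(default_factory=set)  # ABAC-style attribute check
    usage_rights: set = field(default_factory=set)   # e.g. {"view"} but not {"print", "copy"}

def request_action(doc: ProtectedOutput, user_roles: set, action: str) -> bool:
    """Allow the action only if the user holds an authorized role and the
    action is among the rights embedded in the file itself."""
    return bool(doc.allowed_roles & user_roles) and action in doc.usage_rights

report = ProtectedOutput(
    payload=b"Q3 revenue forecast ...",
    allowed_roles={"finance_manager"},
    usage_rights={"view"},  # viewing allowed; copying and printing withheld
)

print(request_action(report, {"finance_manager"}, "view"))   # True
print(request_action(report, {"finance_manager"}, "print"))  # False: right not granted
print(request_action(report, {"analyst"}, "view"))           # False: role not authorized
```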

Summary

Safeguarding AI requires a multifaceted approach that combines ZTA principles with Data-Centric Security practices. By controlling access with ABAC, securing data through segregation, obfuscation and masking, and protecting AI outputs with ABAC and DRM, organizations can effectively mitigate risks and enhance the security and integrity of their AI systems.

NextLabs seeks to provide helpful resources and easy-to-digest information on data-centric security related topics. To discuss and share insights on this resource with peers in the data security field, join the NextLabs community.