We are in a global AI race to unlock knowledge-based capabilities at a scale never before seen. Companies like OpenAI have proven that early movers reap outsized rewards, so speed is of the essence.

The fuel for this new AI race? Data.

Whilst innovation requires moving at speed, it is clear that speed must be balanced against the safety and security of these new AI capabilities.

Whether a business's chosen AI strategy is to use a vendor "co-pilot," retrieval-augmented generation (RAG), or fine-tuned LLMs, large quantities of the company's proprietary information will transit the cloud, become broadly accessible, and potentially embed itself in models that cannot be controlled.

In this data-driven landscape, safeguarding sensitive information is paramount. As organizations increasingly adopt cloud technologies and expand into multi-cloud environments, robust data security becomes a critical imperative. Enter Data Security Posture Management (DSPM): a powerful approach that not only protects your data but also aligns with responsible AI practices.

Unlike traditional security measures, DSPM focuses on multi-cloud environments, where data is distributed across various services, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Database as a Service (DBaaS), and even SaaS applications. Uniquely, Rubrik recently announced its DSPM everywhere approach, designed to safely unlock another critical data repository, on-premises data, for use in AI applications.

Rubrik Laminar DSPM delivers data security for AI by giving much-needed visibility and control over the data consumed by these AI systems, and by helping prevent inadvertent data exposure during model training and deployment.

Provide visibility into the datasets leveraged by LLMs.

Rubrik Laminar DSPM discovers structured, semi-structured, and unstructured data, and classifies it against regulatory frameworks such as PCI DSS, HIPAA, and GDPR. It can also map which users and roles have access to that data, helping you understand whether your models are consuming it.
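To make the classification idea concrete, here is a minimal, hypothetical sketch of pattern-based classification. The patterns, labels, and function name are invented for illustration; a real DSPM engine ships far richer detectors and does not work this simply:

```python
import re

# Invented example patterns, one per framework; purely illustrative.
PATTERNS = {
    "PCI (card number)": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "HIPAA (medical record no.)": re.compile(r"\bMRN[-:]?\s?\d{6,10}\b"),
    "GDPR (email address)": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> list[str]:
    """Return the framework labels whose patterns match the given text."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]
```

Once data is labeled this way, access mapping becomes a join between those labels and the identities that can read the underlying assets.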

AI Data Lineage.

Rubrik Laminar can identify the source of data assets, as well as the copies of data that ultimately become input for these models. This lets you trace information back to its origin and gain more visibility into what a model was trained on.
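Conceptually, lineage can be modeled as parent links from each copy back to the asset it was derived from, walked until the original source is reached. The asset names below are invented, and this sketch is a simplification of how a lineage system works:

```python
# Toy lineage graph: each asset maps to the asset it was copied/derived from.
LINEAGE = {
    "s3://training-bucket/corpus.parquet": "s3://analytics/export.parquet",
    "s3://analytics/export.parquet": "postgres://crm/customers",
}

def trace_origin(asset: str) -> list[str]:
    """Walk parent links back to the original source of a dataset."""
    path = [asset]
    while path[-1] in LINEAGE:
        path.append(LINEAGE[path[-1]])
    return path
```

Tracing a model's input corpus back through such links is what makes it possible to say which operational systems the training data originally came from.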

Prevent model contamination with unintended data.

Once trained, these models often become black boxes, making it difficult to determine whether sensitive or unintended data was used as input. By monitoring data as it moves into assets known to serve as model input, we can prevent model contamination and help ensure responsible AI.
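The monitoring idea can be sketched as a simple policy check applied before data is copied: if the destination is a known model-input asset and the data carries a sensitive classification, the move is flagged. The tags, destinations, and function name here are hypothetical:

```python
# Invented classification tags and a known model-input location.
SENSITIVE_TAGS = {"PII", "PHI", "PCI"}
TRAINING_INPUTS = {"s3://llm-training/input"}

def allow_copy(dest: str, data_tags: set[str]) -> bool:
    """Block moves of sensitive-tagged data into known model-input assets."""
    if dest in TRAINING_INPUTS and data_tags & SENSITIVE_TAGS:
        return False
    return True
```

Catching the contamination at copy time is the key point: once the data is inside a trained model, it can no longer be cleanly removed.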

Data Access Governance for AI Data.

By mapping data access for internal and external identities, we can safeguard datasets against tampering by non-sanctioned employees or third-party entities, ensuring that only pre-approved personnel can access and work with the data that will ultimately form the baseline for these models.
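In essence, access governance compares the identities that actually hold access against an approved list and surfaces the difference. The dataset paths and identity names below are invented for illustration:

```python
# Hypothetical approved identities per model-input dataset.
APPROVED = {"s3://llm-training/input": {"ml-pipeline", "data-steward"}}

def audit_access(grants: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per dataset, the identities with access beyond the approved set."""
    return {
        asset: extra
        for asset, identities in grants.items()
        if (extra := identities - APPROVED.get(asset, set()))
    }
```

Anything the audit returns is a candidate for revocation before the dataset is used as training input.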

Spotting unsanctioned models and systems.

By identifying shadow data and shadow models running on unmanaged cloud infrastructure, including databases on unmanaged VMs, we can alert security teams when sensitive data is stored in or moved into these systems. We can also detect when a VM is being used to deploy an AI model on top of these datasets.
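The detection logic reduces to scanning an asset inventory for resources that are outside management but nonetheless hold sensitive data or serve a model. This is a deliberately simplified sketch; the inventory fields and names are invented:

```python
# Toy cloud inventory; fields and asset names are illustrative only.
INVENTORY = [
    {"name": "vm-analytics-01", "managed": True, "sensitive_data": True, "runs_model": False},
    {"name": "vm-dev-scratch", "managed": False, "sensitive_data": True, "runs_model": False},
    {"name": "vm-gpu-experiment", "managed": False, "sensitive_data": False, "runs_model": True},
]

def flag_shadow_assets(inventory: list[dict]) -> list[str]:
    """Alert on unmanaged assets that hold sensitive data or serve an AI model."""
    return [
        a["name"]
        for a in inventory
        if not a["managed"] and (a["sensitive_data"] or a["runs_model"])
    ]
```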

To safely and reliably gain the benefits of generative AI for the enterprise, we need to ensure secure use of the data that underpins it. Data Security Posture Management and Data Detection and Response capabilities like those found in Rubrik Laminar DSPM bring the much-needed security to elevate LLMs and generative AI systems to a level acceptable for enterprise use. By focusing on the security of the most crucial parts of the AI equation, the data and the models, we can keep the pace of innovation high whilst not compromising on trust and reliability.