We’re on the cusp of an artificial intelligence revolution, and generative AI development doesn’t appear to be slowing down anytime soon. Research by McKinsey found that 72% of organizations used generative AI in at least one business function in 2024, up from 56% in 2021.
As companies discover how generative AI can streamline workflows and unlock new operational efficiencies, security teams are actively evaluating the best way to protect the technology. One major gap in many AI security strategies today? Generative AI workloads.
While many are familiar with the mechanisms used to secure AI models like OpenAI’s ChatGPT or Anthropic’s Claude, AI workloads are a different beast altogether. Not only do security teams need to assess how the underlying model was developed and trained, but they also have to consider the surrounding architecture and how users interact with the workload. In addition, AI security operates under a shared responsibility model similar to the cloud’s. Workload responsibilities vary depending on whether the AI integration is based on Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS).
By only considering AI model-related risks, security teams miss the bigger picture and fail to holistically address all aspects of the workload. Instead, cyber defenders should take a multilayered approach, using cloud-native security solutions to securely configure and operate multicloud generative AI workloads.
How layered defense secures generative AI workloads
By leveraging multiple security strategies across all stages of the AI lifecycle, security teams can add redundancies that better protect AI workloads, plus the data and systems they touch. It starts with evaluating how your chosen model was developed and trained. Because of generative AI’s potential to create harmful or damaging outputs, it must be responsibly and ethically developed to guard against bias, operate transparently, and protect privacy. For companies that ground commercial AI workloads in proprietary data, it is also important to ensure the data is of high enough quality and sufficient quantity to produce robust outputs, as sketched below.
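To make that data-quality step concrete, here is a minimal sketch in Python of the kind of gate a team might run before grounding a workload in proprietary data. The thresholds and the record format are illustrative assumptions, not part of any specific product or standard.

```python
from collections import Counter

# Illustrative thresholds; real values depend on the workload (assumptions).
MIN_RECORDS = 10_000        # enough volume to ground robust outputs
MAX_MISSING_RATE = 0.05     # tolerated share of empty fields
MAX_DUPLICATE_RATE = 0.02   # tolerated share of duplicate records

def grounding_data_passes(records: list[dict]) -> bool:
    """Basic quality/quantity gate for data used to ground a gen AI workload.

    Assumes each record is a flat dict of hashable field values.
    """
    if len(records) < MIN_RECORDS:
        return False

    # Quality check 1: how many fields are empty across the whole dataset?
    total_fields = sum(len(r) for r in records)
    missing = sum(1 for r in records for v in r.values() if v in (None, ""))
    if total_fields and missing / total_fields > MAX_MISSING_RATE:
        return False

    # Quality check 2: detect exact duplicates via a stable fingerprint.
    fingerprints = Counter(tuple(sorted(r.items())) for r in records)
    duplicates = sum(count - 1 for count in fingerprints.values())
    if duplicates / len(records) > MAX_DUPLICATE_RATE:
        return False

    return True
```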
Next, defenders must understand their workload responsibilities under the AI shared responsibility model. Is it a SaaS-style model where the provider secures everything from the AI infrastructure and plugins to protecting data from access outside of the end customer’s identity? Or (more likely) is it a PaaS-style arrangement where the internal security team controls everything from building a secure data infrastructure and mapping identity and access controls to the workload configuration, deployment, and AI output controls?
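As a rough illustration of how those duties shift between service models, the sketch below encodes a simplified provider/customer split. The layer names and assignments are simplified assumptions for illustration, not an authoritative version of any vendor’s shared responsibility matrix.

```python
# Simplified, assumed split of duties per service model (illustrative only).
SHARED_RESPONSIBILITY = {
    "SaaS": {
        "provider": ["ai_infrastructure", "model_training", "plugins", "deployment"],
        "customer": ["identity_and_access", "data_governance", "usage_policy"],
    },
    "PaaS": {
        "provider": ["ai_infrastructure", "base_model"],
        "customer": ["data_infrastructure", "identity_and_access",
                     "workload_configuration", "deployment", "output_controls"],
    },
    "IaaS": {
        "provider": ["physical_hosts", "network_fabric"],
        "customer": ["model_training", "data_infrastructure", "identity_and_access",
                     "workload_configuration", "deployment", "output_controls"],
    },
}

def customer_duties(service_model: str) -> list[str]:
    """Return the layers the internal security team must secure itself."""
    return SHARED_RESPONSIBILITY[service_model]["customer"]

# The further you move from SaaS toward IaaS, the longer this list gets.
print(customer_duties("PaaS"))
```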
If these generative AI workloads operate in highly connected, highly dynamic multicloud environments, security teams must also monitor and protect every other component the workload touches at runtime. This includes the pipeline used to deploy AI workloads, the access controls that protect storage accounts where sensitive data lives, the APIs that call on the AI, and more.
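For example, one narrow slice of that surface, the storage accounts holding sensitive grounding data, can be checked for risky posture with the Azure SDK for Python. This is a minimal sketch assuming the azure-identity and azure-mgmt-storage packages and a placeholder subscription ID; a posture management tool would run checks like this continuously and at far broader scope.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder, not a real value

def flag_public_storage_accounts() -> list[str]:
    """List storage accounts in the subscription that allow public blob access."""
    client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    flagged = []
    for account in client.storage_accounts.list():
        # allow_blob_public_access=True means anonymous read access can be
        # enabled on containers, which is risky where sensitive data lives.
        if account.allow_blob_public_access:
            flagged.append(account.name)
    return flagged

if __name__ == "__main__":
    for name in flag_public_storage_accounts():
        print(f"Public blob access enabled: {name}")
```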
Cloud-native security tools like cloud security posture management (CSPM) and extended detection and response (XDR) are especially useful here because they can scan the underlying code and broader multicloud infrastructure for misconfigurations and other posture vulnerabilities while also monitoring and responding to threats at runtime. Because multicloud environments are so dynamic and interconnected, security teams should also integrate their cloud security suite under a cloud-native application protection platform (CNAPP) to better correlate and contextualize alerts.
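Correlation is where a CNAPP earns its keep: signals that look like noise in isolation become an attack path when tied to the same resource. A real platform does this over a graph of cloud relationships; the toy sketch below only shows the core idea, grouping alerts that share a resource ID into a single time-ordered incident. The field names and sample alerts are illustrative assumptions.

```python
from collections import defaultdict

# Example alerts as a CSPM scanner and an XDR sensor might emit them
# (field names and values are illustrative assumptions).
alerts = [
    {"time": "2024-06-01T09:02Z", "source": "CSPM", "resource": "stgaccount01",
     "finding": "Public blob access enabled"},
    {"time": "2024-06-01T09:14Z", "source": "XDR", "resource": "stgaccount01",
     "finding": "Anomalous mass download"},
    {"time": "2024-06-01T09:20Z", "source": "XDR", "resource": "vm-inference-3",
     "finding": "Suspicious outbound connection"},
]

def correlate_by_resource(alerts: list[dict]) -> dict[str, list[dict]]:
    """Group alerts touching the same resource into a time-ordered incident."""
    incidents: dict[str, list[dict]] = defaultdict(list)
    for alert in alerts:
        incidents[alert["resource"]].append(alert)
    return {res: sorted(items, key=lambda a: a["time"])
            for res, items in incidents.items()}

for resource, timeline in correlate_by_resource(alerts).items():
    findings = " -> ".join(a["finding"] for a in timeline)
    print(f"{resource}: {findings}")
```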
Holistically securing generative AI for multicloud deployments
Ultimately, the exact components of your layered defense strategy are heavily influenced by the environment itself. After all, protecting generative AI workloads in a traditional on-premises environment is vastly different from protecting those same workloads in a hybrid or multicloud estate. But by analyzing all layers that the AI workload touches, security teams can more holistically protect their multicloud assets while still maximizing generative AI’s transformative potential.
For more insight into securing generative AI workloads, check out our series, “Security using Azure Native services.”