
The Expanding Attack Surface of AI in the Cloud

April 08, 2025

"AI Workloads Pose Greater Security Risks in the Cloud."

The cloud provides the scalability and flexibility that AI systems require, but this also widens the attack surface. As AI models interact with multiple cloud services, APIs, and data sources, each integration becomes a potential entry point for threat actors. 

The risks aren’t just theoretical. Tenable’s 2025 Cloud AI Risk Report highlights a sharp increase in AI-related cloud vulnerabilities, particularly in industries with large-scale AI deployments like finance, healthcare, and tech.  

1. AI Workloads Pose Greater Security Risks in the Cloud 

A significant 70% of cloud workloads with AI packages have at least one critical vulnerability—substantially higher than the 50% of non-AI workloads. A notable example is CVE-2023-38545, a critical curl flaw found in over a third of AI workloads, which remained unpatched a year after disclosure. 

This higher risk is likely due to AI workloads often running on Unix systems that rely on numerous, sometimes vulnerable, open-source libraries. Exploiting these vulnerabilities can lead to serious consequences like AI model manipulation or data leakage. The risk escalates when these workloads are also publicly exposed. 

Given the potential presence of sensitive data (e.g., personal or customer information), it’s crucial that security teams prioritise and strategically mitigate vulnerabilities in AI workloads. 
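As a concrete starting point, here is a minimal sketch that checks whether the curl build on a workload image predates the 8.4.0 release that fixed CVE-2023-38545. It is a local spot check that assumes curl is on the PATH; it is not a substitute for a vulnerability scanner or for the report findings above.

```python
# Minimal sketch: flag images still carrying a curl build affected by
# CVE-2023-38545 (the SOCKS5 heap overflow, fixed in curl 8.4.0).
import re
import subprocess

FIXED = (8, 4, 0)  # first curl release containing the fix

def curl_version():
    """Return the installed curl version as a tuple, or None if unavailable."""
    try:
        out = subprocess.run(
            ["curl", "--version"], capture_output=True, text=True, check=True
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None
    match = re.search(r"curl (\d+)\.(\d+)\.(\d+)", out.stdout)
    return tuple(int(part) for part in match.groups()) if match else None

version = curl_version()
if version is None:
    print("curl not found on this image")
elif version < FIXED:
    print(f"VULNERABLE: curl {'.'.join(map(str, version))} predates the 8.4.0 fix")
else:
    print(f"OK: curl {'.'.join(map(str, version))}")
```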

2. The 'Jenga' Concept Meets AI

Tenable uses the 'Jenga' concept to describe how cloud providers build AI services on top of other cloud services, so risky defaults in an underlying building block are silently inherited by everything stacked above it. Vertex AI Workbench is a case in point: 77% of organisations using Google Cloud's Vertex AI Workbench have the overprivileged default Compute Engine service account attached to their notebook instances. When a user creates a Vertex AI notebook instance, GCP automatically provisions a Compute Engine instance with this service account, which by default has broad access to project resources. While teams tend to follow best practices for virtual machines they create themselves, the default service account is commonly left unchanged in notebook setups, a significant risk for AI systems that handle sensitive data. 
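As an illustration of how such an audit might look, the sketch below uses the google-cloud-compute Python client to flag Compute Engine instances, including those backing Vertex AI Workbench notebooks, that still run under the default Compute Engine service account (its email ends in -compute@developer.gserviceaccount.com). The project ID is a placeholder, and a full review would also inspect the account's IAM roles and access scopes.

```python
# Minimal sketch: list Compute Engine instances that still use the project's
# default Compute Engine service account. Requires google-cloud-compute and
# application-default credentials; "my-project" is a placeholder project ID.
from google.cloud import compute_v1

PROJECT_ID = "my-project"  # placeholder

client = compute_v1.InstancesClient()
for zone, scoped_list in client.aggregated_list(project=PROJECT_ID):
    for instance in scoped_list.instances:
        for sa in instance.service_accounts:
            if sa.email.endswith("-compute@developer.gserviceaccount.com"):
                print(f"{zone}: {instance.name} uses default service account {sa.email}")
```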

3. Amazon Bedrock Training Buckets Without Public Access Blocked

14.3% of organisations using Amazon Bedrock have training buckets without Amazon S3 Block Public Access enabled, a key control for preventing unauthorised access. This oversight raises the risk of accidental data exposure and tampering, which is especially concerning for AI training data: data poisoning ranks among the top security threats to machine learning systems in the OWASP Top 10. 
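A minimal sketch of how this check and remediation might look with boto3 is shown below. The bucket name is a placeholder, and the call assumes credentials with permission to read and write the bucket's public access block configuration.

```python
# Minimal sketch: verify (and optionally enforce) S3 Block Public Access on a
# bucket holding Bedrock training data. Requires boto3 and AWS credentials;
# "my-bedrock-training-data" is a placeholder bucket name.
import boto3
from botocore.exceptions import ClientError

BUCKET = "my-bedrock-training-data"  # placeholder
s3 = boto3.client("s3")

desired = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

try:
    current = s3.get_public_access_block(Bucket=BUCKET)["PublicAccessBlockConfiguration"]
except ClientError:
    current = {}  # no public access block configuration set at all

if all(current.get(key) for key in desired):
    print(f"{BUCKET}: Block Public Access fully enabled")
else:
    print(f"{BUCKET}: missing settings, applying Block Public Access")
    s3.put_public_access_block(Bucket=BUCKET, PublicAccessBlockConfiguration=desired)
```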

4. Amazon Bedrock Training Buckets Are Overly Permissive 

5% of organisations using Amazon Bedrock have overly permissive training buckets, a common cloud misconfiguration whose impact is amplified when the buckets hold sensitive training data. Improperly secured buckets can be exploited by attackers to steal or modify data and disrupt the training process, exposing organisations to significant reputational and financial damage; if proprietary AI data is compromised, the loss of competitive advantage can be lasting. The root cause is bucket policies that don't follow least-privilege best practices. 
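For illustration, the sketch below scans a bucket policy for Allow statements granted to a wildcard principal, one common way training buckets end up overly permissive. The bucket name is a placeholder and the check is deliberately simplified; a real review would also evaluate policy conditions, object ACLs, and cross-account grants.

```python
# Minimal sketch: flag bucket policy statements that allow access to any
# principal ("*"). Requires boto3; "my-bedrock-training-data" is a placeholder.
import json
import boto3

BUCKET = "my-bedrock-training-data"  # placeholder
s3 = boto3.client("s3")

policy = json.loads(s3.get_bucket_policy(Bucket=BUCKET)["Policy"])
for statement in policy.get("Statement", []):
    principal = statement.get("Principal")
    # Simplified check for the two most common wildcard-principal forms.
    if statement.get("Effect") == "Allow" and principal in ("*", {"AWS": "*"}):
        print(f"Overly permissive statement:\n{json.dumps(statement, indent=2)}")
```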

5. Amazon SageMaker with Root Access Enabled 

90.5% of organisations using Amazon SageMaker have root access enabled in at least one notebook instance, a significant security risk. Root access grants users administrator privileges, allowing them to edit system files, install unauthorised software, and modify critical components, all of which widen the blast radius if the instance is compromised. Failing to follow the principle of least privilege here can lead to unauthorised access to proprietary AI models and sensitive data such as PII, with severe consequences including data theft and intellectual property loss. 
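The sketch below shows one way to inventory this with boto3: it lists notebook instances and reports any with root access enabled. Disabling root access via update_notebook_instance requires the instance to be stopped first, so the sketch only reports rather than remediates.

```python
# Minimal sketch: report SageMaker notebook instances with root access enabled.
# Requires boto3 and AWS credentials with SageMaker read permissions.
import boto3

sm = boto3.client("sagemaker")

paginator = sm.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for notebook in page["NotebookInstances"]:
        name = notebook["NotebookInstanceName"]
        detail = sm.describe_notebook_instance(NotebookInstanceName=name)
        if detail.get("RootAccess") == "Enabled":
            print(f"{name}: root access enabled (status: {detail['NotebookInstanceStatus']})")
```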

Securing AI in the cloud requires proactive and tailored approaches to manage exposure, protect sensitive data, and comply with emerging regulations. 

Here are some recommended strategies for mitigating AI risks in the cloud: 

1. Manage Exposure Across Cloud Environments 

Implement a contextual approach to monitor exposure across cloud infrastructure, identities, data, workloads, and AI tools. Unify visibility and prioritise actions to address vulnerabilities as environments evolve and new threats emerge. 

2. Classify Sensitive AI Components 

Treat AI components linked to high-business-impact assets, such as sensitive data or privileged identities, as sensitive. Include AI tools and data in your asset inventory and scan them continuously to understand the potential impact if they are compromised. 
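As a small illustration of feeding AI data stores into an asset inventory, the sketch below tags a training bucket with a sensitivity classification. The tag keys, values, and bucket name are illustrative conventions rather than a standard, and note that put_bucket_tagging replaces any existing tag set.

```python
# Minimal sketch: tag an AI training bucket so it is classified as sensitive
# in the asset inventory. Requires boto3; names and tags are placeholders.
import boto3

BUCKET = "my-bedrock-training-data"  # placeholder
s3 = boto3.client("s3")

# Note: this call replaces any existing tag set on the bucket.
s3.put_bucket_tagging(
    Bucket=BUCKET,
    Tagging={
        "TagSet": [
            {"Key": "data-classification", "Value": "sensitive"},
            {"Key": "workload", "Value": "ai-training"},
        ]
    },
)
```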

3. Stay Compliant with AI Regulations 

Keep up with evolving AI regulations and guidelines. Ensure your AI engineers are following secure development and deployment practices, adhering to NIST standards, and implementing necessary access controls for cloud-based AI data stores. 

4. Follow Cloud Provider Recommendations 

Follow your cloud provider’s security playbooks to avoid risky configurations. Be mindful of insecure defaults and ensure resource provisioning aligns with best practices, such as the principle of least privilege. 

5. Prevent Unauthorised Access 

Prevent unauthorised or excessive access to AI models and data stores by reducing overprivileged permissions. Use robust identity management tools to enforce least privilege and detect misconfigurations in your AI environment. 
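As an example of reducing overprivileged access, the sketch below applies a bucket policy that grants only a specific training-job role read access to a training prefix, rather than a wildcard principal. The account ID, role name, bucket, and prefix are placeholders; adapt them to your environment and test outside production first.

```python
# Minimal sketch: apply a least-privilege bucket policy that grants read-only
# access to a single training-job role. Requires boto3; ARNs are placeholders.
import json
import boto3

BUCKET = "my-bedrock-training-data"  # placeholder
TRAINING_ROLE = "arn:aws:iam::123456789012:role/ai-training-role"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TrainingRoleReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": TRAINING_ROLE},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/training/*",
            ],
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```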

6. Prioritise Vulnerability Remediation 

Focus on remediating the most critical vulnerabilities in your cloud environment. Leverage advanced tools to improve the efficiency of vulnerability remediation, reducing alert fatigue while addressing the most impactful issues. 

Organisations are rapidly adopting AI in development environments, with the cloud serving as a natural platform due to its ability to manage growing data volumes. However, cloud-based AI often suffers from misconfigurations and frequently handles sensitive assets like proprietary algorithms and models, making it a high-value target. Despite the risks, most companies have only addressed a small portion of their AI security challenges. Now is the ideal time for security leaders to implement strong exposure management and best practices to support secure AI growth.