Information Security Frameworks Every AI-Adopting Company Needs
Security teams at companies deploying AI are discovering an uncomfortable gap. Their existing compliance certifications, the SOC 2 reports and ISO 27001 certifications that gave customers confidence, were designed for a world where software did what it was told. AI systems learn, adapt, and sometimes behave in ways their developers did not anticipate. The frameworks need to catch up, and several of them are doing exactly that.
Where Traditional Frameworks Fall Short
SOC 2, ISO 27001, and the NIST Cybersecurity Framework each address fundamental security concerns: access control, data protection, incident response, risk management. These concerns do not go away when you deploy AI. They intensify. But the frameworks themselves were not written with AI-specific risks in mind.
Consider SOC 2. It evaluates organizations against Trust Services Criteria covering security, availability, processing integrity, confidentiality, and privacy. These criteria are broad enough to encompass AI systems in theory. In practice, there is no SOC 2 control that specifically addresses model poisoning, training data contamination, or adversarial inputs. It falls to the organization and its auditor to interpret how existing criteria apply to AI-specific threats. That interpretation varies widely.
ISO 27001 provides a more structured approach through its Information Security Management System requirements, but the same gap exists. The standard tells you to manage information security risks. It does not tell you how to evaluate the risk that your customer service chatbot will hallucinate a company policy that does not exist, or that your fraud detection model will develop bias against certain demographic groups over time.
NIST CSF offers a risk-based maturity model that is flexible enough to accommodate AI risks, but again, the specific controls and assessment criteria for AI systems are not baked into the framework itself. Organizations have to do the mapping work themselves.
The AI-Specific Frameworks Emerging
The industry has recognized these gaps, and new frameworks are filling them. ISO 42001, published in late 2023 and gaining rapid adoption since, is specifically designed for AI Management Systems. It provides requirements for establishing, implementing, maintaining, and continually improving an AI management system within organizations that provide or use AI-based products or services.
ISO 42001 addresses the risks that traditional frameworks miss: bias and fairness, transparency and explainability, data quality for AI training, and the lifecycle management of AI models. Used together with ISO 27001, these two standards come much closer to what a serious AI trust program looks like in 2026.
The NIST AI Risk Management Framework, or AI RMF, takes a different but complementary approach. Rather than prescribing specific controls, it provides a structured way to identify, assess, and manage AI risks throughout the system lifecycle. It organizes AI risk management into four functions: Govern, Map, Measure, and Manage. Each function includes categories and subcategories that help organizations systematically address AI-specific concerns.
The Cloud Security Alliance's AI Controls Matrix, released in 2025, provides control objectives across 18 security domains and maps to both ISO 42001 and the NIST AI RMF. It is particularly useful for organizations running AI workloads in cloud environments, where the shared responsibility model adds another layer of complexity to security governance.
The Risks That Need New Controls
When security researchers analyze the specific threats that AI introduces, they consistently identify categories that fall outside traditional framework coverage. Some of the most critical include the following.
Training data poisoning, where an attacker deliberately introduces malicious data into the training set, can cause the model to produce wrong outputs or create backdoors. Your existing data integrity controls may not cover the specific pipelines used for AI training.
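One way to extend data integrity controls to training pipelines is to snapshot a manifest of content digests before each training run, then diff it at audit time. The sketch below is illustrative, assuming training data lives as files on disk; real pipelines would hook this into their data versioning tooling.

```python
import hashlib
from pathlib import Path


def sha256_file(path: Path) -> str:
    """Hash a file in chunks so large training shards fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(data_dir: str) -> dict:
    """Record a digest for every file under the training data directory."""
    return {
        str(p.relative_to(data_dir)): sha256_file(p)
        for p in sorted(Path(data_dir).rglob("*"))
        if p.is_file()
    }


def detect_tampering(old: dict, new: dict) -> list:
    """Return paths that were added, removed, or modified since the manifest."""
    return sorted(
        path for path in old.keys() | new.keys() if old.get(path) != new.get(path)
    )
```

A manifest built at data-collection time and re-checked immediately before training narrows the window in which undetected poisoning can occur, though it does nothing about data that was malicious when first collected.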
Model inversion and extraction attacks allow adversaries to reconstruct training data or steal model architecture by carefully analyzing the model's outputs. Access controls and rate limiting need to account for these attack vectors.
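Extraction attacks typically require thousands of queries, so per-client query budgets are one of the simpler mitigations. A minimal token-bucket sketch, with illustrative capacity and refill values:

```python
import time
from collections import defaultdict


class QueryBudget:
    """Token-bucket limiter: each API client gets `capacity` model queries,
    refilled at `rate` tokens per second. Sustained high-volume probing, a
    signature of model-extraction attempts, exhausts the budget and is refused.
    """

    def __init__(self, capacity: float = 100.0, rate: float = 1.0):
        self.capacity = capacity
        self.rate = rate
        self.tokens = defaultdict(lambda: capacity)
        self.last_seen = {}

    def allow(self, client_id: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        elapsed = now - self.last_seen.get(client_id, now)
        self.last_seen[client_id] = now
        # Refill up to capacity, then spend one token per query.
        self.tokens[client_id] = min(
            self.capacity, self.tokens[client_id] + elapsed * self.rate
        )
        if self.tokens[client_id] >= 1.0:
            self.tokens[client_id] -= 1.0
            return True
        return False
```

Rate limiting raises the cost of extraction rather than preventing it; pairing it with anomaly detection on query patterns catches slower, distributed probing.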
Prompt injection, ranked as the number one LLM security risk by OWASP, allows attackers to manipulate AI behavior through crafted inputs. This is a category of attack that simply did not exist before generative AI.
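Pattern-based screening catches only the most obvious injection attempts, but it is a cheap first tripwire and a useful logging signal. The patterns below are illustrative examples, not a complete or recommended blocklist; real defenses also need output-side filtering and privilege separation.

```python
import re

# Illustrative patterns only: real injections are far more varied, and
# pattern matching is a first-pass tripwire, not a complete defense.
SUSPECT_PATTERNS = [
    re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.I),
    re.compile(r"disregard\s+(your|the)\s+system\s+prompt", re.I),
    re.compile(r"you\s+are\s+now\s+in\s+developer\s+mode", re.I),
    re.compile(r"reveal\s+(your|the)\s+system\s+prompt", re.I),
]


def screen_prompt(user_input: str):
    """Return (looks_suspicious, matched_patterns) for logging and review."""
    hits = [p.pattern for p in SUSPECT_PATTERNS if p.search(user_input)]
    return (len(hits) > 0, hits)
```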
Supply chain risks take on new dimensions with AI. A compromised model file, a poisoned fine-tuning dataset, or a malicious plugin in an AI orchestration framework can give an attacker control over your AI system's behavior. Traditional software supply chain controls need to extend to model artifacts and training data.
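Extending supply chain controls to model artifacts can start with digest pinning: verify every model file against a digest recorded at release-review time before it is loaded. A minimal sketch; in practice the registry would live in a signed file or an internal service, not in source code.

```python
import hashlib
from pathlib import Path


def verify_model_artifact(path: str, registry: dict) -> bool:
    """Check a model file against a registry of approved SHA-256 digests.

    `registry` is a hypothetical mapping of artifact filename -> hex digest,
    recorded when each model version was approved. A substituted or tampered
    artifact fails verification before it is ever loaded.
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    expected = registry.get(Path(path).name)
    return expected is not None and digest == expected
```

The same pinning discipline applies to fine-tuning datasets and orchestration plugins: anything that shapes model behavior should have a known-good digest checked at load time.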
Despite the AI-specific nature of these threats, many of the most important vulnerabilities in AI infrastructure are recognizable software flaws: path traversal, authentication bypass, server-side request forgery, and remote code execution. The AI label changes the blast radius and context, but not the basic need for patching, hardening, isolation, and least privilege.
What Security Teams Should Implement Before Deploying AI Agents
Before any AI agent goes into production, security teams should establish several foundational elements.
An AI asset inventory that catalogs every model, agent, training dataset, and AI-powered service in the organization. You cannot secure what you do not know exists. This inventory should include shadow AI deployments that business units may have set up independently.
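The record structure for such an inventory can be simple. The fields below are one plausible minimal schema, not a standard; the `sanctioned` flag is how discovered shadow AI gets surfaced rather than silently absorbed.

```python
from dataclasses import dataclass, field
from enum import Enum


class AssetType(Enum):
    MODEL = "model"
    AGENT = "agent"
    TRAINING_DATASET = "training_dataset"
    AI_SERVICE = "ai_service"


@dataclass
class AIAsset:
    name: str
    asset_type: AssetType
    owner: str                 # accountable team or business unit
    data_classification: str   # e.g. "public", "internal", "restricted"
    sanctioned: bool = True    # False marks shadow AI found during discovery
    dependencies: list = field(default_factory=list)


def shadow_ai(inventory: list) -> list:
    """Surface deployments that bypassed the approval process."""
    return [a.name for a in inventory if not a.sanctioned]
```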
A risk assessment specific to AI that maps identified threats to your existing control framework and identifies gaps. Use the NIST AI RMF's Govern and Map functions as a starting point. Document where your SOC 2 or ISO 27001 controls adequately cover AI risks and where additional controls are needed.
Data governance for AI that establishes clear policies on what data can be used for training, inference, and fine-tuning. Include data quality requirements, retention policies for training data, and procedures for responding to data subject requests that affect AI systems.
An incident response plan that covers AI-specific scenarios: model compromise, data leakage through AI outputs, adversarial attacks, and unintended model behavior. Your existing incident response procedures likely do not address what to do when your customer-facing chatbot starts generating harmful content.
Vendor assessment criteria for AI-powered services that evaluate not just the vendor's general security posture, but their AI-specific practices: model security, training data provenance, output filtering, and compliance with AI-specific regulations like the EU AI Act.
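One way to make those criteria operational is a weighted checklist with a pass threshold. The criteria names, weights, and threshold below are illustrative assumptions, not an endorsed scoring scheme.

```python
# Hypothetical criteria; weights and the pass threshold are illustrative.
AI_VENDOR_CRITERIA = {
    "model_security_testing": 3,    # red-teaming, adversarial evaluation
    "training_data_provenance": 3,  # documented data sources and licensing
    "output_filtering": 2,          # harmful-content and PII filtering
    "eu_ai_act_compliance": 2,      # conformity with the applicable risk class
    "incident_notification_sla": 1, # contractual breach-notification terms
}


def assess_vendor(answers: dict, threshold: float = 0.7) -> bool:
    """Pass if the weighted share of satisfied criteria meets the threshold."""
    total = sum(AI_VENDOR_CRITERIA.values())
    earned = sum(w for c, w in AI_VENDOR_CRITERIA.items() if answers.get(c))
    return earned / total >= threshold
```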
Making It Practical
The most effective approach for most organizations is to layer AI-specific controls onto their existing framework rather than starting from scratch. If you already have ISO 27001 certification, extend your ISMS to incorporate ISO 42001 requirements. If you are SOC 2 compliant, work with your auditor to develop AI-specific control descriptions that map to the Trust Services Criteria. If you use NIST CSF, integrate the AI RMF's categories into your existing risk management process.
The goal is not to create a separate AI security program that operates in parallel with your existing security program. It is to evolve your existing program to account for the new risks that AI introduces. The frameworks to do this now exist. The question is whether organizations will adopt them before or after their first AI-related security incident forces the issue.