The US Cybersecurity and Infrastructure Security Agency (CISA) and its G7 partners have published guidance defining minimum elements for AI software bills of materials, a framework that extends traditional SBOM practices to cover the unique components of AI systems. The voluntary guidance calls for documentation of models, datasets, software components, providers, licenses, and dependencies. CISA emphasized that the elements reflect consensus among G7 experts and will expand as AI technology evolves.
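To make the element categories concrete, the sketch below shows what a minimal AI SBOM record might look like. The field names, component names, and structure are illustrative assumptions, not an official schema; the guidance defines minimum element categories rather than a serialization format.

```python
import json

# Illustrative sketch only: field and component names are hypothetical and
# do not follow any official schema. The G7 guidance names minimum element
# categories (models, datasets, software components, providers, licenses,
# dependencies) but does not mandate a concrete format.
ai_sbom_entry = {
    "component_name": "fraud-scoring-service",      # hypothetical system
    "component_version": "2.3.1",
    "models": [
        {
            "name": "example-classifier",           # hypothetical model
            "version": "1.0.0",
            "provider": "Example AI Vendor",        # hypothetical provider
            "license": "proprietary",
        }
    ],
    "datasets": [
        {
            "name": "transactions-2023",            # hypothetical dataset
            "provider": "internal",
            "license": "internal-use-only",
        }
    ],
    "software_components": [
        {"name": "numpy", "version": "1.26.4", "license": "BSD-3-Clause"},
    ],
    "dependencies": [
        {"ref": "example-classifier", "depends_on": ["transactions-2023"]},
    ],
}

print(json.dumps(ai_sbom_entry, indent=2))
```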
The guidance addresses a fundamental difference between traditional software and AI systems. While conventional SBOMs focus on code libraries and dependencies, AI systems require visibility into model lineage, training data, fine-tuning history, prompts, vector databases, foundation models, APIs, and runtime behavior. AI software is probabilistic, with outputs shaped by data provenance and model weights in addition to code. This creates new layers of opacity that traditional supply-chain oversight does not address.
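One way to see the gap is to check a conventional SBOM against the AI-specific element categories the article lists. The sketch below is an assumed gap check, not an official taxonomy: the category names paraphrase the article, and the function is a hypothetical helper.

```python
# Hypothetical gap check: flags AI-specific visibility categories that a
# conventional, code-library-only SBOM does not capture. Category names
# mirror the article's list and are not an official taxonomy.
AI_SPECIFIC_CATEGORIES = {
    "models", "datasets", "fine_tuning_history",
    "prompts", "vector_databases", "foundation_models",
}

def missing_ai_visibility(sbom: dict) -> set[str]:
    """Return the AI-specific categories absent from an SBOM document."""
    return {cat for cat in AI_SPECIFIC_CATEGORIES if not sbom.get(cat)}

# A traditional SBOM that only lists code libraries leaves every
# AI-specific category unaccounted for.
traditional_sbom = {
    "software_components": [{"name": "openssl", "version": "3.0.13"}],
}
print(sorted(missing_ai_visibility(traditional_sbom)))
```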
Security teams can apply the guidance immediately in procurement and vendor risk management processes. Organizations should request visibility into model provenance, training data sources, software and API dependencies, licensing obligations, security testing practices, update cycles, runtime monitoring controls, and shared responsibility boundaries. The level of scrutiny should vary by vendor type: large vendors should provide transparency around third-party foundation model dependencies and data flows, while scrutiny of startups should focus on governance maturity and secure development practices.
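A procurement team might encode that tiering as a simple checklist generator. In the sketch below, the evidence items paraphrase the article's lists; the function name and the tiering logic are assumptions about how a team could operationalize the guidance, not part of the guidance itself.

```python
# Hypothetical procurement sketch: baseline evidence requests plus
# vendor-type-specific focus areas. Item strings paraphrase the article;
# the tiering logic is an illustrative assumption.
BASELINE_REQUESTS = [
    "model provenance", "training data sources",
    "software and API dependencies", "licensing obligations",
    "security testing practices", "update cycles",
    "runtime monitoring controls", "shared responsibility boundaries",
]

def vendor_evidence_requests(vendor_type: str) -> list[str]:
    """Return baseline requests plus focus areas for the vendor type."""
    extra = {
        "large_vendor": ["third-party foundation model dependencies",
                         "data flows"],
        "startup": ["governance maturity",
                    "secure development practices"],
    }
    return BASELINE_REQUESTS + extra.get(vendor_type, [])

print(vendor_evidence_requests("startup"))
```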
For high-risk AI deployments, AI SBOMs should form part of a broader evidence package that includes documentation on data flows, security architecture, model behavior, privacy impact assessments, red-team findings, incident response procedures, logging capabilities, and prompt-injection testing. This risk-based approach lets security leaders calibrate vendor requirements to how the AI technology will be used in production.
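For a high-risk deployment, that evidence package can be tracked as a completeness check. The artifact names below follow the article's list; the function and the example submission are hypothetical.

```python
# Hypothetical completeness check for a high-risk AI deployment's evidence
# package. Required artifact names follow the article's list; the function
# and the sample submission are illustrative assumptions.
HIGH_RISK_EVIDENCE = {
    "ai_sbom", "data_flow_documentation", "security_architecture",
    "model_behavior_documentation", "privacy_impact_assessment",
    "red_team_findings", "incident_response_procedures",
    "logging_capabilities", "prompt_injection_testing",
}

def evidence_gaps(submitted: set[str]) -> set[str]:
    """Return required artifacts the vendor has not yet supplied."""
    return HIGH_RISK_EVIDENCE - submitted

submitted = {"ai_sbom", "privacy_impact_assessment", "red_team_findings"}
print(sorted(evidence_gaps(submitted)))
```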
The guidance has significant limitations. An AI SBOM shows what a vendor claims is inside an AI system but does not prove the system can be trusted for its intended use. The document creates visibility but not assurance, and does not guarantee that every dependency has been disclosed, every dataset is lawful, or every control functions as described. Security teams must still verify that AI SBOMs reflect production systems and keep pace with changes to AI environments. Issues such as evolving AI behavior, hallucinations, changing prompt usage, and limited training data transparency remain difficult to assess through documentation alone.
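Verifying that an AI SBOM still reflects production can be partially automated as a drift check. The sketch below assumes the SBOM declares a content hash per component and that a runtime inventory supplies observed hashes; both the field names and the hash source are assumptions, since real inventories would come from scanning or attestation tooling.

```python
import hashlib

# Hypothetical drift check: compares component hashes declared in a
# vendor's AI SBOM against hashes observed in production. Field names and
# hash sources are illustrative assumptions.
def sha256_of(path: str) -> str:
    """Hash a deployed artifact (e.g., a model weights file)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def sbom_drift(declared: dict[str, str],
               observed: dict[str, str]) -> dict[str, tuple]:
    """Return components whose declared hash differs from the observed one."""
    return {
        name: (declared[name], observed.get(name))
        for name in declared
        if declared[name] != observed.get(name)
    }

# Placeholder hashes for illustration; real values would come from the
# SBOM document and a production scan (e.g., via sha256_of above).
declared = {"example-classifier": "ab12", "numpy": "cd34"}
observed = {"example-classifier": "ab12", "numpy": "ff99"}
print(sbom_drift(declared, observed))  # flags numpy as drifted
```

A check like this addresses only the verifiable portion of the problem; as the article notes, evolving model behavior, hallucinations, and limited training data transparency remain hard to assess through documentation alone.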
Source: https://www.csoonline.com/article/4170694/cisas-ai-sbom-guidance-pushes-software-supply-chain-oversight-into-new-territory.html


