The US Cybersecurity and Infrastructure Security Agency (CISA) and its G7 cyber agency partners have released a list of minimum elements for an AI software bill of materials (SBOM), a move that could help CISOs assess the security and provenance of AI systems entering enterprise environments.

The guidance extends traditional SBOM concepts into AI by calling for documentation of models, datasets, software components, providers, licenses, and other dependencies. The supplemental minimum elements are not exhaustive or mandatory, CISA said, but reflect a consensus among G7 experts and are expected to expand as AI technology evolves.

For security leaders, the document puts AI risk more firmly inside enterprise supply-chain oversight. That could make AI SBOMs part of the same vendor-risk conversations that already surround software composition, cloud services, and third-party technology platforms.

One important difference, however, is that AI SBOMs require visibility beyond software composition, because AI risk is shaped by models, data, infrastructure, and system behavior.

“AI systems add new layers of opacity: model lineage, training and inference data, fine-tuning history, prompts, vector databases, third-party foundation models, APIs, orchestration logic, and runtime behavior,” said Sakshi Grover, senior research manager for IDC Asia Pacific Cybersecurity Services.

AI software is also different because it is probabilistic, with outputs shaped by data provenance as well as code, according to Keith Prabhu, founder and CEO of Confidis.

“AI software inherently encompasses more than just software,” Prabhu said. “In addition to the software components, it would also need to track models, training data, prompts and system instructions, model weights and checkpoints, and GPU dependencies.”

Sanchit Vir Gogia, chief analyst at Greyhound Research, framed the shift more broadly.

“The question is no longer only, ‘what code is inside this product?’ The question is, ‘what code, model, data, infrastructure, control, and vendor decision shapes this system’s behavior?’” Gogia said.

How to make use of it

The immediate use of the guidance may be in procurement and vendor risk management. It gives security teams a way to press vendors before AI-enabled products are allowed into production.

“Organizations should ask vendors to provide visibility into model provenance, training data sources, software and API dependencies, licensing obligations, security testing practices, update cycles, runtime monitoring controls, and shared responsibility boundaries,” Grover said.

The level of scrutiny may also depend on the type of supplier.

“For large vendors, CISOs should specifically seek transparency around third-party foundation model dependencies, geographic data flows, model update practices, and whether customer data is being retained for model training or fine-tuning,” Grover added. “For startups, the focus should be on the maturity of governance processes, dependency tracking, secure development practices, identity controls, and operational monitoring across the AI life cycle.”
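Taken together, the elements named in the guidance and by Prabhu suggest what a vendor-supplied AI SBOM might actually look like. The sketch below is illustrative only: it loosely follows the CycloneDX format, which defines component types for machine-learning models and data, and every name, version, and supplier in it is hypothetical.

```python
# Illustrative sketch of an AI SBOM record, expressed as a Python dict.
# The layout loosely follows CycloneDX's ML-BOM component types; all
# concrete values (names, versions, suppliers) are hypothetical.
ai_sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",   # the deployed model itself
            "name": "example-classifier",       # hypothetical model name
            "version": "2.1.0",
            "supplier": {"name": "Example AI Vendor"},
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        },
        {
            "type": "data",                     # training or fine-tuning dataset
            "name": "example-training-corpus",  # hypothetical dataset name
            "version": "2024-06",
        },
        {
            "type": "library",                  # an ordinary software dependency
            "name": "torch",
            "version": "2.3.1",
        },
    ],
}
```

Items such as model weights and checkpoints, prompts, and GPU dependencies would need additional entries or vendor-specific properties; how to represent them consistently is exactly the kind of question the evolving G7 minimum elements are meant to settle over time.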
The same risk-based approach should apply to how the technology will be used. For higher-risk deployments, Gogia said, AI SBOMs should become part of a broader vendor evidence pack, supported by documentation on data flows, security architecture, model behavior, privacy impact, red-team findings, incident response, logging, and prompt-injection testing.

The gaps that remain

The biggest gap is that an AI SBOM may show what a vendor says is inside an AI system, but it does not prove whether the system can be trusted for the way an enterprise plans to use it.

“Minimum elements create visibility,” Gogia said. “They do not create assurance. They tell the buyer what the vendor says exists. They do not, by themselves, prove that every dependency has been disclosed, every dataset is lawful, every control works, every model behaves within tolerance, or every runtime pathway is being monitored.”

The hard part will be proving that the document matches reality. Security teams may receive an AI SBOM from a vendor, but they still need to determine whether it reflects the system running in production and keeps pace with changes to the AI environment; the sketch at the end of this article illustrates one small piece of that check.

Prabhu said even a high-quality AI SBOM will offer only partial visibility into AI risk. Issues such as evolving AI behavior, hallucinations, changing prompt usage, and limited training data transparency can still make it difficult for security leaders to assess actual risk. As AI systems mature, AI SBOMs will also have to evolve to address those gaps, Prabhu added.
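One narrow but concrete way to start testing whether an AI SBOM matches reality is to compare the components it declares against what is actually present in the running environment. The sketch below is a minimal illustration under stated assumptions: it expects a simplified CycloneDX-style JSON file at a hypothetical path, and it checks only Python package dependencies, a small slice of the AI-specific visibility discussed above.

```python
# Minimal sketch: diff the components a vendor's SBOM declares against
# the Python packages actually installed in the runtime environment.
# Assumes a simplified CycloneDX-style JSON layout; the file path is
# hypothetical. Model artifacts, datasets, and APIs are out of scope.
import json
from importlib.metadata import distributions


def normalize(name: str) -> str:
    """Simplified package-name normalization (PEP 503 style)."""
    return name.lower().replace("_", "-")


def declared_components(sbom_path: str) -> dict[str, str]:
    """Read component name -> version pairs from the SBOM file."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return {
        normalize(c["name"]): c.get("version", "?")
        for c in sbom.get("components", [])
        if c.get("type") == "library"  # software dependencies only
    }


def installed_packages() -> dict[str, str]:
    """Collect name -> version for every package in this environment."""
    return {normalize(d.metadata["Name"]): d.version for d in distributions()}


def diff_sbom(sbom_path: str) -> None:
    declared = declared_components(sbom_path)
    installed = installed_packages()
    # Packages running here but missing from the vendor's SBOM.
    undeclared = sorted(set(installed) - set(declared))
    # Declared components whose installed version has drifted.
    drifted = sorted(
        n for n in declared if n in installed and declared[n] != installed[n]
    )
    print("Undeclared packages:", undeclared)
    print("Version drift:", drifted)


if __name__ == "__main__":
    diff_sbom("ai-sbom.json")  # hypothetical path to the vendor's SBOM
```

Even this small check illustrates the assurance gap Gogia describes: extending the same comparison to model weights, dataset provenance, prompts, and runtime behavior is far harder, and it is where AI SBOM practice will have to mature.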