
Seven steps to AI supply chain visibility — before a breach forces the issue


Four in 10 enterprise applications will feature task-specific AI agents this year. Yet research from Stanford University’s 2025 AI Index Report shows that a mere 6% of organizations have an advanced AI security strategy in place. Palo Alto Networks predicts 2026 will bring the first major lawsuits holding executives personally liable for rogue AI actions.

Many organizations are grappling with how to contain the accelerating and unpredictable nature of AI threats. Governance gaps don’t respond to quick fixes like bigger budgets or more headcount. There’s a visibility gap around how, where, when, and through which workflows and tools LLMs are being used or modified. One CISO told VentureBeat that model SBOMs are the Wild West of governance today. Without visibility into which models are running where, AI security collapses into guesswork — and incident response becomes impossible.

Over the last several years, the U.S. government has pursued a policy of mandating SBOMs for all software it acquires. AI models need them even more, and the lack of consistent progress in this area is one of AI’s most significant risks.

The visibility gap is the vulnerability

Harness surveyed 500 security practitioners across the U.S., U.K., France, and Germany. The findings should alarm every CISO: 62% of their peers have no way to tell where LLMs are in use across their organization. There’s a need for more rigor and transparency at the SBOM level to improve model traceability, data use, integration points, and usage patterns by department.

Enterprises continue to experience rising levels of prompt injection (76%), vulnerable LLM code (66%), and jailbreaking (65%). These are among the most lethal risks and attack methods adversaries use to exfiltrate anything they can from an organization’s AI modeling and LLM efforts. Despite spending millions on cybersecurity software, many organizations aren’t seeing these intrusion attempts, because they’re cloaked in living-off-the-land techniques and comparable tradecraft that legacy perimeter systems can’t trace.

“Shadow AI has become the new enterprise blind spot,” said Adam Arellano, Field CTO at Harness. “Traditional security tools were built for static code and predictable systems, not for adaptive, learning models that evolve daily.”

IBM’s 2025 Cost of a Data Breach Report quantifies the cost, finding that 13% of organizations reported breaches of AI models or applications last year. Of those breached, 97% lacked AI access controls. One in five reported breaches was due to shadow AI or unauthorized AI use. Shadow AI incidents cost $670,000 more than comparable baseline intrusions. When nobody knows which models run where, incident response can’t scope the impact.

Why SBOMs stop at the model file

Executive Order 14028 (2021) and OMB Memorandum M-22-18 (2022) require software SBOMs for federal vendors. NIST’s AI Risk Management Framework, released in 2023, explicitly calls for AI-BOMs as part of its “Map” function, acknowledging that traditional software SBOMs don’t capture model-specific risks.

But software dependencies resolve at build time and stay fixed. Model dependencies, by contrast, resolve at runtime, often fetching weights from HTTP endpoints during initialization, and they mutate continuously through retraining, drift correction, and feedback loops. LoRA adapters modify weights without version control, making it impossible to track which model version is actually running in production.
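The difference is easy to see in code. As a rough sketch, the pattern below contrasts resolving whatever a repository’s default branch points to at startup with pinning a reviewed revision; it assumes the huggingface_hub client, and the repository ID and commit hash are placeholders rather than real artifacts.

```python
# Sketch: pin model artifacts to a reviewed revision instead of resolving
# "latest" at runtime. Assumes huggingface_hub is installed; the repo ID and
# commit hash are placeholders, not real artifacts.
from huggingface_hub import snapshot_download

# Risky pattern: whatever the repo's default branch points to today is what
# gets pulled into production at initialization time.
unpinned_path = snapshot_download(repo_id="example-org/example-model")

# Safer pattern: resolve a specific, reviewed commit so the artifact that was
# approved is the artifact that runs, and record that revision in the inventory.
pinned_path = snapshot_download(
    repo_id="example-org/example-model",
    revision="0123abc...",  # placeholder commit hash captured at review time
)
```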
Here’s why this matters for security teams: when AI models are saved in pickle format, loading them is like opening an email attachment that executes code on your computer, except these files are trusted by default in production systems. A PyTorch model saved this way is a serialized Python object stream that must be deserialized, and effectively executed, to load. When torch.load() runs, pickle opcodes execute sequentially, and any callable embedded in the stream fires. Those callables commonly include os.system() calls, network connections, and reverse shells.

SafeTensors, an alternative format that stores only numerical tensor data without executable code, addresses pickle’s inherent risks. Still, migration means rewriting load functions, revalidating model accuracy, and potentially losing access to legacy models whose original training code no longer exists. That’s one of the primary factors holding adoption back. In many organizations, it’s not just a policy change; it’s an engineering effort.

Model files aren’t inert artifacts — they’re executable supply chain entry points.
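A minimal sketch of the safer loading patterns, assuming PyTorch 2.x and the safetensors package (file paths here are placeholders):

```python
# Sketch: avoid executing arbitrary pickle payloads when loading model files.
# Assumes PyTorch 2.x and the safetensors package; file paths are placeholders.
import torch
from safetensors.torch import load_file, save_file

# Risky: a plain torch.load() on an untrusted .pt file runs the pickle
# deserializer, so any callable embedded in the stream executes.
# state = torch.load("downloaded_model.pt")

# Safer: weights_only=True restricts deserialization to tensor data and
# rejects arbitrary objects (supported in recent PyTorch releases).
state = torch.load("downloaded_model.pt", weights_only=True)

# Safest for new deployments: persist and load SafeTensors, which stores only
# numerical tensor data and metadata, with no code execution on load.
save_file(state, "model.safetensors")
tensors = load_file("model.safetensors")
```

None of this removes the migration work the format change implies; it only shows why the policy is worth the effort.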
Standards exist and have been in place for years, but adoption continues to lag. CycloneDX 1.6 added ML-BOM support in April 2024. SPDX 3.0, also released in April 2024, included AI profiles. ML-BOMs complement but don’t replace documentation frameworks like Model Cards and Datasheets for Datasets, which focus on performance attributes and training data ethics rather than supply chain provenance. VentureBeat continues to see adoption lag behind how quickly this area is becoming an existential threat to models and LLMs.

A June 2025 Lineaje survey found that 48% of security professionals admit their organizations are falling behind on SBOM requirements. ML-BOM adoption is significantly lower.

Bottom line: the tooling exists. What’s missing is operational urgency.

AI-BOMs enable response, not prevention

AI-BOMs are forensics, not firewalls. When ReversingLabs discovered nullifAI-compromised models, documented provenance would have immediately identified which organizations had downloaded them. That’s invaluable for incident response and practically useless for prevention, and budget planning for AI-BOMs needs to account for that.

The ML-BOM tooling ecosystem is maturing fast, but it’s not where software SBOMs are yet. Tools like Syft and Trivy generate complete software inventories in minutes; ML-BOM tooling is earlier on that curve. Vendors are shipping solutions, but integration and automation still require additional steps and effort. Organizations starting now may need manual processes to fill the gaps.

AI-BOMs won’t stop model poisoning, which happens during training, often before an organization ever downloads the model. They won’t block prompt injection either, because that attack exploits what the model does, not where it came from. Prevention requires runtime defenses: input validation, prompt firewalls, output filtering, and tool-call validation for agentic systems. AI-BOMs are visibility and compliance tools. Valuable, but not a substitute for runtime security. CISOs and security leaders are increasingly relying on both.

The attack surface keeps expanding

JFrog’s 2025 Software Supply Chain Report documented more than 1 million new models hitting Hugging Face in 2024 alone, with a 6.5-fold increase in malicious models. By April 2025, Protect AI’s scans of 4.47 million model versions had found 352,000 unsafe or suspicious issues across 51,700 models. The attack surface expanded faster than anyone’s ability to monitor it.

In early 2025, ReversingLabs discovered malicious models using “nullifAI” evasion techniques that bypassed Picklescan detection. Hugging Face responded within 24 hours, removing the models and updating Picklescan to detect similar evasions, demonstrating that platform security is improving even as attacker sophistication increases.

“Many organizations are enthusiastically embracing public ML models to drive rapid innovation,” said Yoav Landman, CTO and Co-Founder of JFrog. “However, over a third still rely on manual efforts to manage access to secure, approved models, which can lead to potential oversights.”

Seven steps to AI supply chain visibility

The gap between hours and weeks in AI supply chain incident response comes down to preparation. Organizations that build visibility in before a breach have the insight they need to respond with accuracy and speed. Those without it scramble. None of the following requires a new budget — only the decision to treat AI model governance as seriously as software supply chain security.

1. Commit to building a model inventory and define processes to keep it current. Survey ML platform teams. Scan cloud spend for SageMaker, Vertex AI, and Bedrock usage. Review Hugging Face downloads in network logs. A spreadsheet works: model name, owner, data classification, deployment location, source, and last verification date. You can’t secure what you can’t see.

2. Go all in on managing and redirecting shadow AI use toward apps, tools, and platforms that are secure. Survey every department. Check for API keys in environment variables (a minimal discovery sketch follows this list). Recognize that accounting, finance, and consulting teams may already run sophisticated AI apps with multiple APIs linked directly into the company’s proprietary data. The 62% visibility gap exists because nobody asked.

3. Require human approval for production models. Every model touching customer data needs a named owner, a documented purpose, and an audit trail showing who approved deployment. Just as red teams at Anthropic, OpenAI, and other AI companies do, design human-in-the-middle approval processes for every model release.

4. Consider mandating SafeTensors for new deployments. The policy change costs nothing: SafeTensors stores only numerical tensor data, with no code execution on load. Grandfather existing pickle models with documented risk acceptance and sunset timelines.

5. Consider piloting ML-BOMs for the riskiest 20% of models first. Pick the ones touching customer data or making business decisions. Document architecture, training data sources, base model lineage, and framework dependencies. Use CycloneDX 1.6 or SPDX 3.0 (a skeleton ML-BOM sketch follows this list). Start now if you aren’t already; incomplete provenance beats none when an incident happens.

6. Treat every model pull as a supply chain decision until that becomes part of your organization’s muscle memory. Verify cryptographic hashes before load (see the hash-check sketch below). Cache models internally. Block runtime network access for model execution environments. Apply the same rigor enterprises learned from leftpad, event-stream, and colors.js.

7. Add AI governance to vendor contracts during the next renewal cycle. Require SBOMs, training data provenance, model versioning, and incident notification SLAs. Ask whether your data trains future models. It costs nothing to request.
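To illustrate step 2, here is a minimal discovery sketch that flags common AI provider credentials in a process’s environment. The variable names are conventional examples, not an exhaustive or authoritative list, and real shadow AI discovery will also need to look at network logs and SaaS admin consoles.

```python
# Sketch: flag likely AI provider credentials in the current environment.
# The variable names below are common conventions, not a complete list.
import os

SUSPECT_ENV_VARS = (
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
    "HUGGING_FACE_HUB_TOKEN",
    "AZURE_OPENAI_API_KEY",
)

def find_ai_credentials() -> list[str]:
    """Return the names of AI-related credentials set in this environment."""
    return [name for name in SUSPECT_ENV_VARS if os.environ.get(name)]

if __name__ == "__main__":
    hits = find_ai_credentials()
    print(f"AI credentials present: {hits or 'none found'}")
```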
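For step 5, this is a skeleton of what a CycloneDX-style ML-BOM entry might look like, sketched as a Python dictionary. The model names, version, and hash are placeholders, and a real document should be generated and validated with CycloneDX tooling rather than hand-rolled.

```python
# Sketch: a skeletal CycloneDX-style ML-BOM for one model, serialized to JSON.
# Model name, version, and hash are placeholders; validate real documents
# against the CycloneDX schema instead of trusting this hand-rolled structure.
import json

ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-org/example-classifier",  # placeholder
            "version": "2025-06-retrain",              # placeholder
            "hashes": [{"alg": "SHA-256", "content": "<sha256-of-weights>"}],
            "properties": [
                {"name": "training-data-source", "value": "internal-dataset-2024"},
                {"name": "base-model", "value": "example-org/base-encoder"},
            ],
        }
    ],
}

print(json.dumps(ml_bom, indent=2))
```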
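And for step 6, a short sketch of verifying a downloaded artifact against a pinned SHA-256 digest before it is ever loaded. The expected digest would come from your internal registry, the model inventory, or the publisher.

```python
# Sketch: refuse to load a model artifact whose SHA-256 digest doesn't match
# the value recorded at review time (for example, in the model inventory).
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Raise if the artifact on disk isn't the one that was approved."""
    actual = sha256_of(path)
    if actual != expected_sha256.lower():
        raise ValueError(f"Hash mismatch for {path}: got {actual}")

# Usage (placeholder digest):
# verify_artifact(Path("model.safetensors"), "<pinned-sha256-digest>")
```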
2026 will be a year of reckoning for AI SBOMs

Securing AI models is becoming a boardroom priority. The EU AI Act’s prohibitions are already in effect, with fines reaching €35 million or 7% of global revenue. EU Cyber Resilience Act SBOM requirements begin this year, and full AI Act compliance is required by August 2, 2027.

Cyber insurance carriers are watching. Given the $670,000 premium for shadow AI breaches and emerging executive liability exposure, expect AI governance documentation to become a policy requirement this year, much as ransomware readiness became table stakes after 2021.

The SEI Carnegie Mellon SBOM Harmonization Plugfest analyzed 243 SBOMs from 21 tool vendors for identical software and found significant variance in component counts. For AI models with embedded dependencies and executable payloads, the stakes are higher. The first poisoned-model incident that costs seven figures in response and fines will make a case that should already have been obvious.

Software SBOMs became mandatory after attackers proved the supply chain was the softest target. AI supply chains are more dynamic, less visible, and harder to contain. The only organizations that will scale AI safely are the ones building visibility now — before they need it.