Presented by SAP

The enterprise software industry has undergone a fundamental shift, and vendors are adapting their approaches to better protect the customers who rely on them. For years, every global platform vendor running multi-tenant cloud infrastructure has maintained documented rate limits, usage controls, and restrictions on the use of undocumented internal interfaces. CRM platforms impose daily API call limits per organization, enforce platform-layer limits, and maintain a strict separation between bulk data APIs and transactional REST surfaces. Productivity and collaboration suites throttle their graph APIs and redirect bulk workloads to purpose-built data access channels designed for that load. HR and workforce management platforms enforce concurrent request limits and per-session data retrieval caps. IT service management platforms enforce per-user rate limits and instance-level throttling. Hyperscalers publish per-service quotas, enforce them at the infrastructure layer, and explicitly prohibit applications from calling non-SDK or non-published interfaces.

These are not controversial measures. They are baseline hygiene for enterprise-grade software platforms operating shared infrastructure at scale, and they have been in place for more than a decade without serious objection.

As SAP has taken responsibility for securing customers' mission-critical workloads in the cloud, a unified API policy with clarified usage controls is not a restriction but an expression of enterprise-grade stewardship. Some have read the policy as a new restriction. It is not: the policy names and unifies controls that have existed across individual SAP products for years. SAP is not introducing API governance as a novel concept. SAP SuccessFactors, SAP Ariba, SAP LeanIX, and several other SAP solutions have long enforced documented rate limits and usage controls, and SAP Notes and SAP's documentation have defined permitted API usage for years.
What the recent policy does is unify that existing practice into a single cross-portfolio standard. That step was made urgent by the arrival of autonomous agentic harnesses, which SAP is fully committed to enabling, but which place a categorically different performance, stability, and security load on API surfaces that were never designed for autonomous orchestration and data extraction at scale.

Custom interfaces: What SAP's API policy does and does not restrict

Custom APIs built by customers in their own namespace for their own extensibility, integration, and migration purposes are customer-developed interfaces. If you have spent years building custom data services, custom RFCs, and ABAP interfaces to connect your SAP system to the world around it, the policy's restriction on non-published APIs might read, on first encounter, like a demolition order. It is not. The restriction targets SAP's own internal, unreleased objects. It does not reach into the Z namespace and condemn two decades of ABAP engineering.

SAP's Private Cloud customers are in a distinctly privileged position compared with much of the enterprise world: they have long been able to build in their own namespace and to shape an environment they were free to modify and extend, and that freedom is not being revoked. The policy is focused on something narrower: SAP's own internal interfaces that were never published, never documented for customer use, and never offered as a dependable foundation for integration. Most custom code never touches these internals and will continue untouched; where it does, the risk has always been present, and the policy merely names it rather than inventing it. Within that set, however, there is a smaller class of interfaces whose use is not a matter for debate but for prohibition.
ODP-RFC belongs in that class: it sits in SAP's namespace as an internal, non-released interface that SAP explicitly classifies as "unpermitted" for customer or third-party application use, as documented in SAP Note 3255746. These are precisely the kinds of interfaces SAP will flag as prohibited in notes and automated tooling, so that such usage can be identified early through tooling and guidance rather than discovered late in deployment or operation.

Clean Core is distinct from the API policy but points in the same direction. It bears noting that customers did not merely accept Clean Core but asked for it repeatedly, having lived through the upgrade costs of the alternative. In the agentic era, where SAP runs mission-critical ERP as a service, both the Clean Core recommendations and the API policy are conditions of the enterprise-grade reliability that cloud operations make possible.

How AI agents change API usage patterns in SAP systems

While some commentators have argued this policy is primarily a commercial move, the technical evidence tells a different story. AI has changed everything about the traditional view of transactional interfaces. The APIs that enterprises have used for decades to integrate SAP systems with third-party applications are request-response interfaces built for transactional workloads. They were designed to fetch a sales order, post a goods receipt, or trigger a payment run. They were designed to be called, mostly by human-authored integration flows, at a predictable frequency, for a defined business purpose. They were not designed to have an autonomous AI orchestration harness run thousands of sequential calls against them in pursuit of semantic context about the business model encoded within. That is not a clean core integration pattern.

Much of the debate misses a core architectural distinction. A traditional integration tool reads a sales order from SAP, converts it into the format a target schema needs, and moves it on.
SAP's data model plays no role beyond being a transient interpretation step. An AI agent does something categorically different. It does not merely retrieve a value. It reads the sales order header data and learns that this structure represents a customer commitment to buy. It reads the line item data and learns how individual items relate to that order. It reads the net value and learns that this number is meaningful only when paired with the document currency. It traces the path that a sales order takes through delivery, billing, and finally into the accounting ledger, and internalizes how SAP reconciles operations and finance within its business object model. The agent is not only consuming a customer's transactional data. It is consuming the semantic ontology: the business object definitions, the relationships between entities, the conceptual architecture that SAP has built and refined over five decades of enterprise knowledge encoding.

SAP has long distinguished between enabling transactional access to customer data and the broader extraction or replication of the underlying ontology. The policy does not create this boundary; it already existed. Autonomous agents must continue to respect that boundary rather than redefine it.

Security risks in third-party MCP implementations

Then there is a security angle, and it is not abstract. The same week this policy was published, a supply chain attack named Mini Shai-Hulud, a variant of the Shai-Hulud npm worm, quietly compromised hundreds of software packages. Npm packages in the SAP ecosystem were among those compromised, and SAP addressed the incident with a security note for customers. This is not a theoretical threat model.
This is the active threat environment in which community-built MCP servers are being connected to productive SAP systems running mission-critical business processes. The OWASP MCP Top 10 documents the vulnerability classes systematically: tool poisoning, prompt injection, privilege escalation via scope creep, token mismanagement, and supply chain compromise. Recent research across thousands of analyzed MCP implementations shows that a majority operate with static, long-lived credentials or carry identifiable security findings, and a single compromised package in the MCP ecosystem can cascade into hundreds of thousands of exposed development environments. VentureBeat reported just last week on a serious command execution flaw that left up to 200,000 MCP servers vulnerable.

Consider what that means in practice. An AI agent that has just internalized the semantic structure of your SAP data model, and is operating through a community MCP server, moves beyond a productivity tool and into an elevated risk category: one that combines broad system access with an attack surface that is still evolving.

Why MCP alone cannot run SAP business processes

The MCP debate has also obscured a technical reality that enterprise architects need to confront directly. The Model Context Protocol is plumbing. It specifies how an AI model calls a tool. It says nothing about whether the model understands what the tool does in a business context, in what sequence tools must be called, what side effects a given API invocation will trigger, or what the consequences of an incorrect parameter will be. A naive MCP implementation connecting to SAP OData services can call a tool. It cannot run a business process.

The token consumption data from production agentic deployments is instructive. For illustration, a query asking for an employee's manager and traversing the list of peers in an SAP SuccessFactors system consumed 565,000 tokens under a standard MCP implementation.
The same query under a context-aware implementation consumed 80,000 tokens. That is the difference between a query costing $1.70 and a query costing $0.24 on a single operation, repeated across thousands of daily transactions. The standard MCP implementation is not automation. It is an expensive approximation of automation that fails on complex queries while loading the API surface with traffic it was not designed to carry.

SAP's architecture for open third-party AI integration via A2A

SAP's response to these challenges is not to close the ecosystem but to build the right infrastructure for an open one. That distinction is worth dwelling on. The API policy anchors compliance in documented, co-engineered architectures. The agentic interoperability reference architectures jointly developed with major technology partners are published on the SAP Architecture Center, prioritized by customer demand and updated as new patterns are validated. The bi-directional integration of SAP Joule and Microsoft 365 Copilot is the most visible example of what co-engineered agentic integration looks like in production: two AI systems, from two different vendors, working across each other's application surfaces without either party bypassing the other's security model.

The endorsed path for external AI agent access to SAP is the Agent Gateway via the A2A protocol, with a reference AI Golden Path on the SAP Architecture Center. The SAP Knowledge Graph, the Open Resource Discovery (ORD) specification for metadata, and SAP BDC data products provide the context layer that transforms a protocol connection into a business-capable interaction. SAP also offers governed MCP servers for CAP, UI5, and Fiori Elements, and has indicated its intent to extend this model to additional development environments, including ABAP development. These are not closed doors; they are the right doors. SAP's position in the standards community is that of an active contributor, not a gatekeeper.
SAP is a launch partner of the Agent2Agent (A2A) protocol under the Linux Foundation and holds Gold-level membership in the Agentic AI Foundation, co-chairing the Agent Identity and Trust workstream alongside the organizations that define how AI agents authenticate, authorize, and interoperate across enterprise boundaries. A2A and MCP are not external constraints that SAP is grudgingly accommodating. They are protocols SAP uses internally and is actively hardening through standards work. When community and open-source frameworks meet the security floor that enterprise deployment requires, external integration pathways will follow.

The API policy issued by SAP does not mark the end of openness. The industry has spent two years deploying AI agents against enterprise systems using protocols that the enterprise security community had not finished hardening, against APIs that were never designed for autonomous orchestration, with community tooling that documented attackers had already learned to compromise. Governance was not optional; it was timely.

Anirban Majumdar is Head of the Office of the CTO at SAP.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact [email protected].
Governance, not gatekeeping: How SAP brings enterprise‑grade safety to AI connectivity