
GitHub Copilot shifts to usage-based pricing June 1 - why that's no surprise

Why This Matters

GitHub Copilot is shifting to a usage-based pricing model starting June 1, 2026, reflecting its evolution into a more advanced, agentic platform with higher computational demands. This change aims to ensure long-term service sustainability but may lead to increased costs for users, especially those engaging in extensive coding sessions. The move signifies a broader industry trend towards more transparent, consumption-based AI service pricing.


ZDNET's key takeaways

GitHub shifts pricing for its flagship Copilot service.

Under the new AI Credit approach, if you run out of credits, you can't use the service.

Users who expect to see far higher prices already hate the deal.

It's been an open secret that people haven't been paying anything like the full cost for their AI services. The bill's finally coming due. GitHub announced that as of June 1, 2026, all GitHub Copilot plans will shift to usage-based billing.

This is a radical change from its current premium request unit (PRU) system. Going forward, users will draw down monthly allotments of GitHub AI Credits based on token consumption: input, output, and cached tokens, billed at published API rates. In other words, GitHub is moving to a token-based pricing model.
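To make the token-based model concrete, here is a minimal sketch of how a per-request credit charge could be computed from token counts. The per-token rates below are illustrative placeholders, not GitHub's actual published prices, and `credits_used` is a hypothetical helper, not part of any GitHub API:

```python
# Hypothetical sketch of usage-based billing: credits consumed per request
# are the sum of each token type multiplied by its per-token rate.
# The rates below are made-up placeholders, not GitHub's actual prices.

RATES = {           # credits per 1,000 tokens (assumed values)
    "input": 0.30,
    "output": 1.20,
    "cached": 0.075,  # cached input tokens are typically billed at a discount
}

def credits_used(input_tokens: int, output_tokens: int, cached_tokens: int) -> float:
    """Return AI Credits consumed for one request under the assumed rates."""
    return (
        input_tokens / 1000 * RATES["input"]
        + output_tokens / 1000 * RATES["output"]
        + cached_tokens / 1000 * RATES["cached"]
    )

# Example: a request with 8k input, 2k output, and 20k cached tokens
print(round(credits_used(8_000, 2_000, 20_000), 2))  # 2.4 + 2.4 + 1.5 = 6.3
```

The practical consequence is that long agentic sessions, which push many tokens through the model across multiple steps, consume credits much faster than single autocomplete requests.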

Smart people saw this coming. A week ago, GitHub blocked users from getting a new GitHub Copilot subscription. GitHub also began restricting the models available in its individual subscription plans and dropped access to Opus models entirely. Price increases were clearly on their way.

Why? According to GitHub, it's no longer the same service. What was once a smart programming editor has evolved into "an agentic platform capable of running long, multi-step coding sessions, using the latest models, and iterating across entire repositories." On top of that, "Agentic usage is becoming the default, and it brings significantly higher compute and inference demands."

... continue reading