Following OpenAI’s big week filled with open models and GPT-5, Anthropic is on a streak of its own with AI announcements.

Bigger prompts, bigger possibilities

The company today revealed that Claude Sonnet 4 now supports up to 1 million tokens of context in the Anthropic API — a five-fold increase over the previous 200,000-token limit. This expanded “long context” capability allows developers to feed far larger datasets into Claude in a single request. Anthropic says the 1M-token window can handle entire codebases with more than 75,000 lines of code, or dozens of lengthy research papers at once.

Use cases include large-scale code analysis that considers every file, test, and piece of documentation; synthesis of massive document collections like contracts or technical specs; and context-aware AI agents that can maintain coherence across hundreds of tool calls and multi-step workflows.

The upgrade is available in public beta for Anthropic API customers with Tier 4 or custom rate limits, as well as through Amazon Bedrock. Support for Google Cloud’s Vertex AI is “coming soon.” Pricing doubles for prompts over 200,000 tokens, though Anthropic notes that prompt caching and batch processing can cut costs by up to 50 percent.

Early adopters share results

Anthropic highlighted two customers already using the feature: Bolt.new, which integrates Claude into its browser-based development platform, and iGent AI, whose Maestro agent turns conversations into code. Both say the 1M-token window enables larger, more accurate, and more autonomous coding workflows.

Anthropic announced yesterday that Claude is gaining its own memory feature, allowing users to reference earlier conversations, including by pulling the exact context of a past chat into a new one. A week ago, Anthropic released Claude Opus 4.1, a small but useful model upgrade.

Mac users can download Claude for macOS here. Anthropic also has iPhone and iPad apps for Claude.
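For developers curious what using the long-context beta might look like, here is a minimal sketch with the Anthropic Python SDK: it packs a set of source files into a single large prompt, then opts the request into the 1M-token window via a beta header. The model ID (`claude-sonnet-4-20250514`) and beta flag (`context-1m-2025-08-07`) are assumptions for illustration, not details confirmed by this article.

```python
# Sketch: sending a whole codebase to Claude Sonnet 4 in one long-context request.
# ASSUMPTIONS (not from the article): the "anthropic-beta" header value
# "context-1m-2025-08-07" enables the 1M-token window, and the Sonnet 4
# model ID is "claude-sonnet-4-20250514".
import os

MODEL = "claude-sonnet-4-20250514"            # assumed model ID
LONG_CONTEXT_BETA = "context-1m-2025-08-07"   # assumed beta flag


def build_request(files: dict[str, str], question: str) -> dict:
    """Concatenate every file into one prompt so the model sees the full codebase."""
    corpus = "\n\n".join(
        f'<file path="{path}">\n{text}\n</file>' for path, text in files.items()
    )
    return {
        "model": MODEL,
        "max_tokens": 2048,
        "messages": [
            {"role": "user", "content": f"{corpus}\n\n{question}"},
        ],
    }


request = build_request(
    {"src/main.py": "print('hello')"},
    "Summarize what this codebase does.",
)

if os.environ.get("ANTHROPIC_API_KEY"):
    # Requires `pip install anthropic`; the extra header opts into the beta.
    import anthropic

    client = anthropic.Anthropic()
    reply = client.messages.create(
        **request,
        extra_headers={"anthropic-beta": LONG_CONTEXT_BETA},
    )
    print(reply.content[0].text)
```

Since prompts over 200,000 tokens are billed at double the rate, a setup like this pairs naturally with the prompt caching and batch processing options the article mentions for recouping up to half the cost on repeated runs.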