Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

You can now feed Claude Sonnet 4 entire codebases at once

Following OpenAI’s big week filled with open models and GPT-5, Anthropic is on a streak of its own with AI announcements. The company today revealed that Claude Sonnet 4 now supports up to 1 million tokens of context in the Anthropic API — a five-fold increase over the previous limit. This expanded “long context” capability allows developers to feed far larger datasets into Claude in a single request. Anthropic says the 1M-token window can handle entire codebases…

Anthropic just made its latest move in the AI coding wars

The AI coding wars are heating up. One of the main battlegrounds? “Context windows,” or an AI model’s working memory — the amount of text it can take into account when it’s coming up with an answer. On that front, Anthropic just gained some…

Claude Sonnet's memory gets a big boost with 1M tokens of context

Sabrina Ortiz/ZDNET. ZDNET’s key takeaways: Claude Sonnet 4 now has a one-million-token context window; as a result, the model can process much larger developer tasks; developers can access it now, but API pricing does increase for certain requests. We all have that friend who is a great active listener and can recall details from past interactions, which then feeds into better conversations in the future. Similarly, AI models have context windows that impact how much content they can reference…

Claude can now process entire software projects in single request, Anthropic says

Anthropic announced Tuesday that its Claude Sonnet 4 artificial intelligence model can now process up to 1 million tokens of context in a single request — a fivefold increase that allows developers to analyze entire software projects or dozens of research papers without breaking them into smaller chunks. The expansion, available now in…

Anthropic’s Claude AI model can now handle longer prompts

Anthropic is increasing the amount of information that enterprise customers can send to Claude in a single prompt, part of an effort to attract more developers to the company’s popular AI coding models. For Anthropic’s API customers, the company’s Claude Sonnet 4 AI model now has a one-million-token context window — meaning the AI can handle requests as long as 750,000 words, more than the entire Lord of the Rings trilogy, or 75,000 lines of code. That’s roughly five times Claude’s previous limit…
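As a rough sanity check of the scale these articles describe, here is a sketch that estimates whether a source tree fits in a 1M-token window. It assumes the common four-characters-per-token heuristic; real tokenizers vary by language and content, so treat the numbers as ballpark only.

```python
import os

CONTEXT_LIMIT = 1_000_000   # the 1M-token window described in the articles
CHARS_PER_TOKEN = 4         # rough heuristic; real tokenizers vary

def estimate_tokens(root: str, exts=(".py", ".js", ".ts")) -> int:
    """Walk a source tree and roughly estimate its token count."""
    total_chars = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                try:
                    with open(os.path.join(dirpath, name), encoding="utf-8",
                              errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    continue
    return total_chars // CHARS_PER_TOKEN

def fits_in_context(root: str) -> bool:
    """True if the tree would plausibly fit in one long-context request."""
    return estimate_tokens(root) <= CONTEXT_LIMIT
```

By this heuristic, the quoted 75,000 lines of code (at a typical 50-ish characters per line) lands right around the 1M-token mark, which matches the comparison in the article.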

From terabytes to insights: Real-world AI observability architecture

Consider maintaining and developing an e-commerce platform that processes millions of transactions every minute, generating large amounts of telemetry data, including metrics, logs, and traces across multiple microservices. When critical incidents occur, on-call engineers face the daunting task of sifting through an ocean of data to unravel…

Programming with AI: You're Probably Doing It Wrong

2025 is the year of Artificial Intelligence. With GPT-5 just released, many developers will re-evaluate their use of large language models for assisting in their daily work. I’m here to tell you: you’re probably doing it wrong, and you’re missing out on the real power that AI-assisted development can give you. What “doing it wrong” looks like: let’s kick off with a (non-exhaustive) list of symptoms that you’re using your AI coding assistant wrong…

AI must RTFM: Why tech writers are becoming context curators

I’ve been noticing a trend among developers who use AI: they are increasingly writing and structuring docs in context folders so that the AI-powered tools they use can build solutions autonomously and with greater accuracy. They now strive to understand information architecture, semantic tagging, and docs markup. All of a sudden they’ve discovered docs, so they write more than they code. Because AI must RTFM now. It’s docs-driven development…

The tradeoff between human and AI context

AI coding is a skill. You have to decide how much context to put in your brain versus the AI. You can waste your time thinking about the wrong problem because you failed to delegate, or you can give yourself a headache when the AI coder doesn’t get it. I think about it in terms of a spectrum of human-to-AI context. At the highest levels, we humans own all the context. We operate here when our specific value-add matters. We also work here in the many cases where AI coders aren’t that smart yet. At the low…

Principles for production AI agents

Every now and then, people ask me: “I am new to agentic development, I’m building something, but I feel like I'm missing some tribal knowledge. Help me catch up!”. I’m tempted to suggest some serious stuff like multiweek courses (e.g. by HuggingFace or Berkeley), but not everyone is interested in that level of diving. So I decided to gather six simple empirical learnings that helped me a lot during app.build development. This post is somewhat inspired by Design Decisions Behind app.build, but

Modular Interpreters and Visitors in Rust with Extensible Variants and CGP

Programming Extensible Data Types in Rust with CGP, Part 2: Modular Interpreters and Extensible Visitors. Posted on 2025-07-09 by Soares Chen. This is the second part of the blog series on Programming Extensible Data Types in Rust with CGP. You can read the first part here. As a recap, we have covered the new release of CGP v0.4.2, which now supports the use of extensible records and variants, allowing developers to write code that operates on an…

Context Engineering Guide

What is Context Engineering? A few years ago, many people, even top AI researchers, claimed that prompt engineering would be dead by now. Obviously, they were very wrong; in fact, prompt engineering is now more important than ever. It is so important that it is being rebranded as context engineering. Yes, it’s another fancy term to describe the important process of tuning the instructions and relevant context that an LLM needs to perform its tasks effectively. Much has been written already…

Context is a native macOS app that was almost entirely written by AI

Like many image and video AI tools, which have (mostly) stopped creating people with six fingers, AI coding tools have also been making great strides. Case in point: developer Indragie Karunaratne just shipped Context, a native macOS app that was 95% built by Anthropic’s Claude Code. For the better part of the last year, Anthropic has pulled away from the pack when it comes to how good its Claude models are at generating code (to be fair…

Building a Mac app with Claude code

I recently shipped Context, a native macOS app for debugging MCP servers. The goal was to build a useful developer tool that feels at home on the platform, powered by Apple’s SwiftUI framework. I’ve been building software for the Mac since 2008, but this time was different: Context was almost 100% built by Claude Code. There is still skill and iteration involved in helping Claude build software, but of the 20,000 lines of code in this project, I estimate that I wrote less than 1,000 lines by hand…

Prompting LLMs is not engineering

With the proliferation of AI models and tools, there’s a new industry-wide fascination with snake-oil remedies called “prompt engineering.” As of July 2025, the term is now “context engineering” or “context prompting” or “context manipulation.” To put it succinctly, prompt engineering is nothing but an attempt to reverse-engineer a non-deterministic black box for which any of the parameters below are unknown: training set, weights, constraints, o…

Context Engineering for Agents

Lance Martin. TL;DR: Agents need context to perform tasks. Context engineering is the art and science of filling the context window with just the right information at each step of an agent’s trajectory. In this post, I group context engineering into a few common strategies seen across many popular agents today. As Andrej Karpathy puts it, LLMs are like a new kind of operating system: the LLM is like the CPU and its context window is like the RAM, serving as the model’s working memory…
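The core idea in the excerpt, packing “just the right information” into a limited window at each step, can be sketched as a budget-aware context assembler. This is an illustrative toy, not Martin’s method: it scores candidate snippets by word overlap with the task and greedily packs them under a token budget using a crude characters-per-token estimate.

```python
def assemble_context(task: str, snippets: list[str], budget_tokens: int) -> str:
    """Greedily pack the most task-relevant snippets into a token budget."""
    task_words = set(task.lower().split())

    def score(snippet: str) -> int:
        # Toy relevance: count of words shared with the task description
        return len(task_words & set(snippet.lower().split()))

    def tokens(text: str) -> int:
        # Crude chars-per-token heuristic; a real system would use a tokenizer
        return len(text) // 4 + 1

    chosen, used = [], 0
    for snip in sorted(snippets, key=score, reverse=True):
        cost = tokens(snip)
        if used + cost <= budget_tokens:
            chosen.append(snip)
            used += cost
    return "\n\n".join(chosen)
```

Production agents replace the word-overlap score with embeddings or retrieval, but the shape is the same: rank, then pack until the budget runs out.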

LLMs as Compilers

7/2/2025, by Kadhir. So far, I’ve only used LLMs as an assistant, where I’m doing something and an LLM helps me along the way. Code autocomplete feels like a great example of how useful it can be when it gets it right. I don’t doubt that over time this will improve, but I’m excited to see a more significant transition from this assistant mode to a compiler mode, at least for coding. It will be exciting when we focus solely on the context we feed the LLM, then test the features…

Soldier’s wrist purse discovered at Roman legionary camp

Archaeologists have discovered a fragment of a soldier’s wrist purse at the site of a temporary Roman camp in South Moravia, Czech Republic. The camp was established by the 10th Legion, which was stationed in the area between AD 172 and 180 during the Marcomannic Wars, a campaign against the Germanic Marcomanni, the Quadi, and the Sarmatian Iazyges. The find is especially significant because it was uncovered outside the traditional boundaries of the Roman Empire. “It is quite…

The new skill in AI is not prompting, it's context engineering

June 30, 2025, 5 minute read. Context engineering is a new term gaining traction in the AI world. The conversation is shifting from “prompt engineering” to a broader, more powerful concept: context engineering. Tobi Lutke describes it as “the art of providing all the context for the task to be plausibly solvable by the LLM,” and he is right. With the rise of agents, what information we load into the “limited working memory” becomes more important. We are seeing that the main thing that determines…

Claude Code for VSCode

Claude Code Extension for VS Code. IMPORTANT: this plugin requires Claude Code to be installed separately. For more information, see claude.ai/code. Claude Code seamlessly integrates with popular integrated development environments (IDEs) to enhance your coding workflow. This integration allows you to leverage Claude’s capabilities directly within your preferred development environment. Features include auto-installation: when you launch Claude Code from within VS Code’s terminal, it automatically det…

From LLM to AI Agent: What's the Real Journey Behind AI System Development?

AI agents are a hot topic, but not every AI system needs to be one. While agents promise autonomy and decision-making power, simpler and more cost-effective solutions better serve many real-world use cases. The key lies in choosing the right architecture for the problem at hand. In this post, we’ll explore recent developments in large language models (LLMs) and discuss key concepts of AI systems. We’ve worked with LLMs across projects of varying complexity, from zero-shot prompting to chain-of-thought…

MCP Specification – version 2025-06-18 changes

This document lists changes made to the Model Context Protocol (MCP) specification since the previous revision, 2025-03-26. Among other schema changes, it adds a _meta field to additional interface types (PR #710) and specifies its proper usage; adds a context field to CompletionRequest, allowing completion requests to include previously-resolved variables (PR #598); and adds a title field for human-friendly display names, so that name can be used as a programmatic identifier (PR #663)…
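To make the context field from PR #598 concrete, here is a sketch of what a completion/complete request carrying a previously-resolved variable might look like, shown as a Python dict standing in for the JSON-RPC body. The prompt name and argument values are hypothetical, and the exact field shape should be checked against the 2025-06-18 schema rather than taken from this sketch.

```python
# Hypothetical completion/complete request illustrating the new `context`
# field (previously-resolved variables) from the 2025-06-18 MCP revision.
completion_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "completion/complete",
    "params": {
        "ref": {"type": "ref/prompt", "name": "deploy"},       # hypothetical prompt
        "argument": {"name": "environment", "value": "stag"},  # value being completed
        "context": {
            # Variables the client has already resolved in this session,
            # letting the server offer completions consistent with them
            "arguments": {"region": "us-east-1"},
        },
    },
}
```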

Writing documentation for AI: best practices

Retrieval-Augmented Generation (RAG) systems like Kapa rely on your documentation to provide accurate, helpful information. When documentation serves both humans and machines well, it creates a self-reinforcing loop of content quality: clear documentation improves AI answers, and those answers help surface gaps that further improve the docs. This guide provides best practices for creating documentation that works effectively for both human readers and AI/LLM consumption in RAG systems. Many best practices…
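One practice this kind of guide typically recommends, keeping sections self-contained under clear headings so a retriever can serve them in isolation, can be sketched as a minimal heading-based chunker. This is an illustration of the idea, not Kapa’s actual pipeline.

```python
def chunk_by_heading(markdown: str) -> list[dict]:
    """Split a markdown doc into heading-scoped chunks for RAG indexing."""
    chunks, current = [], {"heading": "", "body": []}

    def flush():
        # Emit the accumulated section, skipping the empty initial state
        if current["body"] or current["heading"]:
            chunks.append({"heading": current["heading"],
                           "text": "\n".join(current["body"]).strip()})

    for line in markdown.splitlines():
        if line.startswith("#"):
            flush()
            current = {"heading": line.lstrip("#").strip(), "body": []}
        else:
            current["body"].append(line)
    flush()
    return chunks
```

Each chunk carries its heading alongside its text, so a retriever can show the reader (or the LLM) where an answer came from.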

MiniMax-M1 is a new open source model with 1 MILLION TOKEN context and new, hyper efficient reinforcement learning

Chinese AI startup MiniMax, perhaps best known in the West for its hit realistic AI video model Hailuo, has released its latest large language model, MiniMax-M1 — and in great news for enterprises and developers, it’s completely open source under an Apache 2.0 license, meaning businesses can take it and use it for commercial applications…

Groq just made Hugging Face way faster — and it’s coming for AWS and Google

Groq, the artificial intelligence inference startup, is making an aggressive play to challenge established cloud providers like Amazon Web Services and Google with two major announcements that could reshape how developers access high-performance AI models. The company announced Monday that it now supports Alibaba’s Qwen3 32B language model…