Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more


The Unbearable Slowness of AI Coding

19 Aug, 2025. I’ve been coding entirely with Claude Code for the past two months. At first it was exhilarating. I was speeding through tasks. I was committing like mad. Now, as I’ve built up a fairly substantial app, it’s slowed to a crawl. Ironically, the app I’m building lets me parallelize many instances of Claude Code at once. Often, I’ll have 5 instances running while I’m thinking about new features. The slowness comes in when I actually need to revi…

Topics: app, Claude Code, coding

Enterprise Claude gets admin, compliance tools—just not unlimited usage

A few weeks after announcing rate limits for Claude and the popular Claude Code, Anthropic will offer Claude Enterprise and Teams customers upgrades to access more usage and Claude Code in a single subscription. The upgrades will also include more admin controls and a new Compliance API that will give enterprises “access to usage data and…

Anthropic bundles Claude Code into enterprise plans

Anthropic on Wednesday announced a new subscription offering that will incorporate Claude Code into Claude for Enterprise. Previously available only through individual accounts, Anthropic’s command-line coding tool can now be purchased as part of a broader enterprise suite, allowing for more sophisticated integrations and more powerful admin tools. “This is the most requested feature from our business team and enterprise customers,” Anthropic product lead Scott White told TechCrunch. The integ…

Docker container for running Claude Code in "dangerously skip permissions" mode

Claude Code Container: a Docker container for running Claude Code in "dangerously skip permissions" mode. Build the Docker container and execute run_claude.sh to run an isolated version of Claude Code with access to the current working dir (readOnly: /workspace/input).

/workspace/
├── input/   # Host input files (read-only mount of $PWD)
├── output/  # Analysis results (writable mount to host)
├── data/    # Reference data (optional read-only mount)
├── temp/    # Temporary file…

Apple preps native Claude integration on Xcode

Towards the end of this year’s WWDC keynote, Craig Federighi said that Apple had “expanded” their vision for Swift Assist, and would bring native integration with ChatGPT, alongside support for other LLMs via API, directly to Xcode. Now, 9to5Mac can confirm that Apple is set to support native integration with Anthropic’s Claude as well. Digging into today’s release of Xcode 26 beta 7, 9to5Mac found multiple references to built-in support for Anthropic accounts directly within the new “Intellige…

Claude AI will end ‘persistently harmful or abusive user interactions’

Anthropic’s Claude AI chatbot can now end conversations deemed “persistently harmful or abusive,” as spotted earlier by TechCrunch. The capability is now available in Opus 4 and 4.1 models, and will allow the chatbot to end conversations as a “last resort” after…

Anthropic's Claude AI now has the ability to end 'distressing' conversations

Anthropic's latest feature for two of its Claude AI models could be the beginning of the end for the AI jailbreaking community. The company announced in a post on its website that the Claude Opus 4 and 4.1 models now have the power to end a conversation with users. According to Anthropic, this feature will only be used in "rare, extreme cases of persistently harmful or abusive user interactions." To clarify, Anthropic said those two Claude models could exit harmful conversations, like "requests…

Anthropic: Claude can now end conversations to prevent harmful uses

OpenAI rival Anthropic says Claude has been updated with a rare new feature that allows the AI model to end conversations when it determines they are harmful or abusive. This only applies to Claude Opus 4 and 4.1, the two most powerful models available via paid plans and API. On the other hand, Claude Sonnet 4, which is the company's most used model, won't be getting this feature. Anthropic describes this move as a matter of "model welfare." "In pre-deployment testing of Claude Opus 4, we included a pr…

Anthropic says some Claude models can now end ‘harmful or abusive’ conversations

Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as “rare, extreme cases of persistently harmful or abusive user interactions.” Strikingly, Anthropic says it’s doing this not to protect the human user, but rather the AI model itself. To be clear, the company isn’t claiming that its Claude AI models are sentient or can be harmed by their conversations with users. In its own words, Anthropic remains “hig…

Using AI to secure AI

One of Anthropic's quieter releases recently was their "Security Review," where Claude Code can identify and fix security issues in your code. But how good is it really? In my case, will it find issues with code it helped me write for my newsletter service and Chrome extension? The release states it uses a "specialized security-focused prompt that checks for common vulnerability patterns." After throwing so much compute at model training, LLMs are nearing the top of the S-Curve, so finding ways…

Anthropic has new rules for a more dangerous AI landscape

Anthropic has updated the usage policy for its Claude AI chatbot in response to growing concerns about safety. In addition to introducing stricter cybersecurity rules, Anthropic now specifies some of the most dangerous weapons that people should not develop usin…

Anthropic takes on OpenAI and Google with new Claude AI features designed for students and developers

Anthropic is launching new “learning modes” for its Claude AI assistant that transform the chatbot from an answer-dispensing tool into a teaching companion, as major technology companies race to capture the rapidly growing artificial intelligence education market while addressing mounting concerns that AI undermines genuine learning. The S…

Anthropic brings Claude's learning mode to regular users and devs

This past spring, Anthropic introduced learning mode, a feature that changed Claude's interaction style. When enabled, the chatbot would, following a question, try to guide the user to their own solution, instead of providing them with an answer outright. Since its introduction in April, learning mode has only been available to Claude for Education users. Now, like OpenAI did with Study Mode, Anthropic is making the tool available to everyone. Starting today, Claude.ai users will find a new opt…

Claude just learned a useful ChatGPT trick

Anthropic has introduced a helpful new feature for Claude that solves a problem similar to one ChatGPT already addressed. As of today, Claude is capable of referencing information from your other conversations with the AI chatbot. Anthropic demonstrates how the feature works: Claude can now reference past chats, so you can easily pick up from where you left off. pic.twitter.com/n9ZgaTRC1y — Claude (@claudeai) August 11, 2025 The new Claude feature matches OpenAI’s ChatGPT memory feature. An…

Claude gets 1M tokens support via API to take on Gemini 2.5 Pro

Claude Sonnet 4 has been upgraded, and it can now remember up to 1 million tokens of context, but only when it's used via API. This could change in the future. This is 5x more than the previous limit. It also means that Claude now supports remembering over 75,000 lines of code, or even hundreds of documents in a single session. Previously, you were required to submit details to Claude in small chunks, but that also meant Claude would forget the context as it hit the limit. With up to a 1 milli…

Claude Sonnet 4 now supports 1M tokens of context

Claude Sonnet 4 now supports up to 1 million tokens of context on the Anthropic API—a 5x increase that lets you process entire codebases with over 75,000 lines of code or dozens of research papers in a single request. Long context support for Sonnet 4 is now in public beta on the Anthropic API and in Amazon Bedrock, with Google Cloud’s Vertex AI coming soon. Longer context, more use cases With longer context, developers can run more comprehensive and data-intensive use cases with Claude, incl…
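For developers, opting into the larger window amounts to one extra request flag. As a rough sketch only (the `anthropic-beta` header value and the model id below are assumptions; verify both against Anthropic's current API documentation), a long-context request could be assembled like this:

```python
def build_long_context_request(codebase: str) -> tuple[dict, dict]:
    """Assemble headers and body for one Messages API call that carries
    an entire codebase (up to ~1M tokens) as context in a single request."""
    headers = {
        # Beta flag enabling the 1M-token window -- an assumed value;
        # check Anthropic's API docs before relying on it.
        "anthropic-beta": "context-1m-2025-08-07",
    }
    body = {
        "model": "claude-sonnet-4-20250514",  # assumed model id
        "max_tokens": 4096,
        "messages": [
            {"role": "user",
             "content": f"Review this codebase:\n\n{codebase}"},
        ],
    }
    return headers, body
```

Nothing here is actually sent; the sketch only shows where the beta flag and the oversized prompt would go when POSTing to the Messages endpoint.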

You can now feed Claude Sonnet 4 entire codebases at once

Following OpenAI’s big week filled with open models and GPT-5, Anthropic is on a streak of its own with AI announcements. Bigger prompts, bigger possibilities The company today revealed that Claude Sonnet 4 now supports up to 1 million tokens of context in the Anthropic API — a five-fold increase over the previous limit. This expanded “long context” capability allows developers to feed far larger datasets into Claude in a single request. Anthropic says the 1M-token window can handle entire…

Claude can now save you more time by automatically referencing past chats

ZDNET’s key takeaways: Claude can now be prompted to reference past user interactions; the feature rolls out today to Max, Team, and Enterprise users; it’ll be turned on by default, but you can also switch it off. Claude just got a major memory upgrade: Anthropic’s flagship generative AI chatbot can now retrieve information from past conversations, the company announced Monday. The new feature is designed to enable a more streamlined, convenient, and personalized user…

Claude Sonnet's memory gets a big boost with 1M tokens of context

ZDNET’s key takeaways: Claude Sonnet 4 now has one million context tokens; as a result, the model can process much larger developer tasks; developers can access it now, but API pricing does increase for certain requests. We all have that friend who is a great active listener and can recall details from past interactions, which then feeds into better conversations in the future. Similarly, AI models have context windows that impact how much content they can reference -- an…

Claude vs. Gemini: Testing on 1M Tokens of Context

Today, Anthropic is releasing a version of Claude Sonnet 4 that has a 1-million token context window. That’s approximately the entire extant set of Harry Potter books in each prompt. We got early access last week, so you know we had to put it to the test. We did three main tests on Claude Sonnet 4: Long context text analysis: We hid two movie scenes in 1 million tokens of context, and asked Claude to find those scenes and…
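That retrieval test is easy to reproduce in miniature: hide a few short "needle" passages inside a large body of filler and ask the model to locate them. A minimal haystack builder, sketched under stated assumptions (the filler token, seed, and sizes are arbitrary choices, not the newsletter's actual harness):

```python
import random

def build_haystack(scenes: list[str], filler_token: str = "lorem",
                   total_tokens: int = 1_000_000) -> str:
    """Hide short 'needle' passages inside roughly total_tokens words of
    filler, mimicking a long-context retrieval test."""
    needle_words = sum(len(s.split()) for s in scenes)
    filler = [filler_token] * (total_tokens - needle_words)
    rng = random.Random(42)  # fixed seed so needle positions are reproducible
    for scene in scenes:
        # Insert each scene as a single chunk at a random position.
        filler.insert(rng.randrange(len(filler)), scene)
    return " ".join(filler)
```

The resulting string can then be sent as one prompt, with the model asked to quote back each hidden scene and its surroundings.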

Claude can now process entire software projects in single request, Anthropic says

Anthropic announced Tuesday that its Claude Sonnet 4 artificial intelligence model can now process up to 1 million tokens of context in a single request — a fivefold increase that allows developers to analyze entire software projects or dozens of research papers without breaking them into smaller chunks. The expansion, available now in pub…

Anthropic’s Claude AI model can now handle longer prompts

Anthropic is increasing the amount of information that enterprise customers can send to Claude in a single prompt, part of an effort to attract more developers to the company’s popular AI coding models. For Anthropic’s API customers, the company’s Claude Sonnet 4 AI model now has a one million token context window — meaning the AI can handle requests as long as 750,000 words, more than the entire Lord of the Rings trilogy, or 75,000 lines of code. That’s roughly five times Claude’s previous lim…

Anthropic takes aim at OpenAI, offers Claude to ‘all three branches of government’ for $1

Just a week after OpenAI announced it would offer ChatGPT Enterprise to the entire federal executive branch workforce at $1 per year per agency, Anthropic has raised the stakes. The AI giant said Tuesday it would also offer its Claude models to government agencies for just $1 – but not only to the executive branch. Anthropic is targeting “all three branches” of the U.S. government, including the legislative and judiciary branches. The package will be available for one year, says Anthropic. The…

Claude can now reference past chats, if you want it to

Claude is getting a better, if selective, memory. Rather than acting as a perfect catalog of everything you've talked about or shared, Anthropic says the AI chatbot now has the ability to reference past chats when asked, so you don't have to re-explain yourself. The feature seems like it could help you pick up a work project after time away, or query Claude for the details of a past research session that you don't quite remember. The key point is that Claude has to be prompted: it doesn't call on…

Anthropic’s Claude chatbot can now remember your past conversations

On Monday, Anthropic released a hotly anticipated memory function for its Claude chatbot. In a YouTube video, the company demonstrated a user asking what they had been chatting about with Claude before their vacation. Claude searches past c…

Optimizing my sleep around Claude usage limits

For the past 4 weeks I have been managing my sleep schedule around maximizing the usage of my Claude Pro subscription. Every five hours your Claude session usage is reset. When I first began using Claude Code daily, this limit would often come at inconvenient times. I would be in flow, vibing with Claude on my B2B SaaS side project. Unfortunately, just as I was stepping into Claude's mind like I'm Raz, the dreaded usage warning would appear. Your limit will reset at 7am. But I was just getting t…
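The window arithmetic behind that kind of scheduling is simple to sketch. Assuming the five-hour window starts at your first message and repeats back to back (an assumption; Anthropic's exact reset mechanics may differ), the next reset time is:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=5)  # Claude Pro session usage resets every five hours

def next_reset(window_start: datetime, now: datetime) -> datetime:
    """Return when the usage window containing `now` ends, assuming
    windows begin at `window_start` and repeat every five hours."""
    if now < window_start:
        raise ValueError("now precedes the start of the first window")
    # timedelta // timedelta floor-divides to an int count of full windows
    elapsed_windows = (now - window_start) // WINDOW
    return window_start + (elapsed_windows + 1) * WINDOW
```

For example, a session whose first message landed at 2:00, checked at 9:30, is in its second window and resets at 12:00.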

The current state of LLM-driven development

I spent the past ~4 weeks trying out all the new and fancy AI tools for software development. Let’s get a few things out of the way:

- Learning how to use LLMs in a coding workflow is trivial. There is no learning curve. You can safely ignore them if they don’t fit your workflows at the moment.
- LLMs won’t magically make you deliver production-ready code.
- If you can’t read the code and spot issues, they’re hard to use past the PoC stage.
- They have terrible code organization skills, making them los…