Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

Filtered by: anthropic

The AI Hype Index: AI-designed antibiotics show promise

Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Using AI to improve our health and well-being is one of the areas scientists and researchers are most excited about. The last month has seen an interesting leap forward: The technology has been put to work designing new antibiotics to fight hard-to-treat conditions, and OpenAI and Anthropic have …

‘Vibe-hacking’ is now a top AI threat

“Agentic AI systems are being weaponized.” That’s one of the first lines of Anthropic’s new Threat Intelligence report, out today, which details the wide range of cases in which Claude — and likely many other leading AI agents and chatbots …

Anthropic launches Claude for Chrome in limited beta, but prompt injection attacks remain a major concern

Anthropic has begun testing a Chrome browser extension that allows its Claude AI assistant to take control of users’ web browsers, marking the company’s entry into an increasingly crowded and potentially risky arena where artificial intelligence systems can directly manipulate computer interfaces. The San Francisco-based AI company …

Anthropic Will Settle Lawsuit With Authors Over Pirated AI Training Materials

Anthropic agreed to settle a lawsuit brought by a group of authors alleging that the AI company illegally pirated their copyrighted books to use in training its AI models. The parties in the lawsuit filed a motion indicating the agreement with the 9th US Circuit Court of Appeals on Tuesday. We don't yet know the terms of the settlement. Justin Nelson, lawyer for the authors, told CNET via email that more information will be announced soon. "This historic settlement will benefit all class member …

Authors celebrate “historic” settlement coming soon in Anthropic class action

Authors are celebrating a "historic" settlement expected to be reached soon in a class-action lawsuit over Anthropic's AI training data. On Tuesday, US District Judge William Alsup confirmed that Anthropic and the authors "believe they have a settlement in principle" and will file a motion for preliminary approval of the settlement by September 5. The settlement announcement comes after Alsup certified what AI industry advocates criticized as the largest copyright class action of all time.

Anthropic reaches a settlement over authors' class-action piracy lawsuit

Anthropic has settled a class-action lawsuit brought by a group of authors for an undisclosed sum. The move means the company will avoid the potentially more costly ruling it might have faced had the case regarding its use of copyrighted materials to train artificial intelligence tools moved forward. In June, Judge William Alsup handed down a mixed result in the case, ruling that Anthropic's move to train LLMs on copyrighted materials constituted fair use. However, the company's illegal and unpaid acquisition of …

LiteLLM (YC W23) is hiring a back end engineer

TLDR: LiteLLM is an open-source LLM Gateway with 27K+ stars on GitHub, trusted by companies like NASA, Rocket Money, Samsara, Lemonade, and Adobe. We’re rapidly expanding and seeking a founding full-stack engineer to help scale the platform. We’re based in San Francisco. What is LiteLLM? LiteLLM provides an open-source Python SDK and Python FastAPI server that allows calling 100+ LLM APIs (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic) in the OpenAI format. We have raised a $1.6M seed …
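The "OpenAI format" mentioned in the listing is the unified chat-completion request shape a gateway like LiteLLM accepts for every backend. A minimal sketch of that shape, with no network call — the "provider/model" strings below are illustrative examples of the routing convention, not verified model identifiers:

```python
# Sketch of the OpenAI-style chat request that an LLM gateway normalizes
# across providers. This only builds the payload; no API is called.
# The model strings are illustrative of the "provider/model" convention.

def openai_style_request(model: str, user_prompt: str) -> dict:
    """Build an OpenAI-format chat payload for any gateway-routed backend."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
    }

# The same payload shape works whether the gateway routes to Anthropic,
# Azure, Bedrock, or OpenAI itself -- only the model string changes.
requests = [
    openai_style_request(m, "Summarize this changelog.")
    for m in ("anthropic/claude-sonnet", "azure/gpt-4o", "openai/gpt-4o")
]
print(len(requests))  # 3
```

The point of the shared shape is that switching providers is a one-string change rather than a per-vendor SDK rewrite.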

Anthropic settles AI book piracy lawsuit

Anthropic has settled a class action lawsuit with a group of US authors who accused the AI startup of copyright infringement. In a legal filing on Tuesday, Anthropic says it has negotiated a “proposed class settlement,” allowing it to skip a trial that would …

Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors

Anthropic has reached a preliminary settlement in a class action lawsuit brought by a group of prominent authors, marking a major turn in one of the most significant ongoing AI copyright lawsuits in history. The move will allow Anthropic to avoid what could have been a financially devastating outcome in court. The settlement agreement is expected to be finalized September 3, with more details to follow, according to a legal filing published on Tuesday. Lawyers for the plaintiffs did not immediately …

Anthropic settles AI book-training lawsuit with authors

In Brief: Anthropic has settled a class action lawsuit with a group of fiction and nonfiction authors, as announced in a filing on Tuesday with the Ninth Circuit Court of Appeals. Anthropic had won a partial victory in a lower court ruling and was in the process of appealing that ruling. No details of the settlement were made public, and Anthropic did not immediately respond to a request for comment. Called Bartz v. Anthropic, the case deals with Anthropic’s use of books as training material …

Open the pod bay doors, Claude

It’s a well-worn trope in science fiction. We see it in Stanley Kubrick’s 1968 movie 2001: A Space Odyssey. It’s the premise of the Terminator series, in which Skynet triggers a nuclear holocaust to stop scientists from shutting it down. Those sci-fi roots go deep. AI doomerism, the idea that this technology—specifically its hypothetical upgrades, artificial general intelligence and super-intelligence—will crash civilizations, even kill us all, is now riding another wave. The weird thing is …

You can learn AI for free with these new courses from Anthropic

ZDNET's key takeaways: Students and teachers can now try Anthropic's three new free AI courses. Anthropic also appointed a Higher Education Advisory Board. The industry at large is investing in making AI accessible to students. This year's back-to-school season nearly guarantees a new classmate: AI. Some form of the tech is now baked into most products -- it's more ubiquitous than ever. Anthropic's new education initiative …

Microsoft AI chief says it’s ‘dangerous’ to study AI consciousness

AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn’t exactly make them conscious. It’s not like ChatGPT experiences sadness doing my tax return… right? Well, a growing number of AI researchers at labs like Anthropic are asking when — if ever — AI models might develop subjective experiences similar to living beings, and if they do, what rights should they have? The debate over whether AI models could …

Anthropic bundles Claude Code into enterprise plans

Anthropic on Wednesday announced a new subscription offering that will incorporate Claude Code into Claude for Enterprise. Previously available only through individual accounts, Anthropic’s command-line coding tool can now be purchased as part of a broader enterprise suite, allowing for more sophisticated integrations and more powerful admin tools. “This is the most requested feature from our business team and enterprise customers,” Anthropic product lead Scott White told TechCrunch.

In Xcode 26, Apple shows first signs of offering ChatGPT alternatives

The latest Xcode beta contains clear signs that Apple plans to bring Anthropic's Claude and Opus large language models into the integrated development environment (IDE), expanding on features already available using Apple's own models or OpenAI's ChatGPT. Apple enthusiast publication 9to5Mac "found multiple references to built-in support for Anthropic accounts," including in the "Intelligence" menu, where users can currently log into ChatGPT or enter an API key for higher message limits.

Claude AI will end ‘persistently harmful or abusive user interactions’

Anthropic’s Claude AI chatbot can now end conversations deemed “persistently harmful or abusive,” as spotted earlier by TechCrunch. The capability is now available in Opus 4 and 4.1 models, and will allow the chatbot to end conversations as a “last resort” after …

Anthropic's Claude AI now has the ability to end 'distressing' conversations

Anthropic's latest feature for two of its Claude AI models could be the beginning of the end for the AI jailbreaking community. The company announced in a post on its website that the Claude Opus 4 and 4.1 models now have the power to end a conversation with users. According to Anthropic, this feature will only be used in "rare, extreme cases of persistently harmful or abusive user interactions." To clarify, Anthropic said those two Claude models could exit harmful conversations, like "requests …

Anthropic: Claude can now end conversations to prevent harmful uses

OpenAI rival Anthropic says Claude has been updated with a rare new feature that allows the AI model to end conversations it judges to be harmful or abusive. This only applies to Claude Opus 4 and 4.1, the two most powerful models available via paid plans and the API. Claude Sonnet 4, the company's most used model, won't be getting this feature. Anthropic describes this move as a matter of "model welfare." "In pre-deployment testing of Claude Opus 4, we included a …

Anthropic says some Claude models can now end ‘harmful or abusive’ conversations

Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as “rare, extreme cases of persistently harmful or abusive user interactions.” Strikingly, Anthropic says it’s doing this not to protect the human user, but rather the AI model itself. To be clear, the company isn’t claiming that its Claude AI models are sentient or can be harmed by their conversations with users. In its own words, Anthropic remains …

Anthropic has new rules for a more dangerous AI landscape

Anthropic has updated the usage policy for its Claude AI chatbot in response to growing concerns about safety. In addition to introducing stricter cybersecurity rules, Anthropic now specifies some of the most dangerous weapons that people should not develop …

Anthropic takes on OpenAI and Google with new Claude AI features designed for students and developers

Anthropic is launching new “learning modes” for its Claude AI assistant that transform the chatbot from an answer-dispensing tool into a teaching companion, as major technology companies race to capture the rapidly growing artificial intelligence education market while addressing mounting concerns that AI undermines genuine learning.

Anthropic brings Claude's learning mode to regular users and devs

This past spring, Anthropic introduced learning mode, a feature that changed Claude's interaction style. When enabled, the chatbot would, following a question, try to guide the user to their own solution instead of providing them with an answer outright. Since its introduction in April, learning mode has only been available to Claude for Education users. Now, as OpenAI did with Study Mode, Anthropic is making the tool available to everyone. Starting today, Claude.ai users will find a new …

Anthropic nabs Humanloop team as competition for enterprise AI talent heats up

Anthropic has acquired the co-founders and most of the team behind Humanloop – a platform for prompt management, LLM evaluation, and observability – in a push to strengthen its enterprise strategy. The terms of the deal were not shared, but it appears to follow the acqui-hire playbook we’re increasingly seeing in the tech industry amid the war for AI talent. Humanloop’s three co-founders – CEO Raza Habib, CTO Peter Hayes, and CPO Jordan Burgess – have all joined Anthropic, alongside around a …

Claude just learned a useful ChatGPT trick

Anthropic has introduced a helpful new feature for Claude that solves a problem similar to one ChatGPT already addressed. As of today, Claude is capable of referencing information from your other conversations with the AI chatbot. Anthropic demonstrates how the feature works: “Claude can now reference past chats, so you can easily pick up from where you left off.” — Claude (@claudeai), August 11, 2025. The new Claude feature matches OpenAI’s ChatGPT memory feature.

You can now feed Claude Sonnet 4 entire codebases at once

Following OpenAI’s big week filled with open models and GPT-5, Anthropic is on a streak of its own with AI announcements. Bigger prompts, bigger possibilities: The company today revealed that Claude Sonnet 4 now supports up to 1 million tokens of context in the Anthropic API — a five-fold increase over the previous limit. This expanded “long context” capability allows developers to feed far larger datasets into Claude in a single request. Anthropic says the 1M-token window can handle entire …

Anthropic just made its latest move in the AI coding wars

The AI coding wars are heating up. One of the main battlegrounds? “Context windows,” or an AI model’s working memory — the amount of text it can take into account when it’s coming up with an answer. On that front, Anthropic just gained some …

Claude Sonnet's memory gets a big boost with 1M tokens of context

ZDNET's key takeaways: Claude Sonnet 4 now has one million context tokens. As a result, the model can process much larger developer tasks. Developers can access it now, but API pricing does increase for certain requests. We all have that friend who is a great active listener and can recall details from past interactions, which then feeds into better conversations in the future. Similarly, AI models have context windows that impact how much content they can reference …

Claude can now process entire software projects in single request, Anthropic says

Anthropic announced Tuesday that its Claude Sonnet 4 artificial intelligence model can now process up to 1 million tokens of context in a single request — a fivefold increase that allows developers to analyze entire software projects or dozens of research papers without breaking them into smaller chunks. The expansion, available now in …

AI companies are chasing government users with steep discounts

Ever since the launch of ChatGPT, AI companies have been racing to gain a foothold in government in more ways than one. Most recently, that’s meant luring government users with attractive low prices for their products. Within the last week, both OpenAI and Anthropic have introduced special prices for government versions of their generative AI chatbots, ChatGPT and Claude, and xAI announced its Grok for Government in mid-July. OpenAI and Anthropic are both offering their chatbots to …

Anthropic’s Claude AI model can now handle longer prompts

Anthropic is increasing the amount of information that enterprise customers can send to Claude in a single prompt, part of an effort to attract more developers to the company’s popular AI coding models. For Anthropic’s API customers, the company’s Claude Sonnet 4 AI model now has a one-million-token context window — meaning the AI can handle requests as long as 750,000 words, more than the entire Lord of the Rings trilogy, or 75,000 lines of code. That’s roughly five times Claude’s previous limit.
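The articles above pin the new window at 1 million tokens, or roughly 750,000 words. A quick back-of-the-envelope check of whether a corpus fits, using the words-per-token ratio implied by those figures — a coarse heuristic for illustration, not an official tokenizer:

```python
# Rough check of whether a text corpus fits in Claude Sonnet 4's expanded
# context window. The 1M-token limit, the ~750,000-word equivalence, and
# the "roughly five times" old limit all come from the articles above;
# the derived 0.75 words-per-token ratio is a coarse heuristic.

CONTEXT_LIMIT_TOKENS = 1_000_000        # new Sonnet 4 limit
PREVIOUS_LIMIT_TOKENS = 200_000         # roughly one fifth of the new limit
WORDS_PER_TOKEN = 750_000 / 1_000_000   # ratio implied by the coverage

def estimated_tokens(text: str) -> int:
    """Estimate token count from a whitespace-separated word count."""
    return int(len(text.split()) / WORDS_PER_TOKEN)

def fits_in_context(text: str, limit: int = CONTEXT_LIMIT_TOKENS) -> bool:
    """True if the estimated token count is within the given window."""
    return estimated_tokens(text) <= limit

sample = "word " * 600_000  # a hypothetical 600,000-word corpus (~800K tokens)
print(fits_in_context(sample))                         # True: fits in 1M
print(fits_in_context(sample, PREVIOUS_LIMIT_TOKENS))  # False: over old 200K
```

The practical upshot the articles describe: a corpus that previously had to be split into five or more requests can now be sent in one.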