Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

Filtered by: anthropic

Screw the money — Anthropic’s $1.5B copyright settlement sucks for writers

Around half a million writers will be eligible for a payday of at least $3,000, thanks to a historic $1.5 billion settlement in a class action lawsuit that a group of authors brought against Anthropic. This landmark settlement marks the largest payout in the history of U.S. copyright law, but this isn’t a victory for authors — it’s yet another win for tech companies. Tech giants are racing to amass as much written material as possible to train their LLMs, which power groundbreaking AI chat pro

Anthropic will pay a record-breaking $1.5 billion to settle copyright lawsuit with authors

Anthropic will pay a record-breaking $1.5 billion to settle a class action lawsuit brought by authors and publishers. The settlement is the largest-ever payout for a copyright case in the United States. The AI company behind the Claude chatbot reached a settlement in the case last week, but terms of the agreement weren't disclosed at the time. Now, The New York Times reports that the 500,000 authors involved in the case will get $3,000 per work. The case has been closely watched as top
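
The headline figures are internally consistent: roughly 500,000 covered works (or authors, as the reports variously put it) at the reported $3,000 minimum per work comes to the reported $1.5 billion. A minimal back-of-envelope check, using only the approximate numbers cited above:

```python
# Rough sanity check of the settlement math reported above.
works = 500_000            # approximate number of works/claims covered by the class
per_work_minimum = 3_000   # reported minimum payout per work, in USD

total = works * per_work_minimum
print(f"${total:,}")       # $1,500,000,000 -> matches the reported $1.5B floor
```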

Anthropic Agrees to Pay Authors at Least $1.5 Billion in AI Copyright Settlement

Anthropic has agreed to pay at least $1.5 billion to settle a lawsuit brought by a group of book authors alleging copyright infringement, an estimated $3,000 per work. In a court motion on Friday, the plaintiffs emphasized that the terms of the settlement are “critical victories” and that going to trial would have been an “enormous” risk. This is the first class action settlement centered on AI and copyright in the United States, and the outcome may shape how regulators and creative industries

Stripe Launches L1 Blockchain: Tempo

Tempo was started by Stripe and Paradigm, with design input from Anthropic, Coupang, Deutsche Bank, DoorDash, Lead Bank, Mercury, Nubank, OpenAI, Revolut, Shopify, Standard Chartered, Visa, and more. If you’re a company with large, real-world economic flows and would like to help shape the future of Tempo, get in touch.

How Anthropic's enterprise dominance fueled its monster $183B valuation

ZDNET's key takeaways: Anthropic is valued at $183 billion after a new funding round. The company currently serves over 300,000 enterprise customers. A marketing emphasis on safety could be a major driving factor. Anthropic is soaring, and the popularity of its tools among enterprise clients is providing much of the lift. The AI start-up announced on Tuesday that its latest funding round raised $13 bill

Anthropic Rakes in $183B Valuation as It Takes on Musk, Altman

Anthropic, the AI startup behind the Claude family of models, has secured a $13 billion Series F financing at a staggering $183 billion post-money valuation, nearly tripling its worth since March. Anthropic is backed by Amazon and Google-parent Alphabet. The company said that the round was led by ICONIQ Capital and co-led by Fidelity and Lightspeed Venture Partners, with institutional heavyweights such as BlackRock, GIC, Qatar Investment Authority, Ontario Teachers’ Pension Plan, and Coatue am

Anthropic raises $13B Series F

Anthropic has completed a Series F fundraising of $13 billion led by ICONIQ. This financing values Anthropic at $183 billion post-money. Along with ICONIQ, the round was co-led by Fidelity Management & Research Company and Lightspeed Venture Partners. The investment reflects Anthropic’s continued momentum and reinforces our position as the leading intelligence platform for enterprises, developers, and power users. Significant investors in this round include Altimeter, Baillie Gifford, affiliate

Anthropic raises $13 billion funding round at $183 billion valuation

Anthropic on Tuesday announced it has closed a $13 billion funding round at a $183 billion post-money valuation, roughly triple what the artificial intelligence startup was worth as of its last raise in March. The most recent funding round was led by Iconiq, Fidelity Management & Research Company and Lightspeed Venture Partners. Other investors including Altimet
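
The "roughly triple" framing checks out against Anthropic's widely reported prior valuation of about $61.5 billion from the March round (a figure assumed here; it is not stated in the excerpt above):

```python
# Sanity check of the valuation jump described above.
march_valuation = 61.5e9   # widely reported post-money valuation from the March raise (assumption)
new_valuation = 183e9      # post-money valuation announced with the $13B Series F

print(round(new_valuation / march_valuation, 2))  # ~2.98, i.e. "roughly triple"
```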

Anthropic is now valued at $183 billion

Anthropic, the AI startup behind Claude and one of OpenAI’s chief competitors, emerged from the holiday weekend with big news: a completed funding round of $13 billion, awarding the company a $183 billion post-money valuation. The company s

Flunking my Anthropic interview again

The Curious Case of Flunking My Anthropic Interview (Again). Here's a vague overview of what just happened: I recently applied for Anthropic's Developer Relations role. My friend who works there gave me a glowing recommendation (thanks again, dude!). I completed their secret take-home assignment. On top of their secret take-home assignment, I independently published diggit.dev and a companion blogpost about my [sincerely] positive experiences with Claude. I was hoping that some unsolicited "ext

OpenAI and Anthropic evaluated each other's models - which ones came out on top

ZDNET's key takeaways: Anthropic and OpenAI ran their own tests on each other's models. The two labs published findings in separate reports. The goal was to identify gaps in order to build better and safer models. The AI race is in full swing, and companies are sprinting to release the most cutting-edge products. Naturally, this has raised concerns about speed compromising proper safety evaluations. A first-of-

Anthropic Settles With Authors Over Pirated Material: What Does That Mean for Other AI Lawsuits?

Anthropic agreed to settle a lawsuit brought by a group of authors alleging that the AI company illegally pirated their copyrighted books to use in training its Claude AI models. On Tuesday, the parties in the lawsuit filed a motion indicating their agreement with the 9th US Circuit Court of Appeals. We don't yet know the terms of the settlement, but we could know more as soon as next week. Justin Nelson, lawyer for the authors, told CNET via email that more information will be announced soon.

Anthropic users face a new choice – opt out or share your chats for AI training

Anthropic is making some big changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company directed us to its blog post on the policy changes when asked about what prompted the move, we’ve formed some theories of our own. But first, what’s changing: Previously, Anthropic didn’t use consumer chat data for model training. Now, the company wants to train its AI systems on user conversations

Malware devs abuse Anthropic’s Claude AI to build ransomware

Anthropic's Claude Code, an agentic coding tool built on its large language models, has been abused by threat actors who used it in data extortion campaigns and to develop ransomware packages. The company says that its tool has also been used in fraudulent North Korean IT worker schemes, to distribute lures for Contagious Interview campaigns, in Chinese APT campaigns, and by a Russian-speaking developer to create malware with advanced evasion capabilities. In another instance of AI-created ransomware, tracked as ‘GTG-5004,’ a UK-b

OpenAI–Anthropic cross-tests expose jailbreak and misuse risks — what enterprises must add to GPT-5 evaluations

OpenAI and Anthropic may often pit their foundation models against each other, but the two companies came together to evaluate each other’s public models to test alignment. The companies said they believed that cross-evaluating accountability and safety would provide more transparency into what these powerful models could do, enabling ente

Anthropic will start training its AI models on chat transcripts

Anthropic will start training its AI models on user data, including new chat transcripts and coding sessions, unless users choose to opt out. It’s also extending its data retention policy to five years — again, for users that don’t choose to

AI firm says its technology weaponised by hackers

US artificial intelligence (AI) company Anthropic says its technology has been "weaponised" by hackers to carry out sophisticated cyber attacks. Anthropic, which makes the chatbot Claude, says its tools were used by hackers "to commit large-scale theft and extortion of personal data". The firm said its AI was used to help write code which carried out cyber-at

Anthropic's Claude Chrome browser extension rolls out - how to get early access

ZDNET's key takeaways: Claude is incorporating AI into a Chrome web browser extension. The closed beta allows users to chat with Claude in a side panel. Anthropic warned early users to use the extension carefully. Claude, Anthropic's AI model, is following Perplexity (with its Comet web browser) and Dia in incorporating AI into the web browser. Anthropic's first effort is a closed beta of a Chrome web browser ext

Huge Number of Authors Stand to Get Paid After Anthropic Agrees to Settle Potentially $1 Trillion Lawsuit

As OpenAI's ChatGPT and its imitators exploded onto the world stage over the past few years, they kicked off a series of legal showdowns that are still working their way through the courts. The New York Times is suing OpenAI. Disney is suing Midjourney. And in a class action case representing potentially millions of writers, book authors are suing Anthropic. All these cases are orbiting around a central question: what do the creators of modern AI systems — which are trained by ingesting vast a

Some teachers are using AI to grade their students, Anthropic finds - why that matters

ZDNET's key takeaways: Anthropic published its Education Report, analyzing educators' Claude usage. Teachers are using Claude to help grade students, a controversial use case. AI companies are doubling down on tools for education. Much of the focus on AI in education is on how students will be affected by AI tools. Many are concerned that the temptation to cheat and AI's erosion of critical thinking skills will diminish the qua

OpenAI and Anthropic conducted safety evaluations of each other's AI systems

Most of the time, AI companies are locked in a race to the top, treating each other as rivals and competitors. Today, OpenAI and Anthropic revealed that they agreed to evaluate the alignment of each other's publicly available systems and shared the results of their analyses. The full reports get pretty technical, but are worth a read for anyone who's following the nuts and bolts of AI development. A broad summary showed some flaws with each company's offerings, as well as revealing pointers for

The Era of AI-Generated Ransomware Has Arrived

As cybercrime surges around the world, new research increasingly shows that ransomware is evolving as a result of widely available generative AI tools. In some cases, attackers are using AI to draft more intimidating and coercive ransom notes and conduct more effective extortion attacks. But cybercriminals’ use of generative AI is rapidly becoming more sophisticated. Researchers from the generative AI company Anthropic today revealed that attackers are leaning on generative AI more heavily—somet

OpenAI co-founder calls for AI labs to safety-test rival models

OpenAI and Anthropic, two of the world’s leading AI labs, briefly opened up their closely guarded AI models to allow for joint safety testing — a rare cross-lab collaboration at a time of fierce competition. The effort aimed to surface blind spots in each company’s internal evaluations and demonstrate how leading AI companies can work together on safety and alignment work in the future. In an interview with TechCrunch, OpenAI co-founder Wojciech Zaremba said this kind of collaboration is increa

Anthropic admits its AI is being used to conduct cybercrime

Anthropic’s agentic AI, Claude, has been "weaponized" in high-level cyberattacks, according to a new report published by the company. It claims to have successfully disrupted a cybercriminal whose "vibe hacking" extortion scheme targeted at least 17 organizations, including some related to healthcare, emergency services and government. Anthropic says the hacker attempted to extort some victims into paying six-figure ransoms to prevent their personal data from being made public, with an "unprec

Claude for Chrome Extension Bakes AI Right Into the Browser

You'll soon be able to integrate Anthropic's chatbot into your online life even more easily. Claude for Chrome, a new extension that implants the AI model right into the web browser, will allow users to analyze and summarize webpages on screen, the company said in a press release on Tuesday. The extension, currently being piloted with 1,000 subscribers on the $200-per-month Max plan, has both analysis and agentic capabilities. Not only will it summarize your emails and ana

Anthropic agrees to settle copyright infringement class action suit - what it means

ZDNET key takeaways: Anthropic is settling a class action lawsuit with three authors. The authors claim Anthropic trained AI on their pirated work. The future of AI and fair use is still unclear. AI startup Anthropic has agreed to settle a class action lawsuit brought by three authors over the tech company's misuse of their work to train its Claude chatbot.

Anthropic Warns of New 'Vibe Hacking' Attacks That Use Claude AI

Anthropic, the company behind the popular AI model Claude, said in a new Threat Intelligence report that it disrupted a "vibe hacking" extortion scheme. In the report, the company detailed how the attack was carried out, allowing hackers to scale up a mass attack against 17 targets, including entities in government, healthcare, emergency services and religious organizations. Anthropic says that its Claude AI technology was used as both a "techni