Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

Anthropic Rakes in $183B Valuation as It Takes on Musk, Altman

Anthropic, the AI startup behind the Claude family of models, has secured a $13 billion Series F financing at a staggering $183 billion post-money valuation, nearly tripling its worth since March. Anthropic is backed by Amazon and Google-parent Alphabet. The company said the round was led by ICONIQ Capital and co-led by Fidelity and Lightspeed Venture Partners, with institutional heavyweights such as BlackRock, GIC, Qatar Investment Authority, Ontario Teachers’ Pension Plan, and Coatue…

They know where you are: Cybersecurity and the shadow world of geolocation

Tony Soprano knew. When one of his fellow poker players in season 5, episode 4 of The Sopranos asks Tony how he likes his new Cadillac Escalade, the fictional mobster responds, “I love it. After I pulled out that global positioning [system].” OK, his language was a little spicier than “system,” but the point is that Tony knew the dangers of being trackable. The rest of us might not have the same concerns Tony had about being findable just about anywhere, but we should all realize how dangerous…

Anthropic raises $13B Series F

Anthropic has completed a Series F fundraising of $13 billion led by ICONIQ. This financing values Anthropic at $183 billion post-money. Along with ICONIQ, the round was co-led by Fidelity Management & Research Company and Lightspeed Venture Partners. The investment reflects Anthropic’s continued momentum and reinforces our position as the leading intelligence platform for enterprises, developers, and power users. Significant investors in this round include Altimeter, Baillie Gifford…

Anthropic raises $13 billion funding round at $183 billion valuation

Dario Amodei, Anthropic CEO, speaking on CNBC's Squawk Box outside the World Economic Forum in Davos, Switzerland, on Jan. 21, 2025. Anthropic on Tuesday announced it has closed a $13 billion funding round at a $183 billion post-money valuation, roughly triple what the artificial intelligence startup was worth as of its last raise in March. The most recent funding round was led by Iconiq, Fidelity Management & Research Company and Lightspeed Venture Partners. Other investors include Altimeter…

Anthropic is now valued at $183 billion

Anthropic, the AI startup behind Claude and one of OpenAI’s chief competitors, emerged from the holiday weekend with big news: a completed funding round of $13 billion, awarding the company a $183 billion post-money valuation…

Palo Alto Networks data breach exposes customer info, support cases

Palo Alto Networks suffered a data breach that exposed customer data and support cases after attackers abused compromised OAuth tokens from the Salesloft Drift breach to access its Salesforce instance. The company states that it was one of hundreds of companies affected by a supply-chain attack disclosed last week, in which threat actors abused the stolen authentication tokens to exfiltrate data. BleepingComputer learned of the breach this weekend from Palo Alto Networks' customers…

Amazon disrupts Russian APT29 hackers targeting Microsoft 365

Researchers have disrupted an operation attributed to the Russian state-sponsored threat group Midnight Blizzard, which sought access to Microsoft 365 accounts and data. Also known as APT29, the hacker group compromised websites in a watering hole campaign to redirect selected targets "to malicious infrastructure designed to trick users into authorizing attacker-controlled devices through Microsoft’s device code authentication flow." The Midnight Blizzard threat actor has been linked to Russia…

C++: Strongly Happens Before?

Strongly happens before? It started innocently enough. I just wanted to brush up on C++ memory orderings. It’s been a while since I last stared into the abyss of std::atomic, so I figured, why not revisit some good ol’ std::memory_order mayhem? Then I saw it. Strongly happens before. Wait, what? When did we get a stronger version of happens before? Turns out, it has been there for quite some time (since C++20, in fact), and it’s actually solving a very real problem in the memory model…

Leak suggests new Philips Hue lights will have direct Matter support

There have already been a number of leaks of upcoming Philips Hue products expected to be announced next week ahead of IFA. But one thing that hasn’t been mentioned is support for Matter-over-Thread. While there’s no confirmation that support is coming, there’s compelling evidence to suggest it might be. First off, packaging for two unannounced bulbs appeared on Amazon, with a Matter logo prominently displayed on the box. While Hue devices have been capable of connecting via Matter using…

Flunking my Anthropic interview again

The Curious Case of Flunking My Anthropic Interview (Again). Here's a vague overview of what just happened: I recently applied for Anthropic's Developer Relations role. My friend who works there gave me a glowing recommendation (thanks again, dude!). I completed their secret take-home assignment. On top of that, I independently published diggit.dev and a companion blog post about my [sincerely] positive experiences with Claude. I was hoping that some unsolicited…

Taco Bell's Attempt to Replace Drive-Thru Employees With AI Is Not Going Well

That distant ringing? It's the sound of the Taco Bell death knell, tolling for the restaurant chain's shambolic AI-powered drive-thrus. We exaggerate, but only a little. The much-maligned tech experiment, which has been deployed at 500 Taco Bell locations across the United States, isn't quite dead yet. But it's received enough backlash since being unleashed on hangry motorists that even one of the company's top executives is having second thoughts. "We're learning a lot, I'm going to be honest…"

OpenAI and Anthropic evaluated each other's models - which ones came out on top

ZDNET's key takeaways: Anthropic and OpenAI ran their own tests on each other's models. The two labs published findings in separate reports. The goal was to identify gaps in order to build better and safer models. The AI race is in full swing, and companies are sprinting to release the most cutting-edge products. Naturally, this has raised concerns about speed compromising proper safety evaluations…

Anthropic Settles With Authors Over Pirated Material: What Does That Mean for Other AI Lawsuits?

Anthropic agreed to settle a lawsuit brought by a group of authors alleging that the AI company illegally pirated their copyrighted books to use in training its Claude AI models. On Tuesday, the parties in the lawsuit filed a motion indicating their agreement with the 9th US Circuit Court of Appeals. We don't yet know the terms of the settlement, but we could know more as soon as next week. Justin Nelson, lawyer for the authors, told CNET via email that more information will be announced soon.

Anthropic users face a new choice – opt out or share your chats for AI training

Anthropic is making some big changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company directed us to its blog post on the policy changes when asked about what prompted the move, we’ve formed some theories of our own. But first, what’s changing: previously, Anthropic didn’t use consumer chat data for model training. Now, the company wants to train its AI systems on user conversations…

Threads is testing long-form posts with support for formatted text

While Threads already allows up to 500 characters per post (which is more than enough for casual users used to the microblogging format), it is now testing support for long-form posts through “text attachments”. Here’s how it works. Meta has confirmed the test, but has no ETA for the feature. As spotted by app researcher Radu Oncescu (via TechCrunch), Threads is testing a new “text attachment” feature on iOS, which could replace the common practice of stringing together multiple posts…

Meta is experimenting with long-form text on Threads

Meta seems to be working on ways for Threads users to share long-form writing within a single post. Several users have reported seeing a new "attach text" feature on the service, which allows them to embed large chunks of text within a single post. The feature, which hasn't been formally announced by Meta, is similar to the "articles" feature that's available on X to Premium+ subscribers. It enables Threads users to embed longer text excerpts within a single Threads post and offers some basic formatting…

Malware devs abuse Anthropic’s Claude AI to build ransomware

Anthropic's Claude Code large language model has been abused by threat actors who used it in data extortion campaigns and to develop ransomware packages. The company says that its tool has also been used in fraudulent North Korean IT worker schemes, to distribute lures for Contagious Interview campaigns, in Chinese APT campaigns, and by a Russian-speaking developer to create malware with advanced evasion capabilities. In another instance, tracked as ‘GTG-5004,’ a UK-based…

Threads tests a way to share long-form text on the platform

Threads is testing a new feature that makes it easy to share long-form text on the social network, Meta confirmed to TechCrunch on Thursday. The feature lets users attach a block of text to a post instead of creating a thread of several different posts when looking to share more in-depth thoughts and ideas. App researcher Radu Oncescu first spotted the new “text attachment” feature on iOS and shared a screenshot of it. According to the app’s description of the new feature, it’s designed to allow…

OpenAI–Anthropic cross-tests expose jailbreak and misuse risks — what enterprises must add to GPT-5 evaluations

OpenAI and Anthropic may often pit their foundation models against each other, but the two companies came together to evaluate each other’s public models to test alignment. The companies said they believed that cross-evaluating accountability and safety would provide more transparency into what these powerful models could do, enabling enterprises…

Anthropic will start training its AI models on chat transcripts

Anthropic will start training its AI models on user data, including new chat transcripts and coding sessions, unless users choose to opt out. It’s also extending its data retention policy to five years — again, for users who don’t choose to…

AI firm says its technology weaponised by hackers

By Imran Rahman-Jones, technology reporter. US artificial intelligence (AI) company Anthropic says its technology has been "weaponised" by hackers to carry out sophisticated cyber attacks. Anthropic, which makes the chatbot Claude, says its tools were used by hackers "to commit large-scale theft and extortion of personal data". The firm said its AI was used to help write code which carried out cyber-attacks…

Anthropic's Claude Chrome browser extension rolls out - how to get early access

ZDNET's key takeaways: Claude is coming to a Chrome web browser extension. The closed beta allows users to chat with Claude in a side panel. Anthropic warned early users to use the extension carefully. Claude, Anthropic's AI model, is following Perplexity's Comet web browser and Dia in incorporating AI into the browser. Anthropic's first effort is a closed beta of a Chrome web browser extension…

Huge Number of Authors Stand to Get Paid After Anthropic Agrees to Settle Potentially $1 Trillion Lawsuit

As OpenAI's ChatGPT and its imitators exploded onto the world stage over the past few years, they kicked off a series of legal showdowns that are still working their way through the courts. The New York Times is suing OpenAI. Disney is suing Midjourney. And in a class action case representing potentially millions of writers, book authors are suing Anthropic. All these cases are orbiting around a central question: what do the creators of modern AI systems — which are trained by ingesting vast amounts…

Some teachers are using AI to grade their students, Anthropic finds - why that matters

ZDNET's key takeaways: Anthropic published its Education Report, analyzing educators' Claude usage. Teachers are using Claude to help grade students, a controversial use case. AI companies are doubling down on tools for education. Much of the focus on AI in education is on how students will be affected by AI tools. Many are concerned that the temptation to cheat and AI's erosion of critical thinking skills will diminish the quality…

OpenAI and Anthropic conducted safety evaluations of each other's AI systems

Most of the time, AI companies are locked in a race to the top, treating each other as rivals and competitors. Today, OpenAI and Anthropic revealed that they agreed to evaluate the alignment of each other's publicly available systems and shared the results of their analyses. The full reports get pretty technical, but are worth a read for anyone who's following the nuts and bolts of AI development. A broad summary showed some flaws with each company's offerings, as well as revealing pointers for…