Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

Threads now has a better word filter than Instagram

Threads has taken another step towards decoupling from Instagram by introducing its own word blocking filters. Instagram head Adam Mosseri announced on Thursday that the Hidden Words setting on Threads includes “newly added custom filters to block words, phrases, and emojis in batches, with optional time limits.” A previous version of Hidden Words was tied to Instagram, meaning you could only mute the same words and phrases across both platforms. This change enables users to tailor what they wa

Calculating the Fibonacci numbers on GPU

21 Jun, 2025. In this blog post we will show how to perform very fast calculation of the Fibonacci sequence using GPU programming. We will employ Thrust, an NVIDIA library that uses concepts from modern C++ to make GPU programming easy. Introduction: Scan is one of the fundamental examples of a parallelizable algorithm. If you are interested in the foundations of the algorithm, I refer you to a previous blog post where I implemented scan i
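
The excerpt stops before the implementation, so here is a minimal sketch (my own illustration, not the post's code) of the standard scan-based trick, assuming CUDA and Thrust are available: the identity [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]] turns Fibonacci into a prefix product of 2x2 matrices, and because matrix multiplication is associative, thrust::inclusive_scan can compute F(1)..F(n) in parallel. The type and functor names (Mat2, MatMul) are illustrative only.

#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/scan.h>
#include <cstdint>
#include <cstdio>

// 2x2 matrix of unsigned 64-bit integers, stored row-major as (a b / c d).
struct Mat2 {
    std::uint64_t a, b, c, d;
};

// Matrix product: the associative binary operator handed to inclusive_scan.
struct MatMul {
    __host__ __device__
    Mat2 operator()(const Mat2& x, const Mat2& y) const {
        return { x.a * y.a + x.b * y.c,  x.a * y.b + x.b * y.d,
                 x.c * y.a + x.d * y.c,  x.c * y.b + x.d * y.d };
    }
};

int main() {
    const int n = 20;                        // F(1)..F(20); uint64_t overflows past F(93)
    const Mat2 q{1, 1, 1, 0};                // the Fibonacci Q-matrix
    thrust::device_vector<Mat2> m(n, q);     // n copies of Q in GPU memory

    // After the scan, m[k] holds Q^(k+1); the prefix products run on the device.
    thrust::inclusive_scan(m.begin(), m.end(), m.begin(), MatMul{});

    thrust::host_vector<Mat2> h = m;         // copy results back to the host
    for (int k = 0; k < n; ++k) {
        // The upper-right entry of Q^(k+1) is F(k+1).
        std::printf("F(%d) = %llu\n", k + 1, (unsigned long long)h[k].b);
    }
    return 0;
}

Compiled with nvcc, this does all n matrix multiplies through a single parallel scan; for indices beyond F(93) the 64-bit values overflow, so larger runs would need modular or big-integer arithmetic.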

Anthropic summons the spirit of Flash games for the AI age

On Wednesday, Anthropic announced a new feature that expands its Artifacts document management system into the basis of a personal AI app gallery resembling something from the Flash game era of the early 2000s—though these apps run on modern web code rather than Adobe's defunct plugin. Using plain English dialogue, users can build and share interactive applications directly within Claude's chatbot interface using a new API capability that lets artifacts interact with Claude itself. Claude is an

Threads now has a Hidden Words setting that's separate from Instagram

This lets users filter out phrases they don't want to see in feeds, searches, profiles and replies. Threads and Instagram are continuing to decouple. Meta's social network has updated its Hidden Words setting to make it separate from Instagram. Prior to this update, users had one global Hidden Words setting that impacted both platforms. For the uninitiated, the Hidden Words setting lets users filter out stuff they don't want to see. The setting can be applied to posts, feeds, searches, profile

People use AI for companionship much less than we’re led to believe

The overabundance of attention paid to how people are turning to AI chatbots for emotional support, sometimes even striking up relationships, often leads one to think such behavior is commonplace. A new report by Anthropic, which makes the popular AI chatbot Claude, reveals a different reality: In fact, people rarely seek out companionship from Claude and turn to the bot for emotional support and personal advice only 2.9% of the time. “Companionship and roleplay combined comprise less than 0.5

Threads now lets you manage Hidden Words separately from Instagram, set time limits

Threads, Meta’s competitor to X, now has its own Hidden Words setting that operates separately from Instagram. The feature allows you to filter out posts that contain words, phrases, or emojis you don’t want to see in feeds, search, profiles, and replies. Previously, Hidden Words was tied to Instagram, so the filters that you entered would be applied to content on both platforms. Now, you can create a separate list of Hidden Words on Threads to further personalize the content that you see on th

Anthropic destroyed millions of physical books to train its AI, court documents reveal

WTF?! Generative AI has already faced sharp criticism for its well-known issues with reliability, its massive energy consumption, and the unauthorized use of copyrighted material. Now, a recent court case reveals that training these AI models has also involved the large-scale destruction of physical books. Buried in the details of a recent split ruling against Anthropic is a surprising revelation: the generative AI company destroyed millions of physical books by cutting off their bindings and d

A Review of Aerospike Nozzles: Current Trends in Aerospace Applications

The design of rocket nozzles aims to expand the combustion gases until the exit pressure matches the ambient pressure, thereby maximizing thrust. However, ambient pressure decreases as the rocket ascends, posing a challenge for conventional nozzles, which operate efficiently only at a specific altitude [30]. By adjusting their flow characteristics to the varying atmospheric pressure, aerospike nozzles offer a significant advantage in this regard. To quantify this impact, the thrust coefficient
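
The excerpt breaks off at the thrust coefficient; as a reference point (standard rocket-propulsion notation, assumed here rather than taken from the paper), the thrust coefficient normalizes thrust by chamber pressure and throat area:

% F = thrust, p_c = chamber pressure, A_t = throat area, A_e = exit area,
% p_e = nozzle exit pressure, p_a = ambient pressure, \gamma = ratio of specific heats
C_F = \frac{F}{p_c A_t}
    = \sqrt{\frac{2\gamma^{2}}{\gamma-1}
      \left(\frac{2}{\gamma+1}\right)^{\frac{\gamma+1}{\gamma-1}}
      \left[1-\left(\frac{p_e}{p_c}\right)^{\frac{\gamma-1}{\gamma}}\right]}
    + \frac{p_e-p_a}{p_c}\,\frac{A_e}{A_t}

The last term is the pressure-thrust penalty: it vanishes when p_e = p_a, which is exactly the single design altitude at which a conventional bell nozzle is optimal, while an aerospike lets the ambient pressure bound the flow so the penalty stays small as p_a drops during ascent.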

Anthros Chair V2 Review: Surprisingly Great

It's rare for me to keep sitting on a chair I'm reviewing well after I've given it enough testing time. Usually, I want to hop back on my Herman Miller Embody, which feels just right for my body. But the Anthros V2 has been a pleasant surprise. It's been on my radar for several months, thanks to endless Instagram marketing reels, but honestly, those just made me even more skeptical. Anthros is a newcomer to the scene, only launching the first version of the chair in 2023. It makes grand claims

Anthropic now lets you make apps right from its Claude AI chatbot

Anthropic is adding a new feature to its Claude AI chatbot that lets you build AI-powered apps right inside the app. The upgrade, launching in beta, builds upon Anthropic’s Artifacts feature introduced last year that lets you see and interact with what you ask Claude to make. “Start building in the Claude app by enabling this new interactive capability,” the company says in a

Hackers abuse Microsoft ClickOnce and AWS services for stealthy attacks

A sophisticated malicious campaign that researchers call OneClik has been leveraging Microsoft’s ClickOnce software deployment tool and custom Golang backdoors to compromise organizations within the energy, oil, and gas sectors. The hackers rely on legitimate AWS cloud services (CloudFront, API Gateway, Lambda) to keep the command and control (C2) infrastructure hidden. ClickOnce is a deployment technology from Microsoft that allows developers to create self-updating Windows-based applica

These Are Our Favorite Supplements for Joint Health in 2025

While "there's not a ton of evidence out there to firmly say one supplement is going to help you over another," Mysore said, glucosamine likely has the most evidence backing its use. Glucosamine naturally occurs in our bodies -- it's in your cartilage and helps your joints function. A glucosamine supplement is believed to help with arthritis in that it can bring down some of the pain brought on by osteoarthritis or rheumatoid arthritis. According to the Arthritis Foundation, glucosamine is commo

Anthropic destroyed millions of print books to build its AI models

On Monday, court documents revealed that AI company Anthropic spent millions of dollars physically scanning print books to build Claude, an AI assistant similar to ChatGPT. In the process, the company cut millions of print books from their bindings, scanned them into digital files, and threw away the originals solely for the purpose of training AI—details buried in a copyright ruling whose broader fair use implications we reported yesterday. The 32-page legal decision tells the stor

Anthropic just made every Claude user a no-code app developer

Anthropic announced Wednesday that it will transform its Claude AI assistant into a platform for creating interactive, shareable applications, marking a significant evolution from conversational chatbots toward functional software tools that users can build and distribute without coding knowledge. The San Francisco-based AI company reveal

Anthropic makes it easier to create and share Claude's bite-sized Artifact apps

Last August, Anthropic released Artifacts. The feature allows Claude users to create small, AI-programmed apps for their own use. Today, Anthropic is making it easier to share Artifacts. At the same time, it's making the apps you can make with the feature more powerful. To start, Artifacts now have their own dedicated space you can access from the Claude app sidebar. Here you'll find a curated selection of projects made by other people to get you started on your own programs. Every Artifact you

Anthropic launches new AI feature to build your own customizable chatbots

Anthropic, the American startup company that produces the Claude family of generative artificial intelligence programs, on Wednesday said users can now make full-fledged applications using the "artifacts" function in Claude, and choose from a curated list of pre-built apps others have made. Artifacts, which were introduced in June of last year and made generally available in August, allow objects you make at the prompt — a picture, a diagram — to be displayed in their own separa

Federal court says AI training on books is fair use, but sends Anthropic to trial over pirated copies

What just happened? A federal court has delivered a split decision in a high-stakes copyright case that could reshape the future of artificial intelligence development. US District Judge William Alsup ruled that Anthropic's use of copyrighted books to train its Claude AI system qualifies as lawful "fair use" under copyright law, marking a significant victory for the AI industry. However, the judge simultaneously ordered the company to face trial this December for allegedly building a "central l

BreachForums hacking forum operators reportedly arrested in France

The French police have reportedly arrested five operators of the BreachForums cybercrime forum, a website used by cybercriminals to leak and sell stolen data that exposed the sensitive information of millions. News of the arrests comes from Le Parisien, which claims the law enforcement operation was carried out by the cybercrime unit (BL2C) of the Paris police department on Monday. According to reporters, the police carried out simultaneous raids in the regions of Hauts-de-Seine (Paris), Seine-M

Judge backs AI firm over use of copyrighted books

A US judge has ruled that using books to train artificial intelligence (AI) software is not a violation of US copyright law. The decision came out of a lawsuit brought last year against AI firm Anthropic by three writers (a novelist and two non-fiction authors) who accused the firm of stealing their work to train its Claude AI model and build a multi-bi

Judge OKs Anthropic's Use of Copyrighted Books in AI Training. That's Bad News for Creators

Anthropic's use of copyright-protected books in its AI training process was "exceedingly transformative" and fair use, US senior district judge William Alsup ruled on Monday. It's the first time a judge has decided in favor of an AI company on the issue of fair use, in a significant win for generative AI companies and a blow for creators. Fair use is a doctrine that's part of US copyright law. It's a four-part test that, when the criteria are met, lets people and companies use protected content

Key fair use ruling clarifies when books can be used for AI training

Artificial intelligence companies don't need permission from authors to train their large language models (LLMs) on legally acquired books, US District Judge William Alsup ruled Monday. The first-of-its-kind ruling that condones AI training as fair use will likely be viewed as a big win for AI companies, but it also notably put on notice all the AI companies that expect the same reasoning will apply to training on pirated copies of books—a question that remains unsettled. In the specific case

Judge rules Anthropic's AI training on copyrighted materials is fair use

Anthropic has received a mixed result in a class action lawsuit brought by a group of authors who claimed the company used their copyrighted creations without permission. On the positive side for the artificial intelligence company, senior district judge William Alsup of the US District Court for the Northern District of California determined that Anthropic's training of its AI tools on copyrighted works was protected as fair use. Developing large language models for artificial intelligence has

A federal judge sides with Anthropic in lawsuit over training AI on books

Federal judge William Alsup ruled that it was legal for Anthropic to train its AI models on published books without the authors’ permission. This marks the first time that the courts have given credence to AI companies’ claim that fair use doctrine can absolve AI companies from fault when they use copyrighted materials to train large language models (LLMs). This decision comes as a blow to authors, artists, and publishers who have brought dozens of lawsuits against companies like OpenAI, Meta,

Anthropic Scores a Landmark AI Copyright Win—but Will Face Trial Over Piracy Claims

Anthropic has scored a major victory in an ongoing legal battle over artificial intelligence models and copyright, one that may reverberate across the dozens of other AI copyright lawsuits winding through the legal system in the United States. A court has determined that it was legal for Anthropic to train its AI tools on copyrighted works, arguing that the behavior is shielded by the “fair use” doctrine, which allows for unauthorized use of copyrighted materials under certain conditions. “The

Judge rules Anthropic did not violate authors' copyrights with AI book training

Anthropic's use of books to train its artificial intelligence model Claude was "fair use" and "transformative," a federal judge ruled late on Monday. Amazon-backed Anthropic's AI training did not violate the authors' copyrights since the large language models "have not reproduced to the public a given work's creative elements, nor even one author's identifiable

Anthropic wins a major fair use victory for AI — but it’s still in trouble for stealing books

A federal judge has sided with Anthropic in an AI copyright case, ruling that training — and only training — its AI models on legally purchased books without authors’ permission is fair use. It’s a first-of-its-kind ruling in favor of the AI industry, but it’s importantly limited specifically to physical books Anthropic purchased and digitized. Jud
