Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

Filtered by: gpu

Shipping WebGPU on Windows in Firefox 141

After years in development, we will be releasing WebGPU on Windows in Firefox 141! WebGPU gives web content a modern interface to the user’s graphics processor, enabling high-performance computation and rendering. We’re excited about WebGPU because we believe it will raise the ceiling for games, visualization, and local computation on the web. You can find a tutorial on WebGPU at webgpufundamentals.org, try out the WebGPU Samples, and read documentation for the API at MDN. WebGPU is defined in the WebGPU specification developed by the W3C’s GPU for the Web working group.
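
For readers new to the API, here is a minimal sketch of what a WebGPU compute call looks like from TypeScript: request an adapter and device, upload a buffer, and dispatch a small WGSL shader. This is an illustration only, not code from Firefox or the linked tutorials; the buffer usage flags, the element-doubling shader, and the navigator.gpu typings (from the @webgpu/types package) are assumptions made for the example.

```ts
// Minimal WebGPU compute sketch (illustrative; assumes a browser with WebGPU enabled
// and the @webgpu/types package for the navigator.gpu typings).
async function doubleOnGpu(input: Float32Array): Promise<void> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU is not available in this browser");
  const device = await adapter.requestDevice();

  // Storage buffer the shader will modify in place.
  const buffer = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST | GPUBufferUsage.COPY_SRC,
  });
  device.queue.writeBuffer(buffer, 0, input);

  // WGSL compute shader: each invocation doubles one element.
  const module = device.createShaderModule({
    code: `
      @group(0) @binding(0) var<storage, read_write> data: array<f32>;
      @compute @workgroup_size(64)
      fn main(@builtin(global_invocation_id) id: vec3<u32>) {
        if (id.x < arrayLength(&data)) {
          data[id.x] = data[id.x] * 2.0;
        }
      }
    `,
  });

  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: { module, entryPoint: "main" },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer } }],
  });

  // Record and submit a single compute pass.
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(input.length / 64));
  pass.end();
  device.queue.submit([encoder.finish()]);
}
```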

GPUHammer: Rowhammer attacks on GPU memories are practical

Chris (Shaopeng) Lin†, Joyce Qu†, and Gururaj Saileshwar, University of Toronto († equal contribution). Published at USENIX Security 2025 (link to paper); artifact available on GitHub and Zenodo. GPUHammer is the first attack to show Rowhammer bit flips on GPU memories, specifically on a GDDR6 memory in an NVIDIA A6000 GPU. Our attacks induce bit flips across all tested DRAM banks, despite in-DRAM defenses like TRR, using user-level CUDA code.

Topics: bit dram flips gpu memory

Razer revives its eGPU line with a Thunderbolt 5 dock

Razer is back with a new addition to its Core line of external graphics enclosures. The Razer Core X V2 external graphics enclosure can house recent GPUs from brands including NVIDIA GeForce and AMD Radeon. It uses a single Thunderbolt 5 cable to the host device; in its press materials, Razer claims this tech delivers up to twice the bandwidth of Thunderbolt 4. This is true, but Thunderbolt 5 still isn't on the level of a modern desktop GPU connection, being roughly equivalent to 8 lanes of PCIe
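
As a rough back-of-the-envelope comparison, assuming Thunderbolt 4's 40 Gbps link, Thunderbolt 5's 80 Gbps symmetric link, and a PCIe 4.0 x16 desktop slot (real-world throughput is lower once protocol overhead and PCIe tunneling limits are accounted for):

$$
\underbrace{2 \times 40\ \mathrm{Gbps}}_{\text{TB4}\ \rightarrow\ \text{TB5}} = 80\ \mathrm{Gbps} \approx 10\ \mathrm{GB/s}
\qquad \text{vs.} \qquad
\underbrace{16\ \mathrm{GT/s} \times 16\ \text{lanes} \approx 256\ \mathrm{Gbps}}_{\text{PCIe 4.0 x16}} \approx 32\ \mathrm{GB/s}
$$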

Nvidia chips become the first GPUs to fall to Rowhammer bit-flip attacks

Nvidia is recommending a mitigation for customers of one of its GPU product lines that will degrade performance by up to 10 percent in a bid to protect users from exploits that could let hackers sabotage work projects and possibly cause other compromises. The move comes in response to an attack a team of academic researchers demonstrated against Nvidia’s RTX A6000, a widely used GPU for high-performance computing that’s available from many cloud services. A vulnerability the researchers discovered

NVIDIA shares guidance to defend GDDR6 GPUs against Rowhammer attacks

NVIDIA is warning users to activate System Level Error-Correcting Code mitigation to protect against Rowhammer attacks on graphics processors with GDDR6 memory. The company is reinforcing the recommendation as new research demonstrates a Rowhammer attack against an NVIDIA A6000 GPU (graphics processing unit). Rowhammer is a hardware fault that can be triggered through software processes and stems from memory cells being too close to each other. The attack was demonstrated on DRAM cells but i

The best laptops for gaming and schoolwork in 2025

The line between laptops built for gaming and those built for heavy-duty computing work has all but disappeared. These days, you don’t have to look too hard to find a not-so-flashy gaming notebook that could easily sit alongside MacBook Pros in a classroom. After all, a powerful CPU, v

Tyr, a new Rust DRM driver targeting CSF-based ARM Mali GPUs

The last year has seen substantial progress on the DRM infrastructure required to write GPU drivers in Rust. While a great part of it was fueled by the development of Nova (the new driver for GSP-based NVIDIA GPUs) and by AGX (the driver for the GPUs in Apple's M-series chips, which preceded Nova), a few components were being worked on to cater to a then-undisclosed driver that was being prototyped behind the scenes. That is the driver we now introduce to the community at large, at a very early stage.

Nothing's untestable

By Vidhi Katkoria, Technical Writer. As the co-founder of HashiCorp, Mitchell Hashimoto has been instrumental in the development of tools that many of us use daily, like Vagrant, Terraform, Vault, and more. He also helped shape the initial testing strategies for them, gaining hard-won insights into testing complex software along the way. At BugBash, where everyone is a testing nerd (or at least wants to be), most of us have come across that one piece of code that cannot be tested. What d

How to buy a GPU in 2025

One of the trickiest parts of any new computer build or upgrade is finding the right video card. In a gaming PC, the GPU is easily the most important component, and you can hamstring your experience by buying the wrong model. The buying process can be frustrating, with many manufacturers

Sirius: A GPU-native SQL engine

Sirius is a GPU-native SQL engine. It plugs into existing databases such as DuckDB via the standard Substrait query format, requiring no query rewrites or major system changes. Sirius currently supports DuckDB, with Doris support coming soon and other systems on the roadmap. Performance: running TPC-H at SF=100, Sirius achieves a ~10x speedup over existing CPU query engines at the same hardware rental cost, making it well-suited for interactive analytics, financial workloads, and ETL jobs.

I tested a bunch of gaming laptops and these are the best

Gaming laptops come in various shapes and sizes, though many of the heavy hitters remain thick and beefy machines that maximize cooling to get the most out of their powerful chips. Most are awash in RGB lighting and edgy designs geared towards capital-G Gamers, but there are also thinner, lighter options that are more portable and less

Nvidia announces RTX 5050 laptop GPU, desktop version set to launch on July 1

Something to look forward to: Nvidia has officially announced the RTX 5050 mobile GPU for entry-level gaming laptops. The company has also reportedly informed its AIC partners that the desktop version of the card will launch on July 1, earlier than the end-of-month debut that was initially expected. The RTX 5050 mobile is based on the GB207 GPU and features 2,560 CUDA cores – the same number of shaders as the desktop model. It comes with 8GB of GDDR7 VRAM, a 128-bit memory interface, and 384 GB/s of memory bandwidth.

Topics: 5050 desktop gpu july rtx

Nvidia's RTX 5050 GPU starts at $249 with last-gen GDDR6 VRAM

Nvidia has announced GeForce RTX 5050 GPUs for both desktops and laptops with support for ray tracing and DLSS 4 with Multi Frame Generation. On the desktop side, the RTX 5050 will start at $249, draw up to 130W of power, and feature 8GB of last-gen GDDR6 video memory (VRAM) and 2,560 Blackwell CUDA cores. The cards will be made by thi

Topics: 5050 gddr6 gpu nvidia rtx

NVIDIA's RTX 5050 arrives early in laptops from Acer, MSI and more

NVIDIA's add-in board partners won't start selling the GeForce RTX 5050 until mid-July, but it looks like the company has given the early go-ahead to OEMs to start announcing laptops with the new entry-level GPU. Wccftech and Videocardz report that 5050-equipped laptops are available to order in China as of this morning from domestic manufacturers like Mechrevo. Over in the US, companies like MSI and Acer have begun announcing their own RTX 5050 laptops. The former, for instance, will sell the

Topics: 5050 gpu memory rtx vram

China's first gaming GPU, the Lisuan G100, performs like a 13-year-old Nvidia GTX 660 Ti

Facepalm: China recently unveiled its first gaming GPU, the Lisuan G100. Built on a 6nm process, the card was touted as a potential rival to Nvidia's RTX 4060. However, a recent Geekbench listing suggests its performance is closer to that of the 13-year-old GeForce GTX 660 Ti or the 10-year-old Radeon R9 370. The listing also appears to reveal shockingly anemic specifications, including just 32 Compute Units, 256 MB of VRAM, and a 300 MHz GPU clock. Overall, the card managed a score of only 15,

AMD's next-gen UDNA graphics cards will support up to 80 Gbps HDMI 2.2 connectivity

Something to look forward to: AMD has reportedly been working on its next-generation GPU architecture for at least two years. The new architecture will be called UDNA, replacing the RDNA name, and is expected to deliver significantly faster performance than RDNA 4. A tipster has now shared details about the rumored HDMI configuration of the first-generation UDNA GPUs. According to Kepler_L2, UDNA GPUs – codenamed GFX13 – will support 64 Gbps and 80 Gbps bandwidths over HDMI 2.2 connections. If

Topics: amd gbps gpus hdmi udna

Gigabyte Radeon RX 9060 XT Review: Great Value Gaming

It's AMD's turn. After months of $2,000+ GPUs and long discussions of DLSS, we're finally on the red team's turf. AMD's strength historically lies at the budget end of the spectrum, where the majority of gamers are playing at 1080p, and spending $1,000 or less for their entire system. Even though we really recommend splurging on a GPU, that's just not the reality for most folks. An $800 GPU needs $1,200 in other parts, and at that point most people who aren't into PC gaming will start shopping

Compiling LLMs into a MegaKernel: A path to low-latency inference

One of the most effective ways to reduce latency in LLM inference is to fuse all computation and communication into a single megakernel — also known as a persistent kernel. In this design, the system launches just one GPU kernel to execute the entire model — from layer-by-layer computation to inter-GPU communication — without interruption. This approach offers several key performance advantages: it eliminates kernel launch overhead, even in multi-GPU settings, by avoiding repeated kernel invocations

Topics: gpu graph kernel mpk task

Van Gogh, AMD's Steam Deck APU (2023)

Zen 2’s launch was a defining moment for AMD. For the first time in many, many years, AMD’s single thread performance could go head to head with Intel’s best. Zen 2 also started a trend where AMD brought up to 16 cores to desktop CPUs, giving consumers very strong multithreaded performance without having to buy HEDT platforms. But Zen 2 was also flexible and did a very good job of scaling down to lower power targets. That was especially true when Zen 2 cores were implemented in more power effic

AMD's Pre-Zen Interconnect: Testing Trinity's Northbridge

Today, AMD’s Infinity Fabric interconnect is ubiquitous across the company’s lineup. Infinity Fabric provides well-defined interfaces to a transport layer, and lets different functional blocks treat the interconnect as a black box. The system worked well enough to let AMD create integrated GPU products all the way from the Steam Deck’s tiny van Gogh APU, to giant systems packing four MI300A chips. Across all those offerings, Infinity Fabric enables coherent memory access as CPU and GPU requests

43% of gamers skipped GPU upgrade to pay rent instead | Liquid Web

About 43% of gamers skipped a graphics processing unit (GPU) upgrade just to pay rent, according to a new report by Liquid Web. Based on a survey of 1,000 PC gamers, Liquid Web came up with a variety of conclusions about what drives today’s graphics card buying decisions, and clearly gamers are pretty price sensitive now. The company noted that cloud gaming is a way for players to deal with the high costs of upgrading

Peeling the Covers Off Germany's Exascale "Jupiter" Supercomputer

The newest exascale-class supercomputer to be profiled in the June Top500 rankings is the long-awaited “Jupiter” system at the Forschungszentrum Jülich facility in Germany. We finally have a sense of how this hybrid CPU-GPU machine will perform, although some of the details on its configuration are still not nailed down publicly. Jupiter is the first exascale system to be completed under the EuroHPC Joint Undertaking of the European Union, and the fact that it is not using a cust

Topics: gb gpu grace jupiter sec

AMD debuts AMD Instinct MI350 Series accelerator chips with 35X better inferencing

AMD unveiled its comprehensive end-to-end integrated AI platform vision and introduced its open, scalable rack-scale AI infrastructure built on industry standards at its annual Advancing AI event. The Santa Clara, California-based chip maker announced its new AMD Instinct MI350 Series accelerators, which are four times faster on AI compute and 35 times faster on inferencing than prior chips. AMD and its partners showcased AMD Instinct-based products and the continued growth of the AMD ROCm ecosystem.

Topics: ai amd gpus instinct open

How much power can a PCIe x16 graphics card draw from the motherboard alone?

Choose wisely! The correct answer, the explanation, and an intriguing story await. Correct answer: 75W. When installed in a motherboard's PCIe x16 slot, a graphics card such as the GeForce RTX 5080 can draw up to 75 watts of power directly from the slot. This power comes from the motherboard's +3.3V and +12V rails (3.3V × 3A + 12V × 5.5A), which is the maximum defined by the PCI Express base specification for an x16 slot (this limit has
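
Spelling out the arithmetic from the rails listed above, which lands at the specification's 75 W ceiling:

$$
3.3\ \mathrm{V} \times 3\ \mathrm{A} + 12\ \mathrm{V} \times 5.5\ \mathrm{A} = 9.9\ \mathrm{W} + 66\ \mathrm{W} = 75.9\ \mathrm{W} \approx 75\ \mathrm{W}
$$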

iPhone 16e benchmark results, RAM, and charging speed revealed

The iPhone 16e is powered by the A18 chip, but there’s one difference between its GPU and the version in the chip that powers the iPhone 16. While the iPhone 16 has a 5-core GPU, the iPhone 16e has a 4-core GPU. New Geekbench results for the iPhone 16e reveal just how much of a performance impact buyers can expect, plus a few other tidbits about the device. The Geekbench results were spotted by MySmartPrice this morning. In Geekbench’s graphics test, the iPhone 16e scored 24,188 points. This

Topics: 16 16e gpu iphone testing

Voltron Data just partnered with Accenture to solve one of AI’s biggest headaches

As artificial intelligence drives unprecedented demand for data processing, a Mountain View startup is offering a solution to one of AI’s least discussed but most critical challenges: moving and transforming massive datasets quickly enough to keep up. Voltron Data, which announced a strategic partnership with Accenture today, has developed a GPU-accelerated analytics engine.