Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

Filtered by: gpu

Startup and Nobel laureate collaborate to create GPU financial exchange

What just happened? A new financial marketplace aims to offer crucial risk management tools to a resource at the center of the tech industry's explosive growth. If successful, the initiative could make access to high-performance compute more predictable and affordable. The world of artificial intelligence is built on computing power, and at the heart of that engine are graphics processing units. These chips are in such high demand that they have often been compared to oil during the gold rush,

MSI expects to top 10 million motherboard sales for the first time as market rebounds

Bottom line: MSI is staging a decisive comeback in the fiercely competitive motherboard and GPU markets, navigating supply chain challenges and shifting industry demands to reclaim its place alongside top rivals. This rebound signals broader shifts in tech manufacturing and consumer appetite amid rapid innovation. DigiTimes reports MSI is poised for a milestone year, with analysts projecting its global motherboard shipments will top 10 million units in 2025. The surge marks a sharp recovery for the

Topics: 2025 ai digitimes gpu msi

Ryzen 7 9800X3D vs. Ryzen 5 7600X: CPU and GPU Scaling Benchmark

Time for a new benchmark series. CPU and GPU scaling tests have been high on the community's wishlist for a while now – and it's time to deliver. Several comparisons are already in the pipeline, including fan favorites like AMD's 5800X3D and Intel's discounted Core Ultra 7 265K, so expect those soon. To kick things off and establish a baseline, we're starting with the Ryzen 7 9800X3D versus the ever-popular Ryzen 5 7600X. It's a clash between a gaming powerhouse and a budget-friendly performance…

China sees surge in Nvidia AI chip repair businesses despite export bans

In brief: There's a strange situation occurring in China: despite Nvidia's high-end AI chips being restricted from export to the country, businesses that repair these GPUs are experiencing a boom in demand. One company now handles up to 500 AI chip repairs every month. The US has restricted the export of Nvidia's most powerful AI chips to China since 2022 over fears that they could be used for military purposes. Although these chips aren't officially available in the Asian nation, a booming repair…

Topics: ai china gpus nvidia

Emulating PC games on your non-Snapdragon device just got better

TL;DR: GameSir has announced that its GameHub app now offers much better support for MediaTek chipsets with Mali GPUs. The company also revealed that it’s working with MediaTek to launch custom Mali GPU drivers. GameHub lets people play PC games on their Android phones without streaming. There are a few ways to natively play PC games on your phone without streaming, with Winlator and GameSir’s GameHub app being the most popular solutions. These apps traditionally…

Rust running on every GPU

I've built a demo of a single shared Rust codebase that runs on every major GPU platform: CUDA for NVIDIA GPUs, SPIR-V for Vulkan-compatible GPUs from AMD, Intel, NVIDIA, and Android devices, Metal for Apple devices, DirectX 12 for Windows, WebGPU for browsers, and a CPU fallback for non-GPU systems. The same compute logic runs on all targets, written entirely in regular Rust…
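To give a sense of what "regular Rust" shared across backends can look like, here is a minimal sketch (an illustration, not code from the project): the kernel body is ordinary, dependency-free Rust, so the same function can be handed to GPU backends or simply called on the CPU fallback path.

```rust
// Illustrative only: a plain Rust function with no platform-specific code,
// the kind of logic that can be shared between GPU backends and a CPU fallback.
pub fn saxpy(a: f32, x: &[f32], y: &mut [f32]) {
    for (yi, xi) in y.iter_mut().zip(x.iter()) {
        *yi = a * *xi + *yi;
    }
}

fn main() {
    // CPU fallback path: call the shared logic directly.
    let x = vec![1.0_f32; 4];
    let mut y = vec![2.0_f32; 4];
    saxpy(3.0, &x, &mut y);
    println!("{y:?}"); // [5.0, 5.0, 5.0, 5.0]
}
```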

Topics: code cpu gpu rust self

Thunder Compute (YC S24) Is Hiring a C++ Systems Engineer

C++ Systems Engineer (Full-Time). Thunder Compute is a GPU cloud platform built on custom virtualization that delivers 5x better margins than competitors. 4-person team, seed-funded (approaching series A), 100%+ MoM revenue growth. 100% in-person in Atlanta, relocating to SF or NYC within 6 months. The technical challenge: We network-attach GPUs over TCP through API remoting, decoupling GPUs from instances. This lets us oversubscribe hardware aggressively—while Lambda Labs needs 100 GPUs…

Show HN: My GPU Fan Saga – A DIY ATX Fan Controller

Having a problem-solving mindset is incredibly valuable and rewarding, especially when it leads to exciting DIY adventures. My latest experience with a noisy GPU fan turned into just such an opportunity. It guided me through fascinating explorations involving ATX power, MOSFET motor drivers, Pulse Width Modulation (PWM), the ATTiny85's bit-banged 1-wire bus, and a DS18B20 temperature sensor. While many ready-made solutions exist, this project provided me with invaluable learning and immense satisfaction.
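As a rough illustration of the control logic such a project boils down to (a hypothetical sketch, not the author's ATTiny85 firmware, with made-up thresholds), the core job is mapping a temperature reading to a PWM duty cycle:

```rust
// Hypothetical sketch: map a temperature reading (e.g. from a DS18B20) to a
// PWM duty cycle. The 35/70 °C thresholds are illustrative assumptions.
fn fan_duty_percent(temp_c: f32) -> u8 {
    const T_MIN: f32 = 35.0; // below this, fan off
    const T_MAX: f32 = 70.0; // at or above this, full speed
    if temp_c <= T_MIN {
        0
    } else if temp_c >= T_MAX {
        100
    } else {
        // Linear ramp between the two thresholds.
        (((temp_c - T_MIN) / (T_MAX - T_MIN)) * 100.0).round() as u8
    }
}

fn main() {
    for t in [30.0, 45.0, 60.0, 75.0] {
        println!("{t:.0} °C -> {}% duty", fan_duty_percent(t));
    }
}
```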

Gaming Laptops Have a Secret Weapon Against Desktop PCs, and It Looks a Lot Like the Switch 2

I want a future where we don’t need a desktop PC to get the best gaming graphics. Maybe I’m a dreamer who needs to keep on dreaming, but I think that pie-in-the-sky wonderland is closer to reality than you may think, and all PC component makers need to do is hop on the eGPU train. Better yet, they need to make something that’s as easily dockable as a Switch 2, but with the added power of a discrete graphics card that could push a laptop or gaming handheld into a full desktop experience.

Shipping WebGPU on Windows in Firefox 141

After years in development, we will be releasing WebGPU on Windows in Firefox 141! WebGPU gives web content a modern interface to the user’s graphics processor, enabling high-performance computation and rendering. We’re excited about WebGPU because we believe it will raise the ceiling for games, visualization, and local computation on the web. You can find a tutorial on WebGPU at webgpufundamentals.org, try out the WebGPU Samples, and read documentation for the API at MDN. WebGPU is defined in
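On the native side, the same API surface is exposed by wgpu, the Rust library that Firefox's WebGPU implementation is built on. A minimal sketch of asking it for an adapter might look like the following (illustrative; exact signatures vary between wgpu releases, and the pollster crate is assumed for blocking on the async call):

```rust
// Illustrative sketch using the wgpu and pollster crates (assumed dependencies).
fn main() {
    pollster::block_on(async {
        // An Instance is the entry point; request_adapter picks a physical GPU.
        let instance = wgpu::Instance::default();
        let adapter = instance
            .request_adapter(&wgpu::RequestAdapterOptions::default())
            .await
            .expect("no suitable GPU adapter found");
        println!("WebGPU adapter: {:?}", adapter.get_info());
    });
}
```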

GPUHammer: Rowhammer attacks on GPU memories are practical

Chris (Shaopeng) Lin†, Joyce Qu†, and Gururaj Saileshwar, University of Toronto. Published at USENIX Security 2025 (link to paper). Artifact available on GitHub and Zenodo. † equal contribution. GPUHammer is the first attack to show Rowhammer bit flips on GPU memories, specifically on GDDR6 memory in an NVIDIA A6000 GPU. Our attacks induce bit flips across all tested DRAM banks, despite in-DRAM defenses like TRR, using user-level CUDA code…

Topics: bit dram flips gpu memory

Razer revives its eGPU line with a Thunderbolt 5 dock

Razer is back with a new addition to its Core line of external graphics enclosures. The Razer Core X V2 external graphics enclosure can house recent GPUs from brands including NVIDIA GeForce and AMD Radeon. It connects to the host device over a single Thunderbolt 5 cable; in its press materials, Razer claims this tech delivers up to twice the bandwidth of Thunderbolt 4. This is true, but Thunderbolt 5 still isn't on the level of a modern desktop GPU connection, being roughly equivalent to 8 lanes of PCIe

Nvidia chips become the first GPUs to fall to Rowhammer bit-flip attacks

Nvidia is recommending a mitigation for customers of one of its GPU product lines that will degrade performance by up to 10 percent in a bid to protect users from exploits that could let hackers sabotage work projects and possibly cause other compromises. The move comes in response to an attack a team of academic researchers demonstrated against Nvidia’s RTX A6000, a widely used GPU for high-performance computing that’s available from many cloud services. A vulnerability the researchers discovered…

NVIDIA shares guidance to defend GDDR6 GPUs against Rowhammer attacks

NVIDIA is warning users to activate System Level Error-Correcting Code mitigation to protect against Rowhammer attacks on graphics processors with GDDR6 memory. The company is reinforcing the recommendation as new research demonstrates a Rowhammer attack against an NVIDIA A6000 GPU (graphics processing unit). Rowhammer is a hardware fault that can be triggered through software processes and stems from memory cells being too close to each other. The attack was demonstrated on DRAM cells but it…

The best laptops for gaming and schoolwork in 2025

The line between laptops built for gaming and those built for heavy-duty computing work has all but disappeared. These days, you don’t have to look too hard to find a not-so-flashy gaming notebook that could easily sit alongside MacBook Pros in a classroom. After all, a powerful CPU…

Tyr, a new Rust DRM driver targeting CSF-based ARM Mali GPUs

The last year has seen substantial progress on the DRM infrastructure required to write GPU drivers in Rust. While a great part of it was fueled by the development of Nova (the new driver for GSP-based NVIDIA GPUs), and by AGX (the driver for the GPUs on Apple's M-series chip that preceded Nova), a few components were being worked on to cater to a then undisclosed driver that was being prototyped behind the scenes. A driver that we now introduce to the community at large in a very early stage.

Nothing's untestable

As the co-founder of HashiCorp, Mitchell Hashimoto has been instrumental in the development of tools that many of us use daily, like Vagrant, Terraform, Vault, and more. He also helped shape the initial testing strategies for them, gaining hard-won insights into testing complex software along the way. At BugBash, where everyone is a testing nerd (or at least wants to be), most of us have come across that one piece of code that cannot be tested. What do…

How to buy a GPU in 2025

One of the trickiest parts of any new computer build or upgrade is finding the right video card. In a gaming PC, the GPU is easily the most important component, and you can hamstring your experience by buying the wrong model. The buying process can be frustrating, with many manufacturers…

Sirius: A GPU-native SQL engine

Sirius is a GPU-native SQL engine. It plugs into existing databases such as DuckDB via the standard Substrait query format, requiring no query rewrites or major system changes. Sirius currently supports DuckDB, with Doris support coming soon and other systems on the roadmap. Performance: Running TPC-H on SF=100, Sirius achieves ~10x speedup over existing CPU query engines at the same hardware rental cost, making it well-suited for interactive analytics, financial workloads, and ETL jobs.
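To make the "no query rewrites" point concrete: the SQL a client sends to DuckDB stays exactly the same, and an engine like Sirius hooks in underneath at the Substrait plan level. A hedged sketch using the duckdb Rust crate (illustrative table and query, not Sirius's own API):

```rust
use duckdb::{Connection, Result};

fn main() -> Result<()> {
    // Ordinary DuckDB usage; a plan-level accelerator sits below this layer,
    // so the SQL text itself never has to change. Data is made up.
    let conn = Connection::open_in_memory()?;
    conn.execute_batch(
        "CREATE TABLE lineitem(l_quantity DOUBLE, l_extendedprice DOUBLE);
         INSERT INTO lineitem VALUES (17, 21168.23), (36, 45983.16);",
    )?;
    let revenue: f64 = conn.query_row(
        "SELECT sum(l_extendedprice) FROM lineitem WHERE l_quantity < 24",
        [],
        |row| row.get(0),
    )?;
    println!("revenue: {revenue}");
    Ok(())
}
```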

I tested a bunch of gaming laptops and these are the best

Gaming laptops come in various shapes and sizes, though many of the heavy hitters remain thick and beefy machines that maximize cooling to get the most out of their powerful chips. Most are awash in RGB lighting and edgy designs geared towards capital-G Gamers, but there are also thinner, lighter options that are more portable and less

Nvidia announces RTX 5050 laptop GPU, desktop version set to launch on July 1

Something to look forward to: Nvidia has officially announced the RTX 5050 mobile GPU for entry-level gaming laptops. The company has also reportedly informed its AIC partners that the desktop version of the card will launch on July 1, earlier than the end-of-month debut that was initially expected. The RTX 5050 mobile is based on the GB207 GPU and features 2,560 CUDA cores – the same number of shaders as the desktop model. It comes with 8GB of GDDR7 VRAM, a 128-bit memory interface, and 384 GB/s…

Topics: 5050 desktop gpu july rtx

Nvidia's RTX 5050 GPU starts at $249 with last-gen GDDR6 VRAM

Nvidia has announced GeForce RTX 5050 GPUs for both desktops and laptops with support for ray tracing and DLSS 4 with Multi Frame Generation. On the desktop side, the RTX 5050 will start at $249, draw up to 130W of power, and feature 8GB of last-gen GDDR6 video memory (VRAM) and 2,560 Blackwell CUDA cores. The cards will be made by third…

Topics: 5050 gddr6 gpu nvidia rtx

NVIDIA's RTX 5050 arrives early in laptops from Acer, MSI and more

NVIDIA's add-in board partners won't start selling the GeForce RTX 5050 until mid-July, but it looks like the company has given the early go-ahead to OEMs to start announcing laptops with the new entry-level GPU. Wccftech and Videocardz report that 5050-equipped laptops are available to order in China as of this morning from domestic manufacturers like Mechrevo. Over in the US, companies like MSI and Acer have begun announcing their own RTX 5050 laptops. The former, for instance, will sell the

Topics: 5050 gpu memory rtx vram

China's first gaming GPU, the Lisuan G100, performs like a 13-year-old Nvidia GTX 660 Ti

Facepalm: China recently unveiled its first gaming GPU, the Lisuan G100. Built on a 6nm process, the card was touted as a potential rival to Nvidia's RTX 4060. However, a recent Geekbench listing suggests its performance is closer to that of the 13-year-old GeForce GTX 660 Ti or the 10-year-old Radeon R9 370. The listing also appears to reveal shockingly anemic specifications, including just 32 Compute Units, 256 MB of VRAM, and a 300 MHz GPU clock. Overall, the card managed a score of only 15,

AMD's next-gen UDNA graphics cards will support up to 80 Gbps HDMI 2.2 connectivity

Something to look forward to: AMD has reportedly been working on its next-generation GPU architecture for at least two years. The new architecture will be called UDNA, replacing the RDNA name, and is expected to deliver significantly faster performance than RDNA 4. A tipster has now shared details about the rumored HDMI configuration of the first-generation UDNA GPUs. According to Kepler_L2, UDNA GPUs – codenamed GFX13 – will support 64 Gbps and 80 Gbps bandwidths over HDMI 2.2 connections. If

Topics: amd gbps gpus hdmi udna

Gigabyte Radeon RX 9060 XT Review: Great Value Gaming

It's AMD's turn. After months of $2,000+ GPUs and long discussions of DLSS, we're finally on the red team's turf. AMD's strength historically lies at the budget end of the spectrum, where the majority of gamers are playing at 1080p, and spending $1,000 or less for their entire system. Even though we really recommend splurging on a GPU, that's just not the reality for most folks. An $800 GPU needs $1,200 in other parts, and at that point most people who aren't into PC gaming will start shopping

Compiling LLMs into a MegaKernel: A path to low-latency inference

One of the most effective ways to reduce latency in LLM inference is to fuse all computation and communication into a single megakernel — also known as a persistent kernel. In this design, the system launches just one GPU kernel to execute the entire model — from layer-by-layer computation to inter-GPU communication — without interruption. This approach offers several key performance advantages: it eliminates kernel launch overhead, even in multi-GPU settings, by avoiding repeated kernel invocations…
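As a rough analogy for why a persistent kernel helps (a CPU-side Rust sketch of the idea, not the system the article describes): instead of paying a launch cost for every task, one long-lived worker drains a task queue.

```rust
use std::sync::mpsc;
use std::thread;

// Rough CPU analogy: rather than spawning one short-lived worker per task
// (the analogue of launching one GPU kernel per layer), a single persistent
// worker stays alive and consumes a task queue, paying launch overhead once.
fn main() {
    let (tx, rx) = mpsc::channel::<Box<dyn FnOnce() + Send>>();

    // The persistent "megakernel": one long-lived worker consuming tasks.
    let worker = thread::spawn(move || {
        for task in rx {
            task();
        }
    });

    // Enqueue per-layer work instead of spawning a worker for each layer.
    for layer in 0..4 {
        tx.send(Box::new(move || println!("compute layer {layer}"))).unwrap();
    }
    drop(tx); // closing the queue lets the worker exit
    worker.join().unwrap();
}
```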

Topics: gpu graph kernel mpk task
