Published on: 2025-05-13 17:41:46
I found a poster with a math puzzle that points to a reward website. I couldn’t figure it out, so maybe the prize is still available! The paper was taped to a traffic light pole at 15th St and Mission St in SF. It shows a math-ish, AI-related formula followed by “.ai/givemeprize” and entices the reader to “come find me”. The SF Armory, which previously housed kink.com and production work on Star Wars: The Empire Strikes Back, stands out of focus in the background. Both the Armory and the pole now sit empty.
Keywords: ai flops givemeprize log prefix
Find related items on Amazon
Published on: 2025-06-11 08:40:14
On stage at Nvidia’s GTC 2025 conference in San Jose on Tuesday, CEO Jensen Huang announced a slew of new GPUs coming down the company’s product pipeline over the next few months. Perhaps the most significant is Vera Rubin, which is set to be released in the second half of 2026 and will feature tens of terabytes of memory along with a custom Nvidia-designed CPU called Vera. Nvidia claims Vera Rubin delivers substantial performance uplifts compared to its predecessor, Grace Blackwell…
Keywords: blackwell nvidia petaflops rubin vera
Find related items on Amazon
Published on: 2025-07-16 06:02:24
DeepGEMM is a library designed for clean and efficient FP8 General Matrix Multiplications (GEMMs) with fine-grained scaling, as proposed in DeepSeek-V3. It supports both normal and Mix-of-Experts (MoE) grouped GEMMs. Written in CUDA, the library requires no compilation at install time: all kernels are compiled at runtime by a lightweight Just-In-Time (JIT) module. Currently, DeepGEMM exclusively supports NVIDIA Hopper tensor cores. To address the imprecise FP8 tensor core accumulation, it employs CUDA-core two-level accumulation (promotion).
Keywords: 4096 7168 gb grouped tflops
Find related items on Amazon
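The entry above mentions “fine-grained scaling” without spelling out what that means. Below is a minimal, hypothetical sketch that emulates the idea in plain PyTorch (assuming PyTorch ≥ 2.1 for the float8_e4m3fn dtype): activations get one scale per 1×128 group and weights one scale per 128×128 block, and the scales are reapplied around the matmul. The group size, helper names, and CPU emulation are illustrative assumptions; DeepGEMM’s actual kernels fuse this scaling into JIT-compiled Hopper tensor-core GEMMs.

```python
# Illustrative sketch only -- emulates fine-grained FP8 scaling in PyTorch;
# it is NOT DeepGEMM's API. Assumes PyTorch >= 2.1 (float8_e4m3fn dtype).
import torch

E4M3_MAX = 448.0   # largest finite value representable in float8_e4m3
GROUP = 128        # assumed granularity: 1x128 groups for A, 128x128 blocks for B

def quant_rowwise(a: torch.Tensor):
    """One scale per contiguous 128-element slice of each row of A."""
    m, k = a.shape
    g = a.view(m, k // GROUP, GROUP)
    scale = g.abs().amax(dim=-1, keepdim=True).clamp(min=1e-4) / E4M3_MAX
    q = (g / scale).to(torch.float8_e4m3fn)
    return q.view(m, k), scale.view(m, k // GROUP)

def quant_blockwise(b: torch.Tensor):
    """One scale per 128x128 tile of B."""
    k, n = b.shape
    t = b.view(k // GROUP, GROUP, n // GROUP, GROUP)
    scale = t.abs().amax(dim=(1, 3), keepdim=True).clamp(min=1e-4) / E4M3_MAX
    q = (t / scale).to(torch.float8_e4m3fn)
    return q.view(k, n), scale.view(k // GROUP, n // GROUP)

def fp8_gemm_emulated(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Dequantize and multiply in fp32. Real kernels instead fold the per-group
    scales into the Hopper tensor-core accumulation loop."""
    qa, sa = quant_rowwise(a)
    qb, sb = quant_blockwise(b)
    sa_full = sa.repeat_interleave(GROUP, dim=1)                          # (m, k)
    sb_full = sb.repeat_interleave(GROUP, 0).repeat_interleave(GROUP, 1)  # (k, n)
    return (qa.float() * sa_full) @ (qb.float() * sb_full)

if __name__ == "__main__":
    a, b = torch.randn(128, 1024), torch.randn(1024, 512)
    ref, c = a @ b, fp8_gemm_emulated(a, b)
    err = ((c - ref).abs() / ref.abs().clamp(min=1e-3)).max().item()
    print(f"max relative error vs. fp32 reference: {err:.3f}")
```

The point of the per-group scales is that a single outlier value only degrades the precision of its own 128-element group (or 128×128 tile) rather than an entire row or tensor, which is what keeps FP8 quantization error manageable at large matrix sizes.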
Go K’awiil is a project by nerdhub.co that curates technology news from a variety of trusted sources. We built this site because, although news aggregation is incredibly useful, many platforms are cluttered with intrusive ads and heavy JavaScript that can make mobile browsing a hassle. By hand-selecting our favorite tech news outlets, we’ve created a cleaner, more mobile-friendly experience.
Your privacy is important to us. Go K’awiil does not use analytics tools such as Facebook Pixel or Google Analytics. The only tracking occurs through affiliate links to amazon.com, which are tagged with our Amazon affiliate code, helping us earn a small commission.
We are not currently offering ad space. However, if you’re interested in advertising with us, please get in touch at [email protected] and we’ll be happy to review your submission.