Louise Matsakis: I think that's right. I think that regardless of which ideology was more convincing to you, officials in both the Trump administration and the Biden administration agreed that you kind of had to have a mix of both. And they were trying to strike this really delicate balance, which is: don't totally cut them off tomorrow, because that would devastate companies like NVIDIA and AMD. These are really important American companies that have a large impact on the economy and on the stock market. So you don't want to cut one of their arms off and totally cut off their ability to sell into the Chinese market. But at the same time, can you maybe ensure that they don't get the best products from those companies? And that's why you saw the development of chips like the H20, which NVIDIA specifically designed to come in just under the thresholds that the Biden administration set.

Zoë Schiffer: And Trump has insinuated that these are old and kind of useless chips. But you and I have talked about how they're actually quite good at some of the things that are really important for modern AI development. It's not like they're totally archaic.

Louise Matsakis: The future of technology is incredibly difficult to predict. These H20 chips are not that great for training, for building those huge models like GPT-5. But for inference, which is the ability to ping the model and get an answer in real time, they're much better, actually. They're pretty advanced. And we didn't know at the time, when these export controls were originally designed, how important inference would actually be.

Zoë Schiffer: Yeah. I will definitely stop talking about the same dinner after this, but one other thing he mentioned at the dinner was that of all the money that OpenAI plans to invest in capital expenditures for data centers and whatnot, and I think he said it was going to be trillions of dollars, but certainly hundreds of millions over the next few years, all of that is for inference. That is the big frontier right now. Training is its own thing, and it's obviously quite expensive, but the bulk of the resources are going to inference at this point.

Louise Matsakis: Totally. And that makes perfect sense, right? You build this incredible model and then you have to let people use it, and you have to develop the reasoning ability and sort of the ability to interact with that model. It makes perfect sense to me, but it's not something I think you could have designed into a regulation three or four years ago. I don't think anybody has a magic eight ball where they can see that. So I just think it's a really difficult, but also fascinating, area.

Zoë Schiffer: Louise, thank you so much for coming on Uncanny Valley.

Louise Matsakis: Thanks so much for having me, Zoë.

Zoë Schiffer: That's our show for today. We'll link to all the stories we spoke about in the show notes. Make sure to check out Thursday's episode of Uncanny Valley, which is about how vibe coding is changing the tech industry. Adriana Tapia and Mark Leda produced this episode. Amar Lal at Macro Sound mixed this episode. Kate Osborn is our executive producer. Condé Nast's head of global audio is Chris Bannon. And Katie Drummond is WIRED's global editorial director.