Lauren Goode: Another fun little anecdote, I recently did a story about vibe coding for WIRED and I embedded with an engineering team at Notion, and two of the leaders of the engineering team, the AI engineering team specifically, were women. And I ended up pair programming with women throughout the day as well, and so I was like, this is pretty cool to see. It was a good mix.
Jason Kehe: The only thing I would add is that we had a former editor, a woman, who said not long ago that the only reason men care about and invented AI is because they can't get pregnant.
Katie Drummond: Yes.
Jason Kehe: Katrina.
Katie Drummond: They want to generate content?
Jason Kehe: Or life or something. Yes.
Katie Drummond: Oh, dear. This is-
Jason Kehe: I don't know what to make of it. I'm just putting that out there.
Katie Drummond: Whatever pitch that was, we can publish it.
Lauren Goode: Yeah. And then I later heard it again. Megan Smith said that once to me.
Jason Kehe: Oh, did she?
Lauren Goode: And she said it was one of the WIRED editors who first said that, and I was like, here we go. This is... You know?
Jason Kehe: I kind of buy it, I have to say.
Lauren Goode: Yes.
Katie Drummond: We'll argue about that later.
Jason Kehe: Yeah.
Lauren Goode: Yes.
[Audience member]: Hello, my name is Byron Perry. I'm the founder of a new publication here in San Francisco called Gazetteer. You guys recently published some stories that were written by AI and did a full mea culpa on that. I haven't actually read that yet, so I would just love to hear how that happened and what you're going to do to prevent something like that from happening again. And what were the incentives and workflows that allowed that to happen?
Katie Drummond: Yeah, I'll take that. So just to be clear, we published one story that we subsequently, in the weeks that followed, independently determined had been pitched by a writer who did not exist, and the actual substance of the article, a very innocuous gaming piece, was AI generated. It was not accurate. And the way we think about this internally, and I genuinely mean this, this is not me trying to evade accountability: we take full responsibility for publishing the piece. As soon as we realized what had happened, we retracted the story and we put up an editor's note. We subsequently published a piece explaining to our audience what had happened. If it can happen to WIRED, it can happen to anyone, and that is what was so scary about it. We get hundreds of pitches every week. We are constantly evaluating new freelance pitches, and we have editors who are commissioning those ideas. Our editor got a very normal pitch from a Gmail address. This writer had other clips that had been published in other outlets. It was a great idea, almost as if it had been tailor-made for WIRED by ChatGPT. It was a fantastic pitch. The editor commissioned it and got a draft. It was a good draft, went back and forth on edits with the writer, published the story. We have a fantastic fact-checking team at WIRED, we have a fantastic legal team, but not every story that we publish goes through that fact-checking process. That is reserved for a certain set of stories, and not every story runs through a full fact-check. So the editor published the story, and it was only when it came to payment, after the story had gone up, that we realized, you want to be paid how? It was not a conventional ask. And at that point, we did some sleuthing and realized, ah, okay, this is a problem.
In terms of what we are going to do moving forward, and again, what I said is so scary about that is that WIRED has a fact-checking team; most news organizations and magazines in the year 2025, exactly what you're talking about, budgets dwindling, do not. If it could happen to us, it could happen to anyone, whether it's text, whether it's video, whether it's AI-generated images. That is a very, very scary thing in the world of journalism and in the world of an internet that is already full of inaccurate information and inaccurate content. That worries me a great deal. In terms of how we're going to handle it moving forward, we've implemented new processes where, essentially, any first-time writer for WIRED or first-time commission for WIRED will go through a rigorous, mandatory fact-checking process, which I actually think resolves this to 99.9 percent. You're not getting in the door with WIRED unless our fact-checking team has assessed both you and your background and the piece that you filed. So that's what we've done to safeguard against it. But it was not only a learning moment for us, but I think for every brand under the Condé Nast umbrella to say, "Whoa. Okay, we've got to think about how we're doing things," because again, this happened to us, it happened to many other publications, and it could happen tomorrow to somebody else. So that's a little bit of insight for you.