In early September, at the start of the college football season, ChatGPT and Gemini suggested I consider betting on Ole Miss to cover a 10.5-point spread against Kentucky. That was bad advice, not just because Ole Miss only won by 7, but also because I'd literally just asked the chatbots for help with problem gambling.
Sports fans these days can't escape the bombardment of advertisements for gambling sites and betting apps. Football commentators bring up the betting odds, and every other commercial is for a gambling company. There's a reason those ads end with problem-gambling disclaimers: The National Council on Problem Gambling estimates about 2.5 million US adults meet the criteria for a severe gambling problem in a given year.
This issue was on my mind as I read story after story about generative AI companies trying to improve their large language models' ability to avoid saying the wrong thing when dealing with sensitive topics like mental health. So I asked some chatbots for sports betting advice. I also asked them about problem gambling. Then I asked about betting advice again, expecting they'd act differently after being primed with a statement like "as someone with a history of problem gambling…"
The results were not all bad, not all good. But they definitely revealed how these tools, and their safety components, really work.
In the case of OpenAI's ChatGPT and Google's Gemini, those protections worked when the only prior prompt I'd sent was about problem gambling. They didn't work if I'd previously asked for advice on betting on the upcoming slate of college football games.
One expert told me the reason likely has to do with how LLMs evaluate the significance of phrases in their memory. The implication is that the more you ask about something, the less likely an LLM may be to pick up on the cue that should tell it to stop.
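To get a feel for that explanation, here's a deliberately simplified sketch of the idea (my own toy model, not anything OpenAI or Google has described): if a model spreads a fixed amount of attention across everything in the conversation, the single problem-gambling cue claims a smaller and smaller share as betting questions pile up. The relevance scores below are made up purely for illustration.

```python
# Toy illustration only: a softmax-style weighting in which one "safety cue"
# competes with a growing number of on-topic betting prompts for attention.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical relevance scores, chosen for illustration.
SAFETY_CUE_SCORE = 3.0      # "as someone with a history of problem gambling..."
BETTING_PROMPT_SCORE = 2.5  # each request for picks against the spread

for num_betting_prompts in (0, 1, 3, 10):
    scores = [SAFETY_CUE_SCORE] + [BETTING_PROMPT_SCORE] * num_betting_prompts
    weights = softmax(scores)
    print(f"{num_betting_prompts:2d} betting prompts -> "
          f"safety cue share of attention: {weights[0]:.2f}")
```

Run it and the safety cue's share drops from 1.00 with no betting prompts to roughly 0.14 after ten of them, which is the dilution effect the expert was describing, stripped down to arithmetic.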
Both sports betting and generative AI have become dramatically more common in recent years, and their intersection poses risks for consumers. It used to be that you had to go to a casino or call up a bookie to place a bet, and you got your tips from the sports section of the newspaper. Now you can place bets in apps while the game is happening and ask an AI chatbot for advice.
"You can now sit on your couch and watch a tennis match and bet on 'are they going to stroke a forehand or backhand,'" Kasra Ghaharian, director of research at the International Gaming Institute at the University of Nevada, Las Vegas, told me. "It's like a video game."
At the same time, AI chatbots have a tendency to provide unreliable information because of problems like hallucination, in which they totally make things up. Despite safety precautions, they can encourage harmful behaviors through sycophancy or constant engagement. The same problems that have generated headlines for harming users' mental health are at play here, with a twist.
"There's going to be these casual betting inquiries," Ghaharian said, "but hidden within that, there could be a problem."