In theory, trigger warnings, also known as content warnings, should save us from seeing something upsetting — whether it’s a plain old gross-out or legitimately disturbing footage that can ruin your entire day. They’ve become more important in an age when even the most mainstream social media sites have turned into veritable snuff film hubs, platforming all kinds of gore that gets delivered straight to your timeline.

But we say “in theory” because, as new research shows, these warnings have an unexpected and counterproductive effect: luring us into watching the flagged material anyway — an extension of the undying human instinct to jump into stuff that we know goes against our best interests.

“Trigger warnings seem to foster a ‘forbidden fruit’ effect for many people whereby when something is off-limits, it often becomes more tempting,” said Victoria Bridgland, lead author of a new study published in the Journal of Behavior Therapy and Experimental Psychiatry, and a psychologist from Flinders University, in a statement about the work. “This may be because negative or disturbing information tends to stand out and feel more valuable or unique compared to everyday information.”

In the study, the researchers asked 261 participants between the ages of 17 and 25 to log how they responded to trigger warnings they encountered over the course of seven days. An overwhelming 90 percent of them said they still opened the sensitive content they came across — and not because they felt emotionally prepared, but because they were simply curious.

One of the most surprising details the researchers found — and one of the most ironic — was that people with mental health conditions like PTSD, anxiety, and depression weren’t any more likely to avoid viewing content with a trigger warning than others.

“If most individuals are approaching the content anyway, and vulnerable groups aren’t avoiding it more than others, then we need to reconsider how and why we use these warnings,” Bridgland said.

Part of why this is happening, Bridgland suggests, is that trigger warnings tend to be “short and vague,” which leaves a “gap in knowledge about what’s coming.”

“That gap can spark curiosity and make people want to look, just to find out what they’re missing,” she explained.

Maybe this can’t be helped. We’re inherently curious creatures who never quite learn from the first time we touched a hot stove.

On the other hand, the deployment of trigger warnings by apps like Instagram may betray something much more nefarious: they’re a convenient, bare-minimum excuse for large social media sites to avoid moderating controversial and upsetting content. Once a warning is slapped onto a disturbing video, for example, it becomes your fault that you watched it, exonerating the magical algorithm and its designers of any wrongdoing.

“It’s time to explore more effective interventions that genuinely support people’s wellbeing,” Bridgland said.

More on social media: YouTube Removes Disturbing AI Slop YouTube Channel Filled With Videos of Women Being Murdered