The Claude creator’s first in-house AI welfare researcher says there’s a 20% chance chatbots are self-aware.

What if the chatbots we talk to every day actually felt something? What if the systems writing essays, solving problems, and planning tasks had preferences, or even something resembling suffering? And what happens if we ignore these possibilities?