Dozens of xAI employees expressed concerns—and many objected—when asked to record videos of their facial expressions to help "give Grok a face," Business Insider reported.
BI reviewed internal documents and Slack messages, finding that the project, dubbed "Skippy," was designed to help Grok learn what a face is and "interpret human emotions."
It's unclear from these documents whether workers' facial data helped train the controversial avatars xAI released last week, including Ani, an anime companion that flirts and strips, and Rudi, a red panda with a "Bad" mode that encourages violence. But a recording of an xAI introductory meeting on "Skippy" showed a lead engineer confirming that the company "might eventually use" employees' facial data to build out "avatars of people," BI reported.
Although employees were told that their training videos would not be shared outside the company and would be used "solely" for training, some workers refused to sign the consent form, worried their likenesses might be used to say things they never said. xAI's recent Grok scandals, in which the chatbot went on antisemitic rants praising Hitler, and the company's reported plan to hire an engineer to design "AI-powered anime girls for people to fall in love with" likely contributed to that discomfort. Confirming on Slack that they had opted out, these employees said they were ultimately too "uneasy" about granting xAI "perpetual" access to their data, BI found.