
It’s not easy to get depression-detecting AI through the FDA

Why This Matters

Kintsugi's AI aimed to change mental health screening by analyzing speech patterns to detect depression and anxiety, potentially offering a more objective and accessible alternative to traditional questionnaires. The company's failure to secure FDA clearance highlights the regulatory hurdles facing innovative health AI, which can delay deployment and blunt its potential benefits. It also underscores how much navigating regulatory pathways matters for bringing effective mental health tools to market as AI advances in healthcare.


For the past seven years, the California-based startup Kintsugi has been developing AI designed to detect signs of depression and anxiety from a person’s speech. But after failing to secure FDA clearance in time, the company is shutting down and releasing most of its technology as open-source. Some elements may even find a second life beyond healthcare, like detecting deepfake audio.

Mental health assessments still largely rely on patient questionnaires and clinical interviews, rather than the lab tests or scans common in physical medicine. Instead of focusing on what someone is saying, Kintsugi’s software analyzes how it is being said. The idea isn’t new — speech patterns like pauses, sentence structure, or speed are known indicators of various mental health issues — but Kintsugi says its AI can pick up subtle shifts that may be less obvious to human observers, though it has not publicly detailed exactly which features drive its models’ predictions. In peer-reviewed research, the company reported results broadly in line with established self-report screening tools for depression using short speech samples.
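To make the idea concrete, here is a minimal sketch of the kind of speech-timing feature extraction such a system might start from, built on the open-source librosa audio library. This is an illustration, not Kintsugi's actual pipeline (which, as noted above, the company has not publicly detailed); the function name, the feature set, and the 30 dB silence threshold are all assumptions for demonstration.

```python
# Illustrative only: a toy prosodic feature extractor, NOT Kintsugi's model.
# The feature choices and the top_db silence threshold are assumptions.
import librosa

def prosodic_features(path: str, top_db: float = 30.0) -> dict:
    """Extract coarse speech-timing features from an audio file."""
    y, sr = librosa.load(path, sr=16000)  # mono, resampled to 16 kHz
    total_sec = len(y) / sr

    # Intervals of non-silent audio; everything between them is a pause.
    intervals = librosa.effects.split(y, top_db=top_db)
    voiced_sec = sum(end - start for start, end in intervals) / sr
    pause_sec = total_sec - voiced_sec

    return {
        "duration_sec": total_sec,
        "pause_ratio": pause_sec / total_sec if total_sec else 0.0,
        "num_voiced_segments": len(intervals),
        # Longer, more frequent pauses are among the speech patterns
        # the article links to depression.
        "mean_pause_sec": pause_sec / max(len(intervals) - 1, 1),
    }
```

A production system would feed features like these, along with pitch, energy, and spectral statistics, into a trained classifier; the hard part, as the rest of this story shows, is validating that classifier to a regulator's satisfaction.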

The company pitched the technology as a complement, or potential alternative, to self-reported screening tools like the Patient Health Questionnaire-9, or PHQ-9, a staple of primary care and psychiatry. These tools are meant to be used alongside formal clinical assessment. Although widely validated, they suffer from low screening rates in practice, depend on patients accurately describing their symptoms, and may not capture the full range of symptoms associated with mental health disorders. Kintsugi argued its voice-based model could provide a more objective signal, expand screening to more patients, and be deployed at scale across health systems, insurers, and employer programs. Doing so, however, would require FDA clearance.
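For context, the PHQ-9 itself is simple to score: nine items, each rated 0 to 3, are summed to a 0-27 total and mapped to standard severity bands at cutoffs of 5, 10, 15, and 20. A minimal sketch of that scoring (the function name is ours, not from any clinical library):

```python
# PHQ-9 scoring: nine self-report items, each answered 0-3
# ("not at all" to "nearly every day"), summed to a 0-27 total.
# The severity cutoffs below are the standard published bands.

def phq9_severity(answers: list[int]) -> tuple[int, str]:
    """Score a PHQ-9 questionnaire and map it to a severity band."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine answers, each scored 0-3")
    total = sum(answers)
    if total >= 20:
        band = "severe"
    elif total >= 15:
        band = "moderately severe"
    elif total >= 10:
        band = "moderate"
    elif total >= 5:
        band = "mild"
    else:
        band = "minimal"
    return total, band

print(phq9_severity([1, 2, 1, 0, 2, 1, 0, 1, 0]))  # (8, 'mild')
```

The simplicity is the point: the questionnaire is cheap and well validated, but it only works if it is administered and answered honestly, which is the gap Kintsugi's passive voice analysis was meant to fill.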

Kintsugi had been seeking FDA clearance through the agency's "De Novo" pathway, a route meant for novel, low-risk medical devices without an existing equivalent on the market. While intended to streamline approval for new kinds of products, the process can still require years of data collection and regulatory review. Kintsugi's founder and CEO Grace Chang told The Verge that a lot of that time was spent teaching the regulator about AI. The framework also fits AI poorly: much of it was designed with more traditional devices in mind, such as hip implants, surgical tools, and pacemakers, whose designs remain largely fixed once approved. For AI systems, that can mean locking a model that would otherwise continue to be optimized and updated over time.

Despite the Trump administration’s hard push to cut red tape and get AI products into the real world as soon as possible, Chang said regulatory experts tell her that “there’s nothing that helps them do that except loud yelling from the top.” The approval process was further slowed by federal government shutdowns. The startup ran out of funding waiting for its final submission.

Efforts to raise additional funds faltered as the company’s runway shortened. Rather than accept “predatory” short-term offers to meet payroll — Chang said one proposal offered around $50,000 a week in exchange for $1 million in equity — the team decided to open-source most of its technology so others might continue the work. Investors were not happy.

Open-sourcing a mental health screening model also raises concerns about misuse. Tools designed to flag signs of depression or anxiety could, in theory, be deployed outside clinical settings, such as by employers or insurers, without the safeguards typically required in healthcare. Once the technology is released publicly, there is little to prevent it from being used in ways its creators did not intend.
