The Trump administration is not pleased about a Biden-era, private-sector-led nonprofit that vets AI tools for use in healthcare settings.
The Coalition for Health AI (CHAI) came under criticism from top officials at the Department of Health and Human Services last week, Politico reports.
With laboratories at Duke, Stanford, and the Mayo Clinic, CHAI works with the private tech and healthcare sectors to develop guidelines and best practices for AI implementation. The coalition's founding partners include hospitals like Mass General Brigham, healthcare companies like CVS and UnitedHealth, and tech giants Microsoft, Amazon, and Google.
“CHAI is this public-private partnership where you have private sector innovators informing public sector officials, and the public sector is able to make better informed policies coming out of there,” CHAI CEO Brian Anderson told Gizmodo.
The nonprofit currently runs roughly 15 use-case working groups, each led by researchers from companies such as Amazon, Intel, Oracle, and Microsoft. The coalition is also planning to launch a nationwide registry of model cards (something like nutrition labels that provide high-level information on AI models) to better inform health systems looking to procure AI tools.
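To make the analogy concrete: a model card is essentially structured metadata about a model. CHAI's actual registry format isn't spelled out here, but a minimal sketch, assuming hypothetical field names and an entirely invented example model, might look like this:

```python
# Illustrative sketch only: field names and values are assumptions based on
# the "nutrition label" analogy, not CHAI's actual model card schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """High-level disclosures for a health AI model, nutrition-label style."""
    name: str                   # model name, e.g. a hypothetical "SepsisRiskPredictor"
    developer: str              # vendor or lab that built the model
    intended_use: str           # the clinical task the model supports
    training_data_summary: str  # high-level description, not row-level data
    evaluation_metrics: dict    # headline performance numbers on held-out data
    known_limitations: list = field(default_factory=list)  # populations, biases, caveats

# A hypothetical entry a health system might review before procurement.
card = ModelCard(
    name="SepsisRiskPredictor",
    developer="ExampleHealthAI",  # invented vendor
    intended_use="Flag inpatients at elevated risk of sepsis for clinician review",
    training_data_summary="De-identified EHR records from several U.S. health systems",
    evaluation_metrics={"AUROC": 0.87, "sensitivity": 0.81},
    known_limitations=["Not validated for pediatric patients"],
)
print(card.name, card.evaluation_metrics)
```

The point of such a card is the trade-off Anderson describes later in this piece: enough disclosure for a buyer to judge fit and performance, without exposing the vendor's underlying intellectual property.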
Trump administration officials now claim that the organization's real aim is to stifle health tech startups and AI development.
“Biden’s Department of Health and Human Services gave CHAI and its Big Tech backers the power to regulate and stifle health-tech startups. This took regulatory capture to a whole new level — one of regulatory outsourcing,” Deputy HHS Secretary Jim O’Neill and FDA Commissioner Marty Makary wrote in a guest article published last week in the conservative Washington Examiner.
Anderson disputes that characterization.
“CHAI has nothing to do with government regulation. The government decides on policy and regulation, and CHAI, we will adapt to that,” Anderson said. “[But] in a public-private partnership, we want ideally to inform our regulators and our policymakers so they make informed regulations.”
CHAI has no actual regulatory power, but O’Neill told Politico last week that industry officials have raised concerns that startups hoping to work in the space feel they need to join the organization. O’Neill believes CHAI could become a “cartel” that suppresses innovation in health AI.
“No one should really feel compelled to submit their work for the analysis of their competitors or people that are entangled with their competitors,” O’Neill told Politico.
“It’s like a self-licking ice cream cone, a virtual and unethical syndicate. We’re hitting reset. Under Secretary Robert F. Kennedy Jr., HHS will not allow CHAI—nor any other nonprofit group, think tank, or company—to operate as an implicitly government-backed regulator or policymaker,” O’Neill and Makary wrote in the Washington Examiner. That claim might seem rich considering the Trump administration has largely followed the Project 2025 playbook devised by the Heritage Foundation, a conservative think tank.
CHAI’s aims were also questioned along similar lines by Republican members of Congress at earlier hearings.
At a May 2024 hearing, Republican Rep. Mariannette Miller-Meeks said that CHAI and similar organizations, known as assurance labs, don’t “pass the smell test” and show “clear signs of attempt at regulatory capture.”
Regulatory capture occurs when a regulatory agency created to serve the public interest instead advances the interests of the very industry it is supposed to regulate. During the hearing, Dr. Jeff Shuren, who led the FDA’s Center for Devices and Radiological Health at the time but has since left the post, responded to the representative’s concerns by saying that if CHAI produces “anything in terms of deliverables that is useful, we may take it into consideration. But they don’t work for us, we don’t work for them.”
Fears that CHAI is attempting regulatory capture were also raised in an opinion piece by an executive at the venture capital firm Andreessen Horowitz.
“Big corporations have a strong incentive to seize the market under the guise of safety,” Julie Yoo, a general partner on Andreessen Horowitz’s Bio+Health team, wrote in the op-ed, claiming that additional layers of review of startups’ intellectual property were “slowing down real innovation.”
The founders of Andreessen Horowitz endorsed Trump in the 2024 election and have backed a $100 million effort to influence AI policy with an outside group called “Leading the Future.”
Membership in CHAI is voluntary, yet some 3,000 organizations have joined the coalition, a mix of tech companies, healthcare companies, and hospitals. Anderson says 700 of those members are startups, the coalition’s most active and fastest-growing stakeholder community.
He argues that startups join CHAI not out of pressure but as a way to advance their business and understand their customers.
“The reason why the startups want to be part of this is because they want to build a product that they can sell to as many health systems and life science companies,” Anderson said. “It is Business 101 for any technology company to know your customers, and that’s essentially what we’re creating.”
That logic also applies to the big tech companies that provide cloud services to these health AI startups, according to Anderson. Microsoft’s Azure, Amazon’s AWS, and Google Cloud Platform are all go-to picks for AI startups, and their parent companies are all founding partners.
He insists that startups’ interests won’t be crushed by the big tech companies that are also part of the coalition, noting that each company gets only one seat in working groups and that startups outnumber the big tech firms.
But the founding partners, which include Amazon, Google, and Microsoft, also have guaranteed seats on an advisory board that reports to the board of directors to shape governance decisions.
All for transparency
O’Neill told Politico that the Trump administration supports AI transparency efforts so that buyers can make better-informed decisions. That’s one point that, in principle, he and Anderson see eye to eye on.
Because AI is rather new, “it’s really important to have a level of transparency into how these models actually perform, and to be able to monitor them once they’re deployed,” Anderson said.
“The ultimate liable party when a model is used is not the vendor, it’s the doctor, it’s the health system. So you have health systems rightly saying, ‘Well, before I use this model, I need to know what kind of data sets were used to train it, what was the training methodology?’” Anderson said.
But Anderson believes it makes sense that tech companies don’t want to reveal their intellectual property. CHAI tries to strike a balance through working groups that bring researchers and executives from both sectors together to reach consensus on the minimum disclosure that can satisfy those transparency requirements.
That means high-level descriptions of the datasets and methodology used to train AI models, rather than the exact datasets row by row.
“The model card does not require the startup community, or the tech companies in general, to reveal their IP,” Anderson said. “This was the vendors wanting to build trust with customers, and the customers wanting to be able to trust these tools.”
Assurance resource providers certified by CHAI (currently BeeKeeperAI and Citadel AI, with more on the way) also offer a voluntary test that health systems can ask developers to undergo before making a final purchasing decision. It’s tests like these that critics object to, but CHAI claims they are important for responsible and transparent health AI deployment, and that they actually accelerate AI adoption in healthcare by streamlining the vetting and deployment process.
Government relations
Despite being private-sector-led, CHAI does work in some capacity with government officials.
Two Biden administration officials, HHS Assistant Secretary for Technology Policy Micky Tripathi and FDA Digital Health Center of Excellence Director Troy Tazbaz, briefly served as non-voting members of CHAI’s board while still holding their government posts, but later resigned, citing conflicts of interest.
The Biden administration also had multiple federal liaisons working in the coalition’s working groups, and the coalition’s relationship with the government continues under the Trump administration. At least so far.
Anderson said CHAI has had multiple technical leads from HHS agencies like the Centers for Medicare & Medicaid Services, and that it recently announced a collaboration with the National Institute of Standards and Technology, which has been directed to focus entirely on advancing AI under Trump.
CHAI also has regular meetings with Republican Senators Mike Crapo, Mike Rounds, Todd Young, and Ted Cruz to talk about AI in health, according to Anderson. Politico reported that CHAI hired Sen. Crapo’s former healthcare aide, Susan Zook, earlier this year to lobby officials.
Anderson worries that this change of heart at HHS could jeopardize those connections. CMS has not yet responded to Gizmodo’s request for comment on its plans regarding CHAI.
Healthcare is one of AI’s most celebrated and most scrutinized use cases. AI companies like OpenAI are making huge plays for the healthcare sector. Meanwhile, healthcare AI startup OpenEvidence has reportedly found its way onto the screens of a quarter of doctors nationwide.
The government is bullish on it, too. In July, the White House hosted a “Make Health Tech Great Again” event where Trump announced a private sector initiative that will use conversational AI assistants for patient care and share the private medical records of Americans across apps and programs from 60 companies.
But there is no clear verdict on just how well AI performs in medical settings. A Stanford study from last year showed that ChatGPT performed very well at medical diagnosis, even better than some physicians, but some doctors remain unconvinced by these early tests.
Healthcare draws that scrutiny because even one mistake can be fatal, and current AI systems are far from perfect, prone to frequent hallucinations and baked-in biases.
That has led concerned experts to call for more transparency and regulation. The United States took its first major regulatory step toward that transparency earlier this week, when California Governor Gavin Newsom signed into law the Transparency in Frontier Artificial Intelligence Act, a first-of-its-kind bill that could be a catalyst for more legislation nationwide.
There is currently limited federal oversight of health AI deployment, though it looks as if the Trump administration could be ready to take the reins.
The FDA published a request for information last week, seeking to gather feedback from the medical sector on health AI deployment and evaluation. Whether that direct feedback will be taken as seriously as the desires of deep-pocketed outside interest groups remains to be seen.