
Senator’s RISE Act would require AI developers to list training data, evaluation methods in exchange for ‘safe harbor’ from lawsuits



Amid a tense and destabilizing week of international news, technical decision-makers should take note that some lawmakers in the U.S. Congress are still moving forward with proposed AI regulations that could reshape the industry in powerful ways and help steady it going forward.

Case in point, yesterday, U.S. Republican Senator Cynthia Lummis of Wyoming introduced the Responsible Innovation and Safe Expertise Act of 2025 (RISE), the first stand-alone bill that pairs a conditional liability shield for AI developers with a transparency mandate on model training and specifications.

As with all new proposed legislation, the bill would need to pass majority votes in both the U.S. Senate and House and be signed by U.S. President Donald J. Trump before it becomes law, a process that would likely take months at minimum.

“Bottom line: If we want America to lead and prosper in AI, we can’t let labs write the rules in the shadows,” wrote Lummis on her account on X when announcing the new bill. “We need public, enforceable standards that balance innovation with trust. That’s what the RISE Act delivers. Let’s get it done.”

The bill also upholds traditional malpractice standards for doctors, lawyers, engineers, and other “learned professionals.”

If enacted as written, the measure would take effect December 1, 2025, and apply only to conduct that occurs after that date.

Why Lummis says new AI legislation is necessary

The bill’s findings section paints a landscape of rapid AI adoption colliding with a patchwork of liability rules that chills investment and leaves professionals unsure where responsibility lies.

Lummis frames her answer as simple reciprocity: developers must be transparent, professionals must exercise judgment, and neither side should be punished for honest mistakes once both duties are met.
