ZDNET's key takeaways
Google's latest Frontier Safety Framework explores the dangers posed by industry-leading AI models.
It identifies three risk categories for AI.
Despite risks, regulation remains slow.
One of the great ironies of the ongoing AI boom is that as the technology grows more capable, it also becomes more unpredictable. AI's "black box" gets darker as a system's parameter count and the size of its training dataset grow. In the absence of strong federal oversight, the very tech companies that are so aggressively pushing consumer-facing AI tools are also the entities that, by default, are setting the standards for safely deploying the rapidly evolving technology.
Also: AI models know when they're being tested - and change their behavior, research shows
On Monday, Google published the latest iteration of its Frontier Safety Framework (FSF), which seeks to understand and mitigate the dangers posed by industry-leading AI models. It focuses on what Google describes as "Critical Capability Levels," or CCLs, which can be thought of as thresholds of ability beyond which AI systems could escape human control and therefore endanger individual users or society at large.
Google published its new framework with the intention of setting a new safety standard for both tech developers and regulators, noting that it can't do so alone.