Technology standardization has been something of an elusive holy grail, with new tech emerging faster than standards groups can keep pace. Yet, somehow, things eventually come together -- at least for mature systems -- and achieve interoperability, be it email networks or developer tools.
Now, a new race against time has come to the fore, with efforts to tame one of the fastest-developing technologies seen to date -- artificial intelligence. Can standards groups, with their purposely slower and highly participative deliberations, stay ahead of the AI curve? And can they achieve standards across a technology as abstract and amorphous as AI -- one that shifts every few months?
There's a good case to be made for AI standards, as it is a technology riddled with traps: deepfakes, bias, misdirections, and hallucinations. And, unlike technologies that have gone before it, AI presents more than a software engineering problem -- it's a societal problem.
Recognizing these wide implications, a consortium of standards bodies is taking a new approach to AI, with active efforts underway to involve non-technical professionals in formulating the standards that will define the technology in the years ahead.
A collective of standards bodies -- the AI and Multimedia Authenticity Standards Collaboration (AMAS) -- seeks safer AI for a justifiably skeptical world. The initiative, seeking to address the misuse of AI-generated content, was announced at the recent "AI for Good" Global Summit in Geneva. The effort is spearheaded by the International Electrotechnical Commission (IEC), the International Organization for Standardization (ISO), and the International Telecommunication Union (ITU).
The group hopes to develop standards that will help protect the integrity of information, uphold individual rights, and foster trust in the digital ecosystem. It seeks to ensure users can identify the provenance of AI-generated and altered content. Human rights, rarely a consideration in technical standards, are top of mind for today's standards proponents.
All good stuff, for sure. But will major enterprises and technology firms fully buy into AI standards handed down to them if those standards risk hampering innovation in such a fast-moving space?