Last week, the nonprofit research group OpenAI revealed that it had developed a new text-generation model that can write coherent, versatile prose given a short subject-matter prompt. However, the organization said, it would not be releasing the full algorithm due to “safety and security concerns.”
Instead, OpenAI decided to release a “much smaller” version of the model and withhold the data sets and training code that were used to develop it. If your knowledge of the model, called GPT-2, came solely from the headlines of the resulting news coverage, you might think that OpenAI had built a weapons-grade chatbot. A headline from Metro U.K. read, “Elon Musk-Founded OpenAI Builds Artificial Intelligence So Powerful That It Must Be Kept Locked Up for the Good of Humanity.” Another from CNET reported, “Musk-Backed AI Group: Our Text Generator Is So Good It’s Scary.” A column from the Guardian was titled, apparently without irony, “AI Can Write Just Like Me. Brace for the Robot Apocalypse.”
That sounds alarming. Experts in the machine learning field, however, are debating whether those claims were exaggerated. The announcement has also sparked a debate about how to handle the proliferation of potentially dangerous A.I. algorithms.
OpenAI is a pioneer in artificial intelligence research that was initially funded by titans like SpaceX and Tesla founder Elon Musk, venture capitalist Peter Thiel, and LinkedIn co-founder Reid Hoffman. The nonprofit’s mission is to guide A.I. development responsibly, away from abusive and harmful applications. Besides text generation, OpenAI has also developed a robotic hand that can teach itself simple tasks, systems that can beat pro players of the strategy video game Dota 2, and algorithms that can incorporate human input into their learning processes.
On Feb. 14, OpenAI announced yet another feat of machine learning ingenuity in a blog post detailing how its researchers had trained a language model using text from 8 million webpages to predict the next word in a piece of writing. The resulting algorithm, according to the nonprofit, was stunning: It could “[adapt] to the style and content of the conditioning text” and allow users to “generate realistic and coherent continuations about a topic of their choosing.” To demonstrate the feat, OpenAI provided samples of text that GPT-2 had produced given a particular human-written prompt.
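To give a rough sense of what “predicting the next word” means, here is a deliberately simplified sketch: a toy bigram model that continues a prompt by repeatedly sampling a word that followed the previous word in its training text. GPT-2 itself uses a large neural network trained on millions of webpages, not word-pair counts, so this is only an analogy for the underlying prediction task; the corpus and function names below are illustrative inventions, not OpenAI's code.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count which words follow each word in the training text."""
    words = text.split()
    counts = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def generate(model, prompt, length=10, seed=0):
    """Continue the prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)  # fixed seed so the output is repeatable
    out = prompt.split()
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# A tiny made-up corpus; GPT-2's training set was text from 8 million webpages.
corpus = "the model writes text and the model predicts the next word"
model = train_bigram(corpus)
print(generate(model, "the model"))
```

Scaling this idea up — far richer context than one previous word, and a learned statistical model instead of raw counts — is what lets a system like GPT-2 “adapt to the style and content of the conditioning text.”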
For example, researchers fed the generator the following scenario: