Reinforcement learning with human feedback could mean ‘more truthful and less toxic’ AI


AI has been making huge leaps in scientific research, and companies like Nvidia and Meta are continuing to throw more resources at the technology. But AI learning can suffer a pretty huge setback when it adopts the prejudices of its training material. Think of all those chatbots that wind up spewing hate speech thanks to their exposure to the criminally online.

According to Golem, OpenAI might have made some headway on that with its new successor to GPT-3, the autoregressive language model that uses deep learning in an effort to appear human in text. It wrote this article, if you want an example of how that works.

But GPT-3 also has a tendency to parrot incorrect, biased, or outright toxic notions thanks to the unfiltered sources it was trained on. These biases seep into its language, causing GPT-3 to make bigoted assumptions or implications in its writing. It’s not too different from humans, in that all these reinforced ideas can easily look like truths, and there are plenty of outdated notions to choose from. GPT-3 seems a bit like the weird uncle you don’t talk to on Facebook.

The new InstructGPT is said to be an improvement, as its answers are “more truthful and less toxic”. This has been achieved thanks to the work of researchers at OpenAI, whose alignment research helps the machine follow instructions more accurately despite being much smaller. InstructGPT uses 1.3 billion parameters, a fraction of the 175 billion used by the older GPT-3 model, but thanks to reinforcement learning with human feedback it has simply been better trained: human labellers rank the model’s answers, and those rankings steer it toward responses people actually prefer, hopefully shaping it to be a better bot overall.
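To get a feel for how human rankings can become a training signal, here is a deliberately toy sketch. The real InstructGPT pipeline trains a neural reward model and then fine-tunes the language model against it; this example is not that pipeline. It just gives each candidate response a scalar score and nudges the scores with a Bradley-Terry-style preference update, so the names and numbers here are illustrative assumptions, not OpenAI's method.

```python
import math

def preference_prob(score_a, score_b):
    """Bradley-Terry probability that a human prefers response A over B."""
    return 1.0 / (1.0 + math.exp(score_b - score_a))

def update_scores(scores, preferred, rejected, lr=0.5):
    """Nudge scores so the human-preferred response ranks higher.

    This is the gradient step of -log P(preferred beats rejected)
    applied to the two scalar scores.
    """
    p = preference_prob(scores[preferred], scores[rejected])
    scores[preferred] += lr * (1.0 - p)
    scores[rejected] -= lr * (1.0 - p)
    return scores

# Hypothetical responses a labeller might compare
scores = {"polite_answer": 0.0, "toxic_answer": 0.0}

# Simulated human feedback: labellers repeatedly prefer the polite answer,
# so its score climbs while the toxic answer's score sinks
for _ in range(20):
    update_scores(scores, "polite_answer", "toxic_answer")
```

After those simulated comparisons, `scores["polite_answer"]` ends up above `scores["toxic_answer"]`, which is the basic idea: repeated human judgments push the model's preferences in a direction no loss function over raw internet text ever would.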

That being said, though InstructGPT seems like a promising step up, it’s still far from perfect. The models “still generate toxic or biased results, fabricate facts and generate sexual and violent content without explicit request”, according to the researchers at OpenAI, but they do so less often than the older GPT-3. Perhaps in a few generations we’ll see a language AI that’s a bit further untangled from some of the worst aspects of humanity.

Article source: PCGamer
