Facebook’s AI Uses Training, Empathy to Interact in Conversations
Most chatbots are mindless Q&A regurgitators, limited to a handful of programmed responses that signal they don’t understand a question. Facebook’s new chatbot Blender spins up responses from vast amounts of data and training to create a far more interesting interaction, according to a story on wired.com.
The chatbot can riff on topics ranging from vegetarianism to Game of Thrones and even have an exchange on what it’s like to raise an autistic child, according to Facebook’s AI engineers. But it’s still not perfect, by any means. According to the Wired story:
“Blender still gets tripped up by tricky questions and complex language and it struggles to hold the thread of a discussion for long. That’s partly because it generates responses using statistical pattern matching rather than common sense or emotional understanding.”
It’s already better than the competition, thanks to massive amounts of information and training, plus the addition of a programmed personality.
“Scale is not enough,” says Emily Dinan, a research engineer at Facebook who helped create Blender. “You have to make sure you’re fine tuning to give your model the appropriate conversational skills like empathy, personality, and knowledge.”
The chatbot builds on ideas advanced by other companies, like OpenAI, which recently developed a language model called GPT-2 that can produce realistic conversational language. OpenAI initially kept it under wraps for fear it could be used for “malicious” purposes in this age of deepfakes and Russian troll farms. Like OpenAI, Facebook’s engineers used a voluminous amount of training data, going beyond OpenAI by drawing on 147 million conversations from Reddit. The engineers then put the model through an arduous, Rocky-style training regimen to teach it how to talk to humans. Unlike OpenAI, Facebook is releasing Blender as an open-source project that others can integrate into their own applications, according to engadget.com.
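For readers curious what using the open-source release might look like in practice, here is a minimal sketch of querying a Blender-family model. It assumes the Hugging Face transformers port of BlenderBot rather than Facebook’s original ParlAI tooling, and the checkpoint name and prompt are illustrative assumptions, not part of the Wired or Engadget coverage.

```python
# Minimal sketch: one conversational turn with an open-source BlenderBot
# checkpoint via the Hugging Face "transformers" library (assumed setup;
# Facebook's own release ships through its ParlAI framework instead).
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

model_name = "facebook/blenderbot-400M-distill"  # assumed checkpoint name
tokenizer = BlenderbotTokenizer.from_pretrained(model_name)
model = BlenderbotForConditionalGeneration.from_pretrained(model_name)

# Encode a single user turn and let the model generate a reply.
user_turn = "I've been thinking about going vegetarian. Any advice?"
inputs = tokenizer(user_turn, return_tensors="pt")
reply_ids = model.generate(**inputs, max_length=60)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```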
Blender is better than the average chatbot, but it still can’t match the typical human, and after a few minutes, it becomes obvious that it’s a machine instead of a person, according to the Wired story.
But Dinan says it still represents an incredible advance for a natural language system.
“This is the first time we’ve really shown that you can blend all of these aspects of conversation seamlessly in one. Our evaluation setup showed that models that were fine-tuned on these nice conversational skill datasets are more engaging and considered more human, more lifelike than models which were not.”
read more at wired.com