Talk with a Neural Chatbot!

Chitchat dialogue is a very difficult NLP task to master. However, a dialogue system (chatbot) built on a simple LSTM Seq2Seq model with controls can deliver relatively good performance. Below, we've deployed a chatbot with two control features (conditional training [CT] and weighted decoding [WD]) that control four attributes of chitchat dialogue: repetition, specificity, response-relatedness, and question-asking. The chatbot uses a model pretrained on a baseline Twitter dataset and fine-tuned on the PersonaChat ConvAI2 dataset, found here.
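To make the control idea concrete, here is a minimal sketch (not the demo's actual implementation) of how weighted decoding can re-score candidate tokens at each generation step: the model's log-probability for each token is adjusted by a weighted sum of control features, such as a repetition penalty or a rarity bonus for specificity. The function names, feature definitions, and weights below are illustrative assumptions; the real feature set lives in the linked code.

```python
import numpy as np

def weighted_decoding_step(log_probs, candidate_ids, features, weights):
    """Pick the next token greedily after adding weighted control features
    to the model's log-probabilities (the core idea of weighted decoding)."""
    scores = log_probs.copy()
    for name, weight in weights.items():
        feature_fn = features[name]
        scores += weight * np.array([feature_fn(tok) for tok in candidate_ids])
    return candidate_ids[int(np.argmax(scores))]

# Hypothetical usage: penalise tokens already generated (controls repetition)
# and reward rarer tokens (controls specificity). The IDF-style scores are toy values.
generated = [12, 7, 7, 42]
rarity = {7: 0.1, 12: 0.3, 42: 0.8, 99: 0.9}
features = {
    "extrep": lambda tok: 1.0 if tok in generated else 0.0,
    "rarity": lambda tok: rarity.get(tok, 0.5),
}
weights = {"extrep": -3.0, "rarity": 2.0}   # set by the user at run time

candidates = np.array([7, 12, 42, 99])
log_probs = np.log(np.array([0.4, 0.3, 0.2, 0.1]))
print(weighted_decoding_step(log_probs, candidates, features, weights))
```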

In addition, we've added a transformer-based chatbot called the Poly-encoder. This is a much larger model than the other options, but it achieves state-of-the-art performance on the ConvAI2 task, scoring 89+ on the validation set. We've also included a persona feature that lets the user choose the bot's persona before the conversation!
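For readers curious how the Poly-encoder scores candidate responses, the sketch below illustrates the mechanism described in the paper: the context encoder's token outputs are summarised into m global vectors via m learned attention codes, the candidate embedding attends over those vectors, and the result is dot-producted with the candidate. The random vectors here are toy stand-ins for transformer encoder outputs, and all names are illustrative, not the actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def poly_encoder_score(ctx_tokens, cand_vec, codes):
    """Score one (context, candidate) pair the Poly-encoder way:
    1) m learned codes attend over the context token vectors -> m global vectors,
    2) the candidate vector attends over those m vectors -> one context vector,
    3) the score is the dot product of that vector with the candidate."""
    attn = softmax(codes @ ctx_tokens.T)      # (m, T) attention over context tokens
    global_ctx = attn @ ctx_tokens            # (m, d) global context vectors
    attn2 = softmax(cand_vec @ global_ctx.T)  # (m,) attention over global vectors
    ctx_vec = attn2 @ global_ctx              # (d,) final context representation
    return float(ctx_vec @ cand_vec)

# Toy example: d=16-dim vectors, T=10 context tokens, m=4 codes, 5 candidates
rng = np.random.default_rng(0)
d, T, m = 16, 10, 4
ctx_tokens = rng.normal(size=(T, d))   # per-token context encoder outputs
codes = rng.normal(size=(m, d))        # learned code vectors
candidates = rng.normal(size=(5, d))   # candidate response embeddings

scores = [poly_encoder_score(ctx_tokens, c, codes) for c in candidates]
print(int(np.argmax(scores)))          # index of the best-scoring candidate
```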

This demo, with the exception of the Poly-encoder model, is based on the research discussed in the paper "What makes a good conversation? How controllable attributes affect human judgments" released at NAACL 2019 by Abigail See, Stephen Roller, Douwe Kiela and Jason Weston. More information here: Paper | Code | Slides.

The Poly-encoder transformer model is based on the research discussed in the paper "Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring" written by Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston at Facebook AI. More information here: Paper.
