Chatbots are also known as conversational agents or dialog systems. Suppose your goal is to build a friendly AI. Modeling personality may appear straightforward; however, incorporating such knowledge into a model is still a challenging problem, because the models are trained on large amounts of data from many different sources and users. A Persona-Based Neural Conversation Model is a paper on building such a model.

There are two important concepts to understand when learning about chatbots: the chatbot publishing platform and the chatbot development platform. A chatbot publishing platform is a medium through which users access the chatbot, e.g. Facebook Messenger, LINE, Telegram, or WhatsApp. A chatbot development platform, on the other hand, is a tool used to create the chatbot. Development platforms such as Beep Boop, Flow XO, Botsify, and Chatfuel can be used to build chatbots without writing any code, simply by using a drag-and-drop interface. Such platforms, however, offer only limited functionality and customizability.

A developer can instead choose a rule/retrieval-based approach: you write a pattern and a template, and the bot replies with one of the templates when it encounters a matching pattern in the user's input. Rule-based models tend to perform poorly when they encounter completely new sentences (Suriyadeepan Ram 2016). A different approach is to use a generative model. Generative models construct responses word by word based on the query, which makes the generated responses prone to grammatical errors. Once trained, however, a generative model outperforms the rule-based approach, especially in handling previously unseen queries.
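To make the rule/retrieval-based approach concrete, here is a minimal sketch in Python. The patterns, templates, and fallback message are hypothetical examples, not the rule format of any particular platform; note how any input that matches no pattern falls through to a canned fallback, which is exactly why rule-based bots struggle with unseen sentences.

```python
import re

# Each rule pairs a pattern with a canned response template (hypothetical examples).
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\bweather\b", re.I), "I can't check the weather yet, sorry."),
    (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye!"),
]
FALLBACK = "Sorry, I don't understand."

def reply(message: str) -> str:
    # Return the template of the first matching pattern, else the fallback.
    for pattern, template in RULES:
        if pattern.search(message):
            return template
    return FALLBACK

print(reply("Hey there"))        # matches the greeting pattern
print(reply("Quantum physics"))  # unseen input hits the fallback
```

A real retrieval-based system would rank many candidate responses rather than take the first match, but the failure mode on novel input is the same.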
> In probability and statistics, a generative model is a model that generates all values for a phenomenon, both those that can be observed in the world and "target" variables that can only be computed from those observed.

For our example below, we will first preprocess the data and then train the generative model using sequence-to-sequence (seq2seq). Seq2seq is a general-purpose encoder-decoder framework for TensorFlow that can be used for image captioning, conversational modeling, text summarization, and machine translation. A seq2seq network connects two RNNs to transform one sequence into another: an encoder network condenses an input sequence into a vector, and a decoder network unfolds that vector into a new sequence.

Before training a generative model on a dataset, we need to perform a preprocessing step called padding, which converts variable-length sequences into fixed-length sequences. Datasets that can be used to train a chatbot with the generative approach include the Cornell Movie Dialog Corpus, a collection of dialogs from movie scripts, and the Ubuntu Dialogue Corpus, which is based on chat logs from the Ubuntu IRC public channels.

The next step is to implement the Continuous Bag of Words (CBOW) model, a form of word embedding. CBOW builds on the bag-of-words idea, a simplifying representation used in NLP in which text is represented as a multiset (bag) of words, disregarding grammar but retaining word frequency.

The final step in our model is implementing the attention mechanism. This step is important because it prevents the information loss that would otherwise occur when compressing all the necessary information of a source sentence into a single vector.

For source code, I recommend Siraj Raval's seq2seq chatbot because of its clear documentation. It's available here.
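The padding step described above can be sketched in a few lines of Python. This is a minimal illustration assuming the text has already been converted to token-id sequences and that 0 is reserved as the pad id (both hypothetical choices, not tied to any particular framework's preprocessing code).

```python
PAD_ID = 0  # assumed: id 0 is reserved for padding

def pad_sequences(sequences, max_len=None, pad_id=PAD_ID):
    """Pad variable-length token-id sequences to a fixed length.

    Sequences longer than max_len are truncated; shorter ones are
    right-padded with pad_id. If max_len is None, the longest
    sequence in the batch sets the length.
    """
    if max_len is None:
        max_len = max(len(s) for s in sequences)
    return [s[:max_len] + [pad_id] * (max_len - len(s[:max_len]))
            for s in sequences]

batch = [[4, 7, 2], [9, 1], [3, 8, 5, 6]]
print(pad_sequences(batch))
# [[4, 7, 2, 0], [9, 1, 0, 0], [3, 8, 5, 6]]
```

After padding, every sequence in the batch has the same length, so the batch can be fed to the encoder as a single fixed-shape tensor.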
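The bag-of-words representation that CBOW builds on can also be illustrated directly; the tokenizer here (lowercasing plus a whitespace split) is a simplifying assumption, and a real pipeline would handle punctuation and build a vocabulary of integer ids.

```python
from collections import Counter

def bag_of_words(text: str) -> Counter:
    # Multiset representation: word order and grammar are discarded,
    # but the frequency of each word is retained.
    return Counter(text.lower().split())

bag = bag_of_words("the cat sat on the mat")
print(bag["the"])  # "the" appears twice; its position is lost
```

Both "the cat sat on the mat" and "mat the on sat cat the" map to the same bag, which is precisely the information loss (and the simplification) the representation trades for.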