We introduce Recurrent Neural Networks (RNNs) and show how they can consume an input sequence and predict either a fixed target (categorical or numerical) or another sequence (sequence-to-sequence).
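As a minimal sketch of the two output modes using the Keras API (layer sizes and input dimensions here are illustrative assumptions, not taken from the recipe):

```python
import tensorflow as tf

# Sequence -> fixed target: only the RNN's final hidden state feeds the output.
seq_to_one = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(64, input_shape=(None, 8)),  # (timesteps, features)
    tf.keras.layers.Dense(1, activation="sigmoid"),        # one fixed target
])

# Sequence -> sequence: return_sequences=True emits an output at every timestep.
seq_to_seq = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(64, return_sequences=True, input_shape=(None, 8)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(8)),
])
```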
We create an RNN model to improve on our earlier spam/ham SMS text classification.
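A rough sketch of such a classifier in Keras (the vocabulary size and layer widths are assumptions for illustration):

```python
import tensorflow as tf

VOCAB_SIZE = 10000  # assumed vocabulary size, not from the recipe

# Embed token ids, run an RNN across the message, and classify spam vs. ham
# from the final hidden state.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    tf.keras.layers.SimpleRNN(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(spam)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```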
We show how to implement an LSTM (Long Short-Term Memory) RNN for Shakespeare language generation. (Word-level vocabulary)
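In outline, a word-level language model trains the LSTM to predict the next word at every position; sampling from those predictions generates new text one word at a time. A hedged Keras sketch (vocabulary and layer sizes assumed):

```python
import tensorflow as tf

VOCAB_SIZE = 8000  # assumed word-level vocabulary size

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.LSTM(256, return_sequences=True),  # one prediction per position
    tf.keras.layers.Dense(VOCAB_SIZE),                 # logits over the next word
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```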
We stack multiple LSTM layers to improve on our Shakespeare language generation. (Character-level vocabulary)
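The key detail when stacking is return_sequences=True: each LSTM must pass its full output sequence up to the next layer (and, for language modeling, to the per-timestep output as well). A sketch with assumed sizes:

```python
import tensorflow as tf

NUM_CHARS = 65  # assumed character vocabulary (letters, digits, punctuation)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(NUM_CHARS, 32),
    tf.keras.layers.LSTM(256, return_sequences=True),  # layer 1 feeds layer 2
    tf.keras.layers.LSTM(256, return_sequences=True),  # layer 2 keeps sequences too
    tf.keras.layers.Dense(NUM_CHARS),                  # next-character logits
])
```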
We show how to use TensorFlow’s sequence-to-sequence models to train an English-German translation model.
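The recipe relies on TensorFlow's seq2seq components; the underlying idea is an encoder-decoder pair. A simplified Keras sketch of that structure (vocabulary sizes and dimensions are assumptions, and attention is omitted):

```python
import tensorflow as tf

SRC_VOCAB, TGT_VOCAB, UNITS = 12000, 14000, 256  # assumed sizes

# Encoder: compress the English sentence into the LSTM's final states.
enc_in = tf.keras.Input(shape=(None,))
enc_emb = tf.keras.layers.Embedding(SRC_VOCAB, UNITS)(enc_in)
_, state_h, state_c = tf.keras.layers.LSTM(UNITS, return_state=True)(enc_emb)

# Decoder: emit German tokens, initialized from the encoder's states and
# consuming the previous target token at each step (teacher forcing).
dec_in = tf.keras.Input(shape=(None,))
dec_emb = tf.keras.layers.Embedding(TGT_VOCAB, UNITS)(dec_in)
dec_out = tf.keras.layers.LSTM(UNITS, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
logits = tf.keras.layers.Dense(TGT_VOCAB)(dec_out)

model = tf.keras.Model([enc_in, dec_in], logits)
```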
We implement a Siamese RNN to predict the similarity of addresses and use it for record matching. Using RNNs for record matching is versatile: there is no fixed set of target categories, so the trained model can score similarity between previously unseen addresses.
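The core of a Siamese network is a single weight-shared encoder applied to both inputs, followed by a similarity measure. A minimal Keras sketch (character vocabulary and sizes assumed):

```python
import tensorflow as tf

CHAR_VOCAB, UNITS = 40, 64  # assumed character vocabulary and state size

# One shared encoder maps an address (as character ids) to a vector;
# sharing the weights across both inputs is what makes the network Siamese.
encoder = tf.keras.Sequential([
    tf.keras.layers.Embedding(CHAR_VOCAB, 32),
    tf.keras.layers.LSTM(UNITS),
])

addr_a = tf.keras.Input(shape=(None,))
addr_b = tf.keras.Input(shape=(None,))

# Cosine similarity between the two encodings scores how alike the addresses are.
similarity = tf.keras.layers.Dot(axes=1, normalize=True)(
    [encoder(addr_a), encoder(addr_b)])
model = tf.keras.Model([addr_a, addr_b], similarity)
```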