Implementing RNNs

Hi again! In the last couple of lessons, Ortel and Louise introduced you to recurrent neural networks and LSTMs. In this lesson, Matt and I will go over some implementations of these networks. Because RNNs have a kind of built-in memory, they’re really useful for tasks that are time or sequence dependent. For example, RNNs are used in tasks like time series prediction, as in predicting the weather over time, or in text generation, where the order of words and characters in a sentence is really important. The challenges in designing and implementing any kind of RNN are really two-fold. First, how can we pre-process sequential data, such as a series of sentences, and convert it into numerical data that a neural network can understand? Second, how can we represent memory in code? I’ll address these challenges in a couple of code examples. By the end of the lesson, you’ll have learned to build and train a character-level RNN, specifically an LSTM that’s trained on the text of Tolstoy’s Anna Karenina. The LSTM will take one character as input and produce a predicted next character as output. You can train such an LSTM on any body of text, such as a publicly available book or television script, and generate some really interesting results. So next, let’s talk more about how we can build RNNs that learn from sequential data.
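To make the two challenges above concrete, here is a minimal sketch (not the lesson's actual notebook) showing one way to handle them in PyTorch: characters are mapped to integers so the text becomes numerical data, and the LSTM's hidden state is the piece of code that carries the "memory" from one step to the next. The class name `CharLSTM`, the tiny stand-in text, and the layer sizes are illustrative assumptions, not values from the lesson.

```python
import torch
import torch.nn as nn

# Challenge 1: turn text into numbers.
# Map each unique character to an integer index (and back for decoding later).
text = "anna karenina"  # stand-in for the full novel text (assumption)
chars = sorted(set(text))
char2int = {ch: i for i, ch in enumerate(chars)}
int2char = {i: ch for ch, i in char2int.items()}
encoded = torch.tensor([char2int[ch] for ch in text])  # shape: (seq_len,)

class CharLSTM(nn.Module):
    """One-hot character input -> LSTM -> scores over the next character."""
    def __init__(self, n_chars, hidden_size=128, n_layers=2):
        super().__init__()
        self.n_chars = n_chars
        self.lstm = nn.LSTM(n_chars, hidden_size, n_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, n_chars)

    def forward(self, x, hidden=None):
        # x: (batch, seq_len) of integer character indices
        one_hot = torch.nn.functional.one_hot(x, self.n_chars).float()
        # Challenge 2: `hidden` is the network's memory; pass it back in on
        # the next call to continue from where the sequence left off.
        out, hidden = self.lstm(one_hot, hidden)
        return self.fc(out), hidden  # scores: (batch, seq_len, n_chars)

model = CharLSTM(n_chars=len(chars))
scores, hidden = model(encoded.unsqueeze(0))  # add a batch dimension
```

During training you would compare `scores` at each position against the next character in the text with a cross-entropy loss; during generation you would sample a character from `scores`, feed it back in along with `hidden`, and repeat.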
