2 – Seq2Seq Applications

I do want to say a couple of words on applications before delving deeper into the concept, because the term sequence-to-sequence RNN is a little abstract and doesn’t convey how many amazing things we can do with this type of model. So let’s think of it like this: we have a model that can learn to generate any sequence of vectors. These can be letters, words, images, or anything, really. If you can represent it as a vector, it can be used in a sequence-to-sequence model. So this model can learn to generate any sequence of vectors after we feed it a sequence of input vectors.

What can we do with that? Say you train it on a dataset where the source is an English phrase and the target is a French phrase, and you have a lot of these examples. If you do that and you train it successfully, then your model is now an English-to-French translator. Train it on a dataset of news articles and their summaries and you have a summarization bot. Train it on a dataset of questions and their answers and you have a question-answering model. Train it on a lot of dialogue data and you have a chatbot.

But the inputs don’t only have to be words, remember. RNNs are used alongside convolutional nets in image-captioning tasks, for example, and the input sequence can also be audio. As we saw, there are many possibilities for what you can do once you master sequence-to-sequence. The challenge will be to find the right dataset to feed your model and to guide it through the learning process.
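To make the “sequence of vectors in, sequence of vectors out” idea concrete, here is a minimal sketch of an encoder-decoder seq2seq model, assuming PyTorch. The class name `Seq2Seq` and all shapes are hypothetical choices for illustration, not part of the lecture; the point is only that the encoder compresses any input vector sequence into a context state, and the decoder generates an output vector sequence from it.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Hypothetical minimal encoder-decoder: the encoder reads a sequence
    of input vectors, the decoder generates a sequence of output vectors."""

    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.encoder = nn.GRU(input_size, hidden_size, batch_first=True)
        self.decoder = nn.GRU(output_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, output_size)

    def forward(self, src, tgt):
        # Encode the whole source sequence into a single context state.
        _, context = self.encoder(src)
        # Decode conditioned on that context (teacher forcing with tgt).
        dec_out, _ = self.decoder(tgt, context)
        return self.out(dec_out)

# Assumed toy shapes: batch of 2, source length 5, target length 7,
# 16-dim input vectors, 10-dim output vectors.
model = Seq2Seq(input_size=16, hidden_size=32, output_size=10)
src = torch.randn(2, 5, 16)
tgt = torch.randn(2, 7, 10)
print(model(src, tgt).shape)  # torch.Size([2, 7, 10])
```

Note that the input and output sequences can have different lengths and different vector dimensions, which is exactly why the same architecture covers translation, summarization, question answering, and captioning: only the data changes.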
