# 5 – Matrices

So, the idea for building our LDA model will be to factor our Bag of Words matrix on the left into two matrices, one indexing documents by topic and the other indexing topics by word. In this video, I'll be more specific about what these matrices mean.

Here's how we calculate our Bag of Words matrix. Let's say we have a document, Document two, which has the word space three times, and climate and rule once each, and no other words. Remember that we processed this document and extracted the important words, so stop words like "is" and "the" are not counted. We write these counts in the corresponding row. Now, to find the probabilities, we just divide by the row sum, and we get three-fifths, one-fifth, one-fifth, and zeros for the rest. That's our Bag of Words matrix.

Our document-topic matrix is built as follows. Let's say we have a document, say Document three, and let's say we have a way to figure out that Document three is mostly about science and a bit about sports and politics. Let's say it's 70 percent about science, 10 percent about politics, and 20 percent about sports. We just record these numbers in the corresponding row, and that's how we obtain this matrix.

The topic-term matrix is similar. Here we have a topic, say politics, and let's say we can figure out the probabilities that words are generated by this topic. We take all these probabilities, which should add up to one, and put them in the corresponding row.

As we saw, the product of these two matrices is the Bag of Words matrix. Well, this is not exact, but the idea is to get really close. If we can find two matrices whose product is very close to the Bag of Words matrix, then we've created a topic model. But I still haven't told you how to calculate the entries in these two matrices. Well, one way is using a traditional matrix factorization algorithm. This is outside the scope of this course, but in the instructor comments we'll add some resources in case you want to learn about it in more detail.
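The calculations above can be sketched in a few lines of numpy. This is a toy illustration, not part of any library: the vocabulary, the topic-term probabilities, and the topic names are made-up numbers chosen only to mirror the examples in the video.

```python
import numpy as np

# Bag of Words row for Document 2: "space" x3, "climate" x1, "rule" x1
# (stop words already removed). Dividing by the row sum gives probabilities.
counts = np.array([[3.0, 1.0, 1.0]])
bow_row = counts / counts.sum(axis=1, keepdims=True)
print(bow_row)  # [[0.6 0.2 0.2]]  i.e. 3/5, 1/5, 1/5

# Document-topic row for Document 3: 70% science, 10% politics, 20% sports.
doc_topic = np.array([[0.7, 0.1, 0.2]])

# A hypothetical topic-term matrix: each row is a distribution over the
# three words above, one row per topic (science, politics, sports).
topic_term = np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.2, 0.7],
    [0.3, 0.3, 0.4],
])

# Multiplying the two factors gives an approximation of a Bag of Words row;
# because every input row sums to one, the product row does too.
approx_row = doc_topic @ topic_term
print(approx_row)         # [[0.63 0.15 0.22]]
print(approx_row.sum())   # 1.0
```

A topic model is found when a product like `approx_row` lands close to the observed Bag of Words rows for every document at once.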
However, these matrices are very special. For one, their rows add up to one. Also, if you think about it, there's a lot of structure coming from the set of documents, topics, and words. So, what we'll do is something a bit more elaborate than plain matrix factorization. The basic idea is the following: the entries in the two topic modeling matrices come from some special distributions. So, we'll embrace this fact and work with these distributions to find these two matrices. We'll see this in the next few videos.