31 – Baum-Welch

So what’s next? >> A process called Baum-Welch re-estimation. >> That’s like expectation-maximization again, right? >> Correct. >> But how does it differ from what we just did? >> It’s very similar, but with Baum-Welch, every sample of the data contributes to every state, proportionally to the probability of that frame of data …
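The key quantity behind that statement is the per-frame state responsibility, usually called gamma, computed with a forward-backward pass. Here is a minimal sketch of that computation for a tiny discrete HMM; the two-state model and its numbers are hypothetical, not from the lesson.

```python
# Sketch of the quantity Baum-Welch re-estimation is built on: every frame
# contributes to every state in proportion to the probability that the state
# produced it. The 2-state, 2-symbol model below is a made-up example.

def forward_backward(A, B, pi, obs):
    """Return gamma[t][i] = P(state i at time t | obs): the per-frame
    state responsibilities used when re-estimating the model."""
    n, T = len(pi), len(obs)
    # Forward pass: alpha[t][i] = P(obs[0..t], state i at time t)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(n)]]
    for t in range(1, T):
        alpha.append([B[i][obs[t]] * sum(alpha[t - 1][j] * A[j][i] for j in range(n))
                      for i in range(n)])
    # Backward pass: beta[t][i] = P(obs[t+1..] | state i at time t)
    beta = [[1.0] * n for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(n):
            beta[t][i] = sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                             for j in range(n))
    # gamma: normalize alpha * beta at each frame so each row sums to 1
    gamma = []
    for t in range(T):
        w = [alpha[t][i] * beta[t][i] for i in range(n)]
        s = sum(w)
        gamma.append([x / s for x in w])
    return gamma

# Hypothetical model: A = transitions, B = emissions for symbols 0/1
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
pi = [0.5, 0.5]
gamma = forward_backward(A, B, pi, [0, 0, 1, 1])
```

Each row of `gamma` is a distribution over states for one frame; Baum-Welch then re-estimates transitions and emissions as expectations weighted by these responsibilities.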

30 – HMM Training

When we started this lesson, we created our models by inspection; however, most of the time we want to train using the data itself. When using HMMs for gesture recognition, I like to have at least 12 examples for each gesture I’m trying to recognize, and five examples at a minimum. >> For illustration purposes, let’s …
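"Training from the data" at its simplest means fitting each state's output distribution to observed feature values instead of guessing it. A minimal sketch, assuming hand-segmented delta-y frames for one state and Gaussian outputs; the sample values are made up.

```python
# Sketch: estimate a state's Gaussian output distribution from data
# rather than setting it by inspection. The frames below are hypothetical.

def fit_gaussian(samples):
    """Maximum-likelihood mean and variance of a list of feature values."""
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return mean, var

delta_y_state1 = [1.0, 1.2, 0.9, 1.1, 0.8]  # made-up delta-y frames for state 1
mean, var = fit_gaussian(delta_y_state1)
```

With more examples per gesture, these estimates become more reliable, which is why a dozen examples per gesture works better than the five-example minimum.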

3 – Sign Language Recognition

We will use sign language recognition as our first application of HMMs. For example, let’s consider the signs I and We and create HMMs for each of them. Here’s I. [BLANK_AUDIO] We is a little different. [BLANK_AUDIO] Let’s focus on the I gesture. We’ll use delta y as our first feature here. >> Wait a …
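The delta-y feature is just the frame-to-frame change in the hand's vertical position. A one-line sketch, with made-up y coordinates:

```python
# delta-y: the change in the hand's vertical position between
# consecutive frames. The y values below are hypothetical.

def delta_y(ys):
    return [b - a for a, b in zip(ys, ys[1:])]

ys = [0.0, 1.0, 2.0, 2.0, 1.0]  # hand height per frame (made up)
features = delta_y(ys)           # one fewer value than there are frames
```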

29 – New Observation Sequence for _We_ Solution

Here’s the resulting probability for We: 2.91 x 10^-5. Note that this answer is higher than what we got for the model of I, indicating that this observation sequence probably came from a We gesture. This is a different result from what we saw previously, showing how the additional time spent in …
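Recognition, then, reduces to scoring the observation sequence under each gesture's model and picking the larger likelihood. A minimal sketch; the We score echoes the value above, while the I score is a made-up stand-in for comparison.

```python
# Recognition rule: choose the gesture whose HMM assigns the observation
# sequence the highest likelihood. The "I" score here is hypothetical.

def recognize(likelihoods):
    """likelihoods: dict mapping gesture name -> P(obs | that gesture's HMM)."""
    return max(likelihoods, key=likelihoods.get)

scores = {"I": 1.2e-6, "We": 2.91e-5}
best = recognize(scores)
```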

28 – New Observation Sequence for _We_

Now let’s do the same thing for We. We have the same observation sequence as in the previous quiz, except that the middle zero has been replaced with the sequence negative one, zero, one. We’ve given you new probabilities for We, so go ahead and tell us the probability of this observation sequence given our model for …

25 – Which Gesture is Recognized?

So it looks like it’s a lot more probable that the model for I generated this data. >> Yep. The main difference between the two models’ values for producing this observation sequence has to do with the middle state. Remember that we used delta y even though it is a relatively bad feature for distinguishing …

2 – HMM Representation

We should probably go over how to represent an HMM. >> Right, the Russell and Norvig book draws them like a Markov chain and adds an output node for each state. In this representation, which is common in the machine learning community, each X_i represents a frame of data. X_0 is the beginning state, which …
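In code, that representation usually boils down to three pieces: an initial state distribution, a transition matrix, and one output distribution per state. A minimal sketch of a left-to-right gesture model; the topology is typical for gestures, but the specific numbers are hypothetical.

```python
# One common in-code representation of an HMM: initial distribution pi,
# transition matrix A, and per-state output distributions B. The 3-state
# left-to-right model below uses made-up numbers.

from dataclasses import dataclass

@dataclass
class HMM:
    pi: list  # pi[i]: probability of starting in state i
    A: list   # A[i][j]: probability of moving from state i to state j
    B: list   # B[i]: output distribution for state i (here, (mean, var) of delta-y)

i_model = HMM(
    pi=[1.0, 0.0, 0.0],                       # left-to-right: always start in state 0
    A=[[0.5, 0.5, 0.0],                       # states only advance, never go back
       [0.0, 0.6, 0.4],
       [0.0, 0.0, 1.0]],
    B=[(1.0, 0.1), (0.0, 0.1), (-1.0, 0.1)],  # delta-y roughly +1, then 0, then -1
)
```

The left-to-right structure (zeros below the diagonal of `A`) encodes the assumption that a gesture moves through its phases in order.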