## 9 – I vs We Quiz Solution

Here’s the answer: it could be probability distributions in middle states, as well as likely time spent in middle states.

Thad returns to discuss using Hidden Markov Models for pattern recognition with sequential data.

What property of the observed sequences of delta_ys can help tell the difference between the two gestures? Probability distributions with respect to starting states, probability distributions in middle states, likely time spent in middle states, or none of the above? Select all answers that could apply.

Great, now here’s the HMM I created for the gesture We. [BLANK_AUDIO] >> Hold on. I would have used four states here. Why did you only use three? >> Well, it was mostly to simplify the problem for our purposes. Note that the middle section varies a little bit more in delta y than with …

Okay, I’ve made an HMM for the sign language word I. >> Great, how did you pick those states? >> Well, the gesture seemed like it had three separate motions. So, I made each of those their own state and chose the transition probabilities based on the timing. >> We can take a look at the …

Here’s the answer. [BLANK_AUDIO]

Here are several plots of y versus t. Given these plots, match each of the y versus t plots with their derivative plots, delta y versus t.
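The derivative plots in this quiz are just frame-to-frame differences of the y samples. As a minimal sketch (the y values below are illustrative, not the plots from the lecture), delta y can be computed with a discrete difference:

```python
import numpy as np

# Hypothetical y-position samples over time (values are illustrative only).
y = np.array([0.0, 1.0, 2.0, 2.0, 1.0, 0.0])

# delta y is the frame-to-frame difference, i.e. a discrete derivative of y.
delta_y = np.diff(y)
print(delta_y)  # differences: 1, 1, 0, -1, -1
```

Matching a y-versus-t plot to its delta-y plot amounts to checking where y is rising (positive delta y), flat (zero), or falling (negative).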

So what’s next? >> A process called Baum-Welch re-estimation. >> That’s like Expectation-Maximization again, right? >> Correct. >> But how does it differ from what we just did? >> It’s very similar, but with Baum-Welch, every sample of the data contributes to every state proportionally to the probability of that frame of data …
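The "every frame contributes to every state proportionally" idea is the forward-backward pass at the heart of Baum-Welch. A minimal sketch for a discrete two-state HMM (all probabilities here are illustrative, not a model from the lecture): gamma[t, j] is the posterior probability of being in state j at frame t, and re-estimation weights each frame by these values.

```python
import numpy as np

A = np.array([[0.6, 0.4],    # transition probabilities (illustrative)
              [0.0, 1.0]])
B = np.array([[0.9, 0.1],    # output probabilities per observation symbol
              [0.2, 0.8]])
pi = np.array([1.0, 0.0])    # always start in state 0
obs = [0, 1, 1]              # a short observation sequence

T, N = len(obs), len(pi)
alpha = np.zeros((T, N))     # forward probabilities
beta = np.ones((T, N))       # backward probabilities
alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

# Posterior state occupancy: each row is a soft assignment of one frame
# across all states, and each row sums to 1.
gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)
print(gamma)
```

A full Baum-Welch step would then re-estimate A and B as gamma-weighted averages over the training sequences and iterate until the likelihood stops improving.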

When we started this lesson, we created our models by inspection. However, most of the time we want to train using the data itself. When using HMMs for gesture recognition, I like to have at least 12 examples for each gesture I’m trying to recognize, with five examples at a minimum. >> For illustration purposes let’s …

We will use sign language recognition as our first application of HMMs. For example, let’s consider the signs I and We and create HMMs for each of them. Here’s I. [BLANK_AUDIO] We is a little different. [BLANK_AUDIO] Let’s focus on the I gesture. We’ll use delta y as our first feature here. >> Wait a …

Here’s the resulting probability for We: 2.91 × 10⁻⁵. Note that this answer is higher than what we got for the model of I, indicating that this observation sequence probably came from a We gesture. This is a different result from what we saw previously, showing how the additional time spent in …
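This comparison amounts to maximum-likelihood classification: score the same observation sequence under each gesture’s model and pick the one with the larger likelihood. A minimal sketch using the two probabilities from these quizzes:

```python
# Likelihoods of the same observation sequence under each gesture model,
# taken from the quiz results in this lesson.
p_given_i = 1.42e-5    # P(observations | model for I)
p_given_we = 2.91e-5   # P(observations | model for We)

# Classify by picking the model that makes the observations more likely.
best = "We" if p_given_we > p_given_i else "I"
print(best)
```

With more gestures, the same rule generalizes to taking the argmax over all models; in practice log-probabilities are compared instead, to avoid numerical underflow on long sequences.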

Now let’s do the same thing for We. We have the same observation sequence as the previous quiz, where the middle zero was replaced with the sequence negative one, zero, and one. We’ve given you new probabilities for We. So go ahead and tell us the probability of this observation sequence given our model for …

Here’s the answer. By multiplying all the transition and output probabilities along the curvy path in the new trellis, we get the resulting probability for I: 1.42 × 10⁻⁵.
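The probability of a single path through a trellis is just the product of the transition probability into each state and that state’s output probability for the observation at that step. A minimal sketch (the numbers below are illustrative, not the lecture’s model for I):

```python
# One probability pair per step along a chosen path through the trellis
# (illustrative values, not the lecture's actual model).
trans = [0.5, 0.5, 0.5, 0.5]    # P(transition taken at each step)
output = [0.8, 0.6, 0.6, 0.8]   # P(observation | state) at each step

# The path probability is the product over all steps.
p = 1.0
for a, b in zip(trans, output):
    p *= a * b
print(p)  # ≈ 0.0144
```

Multiplying many probabilities below 1 is why these path probabilities come out so small (on the order of 10⁻⁵ in the quizzes).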

Let’s look at a new observation sequence. We’ve replaced the middle 0 observation with a new sequence: -1, 0, and 1. Given these probabilities, can you tell us the probability of this observation sequence given the model for I?

So it looks like it’s a lot more probable that the model for I generated this data. >> Yep. The main difference between the values for the models producing this observation sequence has to do with the middle state. Remember that we used delta y even though it is a relatively bad feature for distinguishing …

Here’s the most likely path through the trellis. Notice that it’s very similar to the path for I, but the probability is much smaller.

Finally, we need to determine the most likely sequence through the trellis. Check the boxes to indicate the best path and then fill out the probability of that path here.
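Finding the most likely sequence of states through the trellis is what the Viterbi algorithm does: at each frame it keeps, for every state, the best-scoring path ending there, then backtracks from the best final state. A minimal sketch for a discrete three-state left-to-right HMM (all numbers illustrative, not the lecture’s model):

```python
import numpy as np

A = np.array([[0.6, 0.4, 0.0],   # left-to-right transition probabilities
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
B = np.array([[0.9, 0.1],        # output probabilities per observation symbol
              [0.2, 0.8],
              [0.9, 0.1]])
pi = np.array([1.0, 0.0, 0.0])   # always start in state 0
obs = [0, 1, 1, 0]               # a short observation sequence

T, N = len(obs), len(pi)
delta = np.zeros((T, N))         # best path score ending in each state
psi = np.zeros((T, N), dtype=int)  # backpointers
delta[0] = pi * B[:, obs[0]]
for t in range(1, T):
    for j in range(N):
        scores = delta[t - 1] * A[:, j]
        psi[t, j] = scores.argmax()
        delta[t, j] = scores.max() * B[j, obs[t]]

# Backtrack from the best final state to recover the path.
path = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(psi[t, path[-1]]))
path.reverse()
print(path)  # [0, 1, 1, 2]
```

The probability of that best path is `delta[-1].max()`, which is what the quiz asks you to fill in after checking the boxes along the path.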

Here’s the answer. [BLANK_AUDIO]

In the last quiz, you looked at the transition probabilities. Now, let’s consider the output probabilities. We filled out some of the probabilities to get you started. Choose from these answers and fill out the remaining nodes in the trellis.

Here is the answer. Note that the main difference between I and We is the transitions for state two.

We should probably go over how to represent an HMM. >> Right, the Russell and Norvig book draws them like a Markov chain and adds an output node for each state. In this representation, which is common in the machine learning community, each X_i represents a frame of data. X_0 is the beginning state, which …
