Another trick is state tying. >> You mean combining the training data for states that are close to each other within the models? >> Yup. Let's look at our models for I and we again. In the case where we are recognizing isolated signs, the initial movement of the right hand going to the chest is very similar in both models. Instead of having an I state one and a we state one, we're just going to have a single initial state and define the I and we models such that they both include it. That way, when we train the HMMs, we have twice as much data to train the initial state. >> And we can do the same thing with a final state, since the hand going back down to rest looks much the same in the isolated models for I and we. >> We have to be careful with this trick, because state tying gets more complicated when we start to worry about context training. For our new phrases, I want table and we want cat, the only states that should be tied are the first states for I and we and the last states for table and cat. >> And maybe not even the last states for table and cat; if we're using more features than just delta Y, the end of table and the end of cat could be very different. >> Good point. In practice, I often just look for states that seem to have close means and variances during training and then determine whether tying them looks logical given the motion I expect. Again, it's a situation where some visualization of the data and iteration can help us improve our results.
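The two ideas in this exchange, pooling data for a tied state and spotting tie candidates by comparing means and variances, can be sketched in a few lines of Python. This is a minimal illustration, not the course's actual toolkit: it assumes one-dimensional Gaussian emissions (e.g. a delta-Y feature), and the frame values, the state names, and the `is_tie_candidate` heuristic with its threshold are all hypothetical.

```python
import math

def gaussian_params(samples):
    """Mean and (population) variance of a list of 1-D feature values."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var

def is_tie_candidate(params_a, params_b, threshold=2.0):
    """Heuristic: flag two state Gaussians whose means lie within
    `threshold` pooled standard deviations of each other.
    (Illustrative rule of thumb, not a standard criterion.)"""
    (mu_a, var_a), (mu_b, var_b) = params_a, params_b
    pooled_std = math.sqrt((var_a + var_b) / 2)
    return abs(mu_a - mu_b) <= threshold * pooled_std

# Hypothetical delta-Y frames assigned to state 1 of the isolated
# "I" and "we" models (the initial movement toward the chest).
i_state1_frames = [0.9, 1.1, 1.0, 0.8]
we_state1_frames = [1.0, 1.2, 0.9, 1.1]

# Untied: each model estimates its own state 1 from half the data.
params_i = gaussian_params(i_state1_frames)
params_we = gaussian_params(we_state1_frames)

# The two estimates look close, so these states are tie candidates.
candidate = is_tie_candidate(params_i, params_we)

# Tied: pool the frames so the shared initial state is trained on
# twice as much data, giving a more robust estimate.
mu_tied, var_tied = gaussian_params(i_state1_frames + we_state1_frames)
```

In a real system the decision would be made inside Baum-Welch re-estimation over multi-dimensional feature vectors, but the effect is the same: the tied state's parameters are estimated from the union of the frames that either model would have used.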