Stochastic beam search, did we see that before?

>> Not in detail. So far we've been doing something like a breadth-first search, expanding each possible path at each time step. But now we want to prune some of those paths.

>> Well, some of those paths are going to get a low probability pretty quickly. For example, staying in the first state of the model for "I" until t equals 10 seems improbable. We could just drop some of those low-probability paths.

>> Yup, that's the idea of beam search. But we don't want to get rid of all the low-probability paths. It's possible that a bad match at the beginning of the phrase becomes a good match later on. For example, the signer might hesitate or accidentally start with the wrong sign before changing it.

>> Like someone stuttering or having a false start in a spoken language.

>> Precisely.

>> Here are some examples of high-probability paths through the trellis, marked in red, and some low-probability paths through the trellis, marked in blue.

>> In that case, let's keep the paths randomly, in proportion to their probability.

>> Yep. The idea has some similarity to the fitness function we talked about with genetic algorithms, or to the randomness in simulated annealing. In practice, it works very well.

>> Randomness seems to be useful in a lot of AI.

>> It's a principle I practice in my daily life.

>> What?

>> Precisely.

>> Okay, let's get back to the real topic.
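
A minimal sketch of that idea in Python, assuming a toy HMM given as `start`, `transition`, and `emission` probability tables; the function name `stochastic_beam_search` and the `beam_width` parameter are illustrative, not from the lesson:

```python
import random

def stochastic_beam_search(states, observations, start, transition, emission,
                           beam_width=5, seed=None):
    """Keep `beam_width` partial paths per time step, sampled in
    proportion to their probability rather than always keeping the
    top scorers, so unlikely-looking prefixes can still survive."""
    rng = random.Random(seed)
    # Each beam entry is (path, probability); initialize at t = 0.
    beam = [([s], start[s] * emission[s][observations[0]]) for s in states]
    for obs in observations[1:]:
        # Expand every surviving path by every possible next state.
        candidates = []
        for path, prob in beam:
            for s in states:
                p = prob * transition[path[-1]][s] * emission[s][obs]
                if p > 0:
                    candidates.append((path + [s], p))
        # Sample without replacement, weighted by path probability, so a
        # "false start" path has a chance of being kept and recovering.
        beam, weights = [], [p for _, p in candidates]
        for _ in range(min(beam_width, len(candidates))):
            i = rng.choices(range(len(candidates)), weights=weights)[0]
            beam.append(candidates.pop(i))
            weights.pop(i)
    return max(beam, key=lambda entry: entry[1])  # most probable surviving path
```

Deterministic beam search would instead sort `candidates` and keep only the top `beam_width`; the weighted sampling here is what occasionally preserves a low-probability path, like a signer's false start, and gives it a chance to become a good match later.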