Udacity Project Solutions for Nanodegree Programs – Artificial Intelligence, AI for Trading, Computer Vision, Deep Learning, Deep Reinforcement Learning, Natural Language Processing

https://github.com/drserendipity/udacity/tree/main/solutions – project solutions for the nanodegree programs listed above, including source code, reports, and reviewers’ comments

9 – Segmentally Boosted HMMs

In your past work on gesture recognition, how many dimensions have you used for your output probabilities? >> Up to hundreds. At one point, we were creating appearance models of the hand, using as HMM features a similarity metric measuring how closely the current hand matched different visual models of the hand. … Read more

6 – Context Training

OK. Now let’s talk about another trick. When we moved from recognizing isolated signs to recognizing phrases of signs, the combination of movements looks very different. >> For example, when Thad signed NEED in isolation, his hands started from a rest position and finished in the rest position. When he signs NEED in the context … Read more

4 – Phrase Level Recognition

Now that we have topologies for our six signs, let’s talk about phrase level sign language recognition. We have eight phrases we want to recognize. >> Actually, you mean 7 signs and 12 phrases. >> Since there are two variants of CAT we are recognizing, expanding all the possibilities leads to 12 phrases. >> Good … Read more
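The arithmetic behind the correction can be sketched directly: if some phrase templates contain a sign with two variants, each such template expands into two concrete phrases. The templates and sign names below are hypothetical placeholders (the excerpt does not list the actual phrases); only the counts — 8 templates expanding to 12 phrases — come from the text.

```python
from itertools import product

# Hypothetical two-variant sign (assumption for illustration): CAT has
# two distinct signed forms, here labeled CAT-1 and CAT-2.
variants = {"CAT": ["CAT-1", "CAT-2"]}

def expand(template):
    """Expand a phrase template into all concrete sign sequences."""
    options = [variants.get(word, [word]) for word in template]
    return [list(seq) for seq in product(*options)]

# 8 placeholder templates, 4 containing CAT: 4*2 + 4 = 12 phrases.
templates = [
    ["JOHN", "LIKE", "CAT"], ["MARY", "LIKE", "CAT"],
    ["JOHN", "SEE", "CAT"],  ["MARY", "SEE", "CAT"],
    ["JOHN", "LIKE", "DOG"], ["MARY", "LIKE", "DOG"],
    ["JOHN", "SEE", "DOG"],  ["MARY", "SEE", "DOG"],
]
phrases = [p for t in templates for p in expand(t)]
print(len(phrases))  # 12
```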

2 – Using a Mixture of Gaussians

What if our output probabilities aren’t Gaussian? >> Well, according to the central limit theorem, we should get Gaussians if enough factors are affecting the data. >> But in practice, sometimes the output probabilities really are not Gaussian. It is not hard for them to be bimodal. >> You mean like this? >> Yep. >> … Read more
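A bimodal output distribution like the one the dialogue alludes to can be modeled as a weighted sum of Gaussian components. The sketch below (component weights, means, and widths are made up for illustration) evaluates a two-component mixture density that a single Gaussian could not represent:

```python
import numpy as np

def gmm_pdf(x, weights, means, stds):
    """Mixture-of-Gaussians density: weighted sum of component Gaussians."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for w, mu, sigma in zip(weights, means, stds):
        total += w * np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return total

# Two equally weighted components with well-separated means give a
# bimodal density, with modes near -2 and +2 and a dip at 0.
weights, means, stds = [0.5, 0.5], [-2.0, 2.0], [1.0, 1.0]
xs = np.linspace(-6.0, 6.0, 241)
density = gmm_pdf(xs, weights, means, stds)
```

In an HMM, each state's output probability can use such a mixture instead of a single Gaussian, at the cost of estimating more parameters per state.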

31 – Baum Welch

So what’s next? >> A process called Baum-Welch re-estimation. >> That’s like expectation-maximization again, right? >> Correct. >> But how does it differ from what we just did? >> It’s very similar, but with Baum-Welch, every sample of the data contributes to every state proportionally to the probability of that frame of data … Read more
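The "every frame contributes to every state proportionally" idea is the soft state assignment computed in the Baum-Welch E-step. The toy example below (all transition, emission, and initial probabilities are invented for illustration) runs the forward-backward recursions on a 2-state discrete-emission HMM and computes the per-frame state posteriors:

```python
import numpy as np

# Toy 2-state HMM with two discrete output symbols (numbers made up).
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])   # state transition probabilities
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # emission probabilities per state
pi = np.array([0.5, 0.5])    # initial state distribution
obs = [0, 0, 1, 1, 1]        # observed symbol sequence

T, N = len(obs), len(pi)
alpha = np.zeros((T, N))     # forward probabilities
beta = np.zeros((T, N))      # backward probabilities

alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

beta[-1] = 1.0
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

# gamma[t, i]: posterior probability of being in state i at frame t.
# Each frame contributes to re-estimating every state's parameters,
# weighted by this soft assignment.
gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)
```

Re-estimation then averages the observations into each state's parameters using `gamma` as weights, and the E- and M-steps repeat until the likelihood converges.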

30 – HMM Training

When we started this lesson, we created our models by inspection; however, most of the time we want to train using the data itself. When using HMMs for gesture recognition, I like to have at least 12 examples for each gesture I’m trying to recognize, with five examples as a minimum. >> For illustration purposes let’s … Read more
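The 12-preferred / 5-minimum rule of thumb is easy to enforce on a labeled training set. A minimal sketch (the gesture labels and counts below are hypothetical):

```python
from collections import Counter

# Rule of thumb from the lesson: prefer >= 12 examples per gesture,
# with 5 as an absolute minimum. Labels here are placeholders.
labels = ["NEED"] * 12 + ["CAT"] * 7 + ["I"] * 4
counts = Counter(labels)

too_few = {g: n for g, n in counts.items() if n < 5}          # unusable
below_pref = {g: n for g, n in counts.items() if 5 <= n < 12}  # usable, but thin
print(too_few)  # {'I': 4}
```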

3 – Sign Language Recognition

We will use sign language recognition as our first application of HMMs. For example, let’s consider the signs I and WE and create HMMs for each of them. Here’s I. WE is a little different. Let’s focus on the I gesture. We’ll use delta y as our first feature here. >> Wait a … Read more
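The delta-y feature mentioned above is just the frame-to-frame change in the hand's vertical position. A minimal sketch, using made-up y-coordinates for the hand centroid across six video frames:

```python
import numpy as np

# Hypothetical y-coordinates of the hand centroid per frame (assumed data).
y = np.array([10.0, 11.5, 13.0, 13.2, 12.8, 11.0])

# delta-y: change in vertical position between consecutive frames.
# Prepending y[0] keeps the feature vector the same length as the input,
# with zero change assigned to the first frame.
delta_y = np.diff(y, prepend=y[0])
```

Because it measures motion rather than absolute position, delta-y is insensitive to where the signer's hand happens to rest in the frame.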