## 9 – KALMAN QUIZ Predicting The Peak 01 RENDER V1

Now, here’s a question that’s really, really hard. When we graph the new Gaussian: one of the two Gaussians is very wide, and the other is very peaky. So if I were to ask where the peak of the new Gaussian is, will it be a very narrow and skinny Gaussian? Would it be one whose width is in between …

## 8 – KALMAN QUIZ Shifting The Mean 02 RENDER V1

The answer is over here in the middle: it’s between the two old means, the mean of the prior and the mean of the measurement. It’s slightly further on the measurement side because the measurement was more certain about where the vehicle is than the prior. The more certain we are, the more we …
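As a quick sanity check, here is a minimal Python sketch of this effect; the specific numbers are made up for illustration and are not from the lesson:

```python
# Multiplying a prior N(mu, sigma2) by a measurement N(nu, r2) yields a new
# mean between the two old means, pulled toward the more certain Gaussian.
# (Example values assumed: a wide prior and a peakier measurement.)
mu, sigma2 = 10.0, 8.0   # wide prior
nu, r2 = 13.0, 2.0       # more certain measurement

new_mean = (r2 * mu + sigma2 * nu) / (r2 + sigma2)
print(new_mean)  # 12.4, between 10 and 13, closer to the measurement
```

Because the measurement variance (2) is smaller than the prior variance (8), the new mean lands closer to the measurement, exactly as the lecture describes.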

## 7 – KALMAN QUIZ Shifting The Mean 01 RENDER 1 V2

In Kalman filters, we iterate measurement and motion. The measurement step is often called the measurement update, and the motion step is often called the prediction. The measurement update uses Bayes’ rule, which is nothing else but a product, or a multiplication. In the prediction, we use total probability, which is a convolution, or simply an addition. Let’s talk first about …

## 6 – Maximize Gaussian Solution – Artificial Intelligence for Robotics

The answer is to set x to the same value as mu, in which case the expression over here becomes zero, and we get the maximum: the peak of the Gaussian. We set x to the same value as mu, to 10, and the output is approximately 0.2.
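Written out, this is the standard maximization of the Gaussian density:

```latex
f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \, \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),
\qquad
f(\mu) = \frac{1}{\sqrt{2\pi\sigma^2}} = \frac{1}{\sqrt{2\pi \cdot 4}} \approx 0.2
```

At x = mu the exponent is zero, the exponential equals one, and only the normalizing constant remains, which for sigma2 = 4 is approximately 0.2.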

## 5 – Maximize Gaussian – Artificial Intelligence for Robotics

Starting with the following source code, I’m looking for a completion of this one line over here that returns the Gaussian function with arguments mu = 10, sigma2 = 4, and x = 8, and I want the output to be approximately 0.12. Here’s my solution. This is the constant: 1/sqrt(2 * pi * sigma2). Then I multiply with …
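A runnable version of this solution, as a minimal sketch; the function name `gaussian` and its argument order are assumed to match the course’s skeleton code:

```python
from math import sqrt, pi, exp

def gaussian(mu, sigma2, x):
    # Evaluate the 1D Gaussian density N(mu, sigma2) at the point x:
    # the normalizing constant times the exponential of the quadratic term.
    return 1.0 / sqrt(2.0 * pi * sigma2) * exp(-0.5 * (x - mu) ** 2 / sigma2)

print(gaussian(10.0, 4.0, 8.0))  # ~0.12
```

With x two units away from the mean and sigma2 = 4, the exponential contributes exp(-0.5), which brings the peak value of ~0.2 down to ~0.12.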

## 4 – KALMAN Gaussian Intro RENDER 1 1 V3

You remember our Markov model, where the world was divided into discrete grids, and we assigned a probability to each grid cell. Such a representation of a probability over a space is called a histogram, in that it divides the continuous space into finitely many grid cells and approximates the posterior distribution by a …

## 3 – KALMAN Tracking Intro RENDER V2

So, I’d like to take my students on a little journey to Stanford and show them our self-driving car that uses sensors to sense the environment. So let me dive into the class right now. In our last class, we talked about localization. We had a robot that lived in an environment and …

## 2 – Introduction

Welcome to my second class on Kalman filters. I want to take you on a little tour to where it all began: Stanford University. Behind me is VAIL, Stanford’s research center. Let’s go inside. This is Junior, Stanford’s most recent self-driving car. It’s the child of Stanley, whom you can find in the National Museum of …

## 19 – KALMAN QUIZ Kalman Prediction V1

So, now we understand a lot about the 1D Kalman filter. You’ve programmed one, you understand how to incorporate measurements, you understand how to incorporate motion, and you’ve really implemented something that’s actually really cool: a full Kalman filter for the 1D case. Now, in reality we often have many dimensions, and then things …

## 18 – Kalman Filter Code Solution – Artificial Intelligence for Robotics

This piece of code implements the entire Kalman filter. It goes through all the measurement elements and quietly assumes there are as many measurements as motions, indexed by n. It updates mu and sigma using this recursive formula over here. If we plug in the nth measurement and the measurement uncertainty, it does the …

## 17 – Kalman Filter Code – Artificial Intelligence for Robotics

So now let’s put everything together. Let’s write a main program that takes these two functions, update and predict, and feeds in a sequence of measurements and motions. In the example I’ve chosen here, the measurements are 5., 6., 7., 9., and 10. The motions are 1., 1., 2., 1., 1. This all would …

## 16 – Predict Function Solution – Artificial Intelligence for Robotics

And yes, it’s as easy as this. We just add the two means and the two variances. It’s amazing, this entire program over here implements a one-dimensional Kalman filter.

## 15 – Predict Function – Artificial Intelligence for Robotics

[Thrun] Let’s program this. I’m giving you a skeleton code. This is the same update function as before. Now I would like you to write the predict function, which takes our current estimate and its variance, and the motion and its uncertainty, and computes the new updated prediction: mean and variance. So, for example, if …
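A sketch of the predict function being asked for here; the example arguments below are assumed for illustration, since the lesson’s own example is truncated:

```python
def predict(mean1, var1, mean2, var2):
    # Motion update: shift the mean by the motion, and add the variances,
    # since moving under uncertainty can only lose information.
    return [mean1 + mean2, var1 + var2]

# Example values (assumed): estimate N(10, 4), motion N(12, 4).
print(predict(10.0, 4.0, 12.0, 4.0))  # [22.0, 8.0]
```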

## 14 – KALMAN QUIZ Gaussian Motion 01 RENDER V2

So, let’s take a step back and look at what we’ve achieved. We knew there was a measurement update and a motion update, which is also called the prediction. We know that the measurement update is implemented by a multiplication, which is the same as Bayes’ rule. The motion update is done by total probability, or an addition. …

## 13 – New Mean and Variance Solution – Artificial Intelligence for Robotics

Here’s my answer. This is the expression for the mean. This is the one for the variance. I run it, and I get the exact same answer. I run it again for my other example of equal variances and 10 and 12 as means, and miraculously, the correct answer comes out: 11 for the new …

## 12 – KALMAN QUIZ Parameter Update 02 RENDER V2

The answer for the new mean is just the one in the middle, and the reason is that both weights over here are equal. So, we can take the mean of mu and nu, which is 11. Then the new sigma squared is two: if you take one over four plus one over four, then we get …
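Checking the arithmetic of this quiz in a few lines of Python, using the values from the lesson (means 10 and 12, equal variances of 4):

```python
mu, sigma2 = 10.0, 4.0   # prior
nu, r2 = 12.0, 4.0       # measurement with equal variance

# Equal variances means equal weights, so the new mean is just the average.
new_mean = (r2 * mu + sigma2 * nu) / (r2 + sigma2)
# New variance: one over (one over four plus one over four).
new_var = 1.0 / (1.0 / sigma2 + 1.0 / r2)

print(new_mean, new_var)  # 11.0 2.0
```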

## 11 – KALMAN QUIZ Parameter Update 01 RENDER V3

Suppose we multiply two Gaussians, as in Bayes’ rule: a prior and a measurement probability. The prior has a mean of mu and a variance of sigma squared, and the measurement has a mean of nu and a variance of r squared. Then the new mean, mu prime, is the weighted sum of the old means. …
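The weighted sum described here can be written out; this is the standard product-of-Gaussians result, consistent with the quiz that follows:

```latex
\mu' = \frac{r^2 \mu + \sigma^2 \nu}{r^2 + \sigma^2},
\qquad
\sigma'^2 = \frac{1}{\dfrac{1}{\sigma^2} + \dfrac{1}{r^2}}
```

Each old mean is weighted by the variance of the *other* Gaussian, so the more certain distribution pulls the new mean toward itself.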

## 10 – KALMAN QUIZ Predicting The Peak 02 RENDER V1

And very surprisingly, the resulting Gaussian is more certain than either of the two component Gaussians. That is, its variance is smaller than either of the two variances in isolation. Intuitively speaking, this is the case because we actually gain information: the two Gaussians together have higher information content than either Gaussian in isolation. So it looks like …
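A quick numerical check of this claim; the two variances below are example values assumed for the sketch:

```python
# The variance of the product of two Gaussians is smaller than either
# factor's variance, reflecting the information gained by combining them.
sigma2, r2 = 8.0, 2.0  # example variances (assumed)

new_var = 1.0 / (1.0 / sigma2 + 1.0 / r2)
print(new_var)  # 1.6, smaller than both 8 and 2
```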

## 1 – Introduction to Matrices

In this next unit, I really want to take you into a world of amazing tools that we use to build self-driving cars. These tools are called matrices, linear algebra, and vectors. They might look a bit scary at first, but they’re very, very intuitive. In my experience, many students struggle with those …

## 9 – MLND SL DT 08 Entropy Formula 2 MAIN V2

Well, it seems that the first bucket is the best one, because no matter what we do, we’ll always pick red, red, red, red, so we’ll win every time. We can see that although it’s not very easy to win with either of the other two buckets, it’s easier to pick red, red, red, blue in …
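The intuition above is what the entropy formula this section builds toward makes precise. Here is a minimal sketch using the standard Shannon entropy; the bucket contents match the text, while the function name and interface are assumptions:

```python
from math import log2

def entropy(counts):
    # Shannon entropy of a discrete distribution given raw counts:
    # -sum of p * log2(p) over the nonzero-probability outcomes.
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

print(entropy([4, 0]))  # 0.0: the all-red bucket is perfectly predictable
print(entropy([3, 1]))  # ~0.811: three red, one blue is harder to predict
```

The all-red bucket has zero entropy, which is why we win every time, while the mixed buckets have higher entropy and are harder to guess.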