So, now we understand a lot about the 1D Kalman filter. You’ve programmed one, you understand how to incorporate measurements, you understand how to incorporate motion, and you’ve implemented something that’s actually really cool: a full Kalman filter for the 1D case. Now, in reality we often have many dimensions, and then things become more involved. So I’m going to tell you how things work with an example, and why it’s great to estimate in higher-dimensional state spaces.

Suppose you have a two-dimensional state space of x and y, like a camera image, or in our case a car that uses a radar to detect the location of another vehicle over time. Then the 2D Kalman filter affords you something really amazing, and here’s how it goes. Suppose at time t = 0 you observe the object of interest at this coordinate. This might be another car in traffic for the Google self-driving car. One time step later you see it over here. Another time step later you see it right over here. Where would you now expect the object to be at time t = 3? Let me give you three different places. And the answer is: here.

What the Kalman filter does for you, if you do estimation in higher-dimensional spaces, is not just estimate the x and y position, but also implicitly figure out what the velocity of the object is, and then use that velocity estimate to make a really good prediction about the future. Now, notice that the sensor itself only sees position; it never sees the actual velocity. The velocity is inferred from seeing multiple positions. So one of the most amazing things about Kalman filters in tracking applications is that they can figure out the velocity of the object, even though they never directly measure it, and from there make predictions about future locations that incorporate that velocity. That is just really, really great.
And it’s one of the reasons why Kalman filters are such a popular algorithm in artificial intelligence, and in control theory at large.
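To make the idea above concrete, here is a minimal sketch of a 2D tracking Kalman filter in Python with numpy. The state is [x, y, vx, vy], but the measurement matrix H only exposes position, so the filter must infer velocity from successive observations. The three example measurements, the unit time step, the noise values, and the `kalman_step` helper are all illustrative choices, not anything fixed by the lecture; process noise is omitted to keep the sketch short.

```python
import numpy as np

dt = 1.0
# State transition for a constant-velocity model: position += velocity * dt.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
# Measurement matrix: the sensor observes only (x, y), never velocity.
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
R = np.eye(2) * 0.1          # measurement noise (assumed)
I = np.eye(4)

x = np.zeros((4, 1))         # start knowing nothing, including velocity
P = np.eye(4) * 1000.0       # very high initial uncertainty

def kalman_step(x, P, z):
    # Measurement update: fold in the observed position.
    y = z.reshape(2, 1) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (I - K @ H) @ P
    # Prediction (motion update): project the state one time step ahead.
    x = F @ x
    P = F @ P @ F.T
    return x, P

# Three position-only observations along a straight line at t = 0, 1, 2.
for z in [np.array([0.0, 0.0]), np.array([1.0, 2.0]), np.array([2.0, 4.0])]:
    x, P = kalman_step(x, P, z)

print(x.ravel())
```

After the third update-and-predict, the state holds the prediction for t = 3: a position near (3, 6) and an inferred velocity near (1, 2) per time step, even though velocity was never measured.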