# 6 – More Kalman Filters

When you put all this together, you find that all these possibilities over here link to a Gaussian that looks just like this. This is a really interesting two-dimensional Gaussian, which you should really think about. Clearly, if I were to project this Gaussian uncertainty into the space of possible locations, I can't predict a thing. It's impossible to predict where the object is, and the reason is that I don't know the velocity. Also, clearly, if I project this Gaussian into the space of x_dot, then it's impossible to say what the velocity is. A single prediction is insufficient to pin either one down. However, what we do know is that our location is correlated with the velocity: the faster I move, the farther to the right the location is, and this Gaussian expresses exactly that. If, for example, I figured out that my velocity was 2, then under this Gaussian I could really nail down that my location is 3. That is really remarkable. We still haven't figured out where we are or how fast we're moving, but we've learned so much about the relation between these two things through this tilted Gaussian.

To understand how powerful this is, let's now fold in the second observation at time t = 2. This observation tells us nothing about the velocity and only something about the location. So, if I were to draw it as a Gaussian, it's a Gaussian just like this, which tells us something about the location but nothing about the velocity. But if I now multiply my prior from the prediction step with the measurement probability, then, miraculously, I get a Gaussian that sits right over here. And this Gaussian now gives a really good estimate of what my velocity is and a really good estimate of where I am. If I take this Gaussian and predict one step forward, I find myself right over here. This is exactly the effect we see over here: as I update, I get a Gaussian like this, and I predict right over here. Think about this.
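To make this concrete, here is a minimal sketch (my own example, not code from the lecture) of a two-dimensional Kalman filter in NumPy. The state is [position, velocity], only position is measured, and the assumed values (dt = 1, measurement noise 1, initial variance 1000) are illustrative choices. Feeding in the lecture's two position observations, x = 1 at t = 1 and x = 3 at t = 2, the filter recovers a velocity near 2 even though velocity is never observed:

```python
import numpy as np

# State: [position, velocity]; we only ever measure position.
F = np.array([[1.0, 1.0],    # x' = x + x_dot * dt  (dt = 1)
              [0.0, 1.0]])   # x_dot' = x_dot
H = np.array([[1.0, 0.0]])   # measurement extracts position only
R = np.array([[1.0]])        # measurement noise (assumed)
x = np.array([[0.0], [0.0]]) # initial state estimate
P = np.array([[1000.0, 0.0], # huge initial uncertainty:
              [0.0, 1000.0]])# we know neither position nor velocity

def predict(x, P):
    # Motion model propagates the estimate and tilts the covariance,
    # correlating position with velocity.
    return F @ x, F @ P @ F.T

def update(x, P, z):
    # Multiply the prior by the position-only measurement.
    y = np.array([[z]]) - H @ x        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

# Two position-only observations: x = 1 at t = 1, x = 3 at t = 2.
for z in [1.0, 3.0]:
    x, P = predict(x, P)
    x, P = update(x, P, z)

print(x.ravel())  # position ~3, velocity ~2: hidden velocity inferred
print(P[1, 1])    # velocity variance has collapsed from 1000 to ~2.5
```

Notice that after the first update the velocity variance is still enormous; it is only the tilted covariance produced by the second predict step, combined with the second measurement, that pins the velocity down.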
This is a really deep insight into how Kalman filters work. In particular, even though we've only been able to observe one variable, through multiple observations we've been able to infer this other variable. The way we've been able to infer it is that there's a set of physical equations which say that my location after a time step is my old location plus my velocity. This set of equations has been able to propagate constraints from subsequent measurements back to the unobserved variable, velocity, so we're able to estimate the velocity as well. This is really key to understanding Kalman filters. It is key to understanding how the Google self-driving car estimates the locations of other cars and is able to make predictions even when it's unable to measure velocity directly.

So, there's a big lesson here. The variables of a Kalman filter are often called states, because they reflect states of the physical world, like where the other car is and how fast it's moving. They separate into two subsets: the observables, like the momentary location, and the hidden variables, which in our example is the velocity, which I can never observe directly. But because those two things interact, subsequent observations of the observable variables give us information about the hidden variables, so we can estimate what those hidden variables are as well. From multiple observations of the object's location, we can estimate how fast it's moving. This is actually true for all the different filters, but because Kalman filters happen to be very efficient to compute, when you have a problem like this you tend to just use the Kalman filter.
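The constraint-propagation argument can be sketched in a few lines (my own illustration, not from the lecture): in the noise-free limit, the motion equation x_t = x_{t-1} + v·dt means two successive position measurements determine the hidden velocity exactly, which is what the filter above converges toward as measurement noise shrinks.

```python
def infer_hidden_velocity(z1, z2, dt=1.0):
    """Velocity implied by two successive noise-free position measurements,
    given the motion model x_t = x_{t-1} + v * dt."""
    return (z2 - z1) / dt

# Positions from the lecture example: x = 1 at t = 1, x = 3 at t = 2.
v = infer_hidden_velocity(1.0, 3.0)
print(v)  # 2.0
```

The Kalman filter does the same inference, but softened by the noise models, so it blends this algebraic constraint with the uncertainty of each measurement.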