4 – Kalman Filters: Gaussian Intro

You remember our Markov model, where the world was divided into discrete grids and we assigned a probability to each grid cell. Such a representation of a probability over a space is called a histogram: it divides the continuous space into finitely many grid cells and approximates the posterior distribution by a histogram over that space, so the histogram is merely an approximation to the underlying continuous distribution. In Kalman filters, the distribution is given by what's called a Gaussian. A Gaussian is a continuous function over the space of locations, and the area underneath it sums up to 1. If we call the space x, then the Gaussian is characterized by two parameters: the mean, often abbreviated with the Greek letter mu, and the width of the Gaussian, called the variance. For reasons I don't want to go into, the variance is often written as a squared quantity, sigma squared. So any Gaussian in 1D, which means the parameter space over here is one-dimensional, is characterized by mu and sigma squared. Rather than estimating the entire distribution as a histogram, our task in Kalman filters is to maintain a mu and a sigma squared as our best estimate of the location of the object we are trying to find.

The exact formula is an exponential of a quadratic function: we take the exponent of the quadratic difference of our query point x relative to the mean mu, divided by sigma squared, multiplied by minus one half. Now if x equals mu, the numerator becomes 0, and we get the exponential of 0, which is 1. It turns out we have to normalize this by a constant, 1 over the square root of 2 pi sigma squared, but for everything we talk about today this constant won't matter, so we can ignore it. What matters is that we have an exponential of a quadratic function over here. So let me draw you a couple of functions, and you tell me which ones you believe are Gaussians.
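To make the formula described above concrete, the 1D Gaussian density is f(x) = 1/sqrt(2 π σ²) · exp(−½ · (x − μ)² / σ²). Here is a minimal Python sketch of that function; the function name f and the argument order (mu, sigma2, x) are my own choices for illustration, not something fixed by the lecture:

```python
from math import sqrt, pi, exp

def f(mu, sigma2, x):
    """Evaluate the 1D Gaussian with mean mu and variance sigma2 at point x."""
    # exp(-1/2 * (x - mu)^2 / sigma^2), scaled by the normalizer 1/sqrt(2*pi*sigma^2)
    return exp(-0.5 * (x - mu) ** 2 / sigma2) / sqrt(2.0 * pi * sigma2)

# At x == mu the exponent is 0, so exp(...) is 1 and only the normalizer remains.
print(f(10.0, 4.0, 10.0))  # ~0.1995, the peak of a Gaussian with sigma^2 = 4
print(f(10.0, 4.0, 8.0))   # ~0.1210, one standard deviation (sigma = 2) below the mean
```

As the lecture notes, the normalizing constant only scales the curve so the area under it is 1; the shape, and everything we care about today, comes from the exponential of the quadratic term.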
