We’ll now discuss the Kalman filter, which is used for time series in self-driving cars and even flying cars. But first, let’s look back at regression and autoregressive moving averages to see how Kalman filters differ. Recall that with autoregressive moving average models, we must choose the lag order for the autoregressive part and the lag order for the moving average part, and our choice of these lag parameters affects how the model performs. What if, instead of searching for the best lag parameters, we had a single state that represented all of the relevant information from the past? In other words, instead of choosing values for p and q, what if we had a set of variables at time t minus 1 to represent the past? I’ll give you a hint: Kalman filters.

Also, you may recall that financial data has a lot of noise relative to its useful signal. It’s often the case that we want to measure a specific thing but can only measure something else that’s related. For example, we may wish to measure oil production levels but can only measure oil pipeline flows near the production sites. So how do we make predictions when we have noisy, indirect measurements? If you guessed Kalman filters, then you’re right.

When using Kalman filters, we assume that the properties of stock returns can be summarized by a set of values. We call this set of values the state. Within the state of the time series, there is some hidden property that we can’t measure directly. We can think of this hidden property as a smooth curve representing the value of the stock return if there were no noise. What we actually measure, on the other hand, is this hidden state plus noise, so what we have to work with is a more jagged curve with some randomness in it. The Kalman filter is designed to handle this kind of real-life noisy data.

The Kalman filter repeats two steps in a loop: the first is called the predict step, and the second is called the measurement update step.
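To make the hidden-state-plus-noise idea concrete, here is a small sketch that simulates it. The smooth sine-shaped "true return" curve, the series length, and the noise level are all illustrative assumptions, not anything estimated from real data:

```python
import math
import random

random.seed(0)

# Hypothetical hidden property: a smooth curve we can never observe directly.
n = 50
hidden = [0.01 * math.sin(2 * math.pi * t / n) for t in range(n)]

# What we actually measure: the hidden value plus Gaussian noise,
# which produces the more jagged curve described above.
noise_std = 0.02
measured = [x + random.gauss(0.0, noise_std) for x in hidden]
```

Plotting `hidden` against `measured` would show the smooth curve buried inside the jagged one; the Kalman filter’s job is to recover an estimate of the former from the latter.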
First, the Kalman filter predicts the hidden state, or value of the stock return, as a probability distribution. Next, it takes a measurement, such as the actual stock return data, and updates its belief about the hidden state. Note that the Kalman filter stores all the relevant information in what’s called the state. Also notice how the Kalman filter dynamically updates its underlying model every time it performs a measurement. To predict the next state, the Kalman filter uses both the previous time period’s state and the measurement of the latest stock return. So all the relevant prior history of the time series is stored in the state at time t minus 1, and there’s no need to look back at earlier time periods.
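The predict/measurement-update loop can be sketched for the simplest one-dimensional case. This is a minimal sketch, assuming a random-walk model for the hidden state; the process variance `Q`, measurement variance `R`, and the simulated return series are illustrative choices, not values fitted to market data:

```python
import random

random.seed(1)

Q = 1e-5      # assumed process noise variance (how much the hidden state drifts)
R = 0.02 ** 2  # assumed measurement noise variance

def kalman_filter(measurements, Q=Q, R=R):
    x = measurements[0]  # initial state estimate (our belief about the hidden value)
    P = 1.0              # initial variance of that belief (very uncertain)
    estimates = []
    for z in measurements:
        # Predict step: project the belief forward as a distribution.
        # Under a random-walk model the mean x is unchanged,
        # but the uncertainty P grows by Q.
        P = P + Q
        # Measurement update step: blend the prediction with the new
        # measurement z, weighted by the Kalman gain K, and shrink P.
        K = P / (P + R)
        x = x + K * (z - x)
        P = (1 - K) * P
        estimates.append(x)
    return estimates

# Simulated noisy stock returns around a constant hidden value of 0.01.
noisy = [0.01 + random.gauss(0.0, 0.02) for _ in range(100)]
smoothed = kalman_filter(noisy)
```

Note how each pass through the loop uses only the current belief `(x, P)` and the latest measurement `z`: exactly the point above that the state at time t minus 1 carries all the relevant history, with no need to revisit earlier periods.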