8 – M4 L4 10 Estimation Error V4

Let’s think for a moment about why what we’ve come up with now is superior to the way we were doing optimization before. Remember how, before, we had this large covariance matrix of assets that we were using to estimate portfolio variance. Well, now we have a smaller number of factors. Common commercial risk models have around 70 factors, so this matrix is now a 70 by 70 matrix, as opposed to the potentially several-thousand by several-thousand matrix we would have had if we were using a covariance matrix of the assets. There are literally just fewer elements in this matrix, which means we have reduced the number of quantities we are trying to estimate.

How many fewer elements are there? Well, let’s count them. Say the dimension of our asset covariance matrix is n. Remember that covariance matrices are symmetric. There are n elements on the diagonal. The remaining off-diagonal elements number n times n minus 1, but by symmetry each covariance appears in the matrix exactly twice, so the number of unique off-diagonal elements is n times n minus 1, divided by 2. In total, then, we have n elements along the diagonal plus n times n minus 1 over 2 elements off the diagonal, which rearranges to n times n plus 1, all divided by 2, quantities to estimate. Let’s say n is 3,000. That’s around 4.5 million quantities. If n is instead 70, it’s more like 2,500 quantities (there’s a quick check of these counts sketched below). That’s a big difference in scale, and it means many fewer opportunities to introduce estimation error. The fact that we are now estimating many fewer parameters is a good thing and an important reason why we use factor models of risk.

Another thing to remember is that each element in this matrix is an estimate of the variance or covariance of random variables. In practice, we estimate variances and covariances using time series of data. If you have n assets and you want to estimate the covariance matrix of those n assets, then the number of days of returns you need, t, has to be greater than n, and ideally much greater than n. There’s more to say about why the number of data points needs to be much greater than the number of variables n, but one problem with insufficient data is that there won’t be enough observations to accurately estimate all of the variances and covariances, so the sample covariance matrix will differ significantly from the population covariance matrix. Also, PCA run on such a covariance matrix would not produce meaningful principal components (the simulation sketched at the end of this section illustrates the rank problem behind this). If you have 3,000 securities, you need at least 3,000 days of data, which is about 12 years. But we also know that variances and covariances probably change over time, so does using 12 years of data to predict the variance for the next month even make sense? Another reason people use the risk factor model formulation of the covariance matrix is that it gets around that problem: n is a smaller number, so t can be a smaller number too.
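To make the parameter-count comparison concrete, here is a minimal sketch in Python. The helper name `unique_covariance_entries` is just illustrative, and the conversion of trading days to years assumes roughly 252 trading days per year, a common convention rather than something stated in the lesson.

```python
# Quick check of the parameter counts discussed above.
# A symmetric n x n covariance matrix has n diagonal entries plus
# n*(n-1)/2 unique off-diagonal entries, i.e. n*(n+1)/2 in total.

def unique_covariance_entries(n):
    """Number of distinct variances and covariances in an n x n covariance matrix."""
    return n * (n + 1) // 2

for n in (3000, 70):
    print(f"n = {n:>4}: {unique_covariance_entries(n):,} quantities to estimate")

# Rough trading-day-to-year conversion (assuming ~252 trading days per year).
print(f"3,000 trading days is roughly {3000 / 252:.1f} years of data")
```

Running this gives 4,501,500 quantities for n = 3,000 and 2,485 for n = 70, matching the "around 4.5 million" versus "about 2,500" comparison above.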

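The rank problem mentioned above, and the way a factor model works around it, can be illustrated with a short simulation. This is a minimal sketch, not the lesson's own code: the dimensions (200 assets, 10 factors, 120 days), the variable names B, F, and S, and the assumed linear factor structure r = Bf + s are all made up for illustration.

```python
# A minimal sketch (with made-up dimensions and simulated data) of why the
# sample covariance matrix needs t > n observations, and how a factor-model
# covariance sidesteps the problem.
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_factors, t = 200, 10, 120   # hypothetical sizes, with t < n_assets

# Simulate returns from an assumed linear factor structure: r = B f + s.
B = rng.normal(size=(n_assets, n_factors))        # factor exposures (loadings)
f = rng.normal(scale=0.01, size=(t, n_factors))   # factor returns
s = rng.normal(scale=0.02, size=(t, n_assets))    # specific (idiosyncratic) returns
returns = f @ B.T + s

# Sample covariance of the assets: with t < n it cannot be full rank, so it is
# singular, and PCA on it yields at most t - 1 meaningful principal components.
sample_cov = np.cov(returns, rowvar=False)
print("rank of sample covariance:      ", np.linalg.matrix_rank(sample_cov), "of", n_assets)

# Factor-model covariance: estimate the small k x k factor covariance and the
# n specific variances, then rebuild the asset covariance as B F B' + S.
F = np.cov(f, rowvar=False)                # 10 x 10 factor covariance, easy to estimate
S = np.diag(s.var(axis=0, ddof=1))         # diagonal matrix of specific variances
factor_model_cov = B @ F @ B.T + S
print("rank of factor-model covariance:", np.linalg.matrix_rank(factor_model_cov), "of", n_assets)
```

The point of the sketch is that the small factor covariance and the n specific variances can be estimated from a much shorter history than the full asset covariance would require, while the rebuilt matrix is still full rank and usable for optimization.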