I’m going to give you a glimpse as to why this works. Suppose we have two robot positions, x0 and x1, and we know they are 10 apart with some Gaussian noise. That is, in expectation the motion moves the rightward position 10 off the leftward position, but there is some uncertainty. When we talked about Kalman filters, we talked about Gaussians, and this uncertainty might look as follows: there is a constant times an exponential expressing that x1 minus x0 should be 10, but might deviate from it. This Gaussian over here characterizes a constraint over x1 and x0 and wishes them to be exactly 10 apart. The Gaussian is maximal when this equation is fulfilled, but even if the residual is not zero, there is still a probability associated with it.

Let’s now model a second motion; say x2 is 5 apart. We now get another Gaussian alongside the very first one, and the local constraint over here reads just like the constraint over there. So let me just write it down: x2 minus x1 minus 5, squared, over sigma squared. Now, the total probability of this entire thing over here is the product of these two Gaussians. If we want to maximize the product, we can play a number of interesting tricks. First, the constant has no bearing on where the maximum is, just on its absolute value. So if we want to find the best values for x0, x1, and so on, we can drop the constant. Second, we can drop the exponential if we’re willing to turn the product into an addition, and remember, we add things into Omega and into xi; that’s why. And finally, we can actually drop the minus one half; it turns out it also plays no role in the maximization of this expression. So it turns out that what you added were constraints just like these, and you even added them with a certain strength of one over sigma squared.
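The tricks above turn maximizing a product of Gaussians into minimizing a sum of squared residuals, which is exactly what adding constraints into the Omega matrix and xi vector with strength 1/sigma² accomplishes. Here is a minimal sketch of that idea in NumPy (my own illustrative code, not the course’s actual program; I assume sigma = 1 and anchor x0 at 0 so the system has a unique solution):

```python
import numpy as np

sigma = 1.0
w = 1.0 / sigma**2          # constraint strength, 1/sigma^2

omega = np.zeros((3, 3))    # information matrix for x0, x1, x2
xi = np.zeros(3)            # information vector

# Anchor: x0 = 0 with strength 1 (otherwise only differences are determined).
omega[0, 0] += 1.0

def add_motion(omega, xi, i, j, d, w):
    """Add the constraint x_j - x_i = d with strength w into Omega and xi."""
    omega[i, i] += w
    omega[j, j] += w
    omega[i, j] -= w
    omega[j, i] -= w
    xi[i] -= d * w
    xi[j] += d * w

add_motion(omega, xi, 0, 1, 10.0, w)   # x1 - x0 = 10
add_motion(omega, xi, 1, 2, 5.0, w)    # x2 - x1 = 5

# The maximum-likelihood positions solve Omega * mu = xi.
mu = np.linalg.solve(omega, xi)
print(mu)
```

Solving this system gives mu = [0, 10, 15], the positions that satisfy both motion constraints exactly, since nothing here conflicts yet.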
In particular, if you really believe that a constraint is true, you should add a larger value into this matrix over here, and on the right side you should multiply the corresponding constraint by the same larger value. To see this, take an expression like this and multiply it by 1/sigma²: you get something of this nature over here, where 1/sigma² regulates how confident you are. For a small sigma, 1/sigma² becomes large, so 5 is much larger than 1, which means you have much more confidence. Let’s go back to the code and modify it so that the last measurement has a really high confidence: I want you to multiply the last measurement, between x2 and our landmark, by a factor of 5 in your code.
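The effect of that factor of 5 can be sketched as follows. This is illustrative code with made-up numbers, not the actual homework: I assume three poses x0, x1, x2, one landmark L as a fourth variable, two motions (10 and 5), and two landmark measurements that deliberately disagree by 1, with the last one added at strength 5 instead of 1:

```python
import numpy as np

def add_constraint(omega, xi, i, j, d, w=1.0):
    """Add the constraint x_j - x_i = d with strength w into Omega and xi."""
    omega[i, i] += w
    omega[j, j] += w
    omega[i, j] -= w
    omega[j, i] -= w
    xi[i] -= d * w
    xi[j] += d * w

n = 4                          # variables: x0, x1, x2, L
omega = np.zeros((n, n))
xi = np.zeros(n)

omega[0, 0] += 1.0             # anchor x0 = 0
add_constraint(omega, xi, 0, 1, 10.0)         # motion: x1 - x0 = 10
add_constraint(omega, xi, 1, 2, 5.0)          # motion: x2 - x1 = 5
add_constraint(omega, xi, 0, 3, 17.0)         # measurement: L - x0 = 17
add_constraint(omega, xi, 2, 3, 1.0, w=5.0)   # last measurement: L - x2 = 1,
                                              # multiplied by confidence 5

mu = np.linalg.solve(omega, xi)
print(mu)                      # estimates for x0, x1, x2, L
```

The two landmark measurements conflict (the motions put x2 near 15, so one measurement says L is about 17 and the other says about 16). Because the last measurement carries weight 5, the solution leaves a much smaller residual on L − x2 − 1 than on L − x0 − 17: that is exactly what expressing higher confidence buys you.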