6 – 05 Total Probability V4

Let me begin my story in a world where a robot resides, and let's assume the robot has no clue where it is. We will model this with a function, which I'm going to draw onto this diagram over here, where the vertical axis is the probability for any location in this world and the horizontal axis corresponds to all the places in this one-dimensional world. The way I'm going to model the robot's current belief about where it might be, its confusion, is by a uniform function that assigns equal weight to every possible place in this world. This is the state of maximum confusion. Now, to localize, the world has to have some distinctive features. Let's assume there are three different landmarks in the world: there's a door over here, there's a door over here, and a third one way back here. And for the sake of the argument, let's assume they all look alike. They're not distinguishable from each other, but you can distinguish a door from the non-door area, from the wall. Now let's see how the robot can localize itself by sensing, and suppose it senses that it's standing right next to a door. All it knows now is that it is likely located next to a door. How will this affect our belief? Here is the critical step for localization; if you understand this step, you understand localization. The measurement of a door transforms our belief function, defined over possible locations, into a new function that looks very much like this. For the three locations adjacent to doors, we now have an increased belief of being there, whereas all the other locations have a decreased belief. This is a probability distribution that assigns high probability to being next to a door, and it's called the posterior belief, where the word "posterior" means it's after a measurement has been taken.
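The measurement update just described can be sketched in a few lines of Python. The world size, the door positions, and the sensor probabilities below are illustrative assumptions, not values from the lecture:

```python
# Sketch of the measurement (sense) update, with assumed values:
# a 10-cell one-dimensional world containing three doors.
n = 10
doors = [1, 4, 9]          # hypothetical door locations
p = [1.0 / n] * n          # uniform prior: the state of maximum confusion

def sense(p, saw_door=True, p_hit=0.6, p_miss=0.2):
    """Multiply the belief by the measurement likelihood, then normalize.

    Cells consistent with the measurement get the larger weight p_hit;
    the rest keep the residual weight p_miss, so a possibly erroneous
    sensor never drives any location's probability all the way to zero."""
    q = []
    for i in range(len(p)):
        consistent = (i in doors) == saw_door
        q.append(p[i] * (p_hit if consistent else p_miss))
    total = sum(q)
    return [v / total for v in q]

posterior = sense(p)       # the robot sees a door
# The three door cells now carry the big bumps; every other cell keeps
# a smaller residual probability, and the distribution sums to 1.
```

Because the prior was uniform, the three door cells end up with equal, elevated probability; the normalization step is what turns the weighted product back into a proper probability distribution.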
Now, a key aspect of this belief is that we still don't know where we are; there are three possible locations. In fact, it might be that the sensors were erroneous and we accidentally saw a door where there is none, so there's still a residual probability of being at these places over here. But these three bumps together really express our current best belief of where we are. This representation is absolutely core to probability and to mobile robot localization. Now let's assume the robot moves. Say it moves to the right by a certain distance; then we can shift the belief according to the motion. What this might look like is about like this: this bump over here made it to here, this guy went over here, and this guy over here. Obviously this robot knows its heading direction; it's moving to the right in this example. And it knows roughly how far it moved. Now, robot motion is somewhat uncertain; we can never be certain exactly where the robot moved, so these bumps will be a little bit flatter than these guys over here. The process of shifting those beliefs to the right is technically called a convolution. Let's now assume the robot senses again, and for the sake of the argument, let's assume it sees itself right next to a door again, so the measurement is the same as before. Now the most amazing thing happens. We end up multiplying our belief, which is now the prior to the second measurement, with a function that looks very much like this one over here, which has a peak at each door, and out comes a belief that looks like the following. There are a couple of minor bumps, but the only really big bump is this one over here. It corresponds to this guy over here in the prior, and it's the only place in the prior that really corresponds to the measurement of a door, whereas all the other door locations have a low prior belief.
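The full sense-move-sense cycle described above can be sketched the same way. Again, the world layout, the shift amount, and the motion-noise probabilities are assumptions for illustration; the `move` step is the convolution, shifting each bump to the right while spreading it slightly to model uncertain motion:

```python
n = 10
doors = [1, 4, 9]                      # hypothetical door locations
p = [1.0 / n] * n                      # uniform prior

def sense(p, p_hit=0.6, p_miss=0.2):
    """Measurement update for 'I see a door': multiply and normalize."""
    q = [p[i] * (p_hit if i in doors else p_miss) for i in range(len(p))]
    total = sum(q)
    return [v / total for v in q]

def move(p, shift, p_exact=0.8, p_under=0.1, p_over=0.1):
    """Convolve the belief with a simple motion model on a cyclic world:
    the robot usually moves exactly `shift` cells, but sometimes one cell
    short or one cell long, which is why each bump gets a little flatter."""
    q = [0.0] * len(p)
    for i in range(len(p)):
        q[i] = (p_exact * p[(i - shift) % len(p)]
                + p_under * p[(i - shift + 1) % len(p)]
                + p_over * p[(i - shift - 1) % len(p)])
    return q

belief = sense(p)          # first door measurement: three equal bumps
belief = move(belief, 3)   # shift right by 3 cells; the bumps flatten
belief = sense(belief)     # second door measurement
# In this layout, only the bump that started at door 1 lands on another
# door (cell 4), so that cell now dominates the distribution.
```

The multiplication in the second `sense` call is where the prior bumps meet the door-likelihood peaks: only a bump that was shifted onto a door survives at full strength, which is exactly the "one really big bump" effect from the lecture.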
As a result, this function is really interesting. It's a distribution that focuses most of its weight on the correct hypothesis of the robot being at the second door, and it assigns very little belief to places far away from doors. At this point, our robot has localized itself. If you understood this, you understand probability and you understand localization. Congratulations, you understand probability and localization. You might not know it yet, but that's really a core aspect of understanding a whole bunch of things I'm going to teach you in this class.