27 – Sense and Move 2

So wow! You’ve basically programmed the Google self-driving car’s localization, even if you might not quite realize it yet. So let me tell you where we are. We talked about measurement updates, and we talked about motion, and we coded these two routines: sense and move. Now, localization is nothing else but the iteration of sense and move. There is an initial belief that is tossed into this loop; if you sense first, it comes in on the left side, and then localization cycles through this move-sense, move-sense, move-sense cycle. Every time the robot moves, it loses information as to where it is, because robot motion is inaccurate. Every time it senses, it gains information. That is manifested by the fact that after motion, the probability distribution is a little bit flatter and more spread out, and after sensing, it is focused a little bit more.

In fact, as a footnote, there is a measure of information called entropy. Here is one of the many ways you can write it: the negative expected log likelihood of the probability of each grid cell, H = -Σᵢ p(xᵢ) log p(xᵢ). Without going into detail, this is a measure of the uncertainty in the distribution, and it can be shown that the motion step makes the entropy go up, while the measurement step makes it go down. So you’re really losing and gaining information.

I would now love to implement this in our code. In addition to the two measurements we had before, red and green, I’m going to give you two motions, 1 and 1, which means the robot moves right and then right again. Can you compute the posterior distribution if the robot first senses red, then moves right by one, then senses green, then moves right again? Let’s start with a uniform prior distribution.
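As a quick check on the entropy footnote above, here is a minimal sketch in Python (the two belief distributions are made-up illustrations, not values from the lesson): a flat belief over five cells has the maximum possible entropy, log 5, while a peaked belief has less, which is exactly why a flattening move step raises entropy and a focusing sense step lowers it.

```python
import math

def entropy(p):
    # Shannon entropy H = -sum_i p_i * log(p_i); terms with p_i = 0 contribute 0.
    return -sum(q * math.log(q) for q in p if q > 0)

uniform = [0.2, 0.2, 0.2, 0.2, 0.2]     # maximum uncertainty over 5 cells
peaked = [0.05, 0.05, 0.8, 0.05, 0.05]  # belief after an informative measurement

print(entropy(uniform))  # ~1.609 (= log 5), the largest value possible here
print(entropy(peaked))   # ~0.778, lower: this belief carries more information
```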
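And here is a sketch of the whole quiz in code. The grid world, measurement probabilities, and motion noise below are assumptions carried over from the earlier exercises in this lesson (this section does not restate them): a cyclic five-cell world ['green', 'red', 'red', 'green', 'green'], with pHit = 0.6 and pMiss = 0.2 for sensing, and pExact = 0.8, pOvershoot = 0.1, pUndershoot = 0.1 for motion.

```python
p = [0.2, 0.2, 0.2, 0.2, 0.2]  # uniform prior over the five cells
world = ['green', 'red', 'red', 'green', 'green']  # assumed from earlier lessons
measurements = ['red', 'green']  # the two measurements given above
motions = [1, 1]                 # move right by one, twice

pHit = 0.6         # assumed sensor model from earlier lessons
pMiss = 0.2
pExact = 0.8       # assumed motion model from earlier lessons
pOvershoot = 0.1
pUndershoot = 0.1

def sense(p, Z):
    # Measurement update: weight each cell by pHit/pMiss, then normalize.
    q = [p[i] * (pHit if world[i] == Z else pMiss) for i in range(len(p))]
    s = sum(q)
    return [x / s for x in q]

def move(p, U):
    # Motion update: total probability (a convolution) with cyclic wraparound.
    q = []
    for i in range(len(p)):
        s = pExact * p[(i - U) % len(p)]
        s += pOvershoot * p[(i - U - 1) % len(p)]
        s += pUndershoot * p[(i - U + 1) % len(p)]
        q.append(s)
    return q

# Localization is just the iteration of sense and move:
for k in range(len(measurements)):
    p = sense(p, measurements[k])
    p = move(p, motions[k])

print(p)
```

Under these assumptions the final belief comes out to roughly [0.2116, 0.1516, 0.0811, 0.1684, 0.3874]: the robot is most likely in the last cell, which is what you would expect after seeing red near the pair of red cells, stepping right, seeing green, and stepping right once more.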
