1 – More Uniform Cost

Let’s try to understand a little better how uniform cost search works. We start at the start state, and then we expand out from there, looking at different paths. What we end up doing is expanding in contours, like on a topographic map: first we expand out to a certain distance, then to a farther distance, and then to a farther distance still. At some point, we meet up with the goal. Let’s say the goal is here. Now we’ve found a path from the start to the goal. But notice that the search wasn’t directed in any way towards the goal; it was expanding out everywhere in the space. Depending on where the goal is, we should expect to explore half of the space, on average, before we find it. If the space is small, that can be fine. But when spaces are large, that won’t get us to the goal fast enough. Unfortunately, with only what we know so far, there’s really nothing we can do better than that. So if we want to find the goal faster, we’re going to have to add more knowledge. The type of knowledge that’s proven most useful in search is an estimate of the distance from a state to the goal. So let’s say we’re dealing with a route-finding problem, and we can move in any direction: up or down, right or left. We’ll take as our estimate the straight-line distance between a state and the goal, and we’ll try to use that estimate to find our way to the goal fastest. An algorithm called greedy best-first search does exactly that: it expands first the path that’s closest to the goal according to the estimate. So what do the contours look like in this approach? Well, we start here, and then we look at all the neighboring states, and the ones that appear to be closest to the goal we expand first. So we’d start expanding like this, and like this, and like this, and that would lead us directly to the goal.
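The greedy idea can be sketched as a small program. This is a minimal sketch, not code from the lecture: it assumes a 4-connected grid and a `passable` predicate (both my own framing), and it uses the straight-line (Euclidean) distance as the estimate, as described above.

```python
import heapq
import math

def greedy_best_first(start, goal, passable):
    """Greedy best-first search on a 4-connected grid.

    Orders the frontier purely by the straight-line distance estimate,
    so it always expands the path that *appears* closest to the goal.
    `passable(cell)` says whether a cell can be entered (hypothetical
    helper supplied by the caller).
    """
    def h(cell):
        return math.dist(cell, goal)  # straight-line distance to the goal

    frontier = [(h(start), start, [start])]  # (estimate, cell, path so far)
    explored = set()
    while frontier:
        _, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in explored:
            continue
        explored.add(cell)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if passable(nxt) and nxt not in explored:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None  # no path exists
```

Note that the path cost so far plays no role in the ordering; only the estimate does. That is exactly what makes the search fast in open spaces and what gets it into trouble around obstacles, as the next paragraph shows.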
So now, instead of exploring whole circles that go out everywhere in the search space, our search is directed towards the goal. In this case, it gets us immediately to the goal, but that won’t always happen if there are obstacles along the way. Consider this search space: we have a start state and a goal, and there’s an impassable barrier between them. A greedy best-first search will start expanding out as before, trying to get towards the goal. When it reaches the barrier, what will it do next? Well, it will try to continue along a path that keeps getting closer and closer to the goal. So it won’t consider going back this way, which moves farther from the goal. Rather, it will continue expanding along these lines, which always get closer to the goal, and eventually it will find its way to the goal. So it does find a path, and it does so by expanding a small number of nodes, but it’s willing to accept a path that’s longer than other paths. If we had explored in the other direction, we could have found a much shorter path by just popping over the barrier and then going directly to the goal. But greedy best-first search wouldn’t have done that, because that would have involved getting to this point, which is this distance from the goal, and then considering states which were farther from the goal. What we would really like is an algorithm that combines the best parts of greedy search, which explores a small number of nodes in many cases, and uniform cost search, which is guaranteed to find the shortest path. We’ll show how to do that next, using an algorithm called the A* algorithm.
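To make the combination concrete, here is a hedged sketch of the standard way it is done (the details are covered in the following lesson): order the frontier by f = g + h, where g is the cost of the path so far (uniform cost’s ordering) and h is the straight-line estimate (greedy’s ordering). The grid setup and `passable` predicate are my own illustrative assumptions, not the lecture’s.

```python
import heapq
import math

def a_star(start, goal, passable):
    """A*-style search on a 4-connected grid with unit step costs.

    Orders the frontier by f = g + h: g = cost so far, h = straight-line
    distance to the goal. Because the straight-line estimate never
    overestimates the true remaining cost, the first path that reaches
    the goal is a shortest one.
    """
    def h(cell):
        return math.dist(cell, goal)

    frontier = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    best_g = {start: 0}                          # cheapest known cost to each cell
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if g > best_g.get(cell, math.inf):
            continue  # a cheaper route to this cell was already found
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if passable(nxt) and g + 1 < best_g.get(nxt, math.inf):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no path exists
```

On a grid with a barrier like the one described above, this search still leans towards the goal (through h) but is willing to back up around the obstacle when the accumulated cost g says a detour is cheaper, so it returns the shortest path rather than the first greedy one.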
