## 5 – 25 Situation Calculus 3 V2

So I’ve talked about the possibility axioms and the successor-state axioms; that’s most of what’s in situation calculus. And that’s used to describe an entire domain, like the airport cargo domain. And now we describe a particular problem within that domain by describing the initial state. And typically we call that S0, the initial situation. …
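For concreteness, an initial situation in the air-cargo domain might be written down along these lines (a sketch; the particular fluents and constants are illustrative, not quoted from the lecture):

```latex
At(C_1, SFO, S_0) \land At(C_2, JFK, S_0) \land At(P_1, SFO, S_0) \\
Cargo(C_1) \land Cargo(C_2) \land Plane(P_1) \land Airport(SFO) \land Airport(JFK)
```

Everything that can change is stated relative to $S_0$; the type facts on the second line hold in every situation.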

## 4 – 24 Situation Calculus 2 V4

Now, there’s a convention in situation calculus for predicates like ‘at’: we said plane ‘p’ was at airport ‘x’ in situation ‘s’. These types of predicates, which can vary from one situation to another, are called fluents, a word having to do with fluidity or change over time. And the convention is that …
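The convention being introduced is that a fluent takes the situation as its last argument, while atemporal predicates take no situation argument at all. A hedged sketch:

```latex
\underbrace{At(p, x, s)}_{\text{fluent: can differ from one situation to another}}
\qquad
\underbrace{Plane(p)}_{\text{atemporal: true in every situation}}
```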

## 3 – 23 Situation Calculus 1 V3 (1)

Now I want to talk about one more representation for planning, called Situation Calculus. [INAUDIBLE], suppose we wanted to have the goal of moving all the cargo from airport A to airport B, regardless of how many pieces of cargo there are. You can’t express the notion of all in propositional languages like classical planning, …
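In first-order situation calculus the ‘all the cargo’ goal can be stated with a quantifier, which a propositional language cannot express. One plausible rendering (illustrative, not a quoted formula):

```latex
\exists s \;\; \forall c \; \big( Cargo(c) \Rightarrow At(c, B, s) \big)
```

That is: there is some reachable situation $s$ in which every piece of cargo, however many there are, is at airport $B$.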

## 2 – 22 Sliding Puzzle Example

To understand the idea of heuristics, let’s talk about another domain. Here we have this sliding puzzle domain. Remember, we can slide around these little tiles, and we try to reach a goal state. The 15 puzzle is kind of big, so let’s show you the state space for the smaller 8 puzzle. And here’s just …
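The two classic admissible heuristics for sliding puzzles, misplaced tiles and Manhattan distance, can be sketched in a few lines. The state encoding here (a tuple of nine entries in row-major order, 0 for the blank) is an assumption for illustration, not the lecture’s notation:

```python
# Heuristics for the 8 puzzle. A state is a tuple of 9 entries, row-major,
# with 0 standing for the blank square (an illustrative encoding).

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def misplaced_tiles(state):
    """h1: count of tiles (ignoring the blank) not on their goal square."""
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def manhattan(state):
    """h2: sum over tiles of horizontal + vertical distance to the goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

state = (1, 0, 2, 3, 4, 5, 6, 7, 8)      # blank swapped with tile 1
print(misplaced_tiles(state), manhattan(state))   # 1 1
```

Manhattan distance dominates misplaced tiles, so an A* search using it never expands more nodes.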

## 1 – Plan Space Search

Now there’s one more type of search for plans that we can do with the classical planning language that we couldn’t do before. And this is searching through the space of plans rather than searching through the space of states. In forward search, we were searching through concrete world states. In backward search, we were …

## 5 – Regression vs Progression

Let’s show an example of where a backwards search makes sense. I’m going to describe a world in which there’s one action, the action of buying a book, and the precondition is we have to know which book it is and let’s identify them by ISBN number. We can buy ISBN number b, and the …
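The single regression step behind this example can be sketched directly: to make a goal true after an action, we need the goal minus the action’s effects, plus the action’s preconditions. The Buy(b) schema below paraphrases the excerpt; the predicate names are illustrative:

```python
# One regression step: (goal - effects) | preconditions.
# Buy(b) is sketched with precondition ISBN(b) and effect Own(b).

def regress(goal, action):
    """Subgoal that must hold BEFORE `action` for `goal` to hold after it."""
    return (goal - action["effects"]) | action["preconds"]

buy = {"preconds": {"ISBN(b)"}, "effects": {"Own(b)"}}
print(regress({"Own(b)"}, buy))   # {'ISBN(b)'}
```

The point of the example: searching backwards from the goal there is one relevant action, while searching forwards there would be a branching factor of every ISBN we could buy.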

## 4 – Regression Search

Another way to search is called backwards or regression search, in which we start at the goal. So we take the description of the goal state, C1 is at JFK and C2 is at SFO. So that’s the goal state. And notice that that’s the complete goal state. It’s not that I left out all …
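A minimal regression search can be sketched as breadth-first search from the goal, regressing through any action whose effects intersect the current subgoal. The three hand-instantiated actions below are a toy slice of the air-cargo domain, not its full definition (delete lists are omitted for brevity):

```python
# Backward (regression) search over STRIPS-style ground actions,
# written as name -> (preconditions, effects). An illustrative toy domain.

from collections import deque

ACTIONS = {
    "Unload(C1,P1,JFK)": ({"In(C1,P1)", "At(P1,JFK)"}, {"At(C1,JFK)"}),
    "Fly(P1,SFO,JFK)":   ({"At(P1,SFO)"}, {"At(P1,JFK)"}),
    "Load(C1,P1,SFO)":   ({"At(C1,SFO)", "At(P1,SFO)"}, {"In(C1,P1)"}),
}

def backward_search(goal, initial):
    """BFS from the goal, regressing through relevant actions."""
    frontier = deque([(frozenset(goal), [])])
    seen = {frozenset(goal)}
    while frontier:
        subgoal, plan = frontier.popleft()
        if subgoal <= initial:              # subgoal already true initially
            return plan                     # plan reads in execution order
        for name, (pre, eff) in ACTIONS.items():
            if eff & subgoal:               # action is relevant to the subgoal
                regressed = frozenset((subgoal - eff) | pre)
                if regressed not in seen:
                    seen.add(regressed)
                    frontier.append((regressed, [name] + plan))
    return None

plan = backward_search({"At(C1,JFK)"}, {"At(C1,SFO)", "At(P1,SFO)"})
print(plan)
```

Because the search works from the goal, only actions relevant to the current subgoal are ever considered.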

## 3 – Progression Search

So the simplest way to do planning is really the exact same way that we did it in problem solving. We start off in the initial state, so P1 was at SFO, say, and cargo C1 was also at SFO. And all the other things that were in that initial state, and then we start …
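Progression search is the mirror image: start from the initial state and apply any action whose preconditions are satisfied, exactly as in ordinary problem-solving search. A sketch over a toy air-cargo slice (each action is a precondition / add / delete triple, all illustrative):

```python
# Forward (progression) search: apply applicable actions to concrete states.

from collections import deque

ACTIONS = {
    "Load(C1,P1,SFO)":   ({"At(C1,SFO)", "At(P1,SFO)"}, {"In(C1,P1)"}, {"At(C1,SFO)"}),
    "Fly(P1,SFO,JFK)":   ({"At(P1,SFO)"}, {"At(P1,JFK)"}, {"At(P1,SFO)"}),
    "Unload(C1,P1,JFK)": ({"In(C1,P1)", "At(P1,JFK)"}, {"At(C1,JFK)"}, {"In(C1,P1)"}),
}

def forward_search(initial, goal):
    """BFS over world states from the initial state to a goal-satisfying state."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:                          # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

print(forward_search({"At(C1,SFO)", "At(P1,SFO)"}, {"At(C1,JFK)"}))
```

Note the contrast with regression search: here every applicable action is tried, whether or not it helps toward the goal, which is why forward search usually needs a good heuristic.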

## 2 – Classical Planning 2

And here we see a more complete representation of a problem solving domain in the language of classical planning. And here’s the Fly action schema, I made it a little bit more explicit with from and to airports rather than x or y. And we want to deal with transporting cargo. So in addition to …
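A hedged sketch of what the Fly schema with explicit from and to arguments might look like once instantiated; the exact predicate names are paraphrased from the excerpt, not quoted:

```python
# The Fly action schema as a function that grounds it on demand:
# preconditions plus add/delete effect lists (names are illustrative).

def fly_schema(p, frm, to):
    return {
        "name": f"Fly({p},{frm},{to})",
        "precond": {f"At({p},{frm})", f"Plane({p})",
                    f"Airport({frm})", f"Airport({to})"},
        "add": {f"At({p},{to})"},
        "delete": {f"At({p},{frm})"},
    }

a = fly_schema("P1", "SFO", "JFK")
print(a["name"])   # Fly(P1,SFO,JFK)
```

One schema with variables stands in for every concrete Fly action, which is the compactness that classical planning buys over enumerating states.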

## 1 – Classical Planning-01

Now I want to describe a notation which we call classical planning which is a representation language for dealing with states and actions and plans. And it’s also an approach for dealing with the problem of complexity by factoring the world into variables. So under classical planning the state space consists of all the possible …
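The payoff of factoring the world into variables is that a state is just an assignment to those variables, so n boolean fluents describe 2**n candidate states without listing them one by one (whether each is physically reachable is a separate question). A small sketch with made-up fluents:

```python
from itertools import product

# Three boolean fluents factor the space into 2**3 = 8 candidate states.
fluents = ["At(P1,SFO)", "At(P1,JFK)", "Dirty(A)"]
states = [dict(zip(fluents, vals))
          for vals in product([False, True], repeat=len(fluents))]
print(len(states))   # 8
```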

## 9 – Infinite Sequences

In this new notation, instead of writing plans as a linear sequence of, say, suck, move right, and suck, I’m going to write them as a tree structure. So we start off in this belief state here, which we’ll diagram like this. And then we do a suck action. And we end up in a …
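A looping branch of such a tree can be rendered as code: keep retrying the move until it takes effect. The 50% slip probability is an assumption for illustration; the point is that the execution trace can be unboundedly long yet succeeds with probability 1:

```python
import random

def slippery_right(loc):
    """Right action with slippery wheels: sometimes the robot stays put."""
    return "B" if random.random() < 0.5 else loc   # 50% slip rate (assumed)

def execute(loc, seed=0):
    """[while not at B: Right] -- a loop in place of a straight-line plan."""
    random.seed(seed)
    steps = 0
    while loc != "B":
        loc = slippery_right(loc)
        steps += 1
    return loc, steps

print(execute("A"))
```

Any fixed number of Right actions can fail, but the loop terminates with certainty in the limit, which is exactly why the tree notation needs loops.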

## 8 – Stochastic Environment Problem Solution

And the answer is that any plan that would work in the deterministic world might work in the stochastic world, if everything works out okay. And all of these plans meet that criterion. But no finite plan is guaranteed to always work. Because a successful plan has to include at least one move action. …

## 7 – Stochastic Environment Problem

Now let’s move on to stochastic environments. Let’s consider a robot that has slippery wheels, so that sometimes when you make a movement, a left or a right action, the wheels slip and you stay in the same location. And sometimes they work and you arrive where you expected to go. And let’s assume that …
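In a stochastic environment the result of an action is a set of possible successor states rather than a single state. A sketch for the slippery-wheels vacuum (Suck is treated as reliable here, which is an assumption, since the excerpt cuts off before saying):

```python
# Stochastic result function: an action maps a state to a SET of outcomes.
# A state is (location, frozenset of dirty squares); locations are "A", "B".

def results(state, action):
    loc, dirt = state
    if action in ("Left", "Right"):
        target = "A" if action == "Left" else "B"
        return {(target, dirt), (loc, dirt)}       # arrived, or wheels slipped
    if action == "Suck":
        return {(loc, dirt - frozenset({loc}))}    # assumed reliable
    raise ValueError(action)

print(results(("A", frozenset({"A", "B"})), "Right"))
```

A plan must now account for every member of the returned set, which is why no finite straight-line plan can be guaranteed to work.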

## 6 – Partially Observable Vacuum Cleaner Example

We’ve been considering sensorless planning in a deterministic world. Now, I want to turn our attention to partially observable planning, but still in a deterministic world. Suppose we have what’s called local sensing. That is, our vacuum can see what location it is in, and it can see what’s going on in the current location, …

## 5 – Sensorless Vacuum Cleaner Problem Solution

The answer is that the state of knowing that your current square is clean corresponds to this state, this belief state with four possible world states. If I then execute the right action followed by the suck action, then I end up in this belief state, and that satisfies the goal. I know I’m in …

## 4 – Sensorless Vacuum Cleaner Problem

This is the belief state space for the sensorless vacuum problem. So we started off here. We drew the circle around this belief state, so we don’t know anything about where we are. But the amazing thing is if we execute actions, we can gain knowledge about the world even without sensing. So let’s say …
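The knowledge-gaining effect is easy to see in code: starting from the fully ignorant belief state (all eight world states), applying Right leaves only the four states in which the robot is at B. A sketch, using (location, frozenset-of-dirty-squares) as an illustrative state encoding:

```python
def result(state, action):
    """Deterministic vacuum world: Left/Right move, Suck cleans here."""
    loc, dirt = state
    if action == "Left":  loc = "A"
    if action == "Right": loc = "B"
    if action == "Suck":  dirt = dirt - frozenset({loc})
    return (loc, dirt)

def predict(belief, action):
    """Image of a belief state under an action (no sensing involved)."""
    return {result(s, action) for s in belief}

all_states = {(loc, frozenset(d))
              for loc in "AB"
              for d in [(), ("A",), ("B",), ("A", "B")]}
after_right = predict(all_states, "Right")
print(len(all_states), len(after_right))   # 8 4
```

Without ever sensing, the action has halved the uncertainty: the robot now knows its location, though not where the dirt is.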

## 3 – Vacuum Cleaner Example

Here’s a state space diagram for a simple problem. It involves a room with two locations, the left we call A and the right we call B. And in that environment there’s a vacuum cleaner, and there may or may not be dirt in either of the two locations. And so that gives us eight …

## 2 – Planning Vs Execution

Now why do we have to interleave planning and execution? Mostly because of properties of the environment that make it difficult to deal with. The most important one is if the environment is stochastic, that is, if we don’t know for sure what an action is going to do. If we know what everything …

## 14 – Tracking The Predict Update Cycle

Here’s an example of tracking the predict-update cycle. And this is in a world in which the actions are guaranteed to work as advertised. That is, if you suck, you clean up the current location, and if you move right or left, the wheels actually turn and you do move. But, we can call …
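The predict-update cycle for a deterministic world with local sensing can be sketched directly: predict pushes every state in the belief through the action, update keeps only the states consistent with the observed percept. The state encoding and the particular belief below are illustrative:

```python
def result(state, action):
    """Deterministic vacuum world: Left/Right move, Suck cleans here."""
    loc, dirt = state
    if action == "Left":  loc = "A"
    if action == "Right": loc = "B"
    if action == "Suck":  dirt = dirt - frozenset({loc})
    return (loc, dirt)

def predict(belief, action):
    """Predict step: image of the belief state under the action."""
    return {result(s, action) for s in belief}

def percept(state):
    """Local sensing: the robot sees only its own square."""
    loc, dirt = state
    return (loc, "Dirty" if loc in dirt else "Clean")

def update(belief, observed):
    """Update step: keep only states consistent with the percept."""
    return {s for s in belief if percept(s) == observed}

# At A, A is dirty, B unknown -- two possible worlds.
belief = {("A", frozenset({"A"})), ("A", frozenset({"A", "B"}))}
belief = predict(belief, "Right")         # both worlds: now at B
belief = update(belief, ("B", "Clean"))   # percept rules out Dirty(B)
print(belief)                             # {('B', frozenset({'A'}))}
```

Prediction can only grow or preserve the belief state; it is the percept in the update step that shrinks it back down.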

## 13 – Problem Solving Via Mathematical Notation

Now some people like manipulating trees. And some people like a more sort of formal mathematical notation. So if you’re one of those, I’m going to give you another way to think about whether or not we have a solution. And let’s start with the problem solving where a plan consists of a straight line … Read more
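For the straight-line case the excerpt starts with, the math amounts to lifting the result function from single actions to sequences; a hedged rendering:

```latex
Result(s, [\;]) = s, \qquad
Result(s, [a \mid rest]) = Result\big(Result(s, a), rest\big)
```

and a plan $p$ is then a solution just when $Result(s_0, p)$ is a goal state.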