Shelly, is there any idea of reducing the randomness over time in the genetic algorithm, like, say, in simulated annealing? >> Well, not in the versions that we have shown so far. But the fitness function naturally reduces the randomness, as each generation gets closer to the solution. However, we could argue that the method could do better. …
But there is one problem: what if there’s a critical part of the solution that is in none of the parents? >> That could be an issue. We could also imagine that a critical piece might be in a board that never gets selected as a parent, like poor number 4 here, and that critical piece gets …
Here’s the answer. Note that the children are worse than some of the parents. In later generations, we might continue to see this, but with the mutation step, we might get closer to the goal. Without the mutation step, we run the risk of never actually reaching the goal.
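Mutation can be sketched as a small random perturbation applied to each child. Here is a minimal sketch using the digit-string board encoding from this lecture; the per-column mutation rate is our own choice, not a number from the video:

```python
import random

def mutate(board: str, rate: float = 0.1) -> str:
    """With probability `rate` per column, move that column's queen
    to a random row (rows are the digits 1-8 in the string encoding)."""
    rows = list(board)
    for col in range(len(rows)):
        if random.random() < rate:
            rows[col] = str(random.randint(1, 8))
    return "".join(rows)

random.seed(0)
print(mutate("32752411"))
```

Because mutation can place a queen on a row that appears in no current parent, it is the step that keeps the missing critical piece reachable.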
Try to apply what you’ve learned so far to complete this iteration of the genetic algorithm. First, fill in the fitness value, the number of non-attacking pairs of queens, for each board state in the initial population. Then, choose parents according to their fitness scores and fill them in this column. Order them …
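The fitness computation follows directly from the encoding: count the pairs of queens that share a row or a diagonal, and subtract from the total number of pairs. A minimal sketch, using the parent string 32752411 that appears later in the lecture as the example input:

```python
def fitness(board: str) -> int:
    """Number of non-attacking pairs of queens; the maximum is 28 for 8 queens.

    board[c] is the row (a digit) of the queen in column c.
    """
    rows = [int(ch) for ch in board]
    n = len(rows)
    attacking = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_row = rows[i] == rows[j]
            same_diagonal = abs(rows[i] - rows[j]) == j - i
            if same_row or same_diagonal:
                attacking += 1
    return n * (n - 1) // 2 - attacking  # 28 - attacking when n == 8

print(fitness("32752411"))  # 23: five pairs attack on this board
```

A solved board, with no attacking pairs, scores the full 28.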
Okay, let me handle this one. Given a parent 32752411, which corresponds to a board that looks like this. We’ve randomly selected the spot between the third and fourth columns to be the crossover point. For the first child, we’ll take the first part of the first parent, which we’ll mark in blue. Looking at …
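The splice described here is ordinary one-point crossover on the digit strings. A minimal sketch; the second parent string below is our own example for illustration, since only the first parent appears in this excerpt:

```python
def crossover(parent1: str, parent2: str, point: int) -> tuple[str, str]:
    """One-point crossover: keep each parent's head, swap their tails."""
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

# Crossover point between the third and fourth columns, as in the lecture.
child1, child2 = crossover("32752411", "24748552", point=3)
print(child1)  # 32748552
print(child2)  # 24752411
```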
Another specific we need to know before doing our genetic algorithm example is that there are 28 possible pairs of attacking queens on this eight-by-eight board. >> How did you figure that out? >> Well, we have eight queens, and we want to examine every possible pair that could attack each other. So …
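That count is just the number of unordered pairs you can choose from eight queens: 8 choose 2 = (8 × 7) / 2 = 28. A one-liner to confirm:

```python
from math import comb

# Every unordered pair of the 8 queens is a potential attacking pair.
print(comb(8, 2))  # 28
```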
Here is the answer.
Here’s another example board for 8-Queens. Give us a string that represents the position of each piece on this board.
We’re going to use the n-Queens problem again to talk about genetic algorithms, but before we do that, we need to create a convention to represent a given board. >> We’ll use the one in the book. Since there can only be one queen in each column, we can encode a board with the number …
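To make the digit-string convention concrete, here is a small sketch that draws a board from its string. Whether row 1 is the top or the bottom of the board is our assumption; this excerpt doesn’t say:

```python
def render(board: str) -> str:
    """Draw the board encoded by `board`, where board[c] is the row
    (1-8, counted from the bottom here -- an assumption) of the queen
    in column c."""
    n = len(board)
    return "\n".join(
        " ".join("Q" if int(ch) == row else "." for ch in board)
        for row in range(n, 0, -1)
    )

print(render("32752411"))
```

One queen per line of the string, one per column of the picture.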
Here is the version of the algorithm we’re going to use. Just like hill climbing, we’re going to iterate with simulated annealing, looking for points close to our current position that might have an improved value. However, we’re going to select our next position randomly from the points in the region near us. If a …
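The loop just described can be sketched as follows. The neighbor function and cooling schedule below are our own toy choices, not ones from the lecture; the acceptance rule, take any improvement, and take a worsening move with probability e^(Δ/T), is the standard simulated-annealing rule:

```python
import math
import random

def simulated_annealing(f, x0, neighbors, schedule, steps=10_000):
    """Maximize f from x0. neighbors(x) lists candidate moves; schedule(t)
    gives the temperature at step t. These names are ours, not the lecture's."""
    current = x0
    for t in range(steps):
        T = schedule(t)
        if T <= 0:
            break
        candidate = random.choice(neighbors(current))
        delta = f(candidate) - f(current)
        # Always accept an improvement; accept a worse point with
        # probability e^(delta / T), which shrinks as T cools.
        if delta > 0 or random.random() < math.exp(delta / T):
            current = candidate
    return current

# Toy run: maximize -(x - 3)^2 over the integers, starting far from the peak.
random.seed(0)
best = simulated_annealing(
    f=lambda x: -(x - 3) ** 2,
    x0=20,
    neighbors=lambda x: [x - 1, x + 1],
    schedule=lambda t: 0.98 ** t,
)
print(best)  # 3
```

Early on, when T is high, bad moves are often accepted, which lets the search escape local maxima; as T cools, the behavior approaches plain hill climbing.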
We’ve been trying to emphasize algorithms that are of particular importance. >> A* and alpha-beta pruning on the minimax algorithm are some examples. >> Now we’re going to introduce another key concept, simulated annealing. >> Hold on, simulated annealing? That implies that there’s real annealing too. >> That’s right, we’re going to steal …
Particle 1 starts at seven and, with a step size of two, it reaches the peak of 10 in one step. Particle 2 starts at eight; it goes toward the peak, but then it oscillates around the peak because the step size is too large. Particle 3 starts at 10, follows this upward gradient, but …
Now assume your algorithm has a step size of 2. It still stops when no positive gradient is found. What value would these particles return? Particle 1 starts at x equal to 7. Particle 2 starts at x equal to 8, and particle 3 starts at x equal to 10. Select the appropriate answer for …
Well then, why not just keep the step size large? >> A couple of reasons. First, with a really large step size, we can miss sharp hills completely. Going back to our first starting place: if my step size is this big, I would skip the hill I intended to take and would start going …
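Missing a sharp hill with a big step can be shown on a toy landscape. In this sketch (the landscape and its numbers are ours, not the one in the video), a narrow spike at x = 5 sits on a gentle slope that rises toward x = 0; a step of 1 lands on the spike, while a step of 2 jumps over it and follows the gentle slope away:

```python
def climb(f, x, step):
    """Greedy 1-D hill climbing with a fixed step size: move to whichever
    of x - step / x + step improves f, stop when neither does."""
    while True:
        best = max((x - step, x + step, x), key=f)
        if f(best) <= f(x):
            return x
        x = best

def landscape(x):
    spike = 10 - 20 * abs(x - 5)   # a sharp, narrow peak at x = 5
    slope = 3 - 0.1 * abs(x)       # a gentle slope rising toward x = 0
    return max(spike, slope)

print(climb(landscape, 4.0, step=1.0))  # 5.0 -- lands right on the spike
print(climb(landscape, 4.0, step=2.0))  # 0.0 -- steps over the spike entirely
```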
Does hill climbing have any other problems we should worry about? >> Suppose we start at the left edge of the graph. Using small steps, we gradually climb higher until we get to the shoulder here. However, if we stop when we no longer see a way to improve, we can get stuck at this …
Particle 1 starts at x=1 and follows a positive gradient up, but then it gets stuck at the shoulder where y=2. Particle 2 starts at x=8 and follows a positive gradient up, but it gets stuck at this local maximum at y=6. Particle 3 starts at x=9 and follows the positive gradient in the right …
We are going to consider multiple starting points for the hill climbing algorithm in this quiz. By convention, we’ll call these particles. For each of these particles, tell us the value it returns, assuming your algorithm has a step size of one and that it stops when no positive gradient is found.
This time I start here, and I go to the left because that is the positive gradient. And I’m going to get to the top of the global maximum. >> But what’s the chance that you’re going to get to the global maximum on your second attempt? >> Better than just with my first attempt, …
Suppose I start here. I look to the left and it goes down. I look to the right and it’s going up. So I’ll take a step in that direction. >> And what do you do when you reach the top? >> Well, we will get stuck, just like we did with the n-queens …
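The look-left, look-right loop just described can be sketched like this; the two-peak landscape is our own toy example, not the graph from the video:

```python
def hill_climb(f, x, step=1.0):
    """Move to whichever neighbor (x - step or x + step) improves f;
    stop when neither does -- possibly at a local maximum."""
    while True:
        best = max((x - step, x + step, x), key=f)
        if f(best) <= f(x):
            return x
        x = best

# Two peaks: a local maximum at x = 2 (height 4), the global one at x = 8 (height 9).
def f(x):
    return max(4 - (x - 2) ** 2, 9 - 0.5 * (x - 8) ** 2)

print(hill_climb(f, 0.0))   # 2.0 -- stuck on the nearer, lower peak
print(hill_climb(f, 10.0))  # 8.0 -- this start reaches the global maximum
```

Which peak you end up on depends entirely on where you start, which is exactly the getting-stuck problem described above.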
While we’re on this topic, I’d like to talk about local beam search, because we’ll use it later in the course. >> Okay, it’s your show, after all. >> In local beam search, instead of just using one position, which I’ll call a particle, just because that’s how I think of it, we’ll keep track of …
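Keeping the k best particles and regenerating the beam from all of their neighbors each round can be sketched like this (the function names and the toy landscape are ours, not the lecture’s):

```python
import heapq

def local_beam_search(f, starts, neighbors, k=3, iters=100):
    """Keep the k best particles; each round, pool every particle's
    neighbors with the particles themselves and keep the k best."""
    beam = heapq.nlargest(k, starts, key=f)
    for _ in range(iters):
        pool = {n for x in beam for n in neighbors(x)} | set(beam)
        new_beam = heapq.nlargest(k, pool, key=f)
        if new_beam == beam:   # nothing improved: the beam has converged
            return new_beam[0]
        beam = new_beam
    return beam[0]             # best particle found

# Toy landscape on the integers: a low peak at x = 2 and a high one at x = 8.
def f(x):
    return max(4 - abs(x - 2), 9 - abs(x - 8))

best = local_beam_search(f, starts=[0, 5, 12], neighbors=lambda x: [x - 1, x + 1])
print(best)  # 8 -- one of the particles finds the global maximum
```

Because the k particles share one pool of candidates, good regions attract more of the beam each round, unlike k fully independent hill climbers.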