So, what does it look like when you apply optical flow not just to a single point, but to a set of points in a video? Take this small image of a robot, which moves to the right and then down and to the right over the next two image frames. The goal of optical flow is, for each image frame, to compute approximate motion vectors based on how the image intensity, the patterns of dark and light pixels, has changed over time. The first step is to find matching feature points between the two images, using a method like corner detection that looks for distinctive patterns of intensity. In this case, we might detect some endpoints and perhaps the sensors on the robot. Then optical flow calculates a motion vector (u, v) for each key point in the first image frame that points to where that key point can be found in the next image. These motion vectors point to the right between frames one and two, and then down and to the right between frames two and three. This is what optical flow looks like for a set of points, and you can calculate this flow frame by frame until you build up the path of an object over time. You can imagine using this data to measure the velocity of that object. You can also apply this technique to every pixel in an image to create a dense field of motion vectors. Optical flow is used in a variety of applications, from slow-motion graphics to autonomous vehicle navigation, and it will be especially useful to keep these motion vectors in mind as we approach the task of locating a robot as it moves through an environment.
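The per-keypoint flow computation described here can be sketched with a minimal Lucas-Kanade estimator, a standard way to compute sparse optical flow. This is an illustrative sketch, not the exact method from the video: the function name, the window size, and the use of NumPy least squares are all assumptions. It relies on the brightness-constancy assumption, Ix·u + Iy·v + It ≈ 0, solved in a small window around each key point.

```python
import numpy as np

def lucas_kanade(frame1, frame2, keypoints, window=5):
    """Estimate a motion vector (u, v) for each keypoint between two frames.

    Minimal Lucas-Kanade sketch: assumes small motion and roughly constant
    flow inside each window. frame1/frame2 are 2-D float intensity arrays;
    keypoints is a list of (row, col) positions away from the image border.
    """
    # Spatial gradients (per axis) and the temporal intensity difference.
    Iy, Ix = np.gradient(frame1)
    It = frame2 - frame1
    half = window // 2
    flows = []
    for r, c in keypoints:
        # Gather gradient samples in the window around the key point.
        ix = Ix[r - half:r + half + 1, c - half:c + half + 1].ravel()
        iy = Iy[r - half:r + half + 1, c - half:c + half + 1].ravel()
        it = It[r - half:r + half + 1, c - half:c + half + 1].ravel()
        # Brightness constancy gives ix*u + iy*v = -it at every pixel;
        # solve the overdetermined system in a least-squares sense.
        A = np.stack([ix, iy], axis=1)
        (u, v), *_ = np.linalg.lstsq(A, -it, rcond=None)
        flows.append((u, v))
    return flows
```

For example, if `frame2` is `frame1` shifted one pixel to the right, a key point on the moving pattern should yield a motion vector of roughly (1, 0), pointing to the right, just like the vectors between frames one and two above.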