The first step in ORB feature detection is to find the key points in an image, which is done by the FAST algorithm. FAST stands for Features from Accelerated Segment Test, and it quickly selects key points by comparing the brightness levels in a given pixel area. Given a pixel in an image, which I'll call p, FAST compares the brightness of p to a set of 16 surrounding pixels that lie in a small circle around p. Each pixel in this circle is then sorted into one of three classes: brighter than p, darker than p, or similar to p.

I refer to the brightness of a pixel as Ip, which you can think of as the intensity of pixel p. So, if the brightness of a pixel is Ip, then for a given threshold h, brighter pixels will be those whose brightness exceeds Ip plus h, darker pixels will be those whose brightness is below Ip minus h, and similar pixels will be those whose brightness lies in between those values. Once the pixels are classified, pixel p is selected as a key point if more than eight connected pixels on the circle are either darker or brighter than p.

The reason FAST is so efficient is that it takes advantage of the fact that the same result can be achieved by comparing p to only four equidistant pixels on the circle, instead of all 16 surrounding pixels. For example, we only have to compare p to pixels 1, 5, 9, and 13. In this case, p is selected as a key point if there is at least one pair of consecutive pixels among these four that are either brighter or darker than p. This optimization reduces the time required to search an entire image for key points by a factor of four.

But what kind of information are these key points providing us? What's so meaningful about comparing the brightness of neighboring pixels? Well, let's look at some of the key points found by FAST on this image of a cat. There are key points at the edge of the eye, and another group of key points at the edge of the nose. As we can see, the key points are located in regions where there is a change in intensity.
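To make the brightness test concrete, here is a minimal Python sketch of the classification and the four-pixel shortcut. The helper names, the circle intensities, and the threshold are all made up for illustration; this is a sketch of the idea, not a full FAST implementation.

```python
# Sketch of FAST's brightness test on one candidate pixel p.
# Ip is the intensity of p; h is the threshold; circle holds the
# intensities of the 16 surrounding pixels (all values are made up).

def classify(circle, Ip, h):
    """Label each circle pixel: 'B' (brighter), 'D' (darker), 'S' (similar)."""
    labels = []
    for I in circle:
        if I > Ip + h:
            labels.append('B')
        elif I < Ip - h:
            labels.append('D')
        else:
            labels.append('S')
    return labels

def quick_test(circle, Ip, h):
    """High-speed shortcut: probe only pixels 1, 5, 9, 13 (indices 0, 4, 8, 12)."""
    probes = [circle[i] for i in (0, 4, 8, 12)]
    labels = classify(probes, Ip, h)
    # Keep p as a candidate if some pair of consecutive probes is
    # brighter-brighter or darker-darker (the circle wraps around).
    pairs = zip(labels, labels[1:] + labels[:1])
    return any(a == b and a in ('B', 'D') for a, b in pairs)

Ip, h = 100, 20
circle = [150, 148, 151, 149, 147, 150, 152, 148,  # a brighter arc
          101,  99, 105,  95, 100, 103,  98, 102]  # a similar arc
print(classify(circle, Ip, h).count('B'))  # 8 contiguous brighter pixels
print(quick_test(circle, Ip, h))           # True: probes 1 and 5 are both brighter
```

Note that the shortcut only probes four pixels, which is where the factor-of-four speedup in the search comes from.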
Such regions usually indicate an edge of some kind, like in the cat's paws. Edges define the boundaries of the cat, and the boundaries of its facial components, and so these key points give us a way to identify this cat, as opposed to any other object or background in the image. So, the key points found by FAST give us information about the location of object-defining edges in an image.

However, one thing to note is that these key points only give us the location of an edge, and don't include any information about the direction of the change of intensity. So, we cannot distinguish between horizontal and vertical edges, for example. And we'll see later that this directionality can be useful in some cases. Now that we know how ORB uses FAST to locate the key points in an image, let's take a look at how ORB uses the BRIEF algorithm to convert these key points into feature vectors.
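As a preview of that next step, here is a toy sketch of a BRIEF-style binary descriptor. The patch size, number of bits, and random sampling pattern are assumptions for illustration only; real BRIEF smooths the image first and uses a fixed pattern with far more comparisons.

```python
# Toy BRIEF-style descriptor: a key point becomes a binary vector by
# comparing random pairs of intensities in a patch around it.
import numpy as np

rng = np.random.default_rng(0)
PATCH = 9        # patch half-size around the key point (assumed value)
N_BITS = 32      # real BRIEF typically uses 128-512 bits
# Random point pairs (x1, y1, x2, y2) with offsets inside the patch.
pairs = rng.integers(-PATCH, PATCH + 1, size=(N_BITS, 4))

def brief_descriptor(image, kp):
    """Bit i is 1 if the intensity at the pair's first point is
    less than the intensity at its second point."""
    y, x = kp
    bits = []
    for x1, y1, x2, y2 in pairs:
        bits.append(1 if image[y + y1, x + x1] < image[y + y2, x + x2] else 0)
    return np.array(bits, dtype=np.uint8)

def hamming(d1, d2):
    """Binary descriptors are compared by Hamming distance
    (the number of differing bits)."""
    return int(np.sum(d1 != d2))

image = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
d1 = brief_descriptor(image, (30, 30))
d2 = brief_descriptor(image, (30, 30))
print(hamming(d1, d2))  # 0: the same key point yields the same descriptor
```

Because the descriptor is binary, matching key points between images reduces to cheap Hamming-distance comparisons, which is part of what makes ORB fast in practice.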