When we built an edge detector, we looked at the difference in intensity between neighboring pixels, and an edge was detected if there was a big, abrupt change in intensity in any one direction: up or down, left or right, or diagonal. Recall that the change in intensity in an image is also referred to as the image gradient, and we can detect corners by relying on these gradient measurements as well. We know that corners are the intersection of two edges, and we can detect them by taking a window, generally a square area that contains a group of pixels, and looking for places where the gradient is high in all directions. Each of these gradient measurements has an associated magnitude, which is a measurement of the strength of the gradient, and a direction, which is the direction of the change in intensity. Both of these values can be calculated with Sobel operators.

Sobel operators take the intensity change, or gradient, of an image in the x and y directions separately. Here I've pictured those gradients for our mountain image. I'll call these G_x and G_y, G for gradient. You may notice that these look a little different than our earlier convolutions because they haven't been turned into binary thresholded images, and in this case that's what we want. Then we need to get the magnitude and direction of the total gradient from these two values, and to do that we convert these values from image-space x-y coordinates to polar coordinates, with a magnitude rho and a direction theta. This may seem familiar from the Hough transform. At any pixel location, you can think of G_x and G_y as the lengths of two sides of a gradient triangle: G_x is the length of the bottom side, and G_y the length of the right side. The total magnitude rho of this gradient is then the diagonal of this triangle, which is the square root of the sum of the squares of these two gradients, and the direction theta of the gradient is calculated as the inverse tangent of G_y over G_x. The resulting gradient magnitude image should look something like this, with the biggest gradients corresponding to the brightest lines.

Now, what many corner detectors do is take a window and shift it over areas in a gradient image, moving this small window up and down and left and right. When there is a corner and we shift the window slightly, there's a big variation in the direction and magnitude of the calculated gradients, and this large variation identifies a corner. I'll be walking through coding a simple corner detector that takes advantage of this knowledge and finds corners by identifying locations with the largest variation in gradient for a shifting window.
Here I read in an image of a chessboard at an angle, copying it and converting it to RGB color space as usual. I've chosen this example because it will make it easy to see whether we've implemented corner detection accurately. Corner detection relies on changes in intensity, so I'll first convert this image to grayscale, and I'll then convert these to the floating-point values that the Harris corner detector will use. Next, I'll create a corner detector called a Harris corner detector using the OpenCV function cornerHarris. This takes in the grayscale float values, followed by the size of the neighborhood to look at when identifying potential corners; two means a two-by-two pixel square, and since the corners are well marked in this example, a small window like this will work well. Then it takes in the size of the Sobel operator: three, which is our typical kernel size. And lastly, it takes a constant value that helps determine which points are considered corners. A value of 0.04 is typical; a slightly lower value for this constant will result in more corners detected. This produces an output image I'll call dst, for destination. This image should have the corners marked as bright points and non-corners as darker pixels. Let's plot what this looks like.

In this image, it's actually very hard to see the bright corner points, so I'll perform one more operation on these corners, which will be to dilate them. To do this, I'll use OpenCV's dilate function and apply it to our detected corners. In computer vision, dilation enlarges bright regions, or regions in the foreground, like these corners, so that we'll be able to see them better, and I'll display this dilated result. Now you can see the corners fairly well as bright points in the image.

The last couple of steps will be to select and display the strongest corners. To select the strongest corners, I'll define a threshold value for them to pass. This threshold will vary depending on the image, but in this case I'll use a low threshold: at least one tenth of the maximum corner detection value. Next, code to display the corners. I'll first create a copy of our image to draw the corners on, and if a corner is larger than our defined threshold, I'll draw it on the image. Here I'm using OpenCV to draw a small green circle on each strong corner on our image copy, and I'll display the result. If we zoom in, you can see that most of our corners were detected. We're actually missing a couple right here and here, so we might want to change our threshold value. Let's lower it to just 1 percent of the maximum value of our corners and plot it again. Now you can see that all the corners on the chessboard are detected.

It's actually pretty interesting to see where these green circles appear. For example, there's no corner detected at the very bottom-right corner of the board because there's no change in intensity; both the board and the background are white. But at every black-and-white intersection point we detect a corner. You could imagine using these corner points to get information about the chessboard dimensions, or using a subset of these points to perform a perspective transformation. Corners alone can be useful for many types of analysis and geometric transformations.