In image processing, filters are used to remove unwanted or irrelevant information in an image, or to amplify features like object boundaries or other distinguishing traits. High-pass filters are used to make an image appear sharper and to enhance the high-frequency parts of an image, which are areas where the intensity of neighboring pixels changes rapidly, like from very dark to very light pixels. Since we’re looking at patterns of intensity, the filters we’ll be working with will operate on grayscale images, which represent this information and display patterns of lightness and darkness in a simple format.

Let’s take a closer look at this panda image as an example. What do you think will happen if we apply a high-pass filter? Well, where there is little or no change in intensity in the original picture, such as in these large areas of dark and light, a high-pass filter will turn those pixels black. But in areas where a pixel is much brighter than its immediate neighbors, the high-pass filter will enhance that change and create a line. You can see that this has the effect of emphasizing edges. Edges are just areas in an image where the intensity changes very quickly, and these edges often indicate object boundaries.

Now, let’s see how exactly a filter like this works. The filters I’ll be talking about are in the form of matrices, often called convolution kernels, which are just grids of numbers that modify an image. Here’s an example of a high-pass filter that does edge detection. It’s a three-by-three kernel whose elements all sum to zero. For edge detection, it’s important that all of the elements sum to zero, because this filter is computing the difference, or change, between neighboring pixels. Differences are calculated by subtracting pixel values from one another.
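A kernel like this can be written down as a small array. Here is a minimal sketch in NumPy (the library choice is mine; the kernel values match the walkthrough that follows: four in the center, negative one on the sides, zero in the corners):

```python
import numpy as np

# A 3x3 high-pass, edge-detection kernel. Its elements sum to zero,
# so flat (constant-intensity) regions of an image filter to zero.
kernel = np.array([[ 0, -1,  0],
                   [-1,  4, -1],
                   [ 0, -1,  0]])

print(kernel.sum())  # 0
```

If the sum were positive or negative instead of zero, every filtered pixel would be biased brighter or darker, which is exactly the weighting problem described next.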
In this case, we subtract the values of the pixels that surround a center pixel. If these kernel values did not add up to zero, the calculated difference would be positively or negatively weighted, which would have the effect of brightening or darkening the entire filtered image, respectively. To apply this filter, an input image F(x, y) is convolved with this kernel, which I’ll call k. This is called kernel convolution, and convolution is represented by an asterisk, not to be mistaken for multiplication. Kernel convolution is an important operation in computer vision applications, and it’s the basis for convolutional neural networks. It involves taking a kernel, which is our small grid of numbers, and passing it over an image pixel by pixel, transforming the image based on what these numbers are. We’ll see that by changing these numbers, we can create many different effects, from edge detection to blurring an image.

I’ll walk through an example using this three-by-three edge detection filter. To better see the pixel operations, I’ll zoom in on this panda, right by its ear, to see the grayscale pixel values. First, for every pixel in this grayscale image, we put our kernel over it so that the pixel is in the center of the kernel; I’m just choosing this pixel as an example. Then we look at the three-by-three grid of pixels centered around that one pixel. We then take the numbers in our kernel and multiply each one by its corresponding pixel value. So this pixel in the top-left corner, 120, is multiplied by the kernel value in that corner, zero. Next to that, we multiply the value 140 by negative one, and next to that, another 120 by zero. We do that for all nine pixel-kernel value pairs. Notice that the center pixel, with a value of 220, will be multiplied by four, the center kernel value. Finally, these values are all summed up to get a new pixel value, 60.
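That single step, multiply each kernel value by the pixel underneath it, then sum, is just an element-wise product followed by a sum. A sketch of that one step: the corner and top values (120, 140, 220) are from the example above, but the remaining neighbors were not shown, so the values below are hypothetical ones I chose so the result comes out to 60 as in the walkthrough.

```python
import numpy as np

kernel = np.array([[ 0, -1,  0],
                   [-1,  4, -1],
                   [ 0, -1,  0]])

# 3x3 grayscale patch under the kernel. Top row from the example;
# the middle-side and bottom values are hypothetical stand-ins.
patch = np.array([[120, 140, 120],
                  [225, 220, 225],
                  [230, 230, 230]])

# Element-wise multiply the nine pixel-kernel pairs, then sum them.
new_pixel = np.sum(kernel * patch)
print(new_pixel)  # 60
```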
This value means a very small edge has been detected, which we can see by looking at this three-by-three area in the image: it changes from light at the bottom to a little darker on top, but it changes very gradually. The multipliers in our kernel are often called weights because they determine how important, or how weighty, a pixel is in forming a new output image. In this case, for edge detection, the center pixel is the most important, followed by its closest pixels on the top and bottom and to its left and right, which are given negative weights that increase the contrast in the image. The corners are the farthest away from the center pixel, and in this example, we don’t give them any weight.

This weighted sum becomes the value for the pixel at the same location (x, y) in the output image, and you do this for every pixel position in the original image until you have a complete output image that’s about the same size as the input image, with new, filtered pixel values. The only thing you need to consider, other than this weighted sum, is what to do at the edges and corners of your image, since the kernel cannot be nicely laid over three-by-three pixel values everywhere. Next, let’s get a little more practice with these kinds of high-pass filters, then get into coding our own.
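The full process, sliding the kernel over every pixel position and handling the borders, can be sketched as a short function. This is a minimal, slow reference implementation, not the optimized routine a library would use; it handles the image edges by zero-padding the border, which is one common choice (libraries like OpenCV’s cv2.filter2D offer several others). Note that because this kernel is symmetric, sliding it directly gives the same result as a true convolution, which would flip the kernel first.

```python
import numpy as np

def filter_image(image, kernel):
    """Slide a kernel over a grayscale image, one pixel at a time.

    Border pixels are handled by zero-padding the image, so the
    output has the same shape as the input."""
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(image, ((pad_h, pad_h), (pad_w, pad_w)))
    out = np.zeros_like(image, dtype=np.int64)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            # Weighted sum of the 3x3 neighborhood around (y, x).
            patch = padded[y:y + kh, x:x + kw]
            out[y, x] = np.sum(kernel * patch)
    return out

edge_kernel = np.array([[ 0, -1,  0],
                        [-1,  4, -1],
                        [ 0, -1,  0]])

# Toy image: dark on the left, bright on the right, with a sharp
# vertical boundary between them.
img = np.zeros((5, 6), dtype=np.int64)
img[:, 3:] = 200

filtered = filter_image(img, edge_kernel)
# Flat regions come out zero; the boundary column lights up.
```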