# 4 – Low-pass Filters

Earlier you saw an example of noise in an image of San Francisco City Hall. This noise is generally seen as speckles or discoloration in an image, and it doesn’t contain any useful information. It can even interfere with processing steps such as edge detection, since high-pass filters can amplify noise if it’s not removed first. The most common way to remove noise is with a low-pass filter. These filters block certain high-frequency content and effectively blur or smooth the appearance of an image, which reduces high-frequency noise. An example where this is very useful is in medical images, which typically have noise produced either by the imaging machinery or by a moving human subject.

Let’s take a closer look at this cross-sectional image of a human brain. We can clearly see the outline of the skull and brain, but we also see a lot of salt-and-pepper speckle, and this is high-frequency noise. You can imagine that if we applied a high-pass filter with the goal of detecting edges, we would detect and amplify a lot of this spotty noise. We can reduce this noise by taking a kind of average between neighboring pixels so that there are not such big jumps in intensity, especially in small areas. This averaging of pixels in space is equivalent to implementing a low-pass filter that blocks high-frequency noise.

Let’s see an example of a common kernel that does this. The first and simplest is the averaging filter. It’s a three-by-three kernel that weights the center pixel and its surrounding pixels all the same. Low-pass filters typically take an average, not a difference as high-pass filters do, so their components should all add up to one. This preserves the image brightness and makes sure the image doesn’t get brighter or darker overall. But we can see that the components of this kernel add up to nine, so we need to normalize by dividing every value in the kernel by nine. Then the total sum becomes one.
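To make the normalization step concrete, here is a minimal sketch in Python with NumPy (my choice of library here, not something specified in the lesson) that builds the three-by-three averaging kernel and scales it so its weights sum to one:

```python
import numpy as np

# 3x3 averaging kernel: every weight is the same (all ones)
kernel = np.ones((3, 3))
print(kernel.sum())  # sums to 9, so it would brighten the image

# normalize by dividing by the total weight so the sum becomes 1,
# preserving the overall image brightness
kernel = kernel / kernel.sum()
print(kernel)
```

Each entry of the normalized kernel is 1/9, so convolving with it replaces every pixel by the average of its three-by-three neighborhood.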
Now let’s go back to our main image and see what a convolution looks like between the image F(x, y) and the kernel K. Here’s a zoomed-in portion of the image. To perform convolution, we place our three-by-three kernel on top of each pixel in the image. I’ll choose this dark one with a value of 40 as our center pixel. Then, looking at all the values in this three-by-three square, we multiply them in pairs, using the weights in our kernel to scale each pixel value. In this case, we’re multiplying all the values by one and summing them up. Our last step is dividing by nine to normalize, which gives a corresponding output pixel value of 85. We can see that this is just an average of the center pixel and its surrounding neighbor pixels. Since the surrounding pixels are mostly brighter than the center pixel, the new output pixel value is brighter too. If we do this with all the pixels in the image, we’ll get an averaged, smoothed-out image with fewer abrupt changes in intensity. This can be useful for blurring out noise or making a background area within a certain intensity range look more uniform. In fact, this sort of filter is even used in Photoshop to soften and blur parts of an image. Next, let’s see how to do this in code.