Your first steps in building a day and night image classifier are to visualize the input images and standardize them to be the same size. To do that, we imported our usual resources and loaded the image datasets, and we created a standardized list of all the images and their labels. Finally, we could visualize the standardized data. Here, I’m selecting the first image in our standardized list and its label. I then display the selected image and some information about it. You can see its dimensions and its label, one for day.

Now we’re ready to take all of these images and start separating them into two classes, day and night. We’ll separate day and night images based on their average brightness. This will be a single value, and day images should have a higher average brightness than night images. To calculate the average brightness of an image, we’ll be using HSV colorspace. Specifically, we’ll use the value channel, which is a measure of brightness, and add up the pixel values in the value channel. Then we’ll divide that sum by the area of the image to get the average value of the image. So, first, I’ll convert a test image to HSV colorspace. I want to look at a couple of day images and a couple of night images to find the differences between the two. In this example, I’m plotting the ‘H,’ ‘S,’ and ‘V’ channels individually. So here’s a day image and the color channels ‘H,’ ‘S,’ and ‘V.’ We can see the ‘V’ channel is especially high in the sky, and this classification is based on the assumption that a day sky is brighter than a night sky. So our next step will be to find the average brightness using the value channel.

Now, I’m going to define a function for finding the average value of an image. This function, “average brightness,” will take in an rgb_image. The first step will be to convert it to HSV colorspace. Next, I want to add up all of the pixel values in the value channel, and I’ll do that using NumPy’s sum function, which takes in the ‘V’ channel of our HSV image and adds up all of its pixel values. Then I’ll calculate the area of my image, which I know is 600 by 1100, since we standardized each image. To find the average brightness of the image, we divide this brightness sum by the area of the image, and the function returns that average. So this gives us one value: the average brightness, or the average value, of the image.

Our next step will be to look at day and night images and their average brightness values. The goal is to see if we can find a value that clearly separates day and night images. So let’s first look at our standardized image number zero, which we know is a day image. We see that its average brightness is around 175. Now, let’s look at a night image. This is a pretty dark image, and its average brightness is just about 35. We want to look at a variety of these images. Here’s another day image, and its average brightness is around 143. Now, with just a couple of these values in mind, you might be thinking about how you can use the average brightness to predict a label for each image, zero for night or one for day. It will be up to you to find that threshold. The next step will be to feed this data into a classifier, which might be as simple as a conditional statement that checks whether the average brightness is above some threshold that you define. This average brightness value is considered a feature.
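To make the steps above concrete, here is a minimal sketch of the average brightness function and the simple threshold classifier just described, assuming OpenCV and NumPy are available and that images have been standardized to 600 by 1100 RGB. The function names `avg_brightness` and `estimate_label` and the threshold value of 100 are illustrative placeholders, not values prescribed by the lesson.

```python
import cv2
import numpy as np

def avg_brightness(rgb_image):
    """Return the average brightness (mean V-channel value) of a standardized RGB image."""
    # Convert the RGB image to HSV colorspace
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)

    # Add up all of the pixel values in the Value (brightness) channel
    sum_brightness = np.sum(hsv[:, :, 2])

    # Area of a standardized image (assumed 600 x 1100, as in the walkthrough)
    area = 600 * 1100

    # Average brightness = total brightness / number of pixels
    return sum_brightness / area


def estimate_label(rgb_image, threshold=100):
    """Predict 1 (day) or 0 (night) by comparing average brightness to a threshold.

    The threshold of 100 is only a placeholder; it is up to you to find a value
    that cleanly separates your day and night images.
    """
    return 1 if avg_brightness(rgb_image) > threshold else 0
```

Used this way, the classifier is just a conditional statement on a single feature value, which is why finding a good threshold from the example brightness values (around 175 and 143 for day, around 35 for night) matters more than the code itself.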
A feature is just a measurable component of an image that, ideally, helps distinguish it from other images. We’ll soon learn more about testing the accuracy of a model like this. Next, we’ll learn a bit more about features and why they are useful for self-driving cars.