Now we have created a complete classifier that takes in an RGB image and outputs a predicted label. The next step is to measure the accuracy of our model. The accuracy of any classification model is found by comparing predicted and true labels: if the predicted label matches the true label, the image is correctly classified; if not, it is misclassified. The accuracy is the number of correctly classified images divided by the total number of images. And since we are using a pretty simple brightness feature here, we shouldn't expect this classifier to be 100 percent accurate.

We will also be testing our classifier on new images. This is called a test set, and test data is previously unseen image data. The data you have seen, and that you use to help build a classifier, is called training data; the training data consists of labeled images like these. The idea in creating these two sets, training and test, is to have one set that you can analyze and learn from, and one that you can actually test your classifier on. You could imagine creating a classifier that classifies all of the training images correctly, but you actually want to build a classifier that recognizes general patterns in data, so that when it is faced with a real-world scenario, it will still work. So, we will load in a new test set of data to see how our classification model might work in the real world, and we'll use it to determine the accuracy of the model. We load in the test images and standardize them, and finally we shuffle them so that their order will not play a role in testing accuracy.

To determine the accuracy, I am going to iterate through this test data with a function called get_misclassified_images, and I will pass in my test images. I'll start with an empty list, misclassified_images_labels, and I'll iterate through each image in our test images. I'll extract the true data, the image and its true label, and I'll run our classification code to get a predicted label, using our estimate_label function and passing in the image. So, for any image, we are using our classification code to produce a predicted label for that image. Then, I'll compare the predicted label and the true label. If these match, the image is classified correctly. But if they do not match, the image has been misclassified, and I'll append the misclassified image, its predicted label, and its true label to the list misclassified_images_labels. Finally, I'll return this list of all our misclassified images and their labels.

Next, I'll do my accuracy calculations. First, I'll run this function, get_misclassified_images, on our standardized test list of images, and I'll store the misclassified images and labels in a list called MISCLASSIFIED. Then I'll store the total number of images by getting the length of our standardized test list. The number of correctly classified images will be this total minus the number of misclassified images. Finally, we can calculate the accuracy, which, as you recall, is the number of correctly classified images over the total number of images, and I'll print these stats out. If we run this, we get 0.925, or 92.5 percent accuracy. That is actually not bad. But with more features, I bet you could improve this algorithm.
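Here is a minimal sketch of that code, assuming the estimate_label function and the standardized test set from the earlier steps are already defined, and assuming (as I do here) that STANDARDIZED_TEST_LIST stores each item as an (image, label) pair:

```python
import random

# Shuffle the standardized test data so that ordering plays no role in accuracy
random.shuffle(STANDARDIZED_TEST_LIST)

def get_misclassified_images(test_images):
    # Track all images whose predicted label does not match the true label
    misclassified_images_labels = []

    # Iterate through each (image, true label) pair in the test data
    for image, true_label in test_images:
        # Use the classifier to produce a predicted label for this image
        predicted_label = estimate_label(image)

        # If the predicted and true labels disagree, record the mismatch
        if predicted_label != true_label:
            misclassified_images_labels.append((image, predicted_label, true_label))

    # Return the list of all misclassified images and their labels
    return misclassified_images_labels

# Find all misclassified images in the standardized test set
MISCLASSIFIED = get_misclassified_images(STANDARDIZED_TEST_LIST)

# Accuracy = correctly classified images / total images
total = len(STANDARDIZED_TEST_LIST)
num_correct = total - len(MISCLASSIFIED)
accuracy = num_correct / total

print('Accuracy: ' + str(accuracy))
print('Misclassified: ' + str(len(MISCLASSIFIED)) + ' out of ' + str(total))
```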
And to see how to improve, it is useful to take a look at the misclassified images and what they were mistakenly labeled as. It will be up to you to look at these images and think about how to improve the classification model.
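As a minimal sketch of that inspection step, assuming matplotlib is available and MISCLASSIFIED holds the (image, predicted_label, true_label) entries returned by the function above:

```python
import matplotlib.pyplot as plt

# Display the first misclassified image with its predicted and true labels,
# so you can look for patterns the brightness feature misses
image, predicted_label, true_label = MISCLASSIFIED[0]
plt.imshow(image)
plt.title('Predicted: ' + str(predicted_label) + ', True: ' + str(true_label))
plt.show()
```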