Now you’ve seen an example of an image broken down into a 2D grid of grayscale pixel values with a width and a height, but color images are a little different. Color images are interpreted as 3D cubes of values with width, height, and depth, where the depth is the number of color channels. Most color images can be represented by combinations of just three colors: red, green, and blue. These are known as RGB images, and for RGB images the depth is three. It’s helpful to think of the depth as three stacked 2D color layers: one red, one green, and one blue. Together, these layers form a complete color image.

Now, color images contain more information than grayscale images, which can add unnecessary complexity and take up more space in memory. However, color is also really useful for certain classification tasks. For example, say you want to classify lane lines in this image of a road. One of these lines is yellow and one is white, but which is which? You might see a slight difference in the grayscale intensity of the lane lines, but the difference is small and it varies under different lighting conditions. So this grayscale image does not provide enough information to distinguish between the yellow and white lane lines. Let’s look at the color image for comparison. Here we can clearly see the difference between the white and yellow lane lines, and so we can tell the machine to recognize this difference too. Because this identification task depends on color, it’s important that we work with color images.

In general, when you think of a computer vision application like identifying lane lines, cars, or people, you can decide whether color information and color images are useful by thinking about your own vision. If the identification problem is easier for us humans to solve in color, it’s likely easier for an algorithm too.
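To make the 3D-cube idea concrete, here is a minimal Python sketch using NumPy and Matplotlib. The filename and pixel coordinates are made up for illustration; it reads in a color image, inspects its height-by-width-by-depth shape, slices out the three RGB layers, and shows why a yellow pixel and a white pixel can land on very similar grayscale values.

```python
import matplotlib.image as mpimg

# Read in a color image (the filename is a placeholder; substitute your own).
image = mpimg.imread('road.jpg')

# A color image is a 3D array: height x width x depth.
# For an RGB image the depth is 3 -- one 2D layer per color channel.
print('Image shape:', image.shape)   # e.g. (540, 960, 3)

# Isolate each 2D color layer by slicing along the depth axis.
red = image[:, :, 0]
green = image[:, :, 1]
blue = image[:, :, 2]

# A simple grayscale conversion: average the three channels.
gray = image.mean(axis=2)

# Compare one pixel from each lane line (coordinates are hypothetical).
# Yellow is high R, high G, low B; white is high R, G, and B -- their
# channel averages can be close, which is why grayscale struggles here.
yellow_pixel = image[400, 200]
white_pixel = image[400, 700]
print('Yellow lane RGB:', yellow_pixel, '-> gray:', yellow_pixel.mean())
print('White lane RGB:', white_pixel, '-> gray:', white_pixel.mean())
```

Running this on an actual road image would show two RGB values that differ clearly in the blue channel but produce similar grayscale averages, which is the numerical version of the yellow-versus-white problem described above.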