Several approaches for understanding and visualizing convolutional networks have been developed in the literature, partly as a response to the common criticism that the learned features in a neural network are not interpretable. Now you’ve seen an overview of the most commonly used feature visualization techniques, from looking at filter weights to looking at layer activations. In addition to giving you insight into what features a CNN has learned to extract from an image, these techniques should give you a greater understanding of how CNNs work. For a classification CNN, you can actually see, as you look at deeper layers in a model, that the CNN transforms an input image into a smaller, more distilled representation of that image’s content. By its last layer, a CNN has learned to extract high-level features that still contain enough information about an image to classify it. These techniques are actually the basis for applications like Style Transfer and DeepDream, which compose images based on layer activations and extracted features. Perhaps most importantly, feature visualization gives you a way to show and communicate to other people what your networks have learned.
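To make the idea of layer activations and progressively smaller representations concrete, here is a minimal sketch using only NumPy. It applies a hand-made vertical-edge filter (a Sobel filter, standing in for a learned first-layer filter) to a tiny synthetic image, producing an activation map, and then downsamples that map to mimic how deeper layers distill the image into a smaller representation. The image, filter, and helper function are illustrative assumptions, not taken from any particular network.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic 8x8 image: dark left half, bright right half (a vertical edge).
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# A vertical-edge filter, similar in spirit to filters a CNN's first layer learns.
sobel_x = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])

activation = conv2d(image, sobel_x)  # feature map: responds strongly at the edge
pooled = activation[::2, ::2]        # 2x downsampling: a smaller, distilled representation

print(activation.shape)  # (6, 6)
print(pooled.shape)      # (3, 3)
print(activation.max())  # strongest response sits on the vertical edge
```

Visualizing `activation` as an image (for example with `matplotlib.pyplot.imshow`) is exactly the "looking at layer activations" technique described above, just applied to one hand-built filter instead of a trained layer.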