10-10 Visualizing Activations

For intermediate layers, like the second convolutional layer in a CNN, visualizing the learned weights in each filter doesn't give us easily interpretable information. So, how can we visualize what these deeper layers are seeing? What can give us useful information is to look at the feature maps of these layers as the network looks at specific images. This is called layer activation: looking at how a certain layer of feature maps activates when it sees a specific input image, such as an image of a face. The filters in deeper convolutional layers will often show high levels of activation, bright spots, in localized areas. The important thing is to see that these maps aren't just producing blobby, noisy outputs; they should produce a noticeably different response for different classes of images. You've already seen a few examples of activations like these. Next, let's take a look at how we can extract activation maps from a trained CNN.
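As a rough illustration, here is one way to pull feature maps out of a network in PyTorch using a forward hook. The `SimpleCNN` architecture, the layer name `conv2`, and the dummy input are hypothetical stand-ins for this sketch, not details from the lesson; in practice you would register the hook on a layer of your own trained model and pass in a real preprocessed image.

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# A minimal two-conv-layer CNN; this architecture is illustrative only.
class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.pool(self.relu(self.conv1(x)))
        x = self.pool(self.relu(self.conv2(x)))
        return x

model = SimpleCNN()
model.eval()

# Capture the second conv layer's output with a forward hook.
activations = {}
def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.conv2.register_forward_hook(save_activation("conv2"))

# Stand-in input; replace with a real preprocessed image tensor.
image = torch.randn(1, 1, 28, 28)
with torch.no_grad():
    model(image)

# Plot the first few feature maps produced for this input.
feature_maps = activations["conv2"][0]  # shape: (32, 14, 14)
fig, axes = plt.subplots(1, 8, figsize=(16, 2))
for i, ax in enumerate(axes):
    ax.imshow(feature_maps[i].numpy(), cmap="gray")
    ax.axis("off")
plt.show()
```

With a trained model and a real input image, bright regions in these plots show where each filter in the layer responds most strongly, which is exactly the localized activation described above.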
