Finally, we ask the question: what is the network fixating on? What is it that it looks at? You can do a sensitivity analysis. You wiggle your input image a little bit and see how much the output class wiggles. For some wiggles, the output does not change very much, but for others it changes a lot. That is called a sensitivity analysis, and it gives you a feeling for what the network pays attention to, which factors in the image it normally uses to arrive at its conclusion. Here is a melanoma, and you can see on the right side of this diagram that the darker the dot, the more important that feature is. If you look in detail here, you will see that this melanoma has already spread a little bit and is comprised of multiple black dots. The fact that there are multiple black dots seems to have a big impact on the vote of the network, as shown by the corresponding dark areas in the right image. Take some time to study this image over here and look at the different types of skin patches and which regions of those patches play a big role. All these images were classified correctly, and in some cases you will see that the entire image matters; in others, very specific features are being emphasized in the classification. The network is smart enough to really understand what part of the image is essential for finding cancer.
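The wiggle-and-measure idea described above can be sketched as a simple finite-difference loop. This is a minimal toy, not the melanoma network from the lecture: the tiny sigmoid "model" and its hand-placed weight mask are stand-ins, chosen so that sensitivity is high exactly where the model actually looks.

```python
import numpy as np

def sensitivity_map(model, image, eps=1e-3):
    # Finite-difference sensitivity analysis: wiggle each pixel by a
    # small eps and record how much the scalar output score changes.
    base = model(image)
    sens = np.zeros_like(image, dtype=float)
    for idx in np.ndindex(image.shape):
        perturbed = image.copy()
        perturbed[idx] += eps
        sens[idx] = abs(model(perturbed) - base) / eps
    return sens

# Toy "classifier": its weights are concentrated on a small centre
# patch, mimicking a network that keys on a cluster of dark dots.
weights = np.zeros((8, 8))
weights[3:5, 3:5] = 1.0  # the region the toy model cares about

def model(img):
    return float(1.0 / (1.0 + np.exp(-(weights * img).sum())))

rng = np.random.default_rng(0)
image = rng.random((8, 8))
sens = sensitivity_map(model, image)
```

Pixels outside the weighted patch leave the output unchanged when wiggled, so their sensitivity is zero; pixels inside it move the output, so they light up in the map, just like the dark dots in the lecture's right-hand image. Real saliency maps usually use input gradients instead of one forward pass per pixel, but the idea is the same.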