A typical convolutional neural network might consist of a series of convolution layers, followed by fully connected layers, and ultimately a softmax activation function. This is a great architecture for a classification task like "is this a picture of a hotdog?" But what if we want to change our task ever so slightly, and answer the question "where in this picture is the hotdog?" This question is much more difficult to answer, since fully connected layers don't preserve spatial information. But it turns out that if we change the C from connected to convolutional, we can integrate convolutions directly into those final layers to create fully convolutional networks, or FCNs for short.

FCNs help us answer the "where is the hotdog?" question because they preserve spatial information throughout the entire network. Additionally, since convolutional operations fundamentally don't care about the size of the input, a fully convolutional network will work on images of any size. In a classic convolutional network with fully connected final layers, the size of the input is constrained by the size of the fully connected layers: passing images of different sizes through the same sequence of convolutional layers and flattening the final output would produce flattened vectors of different sizes, which doesn't bode well for the fixed-size matrix multiplication in a fully connected layer. We will dive more into the details of fully convolutional networks next.
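To make the input-size argument concrete, here is a minimal sketch in PyTorch (the framework choice and all layer sizes are illustrative assumptions, not from the source). It builds one shared feature extractor and two heads: a classic fully connected head, which is tied to a single input size by the flatten step, and a fully convolutional head that swaps the linear layer for a 1x1 convolution, preserving the spatial grid and accepting any input size.

```python
import torch
import torch.nn as nn

# Shared convolutional feature extractor (sizes chosen for illustration).
features = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
)

# Classic head: flattening ties it to one input size.
# 32 channels * 8 * 8 assumes a 32x32 input; any other size breaks it.
fc_head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, 2))

# Fully convolutional head: a 1x1 convolution keeps the spatial grid,
# producing a coarse per-location score map ("where is the hotdog?").
fcn_head = nn.Conv2d(32, 2, kernel_size=1)

x_small = torch.randn(1, 3, 32, 32)
x_large = torch.randn(1, 3, 64, 64)

print(fc_head(features(x_small)).shape)   # torch.Size([1, 2])
print(fcn_head(features(x_small)).shape)  # torch.Size([1, 2, 8, 8])
print(fcn_head(features(x_large)).shape)  # torch.Size([1, 2, 16, 16])
# fc_head(features(x_large)) would raise a shape-mismatch error:
# flattening the 64x64 input yields 32*16*16 features, not 32*8*8.
```

Note how the FCN head's output grows with the input (an 8x8 map for the 32x32 image, a 16x16 map for the 64x64 image), while the fully connected head only works at the one size its weight matrix was built for.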