The architecture of the net was very standard; it was taken from Google. It was a deep convolutional network, and it's shown over here with its different layers. In fact, we experimented both with completely untrained networks and with networks that had been pre-trained on other images. Then we would feed in the image we care about, the skin lesion, and train the network on its class label. We didn't quite use 2,000 class labels; many of these had misspellings or were replicas of the same condition, so we ended up with about 757 class labels. If you care about it, you can read in the paper how we did this. We then trained the network to predict a specific class label.
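As a rough illustration of the setup described here, a minimal sketch of that transfer-learning recipe follows. It assumes PyTorch/torchvision and uses Inception v3 as a representative "standard network from Google"; the transcript itself does not name the exact model, data pipeline, or hyperparameters, so the specific layer names, learning rate, and auxiliary-loss weight below are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumed, not the original code): load a CNN pre-trained on other
# images, replace its classification head with a 757-way classifier for the
# skin-lesion labels, and fine-tune on the lesion images.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 757  # class labels left after merging misspelled/duplicate labels

# Pre-trained backbone; pass weights=None to try a completely untrained network instead.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)

# Swap the original ImageNet heads for new 757-way layers.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # illustrative values

def train_step(images, labels):
    """One fine-tuning step on a batch of skin-lesion images with class labels."""
    model.train()
    optimizer.zero_grad()
    logits, aux_logits = model(images)  # Inception v3 returns main + auxiliary logits in train mode
    loss = criterion(logits, labels) + 0.4 * criterion(aux_logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design choice this sketch reflects is the one in the talk: keep the standard architecture and pre-trained weights fixed in form, and only retrain it to map lesion images onto the cleaned-up set of roughly 757 class labels.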