4-4-1-19. Outro

Congratulations. You’ve gone through the entire deep neural networks introduction. I am super, super proud of you. Now you know what neural networks are, how to build them, train them, and optimize their performance. You are fully equipped to go through the rest of the class learning interesting, cutting-edge applications, such as image recognition …

4-4-1-18. Quiz: TensorFlow Dropout

TensorFlow Dropout Figure 1, taken from the paper “Dropout: A Simple Way to Prevent Neural Networks from Overfitting” (https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf), illustrates how dropout works. Dropout is a regularization technique for reducing overfitting. The technique temporarily drops units (artificial neurons) from the network, along with all of those units’ incoming and outgoing connections. TensorFlow provides …
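To see the mechanics behind the TensorFlow op, here is a NumPy sketch of what dropout does during training: zero out each unit with probability `1 - keep_prob` and scale the survivors by `1 / keep_prob` so the expected activation is unchanged. The function name and shapes are illustrative, not part of the TensorFlow API.

```python
import numpy as np

def dropout(activations, keep_prob, rng):
    """Zero units with probability (1 - keep_prob); scale the rest."""
    mask = rng.random(activations.shape) < keep_prob
    # Dividing by keep_prob keeps the expected activation the same,
    # so no rescaling is needed at inference time.
    return activations * mask / keep_prob

rng = np.random.default_rng(0)
hidden = np.ones((4, 5))
dropped = dropout(hidden, keep_prob=0.5, rng=rng)
print(dropped)  # each entry is either 0.0 (dropped) or 2.0 (kept, scaled)
```

At inference time no units are dropped, which in this sketch corresponds to `keep_prob=1.0`.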

4-4-1-17. Finetuning

Loading the Weights and Biases into a New Model Sometimes you might want to adjust, or “finetune,” a model that you have already trained and saved. However, loading saved Variables directly into a modified model can generate errors. Let’s go over how to avoid these problems. Naming Error TensorFlow uses a string identifier for Tensors …

4-4-1-16. Save and Restore TensorFlow Models

Save and Restore TensorFlow Models Training a model can take hours. But once you close your TensorFlow session, you lose all the trained weights and biases. If you were to reuse the model in the future, you would have to train it all over again! Fortunately, TensorFlow gives you the ability to save your progress …

4-4-1-15. Deep Neural Network in TensorFlow

Deep Neural Network in TensorFlow You’ve seen how to build a logistic classifier using TensorFlow. Now you’re going to see how to use the logistic classifier to build a deep neural network. Step by Step In the following walkthrough, we’ll step through TensorFlow code written to classify the letters in the MNIST database. If you …

4-4-1-14. Quiz: TensorFlow ReLUs

TensorFlow ReLUs TensorFlow provides the ReLU function as tf.nn.relu(), as shown below. The above code applies the tf.nn.relu() function to the hidden_layer, effectively setting any negative values to zero and acting like an on/off switch. Adding additional layers after the activation function, like the output layer, turns the model into a nonlinear function. This nonlinearity allows the network to solve more complex …
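The computation tf.nn.relu() performs is simply an elementwise max with zero. A NumPy sketch (the function name here is my own):

```python
import numpy as np

def relu(x):
    # Pass positive values through unchanged; clamp negatives to zero.
    return np.maximum(0.0, x)

hidden_layer = np.array([-1.5, 0.0, 2.0, -0.1, 3.0])
print(relu(hidden_layer))  # [0. 0. 2. 0. 3.]
```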

4-4-1-13. Two-layer Neural Network

Multilayer Neural Networks In the previous lessons and the lab, you learned how to build a one-layer neural network. Now, you’ll learn how to build multilayer neural networks with TensorFlow. Adding a hidden layer to a network allows it to model more complex functions. Also, using a non-linear activation function on the hidden …
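The forward pass of such a two-layer network can be sketched in NumPy: a hidden layer with a ReLU activation feeding a linear output layer. All sizes and weight values below are illustrative, not from the lesson's code.

```python
import numpy as np

rng = np.random.default_rng(42)

n_input, n_hidden, n_output = 4, 3, 2
x = rng.standard_normal((1, n_input))           # one example, row vector
W1 = rng.standard_normal((n_input, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_output))  # hidden -> output weights
b2 = np.zeros(n_output)

# Non-linear activation on the hidden layer is what lets the stacked
# layers model more than a single linear transformation.
hidden = np.maximum(0.0, x @ W1 + b1)  # ReLU
logits = hidden @ W2 + b2              # linear output layer
print(logits.shape)  # (1, 2)
```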

4-4-1-12. Lab: NotMNIST in TensorFlow

TensorFlow Neural Network Lab In this lab, you’ll use all the tools you learned from *Introduction to TensorFlow* to label images of English letters! The data you are using, notMNIST, consists of images of letters from A to J in different fonts. The above images are a few examples of the data you’ll be training …

4-4-1-11. Pre-Lab: NotMNIST in TensorFlow

TensorFlow Lab We’ve prepared a Jupyter notebook that will guide you through the process of creating a single-layer neural network in TensorFlow. You’ll implement data normalization, then build and train the network with TensorFlow. Getting the notebook The notebook and all related files are available from our GitHub repository. Either clone …

4-4-1-10. Epochs

Epochs An epoch is a single forward and backward pass of the whole dataset. Training for multiple epochs increases the accuracy of the model without requiring more data. This section will cover epochs in TensorFlow and how to choose the right number of epochs. The following TensorFlow code trains a model using 10 epochs. Running …
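The idea can be shown without TensorFlow in a minimal gradient-descent loop, where each iteration of the outer loop is one epoch (one full pass over the data). The toy data, learning rate, and epoch count below are made up for illustration.

```python
import numpy as np

# Toy data: y = 3x, which a single weight can fit exactly.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * x

w = 0.0
learning_rate = 0.01
epochs = 10  # one epoch = one full forward and backward pass over the data

for epoch in range(epochs):
    pred = w * x
    grad = 2 * np.mean((pred - y) * x)  # d(MSE)/dw
    w -= learning_rate * grad
    # Each additional epoch moves w closer to 3.0 using the same data;
    # too many epochs on a real dataset risks overfitting.

print(round(w, 3))  # approaches 3.0 as epochs increase
```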

4-4-1-9. Quiz: Mini-batch

Mini-batching In this section, you’ll go over what mini-batching is and how to apply it in TensorFlow. Mini-batching is a technique for training on subsets of the dataset instead of all the data at one time. This provides the ability to train a model, even if a computer lacks the memory to store the entire …
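Splitting a dataset into mini-batches is plain list slicing. Here is a sketch of a batching helper in the spirit of what the quiz has you build; the function name and data are my own, and note that when the dataset size isn't divisible by the batch size, the last batch simply holds the remainder.

```python
def batches(batch_size, features, labels):
    """Split features and labels into batches of at most batch_size."""
    assert len(features) == len(labels)
    output = []
    for start in range(0, len(features), batch_size):
        end = start + batch_size
        output.append([features[start:end], labels[start:end]])
    return output

features = [[1, 2], [3, 4], [5, 6], [7, 8]]
labels = [[0], [1], [0], [1]]
# 4 examples with batch_size=3 -> one batch of 3 and one batch of 1.
print(batches(3, features, labels))
```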

4-4-1-8. Quiz: TensorFlow Cross Entropy

Cross Entropy in TensorFlow As with the softmax function, TensorFlow has a function to do the cross entropy calculations for us. Let’s take what you learned from the video and create a cross entropy function in TensorFlow. To create a cross entropy function in TensorFlow, you’ll need to use two new functions: tf.reduce_sum() tf.log() Reduce …
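The calculation those two functions compose is the negative sum of the one-hot labels times the log of the softmax predictions. A NumPy sketch (function and variable names are my own):

```python
import numpy as np

def cross_entropy(softmax_output, one_hot_labels):
    # D(P, L) = -sum over classes of label * log(prediction)
    return -np.sum(one_hot_labels * np.log(softmax_output))

softmax_output = np.array([0.7, 0.2, 0.1])
one_hot_labels = np.array([1.0, 0.0, 0.0])
loss = cross_entropy(softmax_output, one_hot_labels)
print(round(loss, 4))  # -log(0.7) ≈ 0.3567
```

Only the predicted probability of the correct class contributes, since the other label entries are zero; the loss grows as that probability shrinks.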

4-4-1-7. Quiz: TensorFlow Softmax

TensorFlow Softmax The softmax function squashes its inputs, typically called logits or logit scores, to be between 0 and 1, and also normalizes the outputs such that they all sum to 1. This means the output of the softmax function is equivalent to a categorical probability distribution. It’s the perfect function to use as the output activation for …
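The math behind the TensorFlow op can be sketched in NumPy: exponentiate each logit and divide by the sum of the exponentials. (Subtracting the max logit first is a standard numerical-stability trick that doesn't change the result.)

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability; the ratio is unchanged.
    exps = np.exp(logits - np.max(logits))
    return exps / np.sum(exps)

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs)        # roughly [0.659 0.242 0.099]
print(probs.sum())  # 1.0
```

The largest logit gets the largest probability, and the outputs always sum to 1, which is what makes them usable as a categorical distribution.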

4-4-1-6. Quiz: TensorFlow Linear Function

Linear functions in TensorFlow The most common operation in neural networks is calculating the linear combination of inputs, weights, and biases. As a reminder, we can write the output of the linear operation as $\mathbf{y} = \mathbf{W}\mathbf{x} + \mathbf{b}$. Here, $\mathbf{W}$ is a matrix of the weights connecting two layers. The output $\mathbf{y}$, the input $\mathbf{x}$, and the biases $\mathbf{b}$ are all vectors. Weights and Bias …
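In NumPy the linear operation is a single matrix product plus a bias vector. The shapes below follow the row-vector convention TensorFlow's matmul(input, weights) uses; all values are illustrative.

```python
import numpy as np

# Two inputs feeding three output units.
W = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # weights, shape (n_inputs, n_outputs)
x = np.array([1.0, 0.5])          # inputs
b = np.array([0.1, 0.1, 0.1])     # one bias per output unit

# Each output is a weighted sum of the inputs plus its bias.
y = x @ W + b
print(y)  # [3.1 4.6 6.1]
```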

4-4-1-5. Quiz: TensorFlow Math

TensorFlow Math Getting the input is great, but now you need to use it. You’re going to use basic math functions that everyone knows and loves – add, subtract, multiply, and divide – with tensors. (There are many more math functions you can check out in the documentation.) Addition You’ll start with the add function. The tf.add() function does …

4-4-1-4. Quiz: TensorFlow Input

Input In the last section, you passed a tensor into a session and it returned the result. What if you want to use a non-constant? This is where tf.placeholder() and feed_dict come into play. In this section, you’ll go over the basics of feeding data into TensorFlow. tf.placeholder() Sadly, you can’t just set x to your dataset and put it in …

4-4-1-3. Hello, Tensor World!

Hello, Tensor World! Let’s analyze the Hello World script you ran. For reference, I’ve added the code below. Tensor In TensorFlow, data isn’t stored as integers, floats, or strings. These values are encapsulated in an object called a tensor. In the case of hello_constant = tf.constant('Hello World!'), hello_constant is a 0-dimensional string tensor, but tensors come in a …

4-4-1-2. Installing TensorFlow

Throughout this lesson, you’ll apply your knowledge of neural networks on real datasets using TensorFlow, an open-source deep learning library created by Google. You’ll use TensorFlow to classify images from the notMNIST dataset – a dataset of images of English letters from A to J. You can see a few example images below. …

4-4-1-1. Intro

Hi! It’s Luis again! Intro to TensorFlow Now that you are an expert in Neural Networks with Keras, you’re more than ready to learn TensorFlow. In the following sections of this Nanodegree Program, you will be using Keras and TensorFlow alternately. Keras is great for building neural networks quickly, but it abstracts a lot of …