9 – Color Thresholds

So now that we’ve seen how to treat images as grids of pixels and as functions of x and y, let’s see how to use this information. We’ll start by learning how to use information about the colors in an image to isolate a particular area. This is an example of the computer vision pipeline …
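As a minimal sketch of the idea, a color threshold is just a per-pixel comparison that produces a True/False mask over the image. The tiny array and the threshold values below are illustrative, not from the lesson's notebook:

```python
import numpy as np

# A tiny 2x3 "image" of RGB pixels (values are illustrative).
image = np.array([
    [[200,  50,  40], [ 30,  40, 220], [ 25,  35, 210]],
    [[190,  60,  55], [ 20,  30, 200], [210,  45,  50]],
], dtype=np.uint8)

# Threshold: select pixels whose blue channel is high and red is low.
mask = (image[:, :, 2] > 150) & (image[:, :, 0] < 100)

print(mask)  # boolean grid marking the "blue enough" pixels
```

The mask can then be used to isolate (or remove) exactly those pixels, which is the basis of the blue-screen selection coming up later in the lesson.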

8 – Color Images

Now you’ve seen an example of an image broken down into a 2D grid of grayscale pixel values that has a width and a height, but color images are a little different. Color images are interpreted as 3D cubes of values with width, height, and depth. The depth is the number of color channels. Most color …
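This height × width × depth structure can be sketched directly with a NumPy array (the dimensions and color values below are illustrative):

```python
import numpy as np

# A color image is a 3D cube of values: height x width x depth,
# where depth is the number of color channels (3 for RGB).
image = np.zeros((4, 5, 3), dtype=np.uint8)
image[:, :, 0] = 230   # red channel
image[:, :, 1] = 120   # green channel
image[:, :, 2] = 20    # blue channel

print(image.shape)        # (4, 5, 3): height 4, width 5, depth 3
red_channel = image[:, :, 0]
print(red_channel.shape)  # each channel is its own 2D grid: (4, 5)
```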

7 – Images as Grids of Pixels

One of your first tasks will be to classify a binary set of data: images taken during the day or night. But before you can complete this task, you first have to learn how images are seen by machines. Let’s take this image of a car. This is actually a self-driving car on the …
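A grayscale image, as a machine sees it, is just a 2D grid of intensity values. A toy grid (values illustrative, 0 for black up to 255 for white) makes the idea concrete:

```python
# A grayscale image is a 2D grid of intensity values,
# 0 (black) through 255 (white). This tiny 3x4 grid is illustrative.
image = [
    [  0,  50, 100, 150],
    [ 30,  80, 130, 180],
    [ 60, 110, 160, 255],
]

height = len(image)       # number of rows
width = len(image[0])     # number of columns
pixel = image[2][3]       # value at row 2, column 3 -> 255, a white pixel
print(height, width, pixel)
```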

6 – Image Formation

So how does a computer actually see image data? Images are just 2D representations of 3D world scenes. For example, if you take a picture of an apple, which we know is a 3D object, you’ll get a 2D image that represents that apple. The image contains detail about the color and the shape of …

5 – AffdexMe Demo

The best way to understand how emotion AI works is by example. Would you like to see a live demo? Sure. Why not? Everyone wants their computers to understand them better. All right, so here’s the demo. Basically, what’s happening here is that the algorithm is looking for faces and it’s detecting your face by, …

4 – Training a Model

So, I’ve just described a computer vision pipeline that takes in a sequence of images and through a series of steps can recognize different facial expressions and emotions. But it still seems kind of mysterious. Can you talk a bit about how exactly a model like this can be trained to recognize different facial expressions? …

3 – Computer Vision Pipeline

Let me walk you through a sequence of steps that you need to analyze facial expressions and emotions. Other computer vision tasks have different desired outputs and corresponding algorithms, but they use a similar overall pipeline. First off, a computer receives visual input from an imaging device like a camera. This is typically captured as …

2 – Emotional Intelligence

Today I’m with Dr. Rana el Kaliouby, Co-founder and CEO at Affectiva, a company that uses computer vision to build systems that are emotionally intelligent. Rana, I’ve been talking to our students about the role vision plays at a basic level in AI systems by helping to recognize objects and behavior, but your work is …

16 – Evaluation Metrics

Now we have created a complete classifier that takes in an RGB image and outputs a predicted label for any image. The next step is to look at the accuracy of our model. The accuracy of any classification model is found by comparing predicted and true labels. If the predicted label matches the true label, …
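The accuracy comparison described above boils down to counting matches between the two label lists. A minimal sketch, with illustrative labels (1 for "day", 0 for "night"):

```python
# Accuracy = correctly predicted labels / total labels.
# These labels are illustrative, not real classifier output.
true_labels      = [1, 1, 0, 0, 1, 0, 1, 0]
predicted_labels = [1, 0, 0, 0, 1, 0, 1, 1]

correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
accuracy = correct / len(true_labels)
print(accuracy)  # 6 of 8 labels match -> 0.75
```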

15 – Classification

Let’s go back to our notebook and complete our day and night classifier. After we extract this brightness feature, the average value of an image, we want to turn this into a predicted label that classifies any image. We’ve looked at the average brightness of both day and night images, and you should have an …
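Turning the brightness feature into a predicted label is a single comparison against a cutoff. A sketch of that step (the threshold value of 100 is illustrative; in the notebook you would choose it by inspecting your own day and night images):

```python
def estimate_label(avg_brightness, threshold=100):
    """Predict 1 ("day") if average brightness exceeds the threshold,
    else 0 ("night"). The threshold here is an illustrative guess."""
    return 1 if avg_brightness > threshold else 0

print(estimate_label(140))  # a bright image -> 1 (day)
print(estimate_label(35))   # a dark image   -> 0 (night)
```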

14 – Average Brightness

Your first steps in building a day and night image classifier are to visualize the input images and standardize them to be the same size. To do that, we imported our usual resources and loaded the image datasets. And we created a standardized list of all the images and their labels. Finally, we could visualize …
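Once images are standardized to the same size, the average brightness feature is just the sum of all pixel values divided by the number of pixels. A minimal sketch on a tiny grayscale array (the notebook works with HSV value channels of full-size images; the numbers here are illustrative):

```python
import numpy as np

# Average brightness: sum of all pixel values / number of pixels.
# A tiny 2x3 grayscale array stands in for a standardized image.
image = np.array([
    [ 10,  20,  30],
    [200, 210, 250],
], dtype=np.uint8)

avg_brightness = image.sum() / image.size
print(avg_brightness)  # 720 / 6 -> 120.0
```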

13 – Features

When you approach a classification challenge, you may ask yourself, “How can I tell these images apart? What traits do these images have that differentiate them? And how can I write code to represent their differences?” Adding onto that, how can I ignore irrelevant or overly similar parts of these images? You may have thought …

12 – Labeled Data and Accuracy

After exploring the day and night image data, you may have noticed a part of the data that we haven’t yet gone over: a label associated with each image. So, what exactly is a label? And why do we need it? A label is kind of like a tag that’s attached to a specific image. …

11 – Color Spaces and Transforms

So we’ve seen how to detect a blue screen background. But this detection assumed that the scene was very well lit and that the screen was a very consistent blue. What would happen if the lighting changed and part of the wall was in shadow or washed out and bright? The simple blue color threshold …
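The usual fix is to move from RGB into a color space like HSV, which separates the color itself (hue) from how bright it is (value), so a hue threshold survives lighting changes that break an RGB threshold. The lesson uses OpenCV for this conversion; as a dependency-free sketch, Python's standard-library `colorsys` does the same per-pixel math (the pixel values are illustrative):

```python
import colorsys

# HSV separates color (hue) from lightness (value), making thresholds
# more robust to shadows and glare than raw RGB comparisons.
# colorsys expects floats in [0, 1]; this pixel is strongly blue.
r, g, b = 30 / 255, 40 / 255, 220 / 255
h, s, v = colorsys.rgb_to_hsv(r, g, b)

print(round(h * 360))  # hue in degrees, squarely in the blue range
```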

10 – Coding a Blue Screen

Let’s code up a simple blue screen color selection in Python. We’ll be using a new library in this lesson called OpenCV, which is commonly used in computer vision applications and will help us create custom applications of our own. We’ll be starting with an image of a pizza on a blue screen background. We’ll …
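The core of a blue-screen selection is: build a mask of "blue enough" pixels, then black them out so only the foreground survives. The lesson does this with OpenCV (e.g. `cv2.inRange`); the sketch below uses plain NumPy comparisons to show the same logic on a tiny illustrative image with hand-picked thresholds:

```python
import numpy as np

# A 2x3 RGB "image": orange foreground pixels on a blue background.
image = np.array([
    [[230, 120,  20], [ 10,  20, 210], [ 15,  25, 220]],
    [[225, 115,  25], [ 12,  18, 205], [240, 130,  30]],
], dtype=np.uint8)

# Blue background pixels: low red and green, high blue.
# These threshold values are illustrative; tune them per image.
is_blue = (image[:, :, 0] < 60) & (image[:, :, 1] < 60) & (image[:, :, 2] > 180)

masked = image.copy()
masked[is_blue] = [0, 0, 0]   # black out the background

print(is_blue.sum())  # 3 background pixels removed
```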

1 – Pattern Recognition

The first step in recognizing patterns in images is learning how images are seen by computers. For example, say you want to find the boundary of an object. A common task is separating an object like a human from a background. For us, it’s easy to see where the background ends and the human begins. …