5 – K-means Implementation

Here I’ve read in an image of a monarch butterfly. And I want to segment this image into a few pieces, just enough to separate the green background scenery from the orange and black butterfly. To perform K-means segmentation, I’m going to focus on one distinguishing feature: the color of each pixel. I’ll need to …
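
The clustering step described here can be sketched in plain NumPy (the lesson notebook presumably uses OpenCV's k-means; this toy version, its deterministic initialization, and the color values are illustrative assumptions only):

```python
import numpy as np

def kmeans_pixels(pixels, k=2, iters=10):
    """Plain k-means on (N, 3) rows of RGB values."""
    # deterministic init for this sketch: k pixels spread evenly through the data
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # assign every pixel to its nearest center by color distance
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean color of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# toy "image": 50 orange-ish pixels and 50 green-ish pixels
pixels = np.array([[255, 140, 0]] * 50 + [[30, 120, 40]] * 50, dtype=float)
labels, centers = kmeans_pixels(pixels, k=2)
```

With k=2 the orange pixels and the green pixels end up in separate clusters, which is exactly the butterfly-versus-background split the lesson is after.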

4 – K-means Clustering

One commonly used image segmentation technique is k-means clustering. It’s a machine learning technique that separates an image into segments by clustering or grouping together data points that have similar traits. As an example, let’s look at this image of oranges in a bowl. If I asked k-means to break up this image into two …

3 – Image Contours

Edge detection algorithms are often used to detect the boundaries of objects. But after performing edge detection, you’ll often be left with sets of edges that highlight not only object boundaries but also interesting features and lines. And to do image segmentation, you’ll want only complete, closed boundaries that mark distinct areas and objects in …
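
One way to see what a "complete, closed boundary" means: for a binary mask, the contour is the set of foreground pixels with at least one background neighbor. A minimal NumPy sketch of that idea (OpenCV's contour functions do the real work in the lessons; this is only a conceptual illustration):

```python
import numpy as np

def boundary_of(binary):
    """Foreground pixels with at least one 4-connected background neighbor."""
    p = np.pad(binary, 1)                      # surround with background
    up, down = p[:-2, 1:-1], p[2:, 1:-1]
    left, right = p[1:-1, :-2], p[1:-1, 2:]
    # interior pixels are foreground with all four neighbors also foreground
    interior = binary & up & down & left & right
    return binary & ~interior

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True                          # a filled 3x3 square
outline = boundary_of(mask)                    # the square's closed outline
```

The 3×3 square's outline is its 8 border pixels; only the single interior pixel is dropped.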

2 – Corner Detectors

When we built an edge detector, we looked at the difference in intensity between neighboring pixels, and an edge was detected if there was a big and abrupt change in intensity in any one direction: up or down, left or right, or diagonal. Recall that the change in intensity in an image is also …
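
A corner, by contrast, shows a big change in two directions at once. The Harris corner response formalizes this: build a windowed structure tensor from the image gradients and score det(M) − k·trace(M)². A rough NumPy sketch (simple finite differences and a 3×3 box window stand in for the Gaussian weighting a real detector would use):

```python
import numpy as np

def box_sum(a):
    """Sum each value's 3x3 neighborhood (zero-padded at the borders)."""
    p = np.pad(a, 1)
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3))

def harris_response(gray, k=0.04):
    """Corner score: large only where gradients are strong in two directions."""
    Ix = np.zeros_like(gray); Iy = np.zeros_like(gray)
    Ix[:, 1:-1] = (gray[:, 2:] - gray[:, :-2]) / 2.0     # horizontal gradient
    Iy[1:-1, :] = (gray[2:, :] - gray[:-2, :]) / 2.0     # vertical gradient
    # windowed structure-tensor entries
    Sxx, Syy, Sxy = box_sum(Ix * Ix), box_sum(Iy * Iy), box_sum(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

gray = np.zeros((10, 10)); gray[4:, 4:] = 1.0            # bright square corner
R = harris_response(gray)
```

The response is positive at the square's corner, but zero or negative along its straight edges, where intensity changes in only one direction.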

1 – Types Of Features

Let’s look at this image of a mountain. This is Mt. Rainier in Washington state. Most features on this and any other image fall into one of three categories: edges, corners, and blobs. Edges we’re already very familiar with; they’re just areas in an image where the intensity abruptly changes. Also known as areas …

9 – Haar Cascades

Let’s build on top of our knowledge about feature extraction and object recognition, and think about how we might be able to simplify and speed up this whole pipeline. In this example, we’ll build a face detector using an algorithm called Haar cascades. This algorithm works by training on many positive images (images …
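
Part of what makes Haar cascades fast is that each rectangular feature costs only four lookups in an integral image (summed-area table), no matter how big the rectangle is. A small sketch of that trick (the patch and feature layout below are made up for illustration):

```python
import numpy as np

def integral_image(img):
    """ii has an extra zero row/col so that ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=int)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1): four corner lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

# a two-rectangle "edge" feature: bright band above a dark band
patch = np.ones((6, 6), dtype=int); patch[3:, :] = 0
ii = integral_image(patch)
feature = rect_sum(ii, 0, 0, 3, 6) - rect_sum(ii, 3, 0, 6, 6)
```

Here the top half sums to 18 and the bottom to 0, so this light-over-dark feature fires strongly, the same pattern a cascade uses to spot, say, a forehead above eyebrows.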

8 – Hough Line Detection

In this notebook, I’ve read in and made a copy of an image of a hand holding a mobile phone. Now, let’s say we want to isolate this screen area. By using the Hough transform, we should be able to detect the lines that form the screen boundary. To perform Hough line detection, I’ll first …
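
The core of Hough line detection is a voting accumulator over (ρ, θ): each edge point votes for every line that could pass through it, and collinear points pile their votes onto one cell. A minimal NumPy sketch of that accumulator (the notebook presumably uses OpenCV's Hough functions; this only shows the voting idea):

```python
import numpy as np

def hough_accumulator(edge_points, n_theta=180):
    """Each (y, x) edge point votes for every (rho, theta) line through it."""
    thetas = np.deg2rad(np.arange(n_theta))
    # rho can't exceed the farthest point's distance from the origin
    diag = int(np.ceil(max(np.hypot(y, x) for y, x in edge_points))) + 1
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)   # rho in [-diag, diag]
    for y, x in edge_points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, diag

# edge points along the vertical line x = 5
pts = [(y, 5) for y in range(10)]
acc, diag = hough_accumulator(pts)
```

All ten points land their θ = 0 votes in the ρ = 5 cell, so the peak of the accumulator recovers the line — exactly how the screen's straight boundary edges would be found.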

7 – Hough Transform

The simplest boundary you can detect is a line, and more complex boundaries are often made up of several lines. For example, in document or photo scanning, documents are typically rectangular, and so their boundary can be thought of as four lines placed together. And when you do edge detection, you’ll find that edges when …

6 – Canny Edge Detection

Now, we’ve seen the importance of using both low pass and high pass filters for accurate edge detection. But even with these used together, edge detection is still a very complex problem. We have to think about what level of intensity change constitutes an edge, and how we can consistently detect and represent both thin …
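
Canny tackles these issues with a multi-stage pipeline: Gaussian smoothing, gradient magnitude and direction, non-maximum suppression, then double thresholding with hysteresis. Here is just the double-threshold idea sketched in NumPy (no suppression or hysteresis linking; a simplified illustration, not the full algorithm):

```python
import numpy as np

def edge_strength_labels(gray, low, high):
    """Gradient magnitude, then Canny-style double threshold: strong vs weak."""
    gx = np.zeros_like(gray); gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]   # horizontal intensity change
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]   # vertical intensity change
    mag = np.hypot(gx, gy)
    strong = mag >= high                        # definite edges, always kept
    weak = (mag >= low) & ~strong               # kept only if linked to strong
    return strong, weak

gray = np.zeros((5, 8)); gray[:, 4:] = 1.0      # vertical step edge
strong, weak = edge_strength_labels(gray, low=0.2, high=0.5)
```

The two thresholds are how Canny decides "what level of intensity change constitutes an edge": strong responses survive outright, weak ones survive only when connected to strong ones.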

5 – Gaussian Blur

Instead of using an all-around averaging filter, we may want a filter that both blurs an image and better preserves the edges in it. And for that, we use Gaussian blur. This is perhaps the most frequently used low-pass filter in computer vision applications. It’s essentially a weighted average that gives the most weight …
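
That "weighted average with the most weight at the center" is a normalized 2D Gaussian. Building such a kernel in NumPy (OpenCV builds one internally for its Gaussian blur; the size and sigma here are arbitrary example values):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2D Gaussian kernel, normalized so weights sum to 1 (brightness preserved)."""
    ax = np.arange(size) - size // 2           # coordinates centered on zero
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

k = gaussian_kernel(5, 1.0)
```

The center cell carries the largest weight and the weights fall off symmetrically, which is why nearby pixels dominate the average and edges smear less than with a flat averaging filter.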

4 – Low-pass Filters

Earlier you saw an example of noise in an image of San Francisco City Hall. This noise is generally seen as speckles or discoloration in an image, and it doesn’t contain any useful information. It might even interfere with processing steps such as edge detection, since high-pass filters can amplify noise if it’s …
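
The simplest low-pass fix is a 3×3 averaging filter: each pixel becomes the mean of its neighborhood, which smooths speckles out before any high-pass step can amplify them. A NumPy sketch with synthetic noise (the noise parameters are made up for the demo):

```python
import numpy as np

def mean_filter(img):
    """3x3 averaging: each pixel becomes the mean of its neighborhood."""
    p = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += p[1 + dr:1 + dr + img.shape[0], 1 + dc:1 + dc + img.shape[1]]
    return out / 9.0

rng = np.random.default_rng(0)
flat = np.full((50, 50), 100.0)                 # a uniform gray patch
noisy = flat + rng.normal(0, 10, flat.shape)    # add speckle-like noise
smooth = mean_filter(noisy)                     # noise spread shrinks
```

Measuring the noise as standard deviation, the filtered patch is markedly calmer than the noisy one, which is the whole point of applying a low-pass filter before edge detection.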

3 – Creating a Filter

So let’s see how to create our own high pass filter. First I’ll show you how to define your own custom kernel, and later we’ll use OpenCV functions to help us build commonly used filters. Here’s an image of a building. It’s actually San Francisco City Hall. I read it in and as usual I’ve …
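
Defining a custom kernel and applying it amounts to sliding the kernel over the image and summing weighted neighborhoods. Here is that operation spelled out in NumPy with a Sobel-x high-pass kernel (the lesson would do the application step with an OpenCV filter call; the tiny image is a made-up example):

```python
import numpy as np

def filter2d(img, kernel):
    """Correlate img with a custom kernel (edge-replicated borders)."""
    kh, kw = kernel.shape
    p = np.pad(img, (kh // 2, kw // 2), mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# a custom high-pass kernel: Sobel-x, which responds to vertical edges
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

img = np.zeros((5, 6)); img[:, 3:] = 10.0   # dark left half, bright right half
edges = filter2d(img, sobel_x)
```

The response is large only at the dark-to-bright boundary and zero in the flat regions, which is exactly the high-frequency content a high-pass filter is meant to pick out.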

2 – High-pass Filters

In image processing, filters are used to filter out unwanted or irrelevant information in an image or to amplify features like object boundaries or other distinguishing traits. High-pass filters are used to make an image appear sharper and enhance high-frequency parts of an image, which are areas where the levels of intensity in neighboring pixels …

1 – Filters and Finding Edges

Now, we’ve seen how to use color to help isolate a desired portion of an image and even help classify an image. In addition to taking advantage of color information, we also have knowledge about patterns of grayscale intensity in an image. Intensity is a measure of light and dark similar to brightness, and we …
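
Intensity as "a measure of light and dark" is usually computed as a weighted sum of the color channels; the common luma weighting is 0.299·R + 0.587·G + 0.114·B, with green weighted highest because our eyes are most sensitive to it:

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance-weighted grayscale conversion (0.299/0.587/0.114 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

# a 1x2 image: one white pixel, one black pixel
px = np.array([[[255, 255, 255], [0, 0, 0]]], dtype=float)
gray = to_grayscale(px)
```

The weights sum to 1, so white maps to 255 and black to 0, and the depth-3 color cube collapses to a single-channel intensity grid.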

9 – Color Thresholds

So now that we’ve seen how to treat images as grids of pixels and as functions of x and y, let’s see how to use this information. We’ll start by learning how to use information about the colors in an image to isolate a particular area. This is an example of the computer vision pipeline …
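
Isolating an area by color comes down to building a mask: keep only pixels whose channels fall inside lower and upper bounds. A NumPy sketch (the bounds below are made-up "green" values; in the lessons an OpenCV range check would do the same):

```python
import numpy as np

# lower and upper RGB bounds for a rough "bright green" range (example values)
lower = np.array([0, 180, 0])
upper = np.array([100, 255, 100])

# a 2x2 toy image: two green-ish pixels, a red one, and a black one
img = np.array([[[10, 200, 30], [250, 40, 40]],
                [[50, 220, 90], [0, 0, 0]]])

# True where every channel falls inside the bounds
mask = np.all((img >= lower) & (img <= upper), axis=-1)
```

The mask picks out exactly the two green-ish pixels; multiplying the image by it (or indexing with it) isolates that area, as with a green-screen background.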

8 – Color Images

Now you’ve seen an example of an image broken down into a 2D grid of grayscale pixel values that has a width and a height, but color images are a little different. Color images are interpreted as 3D cubes of values with width, height, and depth. The depth is the number of color channels. Most color …
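
That height × width × depth cube maps directly onto an array's shape, and each color channel is just one 2D slice of the cube:

```python
import numpy as np

# a tiny RGB image: height 2, width 3, depth 3 (one layer per color channel)
img = np.zeros((2, 3, 3), dtype=np.uint8)
img[0, 0] = [255, 0, 0]            # make the top-left pixel pure red

h, w, depth = img.shape
red_channel = img[:, :, 0]         # the 2D grid of red values only
```

Slicing off a single channel returns an ordinary 2D grid, the same kind of grayscale-style array you've already worked with.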

7 – Images as Grids of Pixels

One of your first tasks will be to classify a binary set of data: images taken during the day or night. But before you can complete this task, you first have to learn how images are seen by machines. Let’s take this image of a car. This is actually a self-driving car on the …

6 – Image Formation

So how does a computer actually see image data? Images are just 2D representations of 3D world scenes. For example, if you take a picture of an apple, which we know is a 3D object, you’ll get a 2D image that represents that apple. The image contains detail about the color and the shape of …

5 – AffdexMe Demo

The best way to understand how emotion AI works is by example. Would you like to see a live demo? Sure. Why not? Everyone wants their computers to understand them better. All right, so here’s the demo. Basically, what’s happening here is that the algorithm is looking for faces and it’s detecting your face by, …

4 – 09. Training a Model

So, I’ve just described a computer vision pipeline that takes in a sequence of images and through a series of steps can recognize different facial expressions and emotions. But it still seems kind of mysterious. Can you talk a bit about how exactly a model like this can be trained to recognize different facial expressions? …