Let me walk you through the sequence of steps needed to analyze facial expressions and emotions. Other computer vision tasks have different desired outputs and corresponding algorithms, but they follow a similar overall pipeline. First, a computer receives visual input from an imaging device such as a camera, typically captured as a sequence of images or frames. Each frame is then sent through preprocessing steps that enhance the quality and detail of the image; you may also perform other transformations here, such as converting from color to grayscale. Next, these images are analyzed and the software detects facial features of interest, such as the curve of the mouth and the shape of the eyes. Data about these features is then fed into a trained model that, drawing on patterns learned from previously labeled data, identifies an emotion, reporting it along with a probability. Finally, having recognized an emotion, an application can act on this and interact with the human in a way that takes their emotional state into account.
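The pipeline above can be sketched in a few lines of code. This is a minimal, purely illustrative sketch: the function names (`to_grayscale`, `extract_features`, `classify_emotion`) are hypothetical, the "feature" is a stand-in summary statistic rather than real facial landmarks, and the "model" is a hard-coded rule, not something learned from data. A real system would use a computer vision library and an actual trained classifier.

```python
def to_grayscale(frame):
    """Preprocess: convert an RGB frame (nested lists of pixels) to grayscale
    using the standard luminance weights."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in frame]

def extract_features(gray):
    """Stand-in feature extractor: reduces the image to a summary statistic.
    A real system would locate landmarks like the mouth curve and eye shape."""
    pixels = [p for row in gray for p in row]
    return {"brightness": sum(pixels) / len(pixels)}

def classify_emotion(features):
    """Toy 'trained model': maps features to an (emotion, probability) pair.
    The threshold here is made up for illustration, not learned."""
    if features["brightness"] > 128:
        return ("happy", 0.8)
    return ("neutral", 0.6)

# Run one frame through the whole pipeline: capture -> preprocess ->
# feature extraction -> classification.
frame = [[(200, 180, 170), (190, 185, 175)],
         [(60, 50, 40), (70, 65, 55)]]
gray = to_grayscale(frame)
emotion, probability = classify_emotion(extract_features(gray))
print(emotion, probability)  # the application would act on this result
```

The key structural point is that each stage consumes the previous stage's output, so any stage (say, the toy classifier) can be swapped for a more capable implementation without changing the rest of the pipeline.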