Web browsers get more powerful by the day, and websites and web applications are growing in complexity. Operations that required a supercomputer a few decades ago now run on a smartphone. One of those operations is face detection.
The ability to detect and analyze a face is super useful, as it enables us to add clever features. Think of automatically blurring faces (like Google Maps does), panning and scaling a webcam feed to focus on people (like Microsoft Teams), validating a passport, adding silly filters (like Instagram and Snapchat), and much more. But before we can do all that, we first need to find the face!
Face-api.js is a library that enables developers to use face detection in their apps without requiring a background in machine learning.
The code for this tutorial is available on GitHub.
Face Detection with Machine Learning
Detecting objects, like a face, is quite complex. Think about it: perhaps we could write a program that scans pixels to find the eyes, nose, and mouth. It can be done, but making it totally reliable is practically unachievable, given the many factors to account for. Think of lighting conditions, facial hair, the vast variety of shapes and colors, makeup, angles, face masks, and so much more.
Neural networks, however, excel at these kinds of problems and can be generalized to account for most (if not all) conditions. We can create, train, and use neural networks in the browser with TensorFlow.js, a popular JavaScript machine learning library. However, even if we use an off-the-shelf, pre-trained model, we’d still get a little bit into the nitty-gritty of supplying the information to TensorFlow and interpreting the output. If you’re interested in the technical details of machine learning, check out “A Primer on Machine Learning with Python”.
Enter face-api.js. It wraps all of this into an intuitive API. We can pass an `img`, `canvas`, or `video` DOM element, and the library will return one or a set of results. Face-api.js can detect faces, but it can also estimate various things about them, as listed below (a combined usage sketch follows the list).
- Face detection: get the boundaries of one or multiple faces. This is useful for determining where and how big the faces are in a picture.
- Face landmark detection: get the position and shape of the eyebrows, eyes, nose, mouth and lips, and chin. This can be used to determine facing direction or to project graphics on specific regions, like a mustache between the nose and lips.
- Face recognition: determine who’s in the picture.
- Face expression detection: get the expression from a face. Note that the mileage may vary for different cultures.
- Age and gender detection: get the age and gender from a face. Note that “gender” classification here means classifying a face as feminine or masculine, which doesn’t necessarily match the person’s gender identity.
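To make the list above concrete, here’s a minimal sketch of the library’s chained detection API. The function name `describeFaces` and the `image` argument are placeholders of my own; the chain itself follows the patterns from the face-api.js documentation and assumes the matching models have already been loaded (model loading is covered under Installation below).

```js
// Minimal sketch: detect every face in an image, then augment each result
// with landmarks, an expression estimate, and an age/gender estimate.
// `image` can be an <img>, <canvas>, or <video> DOM element.
async function describeFaces(image) {
  const results = await faceapi
    .detectAllFaces(image)   // bounding boxes
    .withFaceLandmarks()     // eyebrows, eyes, nose, mouth, chin
    .withFaceExpressions()   // happy, sad, angry, surprised, ...
    .withAgeAndGender();     // age estimate and feminine/masculine classification

  return results;            // one entry per detected face
}
```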
Before you use any of this beyond experiments, note that artificial intelligence excels at amplifying biases. Gender classification works well for cisgender people, but it can’t detect the gender of my nonbinary friends. It identifies white people most of the time but frequently fails to detect people of color.
Be very thoughtful about using this technology and test thoroughly with a diverse testing group.
Installation
We can install face-api.js via npm:
```bash
npm install face-api.js
```
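If you do take the npm route with a bundler, the package is imported as a namespace, roughly like this (based on the library’s documented usage):

```js
import * as faceapi from 'face-api.js';
```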
However, to skip setting up build tools, I’ll include the UMD bundle via unpkg.com:
```js
/* globals faceapi */
import 'https://unpkg.com/face-api.js@0.22.2/dist/face-api.min.js';
```
After that, we’ll need to download the correct pre-trained model(s) from the library’s repository. Decide what we want to know about the faces, and use the Available Models section to see which models are required. Some features work with multiple models. In that case, we have to choose between bandwidth/performance and accuracy. Compare the file sizes of the various available models and choose whichever you think is best for your project.
Unsure which models you need for your use case? You can return to this step later. When we use the API without loading the required models, an error will be thrown stating which model the library expects.
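As a sketch of what model loading looks like, the snippet below loads the SSD MobileNet V1 detector plus the 68-point landmark and expression models. The `/models` path and the `loadModels` wrapper are assumptions of mine: they presume you’ve copied the downloaded weight files into a folder with that name on your server.

```js
// Assumes the pre-trained weight files have been copied into /models.
async function loadModels() {
  await faceapi.nets.ssdMobilenetv1.loadFromUri('/models');    // face detection
  await faceapi.nets.faceLandmark68Net.loadFromUri('/models'); // face landmarks
  await faceapi.nets.faceExpressionNet.loadFromUri('/models'); // expressions
}
```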
We’re now ready to use the face-api.js API.
Examples
Let’s build some stuff!
For the examples below, I’ll load a random image from Unsplash Source with this function:
```js
function loadRandomImage() {
  const image = new Image();
  image.crossOrigin = true;

  return new Promise((resolve, reject) => {
    image.addEventListener('error', (error) => reject(error));
    image.addEventListener('load', () => resolve(image));
    image.src = 'https://source.unsplash.com/512x512/?face,friends';
  });
}
```
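To show how this helper might plug into the detection API, here’s a hedged example that loads a random image, runs plain face detection on it, and logs each bounding box. The `run` wrapper is my own scaffolding, not part of the article’s code.

```js
// Hypothetical glue code: fetch a random image, detect faces, log the boxes.
async function run() {
  const image = await loadRandomImage();
  document.body.append(image);

  const detections = await faceapi.detectAllFaces(image);
  detections.forEach((detection) => console.log(detection.box));
}

run().catch((error) => console.error(error));
```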