To make the most of the tool, read through how to use it and the background first.
Facial Aesthetics Meets Machine Learning
- Image Guidelines
- QOVES Facial Assessment Tool
- How It Works
- Change log
- Future Versions
Asking someone to describe an attractive face is difficult, yet somehow we all know one when we see it. Visual machine learning works in much the same way, making this the perfect proof-of-concept application, which the QOVES team has spent months developing.
It provides a quick way to assess faces, and although it may not be as accurate as a human, it allows us to help more people with free content.
Read through the image guidelines below to upload the correct type of shot. Once uploaded, the top 10 most likely facial flaws will appear on the right-hand side. Hovering over each tag will give a short snippet explaining the flaw in simple terms, and clicking it will take you to a more in-depth article with recommendations for a surgeon who can help correct it.
The probability score is not always accurate. It is a confidence score, meaning the computer is only xx% sure the flaw is present, based on comparison with visually similar images.
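To illustrate what such a confidence score is, a classifier typically emits a raw score per flaw and normalizes the scores so they behave like percentages. The sketch below uses a softmax, a common way of doing this; the flaw names and numbers are invented for illustration and are not real QOVES model output.

```python
import numpy as np

# Hypothetical raw scores (logits) for three flaws; names and values
# are illustrative only, not real QOVES model output.
logits = np.array([2.1, 0.4, -1.0])  # e.g. nasolabial folds, dark circles, scleral show

# Softmax: one common way to turn raw scores into percentage-style confidences.
confidences = np.exp(logits) / np.exp(logits).sum()
print((confidences * 100).round(1))  # the three values sum to roughly 100%
```

The model is never "certain": it only reports that one flaw's score dominates the others.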
Your uploaded images are not stored anywhere on the QOVES website. A server request is made each time you upload an image, but the image is temporary and is discarded afterwards. We do not have access to your uploads.
If you’ve been watching our YouTube videos, you’ll know how critical we are of other beauty algorithms and their inherent biases from not including all races, ethnicities and face types.
We’ve tried to design our backend algorithm to be as inclusive as possible. This involves using computer-generated faces, which avoids copyright infringement and gives us far more control in sculpting different face shapes. Granted, our tool currently only looks for the most basic facial flaws (superficial skin flaws), which apply to everyone to some degree, but we’re working on implementing more race-specific analysis, as everyone is different.
We have big things planned for our tool, with the end goal of analyzing over 100 cosmetic flaws, disproportions and skin conditions.
For the next few releases, we intend to keep adding to the list of observable facial flaws with greater support for a wider range of races, face shapes and age groups.
15/7/20 – V1.0
- 13 Facial concepts added (mostly related to skin ageing)
- GUI improvements
- More ways to upload images
- Greater inclusion of extremely young and old faces
Learn About Convolutional Neural Networks
We use a Convolutional Neural Network as the backbone of our machine learning architecture. In the simplest terms, the algorithm takes a 3×3-pixel snapshot of the image, weights the pixels together and assigns the group a single value. It keeps sliding along until it has covered every pixel in the image, and these values are mapped onto a new, smaller image. The process repeats until we’re left with a very simplified representation of what the image is.
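The sliding 3×3 snapshot described above can be sketched in a few lines of NumPy. This is a simplified illustration of a single convolution pass, not the actual QOVES implementation; the toy image and kernel are invented for the example.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a kernel over the image, producing one value per 3x3 snapshot."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Take a 3x3 snapshot, weight it, and sum it to a single value.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
edge_kernel = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)  # simple edge-detecting weights
feature_map = convolve2d(image, edge_kernel)
print(feature_map.shape)  # (4, 4): a smaller map of the original image
```

Each pass shrinks the map while keeping the responses the kernel was designed to find, which is why repeating the process leaves a simplified representation of the image.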
This method of mapping images allows us to extract only the important, defining features of what we’re looking at. A human lets the computer know what flaws the image might contain and the algorithm looks for visually similar matches.
Our tool was trained using frontal photographs of thousands of computer-generated faces. For optimal results, we recommend that you upload a photo similar to the examples on the right:
- Clear, smooth background, like a wall
- Neutral facial expression
- Have the photo taken by someone else
- Eye-level close up
- File size under 3 MB (the upload limit)
- Natural or even lighting (no harsh downlighting)
- Facing the camera
- Bare skin (no makeup)
- One subject in the photo only
QOVES Facial Assessment Tool
Upload Your Image
Upload a neutral portrait of yourself for analysis. Images are not saved by QOVES.
Click The Tags
Click on a tag to learn more and gain a starting point into your aesthetics journey.
Read through our scientific review of products and treatments to find out if they really work.
Facial Analysis Tool
Black Magic Or Computer Science
How The Machine Learning Algorithm Works
Our machine learning backend uses a Convolutional Neural Network (CNN), just one of the many machine learning architectures for simulating artificial intelligence. We chose this type of network because it closely mimics how the human brain makes decisions about faces, although how closely it matches the complexity of human judgement is still debatable. This type of network is also much more CPU-efficient, using pooling techniques and weightings to simplify many matrices into one.
A convolutional network, unlike a conventional fully connected network, searches for high-level features of a human face. In other words, it breaks the pixels down into simpler and simpler maps, as shown on the right, assigning a weighting to each square based on colour values, bit depth and other parameters. The resulting square matrix is a fraction of the image size but still contains enough data to identify crucial facial features, such as deep nasolabial folds where the pixels are notably darker.
Example of a convolutional layer sampling an image. (Source: towardsdatascience.com)
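The pooling step mentioned above, which condenses a large map into a fraction of its size while keeping the dominant responses, can be sketched like this. It is a toy illustration of max-pooling, not the production code, and the sample values are invented.

```python
import numpy as np

def max_pool(feature_map, size=2):
    """Keep only the strongest response in each size x size block,
    shrinking the map while preserving its dominant features."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % size, :w - w % size]
    blocks = trimmed.reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 6, 1, 1],
                 [0, 2, 9, 5],
                 [1, 1, 3, 7]], dtype=float)
print(max_pool(fmap))  # each 2x2 block collapses to its maximum
```

A 4×4 map becomes 2×2, which is why deep networks can afford to stack many layers without the cost exploding.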
We can compare visually similar convolution maps with known ones tagged by a human rater. With enough training, the algorithm can identify similarities in complicated facial features that human raters themselves would miss.
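A highly simplified sketch of that comparison step: matching a new face's flattened feature map against human-tagged ones by similarity. A real CNN uses a trained decision layer rather than this nearest-neighbour lookup, and the tags and vectors below are invented for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    """How closely two feature maps point in the same direction (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical feature maps already tagged by a human rater.
tagged_maps = {
    "nasolabial_folds": np.array([0.9, 0.1, 0.7, 0.2]),
    "under_eye_hollows": np.array([0.2, 0.8, 0.1, 0.6]),
}
new_map = np.array([0.85, 0.15, 0.6, 0.3])  # map from a newly uploaded face

scores = {tag: cosine_similarity(new_map, m) for tag, m in tagged_maps.items()}
best = max(scores, key=scores.get)
print(best)  # the tag whose known map is most similar to the new one
```

The more tagged examples the model has seen, the finer the distinctions it can draw between visually similar maps.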
Evaluation matrix highlighting the probability and accuracy of a certain flaw appearing. The X axis represents the predicted flaw, while the Y axis represents the actual flaw.
How Accurate Is It?
From the evaluation matrix, certain flaws are detected more accurately than others, as is natural. Their accuracy and precision depend partly on the data available for training the model and partly on how often these flaws recur when the model is tested. For instance, a flaw that does not appear often, such as scleral show, may be harder to detect when it does appear on a face and carries a greater risk of being a false negative. Flaws that appear often, such as nasolabial lines, are detected more accurately (up to 93% in this case).
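How per-flaw and overall accuracy fall out of such a matrix can be shown with made-up counts. The numbers below are invented to mirror the pattern described above (common flaws detected well, rare ones missed) and are not the actual QOVES results.

```python
import numpy as np

# Hypothetical evaluation (confusion) matrix for three flaws.
# Rows = actual flaw, columns = predicted flaw; counts are illustrative only.
confusion = np.array([
    [93,  4,  3],   # a common flaw like nasolabial lines: detected well
    [10, 70, 20],   # a mid-frequency flaw
    [30, 25, 45],   # a rare flaw like scleral show: often a false negative
])

# Diagonal entries are correct predictions for each flaw.
per_flaw_accuracy = confusion.diagonal() / confusion.sum(axis=1)
overall_accuracy = confusion.diagonal().sum() / confusion.sum()
print(per_flaw_accuracy)          # the rare flaw scores lowest
print(round(overall_accuracy, 2)) # hard flaws drag the overall figure down
```

This is why one difficult flaw can weigh down the headline accuracy even when the common flaws are detected reliably.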
The overall accuracy of the current version was rated at 72%, with difficult flaws weighing down the success of the easier ones. This version was developed as a proof of concept, but we are currently working to improve the accuracy to 98% so the tool can be used as a proper diagnostic aid for aesthetic consultation.