DeepAR iOS v5.2.0
FaceData Struct Reference

Represents a data structure containing all the information available about the detected face.
#include <DeepAR.h>
Public Attributes
| Type | Member | Description |
|------|--------|-------------|
| BOOL | detected | Determines whether the face is detected or not. |
| float | translation[3] | The X, Y and Z translation values of the face in the scene. |
| float | rotation[3] | The pitch, yaw and roll rotation values in Euler angles (degrees) of the face in the scene. |
| float | poseMatrix[16] | Translation and rotation in matrix form (D3D style, column-major order). |
| float | landmarks[68 * 3] | Detected face feature points in 3D space (X, Y, Z). |
| float | landmarks2d[68 * 3] | Detected face feature points in 2D screen space coordinates (X, Y). |
| float | faceRect[4] | A rectangle containing the face in screen coordinates (X, Y, Width, Height). |
| float | emotions[5] | Estimated emotions for the face. |
| float | actionUnits[63] | The array of action units. |
| int | numberOfActionUnits | The number of action units. |
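As an illustration of how these members are typically read together, here is a minimal Objective-C sketch, assuming a FaceData value already obtained from the SDK (for example, from a face tracking callback). The helper function LogFaceData is hypothetical and not part of the SDK.

```objc
#import <UIKit/UIKit.h>
#import <DeepAR.h>

// Illustrative helper (not part of the SDK): logs the basic tracking fields
// of a FaceData value obtained elsewhere, e.g. from a face tracking callback.
static void LogFaceData(FaceData face) {
    if (!face.detected) {
        NSLog(@"No face detected in this frame.");
        return;
    }

    // faceRect holds (X, Y, Width, Height) in screen coordinates.
    CGRect rect = CGRectMake(face.faceRect[0], face.faceRect[1],
                             face.faceRect[2], face.faceRect[3]);

    // translation is (X, Y, Z); rotation is (pitch, yaw, roll) in degrees.
    NSLog(@"Face rect: %@", NSStringFromCGRect(rect));
    NSLog(@"Translation: (%.2f, %.2f, %.2f)",
          face.translation[0], face.translation[1], face.translation[2]);
    NSLog(@"Rotation (pitch, yaw, roll): (%.2f, %.2f, %.2f)",
          face.rotation[0], face.rotation[1], face.rotation[2]);
}
```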
Detailed Description

Represents a data structure containing all the information available about the detected face.
float FaceData::emotions[5]
Estimated emotions for the face.
Each emotion has a value in the [0.0, 1.0] range; a value of 1.0 means the emotion is detected at 100%.
We differentiate 5 different emotions: neutral, happiness, surprise, sadness and anger.
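A minimal sketch of one way to consume this array, assuming a FaceData value already obtained from the SDK; the helper DominantEmotionIndex is hypothetical and simply returns the index with the largest value.

```objc
#import <DeepAR.h>

// Illustrative helper (not part of the SDK): returns the index (0-4) of the
// strongest emotion in the emotions array, or -1 if no face is detected.
static int DominantEmotionIndex(FaceData face) {
    if (!face.detected) {
        return -1;
    }
    int best = 0;
    for (int i = 1; i < 5; ++i) {
        if (face.emotions[i] > face.emotions[best]) {
            best = i;
        }
    }
    return best;
}
```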
float FaceData::landmarks2d[68 * 3]
Detected face feature points in 2D screen space coordinates (X, Y).
Usually more precise than the 3D points, but with no estimation of the Z translation. Read more about feature points here.
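A small sketch of collecting these points, assuming each of the 68 landmarks occupies three consecutive floats (which matches the 68 * 3 array size) with only X and Y being meaningful; the helper Landmarks2DPoints is hypothetical and not part of the SDK.

```objc
#import <UIKit/UIKit.h>
#import <DeepAR.h>

// Illustrative helper (not part of the SDK): copies the 68 2D feature points
// into CGPoints. Assumes each landmark occupies three consecutive floats
// (matching the 68 * 3 array size) and that only X and Y are meaningful.
static NSArray<NSValue *> *Landmarks2DPoints(FaceData face) {
    NSMutableArray<NSValue *> *points = [NSMutableArray arrayWithCapacity:68];
    for (int i = 0; i < 68; ++i) {
        CGPoint p = CGPointMake(face.landmarks2d[i * 3],
                                face.landmarks2d[i * 3 + 1]);
        [points addObject:[NSValue valueWithCGPoint:p]];
    }
    return points;
}
```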