The Face State analysis engine provides additional information about detected faces, for example:

- the facial expression
- whether the person's eyes are open
- whether the person is wearing spectacles

This section describes the parameters that you can set in the configuration section for a Face State task.
Configuration Parameter | Description
---|---
FrameRate | The number of frames to analyze per second of video.
Input | The track that you want to process.
Type | The analysis engine to use. Set this parameter to FaceState.
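For example, a Face State analysis task might be defined in a configuration section like the sketch below. This assumes the usual INI-style section format; the section name (FaceStateAnalysis), the input track (FaceDetection.ResultWithSource), and the FrameRate value are illustrative placeholders rather than names confirmed by this section.

```
[FaceStateAnalysis]
Type=FaceState
Input=FaceDetection.ResultWithSource
FrameRate=5
```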
The Face State analysis engine produces the following output tracks.

Output track | Type | Description | Output
---|---|---|---
Result | FaceStateResult | Contains a record for each detected face. Each record contains information such as facial expression, whether the eyes are open, and whether the person is wearing spectacles. | Yes
ResultWithSource | FaceStateAndImage | Contains the same information as the Result track, but also includes the best source frame. | No

The Output column indicates whether the information contained in the track is included by default in the output created by an output task (when you don't set the Input parameter for the output task).
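Because the ResultWithSource track is not included in output by default, an output task would have to select it explicitly through its Input parameter. The sketch below is illustrative only: the output task section name, the Type=XML output engine, and the TaskName.TrackName form of the track reference are assumptions, not details confirmed by this section.

```
[FaceStateOutput]
Type=XML
Input=FaceStateAnalysis.ResultWithSource
```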
A record in the Result track (type FaceStateResult) contains the following fields.

Field name | Type | Description
---|---|---
id | UUIDData | A universally unique identifier to identify the face.
face | FaceData | Information about the detected face.
expression | String | A string describing the facial expression ("happy" or "neutral").
eyesopen | Boolean | Indicates whether the person's eyes are open.
spectacles | Boolean | Indicates whether the person is wearing spectacles.
A record in the ResultWithSource track (type FaceStateAndImage) contains the following fields.

Field name | Type | Description
---|---|---
id | UUIDData | A universally unique identifier to identify the face.
face | FaceData | Information about the detected face.
expression | String | A string describing the facial expression ("happy" or "neutral").
eyesopen | Boolean | Indicates whether the person's eyes are open.
spectacles | Boolean | Indicates whether the person is wearing spectacles.
image | ImageData | The source frame.