
Facial Detection

  1. Overview
  2. Facial Expression Detection
    1. Inference Engine and Algorithm
    2. Running Facial Expression Detection
      1. Using Images for Inference
        1. Default Image
        2. Custom Image
      2. Using Video Source for Inference
        1. Video File
        2. Video Camera or Webcam
    3. Extra Parameters
  3. Facial and Eyes Detection
    1. Inference Engine and Algorithm
    2. Running Face and Eyes Detection
      1. Using Images for Inference
        1. Default Image
        2. Custom Image
      2. Using Video Source for Inference
        1. Video File
        2. Video Camera or Webcam
    3. Extra Parameters
  4. References

Overview

Emotion recognition is the process of identifying human emotion. People vary widely in their accuracy at recognizing the emotions of others. Use of technology to help people with emotion recognition is a relatively nascent research area. Generally, the technology works best if it uses multiple modalities in context. To date, the most work has been conducted on automating the recognition of facial expressions from video, spoken expressions from audio, written expressions from text, and physiology as measured by wearables [1].

Facial Expression Detection

Inference Engine and Algorithm


This demo uses:

  • TensorFlow Lite as the inference engine [2];
  • MobileNet as the default algorithm [3].

More details are available on the eIQ™ page.
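Conceptually, the demo runs the input image through the MobileNet-based model and maps the resulting score vector to an emotion label. The sketch below illustrates only that final mapping step in plain Python; the label list and function name are assumptions for illustration and are not taken from the pyeiq source.

```python
# Hypothetical post-processing sketch: pick the emotion label with the
# highest confidence from a model's output scores. The EMOTIONS list is
# an assumption for illustration, not the demo's actual label set.
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def top_emotion(scores):
    """Return the (label, score) pair with the highest confidence."""
    if len(scores) != len(EMOTIONS):
        raise ValueError("expected one score per emotion label")
    best = max(range(len(scores)), key=lambda i: scores[i])
    return EMOTIONS[best], scores[best]

# Example: fake scores standing in for the interpreter's output tensor.
label, score = top_emotion([0.05, 0.02, 0.03, 0.80, 0.04, 0.03, 0.03])
print(label, score)  # happy 0.8
```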

Running Facial Expression Detection

Using Images for Inference

Default Image
  1. Run the Facial Expression Detection demo using the following line:
    # pyeiq --run facial_expression_detection
    
    • This runs inference on a default image: facial_detection
Custom Image
  1. Pass any image as an argument:
    # pyeiq --run facial_expression_detection --image=/path_to_the_image
    

Using Video Source for Inference

Video File
  1. Run the Facial Expression Detection demo using the following line:
    # pyeiq --run facial_expression_detection --video_src=/path_to_the_video
    
Video Camera or Webcam
  1. Specify the camera device:
    # pyeiq --run facial_expression_detection --video_src=/dev/video<index>
    

Extra Parameters

  1. Use the --help argument to check all the available configurations:
    # pyeiq --run facial_expression_detection --help
    

Facial and Eyes Detection

Inference Engine and Algorithm


This demo uses:

  • OpenCV as the inference engine [4];
  • Haar cascades as the default algorithm [5].

More details are available on the eIQ™ page.
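A Haar-cascade pipeline of this kind typically detects faces first and then searches for eyes only inside each face region, which discards false eye detections elsewhere in the frame. The sketch below illustrates that region-of-interest filtering with plain (x, y, w, h) boxes; the function names are assumptions for illustration and this is not the OpenCV API.

```python
def inside(inner, outer):
    """True if box `inner` lies fully within box `outer`; boxes are (x, y, w, h)."""
    ix, iy, iw, ih = inner
    ox, oy, ow, oh = outer
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh

def pair_eyes_to_faces(faces, eye_candidates):
    """Keep only eye detections that fall inside a detected face box."""
    return {face: [e for e in eye_candidates if inside(e, face)]
            for face in faces}

# Example: one face box, two eyes inside it, one stray detection outside.
faces = [(100, 100, 200, 200)]
eyes = [(140, 160, 30, 20), (230, 160, 30, 20), (400, 400, 30, 20)]
result = pair_eyes_to_faces(faces, eyes)
print(result[(100, 100, 200, 200)])  # [(140, 160, 30, 20), (230, 160, 30, 20)]
```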

Running Face and Eyes Detection

Using Images for Inference

Default Image
  1. Run the Face and Eyes Detection demo using the following line:
    # pyeiq --run face_and_eyes_detection
    
    • This runs inference on a default image: face_detection
Custom Image
  1. Pass any image as an argument:
    # pyeiq --run face_and_eyes_detection --image=/path_to_the_image
    

Using Video Source for Inference

Video File
  1. Run the Face and Eyes Detection demo using the following line:
    # pyeiq --run face_and_eyes_detection --video_src=/path_to_the_video
    
Video Camera or Webcam
  1. Specify the camera device:
    # pyeiq --run face_and_eyes_detection --video_src=/dev/video<index>
    

Extra Parameters

  1. Use the --help argument to check all the available configurations:
    # pyeiq --run face_and_eyes_detection --help
    

References

  1. https://en.wikipedia.org/wiki/Emotion_recognition 

  2. https://www.tensorflow.org/lite 

  3. https://arxiv.org/abs/1704.04861 

  4. https://github.com/opencv/opencv 

  5. https://github.com/opencv/opencv/tree/master/data/haarcascades