double-take: [BUG] Aiserver Face Processing, no face found in image
Creating this issue based on the recommendation of s.krashevich on the CodeProject.AI discussion forum. I hope I have submitted all the information correctly; please let me know if anything is missing. This is my first time posting on GitHub, so the formatting might be a little off. Please correct me if I have done something wrong!
Describe the bug
I currently have the CUDA-enabled version of the CodeProject.AI server running in a Docker container, alongside Frigate and Double Take. In the config section of Double Take, aiserver shows as green, so it can be reached, and it does find faces. The problem is that results are very erratic: most of the time the Double Take matches page shows a red bounding box on the face, lists aiserver, gives a confidence level of 0, and labels the face as unknown, even when the face is perfectly clear and in the database. Clicking the refresh button does nothing. In CPAI's logs I'm noticing a lot of "Face Processing: No face found in image" entries in red. For the snapshots that had this problem, I downloaded the image directly from the matches page, went to the CPAI project explorer, and manually submitted it for recognition, and it did an excellent job. It's only when Double Take submits automatically that it fails to find a face in the image, not every time, but seemingly at random...
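Since manual resubmission works, one way to narrow this down is to replay the exact saved snapshot against the server, both at full size and downscaled to the configured 500 px height. Below is a minimal sketch, assuming the DeepStack-compatible `POST /v1/vision/face/recognize` endpoint that CodeProject.AI exposes; the host, port, and `snapshot.jpg` path are placeholders for your setup:

```python
# replay_snapshot.py -- replay a Double Take snapshot against CodeProject.AI,
# once at full size and once downscaled the way Double Take would submit it.
import io

import requests
from PIL import Image

CPAI_URL = "http://<cpai-host>:32168/v1/vision/face/recognize"  # 32168 is CPAI's default port
HEIGHT = 500  # matches double-take's image.height setting

def recognize(image_bytes: bytes) -> dict:
    """Submit raw JPEG bytes to CPAI and return the JSON response."""
    resp = requests.post(
        CPAI_URL,
        files={"image": ("snapshot.jpg", image_bytes, "image/jpeg")},
        timeout=20,
    )
    resp.raise_for_status()
    return resp.json()

# The snapshot downloaded from the matches page.
original = open("snapshot.jpg", "rb").read()
print("full size:", recognize(original))

# Downscale to 500 px height to see whether the resize breaks face detection.
img = Image.open(io.BytesIO(original))
small = img.resize((round(img.width * HEIGHT / img.height), HEIGHT))
buf = io.BytesIO()
small.save(buf, format="JPEG")
print("resized:", recognize(buf.getvalue()))
```

If the full-size image finds a face but the 500 px version does not, the resize step rather than the server itself is the likely culprit.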
Note: After typing all of this out, I'm also starting to wonder whether CPAI is running out of GPU memory. Please let me know if you think that could be the cause.
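If GPU memory is the suspicion, watching usage while events fire is a quick test; a 1 GB card leaves very little headroom once CUDA and the face model are loaded. A rough watcher using NVIDIA's nvidia-ml-py bindings (`pip install nvidia-ml-py`; assumes the GTX 750 is GPU index 0):

```python
# gpu_mem_watch.py -- print GPU memory usage once per second while CPAI runs.
# If "used" hovers near the card's 1 GiB total while faces go missing,
# out-of-memory failures are a plausible culprit.
import time

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first (only) GPU
try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"used {mem.used / 2**20:.0f} MiB / {mem.total / 2**20:.0f} MiB")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```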
Version of Double Take
v1.13.10 (I'm using the latest pull).
Expected behavior
The expected behavior is for every face to be identified.
Screenshots
If applicable, add screenshots to help explain your problem.
Hardware
- Architecture or platform: Intel Core i5-8500, NVIDIA GeForce GTX 750 (1 GB VRAM), 8 GB RAM
- OS: Ubuntu 20.04.6 LTS 64-bit
- Browser: Firefox
- Docker image: skrashevich/double-take:latest
Additional context
My Double-take config file:
# Double Take
# Learn more at https://github.com/skrashevich/double-take/#configuration

# frigate settings (default: shown below)
frigate:
  url: <url>

  # if double take should send matches back to frigate as a sub label
  # NOTE: requires frigate 0.11.0+
  update_sub_labels: true

  # stop the processing loop if a match is found
  # if set to false all image attempts will be processed before determining the best match
  stop_on_match: true

  # ignore detected areas so small that face recognition would be difficult
  # quadrupling the min_area of the detector is a good start
  # does not apply to MQTT events
  min_area: 0

  # object labels that are allowed for facial recognition
  labels:
    - person

  attempts:
    # number of times double take will request a frigate latest.jpg for facial recognition
    latest: 10
    # number of times double take will request a frigate snapshot.jpg for facial recognition
    snapshot: 10
    # process frigate images from frigate/+/person/snapshot topics
    mqtt: true
    # add a delay expressed in seconds between each detection loop
    delay: 0

  image:
    # height of frigate image passed for facial recognition
    height: 500

  # only process images from specific cameras
  cameras:
    - CAM1
    - CAM2
    # - garage

  # only process images from specific zones
  # zones: []
  #   - camera: garage
  #     zone: driveway

  # override frigate attempts and image per camera
  # events: []
  #   front-door:
  #     attempts:
  #       # number of times double take will request a frigate latest.jpg for facial recognition
  #       latest: 5
  #       # number of times double take will request a frigate snapshot.jpg for facial recognition
  #       snapshot: 5
  #       # process frigate images from frigate/<camera-name>/person/snapshot topic
  #       mqtt: false
  #       # add a delay expressed in seconds between each detection loop
  #       delay: 1
  #     image:
  #       # height of frigate image passed for facial recognition (only if using default latest.jpg and snapshot.jpg)
  #       height: 1000
  #       # custom image that will be used in place of latest.jpg
  #       latest: http://camera-url.com/image.jpg
  #       # custom image that will be used in place of snapshot.jpg
  #       snapshot: http://camera-url.com/image.jpg

# detector settings (default: shown below)
detectors:
  aiserver:
    url: <url>
    # number of seconds before the request times out and is aborted
    timeout: 20
    # require opencv to find a face before processing with detector
    opencv_face_required: false
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage

# enable mqtt subscribing and publishing (default: shown below)
mqtt:
  host: <host>
  username: <username>
  password: <password>
  # client_id: frigate

  topics:
    # mqtt topic for frigate message subscription
    frigate: frigate/events
    # mqtt topic for home assistant discovery subscription
    homeassistant: homeassistant
    # mqtt topic where matches are published by name
    matches: double-take/matches
    # mqtt topic where matches are published by camera name
    cameras: double-take/cameras

# detect settings (default: shown below)
detect:
  match:
    # save match images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed to consider a result a match
    confidence: 65
    # hours to keep match images until they are deleted
    purge: 168
    # minimum area in pixels to consider a result a match
    min_area: 2000

  unknown:
    # save unknown images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed before classifying a match name as unknown
    confidence: 40
    # hours to keep unknown images until they are deleted
    purge: 8
    # minimum area in pixels to keep an unknown result
    min_area: 0

# camera settings (default: shown below)
cameras:
  CAM1:
    # apply masks before processing image
    masks:
      # list of x,y coordinates to define the polygon of the zone
      coordinates:
        - 1920,0,1920,328,1638,305,1646,0
      # show the mask on the final saved image (helpful for debugging)
      visible: true
      # size of camera stream used in resizing masks
      size: 1920x1080

    # override global detect variables per camera
    detect:
      match:
        # save match images
        save: true
        # include base64 encoded string in api results and mqtt messages
        # options: true, false, box
        base64: true
        # minimum confidence needed to consider a result a match
        confidence: 60
        # minimum area in pixels to consider a result a match
        min_area: 2000

      unknown:
        # save unknown images
        save: true
        # include base64 encoded string in api results and mqtt messages
        # options: true, false, box
        base64: true
        # minimum confidence needed before classifying a match name as unknown
        confidence: 40
        # minimum area in pixels to keep an unknown result
        min_area: 0

  CAM2:
    # apply masks before processing image
    masks:
      # list of x,y coordinates to define the polygon of the zone
      coordinates:
        - 0,0,2688,0,2688,1520,0,1520
      # show the mask on the final saved image (helpful for debugging)
      visible: true
      # size of camera stream used in resizing masks
      size: 2688x1520

    # override global detect variables per camera
    detect:
      match:
        # save match images
        save: true
        # include base64 encoded string in api results and mqtt messages
        # options: true, false, box
        base64: false
        # minimum confidence needed to consider a result a match
        confidence: 65
        # minimum area in pixels to consider a result a match
        min_area: 2000

      unknown:
        # save unknown images
        save: true
        # include base64 encoded string in api results and mqtt messages
        # options: true, false, box
        base64: false
        # minimum confidence needed before classifying a match name as unknown
        confidence: 40
        # minimum area in pixels to keep an unknown result
        min_area: 0

notify:
  telegram:
    token: <token>
    chat_id: <id>
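One sanity check on the masks defined above: if masks really are applied before processing, as the config comment says, then a polygon spanning the whole frame would blank out everything that camera submits. A throwaway shoelace-formula check (plain Python, coordinates copied from the config above):

```python
# mask_check.py -- estimate what fraction of each frame a double-take mask
# polygon covers, assuming the "size" field is the resolution the
# coordinates were drawn at.
def polygon_area(points):
    """Shoelace formula for a simple polygon given as [(x, y), ...]."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2

def mask_coverage(coords: str, size: str) -> float:
    """coords is the comma-separated x,y list from the config; size is WxH."""
    nums = [int(n) for n in coords.split(",")]
    points = list(zip(nums[0::2], nums[1::2]))
    w, h = (int(n) for n in size.split("x"))
    return polygon_area(points) / (w * h)

print(f"CAM1: {mask_coverage('1920,0,1920,328,1638,305,1646,0', '1920x1080'):.0%}")
print(f"CAM2: {mask_coverage('0,0,2688,0,2688,1520,0,1520', '2688x1520'):.0%}")
```

CAM1's mask comes out around 4% of the frame, while CAM2's covers 100% of it; whether that full-frame polygon is intentional is worth double-checking.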
And here is my Frigate config file:
mqtt:
  # Optional: Enable mqtt server (default: shown below)
  enabled: True
  # Required: host name
  host: <url>
  # Optional: port (default: shown below)
  port: 1883
  # Optional: topic prefix (default: shown below)
  # NOTE: must be unique if you are running multiple instances
  topic_prefix: frigate
  # Optional: client id (default: shown below)
  # NOTE: must be unique if you are running multiple instances
  client_id: frigate
  # Optional: user
  # NOTE: MQTT user can be specified with an environment variables that must begin with 'FRIGATE_'.
  # e.g. user: '{FRIGATE_MQTT_USER}'
  user: <username>
  # Optional: password
  # NOTE: MQTT password can be specified with an environment variables that must begin with 'FRIGATE_'.
  # e.g. password: '{FRIGATE_MQTT_PASSWORD}'
  password: <password>

# Optional: Detectors configuration. Defaults to a single CPU detector
detectors:
  # Required: name of the detector
  detector_name:
    # Required: type of the detector
    # Frigate provided types include 'cpu', 'edgetpu', and 'openvino' (default: shown below)
    # Additional detector types can also be plugged in.
    # Detectors may require additional configuration.
    # Refer to the Detectors configuration page for more information.
    type: cpu

# Optional: Database configuration
database:
  # The path to store the SQLite DB (default: shown below)
  path: /media/frigate/frigate.db

# Optional: logger verbosity settings
logger:
  # Optional: Default log verbosity (default: shown below)
  default: info
  # Optional: Component specific logger overrides
  logs:
    frigate.event: debug

# Optional: ffmpeg configuration
# More information about presets at https://docs.frigate.video/configuration/ffmpeg_presets
ffmpeg:
  # Optional: global ffmpeg args (default: shown below)
  global_args: -hide_banner -loglevel warning -threads 2
  # Optional: global hwaccel args (default: shown below)
  # NOTE: See hardware acceleration docs for your specific device
  hwaccel_args: preset-vaapi
  # Optional: global input args (default: shown below)
  input_args: preset-rtsp-generic
  # Optional: global output args
  output_args:
    # Optional: output args for detect streams (default: shown below)
    detect: -threads 2 -f rawvideo -pix_fmt yuv420p
    # Optional: output args for record streams (default: shown below)
    record: preset-record-generic
    # Optional: output args for rtmp streams (default: shown below)
    rtmp: preset-rtmp-generic

# Optional: Detect configuration
# NOTE: Can be overridden at the camera level
detect:
  # Optional: width of the frame for the input with the detect role (default: shown below)
  # width: 1280
  # Optional: height of the frame for the input with the detect role (default: shown below)
  # height: 720
  # Optional: desired fps for your camera for the input with the detect role (default: shown below)
  # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
  fps: 5
  # Optional: enables detection for the camera (default: True)
  enabled: True
  # Optional: Number of frames without a detection before Frigate considers an object to be gone. (default: 5x the frame rate)
  max_disappeared: 25
  # Optional: Configuration for stationary object tracking
  stationary:
    # Optional: Frequency for confirming stationary objects (default: shown below)
    # When set to 0, object detection will not confirm stationary objects until movement is detected.
    # If set to 10, object detection will run to confirm the object still exists on every 10th frame.
    interval: 0
    # Optional: Number of frames without a position change for an object to be considered stationary (default: 10x the frame rate or 10s)
    threshold: 50
    # Optional: Define a maximum number of frames for tracking a stationary object (default: not set, track forever)
    # This can help with false positives for objects that should only be stationary for a limited amount of time.
    # It can also be used to disable stationary object tracking. For example, you may want to set a value for person, but leave
    # car at the default.
    # WARNING: Setting these values overrides default behavior and disables stationary object tracking.
    #          There are very few situations where you would want it disabled. It is NOT recommended to
    #          copy these values from the example config into your config unless you know they are needed.
    max_frames:
      # Optional: Default for all object types (default: not set, track forever)
      default: 3000
      # Optional: Object specific values
      objects:
        person: 1000

# Optional: Object configuration
# NOTE: Can be overridden at the camera level
objects:
  # Optional: list of objects to track from labelmap.txt (default: shown below)
  track:
    - person
    - eye glasses
    - bottle
    - cup
    - chair
    - desk
    - laptop
    - mouse
    - keyboard
    - cell phone
    - book
    - scissors
  # Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
  # Checks based on the bottom center of the bounding box of the object.
  # NOTE: This mask is COMBINED with the object type specific mask below
  # mask: 0,0,1000,0,1000,200,0,200
  # Optional: filters to reduce false positives for specific object types
  # filters:
  #   person:
  #     # Optional: minimum width*height of the bounding box for the detected object (default: 0)
  #     min_area: 5000
  #     # Optional: maximum width*height of the bounding box for the detected object (default: 24000000)
  #     max_area: 100000
  #     # Optional: minimum width/height of the bounding box for the detected object (default: 0)
  #     min_ratio: 0.5
  #     # Optional: maximum width/height of the bounding box for the detected object (default: 24000000)
  #     max_ratio: 2.0
  #     # Optional: minimum score for the object to initiate tracking (default: shown below)
  #     min_score: 0.5
  #     # Optional: minimum decimal percentage for tracked object's computed score to be considered a true positive (default: shown below)
  #     threshold: 0.7
  #     # Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
  #     # Checks based on the bottom center of the bounding box of the object
  #     # mask: 0,0,1000,0,1000,200,0,200

# Optional: Record configuration
# NOTE: Can be overridden at the camera level
record:
  # Optional: Enable recording (default: shown below)
  # WARNING: If recording is disabled in the config, turning it on via
  #          the UI or MQTT later will have no effect.
  enabled: true
  # Optional: Number of minutes to wait between cleanup runs (default: shown below)
  # This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
  expire_interval: 60
  # Optional: Retention settings for recording
  retain:
    # Optional: Number of days to retain recordings regardless of events (default: shown below)
    # NOTE: This should be set to 0 and retention should be defined in events section below
    #       if you only want to retain recordings of events.
    days: 30
    # Optional: Mode for retention. Available options are: all, motion, and active_objects
    #   all - save all recording segments regardless of activity
    #   motion - save all recordings segments with any detected motion
    #   active_objects - save all recording segments with active/moving objects
    # NOTE: this mode only applies when the days setting above is greater than 0
    mode: all
  # Optional: Event recording settings
  events:
    # Optional: Number of seconds before the event to include (default: shown below)
    pre_capture: 5
    # Optional: Number of seconds after the event to include (default: shown below)
    post_capture: 5
    # Optional: Objects to save recordings for. (default: all tracked objects)
    objects:
      - person
    # Optional: Restrict recordings to objects that entered any of the listed zones (default: no required zones)
    required_zones: []
    # Optional: Retention settings for recordings of events
    retain:
      # Required: Default retention days (default: shown below)
      default: 10
      # Optional: Mode for retention. (default: shown below)
      #   all - save all recording segments for events regardless of activity
      #   motion - save all recordings segments for events with any detected motion
      #   active_objects - save all recording segments for event with active/moving objects
      #
      # NOTE: If the retain mode for the camera is more restrictive than the mode configured
      #       here, the segments will already be gone by the time this mode is applied.
      #       For example, if the camera retain mode is "motion", the segments without motion are
      #       never stored, so setting the mode to "all" here won't bring them back.
      mode: motion
      # Optional: Per object retention days
      objects:
        person: 15

# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.2.0)
go2rtc:
  streams:
    CAM1:
      - <url>
    CAM2:
      - <url>

# Optional: in-feed timestamp style configuration
# NOTE: Can be overridden at the camera level
timestamp_style:
  # Optional: Position of the timestamp (default: shown below)
  #           "tl" (top left), "tr" (top right), "bl" (bottom left), "br" (bottom right)
  position: "tl"
  # Optional: Format specifier conform to the Python package "datetime" (default: shown below)
  #           Additional Examples:
  #             german: "%d.%m.%Y %H:%M:%S"
  format: "%d/%m/%Y %H:%M:%S"
  # Optional: Color of font
  color:
    # All Required when color is specified (default: shown below)
    red: 255
    green: 255
    blue: 255
  # Optional: Line thickness of font (default: shown below)
  thickness: 2
  # Optional: Effect of lettering (default: shown below)
  #           None (No effect),
  #           "solid" (solid background in inverse color of font)
  #           "shadow" (shadow for font)
  effect: "solid"

# Required
cameras:
  # Required: name of the camera
  CAM1:
    # Optional: Enable/Disable the camera (default: shown below).
    # If disabled: config is used but no live stream and no capture etc.
    # Events/Recordings are still viewable.
    enabled: True
    # Required: ffmpeg settings for the camera
    ffmpeg:
      # Required: A list of input streams for the camera. See documentation for more information.
      inputs:
        # Required: the path to the stream
        # NOTE: path may include environment variables, which must begin with 'FRIGATE_' and be referenced in {}
        - path: rtsp://127.0.0.1:8554/CAM1
          input_args: preset-rtsp-restream
          # Required: list of roles for this stream. valid values are: detect,record,rtmp
          # NOTICE: In addition to assigning the record and rtmp roles,
          # they must also be enabled in the camera config.
          roles:
            - detect
            # - record
            - rtmp
    # Optional: timeout for highest scoring image before allowing it
    # to be replaced by a newer image. (default: shown below)
    best_image_timeout: 60
    # Optional: zones for this camera
    # zones:
    #   # Required: name of the zone
    #   # NOTE: This must be different than any camera names, but can match with another zone on another
    #   #       camera.
    #   front_steps:
    #     # Required: List of x,y coordinates to define the polygon of the zone.
    #     # NOTE: Presence in a zone is evaluated only based on the bottom center of the objects bounding box.
    #     coordinates: 545,1077,747,939,788,805
    #     # Optional: List of objects that can trigger this zone (default: all tracked objects)
    #     objects:
    #       - person
    #     # Optional: Zone level object filters.
    #     # NOTE: The global and camera filters are applied upstream.
    #     filters:
    #       person:
    #         min_area: 5000
    #         max_area: 100000
    #         threshold: 0.7
    # Optional: Configuration for the jpg snapshots published via MQTT
    mqtt:
      # Optional: Enable publishing snapshot via mqtt for camera (default: shown below)
      # NOTE: Only applies to publishing image data to MQTT via 'frigate/<camera_name>/<object_name>/snapshot'.
      # All other messages will still be published.
      enabled: True
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: false
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: false
      # Optional: crop the snapshot (default: shown below)
      crop: True
      # Optional: height to resize the snapshot to (default: shown below)
      height: 500
      # Optional: jpeg encode quality (default: shown below)
      # quality: 70
      # Optional: Restrict mqtt messages to objects that entered any of the listed zones (default: no required zones)
      # required_zones: []
    # Optional: Configuration for how camera is handled in the GUI.
    ui:
      # Optional: Adjust sort order of cameras in the UI. Larger numbers come later (default: shown below)
      # By default the cameras are sorted alphabetically.
      order: 0
      # Optional: Whether or not to show the camera in the Frigate UI (default: shown below)
      dashboard: True

  # Required: name of the camera
  CAM2:
    # Optional: Enable/Disable the camera (default: shown below).
    # If disabled: config is used but no live stream and no capture etc.
    # Events/Recordings are still viewable.
    enabled: True
    # Required: ffmpeg settings for the camera
    ffmpeg:
      # Required: A list of input streams for the camera. See documentation for more information.
      inputs:
        # Required: the path to the stream
        # NOTE: path may include environment variables, which must begin with 'FRIGATE_' and be referenced in {}
        - path: rtsp://127.0.0.1:8554/CAM2
          input_args: preset-rtsp-restream
          # Required: list of roles for this stream. valid values are: detect,record,rtmp
          # NOTICE: In addition to assigning the record and rtmp roles,
          # they must also be enabled in the camera config.
          roles:
            - detect
            # - record
            - rtmp
    # Optional: timeout for highest scoring image before allowing it
    # to be replaced by a newer image. (default: shown below)
    best_image_timeout: 60
    # Optional: zones for this camera
    # zones:
    #   # Required: name of the zone
    #   # NOTE: This must be different than any camera names, but can match with another zone on another
    #   #       camera.
    #   front_steps:
    #     # Required: List of x,y coordinates to define the polygon of the zone.
    #     # NOTE: Presence in a zone is evaluated only based on the bottom center of the objects bounding box.
    #     coordinates: 545,1077,747,939,788,805
    #     # Optional: List of objects that can trigger this zone (default: all tracked objects)
    #     objects:
    #       - person
    #     # Optional: Zone level object filters.
    #     # NOTE: The global and camera filters are applied upstream.
    #     filters:
    #       person:
    #         min_area: 5000
    #         max_area: 100000
    #         threshold: 0.7
    # Optional: Configuration for the jpg snapshots published via MQTT
    mqtt:
      # Optional: Enable publishing snapshot via mqtt for camera (default: shown below)
      # NOTE: Only applies to publishing image data to MQTT via 'frigate/<camera_name>/<object_name>/snapshot'.
      # All other messages will still be published.
      enabled: True
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: false
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: false
      # Optional: crop the snapshot (default: shown below)
      crop: True
      # Optional: height to resize the snapshot to (default: shown below)
      height: 500
      # Optional: jpeg encode quality (default: shown below)
      quality: 100
      # Optional: Restrict mqtt messages to objects that entered any of the listed zones (default: no required zones)
      # required_zones: []
    # Optional: Configuration for how camera is handled in the GUI.
    ui:
      # Optional: Adjust sort order of cameras in the UI. Larger numbers come later (default: shown below)
      # By default the cameras are sorted alphabetically.
      order: 1
      # Optional: Whether or not to show the camera in the Frigate UI (default: shown below)
      dashboard: True

# Optional
ui:
  # Optional: Set the default live mode for cameras in the UI (default: shown below)
  live_mode: mse
  # Optional: Set a timezone to use in the UI (default: use browser local time)
  timezone: Asia/Colombo
  # Optional: Use an experimental recordings / camera view UI (default: shown below)
  use_experimental: False
  # Optional: Set the time format used.
  # Options are browser, 12hour, or 24hour (default: shown below)
  time_format: 12hour
  # Optional: Set the date style for a specified length.
  # Options are: full, long, medium, short
  # Examples:
  #   short: 2/11/23
  #   medium: Feb 11, 2023
  #   full: Saturday, February 11, 2023
  # (default: shown below).
  date_style: full
  # Optional: Set the time style for a specified length.
  # Options are: full, long, medium, short
  # Examples:
  #   short: 8:14 PM
  #   medium: 8:15:22 PM
  #   full: 8:15:22 PM Mountain Standard Time
  # (default: shown below).
  time_style: medium
  # Optional: Ability to manually override the date / time styling to use strftime format
  # https://www.gnu.org/software/libc/manual/html_node/Formatting-Calendar-Time.html
  # possible values are shown above (default: not set)
  strftime_fmt: "%Y/%m/%d %H:%M"

# Optional: Telemetry configuration
telemetry:
  # Optional: Enable the latest version outbound check (default: shown below)
  # NOTE: If you use the HomeAssistant integration, disabling this will prevent it from reporting new versions
  version_check: True
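Since `attempts.mqtt: true` is set in the Double Take config and Frigate publishes cropped 500 px snapshots over MQTT, a quick way to see exactly what Double Take is being fed is to capture those snapshots off the broker and replay them with the earlier script. A sketch using paho-mqtt (written against the 1.x callback style; host and credentials are placeholders):

```python
# snap_dump.py -- save the raw JPEGs Frigate publishes on
# frigate/<camera>/person/snapshot, i.e. the images Double Take consumes.
import paho.mqtt.client as mqtt

count = 0

def on_connect(client, userdata, flags, rc):
    client.subscribe("frigate/+/person/snapshot")

def on_message(client, userdata, msg):
    global count
    count += 1
    with open(f"snap_{count:04d}.jpg", "wb") as f:
        f.write(msg.payload)  # the payload is the JPEG itself
    print("saved", msg.topic, len(msg.payload), "bytes")

client = mqtt.Client()
client.username_pw_set("<username>", "<password>")
client.on_connect = on_connect
client.on_message = on_message
client.connect("<host>", 1883)
client.loop_forever()
```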
A snippet of the log from CPAI:
15:40:58:Face Processing: Queue request for Face Processing command 'recognize' (...64c617) took 279ms
15:40:58:Face Processing: No face found in image
15:40:58:Face Processing: Queue request for Face Processing command 'recognize' (...173772) took 345ms
15:41:00:Face Processing: No face found in image
15:41:00:Face Processing: Queue request for Face Processing command 'recognize' (...97254c) took 208ms
15:41:00:Face Processing: No face found in image
15:41:00:Face Processing: Queue request for Face Processing command 'recognize' (...c95ead) took 279ms
15:41:00:Face Processing: No face found in image
15:41:00:Face Processing: Queue request for Face Processing command 'recognize' (...ce7a76) took 238ms
15:41:01:Face Processing: No face found in image
15:41:01:Face Processing: Queue request for Face Processing command 'recognize' (...9b4a6b) took 200ms
15:41:04:Face Processing: No face found in image
15:41:04:Face Processing: Queue request for Face Processing command 'recognize' (...25b123) took 192ms
15:41:04:Face Processing: No face found in image
15:41:04:Face Processing: Queue request for Face Processing command 'recognize' (...8268b4) took 316ms
15:41:04:Face Processing: No face found in image
15:41:04:Face Processing: Queue request for Face Processing command 'recognize' (...1037b8) took 388ms
15:41:05:Face Processing: No face found in image
15:41:05:Face Processing: Queue request for Face Processing command 'recognize' (...63d37e) took 194ms
15:41:05:Face Processing: No face found in image
15:41:05:Face Processing: Queue request for Face Processing command 'recognize' (...861c7e) took 169ms
15:41:08:Face Processing: No face found in image
15:41:08:Face Processing: Queue request for Face Processing command 'recognize' (...45158a) took 218ms
15:41:08:Face Processing: No face found in image
15:41:08:Face Processing: Queue request for Face Processing command 'recognize' (...d54edc) took 207ms
15:41:11:Face Processing: No face found in image
15:41:11:Face Processing: Queue request for Face Processing command 'recognize' (...5fb51d) took 217ms
15:41:15:Face Processing: No face found in image
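For scale, tallying the snippet shows how the recognize calls bunch up: with `attempts.latest: 10`, `attempts.snapshot: 10`, and `delay: 0`, a single event can fire twenty requests within a few seconds, which matters on a 1 GB card. A throwaway count (assumes the snippet above is saved as `cpai.log`):

```python
# log_rate.py -- count "No face found" entries per second in the CPAI log.
import re
from collections import Counter

log = open("cpai.log").read()
times = re.findall(r"^(\d{2}:\d{2}:\d{2}):Face Processing: No face found", log, re.M)
print(Counter(times))  # e.g. Counter({'15:41:00': 3, '15:41:04': 3, ...})
```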
About this issue
- State: closed
- Created a year ago
- Comments: 16 (4 by maintainers)
To add: if you launched the container on CPU first, and your printouts from both of those commands look like the one in my previous post, purge the CodeProject.AI Docker containers and all related images. I don't think Double Take is the problem here. Restart from the beginning, going for the CodeProject.AI CUDA Docker image first and making sure all the needed options are included in your docker run command. If necessary, post the run command or compose file you're using and I'll take a look at it.
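For reference, recreating the container with GPU access can also be scripted. A sketch with the Docker SDK for Python (`pip install docker`), equivalent to passing `--gpus all` to docker run; the image tag is an assumption to be checked against the current CodeProject.AI docs:

```python
# run_cpai_gpu.py -- start CPAI with all GPUs exposed to the container.
import docker
from docker.types import DeviceRequest

client = docker.from_env()
container = client.containers.run(
    "codeproject/ai-server:cuda11_7",  # assumption: verify the current CUDA tag
    name="codeproject-ai",
    detach=True,
    ports={"32168/tcp": 32168},  # CPAI's default port
    device_requests=[DeviceRequest(count=-1, capabilities=[["gpu"]])],  # --gpus all
)
print("started", container.short_id)
```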