Conditional actions¶
DepthAI SDK provides a way to perform actions when certain conditions are met. For example, you can perform an action when a certain number of objects is detected in the frame. This functionality is provided by the Trigger-Action API.
Overview¶
The Trigger-Action API is a way to define a set of conditions and the actions that should be performed when those conditions are met. DepthAI SDK provides a set of predefined triggers and actions, but you can also define your own.
Basic concepts:
Trigger - a condition that must be met for an action to be performed.
Action - an action that is performed when a trigger is activated.
Note

The Trigger-Action API is implemented in the depthai_sdk.trigger_action module.
Triggers¶
The base class for all triggers is Trigger. To create a trigger, instantiate the Trigger class with the following parameters:

input - a component that is used as the trigger source.
condition - a function that returns True or False based on the output of the trigger source.
cooldown - how often the trigger can be activated, in seconds.
The set of predefined triggers:

DetectionTrigger - a trigger that is activated when a certain number of objects is detected in the frame.
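
Triggers are not limited to the predefined ones. Below is a minimal sketch of a custom trigger built directly from the Trigger base class. It assumes that Trigger can be imported from depthai_sdk.trigger_action (the exact import path may differ between SDK versions) and that the condition function receives a packet from the input component exposing a detections list; treat it as an illustration of the parameters described above, not a definitive implementation.

from depthai_sdk import OakCamera
from depthai_sdk.trigger_action import Trigger  # assumed import path

def crowded_frame(packet) -> bool:
    # Assumed: `packet.detections` holds the detections produced by the NN component.
    # Activate when at least 3 objects of any class are in the frame.
    return len(packet.detections) >= 3

with OakCamera() as oak:
    color = oak.create_camera('color')
    nn = oak.create_nn('mobilenet-ssd', color)

    # Activate at most once every 60 seconds.
    trigger = Trigger(input=nn, condition=crowded_frame, cooldown=60)
    # `trigger` can now be passed to oak.trigger_action() together with an action
    # (see the Actions section below).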
Actions¶
An action can be represented either by a function or by a class derived from the Action class. A custom action should implement the activate() method and, optionally, the on_new_packets() method.
The set of predefined actions:

RecordAction - records a video of a given duration when a trigger is activated.
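
As an illustration of a custom action, the sketch below derives from the Action base class and only prints a message when the trigger fires. It assumes Action can be imported from depthai_sdk.trigger_action and can be constructed without arguments (both may differ between SDK versions); the printing behaviour is purely illustrative.

from depthai_sdk import OakCamera
from depthai_sdk.trigger_action import Action  # assumed import path
from depthai_sdk.trigger_action.triggers.detection_trigger import DetectionTrigger

class PrintAction(Action):
    def activate(self):
        # Called by the SDK when the trigger is activated (respecting its cooldown).
        print('Person detected!')

    def on_new_packets(self, packets):
        # Optional: inspect packets arriving while the action is active.
        pass

with OakCamera() as oak:
    color = oak.create_camera('color')
    nn = oak.create_nn('mobilenet-ssd', color)

    trigger = DetectionTrigger(input=nn, min_detections={'person': 1}, cooldown=30)
    oak.trigger_action(trigger=trigger, action=PrintAction())  # assumes no-argument construction

    oak.start(blocking=True)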
Usage¶
The following example shows how to create a trigger that is activated when at least one person is detected in the frame. When the trigger is activated, it records a 15-second video (5 seconds before the trigger activation and 10 seconds after).
from depthai_sdk import OakCamera
from depthai_sdk.trigger_action.actions.record_action import RecordAction
from depthai_sdk.trigger_action.triggers.detection_trigger import DetectionTrigger
with OakCamera() as oak:
    color = oak.create_camera('color', encode='jpeg')
    stereo = oak.create_stereo('400p')
    nn = oak.create_nn('mobilenet-ssd', color)

    # Activate when at least one person is detected, at most once every 30 seconds.
    trigger = DetectionTrigger(input=nn, min_detections={'person': 1}, cooldown=30)

    # Record the color stream and the disparity map: 5 seconds before
    # the trigger activation and 10 seconds after it.
    action = RecordAction(inputs=[color, stereo.out.disparity], dir_path='./recordings/',
                          duration_before_trigger=5, duration_after_trigger=10)

    oak.trigger_action(trigger=trigger, action=action)

    oak.visualize(nn)
    oak.start(blocking=True)