DepthAI SDK API
- Helps in setting up the processing pipeline
- Helps in setting up neural networks
- Helps in displaying previews from OAK cameras
- Helps in creating videos from OAK cameras
- Helps in downloading neural networks as MyriadX blobs
- For FPS calculations
- For frame handling
- For various common tasks
Managers
- class depthai_sdk.managers.BlobManager
Manager class that handles MyriadX blobs.
- __init__(blobPath=None, configPath=None, zooName=None, zooDir=None, progressFunc=None)
- Parameters
blobPath (pathlib.Path, Optional) – Path to the compiled MyriadX blob file
configPath (pathlib.Path, Optional) – Path to model config file that is used to download the model
zooName (str, Optional) – Model name to be taken from model zoo
zooDir (pathlib.Path, Optional) – Path to model zoo directory
progressFunc (func, Optional) – Custom method to show download progress, should accept two arguments - current bytes and max bytes.
- getBlob(shaves=6, openvinoVersion=None, zooType=None)
This function is responsible for returning a ready-to-use MyriadX blob once requested. It will compile the model automatically using our online blobconverter tool. The compilation will run only once; each subsequent call will return the path to the previously compiled blob
- Parameters
shaves (int, Optional) – Specify how many shaves the model will use. Range 1-16
openvinoVersion (depthai.OpenVINO.Version, Optional) – OpenVINO version which will be used to compile the MyriadX blob
zooType (str, Optional) – Specifies model zoo type to download blob from
- Returns
Path to compiled MyriadX blob
- Return type
- Raises
SystemExit – If model name is not found in the zoo, this method will print all available ones and terminate
RuntimeError – If conversion failed with unknown status
Exception – If some unknown error will occur (reraise)
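A minimal usage sketch (the zoo model name below is an example; the first getBlob call compiles the model via the online blobconverter service, later calls return the cached path):

```python
from depthai_sdk.managers import BlobManager

# Compile a model-zoo network into a MyriadX blob ("mobilenet-ssd" is an
# example zoo name). The first call downloads/compiles; subsequent calls
# return the cached blob path.
bm = BlobManager(zooName="mobilenet-ssd")
blob_path = bm.getBlob(shaves=6)
print(blob_path)
```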
- class depthai_sdk.managers.EncodingManager
Manager class that handles video encoding
- __init__(encodeConfig, encodeOutput=None)
- Parameters
encodeConfig (dict) – Encoding config consisting of keys as preview names and values being the encoding FPS
encodeOutput (pathlib.Path, Optional) – Output directory for the recorded videos
- createEncoders(pm)
Creates VideoEncoder nodes using Pipeline manager, based on config provided during initialization
- Parameters
pm (depthai_sdk.managers.PipelineManager) – Pipeline Manager instance
- createDefaultQueues(device)
Creates output queues for VideoEncoder nodes created in createEncoders. Also opens the H.264 / H.265 stream files (e.g. color.h265) where the encoded data will be stored.
- Parameters
device (depthai.Device) – Running device instance
- parseQueues()
Parses the output queues, consuming the available data packets and storing them inside the opened stream files
- close()
Closes opened stream files and tries to perform an FFmpeg-based conversion from the raw stream into an MP4 video.
If successful, each stream file (e.g. color.h265) will be available along with a ready-to-use video file (e.g. color.mp4). In case of failure, this method will print a traceback and the commands that can be used for manual conversion
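A minimal recording sketch, assuming a connected OAK device; the preview name "color" and the output directory are examples:

```python
from pathlib import Path

import depthai as dai
from depthai_sdk.managers import EncodingManager, PipelineManager

# Record ~10 seconds of the color stream at 30 FPS into ./recordings.
pm = PipelineManager()
pm.createColorCam(xout=True)

em = EncodingManager({"color": 30}, encodeOutput=Path("recordings"))
em.createEncoders(pm)

with dai.Device(pm.pipeline) as device:
    em.createDefaultQueues(device)
    for _ in range(300):
        em.parseQueues()  # move encoded packets into color.h265
em.close()                # close files, attempt FFmpeg mp4 conversion
```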
- class depthai_sdk.managers.NNetManager
Manager class handling all NN-related functionalities. It's capable of creating appropriate nodes and connections, decoding neural network output automatically or by using an external handler file.
- __init__(inputSize, nnFamily=None, labels=[], confidence=0.5, sync=False)
- Parameters
inputSize (tuple) – Desired NN input size, should match the input size defined in the network itself (width, height)
nnFamily (str, Optional) – Type of NeuralNetwork to be processed. Supported: "YOLO" and "mobilenet"
labels (list, Optional) – Allows to display class label instead of ID when drawing NN detections
confidence (float, Optional) – Specify detection NN's confidence threshold
sync (bool, Optional) – Store NN results for preview syncing (to be used with SyncedPreviewManager)
- sourceChoices = ('color', 'left', 'right', 'rectifiedLeft', 'rectifiedRight', 'host')
List of available neural network inputs
- Type
- openvinoVersion = None
OpenVINO version, available only if parsed from config file (see readConfig())
- inputQueue = None
DepthAI input queue object that allows to send images from host to device (used only with host source)
- outputQueue = None
DepthAI output queue object that allows to receive NN results from the device.
- buffer = {}
nn data buffer, disabled by default. Stores parsed nn data with packet sequence number as dict key
- Type
- readConfig(path)
Parses the model config file and adjusts NNetManager values accordingly. It's advised to create a config file for every new network, as it allows using dedicated NN nodes (for MobilenetSSD and YOLO) or a custom handler to process and display custom network results
- Parameters
path (pathlib.Path) – Path to model config file (.json)
- Raises
ValueError – If path to config file does not exist
RuntimeError – If custom handler does not contain draw or show methods
- createNN(pipeline, nodes, blobPath, source='color', useDepth=False, minDepth=100, maxDepth=10000, sbbScaleFactor=0.3, fullFov=True, useImageManip=True)
Creates nodes and connections in the provided pipeline that will allow to run the NN model and consume its results.
- Parameters
pipeline (depthai.Pipeline) – Pipeline instance
nodes (types.SimpleNamespace) – Object containing all of the nodes added to the pipeline. Available in depthai_sdk.managers.PipelineManager.nodes
blobPath (pathlib.Path) – Path to MyriadX blob. Might be useful to use together with depthai_sdk.managers.BlobManager.getBlob() for dynamic blob compilation
source (str, Optional) – Neural network input source, one of sourceChoices
useDepth (bool, Optional) – If set to True, produced detections will have spatial coordinates included
minDepth (int, Optional) – Minimum depth distance in millimeters
maxDepth (int, Optional) – Maximum depth distance in millimeters
sbbScaleFactor (float, Optional) – Scale of the bounding box that will be used to calculate spatial coordinates for detection. If set to 0.3, it will scale down the bounding box center-wise to 0.3 of its original size and use it to calculate the spatial location of the object
fullFov (bool, Optional) – If set to False, manager will include crop offset when scaling the detections. Usually should be set to True (if you don't perform aspect ratio crop, or when the keepAspectRatio flag on camera/manip node is set to False)
useImageManip (bool, Optional) – If set to False, manager will not create an image manip node for input image scaling - which may result in an input image being not adjusted for the NeuralNetwork node. Can be useful when we want to limit the amount of nodes running simultaneously on device
- Returns
Configured NN node that was added to the pipeline
- Return type
- Raises
RuntimeError – If source is not a valid choice or when input size has not been set.
- getLabelText(label)
Retrieves text assigned to specific label
- Parameters
label (int) – Integer representing detection label, usually returned from NN node
- Returns
Label text assigned to specific label id or label id
- Return type
- Raises
RuntimeError – If source is not a valid choice or when input size has not been set.
- parse(blocking=False)
- decode(inNn)
Decodes NN output. Performs generic handling for supported detection networks or calls custom handler methods
- Parameters
inNn (depthai.NNData) – NN output packet to be decoded, usually received from the NN output queue
- Returns
Decoded NN data
- Raises
RuntimeError – if outputFormat specified in model config file is not recognized
- draw(source, decodedData)
Draws NN results onto the frames. It's responsible for correctly mapping the results onto each frame requested, including applying crop offset or preparing a correct normalization frame, then draws them with all information provided (confidence, label, spatial location, label count).
Also, it's able to call the custom NN handler method draw to hand over drawing the results.
- Parameters
source (depthai_sdk.managers.PreviewManager | numpy.ndarray) – Draw target. If supplied with a regular frame, it will draw the count on that frame. If supplied with a depthai_sdk.managers.PreviewManager instance, it will print the count label on all of the frames that it stores
decodedData – Detections from neural network node, usually returned from the decode() method
- createQueues(device)
Creates output queue for NeuralNetwork node and, if using host as a source, it will also create the input queue.
- Parameters
device (depthai.Device) – Running device instance
- closeQueues()
Closes output queues created by createQueues()
- sendInputFrame(frame, seqNum=None)
Sends a frame into the inputQueue object. Handles scaling down the frame, creating a proper depthai.ImgFrame and sending it to the queue. Be sure to use host as a source and call createQueues() before sending input frames.
- Parameters
frame (numpy.ndarray) – Frame to be sent to the device
seqNum (int, Optional) – Sequence number set on ImgFrame. Useful in synchronization scenarios
- Returns
scaled frame that was sent to the NN (same width/height as NN input)
- Return type
- Raises
RuntimeError – if inputQueue is None (unable to send the image)
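A minimal end-to-end sketch combining NNetManager with PipelineManager, assuming a connected OAK device; the zoo model name is an example:

```python
import depthai as dai
from depthai_sdk.managers import BlobManager, NNetManager, PipelineManager

# Run MobileNet-SSD on the color camera and decode detections.
pm = PipelineManager()
pm.createColorCam(previewSize=(300, 300), xout=True)

nm = NNetManager(inputSize=(300, 300), nnFamily="mobilenet", confidence=0.5)
pm.setNnManager(nm)  # sync OpenVINO versions between managers
blob = BlobManager(zooName="mobilenet-ssd").getBlob(shaves=6)
nn = nm.createNN(pm.pipeline, pm.nodes, blob, source="color")
pm.addNn(nn)

with dai.Device(pm.pipeline) as device:
    nm.createQueues(device)
    while True:
        detections = nm.decode(nm.outputQueue.get())
```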
- class depthai_sdk.managers.PipelineManager
Manager class handling different depthai.Pipeline operations. Most of the functions wrap up node creation and connection logic into a set of convenience functions.
- __init__(openvinoVersion=None, poeQuality=100, lowCapabilities=False, lowBandwidth=False)
- pipeline
Ready to use requested pipeline. Can be passed to depthai.Device to start execution
- Type
- nodes
Contains all nodes added to the pipeline object, can be used to conveniently access nodes by their name
- openvinoVersion = None
OpenVINO version which will be used in pipeline
- poeQuality = None
PoE encoding quality; lowering it can decrease frame quality but also decreases latency
- Type
int, Optional
- lowBandwidth = False
If set to True, manager will MJPEG-encode the packets sent from device to host to lower the bandwidth usage. Can break if more than 3 encoded outputs are requested
- Type
- lowCapabilities = False
If set to True, manager will try to optimize the pipeline to reduce the amount of host-side calculations (useful for RPi or other embedded systems)
- Type
- setNnManager(nnManager)
Assigns NN manager. It also syncs the pipeline versions between those two objects
- Parameters
nnManager (depthai_sdk.managers.NNetManager) – NN manager instance
- createDefaultQueues(device)
Creates default queues for config updates
- Parameters
device (depthai.Device) – Running device instance
- closeDefaultQueues()
Closes default queues created for config updates
- createColorCam(previewSize=None, res=<SensorResolution.THE_1080_P: 0>, fps=30, fullFov=True, orientation=None, colorOrder=<ColorOrder.BGR: 0>, xout=False, xoutVideo=False, xoutStill=False, control=True, pipeline=None, args=None)
Creates depthai.node.ColorCamera node based on specified attributes
- Parameters
previewSize (tuple, Optional) – Size of the preview - (width, height)
res (depthai.ColorCameraProperties.SensorResolution, Optional) – Camera resolution to be used
fps (int, Optional) – Camera FPS set on the device. Can limit / increase the amount of frames produced by the camera
fullFov (bool, Optional) – If set to True, full frame will be scaled down to NN size. If set to False, it will first center crop the frame to meet the NN aspect ratio and then scale down the image
orientation (depthai.CameraImageOrientation, Optional) – Custom camera orientation to be set on the device
colorOrder (depthai.ColorCameraProperties, Optional) – Color order to be used
xout (bool, Optional) – If set to True, a dedicated depthai.node.XLinkOut will be created for this node
xoutVideo (bool, Optional) – If set to True, a dedicated depthai.node.XLinkOut will be created for video output of this node
xoutStill (bool, Optional) – If set to True, a dedicated depthai.node.XLinkOut will be created for still output of this node
args (Object, Optional) – Arguments from the ArgsManager
- Return type
- createLeftCam(res=None, fps=30, orientation=None, xout=False, control=True, pipeline=None, args=None)
Creates depthai.node.MonoCamera node based on specified attributes, assigned to depthai.CameraBoardSocket.LEFT
- Parameters
res (depthai.MonoCameraProperties.SensorResolution, Optional) – Camera resolution to be used
fps (int, Optional) – Camera FPS set on the device. Can limit / increase the amount of frames produced by the camera
orientation (depthai.CameraImageOrientation, Optional) – Custom camera orientation to be set on the device
xout (bool, Optional) – If set to True, a dedicated depthai.node.XLinkOut will be created for this node
args (Object, Optional) – Arguments from the ArgsManager
- Return type
- createRightCam(res=None, fps=30, orientation=None, xout=False, control=True, pipeline=None, args=None)
Creates depthai.node.MonoCamera node based on specified attributes, assigned to depthai.CameraBoardSocket.RIGHT
- Parameters
res (depthai.MonoCameraProperties.SensorResolution, Optional) – Camera resolution to be used
fps (int, Optional) – Camera FPS set on the device. Can limit / increase the amount of frames produced by the camera
orientation (depthai.CameraImageOrientation, Optional) – Custom camera orientation to be set on the device
xout (bool, Optional) – If set to True, a dedicated depthai.node.XLinkOut will be created for this node
args (Object, Optional) – Arguments from the ArgsManager
- Return type
- updateIrConfig(device, irLaser=None, irFlood=None)
Updates IR configuration
- createDepth(dct=245, median=None, sigma=0, lr=True, lrcThreshold=5, extended=False, subpixel=False, useDisparity=False, useDepth=False, useRectifiedLeft=False, useRectifiedRight=False, runtimeSwitch=False, alignment=None, control=True, pipeline=None, args=None)
Creates depthai.node.StereoDepth node based on specified attributes
- Parameters
dct (int, Optional) – Disparity Confidence Threshold (0..255). The less confident the network is, the more empty values are present in the depth map.
median (depthai.MedianFilter, Optional) – Median filter to be applied on the depth; use depthai.MedianFilter.MEDIAN_OFF to disable median filtering
sigma (int, Optional) – Sigma value for bilateral filter (0..65535). If set to 0, the filter will be disabled
lr (bool, Optional) – Set to True to enable Left-Right Check
lrcThreshold (int, Optional) – Sets the Left-Right Check threshold value (0..10)
extended (bool, Optional) – Set to True to enable the extended disparity
subpixel (bool, Optional) – Set to True to enable the subpixel disparity
useDisparity (bool, Optional) – Set to True to create output queue for disparity frames
useDepth (bool, Optional) – Set to True to create output queue for depth frames
useRectifiedLeft (bool, Optional) – Set to True to create output queue for rectified left frames
useRectifiedRight (bool, Optional) – Set to True to create output queue for rectified right frames
runtimeSwitch (bool, Optional) – Allows to change the depth configuration during runtime but allocates resources for the worst-case scenario (disabled by default)
alignment (depthai.CameraBoardSocket, Optional) – Aligns the depth map to the specified camera socket
args (Object, Optional) – Arguments from the ArgsManager
- Raises
RuntimeError – if left or right mono cameras were not initialized
- Return type
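A minimal sketch of stereo depth setup; note that both mono cameras must be created before calling createDepth, otherwise it raises RuntimeError:

```python
from depthai_sdk.managers import PipelineManager

# Build a stereo depth pipeline with Left-Right Check enabled and a
# depth output queue requested.
pm = PipelineManager()
pm.createLeftCam(fps=30)
pm.createRightCam(fps=30)
pm.createDepth(dct=245, lr=True, subpixel=False, useDepth=True)
```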
- captureStill()
- triggerAutoFocus()
- triggerAutoExposure()
- triggerAutoWhiteBalance()
- updateColorCamConfig(exposure=None, sensitivity=None, saturation=None, contrast=None, brightness=None, sharpness=None, autofocus=None, autowhitebalance=None, focus=None, whitebalance=None)
Updates depthai.node.ColorCamera node config
- Parameters
exposure (int, Optional) – Exposure time in microseconds. Has to be set together with sensitivity (usual range: 1..33000)
sensitivity (int, Optional) – Sensitivity as ISO value. Has to be set together with exposure (usual range: 100..1600)
saturation (int, Optional) – Image saturation (Allowed range: -10..10)
contrast (int, Optional) – Image contrast (Allowed range: -10..10)
brightness (int, Optional) – Image brightness (Allowed range: -10..10)
sharpness (int, Optional) – Image sharpness (Allowed range: 0..4)
autofocus (dai.CameraControl.AutoFocusMode, Optional) – Set the autofocus mode
autowhitebalance (dai.CameraControl.AutoWhiteBalanceMode, Optional) – Set the auto white balance mode
focus (int, Optional) – Set the manual focus (lens position)
whitebalance (int, Optional) – Set the manual white balance
- updateLeftCamConfig(exposure=None, sensitivity=None, saturation=None, contrast=None, brightness=None, sharpness=None)
Updates left depthai.node.MonoCamera node config
- Parameters
exposure (int, Optional) – Exposure time in microseconds. Has to be set together with sensitivity (usual range: 1..33000)
sensitivity (int, Optional) – Sensitivity as ISO value. Has to be set together with exposure (usual range: 100..1600)
saturation (int, Optional) – Image saturation (Allowed range: -10..10)
contrast (int, Optional) – Image contrast (Allowed range: -10..10)
brightness (int, Optional) – Image brightness (Allowed range: -10..10)
sharpness (int, Optional) – Image sharpness (Allowed range: 0..4)
- updateRightCamConfig(exposure=None, sensitivity=None, saturation=None, contrast=None, brightness=None, sharpness=None)
Updates right depthai.node.MonoCamera node config
- Parameters
exposure (int, Optional) – Exposure time in microseconds. Has to be set together with sensitivity (usual range: 1..33000)
sensitivity (int, Optional) – Sensitivity as ISO value. Has to be set together with exposure (usual range: 100..1600)
saturation (int, Optional) – Image saturation (Allowed range: -10..10)
contrast (int, Optional) – Image contrast (Allowed range: -10..10)
brightness (int, Optional) – Image brightness (Allowed range: -10..10)
sharpness (int, Optional) – Image sharpness (Allowed range: 0..4)
- updateDepthConfig(dct=None, sigma=None, median=None, lrcThreshold=None)
Updates depthai.node.StereoDepth node config
- Parameters
dct (int, Optional) – Disparity Confidence Threshold (0..255). The less confident the network is, the more empty values are present in the depth map.
median (depthai.MedianFilter, Optional) – Median filter to be applied on the depth; use depthai.MedianFilter.MEDIAN_OFF to disable median filtering
sigma (int, Optional) – Sigma value for bilateral filter (0..65535). If set to 0, the filter will be disabled
lrc (bool, Optional) – Enables or disables Left-Right Check mode
lrcThreshold (int, Optional) – Sets the Left-Right Check threshold value (0..10)
- addNn(nn, xoutNnInput=False, xoutSbb=False)
Adds NN node to the current pipeline. Usually obtained by calling the depthai_sdk.managers.NNetManager.createNN method first
- Parameters
nn (depthai.node.NeuralNetwork) – prepared NeuralNetwork node to be attached to the pipeline
xoutNnInput (bool) – Set to True to create output queue for NN's passthrough frames
xoutSbb (bool) – Set to True to create output queue for Spatial Bounding Boxes (area that is used to calculate spatial location)
- createSystemLogger(rate=1)
Creates depthai.node.SystemLogger node together with XLinkOut
- Parameters
rate (int, Optional) – Specify logging rate (in Hz)
- createEncoder(cameraName, encFps=30, encQuality=100)
Creates H.264 / H.265 video encoder (depthai.node.VideoEncoder instance)
- Parameters
- Raises
ValueError – if cameraName is not a supported camera name
RuntimeError – if specified camera node was not present
- enableLowBandwidth(poeQuality)
Enables low-bandwidth mode
- Parameters
poeQuality (int, Optional) – PoE encoding quality; lowering it can decrease frame quality but also decreases latency
- setXlinkChunkSize(chunkSize)
- setCameraTuningBlob(path)
- class depthai_sdk.managers.PreviewManager
Manager class that handles frames and displays them correctly.
- frames = {}
Contains name -> frame mapping that can be used to modify specific frames directly
- Type
- __init__(display=[], nnSource=None, colorMap=None, depthConfig=None, dispMultiplier=2.65625, mouseTracker=False, decode=False, fpsHandler=None, createWindows=True)
- Parameters
display (list, Optional) – List of depthai_sdk.Previews objects representing the streams to display
mouseTracker (bool, Optional) – If set to True, will enable mouse tracker on the preview windows that will display selected pixel value
fpsHandler (depthai_sdk.fps.FPSHandler, Optional) – If provided, will use fps handler to modify stream FPS and display it
nnSource (str, Optional) – Specifies NN source camera
colorMap (cv2 color map, Optional) – Color map applied on the depth frames
decode (bool, Optional) – If set to True, will decode the received frames assuming they were encoded with MJPEG encoding
dispMultiplier (float, Optional) – Multiplier used for depth <-> disparity calculations (calculated from baseline and focal)
depthConfig (depthai.StereoDepthConfig, optional) – Configuration used for depth <-> disparity calculations
createWindows (bool, Optional) – If True, will create preview windows using OpenCV (enabled by default)
- collectCalibData(device)
Collects calibration data and calculates dispScaleFactor accordingly
- Parameters
device (depthai.Device) – Running device instance
- createQueues(device, callback=None)
Create output queues for requested preview streams
- Parameters
device (depthai.Device) – Running device instance
callback (func, Optional) – Function that will be executed with preview name once preview window was created
- closeQueues()
Closes output queues for requested preview streams
- prepareFrames(blocking=False, callback=None)
This function consumes output queues' packets and parses them to obtain ready to use frames. To convert the frames from packets, this manager uses methods defined in depthai_sdk.previews.PreviewDecoder.
- Parameters
blocking (bool, Optional) – If set to True, will wait for a packet in each queue to be available
callback (func, Optional) – Function that will be executed once a packet with frame has arrived
- showFrames(callback=None)
Displays stored frame onto preview windows.
- Parameters
callback (func, Optional) – Function that will be executed right before cv2.imshow
- has(name)
Determines whether manager has a frame assigned to specified preview
- Returns
True if it contains a frame, False otherwise
- Return type
- get(name)
Returns a frame assigned to specified preview
- Returns
Resolved frame, will default to None if not present
- Return type
Previews
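A minimal preview-loop sketch, assuming a connected OAK device:

```python
import cv2
import depthai as dai
from depthai_sdk.previews import Previews
from depthai_sdk.managers import PipelineManager, PreviewManager

# Display the color preview until 'q' is pressed.
pm = PipelineManager()
pm.createColorCam(xout=True)

with dai.Device(pm.pipeline) as device:
    pv = PreviewManager(display=[Previews.color.name])
    pv.createQueues(device)
    while True:
        pv.prepareFrames()   # consume packets, decode into frames
        pv.showFrames()      # cv2.imshow for each requested preview
        if cv2.waitKey(1) == ord("q"):
            break
```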
- class depthai_sdk.previews.PreviewDecoder
- static jpegDecode(data, type)
- static nnInput(packet, manager=None)
Produces NN passthrough frame from raw data packet
- Parameters
packet (depthai.ImgFrame) – Packet received from output queue
manager (depthai_sdk.managers.PreviewManager, optional) – PreviewManager instance
- Returns
Ready to use OpenCV frame
- Return type
- static color(packet, manager=None)
Produces color camera frame from raw data packet
- Parameters
packet (depthai.ImgFrame) – Packet received from output queue
manager (depthai_sdk.managers.PreviewManager, optional) – PreviewManager instance
- Returns
Ready to use OpenCV frame
- Return type
- static left(packet, manager=None)
Produces left camera frame from raw data packet
- Parameters
packet (depthai.ImgFrame) – Packet received from output queue
manager (depthai_sdk.managers.PreviewManager, optional) – PreviewManager instance
- Returns
Ready to use OpenCV frame
- Return type
- static right(packet, manager=None)
Produces right camera frame from raw data packet
- Parameters
packet (depthai.ImgFrame) – Packet received from output queue
manager (depthai_sdk.managers.PreviewManager, optional) – PreviewManager instance
- Returns
Ready to use OpenCV frame
- Return type
- static rectifiedLeft(packet, manager=None)
Produces rectified left frame (depthai.node.StereoDepth.rectifiedLeft) from raw data packet
- Parameters
packet (depthai.ImgFrame) – Packet received from output queue
manager (depthai_sdk.managers.PreviewManager, optional) – PreviewManager instance
- Returns
Ready to use OpenCV frame
- Return type
- static rectifiedRight(packet, manager=None)
Produces rectified right frame (depthai.node.StereoDepth.rectifiedRight) from raw data packet
- Parameters
packet (depthai.ImgFrame) – Packet received from output queue
manager (depthai_sdk.managers.PreviewManager, optional) – PreviewManager instance
- Returns
Ready to use OpenCV frame
- Return type
- static depthRaw(packet, manager=None)
Produces raw depth frame (depthai.node.StereoDepth.depth) from raw data packet
- Parameters
packet (depthai.ImgFrame) – Packet received from output queue
manager (depthai_sdk.managers.PreviewManager, optional) – PreviewManager instance
- Returns
Ready to use OpenCV frame
- Return type
- static depth(depthRaw, manager=None)
Produces depth frame from raw depth frame (converts to disparity and applies color map)
- Parameters
depthRaw (numpy.ndarray) – OpenCV frame containing raw depth frame
manager (depthai_sdk.managers.PreviewManager, optional) – PreviewManager instance
- Returns
Ready to use OpenCV frame
- Return type
- static disparity(packet, manager=None)
Produces disparity frame (depthai.node.StereoDepth.disparity) from raw data packet
- Parameters
packet (depthai.ImgFrame) – Packet received from output queue
manager (depthai_sdk.managers.PreviewManager, optional) – PreviewManager instance
- Returns
Ready to use OpenCV frame
- Return type
- static disparityColor(disparity, manager=None)
Applies color map to disparity frame
- Parameters
disparity (numpy.ndarray) – OpenCV frame containing disparity frame
manager (depthai_sdk.managers.PreviewManager, optional) – PreviewManager instance
- Returns
Ready to use OpenCV frame
- Return type
- class depthai_sdk.previews.Previews
Enum class, assigning preview name with decode function.
Usually used as e.g. Previews.color.name when specifying color preview name. Can also be used as e.g. Previews.color.value(packet) to transform a queue output packet to a color camera frame.
- nnInput = functools.partial(<function PreviewDecoder.nnInput>)
- color = functools.partial(<function PreviewDecoder.color>)
- left = functools.partial(<function PreviewDecoder.left>)
- right = functools.partial(<function PreviewDecoder.right>)
- rectifiedLeft = functools.partial(<function PreviewDecoder.rectifiedLeft>)
- rectifiedRight = functools.partial(<function PreviewDecoder.rectifiedRight>)
- depthRaw = functools.partial(<function PreviewDecoder.depthRaw>)
- depth = functools.partial(<function PreviewDecoder.depth>)
- disparity = functools.partial(<function PreviewDecoder.disparity>)
- disparityColor = functools.partial(<function PreviewDecoder.disparityColor>)
- class depthai_sdk.previews.MouseClickTracker
Class that allows to track the click events on preview windows and show pixel value of a frame in coordinates pointed by the user.
Used internally by
depthai_sdk.managers.PreviewManager
- selectPoint(name)
Returns callback function for cv2.setMouseCallback that will update the selected point on mouse click event from frame.
Usually used as

    mct = MouseClickTracker()
    # create preview window
    cv2.setMouseCallback(window_name, mct.selectPoint(window_name))
- Parameters
name (str) – Name of the frame
- Returns
Callback function for cv2.setMouseCallback
FPS
- class depthai_sdk.fps.FPSHandler
Class that handles all FPS-related operations. Mostly used to calculate different streams' FPS, but can also be used to feed a video file based on its FPS property rather than app performance (this prevents the video from being consumed too quickly if we finish processing a frame earlier than the next video frame should be consumed)
- __init__(cap=None, maxTicks=100)
- Parameters
cap (cv2.VideoCapture, Optional) – handler to the video file object
maxTicks (int, Optional) – maximum ticks amount for FPS calculation
- nextIter()
Marks the next iteration of the processing loop. Will use the time.sleep method if initialized with a video file object
- tick(name)
Marks a point in time for specified name
- Parameters
name (str) – Specifies timestamp name
- tickFps(name)
Calculates the FPS based on specified name
- fps()
Calculates FPS value based on nextIter() calls, being the FPS of the processing loop
- Returns
Calculated FPS or 0.0 (default in case of failure)
- Return type
- drawFps(frame, name)
Draws FPS values on requested frame, calculated based on specified name
- Parameters
frame (numpy.ndarray) – Frame object to draw values on
name (str) – Specifies timestamps’ name
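A minimal sketch of using FPSHandler to measure a processing loop:

```python
import time

from depthai_sdk.fps import FPSHandler

# Measure both the overall loop FPS and a named tick series.
fps = FPSHandler()
for _ in range(30):
    fps.tick("frame")    # mark a timestamp for the "frame" series
    time.sleep(0.01)     # stand-in for per-frame work
    fps.nextIter()       # mark one loop iteration

print(fps.fps())             # loop FPS, based on nextIter() calls
print(fps.tickFps("frame"))  # FPS of the "frame" tick series
```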
Utils
- depthai_sdk.utils.cosDist(a, b)
Calculates cosine distance - https://en.wikipedia.org/wiki/Cosine_similarity
- depthai_sdk.utils.frameNorm(frame, bbox)
Maps bounding box coordinates (0..1) to pixel values on frame
- Parameters
frame (numpy.ndarray) – Frame to which adjust the bounding box
bbox (list) – List of bounding box points in the form [x1, y1, x2, y2, ...]
- Returns
Bounding box points mapped to pixel values on frame
- Return type
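The mapping can be illustrated with a plain-numpy sketch of equivalent behavior (a hypothetical re-implementation, not the SDK source): even-indexed values are treated as x coordinates and scaled by frame width, odd-indexed values as y coordinates scaled by height.

```python
import numpy as np

def frame_norm(frame, bbox):
    # Scale factors: default to height, overwrite x positions with width.
    norm_vals = np.full(len(bbox), frame.shape[0])
    norm_vals[::2] = frame.shape[1]
    # Clip to 0..1 first so out-of-range detections stay on the frame.
    return (np.clip(np.array(bbox), 0, 1) * norm_vals).astype(int)

frame = np.zeros((300, 600, 3), dtype=np.uint8)          # 600x300 frame
print(list(frame_norm(frame, [0.1, 0.2, 0.5, 0.9])))     # -> [60, 60, 300, 270]
```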
- depthai_sdk.utils.toPlanar(arr, shape=None)
Converts interleaved frame into planar
- Parameters
arr (numpy.ndarray) – Interleaved frame
shape (tuple, optional) – If provided, the interleaved frame will be scaled to specified shape before converting into planar
- Returns
Planar frame
- Return type
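The conversion itself is an axis transpose; a plain-numpy sketch of the equivalent HWC -> CHW step (the optional resize performed when shape is given is omitted here):

```python
import numpy as np

def to_planar(arr):
    # Interleaved (height, width, channels) -> planar (channels, height, width)
    return arr.transpose(2, 0, 1)

interleaved = np.zeros((480, 640, 3), dtype=np.uint8)
print(to_planar(interleaved).shape)  # -> (3, 480, 640)
```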
- depthai_sdk.utils.toTensorResult(packet)
Converts NN packet to dict, with each key being output tensor name and each value being correctly reshaped and converted results array
Useful as a first step of processing NN results for custom neural networks
- Parameters
packet (depthai.NNData) – Packet returned from NN node
- Returns
Dict containing prepared output tensors
- Return type
- depthai_sdk.utils.merge(source, destination)
Utility function to merge two dictionaries
    a = {'first': {'all_rows': {'pass': 'dog', 'number': '1'}}}
    b = {'first': {'all_rows': {'fail': 'cat', 'number': '5'}}}
    print(merge(b, a))
    # {'first': {'all_rows': {'pass': 'dog', 'fail': 'cat', 'number': '5'}}}
- depthai_sdk.utils.loadModule(path)
Loads module from specified path. Used internally e.g. to load a custom handler file from path
- Parameters
path (pathlib.Path) – path to the module to be loaded
- Returns
loaded module from provided path
- Return type
module
- depthai_sdk.utils.getDeviceInfo(deviceId=None, debug=False)
Finds a correct depthai.DeviceInfo object, either matching the provided deviceId or selected by the user (if multiple devices are available). Useful for almost every app where multiple devices may be connected simultaneously.
- Parameters
deviceId (str, optional) – Specifies device MX ID, for which the device info will be collected
- Returns
Object representing selected device info
- Return type
- Raises
RuntimeError – if no DepthAI device was found or, if deviceId was specified, no device with matching MX ID was found
ValueError – if value supplied by the user when choosing the DepthAI device was incorrect
- depthai_sdk.utils.showProgress(curr, max)
Prints a progress bar to stdout. Each call to this method writes to the same line, so usually it's used as

    print("Starting processing")
    while processing:
        showProgress(currProgress, maxProgress)
    print(" done")  # prints on the same line as the progress bar and adds a new line
    print("Processing finished!")
- depthai_sdk.utils.downloadYTVideo(video, outputDir)
Downloads a video from YouTube and returns the path to the video. Will choose the best resolution if possible.
- Parameters
video (str) – URL to YouTube video
outputDir (pathlib.Path) – Path to directory where youtube video should be downloaded.
- Returns
Path to downloaded video file
- Return type
- Raises
RuntimeError – thrown when video download was unsuccessful
- depthai_sdk.utils.cropToAspectRatio(frame, size)
Crops the frame to the desired aspect ratio and then scales it down to the desired size
- Parameters
frame (numpy.ndarray) – Source frame that will be cropped
size (tuple) – Desired frame size (width, height)
- Returns
Cropped frame
- Return type
- depthai_sdk.utils.resizeLetterbox(frame, size)
Transforms the frame to meet the desired size, preserving the aspect ratio and adding black borders (letterboxing)
- Parameters
frame (numpy.ndarray) – Source frame that will be resized
size (tuple) – Desired frame size (width, height)
- Returns
Resized frame
- Return type
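A plain-numpy sketch of the letterboxing idea (the SDK helper uses cv2 for resizing; nearest-neighbour indexing stands in here to keep the example dependency-free):

```python
import numpy as np

def resize_letterbox_nn(frame, size):
    # Scale the frame to fit inside `size` while keeping its aspect ratio,
    # then center it on a black canvas.
    w, h = size
    fh, fw = frame.shape[:2]
    scale = min(w / fw, h / fh)
    nw, nh = int(fw * scale), int(fh * scale)
    # Nearest-neighbour resize via integer index lookup.
    ys = (np.arange(nh) * fh / nh).astype(int)
    xs = (np.arange(nw) * fw / nw).astype(int)
    resized = frame[ys][:, xs]
    out = np.zeros((h, w, 3), dtype=frame.dtype)   # black borders
    top, left = (h - nh) // 2, (w - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out

frame = np.full((100, 200, 3), 255, dtype=np.uint8)
print(resize_letterbox_nn(frame, (300, 300)).shape)  # -> (300, 300, 3)
```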
- depthai_sdk.utils.createBlankFrame(width, height, rgb_color=(0, 0, 0))
Creates a new image (numpy array) filled with the specified color in RGB
- Parameters
- Returns
New frame filled with specified color
- Return type
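A plain-numpy sketch of equivalent behavior (a hypothetical re-implementation, not the SDK source):

```python
import numpy as np

def create_blank_frame(width, height, rgb_color=(0, 0, 0)):
    # A (height, width, 3) uint8 image filled with one color. Note that
    # OpenCV display functions expect BGR ordering, so the color may need
    # reordering before cv2.imshow (assumption).
    image = np.zeros((height, width, 3), dtype=np.uint8)
    image[:] = rgb_color
    return image

banner = create_blank_frame(640, 40, (0, 128, 255))
print(banner.shape)  # -> (40, 640, 3)
```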