AI models
Through the NNComponent, DepthAI SDK abstracts:

- AI model sourcing using blobconverter from the Open Model Zoo (OMZ) and the DepthAI Model Zoo (DMZ).
- AI result decoding - the SDK currently supports on-device decoding of YOLO- and MobileNet-based results using the YoloDetectionNetwork and MobileNetDetectionNetwork nodes.
- Decoding of the config.json, which allows easy deployment of custom AI models trained using our notebooks and converted using https://tools.luxonis.com.
- Formatting of the AI model input frame - the SDK uses the BGR color order and the Planar / CHW (Channel, Height, Width) layout. A model that accepts color images should expect 3 channels (B, G, R); a model that accepts grayscale images should expect 1 channel.
- Integration with 3rd-party tools/services (Roboflow).
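The Planar / CHW convention above differs from the interleaved HWC layout that OpenCV frames use. The SDK performs this conversion for you; the snippet below (plain NumPy, not part of the SDK) only illustrates what the reordering does:

```python
import numpy as np

# Hypothetical 300x300 RGB frame in OpenCV-style interleaved HWC
# (Height, Width, Channel) layout, with the red channel set to 255.
rgb_hwc = np.zeros((300, 300, 3), dtype=np.uint8)
rgb_hwc[..., 0] = 255

# RGB -> BGR: reverse the channel axis.
bgr_hwc = rgb_hwc[..., ::-1]

# HWC -> CHW: move channels to the front (planar layout).
bgr_chw = np.transpose(bgr_hwc, (2, 0, 1))

print(bgr_chw.shape)  # (3, 300, 300) - red is now plane index 2
```

In the planar result, each color channel is a contiguous plane, which is the input layout the deployed models expect.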
SDK supported models
With NNComponent you can easily try out a variety of different pre-trained models by simply changing the model name:
from depthai_sdk import OakCamera

with OakCamera() as oak:
    color = oak.create_camera('color')
-   nn = oak.create_nn('mobilenet-ssd', color)
+   nn = oak.create_nn('vehicle-detection-0202', color)
    oak.visualize([nn], fps=True)
    oak.start(blocking=True)
Both of the models above are supported by this SDK, so they will be downloaded and deployed to the OAK device along with the pipeline.
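Because both models above are MobileNet-SSD-based detectors, on-device decoding means you receive parsed detections rather than a raw output tensor. As an illustration of what that decoding step does, here is a minimal host-side sketch assuming the standard OpenVINO detection output layout (7 floats per detection; this is illustrative, not the SDK's actual implementation):

```python
def decode_mobilenet_ssd(raw, conf_threshold=0.5):
    """Parse a flat MobileNet-SSD detection buffer.

    Each detection is 7 floats:
    [image_id, label, confidence, xmin, ymin, xmax, ymax],
    with box coordinates normalized to [0, 1].
    An image_id of -1 marks the end of valid detections.
    """
    detections = []
    for i in range(0, len(raw), 7):
        image_id, label, conf, xmin, ymin, xmax, ymax = raw[i:i + 7]
        if image_id < 0:
            break
        if conf >= conf_threshold:
            detections.append((int(label), conf, (xmin, ymin, xmax, ymax)))
    return detections

# One fake detection followed by an end-of-buffer marker.
raw = [0, 15, 0.9, 0.1, 0.2, 0.3, 0.4,
       -1, 0, 0, 0, 0, 0, 0]
print(decode_mobilenet_ssd(raw))  # [(15, 0.9, (0.1, 0.2, 0.3, 0.4))]
```

With the SDK, this filtering and unpacking happens on the device inside the MobileNetDetectionNetwork node, so your callbacks receive ready-to-use detections.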
The following table lists all the models supported by the SDK. The model name is the same as the name used in the NNComponent constructor.
Name | Model Source | FPS*
---|---|---
(model names and sources are missing from this copy of the table; only the FPS column survived, with measured values ranging from 1.1 to 64+ FPS)
* FPS was measured using only the color camera (1080P) and a single NN with callbacks (without visualization).