DepthAI Docs
Tutorials
First steps with DepthAI
Installing DepthAI
Device setup
DepthAI Viewer
Default model
Next steps
Spatial AI
1. Neural inference fused with depth map
3D Object Localization
3D Landmark Localization
2. Semantic depth
3. Stereo neural inference
AI / ML / NN
Converting model to MyriadX blob
Local OpenVINO Model Conversion
What is OpenVINO?
What is the Open Model Zoo?
Install OpenVINO
Download the face-detection-retail-0004 model
Compile the model
Run and display the model output
Reviewing the flow
Model Optimizer
FP16 Data Type
Mean and Scale parameters
Model layout parameter
Color order
Compile Tool
Converting and compiling models
1. Using online blobconverter
2. Using blobconverter package
3. Local compilation
Troubleshooting
Supported layers
Unsupported layer type “layer_type”
Incorrect data types
Deploying Custom Models
1. Face mask recognition model
Converting to .blob
Deploying the model
End result
2. QR code detector
Converting QR code detector to OpenVINO
Using Inference Engine (IE) to evaluate the model
Decoding QR code detector
Testing accuracy degradation due to FP16 quantization
Integrating QR code detector into DepthAI
QR Code end result
Use a Pre-trained OpenVINO model
Run DepthAI Default Model
Run model
Trying Other Models
Spatial AI - Augmenting the Model with 3D Position
Custom training
Overview
The Tutorials
Supporting Notebooks
AI vision tasks
Model Performance
Depth perception
Passive stereo depth perception
Active stereo depth perception
Time-of-Flight depth perception
Computer Vision
Run your own CV functions on-device
Create a custom model with PyTorch
Kornia
On-device programming
Run your own CV functions on-device
Create a custom model with PyTorch
Kornia
On-device Pointcloud NN model
Depth to NN model
Optimizing the Pointcloud model
On-device Pointcloud demo
Using Script node
Creating custom NN models
Creating custom OpenCL kernel
Modules:
C++/Python API
DepthAI SDK
Hardware Products
FAQs & How-To
Why Does DepthAI Exist?
What is DepthAI?
What is Spatial AI? What is 3D Object Localization?
How is DepthAI Used? In What Industries is it Used?
What Distinguishes OAK-D From Other Cameras?
How Does DepthAI Provide Spatial AI Results?
Monocular Neural Inference fused with Stereo Depth
What is the Max Stereo Disparity Depth Resolution?
Notes
What is the Gen2 Pipeline Builder?
What is megaAI?
Which Model Should I Order?
System on Modules
How hard is it to get DepthAI running from scratch? What Platforms are Supported?
Is OAK camera easy to use with Raspberry Pi?
Can all the models be used with the Raspberry Pi?
Does DepthAI Work on the Nvidia Jetson Series?
Can I Use Multiple DepthAI With One Host?
Is DepthAI OpenVINO Compatible?
Can I Train My Own Models for DepthAI?
Do I Need Depth Data to Train My Own Custom Model for DepthAI?
If I train my own network, which Neural Operations are supported by DepthAI?
What network backbones are supported on DepthAI?
My Model Requires Pre-Processing (normalization, for example). How do I do that in DepthAI?
Can I Run Multiple Neural Models in Parallel or in Series (or Both)?
Can DepthAI do Arbitrary Crop, Resize, Thumbnail, etc.?
Can DepthAI Run Custom CV Code? Say CV Code From PyTorch?
How do I Integrate DepthAI into Our Product?
Use-Case 1: DepthAI as a Co-Processor to a Processor Running Linux, macOS, or Windows.
Use-Case 2: Using DepthAI with a Microcontroller like ESP32, ATtiny8, etc.
Use-Case 3: Using DepthAI as the Only Processor on a Device.
Hardware for Each Case:
Getting Started with Development
What Hardware-Accelerated Capabilities Exist in DepthAI and/or megaAI?
Available in DepthAI API Today:
On our Roadmap (Most are in development/integration)
Pipeline Builder Gen2
Are CAD Files Available?
How to enable DepthAI to perceive closer distances
What are the Minimum Depths Visible by DepthAI?
Monocular Neural Inference fused with Stereo Depth
Onboard Camera Minimum Depths
Monocular Neural Inference fused with Stereo Depth Mode
Stereo Neural Inference Mode
Modular Camera Minimum Depths:
Monocular Neural Inference fused with Stereo Depth Mode
Extended Disparity Depth Mode
Left-Right Check Depth Mode
What Are The Maximum Depths Visible by DepthAI?
Subpixel Disparity Depth Mode
How Does DepthAI Calculate Disparity Depth?
What Disparity Depth Modes are Supported?
How Do I Calculate Depth from Disparity?
How Do I Display Multiple Streams?
Is It Possible to Have Access to the Raw Stereo Pair Stream on the Host?
How do I Synchronize Streams and/or Metadata (Neural Inference Results)?
Reducing the Camera Frame Rate
Synchronizing on the Host
How do I Record (or Encode) Video with DepthAI?
What are the Capabilities of the Video Encoder on DepthAI?
What Is The Stream Latency?
How To Do a Letterboxing (Thumbnailing) on the Color Camera?
Is it Possible to Use the RGB Camera and/or the Stereo Pair as a Regular UVC Camera?
How Do I Force USB2 Mode?
What is “NCS2 Mode”?
What Information is Stored on the OAK cameras?
Dual-Homography vs. Single-Homography Calibration
How Do I Get Different Field of View or Lenses for DepthAI and megaAI?
What are the Highest Resolutions and Recording FPS Possible with OAK cameras?
What are the theoretical maximum transmission rates for USB3 Gen1 and Gen2?
What is the best way to get FullHD in good quality?
How to run OAK-D as video device
How Much Compute Is Available? How Much Neural Compute is Available?
How are resources allocated? How do I see allocation?
What Auto-Focus Modes Are Supported? Is it Possible to Control Auto-Focus From the Host?
What is the Hyperfocal Distance of the Auto-Focus Color Camera?
Is it Possible to Control the Exposure and White Balance and Auto-Focus (3A) Settings of the RGB Camera From the Host?
Auto-Focus (AF)
Exposure (AE)
White Balance (AWB)
Is it possible to control exposure and ISO with separate cameras?
Am I able to attach alternate lenses to the camera? What sort of mounting system? S mount? C mount?
Can I Power DepthAI Completely from USB?
What is the Screw Mount Specification on OAK-1 and OAK-D?
How to use DepthAI under VirtualBox
What are the SHAVES?
How to increase SHAVES parameter?
Can I Use DepthAI with the New Raspberry Pi HQ Camera?
Can I use DepthAI with Raspberry Pi Zero?
How Much Power Does the DepthAI Raspberry Pi CME Consume?
A strange noise pattern appears on the OAK-D Lite (RGB), how do I resolve this?
How To Unbind and Bind a Device?
How Do I Get Shorter or Longer Flexible Flat Cables (FFC)?
What are CSS, MSS, UPA, and DSS Returned By meta_d2h?
Where are the Github repositories? Is DepthAI Open Source?
Overall
Embedded Use Case
How Do I Build the C++ API?
Can I Use an IMU With DepthAI?
Can I Use Microphones with DepthAI?
Where are Product Brochures and/or Datasheets?
How Much Do OAK Devices Weigh?
How Can I Cite Luxonis Products in Publications?
Where can I find your Logo?
How Do I Talk to an Engineer?
OAK as a webcam
Using UVC
Webcam workarounds
Troubleshooting
DepthAI can’t connect to an OAK camera
Reporting firmware crash dump
Ping was missed, closing the device connection
ImportError: No module named ‘depthai’
Why is the Camera Calibration running slow?
Permission denied error
DepthAI does not show up under /dev/video* like web cameras do. Why?
Intermittent Connectivity with Long (2 meter) USB3 Cables
Forcing USB2 Communication
Output from DepthAI keeps freezing
DepthAI freezes after a few frames
Udev rules on Linux
CTRL-C Is Not Stopping It!
Nothing happening when running a DepthAI script
“DLL load failed while importing cv2” on Windows
python3 depthai_demo.py returns Illegal instruction
Neural network blob compiled with incompatible OpenVINO version
“realloc(): invalid pointer / Aborted” on RPi
[error] Attempted to start camera - NOT detected!
[error] input tensor exceeds available data range
Converting YUV420 to CV2 frame
SLAM with OAK
On-device SuperPoint for localization and SLAM
RAE on-device VIO & SLAM
Syncing frames and IMU messages
OAK on drones
Drone on-device NN-based localization
OAK ArduPilot integration
Camera vibration
OAK for Education
Cortic AI Toolkit
Looking for more?
Support
Requesting support
DepthAI issue
Connectivity issue
Hardware issue
Image Quality issue
Calibration issue
Converting NN model issue
Refunds and returns policy