API Interoperability Example¶
This example shows how to bridge the DepthAI API with the SDK. It first creates the color camera and a MobileNet neural network and displays the results. With oak.build() we build the pipeline, which is part of the API. We can then manipulate the pipeline just as we would in the API (e.g. add XLink connections, scripts, etc.). In this example we manually add a FeatureTracker node, since the SDK does not currently support it. We then start the pipeline and display the results.
Note that in this case the visualizer runs in non-blocking mode, so we need to poll it (via oak.poll()) in order to get the results.
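The non-blocking pattern can be sketched without hardware, using a plain Python queue as a stand-in for the device output queue. Everything here (queue.Queue, the poll() stub, the packet strings) is illustrative and not part of the DepthAI API:

```python
import queue

# Stand-in for the device output queue (illustrative; not the DepthAI API).
q = queue.Queue()
for i in range(3):
    q.put(f'feature-packet-{i}')

def poll():
    # In the real example, oak.poll() redraws the visualizer, runs
    # callbacks and handles keyboard input; here it is a no-op stub.
    pass

results = []

# Non-blocking loop: check the queue without waiting, then poll.
while not q.empty():
    try:
        # Like q.get() guarded by q.has() in the DepthAI example:
        # never block waiting for the device.
        result = q.get_nowait()
        results.append(result)
    except queue.Empty:
        pass
    poll()

print(results)
```

The point is that neither the queue read nor poll() ever blocks, so the loop keeps the visualizer, callbacks, and keyboard handling responsive while results trickle in.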
Demo¶
Setup¶
Please run the install script to download all required dependencies. Note that this script must be run from within the cloned repository, so first download the depthai repository and then run the script:
git clone https://github.com/luxonis/depthai.git
cd depthai/
python3 install_requirements.py
For additional information, please follow our installation guide.
Pipeline¶
Source Code¶
Also available on GitHub.
```python
from depthai_sdk import OakCamera
import depthai as dai

with OakCamera() as oak:
    color = oak.create_camera('color')
    nn = oak.create_nn('mobilenet-ssd', color)
    oak.visualize([nn.out.passthrough, nn], fps=True)

    nn.node.setNumInferenceThreads(2)  # Configure components' nodes

    features = oak.pipeline.create(dai.node.FeatureTracker)  # Create new pipeline nodes
    color.node.video.link(features.inputImage)

    out = oak.pipeline.create(dai.node.XLinkOut)
    out.setStreamName('features')
    features.outputFeatures.link(out.input)

    oak.start()  # Start the pipeline (upload it to the OAK)

    q = oak.device.getOutputQueue('features')  # Create output queue after calling start()

    while oak.running():
        if q.has():
            result = q.get()
            print(result)

        # Since we are not in blocking mode, we have to poll oak camera to
        # visualize frames, call callbacks, process keyboard keys, etc.
        oak.poll()
```