OakCamera

The OakCamera class abstracts:

  • DepthAI API pipeline building with Components.

  • Stream recording and replaying.

  • Debugging features (such as oak.show_graph()).

  • AI model sourcing and decoding.

  • Message syncing & visualization, and much more.

Note

This class will remain in the alpha stage until depthai-sdk 2.0.0, so some API changes are likely.

Interoperability with DepthAI API

DepthAI SDK was developed with DepthAI API interoperability in mind. Users can access all DepthAI API nodes inside components, along with the dai.Pipeline (oak.pipeline) and dai.Device (oak.device) objects.

from depthai_sdk import OakCamera
import depthai as dai

with OakCamera() as oak:
    color = oak.create_camera('color')
    nn = oak.create_nn('mobilenet-ssd', color)
    oak.visualize([nn.out.passthrough, nn], fps=True)

    nn.node.setNumInferenceThreads(2) # Configure components' nodes

    features = oak.pipeline.create(dai.node.FeatureTracker) # Create new pipeline nodes
    color.node.video.link(features.inputImage)

    out = oak.pipeline.create(dai.node.XLinkOut)
    out.setStreamName('features')
    features.outputFeatures.link(out.input)

    oak.start() # Start the pipeline (upload it to the OAK)

    q = oak.device.getOutputQueue('features') # Create output queue after calling start()
    while oak.running():
        if q.has():
            result = q.get()
            print(result)
        # Since we are not in blocking mode, we have to poll oak camera to
        # visualize frames, call callbacks, process keyboard keys, etc.
        oak.poll()

Examples

Below are a few basic examples. See all examples here.

Here are a few demos that have been developed with DepthAI SDK:

  1. age-gender

  2. emotion-recognition

  3. full-fov-nn

  4. head-posture-detection

  5. pedestrian-reidentification

  6. people-counter

  7. people-tracker

  8. mask-detection

  9. yolo

  10. Roboflow

Preview color and mono cameras

from depthai_sdk import OakCamera

with OakCamera() as oak:
    color = oak.create_camera('color')
    left = oak.create_camera('left')
    right = oak.create_camera('right')
    oak.visualize([color, left, right], fps=True)
    oak.start(blocking=True)

Run MobilenetSSD on color camera

Run face-detection-retail-0004 on left camera

from depthai_sdk import OakCamera

with OakCamera() as oak:
    left = oak.create_camera('left')
    nn = oak.create_nn('face-detection-retail-0004', left)
    oak.visualize([nn.out.main, nn.out.passthrough], scale=2/3, fps=True)
    oak.start(blocking=True)

Deploy models from Roboflow and Roboflow Universe with DepthAI SDK

from depthai_sdk import OakCamera

# Download & deploy a model from Roboflow Universe:
# https://universe.roboflow.com/david-lee-d0rhs/american-sign-language-letters/dataset/6

with OakCamera() as oak:
    color = oak.create_camera('color')
    model_config = {
        'source': 'roboflow',  # Specify that we are downloading the model from Roboflow
        'model': 'american-sign-language-letters/6',
        'key': '181b0f6e43d59ee5ea421cd77f6d9ea2a4b059f8'  # Fake API key, replace with your own!
    }
    nn = oak.create_nn(model_config, color)
    oak.visualize(nn, fps=True)
    oak.start(blocking=True)


Got questions?

Head over to the Discussion Forum for technical support or any other questions you might have.