onnxinference
This element can apply an ONNX model to video buffers. It attaches the tensor output to the buffer as a GstTensorMeta.
To install ONNX on your system, follow the instructions in the README.md shipped with this plugin.
Example launch command:
Test image file, model file (SSD) and label file can be found here: https://gitlab.collabora.com/gstreamer/onnx-models

GST_DEBUG=ssdobjectdetector:5 \
gst-launch-1.0 filesrc location=onnx-models/images/bus.jpg ! \
jpegdec ! videoconvert ! onnxinference execution-provider=cpu model-file=onnx-models/models/ssd_mobilenet_v1_coco.onnx ! \
ssdobjectdetector label-file=onnx-models/labels/COCO_classes.txt ! videoconvert ! imagefreeze ! autovideosink
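The same pipeline can also be driven from application code. Below is a minimal sketch using the GStreamer Python bindings (PyGObject), assuming the same onnx-models checkout as the command above; since imagefreeze keeps replaying the decoded frame, the script blocks until an error occurs or it is interrupted.

#!/usr/bin/env python3
# Minimal sketch: running the documented inference pipeline from Python.
# Assumes the onnx-models repository is checked out in the current directory,
# as in the gst-launch-1.0 example above.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "filesrc location=onnx-models/images/bus.jpg ! jpegdec ! videoconvert ! "
    "onnxinference execution-provider=cpu "
    "model-file=onnx-models/models/ssd_mobilenet_v1_coco.onnx ! "
    "ssdobjectdetector label-file=onnx-models/labels/COCO_classes.txt ! "
    "videoconvert ! imagefreeze ! autovideosink")
pipeline.set_state(Gst.State.PLAYING)

# Wait for an error or end-of-stream; imagefreeze loops, so in practice this
# runs until interrupted or an element reports an error.
bus = pipeline.get_bus()
msg = bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                             Gst.MessageType.ERROR | Gst.MessageType.EOS)
if msg and msg.type == Gst.MessageType.ERROR:
    err, debug = msg.parse_error()
    print("Pipeline error:", err.message)
pipeline.set_state(Gst.State.NULL)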
Note: in order for downstream tensor decoders to correctly parse the tensor data in the GstTensorMeta, metadata must be attached to the ONNX model assigning a unique string id to each output layer. These unique string ids and the corresponding GQuark ids are currently stored in the tensor decoder's header file, in this case gstssdobjectdetector.h. If the metadata is absent, the pipeline will fail.
As a convenience, there is a Python script currently stored at https://gitlab.collabora.com/gstreamer/onnx-models/-/blob/master/scripts/modify_onnx_metadata.py to enable users to easily add and remove metadata via JSON files. It can also dump the names of all output layers, which can then be used to craft the JSON metadata file.
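For illustration, the sketch below shows roughly what such a script does using the onnx Python package: it dumps the output layer names and attaches key/value metadata to the model. The key and value shown are placeholders only; the actual string ids a tensor decoder expects are the ones defined in its header file, here gstssdobjectdetector.h.

# Sketch: inspecting and tagging an ONNX model with the onnx Python package.
# The metadata key/value below are placeholders; use the string ids expected
# by the downstream tensor decoder (see gstssdobjectdetector.h for the SSD
# object detector).
import onnx

model = onnx.load("onnx-models/models/ssd_mobilenet_v1_coco.onnx")

# Dump the output layer names; these are needed to craft the metadata file.
for output in model.graph.output:
    print(output.name)

# Attach one metadata entry per output layer the decoder needs to find.
entry = model.metadata_props.add()
entry.key = "example-decoder-id"            # hypothetical key
entry.value = "example-output-layer-name"   # hypothetical output layer name

onnx.save(model, "ssd_mobilenet_v1_coco-tagged.onnx")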
Hierarchy
GObject
 ╰──GInitiallyUnowned
     ╰──GstObject
         ╰──GstElement
             ╰──GstBaseTransform
                 ╰──onnxinference
Factory details
Authors: Aaron Boxer
Classification: Filter/Effect/Video
Rank: primary
Plugin: onnx
Package: GStreamer Bad Plug-ins
Pad Templates
sink
video/x-raw:
format: { RGB, RGBA, BGR, BGRA }
width: [ 1, 2147483647 ]
height: [ 1, 2147483647 ]
framerate: [ 0/1, 2147483647/1 ]
src
video/x-raw:
format: { RGB, RGBA, BGR, BGRA }
width: [ 1, 2147483647 ]
height: [ 1, 2147483647 ]
framerate: [ 0/1, 2147483647/1 ]
Properties
execution-provider
“execution-provider” GstOnnxExecutionProvider *
ONNX execution provider
Flags : Read / Write
Default value : cpu (0)
Since : 1.24
input-image-format
“input-image-format” GstMlInputImageFormat *
Model input image format
Flags : Read / Write
Default value : hwc (0)
Since : 1.24
input-tensor-offset
“input-tensor-offset” gfloat
Offset each tensor value by this value
Flags : Read / Write
Default value : 0
input-tensor-scale
“input-tensor-scale” gfloat
Divide each tensor value by this value
Flags : Read / Write
Default value : 1
model-file
“model-file” gchararray
ONNX model file
Flags : Read / Write
Default value : NULL
Since : 1.24
optimization-level
“optimization-level” GstOnnxOptimizationLevel *
ONNX optimization level
Flags : Read / Write
Default value : disable-all (0)
Since : 1.24
Named constants
GstMlInputImageFormat
GST_ML_INPUT_IMAGE_FORMAT_HWC    Height Width Channel (a.k.a. interleaved) format
GST_ML_INPUT_IMAGE_FORMAT_CHW    Channel Height Width (a.k.a. planar) format
Members
hwc
(0) – Height Width Channel (HWC) a.k.a. interleaved image data format
chw
(1) – Channel Height Width (CHW) a.k.a. planar image data format
Since : 1.20
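Which of these a given model expects (and therefore which value to pass to the input-image-format property) can usually be read from the shape of its input tensor. A rough sketch with the onnx Python package, assuming a single image input: a channels-first shape such as [N, 3, H, W] suggests chw, while a channels-last shape such as [N, H, W, 3] suggests hwc.

# Sketch: inspecting an ONNX model's input shape to choose between hwc and chw.
# Assumes the model has a single image input; symbolic names or zero values
# indicate dynamic dimensions.
import onnx

model = onnx.load("onnx-models/models/ssd_mobilenet_v1_coco.onnx")
image_input = model.graph.input[0]
dims = [d.dim_param or d.dim_value
        for d in image_input.type.tensor_type.shape.dim]
print(image_input.name, dims)  # e.g. ['N', 3, 300, 300] -> chw, ['N', 300, 300, 3] -> hwc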
GstOnnxExecutionProvider
Members
cpu
(0) – CPU execution provider
cuda
(1) – CUDA execution provider
Since : 1.20
GstOnnxOptimizationLevel
Members
disable-all
(0) – Disable all optimization
enable-basic
(1) – Enable basic optimizations (redundant node removals)
enable-extended
(2) – Enable extended optimizations (redundant node removals + node fusions)
enable-all
(3) – Enable all possible optimizations
Since : 1.20