Dataverse (beta) provides a downloadable FP16 ONNX version of your model and a simple inference workflow.
Hint
You can download the model from the Model page.
Quick Start
Download the model.
ONNX model inference (onnxruntime>=1.9)
Prepare Input
import cv2
import numpy as np

image_path_list = ['path/img1.jpg', 'path/img2.jpg']
image_input = []
for image_path in image_path_list:
    img = cv2.imread(image_path)  # BGR image
    # resize the image to the input size of your selected model
    img_resized = cv2.resize(img, (640, 640), interpolation=cv2.INTER_AREA)
    image_input.append(img_resized)

inputs = {"INPUT": np.array(image_input)}
output_names = ["OUTPUT"]
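Depending on how the model was exported, the graph may accept raw uint8 images as prepared above, or it may expect normalized float16 NCHW tensors instead. Check the input metadata (for example with sess.get_inputs() in the inference step below, or a viewer such as Netron) before choosing. A minimal sketch of the second case, assuming 0-1 scaling and channel-first layout, which is an assumption rather than a documented requirement:

# Only needed if the model's input metadata reports a float16 NCHW tensor.
batch = np.array(image_input).astype(np.float16) / 255.0  # scale pixels to [0, 1]
batch = batch.transpose(0, 3, 1, 2)                       # NHWC -> NCHW
inputs = {"INPUT": batch}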
ONNX Inference
import onnxruntime as ort

onnx_file = "/file_path_to/model.onnx"
# If a kernel is available in the CUDA execution provider, ONNX Runtime executes it on the GPU;
# otherwise the kernel falls back to the CPU execution provider.
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
sess = ort.InferenceSession(str(onnx_file), providers=providers)
outputs = sess.run(output_names, inputs)
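The names "INPUT" and "OUTPUT" used above must match the names stored in the ONNX graph. If you are unsure what your exported model calls them, you can query the loaded session (standard onnxruntime API; the exact names and shapes printed depend on your model):

# Inspect the actual I/O signature of the loaded model.
for i in sess.get_inputs():
    print("input :", i.name, i.shape, i.type)
for o in sess.get_outputs():
    print("output:", o.name, o.shape, o.type)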
Post-processing the output
Output terminology
One detected object: np.array([n_xc, n_yc, n_w, n_h, confidence_score, class_idx])
n_xc: normalized box x-center
n_yc: normalized box y-center
n_w: normalized box width
n_h: normalized box height
confidence_score: detection confidence score
class_idx: class index in the training category list
One detection result: array of detected objects in a single image
Batch detection results: array of per-image detection results (see the indexing sketch just below this list)
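To make the nesting concrete, here is a minimal indexing sketch. The values are made up, and the shape (n_images, n_detections, 6) is an assumption inferred from the terminology above, not a guarantee of the exported model.

import numpy as np

# Fabricated batch result: 2 images, 1 detection each, 6 values per detection.
batch_results = np.array([
    [[0.50, 0.40, 0.20, 0.30, 0.91, 2]],  # image 0: one detected object
    [[0.25, 0.60, 0.10, 0.15, 0.77, 0]],  # image 1: one detected object
])

first_obj = batch_results[0][0]
n_xc, n_yc, n_w, n_h, confidence_score, class_idx = first_obj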
Post-processing script
# Load the category list
with open("/path/labels.txt") as f:
    category = f.read()
class_list = category.split("\n")

img_w = 1280  # modify to your original image width
img_h = 720   # modify to your original image height

prediction_results = outputs[0]
for img_results in prediction_results:
    for obj in img_results:
        class_name = class_list[int(obj[5])]
        confidence_score = obj[4]
        x = int(obj[0] * img_w)
        y = int(obj[1] * img_h)
        w = int(obj[2] * img_w)
        h = int(obj[3] * img_h)
        # TODO: save the results in your target format
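For a quick visual check, the normalized center/width/height values can be converted to pixel corner coordinates and drawn with OpenCV. This is a minimal sketch building on the script above; the original image path and the green BGR color are illustrative assumptions.

import cv2

img = cv2.imread("path/img1.jpg")      # original image for the first result (assumed path)
for obj in prediction_results[0]:      # detections for the first image
    xc, yc = obj[0] * img_w, obj[1] * img_h
    w, h = obj[2] * img_w, obj[3] * img_h
    x1, y1 = int(xc - w / 2), int(yc - h / 2)
    x2, y2 = int(xc + w / 2), int(yc + h / 2)
    label = f"{class_list[int(obj[5])]} {obj[4]:.2f}"
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.putText(img, label, (x1, max(y1 - 5, 0)), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv2.imwrite("detections.jpg", img)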