Model Download (Beta)

Dataverse (beta) lets you download an ONNX FP16 version of your trained model and run simple inference with it.

Hint

You can perform model download from the Model page.

Quick Start

Download model.

Download model by dataverse-sdk

Install dataverse-sdk:
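The original install command is not shown here; assuming the package is published to PyPI under the name `dataverse-sdk`, the usual install would be:

```shell
pip install dataverse-sdk
```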

Download Steps

# step1: connect to your dataverse host
from dataverse_sdk import *
from dataverse_sdk.connections import get_connection
client = DataverseClient(
    host=DataverseHost.PRODUCTION.value, email="XXX", password="***", 
    service_id="the-service-id-for-your-dataverse-service",
    alias="prod"
)
assert client is get_connection()

# step2: get project and model list
project = client.get_project(project_id=1)
models = project.list_models()

# step3: get your model by model_id, and list the model convert records
model = project.get_model(model_id=30)
print(model.model_records)

# step4: get the target ONNX convert record, and download labels.txt and model.onnx
model_record = model.get_convert_record(convert_record_id=5)
status, label_file_path = model_record.get_label_file(save_path="./labels.txt", timeout=6000)
status, onnx_model_path = model_record.get_onnx_model_file(save_path="./model.onnx", timeout=6000)

ONNX model inference (onnxruntime>=1.9)

Prepare Input

import cv2
import numpy as np

image_path_list = ['path/img1.jpg', 'path/img2.jpg']
image_input = []
for image_path in image_path_list:
    img = cv2.imread(image_path)  # BGR image
    # resize the image to the input size of your selected model
    img_resized = cv2.resize(img, (640, 640), interpolation=cv2.INTER_AREA)
    image_input.append(img_resized)

inputs = {"INPUT": np.array(image_input)}
output_names = ["OUTPUT"]
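Before running the session, it can help to sanity-check the batch you just built. The sketch below uses dummy arrays in place of real image files; the `"INPUT"` tensor name follows the snippet above and may differ for your model:

```python
import numpy as np

# Two dummy 640x640 BGR frames standing in for the resized images above
image_input = [np.zeros((640, 640, 3), dtype=np.uint8) for _ in range(2)]
batch = np.array(image_input)

# The batch fed to the session should be N x H x W x C
print(batch.shape)  # (2, 640, 640, 3)
inputs = {"INPUT": batch}
```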

ONNX Inference

import onnxruntime as ort

onnx_file = "/file_path_to/model.onnx"
# ONNX Runtime executes each kernel on the CUDA execution provider when one
# is available; otherwise the kernel falls back to the CPU.
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
sess = ort.InferenceSession(str(onnx_file), providers=providers)
outputs = sess.run(output_names, inputs)

Post-Processing the Output

Output terminology

  • One detected object: np.array([n_xc, n_yc, n_w, n_h, confidence_score, class_idx])

    • n_xc: normalized box x-center

    • n_yc: normalized box y-center

    • n_w: normalized box width

    • n_h: normalized box height

    • confidence_score: detection confidence score

    • class_idx: class index into the training category list

  • One detection result: array of detected objects

  • Batch detection results: array of detection results
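As a concrete illustration of the fields above, a hypothetical detection can be converted from normalized center/size values back to pixel coordinates (the numbers here are made up):

```python
import numpy as np

# Hypothetical detection: [n_xc, n_yc, n_w, n_h, confidence_score, class_idx]
obj = np.array([0.5, 0.5, 0.25, 0.5, 0.9, 2.0])

img_w, img_h = 1280, 720
xc, yc = obj[0] * img_w, obj[1] * img_h  # box center in pixels
w, h = obj[2] * img_w, obj[3] * img_h    # box size in pixels
x1, y1 = xc - w / 2, yc - h / 2          # top-left corner, if your format needs it
print(xc, yc, w, h, x1, y1)  # 640.0 360.0 320.0 360.0 480.0 180.0
```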

Post-processing script

# Load the category list (one class name per line)
with open("/path/labels.txt") as f:
    class_list = f.read().splitlines()

img_w = 1280  # modify to your original image width
img_h = 720   # modify to your original image height
prediction_results = outputs[0]
for img_results in prediction_results:
    for obj in img_results:
        class_name = class_list[int(obj[5])]
        confidence_score = obj[4]
        # convert normalized center/size back to pixel coordinates
        x = int(obj[0] * img_w)
        y = int(obj[1] * img_h)
        w = int(obj[2] * img_w)
        h = int(obj[3] * img_h)
        # TODO: save the results as your target format
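One possible target format for the TODO above is a plain list of dicts serialized to JSON. This is only a sketch; the field names are illustrative, not defined by Dataverse:

```python
import json

# One dict per detection; fields mirror the loop variables above
detections = [
    {"class_name": "car", "confidence_score": 0.9,
     "x": 640, "y": 360, "w": 320, "h": 360},
]

payload = json.dumps(detections, indent=2)
# payload can then be written to disk or sent to another service
```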
