Model Convert (Beta)
Optimize your AI models for various deployment environments.
The Model Convert functionality allows users to optimize their AI models for various deployment environments by transforming them into different formats. This process ensures compatibility, performance efficiency, and adaptability across diverse platforms, including edge devices, cloud services, and production systems.
Choose the model you wish to convert from your existing projects, and click "Model Convert".
Supported Formats:
Convert models into widely used formats such as ONNX and TensorRT, with FP16 or INT8 precision (a brief conversion sketch follows this list).
Each format caters to specific hardware requirements and deployment environments.
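As an illustration of what such a conversion pipeline can look like, the sketch below exports a model to ONNX and then builds a TensorRT engine with trtexec. The model, file names, input shape, and opset are placeholders for illustration only, not the platform's internal pipeline.

```python
import subprocess
import torch
import torchvision

# Placeholder model; in practice this would be the trained model from your project.
model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

# Step 1: export to ONNX, a portable intermediate format.
torch.onnx.export(model, dummy, "model.onnx", opset_version=17,
                  input_names=["input"], output_names=["output"])

# Step 2: build a TensorRT engine. --fp16 enables half precision;
# --int8 (together with calibration data) enables INT8 quantization.
subprocess.run(
    ["trtexec", "--onnx=model.onnx", "--saveEngine=model_fp16.engine", "--fp16"],
    check=True,
)
```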
Customizable Parameters:
Confidence Threshold: Set the minimum confidence score for valid detections.
NMS IoU Threshold: Set the IoU threshold for non-maximum suppression.
TopK: Adjust the maximum number of predictions returned for each inference.
Quantization: Fine-tune INT8 quantization parameters for faster inference and lower memory consumption. A sketch of how the detection parameters above are applied follows this list.
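The sketch below shows, in rough terms, how the three detection parameters interact at inference time; the function, argument names, and default values are illustrative assumptions, not the platform's implementation.

```python
import torch
from torchvision.ops import nms

def postprocess(boxes, scores, conf_threshold=0.25, iou_threshold=0.45, topk=100):
    # Confidence Threshold: discard detections below the minimum confidence score.
    keep = scores >= conf_threshold
    boxes, scores = boxes[keep], scores[keep]

    # NMS IoU Threshold: suppress overlapping boxes above the IoU limit.
    keep = nms(boxes, scores, iou_threshold)
    boxes, scores = boxes[keep], scores[keep]

    # TopK: cap the number of predictions returned per inference.
    order = scores.argsort(descending=True)[:topk]
    return boxes[order], scores[order]
```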
Click "Convert Model" to start processing.
The Model Performance Results provide an in-depth evaluation of the converted model's performance. They include critical information and tools to help you understand the efficiency and accuracy of your model under various configurations.
Model Summary:
Precision: Displays the quantization type (e.g., INT8, FP16).
Type: The converted format (e.g., TensorRT).
Batch Size: The batch size used during inference.
Throughput: The number of inferences processed per second (e.g., 8.6 per second); a measurement sketch follows this list.
Hardware Configuration: Information about the target hardware used for testing, including CPU/GPU specifications.
Status: Indicates whether the model is "Ready" or still processing.
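For context on how a throughput figure like this can be measured, here is a hypothetical benchmark of a converted ONNX model using ONNX Runtime; the file name, batch size, run count, and execution provider are illustrative assumptions.

```python
import time
import numpy as np
import onnxruntime as ort

# Hypothetical converted model and input shape.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # batch size 1

runs = 50
start = time.perf_counter()
for _ in range(runs):
    session.run(None, {input_name: batch})
elapsed = time.perf_counter() - start

# Throughput = inferences processed per second.
print(f"Throughput: {runs / elapsed:.1f} inferences per second")
```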
Performance Graphs:
Overall Performance (Main Size): Displays metrics such as F1 score, Precision, Recall, and mAP at the selected confidence threshold (adjustable via a slider).
F1 Score vs Confidence Threshold: Plots the F1 score against a range of confidence thresholds to show the trade-off between precision and recall (a computation sketch follows the class breakdowns below).
Performance by Class:
Shows metrics (F1 Score, Precision, Recall, mAP) for each class in a detailed bar chart.
Performance by Class Object Size:
Breaks down performance metrics based on object sizes (Small, Main, Large) across different classes.
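To make the F1-vs-confidence trade-off concrete, the sketch below sweeps a range of thresholds and computes precision, recall, and F1 at each one. The inputs and threshold grid are illustrative assumptions, not the exact procedure used to generate the graphs.

```python
import numpy as np

def f1_curve(scores, is_true_positive, num_gt, thresholds=np.linspace(0.05, 0.95, 19)):
    # scores:           NumPy array of confidence scores, one per predicted detection
    # is_true_positive: boolean NumPy array, True where the detection matched a ground-truth box
    # num_gt:           total number of ground-truth objects
    curve = []
    for t in thresholds:
        kept = scores >= t
        tp = int(np.sum(is_true_positive[kept]))
        fp = int(np.sum(kept)) - tp
        fn = num_gt - tp
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        curve.append((t, precision, recall, f1))
    return curve
```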
The Model Convert feature simplifies the often complex task of preparing AI models for deployment, ensuring that your models achieve optimal performance and compatibility with ease.