VisionAI DataVerse

Model Performance

DataVerse provides a comprehensive overview of your AI model's performance, featuring a range of evaluation metrics and customization options. This guide will help you navigate the Model Performance Page and understand the various metrics, adjustable thresholds, and class-specific performance details available.

Quick Start

Begin with simpler models – they're like your warm-up laps. Once you're in the groove, you can amp up the intensity for better performance.
Click on the video below for a quick overview:

Model Performance

Evaluation Metrics

DataVerse presents key evaluation metrics for assessing your model's performance, including:
  • Precision: The ratio of true positives to the sum of true positives and false positives.
  • Recall: The ratio of true positives to the sum of true positives and false negatives.
  • F1 Score: The harmonic mean of precision and recall, providing a balance between the two metrics.
  • mAP: Mean average precision. mAP@[.5:.95] averages precision over ten IoU thresholds from 0.5 to 0.95 in steps of 0.05 (0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95).
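As a rough sketch of how these formulas relate (the counts below are hypothetical, not from a real evaluation):

```python
def precision(tp, fp):
    # Precision = TP / (TP + FP): how many predicted positives are correct.
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    # Recall = TP / (TP + FN): how many actual positives were found.
    return tp / (tp + fn) if (tp + fn) else 0.0

def f1_score(p, r):
    # F1 is the harmonic mean of precision and recall.
    return 2 * p * r / (p + r) if (p + r) else 0.0

# The ten IoU thresholds averaged by mAP@[.5:.95]:
iou_thresholds = [round(0.50 + 0.05 * i, 2) for i in range(10)]

p = precision(tp=80, fp=20)   # 0.80
r = recall(tp=80, fn=40)      # ~0.67
print(round(f1_score(p, r), 3))  # 0.727
```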

Adjustable Threshold

You can modify the threshold used to classify instances, which shifts the balance between precision and recall. Raising the threshold prioritizes precision (fewer false positives), while lowering it prioritizes recall (fewer false negatives). Tune it to match your specific use case.
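The precision/recall trade-off can be seen by sweeping a confidence threshold over a set of detections. The detections and ground-truth count below are hypothetical, purely for illustration:

```python
# Hypothetical detections: (confidence score, is_true_positive) pairs.
detections = [(0.95, True), (0.90, True), (0.72, False), (0.65, True),
              (0.40, False), (0.30, True)]
total_positives = 5  # ground-truth instances; one is never detected

def metrics_at_threshold(dets, n_pos, thresh):
    # Keep only detections at or above the threshold, then count TP/FP.
    kept = [hit for score, hit in dets if score >= thresh]
    tp = sum(kept)
    fp = len(kept) - tp
    prec = tp / (tp + fp) if kept else 0.0
    rec = tp / n_pos
    return prec, rec

for t in (0.3, 0.6, 0.9):
    p, r = metrics_at_threshold(detections, total_positives, t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
```

Raising the threshold from 0.3 to 0.9 pushes precision from 0.67 up to 1.00 while recall drops from 0.80 to 0.40, which is exactly the trade-off the slider controls.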

Class-specific Performance

The training job detail page also displays performance metrics for each individual class in your dataset. This feature allows you to identify any classes that may require further optimization, such as additional training data or model fine-tuning.
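One simple way to act on per-class metrics is to rank classes by F1 and flag the weakest for more data or fine-tuning. The class names and numbers below are hypothetical:

```python
# Hypothetical per-class results from an evaluation run.
per_class = {
    "car":        {"precision": 0.92, "recall": 0.88},
    "pedestrian": {"precision": 0.71, "recall": 0.54},
    "bicycle":    {"precision": 0.65, "recall": 0.48},
}

def f1(m):
    # F1 from a dict holding a class's precision and recall.
    p, r = m["precision"], m["recall"]
    return 2 * p * r / (p + r) if (p + r) else 0.0

# Rank classes from weakest to strongest F1.
ranked = sorted(per_class, key=lambda c: f1(per_class[c]))
print("weakest class:", ranked[0])  # bicycle
```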
By using the Model Performance Page on DataVerse, you can gain valuable insights into your AI model's behavior and make data-driven decisions to optimize and improve it. The adjustable threshold and class-specific performance metrics let you tailor your model to the unique requirements of your use case.


In addition to the evaluation metrics above, the Model Performance Page on DataVerse presents optimizer-related metrics that provide insight into the model's training process. These metrics can help you understand how well the model is learning and whether any adjustments are necessary for optimal training.
Some of the optimizer-related metrics include:

Epoch / Metrics

This metric showcases the performance of the model at each epoch, which is a complete iteration through the entire training dataset. By monitoring the evaluation metrics such as accuracy, precision, recall, and F1 score during each epoch, you can observe how your model's performance evolves over time. This can help you identify any potential issues, such as overfitting or underfitting, and make necessary adjustments.
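A common way to spot overfitting from per-epoch metrics is to watch the gap between training and validation performance. The history below is hypothetical:

```python
# Hypothetical per-epoch accuracy on training and validation sets.
history = [
    {"epoch": 1, "train_acc": 0.70, "val_acc": 0.68},
    {"epoch": 2, "train_acc": 0.82, "val_acc": 0.78},
    {"epoch": 3, "train_acc": 0.91, "val_acc": 0.80},
    {"epoch": 4, "train_acc": 0.97, "val_acc": 0.79},
]

# A widening train/validation gap suggests overfitting.
for h in history:
    gap = h["train_acc"] - h["val_acc"]
    flag = "  <- possible overfitting" if gap > 0.10 else ""
    print(f"epoch {h['epoch']}: gap={gap:.2f}{flag}")
```

Here training accuracy keeps climbing while validation accuracy stalls, so the gap widens from 0.02 to 0.18; a flat or shrinking gap would instead suggest the model is still generalizing well.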

Epoch / Loss

The loss function measures the difference between the model's predictions and the actual target values. The Epoch / Loss metric displays the loss value for each epoch, enabling you to track how well the model is converging towards the optimal solution. A decreasing loss value over time indicates that the model is learning effectively. However, if the loss value plateaus or increases, you may need to adjust your model's parameters or architecture.
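A plateau in the loss curve can be detected programmatically, in the spirit of early-stopping checks. This is a minimal sketch with hypothetical loss values, not DataVerse's internal logic:

```python
# Hypothetical per-epoch loss values logged during training.
losses = [1.20, 0.85, 0.62, 0.50, 0.49, 0.49, 0.50, 0.49]

def has_plateaued(loss_history, patience=3, min_delta=0.01):
    # True if the best loss in the last `patience` epochs improved on the
    # earlier best by less than `min_delta`.
    if len(loss_history) <= patience:
        return False
    best_before = min(loss_history[:-patience])
    recent_best = min(loss_history[-patience:])
    return best_before - recent_best < min_delta

print(has_plateaued(losses))  # True: loss has been flat around 0.49
```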

Epoch / Learning Rate

The learning rate is a crucial hyperparameter that controls the step size of each update in the model's weight optimization process. The Epoch / Learning Rate metric illustrates the learning rate value for each epoch, allowing you to assess whether the learning rate is appropriate for your model. If the learning rate is too high, the model may overshoot the optimal solution, while a learning rate that is too low may cause the model to converge slowly or get stuck in a suboptimal solution. Monitoring this metric can help you fine-tune the learning rate for optimal model training.
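Many trainers lower the learning rate over time, which is why the Epoch / Learning Rate curve often steps downward. As an illustration, here is a simple step-decay schedule; DataVerse's actual scheduler may differ:

```python
# Step decay: multiply the learning rate by `factor` every `step` epochs.
def step_decay_lr(base_lr, epoch, step=10, factor=0.5):
    return base_lr * (factor ** (epoch // step))

for epoch in (0, 10, 20, 30):
    print(f"epoch {epoch}: lr = {step_decay_lr(0.01, epoch):.5f}")
```

With a base rate of 0.01 this yields 0.01, 0.005, 0.0025, 0.00125 at epochs 0, 10, 20, 30: large early steps for fast progress, small late steps to settle near a minimum without overshooting.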
By examining both the evaluation metrics and optimizer-related metrics on the Model Performance Page, you can gain a comprehensive understanding of your AI model's performance and training progress. These insights will enable you to make data-driven decisions and adjustments, ensuring the successful development and deployment of your AI model.