Training and Validation Metrics Analysis for Model Performance Optimization

Results from the 19th training run of the SLHUB model

Training Losses and Metrics (Top Row):

  1. train/box_loss:

    • Decreases steadily over epochs, indicating better bounding box predictions.

    • The smooth line shows the overall trend, suggesting consistent improvement.

  2. train/cls_loss:

    • The classification loss decreases significantly over time, indicating increasingly accurate class-label predictions.

  3. train/dfl_loss:

    • Behaves similarly to the box and classification losses: the distribution focal loss (DFL), which refines bounding-box boundary estimates, decreases progressively as prediction quality improves.

  4. metrics/precision(B):

    • Precision starts high and stabilizes close to 1.0, suggesting the model effectively minimizes false positives (both metrics are defined in the sketch after this list).

  5. metrics/recall(B):

    • Recall improves sharply and reaches close to 1.0, indicating the model is capturing nearly all relevant instances.

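Precision and recall are ratios over true-positive (TP), false-positive (FP), and false-negative (FN) detections: precision = TP / (TP + FP) and recall = TP / (TP + FN). A minimal sketch, using hypothetical counts rather than values from this run:

```python
# Minimal precision/recall sketch. The TP/FP/FN counts are hypothetical
# placeholders, not values from the SLHUB run.

def precision(tp: int, fp: int) -> float:
    """Fraction of predicted boxes that are correct (penalizes false positives)."""
    return tp / (tp + fp) if (tp + fp) else 0.0


def recall(tp: int, fn: int) -> float:
    """Fraction of ground-truth boxes that were found (penalizes missed objects)."""
    return tp / (tp + fn) if (tp + fn) else 0.0


tp, fp, fn = 95, 3, 4  # hypothetical counts at a fixed confidence threshold
print(f"precision = {precision(tp, fp):.3f}")  # -> 0.969
print(f"recall    = {recall(tp, fn):.3f}")     # -> 0.960
```
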
Validation Losses (Bottom Row):

  1. val/box_loss:

    • Shows a steady decrease, similar to the training box loss. Consistency between training and validation suggests no overfitting here (a quick gap check is sketched after this list).

  2. val/cls_loss:

    • Decreases significantly, indicating improved classification performance on the validation set.

  3. val/dfl_loss:

    • Follows a similar pattern to train/dfl_loss, with consistent performance across training and validation.

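Comparing training and validation losses is the standard overfitting check: if validation loss tracks training loss, the model generalizes. Assuming these plots come from an Ultralytics-style run that writes a results.csv (the panel names here match that layout, and the file path below is a placeholder), a quick gap check might look like:

```python
# Train-vs-validation gap check, assuming an Ultralytics-style results.csv
# in the run directory (the path below is a placeholder).
import pandas as pd

df = pd.read_csv("runs/detect/train/results.csv")
df.columns = df.columns.str.strip()  # some versions pad column names with spaces

for name in ("box_loss", "cls_loss", "dfl_loss"):
    gap = df[f"val/{name}"] - df[f"train/{name}"]
    # A small, flat gap suggests good generalization; a gap that widens
    # over epochs is the classic overfitting signature.
    print(f"{name}: final gap = {gap.iloc[-1]:.4f}, max gap = {gap.max():.4f}")
```
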
Validation Metrics (Bottom Row):

  1. metrics/mAP50(B):

    • Mean Average Precision (mAP) at an IoU threshold of 0.50 increases rapidly and stabilizes close to 1.0, indicating high-quality object detections under a lenient localization criterion.

  2. metrics/mAP50-95(B):

    • The mAP averaged over IoU thresholds from 0.50 to 0.95 improves gradually and stabilizes; early fluctuations are common and smooth out as training progresses (see the AP sketch after this list).

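mAP50 marks a detection correct when it overlaps a ground-truth box with IoU ≥ 0.50 and averages the area under the precision-recall curve over classes; mAP50-95 repeats this at IoU thresholds 0.50, 0.55, …, 0.95 and averages the results, which is why it climbs more slowly. A simplified sketch of the underlying average-precision computation, on made-up detections:

```python
# Simplified average-precision (AP) sketch for one class at one IoU
# threshold. The tp flags are made up; in practice each detection is
# marked TP/FP by matching it against ground truth at that threshold.
# (COCO-style mAP uses 101-point interpolation; plain trapezoidal area
# is used here to keep the sketch short.)
import numpy as np

# Detections sorted by descending confidence; 1 = true positive, 0 = false positive.
tp = np.array([1, 1, 0, 1, 1, 0, 1, 0])
num_gt = 6  # total ground-truth boxes for this class

cum_tp = np.cumsum(tp)
cum_fp = np.cumsum(1 - tp)
recall = cum_tp / num_gt
precision = cum_tp / (cum_tp + cum_fp)

# Trapezoidal area under the precision-recall curve, starting from (0, 1).
r = np.concatenate(([0.0], recall))
p = np.concatenate(([1.0], precision))
ap = float(np.sum(np.diff(r) * (p[1:] + p[:-1]) / 2))
print(f"AP ≈ {ap:.3f}")
# mAP50 averages this over classes at IoU 0.50; mAP50-95 additionally
# averages over IoU thresholds 0.50, 0.55, ..., 0.95.
```
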
➡ Overall:

  • The model shows consistent improvement across all metrics and losses, with no major signs of overfitting or instability.

  • Precision and recall are high, and validation losses align well with training losses, indicating a well-generalized model (a minimal reproduction sketch follows).

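The panel names above (train/box_loss, metrics/mAP50(B), and so on) match the results.png layout that the Ultralytics YOLO trainer writes for each run. Assuming that framework, a minimal sketch of a run that would produce these plots; the weights file and dataset config are placeholders, since the actual SLHUB setup is not given here:

```python
# Minimal Ultralytics training sketch. "yolov8n.pt" and "slhub.yaml" are
# placeholder names; the actual SLHUB weights and dataset config are not
# specified on this page.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(data="slhub.yaml", epochs=100, imgsz=640)
# The run directory then contains results.csv and results.png, the source
# of the loss and metric panels analyzed above.
```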