Training and Validation Metrics Analysis for Model Performance Optimization
Results from the 19th training run of the SLHUB model.

Training Losses (Top Row):
train/box_loss: Decreases steadily over epochs, indicating improving bounding-box predictions. The smoothed line shows the overall trend, suggesting consistent improvement.
train/cls_loss: The classification loss drops significantly over time, implying increasingly accurate class-label predictions.
train/dfl_loss: Behaves similarly to the box and classification losses; the quality of the model's box predictions improves progressively (see the formula sketch after this list).

Training Metrics (Top Row):
metrics/precision(B): Precision starts high and stabilizes close to 1.0, suggesting the model is minimizing false positives effectively.
metrics/recall(B): Recall improves sharply and approaches 1.0, indicating the model is capturing nearly all relevant instances (precision and recall are defined below).
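
For context on the third loss: in Ultralytics-style detectors, dfl_loss is the distribution focal loss from Generalized Focal Loss (Li et al., 2020), which predicts each box edge as a discrete distribution over bins. A sketch of the per-edge term, assuming SLHUB follows this standard formulation, where the continuous target y lies between adjacent bins y_i and y_{i+1} with predicted probabilities S_i and S_{i+1}:

$$\mathrm{DFL}(S_i, S_{i+1}) = -\big((y_{i+1} - y)\log S_i + (y - y_i)\log S_{i+1}\big)$$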
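As a reminder of what the precision and recall curves measure (the (B) suffix denotes box-level metrics), both are computed from true positives (TP), false positives (FP), and false negatives (FN):

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}$$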
Validation Losses (Bottom Row):
val/box_loss: Shows a steady decrease, mirroring the training box loss. Consistency between training and validation suggests no overfitting here.
val/cls_loss: Decreases significantly, indicating improved classification performance on the validation set.
val/dfl_loss: Follows a pattern similar to train/dfl_loss, with consistent performance across training and validation (the train/val comparison can be reproduced with the sketch below).
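
The overfitting check described above can be reproduced directly from the run's results file. This is a minimal sketch, assuming an Ultralytics-style results.csv with the column names used in this section; the run directory runs/detect/train19/ is a hypothetical path, and some versions pad column names with spaces, hence the strip().

```python
# Minimal sketch: overlay train vs. val losses from an Ultralytics-style
# results.csv to check that validation tracks training (no overfitting).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("runs/detect/train19/results.csv")  # hypothetical path
df.columns = df.columns.str.strip()  # some versions pad names with spaces

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, name in zip(axes, ["box_loss", "cls_loss", "dfl_loss"]):
    ax.plot(df["epoch"], df[f"train/{name}"], label="train")
    ax.plot(df["epoch"], df[f"val/{name}"], label="val")
    ax.set_title(name)
    ax.set_xlabel("epoch")
    ax.legend()
fig.tight_layout()
plt.show()

# A widening gap (val loss rising while train loss keeps falling) would
# signal overfitting; here both curves should trend downward together.
```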
Validation Metrics (Bottom Row):
metrics/mAP50(B): Mean Average Precision (mAP) at an IoU threshold of 0.50 increases rapidly and stabilizes close to 1.0, indicating high-quality object detections.
metrics/mAP50-95(B): mAP averaged over IoU thresholds from 0.50 to 0.95 improves gradually and then stabilizes. Some fluctuation early in training is common and settles later (the averaging is sketched below).
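
For reference, mAP50-95 follows the COCO convention of averaging mAP over ten IoU thresholds from 0.50 to 0.95 in steps of 0.05:

$$\mathrm{mAP}_{50\text{-}95} = \frac{1}{10}\sum_{t \in \{0.50,\,0.55,\,\dots,\,0.95\}} \mathrm{mAP}_{t}$$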
➡ Overall:
The model shows consistent improvement across all metrics and losses, with no major signs of overfitting or instability.
Precision and recall are high, and validation losses align well with training losses, indicating a well-generalized model.