
Getting cls_logits NaN or Inf during training #1683

@AceMcAwesome77

Description


I am training this retinanet 3D detection model with mostly the same parameters as the example in this repo, except with batch_size = 1 in the config, because many image volumes are smaller than the training patch size. During training, I am getting this error at random, several epochs in:

Traceback of TorchScript, original code (most recent call last):
  File "/home/mycomputer/.local/lib/python3.10/site-packages/monai/apps/detection/networks/retinanet_network.py", line 130, in forward
    if torch.isnan(cls_logits).any() or torch.isinf(cls_logits).any():
        if torch.is_grad_enabled():
            raise ValueError("cls_logits is NaN or Inf.")
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        else:
            warnings.warn("cls_logits is NaN or Inf.")
builtins.ValueError: cls_logits is NaN or Inf.

On the last few training attempts, this failed on epoch 6 for the first two attempts, then on epoch 12 for the third attempt, so it can make it through all the training data without failing on any particular case. Does anyone know what could be causing this? If it's exploding gradients, is there something built into MONAI to clip them and prevent the training from crashing?

For reference, the sketch below shows where gradient clipping would go in a plain PyTorch training step. The toy model, learning rate, and max_norm=1.0 are placeholder assumptions for illustration, not MONAI's detection workflow or its defaults; torch.nn.utils.clip_grad_norm_ is standard PyTorch. Thanks!
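import torch

# Minimal sketch: a toy linear model stands in for the RetinaNet detector,
# just to show where gradient clipping fits relative to backward() and step().
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(10):
    x = torch.randn(4, 16)
    target = torch.randn(4, 2)

    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), target)
    loss.backward()

    # Clip the global gradient norm before the optimizer step;
    # max_norm=1.0 is an assumed starting value, not a MONAI default.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

    optimizer.step()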
