This repository has been archived by the owner on Nov 21, 2023. It is now read-only.

Can not find the source code of Retinanet loss function. #257

Closed
zhouhao94 opened this issue Mar 9, 2018 · 5 comments

Comments

@zhouhao94

zhouhao94 commented Mar 9, 2018

I am trying to understand the source code of RetinaNet, but I can't find the source code of RetinaNet's loss function.

```python
def add_fpn_retinanet_losses(model):
    loss_gradients = {}
    gradients, losses = [], []

    k_max = cfg.FPN.RPN_MAX_LEVEL  # coarsest level of pyramid
    k_min = cfg.FPN.RPN_MIN_LEVEL  # finest level of pyramid

    model.AddMetrics(['retnet_fg_num', 'retnet_bg_num'])
    # ==========================================================================
    # bbox regression loss - SelectSmoothL1Loss for multiple anchors at a location
    # ==========================================================================
    for lvl in range(k_min, k_max + 1):
        suffix = 'fpn{}'.format(lvl)
        bbox_loss = model.net.SelectSmoothL1Loss(
            [
                'retnet_bbox_pred_' + suffix,
                'retnet_roi_bbox_targets_' + suffix,
                'retnet_roi_fg_bbox_locs_' + suffix, 'retnet_fg_num'
            ],
            'retnet_loss_bbox_' + suffix,
            beta=cfg.RETINANET.BBOX_REG_BETA,
            scale=1. / cfg.NUM_GPUS * cfg.RETINANET.BBOX_REG_WEIGHT
        )
        gradients.append(bbox_loss)
        losses.append('retnet_loss_bbox_' + suffix)

    # ==========================================================================
    # cls loss - depends on softmax/sigmoid outputs
    # ==========================================================================
    for lvl in range(k_min, k_max + 1):
        suffix = 'fpn{}'.format(lvl)
        cls_lvl_logits = 'retnet_cls_pred_' + suffix
        if not cfg.RETINANET.SOFTMAX:
            cls_focal_loss = model.net.SigmoidFocalLoss(
                [
                    cls_lvl_logits, 'retnet_cls_labels_' + suffix,
                    'retnet_fg_num'
                ],
                ['fl_{}'.format(suffix)],
                gamma=cfg.RETINANET.LOSS_GAMMA,
                alpha=cfg.RETINANET.LOSS_ALPHA,
                scale=(1. / cfg.NUM_GPUS)
            )
            gradients.append(cls_focal_loss)
            losses.append('fl_{}'.format(suffix))
```

For example, I can't find the source code of `SelectSmoothL1Loss()` in `bbox_loss = model.net.SelectSmoothL1Loss(...)`, where `model` is a defined Detectron model.

Any help? Thanks

@ppwwyyxx
Contributor

ppwwyyxx commented Mar 9, 2018

The source code is in caffe2: https://github.com/caffe2/caffe2/blob/master/modules/detectron/select_smooth_l1_loss_op.h
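For intuition, here is a rough NumPy sketch of what that op computes: a smooth-L1 (Huber) loss applied only at the selected foreground anchor locations, normalized by the foreground count. The names and shapes below are illustrative, not taken from the op itself, and this skips the GPU/gradient machinery entirely:

```python
import numpy as np

def select_smooth_l1_loss(bbox_pred, bbox_targets, fg_locs, fg_num, beta=0.11):
    # bbox_pred: (N, 4A, H, W) regression outputs from the network.
    # bbox_targets: (M, 4) regression targets for the M foreground anchors.
    # fg_locs: (M, 4) indices (n, c, y, x) pointing at the first of the 4
    #          regression channels for each foreground anchor.
    # fg_num: scalar normalizer (total foreground count in the minibatch).
    loss = 0.0
    for (n, c, y, x), target in zip(fg_locs.astype(int), bbox_targets):
        pred = bbox_pred[n, c:c + 4, y, x]
        diff = np.abs(pred - target)
        # Quadratic near zero, linear in the tails (smooth L1 with knee beta).
        per_coord = np.where(diff < beta, 0.5 * diff ** 2 / beta,
                             diff - 0.5 * beta)
        loss += per_coord.sum()
    return loss / max(fg_num, 1.0)
```

The key difference from a plain `SmoothL1Loss` is the "select" part: the loss is evaluated only at the sparse foreground locations, so background anchors contribute nothing to the regression term.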

@zhouhao94
Author

zhouhao94 commented Mar 9, 2018

@ppwwyyxx Thank you. I have read that source code, and I have another question. The actual parameters for `SelectSmoothL1Loss` are `'retnet_bbox_pred_' + suffix`, `'retnet_roi_bbox_targets_' + suffix`, `'retnet_roi_fg_bbox_locs_' + suffix`, and `'retnet_fg_num'`. I know that `'retnet_bbox_pred_' + suffix` is computed by the model, but where do the other three parameters come from? And where can I find the source code that produces those three blobs?
Thank you so much.

@ir413
Contributor

ir413 commented Jun 4, 2018

Hi @zhouhao94, the blobs you mention are computed by the functions in this file.
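In broad strokes, those blobs come from the data-loading side: anchors are matched to ground-truth boxes by IoU, foreground anchors get regression targets and their locations recorded per FPN level, and the total foreground count becomes the normalizer. A hedged sketch of just the matching step (all names and thresholds here are illustrative, not Detectron's actual code):

```python
import numpy as np

def label_anchors(anchors, gt_boxes, pos_thresh=0.5, neg_thresh=0.4):
    # anchors: (A, 4) and gt_boxes: (G, 4), both as (x1, y1, x2, y2).
    # Returns per-anchor labels: matched gt index for foreground,
    # -1 for background, -2 for ignored (between the two thresholds).
    def iou(a, b):
        x1 = np.maximum(a[:, None, 0], b[None, :, 0])
        y1 = np.maximum(a[:, None, 1], b[None, :, 1])
        x2 = np.minimum(a[:, None, 2], b[None, :, 2])
        y2 = np.minimum(a[:, None, 3], b[None, :, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
        area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
        return inter / (area_a[:, None] + area_b[None, :] - inter)

    overlaps = iou(anchors, gt_boxes)      # (A, G) pairwise IoU
    max_ov = overlaps.max(axis=1)
    argmax = overlaps.argmax(axis=1)
    labels = np.full(len(anchors), -2)     # ignore by default
    labels[max_ov < neg_thresh] = -1       # background
    fg = max_ov >= pos_thresh
    labels[fg] = argmax[fg]                # foreground: best-matching gt
    return labels
```

From labels like these, the loader can count foregrounds (`retnet_fg_num`), compute box-delta targets for the matched pairs (`retnet_roi_bbox_targets_*`), and record where in each level's prediction tensor those foreground anchors live (`retnet_roi_fg_bbox_locs_*`).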

@ashnair1

ashnair1 commented Apr 8, 2019

@tobidelbruck

Incidentally, here is a TensorFlow implementation: https://www.tensorflow.org/addons/api_docs/python/tfa/losses/SigmoidFocalCrossEntropy
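The math itself is small enough to write out. A minimal NumPy sketch of the per-element sigmoid focal loss, FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t), summed over elements (this is a plain restatement of the focal loss formula, not the Caffe2 or TensorFlow operator):

```python
import numpy as np

def sigmoid_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    # targets are binary {0, 1}; gamma down-weights easy examples,
    # alpha balances the positive/negative classes.
    p = 1.0 / (1.0 + np.exp(-logits))            # sigmoid probability
    p_t = np.where(targets == 1, p, 1.0 - p)     # prob of the true class
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    return -(alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)).sum()
```

With gamma = 0 and alpha = 0.5 this reduces to half the ordinary sigmoid cross-entropy, which is a useful sanity check when comparing implementations.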
