
VID2015 and Center loss #7

Open
pabloe4993 opened this issue Dec 3, 2017 · 1 comment

Comments

@pabloe4993

Hi,

Thank you for the great work.
I have a few questions:

  1. In the previous version, DCFNet was trained on UAV123, NUS-PRO, and TC-128, but the current one is trained on VID2015. Why?
  2. Why don't you use CenterLoss anymore? And why, in the first place, does CenterLoss not propagate (forward only, no backward)?

Thank you

@foolwood
Owner

foolwood commented Dec 4, 2017

  1. The recent trend is to use VID for training.
  • First, the amount of video in VID is very large (~1 million images and more than 4,000 videos). This far exceeds all the tracking datasets combined.

  • On the other hand, there is a common belief that training on tracking datasets carries a risk of overfitting. The VOT committee expressly prohibits it ("Learning from the tracking datasets (OTB, VOT, ALOV, NUSPRO) is prohibited.").

  2. CenterLoss was inherited directly from SiamFC to visualize the convergence behavior.
  • But I found it too slow, and it provides no additional information, so I decided to remove this loss.
  • CenterLoss is only used to visualize convergence (like Top-1/5 accuracy in image classification). This loss is non-differentiable (think of top-1 error: only cross-entropy can backpropagate).
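The non-differentiability point can be illustrated with a toy NumPy sketch (the helpers below are illustrative, not from the DCFNet code): nudging a logit changes cross-entropy smoothly, so it has a usable gradient, while top-1 accuracy is piecewise constant in the logits, so its gradient is zero almost everywhere and backprop through it is useless.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the true class: smooth in the logits.
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

def top1_accuracy(logits, labels):
    # Fraction of samples whose argmax matches the label: a step function.
    return (logits.argmax(axis=1) == labels).mean()

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
labels = np.array([0, 1])

# Perturb one logit by a small amount.
bumped = logits.copy()
bumped[0, 0] += 1e-3

# Cross-entropy responds smoothly to the perturbation -> nonzero gradient.
assert abs(cross_entropy(bumped, labels) - cross_entropy(logits, labels)) > 0
# Top-1 accuracy does not change at all -> zero gradient almost everywhere.
assert top1_accuracy(bumped, labels) == top1_accuracy(logits, labels)
```

This is why such a metric can only be run in the forward pass as a monitoring signal, while a differentiable surrogate like cross-entropy drives the actual weight updates.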

Thanks for your attention.
