
Is the extra class embedding important for predicting the results? Why not simply use the feature maps to predict? #61

Closed
QiushiYang opened this issue Jan 26, 2021 · 3 comments

Comments

@QiushiYang

Different from the common approach of using feature maps to obtain the classification prediction (with FC or GAP layers), ViT employs an extra class embedding and does not use the feature maps explicitly. I wonder what the motivation is for this unusual design?

BTW, I used the official pre-trained parameters to fine-tune ViT on a small dataset, and found that the validation accuracy is slightly better after I replaced the feature maps with the learnable class embedding for prediction. So is the class embedding (perhaps acting like a kind of query within the encoder) important for learning and prediction?

@lucasb-eyer
Collaborator

Great question. It is not really important. However, we wanted the model to be "exactly Transformer, but on image patches", so we kept this design from Transformer, where a token is always used.
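To make the contrast between the two readouts concrete, here is a minimal sketch (an editorial illustration, not code from this repository; the function names `token_readout` and `gap_readout` are hypothetical) of using the class-token position versus simply pooling the patch features:

```python
import jax.numpy as jnp

# x: encoder output of shape (batch, 1 + num_patches, width) when a 'cls'
# token was prepended, or (batch, num_patches, width) otherwise.

def token_readout(x):
    # 'token' classifier: take the first position (the class embedding)
    # as the representation fed to the classification head.
    return x[:, 0]

def gap_readout(x):
    # GAP-style classifier: average over the patch tokens instead of
    # using a dedicated class embedding.
    return jnp.mean(x, axis=1)
```

Either vector would then be passed to the final linear classification head; as noted above, the choice is largely a matter of staying faithful to the original Transformer design rather than of accuracy.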

@moha23

moha23 commented Mar 24, 2021

EDIT: opened new issue #83 (comment)

Hi @lucasb-eyer, I'm trying to wrap my head around the use of the 'cls' token. From the code

if classifier == 'token':
what I understand is that if the classification token is to be used, a variable 'cls' of shape (1, 1, c) is first defined, initialised with zeros, repeated along the first axis to get shape (n, 1, c), and finally concatenated to the patch embeddings x. A few, possibly naive, doubts:

  1. So, do the values of the 'cls' token keep getting updated as training proceeds? Or is it fixed at zero?
  2. When you say 'learnable' embedding, does that mean it is a trainable variable whose values the network learns during training?
  3. And hence, at test time, this embedding will have some pre-trained values? But then how will this pre-trained value differ for inputs from different classes?

@lucasb-eyer
Collaborator

lucasb-eyer commented Mar 26, 2021

1-3 are all the same question :) Yes, it is zero-initialized and then learned/updated. It does not depend on the class at the input side; its purpose is to "collect" evidence for what the output should be while going through the network.

Edit: ah I saw Andreas answered in the other comment already.
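To make 1-3 concrete, here is a minimal Flax-style sketch of a zero-initialized, learnable class token being prepended to the patch embeddings. This is an editorial illustration under the assumptions above (the module name `ClassToken` and the usage below are hypothetical), not a verbatim excerpt from the repository:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class ClassToken(nn.Module):
    """Prepends a learnable, zero-initialized 'cls' token to patch embeddings."""

    @nn.compact
    def __call__(self, x):                     # x: (batch, num_patches, width)
        n, _, c = x.shape
        # A single trainable parameter, shared across the whole batch.
        # It starts at zero but is updated by the optimizer like any
        # other weight during training.
        cls = self.param('cls', nn.initializers.zeros, (1, 1, c))
        cls = jnp.tile(cls, [n, 1, 1])         # broadcast over the batch
        return jnp.concatenate([cls, x], axis=1)

# Usage sketch: the learned 'cls' value is identical for every input at
# test time; it only becomes input-dependent once it attends to the patch
# tokens inside the encoder layers.
patches = jnp.ones((2, 196, 768))
model = ClassToken()
params = model.init(jax.random.PRNGKey(0), patches)
out = model.apply(params, patches)             # shape (2, 197, 768)
```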
