Is the extra class embedding important for predicting the results? Why not simply use the feature maps to predict? #61
Great question. It is not really important. However, we wanted the model to be "exactly Transformer, but on image patches", so we kept this design from Transformer, where a token is always used.
EDIT: opened new issue #83 (comment)

Hi @lucasb-eyer, I'm trying to wrap my head around the use of the 'cls' token, starting from the code at vision_transformer/vit_jax/models.py, line 251 (commit 4317e06).
1-3 are all the same question :) Yes, it is zero-initialized, then learned/updated. It does not depend on the class at the input side; its purpose is to "collect" evidence for what the output should be while going through the network. Edit: ah, I saw Andreas answered in the other comment already.
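The mechanics described above (a zero-initialized, learned token prepended to the patch embeddings, whose final state feeds the classifier) can be sketched as follows. This is a hypothetical illustration, not the repo's actual `models.py` code; the shapes and names are made up for clarity.

```python
# Hedged sketch: prepend a learned, zero-initialized [cls] token to the
# patch embeddings, then read out only that token after the encoder.
import jax
import jax.numpy as jnp

batch, num_patches, dim = 2, 16, 8

# The [cls] token starts at zero and is updated by gradient descent like
# any other parameter; it does not depend on the input image.
cls_token = jnp.zeros((1, 1, dim))

# Stand-in for the embedded image patches.
patch_embeddings = jax.random.normal(jax.random.PRNGKey(0),
                                     (batch, num_patches, dim))

# Broadcast the single learned token across the batch and prepend it.
cls_broadcast = jnp.tile(cls_token, (batch, 1, 1))
tokens = jnp.concatenate([cls_broadcast, patch_embeddings], axis=1)
print(tokens.shape)  # (2, 17, 8)

# After the Transformer encoder (omitted here), only the first token's
# output is passed to the classification head.
encoded = tokens  # stand-in for the encoder output
cls_output = encoded[:, 0]
print(cls_output.shape)  # (2, 8)
```

In the real model the token flows through the self-attention layers, so by the last layer it has attended to every patch and "collected" the evidence mentioned above.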
Different from the common ways of using feature maps to obtain a classification prediction (with FC or GAP layers), ViT employs an extra class embedding to do this without using the feature maps explicitly. I wonder about the meaning of this unusual design.
BTW, I used the official pre-training params to fine-tune ViT on a small dataset and found that the validation accuracy is a little better after I replaced the feature maps with the learnable class embedding for prediction. So is the class embedding (maybe acting like a kind of query within the encoder) important for learning and prediction?
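The two readout strategies the question contrasts can be written side by side. This is an illustrative sketch, not code from the repository; `encoded` and `w` are hypothetical stand-ins for the encoder output and a linear head.

```python
# Hedged comparison of the two classification readouts discussed above,
# applied to a made-up encoder output of shape (batch, 1 + patches, dim).
import jax
import jax.numpy as jnp

batch, num_patches, dim, num_classes = 2, 16, 8, 10
encoded = jax.random.normal(jax.random.PRNGKey(1),
                            (batch, 1 + num_patches, dim))
w = jax.random.normal(jax.random.PRNGKey(2), (dim, num_classes))

# (a) cls-token readout, as in ViT: classify from the first token only.
logits_cls = encoded[:, 0] @ w

# (b) GAP readout, as in typical CNN heads: average the patch tokens
# (the "feature maps") and classify from the pooled vector.
logits_gap = jnp.mean(encoded[:, 1:], axis=1) @ w

print(logits_cls.shape, logits_gap.shape)  # (2, 10) (2, 10)
```

Both heads have the same parameter count; the difference is only which summary of the sequence the linear layer sees, which is why the maintainers describe the choice as a design convention rather than an accuracy-critical component.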