
Investigate gensim word2vec speedup over different architectures using different methods #119

RemyLau opened this issue Jun 20, 2022 · 1 comment

RemyLau commented Jun 20, 2022

@RemyLau I was building some of the cross-species networks and noticed that as the number of threads was increased (I have had up to 120 CPUs going), generating the walks became very fast; however, training the gensim model slowed down considerably. Using too many threads in gensim seems to be a known problem, as discussed in this post. I think there are two ways to help fix this:

  1. The easy fix would be to add separate arguments for the number of workers used in random walk generation and the number of workers used by gensim, and let the user find what works best (sketched below, after this quote).
  2. The other would be to use the corpus_file argument, as described in the post above. Generating the corpus_file might still be fast with heavy parallel processing, and users with a scratch file system (like MSU's) could easily save it there temporarily.

Originally posted by @ChristopherMancuso in #19 (comment)
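
A minimal sketch of what suggestion 1 could look like, assuming PecanPy's documented SparseOTF interface and gensim 4.x; the separate walk/train worker counts are illustrative values, not existing PecanPy arguments:

```python
from gensim.models import Word2Vec
from pecanpy import pecanpy

# Hypothetical split: many workers for walk generation, fewer for gensim training,
# since gensim training is known to degrade past a modest worker count.
walk_workers = 120
train_workers = 16

# Generate random walks with PecanPy (SparseOTF mode).
g = pecanpy.SparseOTF(p=1, q=1, workers=walk_workers, verbose=False)
g.read_edg("network.edg", weighted=False, directed=False)
walks = g.simulate_walks(num_walks=10, walk_length=80)

# Train gensim word2vec with its own, smaller worker count.
model = Word2Vec(
    walks,
    vector_size=128,
    window=5,
    min_count=0,
    sg=1,
    workers=train_workers,
    epochs=1,
)
```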


RemyLau commented Jun 20, 2022

Variables to test:

  1. Different machines (amd20, intel16, etc.)
  2. Different numbers of cores to test speedup (1 to max, e.g., 1-28)
  3. Use corpus file or not (also report the corpus file size)
  4. Other hyperparameter choices (Optional)
    • Embedding dimensions
    • Number of walks
    • Window size
    • p & q (should not have any effects)

Testing 1-3 can answer the following two questions (a timing sketch follows the list):

  1. Does gensim scale better / run faster on AMD's or Intel's chips?
  2. Does the corpus file approach provide a noticeable speedup compared to not using it?
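
A rough timing harness could look like the sketch below (gensim 4.x assumed; `walks` is a list of walks, each a list of string node IDs, produced beforehand; the file name and worker counts are illustrative):

```python
import time

from gensim.models import Word2Vec
from gensim.utils import save_as_line_sentence

# Assumed to already exist: `walks`, a list of walks (each a list of string node IDs).
# Write them once in LineSentence format: one walk per line, space-separated.
save_as_line_sentence(walks, "walks.txt")

for workers in (1, 4, 8, 16, 28):
    for label, kwargs in (
        ("in-memory", {"sentences": walks}),
        ("corpus_file", {"corpus_file": "walks.txt"}),
    ):
        start = time.perf_counter()
        Word2Vec(
            vector_size=128, window=5, min_count=0, sg=1,
            workers=workers, epochs=1, **kwargs,
        )
        print(f"{label:12s} workers={workers:3d} "
              f"time={time.perf_counter() - start:.1f}s")
```

Running the same script on amd20 and intel16 nodes, and recording the size of walks.txt, would cover points 1-3 above.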

If the answer to 2 is that the corpus file approach indeed provides a significant speedup, I'll proceed to add a CLI option for it. Some potential things to keep in mind (a cleanup sketch follows the list):

  • Need to be careful about cleaning up the cached corpus files.
  • How should gensim word2vec be called using the corpus file? Should it be a separate process from the main PecanPy process?
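
On the cleanup point, one option (a sketch only, not an existing PecanPy CLI flag) is to write the corpus file into a temporary directory, optionally on a scratch file system, so it is removed even if training fails:

```python
import tempfile
from pathlib import Path

from gensim.models import Word2Vec
from gensim.utils import save_as_line_sentence


def train_from_corpus_file(walks, scratch_dir=None, workers=16):
    """Train word2vec from a temporary corpus file that is always cleaned up.

    `scratch_dir` can point at a scratch file system; None uses the default
    temp location. This is a hypothetical helper, not part of PecanPy.
    """
    with tempfile.TemporaryDirectory(dir=scratch_dir) as tmp:
        corpus_path = str(Path(tmp) / "walks.txt")
        save_as_line_sentence(walks, corpus_path)
        print(f"corpus file size: {Path(corpus_path).stat().st_size / 1e6:.1f} MB")
        model = Word2Vec(
            corpus_file=corpus_path,
            vector_size=128, window=5, min_count=0, sg=1,
            workers=workers, epochs=1,
        )
    # The temporary directory, and the corpus file with it, is removed here.
    return model
```

Whether gensim should then be invoked in a separate process from the main PecanPy process is still open; the sketch keeps everything in-process.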
