
How much memory does it use? #5

Open · JesseTG opened this issue Mar 31, 2018 · 3 comments
Labels: enhancement (New feature or request)

Comments


JesseTG commented Mar 31, 2018

I need to dedupe line-separated records in a very large file (hundreds of gigs, each record a few KB). This looks like it could be useful for me, but I don't know how much RAM it uses. For stream mode it has to store the actual lines, which is understandable, but what about mmap mode?

deepinthebuild (Owner) commented Mar 31, 2018

The maximum amount of memory used in mmap mode depends on the number of distinct entries in the input. The program itself uses only 24 bytes per distinct entry, but peak runtime usage will be significantly higher (2x-8x), because two copies of the internal hash table temporarily exist during a reallocation. On the enwik9 benchmark (a 1 GB dataset), the master branch hits 600 MB peak memory usage for me (that data set just barely triggers a reallocation to the largest size).
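To make the reallocation effect concrete, here is a minimal Rust sketch (not this project's actual code) contrasting incremental growth, where each resize briefly holds two copies of the table while rehashing, with pre-sizing, which avoids the transient copy entirely:

```rust
use std::collections::HashSet;

fn main() {
    let n = 10_000_000u64;

    // Incremental growth: each resize allocates a new, larger table and
    // rehashes into it before freeing the old one, so two tables coexist
    // briefly and peak memory exceeds the steady-state footprint.
    let mut grown: HashSet<u64> = HashSet::new();
    for i in 0..n {
        grown.insert(i);
    }

    // Pre-sizing does one allocation up front, so there is no resize and
    // no transient second copy, if the distinct count is known in advance.
    let mut presized: HashSet<u64> = HashSet::with_capacity(n as usize);
    for i in 0..n {
        presized.insert(i);
    }
}
```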

I've been planning on implementing a tempfile-backed option on one of the experimental branches. I can prioritize that feature if it would be useful to you.
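One common shape for a tempfile-backed mode, though not necessarily what's planned here, is to partition lines by hash into temp files and then dedupe each partition in memory, so peak memory drops to roughly one partition's share of the distinct entries. A hedged Rust sketch, with the file names and partition count chosen purely for illustration:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::fs::File;
use std::hash::{Hash, Hasher};
use std::io::{BufRead, BufReader, BufWriter, Write};

fn main() -> std::io::Result<()> {
    const K: usize = 64; // number of partitions (illustrative)

    // Pass 1: scatter lines into K temp files by hash. Every copy of a
    // given line hashes identically, so duplicates land in one partition.
    let mut writers = Vec::with_capacity(K);
    for i in 0..K {
        writers.push(BufWriter::new(File::create(format!("part{}.tmp", i))?));
    }
    for line in BufReader::new(File::open("input.txt")?).lines() {
        let line = line?;
        let mut h = DefaultHasher::new();
        line.hash(&mut h);
        writeln!(writers[(h.finish() as usize) % K], "{}", line)?;
    }
    for w in &mut writers {
        w.flush()?;
    }

    // Pass 2: dedupe each partition with an in-memory set; only one
    // partition's worth of distinct lines is resident at a time.
    let mut out = BufWriter::new(File::create("deduped.txt")?);
    for i in 0..K {
        let mut seen = HashSet::new();
        for line in BufReader::new(File::open(format!("part{}.tmp", i))?).lines() {
            let line = line?;
            if !seen.contains(&line) {
                writeln!(out, "{}", line)?;
                seen.insert(line);
            }
        }
    }
    out.flush()?;
    Ok(())
}
```

One caveat with this layout: output comes out grouped by partition rather than in input order, so preserving the original order would require also recording each line's index.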

deepinthebuild added the enhancement label Mar 31, 2018

JesseTG commented Mar 31, 2018

I'd appreciate that a lot, thank you. Also, reallocations? Yikes. Might wanna look at other data structures. I'll shop around.


JesseTG commented Mar 31, 2018

Here's an idea: when dealing with an mmap(2)ed file, use one of these (especially this) to make a first pass to guess the number of unique lines. Then pre-allocate a hash set with space for that many elements and do the actual deduplication. If there's a more efficient hash set out there, that would be great too.
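The inline links above are lost in this copy, but a "first pass to guess the number of unique lines" is exactly what a cardinality estimator such as HyperLogLog provides. A self-contained Rust sketch of the two-pass idea, where the precision, input file name, and 10% capacity padding are all illustrative choices:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::fs::File;
use std::hash::{Hash, Hasher};
use std::io::{BufRead, BufReader};

const P: u32 = 14;       // precision: 2^14 registers, roughly 0.8% std. error
const M: usize = 1 << P; // register count

struct Hll {
    regs: Vec<u8>,
}

impl Hll {
    fn new() -> Self {
        Hll { regs: vec![0u8; M] }
    }

    fn add<T: Hash>(&mut self, item: &T) {
        let mut h = DefaultHasher::new();
        item.hash(&mut h);
        let x = h.finish();
        let idx = (x >> (64 - P)) as usize;            // top P bits pick a register
        let rank = (x << P).leading_zeros() as u8 + 1; // first 1-bit in the rest
        if rank > self.regs[idx] {
            self.regs[idx] = rank;
        }
    }

    fn estimate(&self) -> f64 {
        let m = M as f64;
        let alpha = 0.7213 / (1.0 + 1.079 / m);
        let sum: f64 = self.regs.iter().map(|&r| 2f64.powi(-(r as i32))).sum();
        let raw = alpha * m * m / sum;
        let zeros = self.regs.iter().filter(|&&r| r == 0).count();
        if raw <= 2.5 * m && zeros > 0 {
            m * (m / zeros as f64).ln() // small-range (linear counting) correction
        } else {
            raw
        }
    }
}

fn main() -> std::io::Result<()> {
    // Pass 1: estimate the number of distinct lines in constant memory.
    let mut hll = Hll::new();
    for line in BufReader::new(File::open("input.txt")?).lines() {
        hll.add(&line?);
    }

    // Pass 2: dedupe with a set pre-sized from the estimate, padded 10%
    // since the estimate itself carries a few percent of error.
    let cap = (hll.estimate() * 1.1) as usize;
    let mut seen: HashSet<String> = HashSet::with_capacity(cap);
    for line in BufReader::new(File::open("input.txt")?).lines() {
        let line = line?;
        if !seen.contains(&line) {
            println!("{}", line);
            seen.insert(line);
        }
    }
    Ok(())
}
```

The second pass still has to hold every distinct line, so this saves the reallocation spike rather than the steady-state footprint; that is the part a tempfile-backed mode would address.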
