
Novoplasty memory error #212

Open
snashraf opened this issue Aug 27, 2023 · 1 comment

@snashraf

Hi Team,

I have been stuck on this issue for a long time. I was able to run NOVOPlasty on my local machine, but it fails when I run it on an Azure machine. I have tried multiple configurations and it fails every time. My current configuration is 16 cores with 512 GB of RAM, yet all jobs are being killed due to memory issues. I am using SLURM to run it.

[nsyed@az-rbid-hpc-scheduler assembly]$ tail -f slurm-novoplasty-20939.err
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
/var/spool/slurmd/job20939/slurm_script: line 8: 8195 Killed NOVOPlasty4.3.1.pl -c ${config}/config_${name}.txt
slurmstepd: error: Detected 1 oom-kill event(s) in StepId=20939.batch. Some of your processes may have been killed by the cgroup out-of
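
For context, the submission script is along the lines of the sketch below. This is a minimal reconstruction: only the NOVOPlasty command comes from the error output above; the job name, paths, and the memory value are placeholders. The --mem request is what sets the cgroup limit behind the oom-kill.

#!/bin/bash
#SBATCH --job-name=novoplasty
#SBATCH --cpus-per-task=16
#SBATCH --mem=64G                        # placeholder; this request is the cgroup limit the oom-killer enforces
#SBATCH --output=slurm-novoplasty-%j.out
#SBATCH --error=slurm-novoplasty-%j.err

config=/path/to/configs                  # placeholder
name=horse                               # placeholder

# assumes NOVOPlasty4.3.1.pl is executable and on the PATH, as in the log above
NOVOPlasty4.3.1.pl -c ${config}/config_${name}.txt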

This is the horse genome, and the read1 file is around 32 GB.

horse.config.txt

Could you please help me with this? I have been struggling with it for the last two weeks.

Regards,
Najeeb

@ndierckx
Owner

Hi,

I'm not sure what the problem is, but if it runs out of memory, you can try the max memory option. It seems you have a very large dataset, and you wouldn't need all of it to assemble the mitogenome. I'm not sure how much memory you are requesting with your SLURM job, but maybe set max memory to 50 GB and request 70 GB or so. Let me know if this gives the same problem.
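
Concretely, the cap is the "Max memory" field in the NOVOPlasty config file. A sketch of the relevant Project section is below; the value of 50 follows the suggestion above, the other values are only illustrative placeholders, and the exact field names should be checked against the config template shipped with your NOVOPlasty version.

Project:
-----------------------
Project name          = horse_mito
Type                  = mito
Genome Range          = 12000-22000
K-mer                 = 33
Max memory            = 50
Extended log          = 0
Save assembled reads  = no
Seed Input            = /path/to/seed.fasta

The matching SLURM request would then be something like #SBATCH --mem=70G, which leaves headroom above NOVOPlasty's own cap for Perl's overhead.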
