
Benchmarking my dataset #30

Open
raman91 opened this issue Nov 3, 2017 · 2 comments

raman91 commented Nov 3, 2017

Hi,

I want to benchmark my own variant calls (not the GIAB or Platinum Genomes datasets). How should I do this with the help of this tool?

Thanks

@pkrusche
Member

Hi,

I think hap.py should work with other datasets too -- at a minimum, you need these files:

  • a truth VCF file
  • a query VCF file
  • a reference FASTA file that matches the two files above

An example command line for running hap.py on these files is given here:

https://github.com/illumina/hap.py#haplotype-comparison-tools

Optionally you can also give a set of confident hom-ref regions using the -f switch -- when these are not specified, every call in the query that is not matched by the truth will count as a false positive.
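For illustration, a minimal command along those lines might look something like this (the file names are placeholders, and -r and -o are the reference and report-prefix switches shown in the README linked above):

    hap.py truth.vcf.gz query.vcf.gz -r reference.fa -f confident_regions.bed -o benchmark_output

Here truth.vcf.gz and query.vcf.gz are your truth and query call sets, -r points at the matching reference FASTA, -f passes the optional confident-regions BED described above, and -o sets the prefix for the output report files.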

Hope this is helpful!

@aditya-sarkar441

@pkrusche hap.py belongs to Illumina. How can I run your tool on my input files?
There is no mention in the documentation of how we can run your tool on our own files.
