
Version 1.5.0

Released by @luispedro on 14 Sep 10:46

The two big changes are:

  1. the ability to use YAML files to specify samples,
  2. the introduction of run_for_all (and run_for_all_samples) functions to simplify the usage of the parallel module.

Several of the other changes support these two features; the rest are minor fixes and improvements. A short example of the new workflow is shown below.
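
To give a rough idea of how the two headline features fit together, here is a minimal sketch of a parallel script. The file name, the module version string, and the exact way run_for_all_samples hands out samples are assumptions based on the descriptions in these notes, not a definitive example; see the documentation for the YAML sample-list format.

    ngless "1.5"
    import "parallel" version "1.1"   # module version shown is illustrative

    # load_sample_list reads a YAML file describing the samples
    # (file name and YAML layout are placeholders)
    samples = load_sample_list("samples.yaml")

    # run_for_all_samples assigns one sample to this invocation;
    # run_for_all plays the same role for plain lists of names
    sample = run_for_all_samples(samples)
    println(sample.name())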

Full ChangeLog:

  • Add load_sample_list function to load samples in YAML format.

  • Add compress_level argument to the write function to specify the compression level (see the example after this list).

  • Added name() method to ReadSet objects, so you can do:

    input = load_fastq_directory("my-sample")
    print(input.name())

    which will print my-sample.

  • Added println function which works like print but prints a newline after the output.
  • Make print() accept ints and doubles as well as strings.
  • Added run_for_all function to parallel module, simplifying its API.
  • When using the parallel module, if a job fails, its log is now written to the corresponding .failed file.
  • External modules can now use the sequenceset type to represent a FASTA file.
  • The load_fastq_directory function now supports .xz compressed files.
  • The parallel module now checks for stale locks before retrying failed tasks. Previously, a sample that failed deterministically could block progress even when some locks were stale.
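
The write and print related additions are easiest to see together. This is only an illustrative sketch: the file names are invented and the valid range of compress_level is not spelled out here, so the value used is an arbitrary assumption.

    ngless "1.5"

    input = load_fastq_directory("my-sample")

    # compress_level selects the compression level for the output
    # (the value 6 is an arbitrary example)
    write(input, ofile="copy.fq.gz", compress_level=6)

    # println works like print but adds a trailing newline;
    # print now also accepts ints and doubles
    println("Finished writing sample:")
    println(input.name())
    print(42)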

Bugfixes

  • The parallel module should generate a .failed file for each failed job, but this was not happening in every case.
  • Fixed parsing of GFF files to support negative values (reported by Josh Sekela on the mailing list).