
Frequently asked questions (FAQ)


- Should I use normcorre or normcorre_batch?

The two functions give (almost) identical results. If you have access to the parallel computing toolbox, normcorre_batch is much faster since it aligns different frames in parallel. However, if the FOV is very large (e.g., during volumetric 3D imaging), normcorre_batch can lead to slowdowns and memory issues, since it tries to load too much data into memory at once. In that case, use of the plain normcorre function is recommended.
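A minimal usage sketch of the two functions, assuming a movie Y already loaded in memory (the specific option values here are illustrative, not recommendations):

```matlab
% Y: d1 x d2 x T movie already loaded in memory
options = NoRMCorreSetParms('d1',size(Y,1),'d2',size(Y,2), ...
    'grid_size',[128,128],'mot_uf',4,'max_shift',[20,20]);

% serial version: lower memory footprint, preferable for very large FOVs
[M1,shifts1,template1] = normcorre(Y,options);

% parallel version: much faster if the parallel computing toolbox is available
[M2,shifts2,template2] = normcorre_batch(Y,options);
```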

- What about normcorre_batch_even?

This function performs motion correction in parallel, in the same way as normcorre_batch. The key difference is that all patches have equal size (grid_size + overlap_pre). This enables better vectorization and parallelization of the various internal functions, and results in improved speed compared to normcorre_batch. It can also serve as a backbone for a future GPU-based implementation. Currently, this method supports only cubic shifts.
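A hedged usage sketch: normcorre_batch_even takes the same options struct as normcorre_batch; the explicit shifts_method setting below reflects the cubic-shifts restriction mentioned above, but treat that option name and value as an assumption if your version differs.

```matlab
% same calling convention as normcorre_batch; every patch has size grid_size + overlap_pre
options_even = NoRMCorreSetParms('d1',size(Y,1),'d2',size(Y,2), ...
    'grid_size',[128,128],'overlap_pre',[32,32],'mot_uf',4, ...
    'shifts_method','cubic');   % cubic interpolation when applying the shifts (assumed option name)

[M,shifts,template] = normcorre_batch_even(Y,options_even);
```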

- What parameters should I pick for non-rigid motion correction?

That depends a lot on the properties of the dataset, e.g., the amount of non-rigid motion, expression level, SNR, etc. Smaller patches are preferable when there is a lot of non-rigid motion, but become less robust if the expression level is low and/or the SNR is poor. Empirically, for a standard two-photon experiment with a 512 x 512 pixel FOV, pixel size ~1μm x 1μm, imaged at 30Hz, we have observed that grid_size = [128,128], overlap_pre = [32,32], mot_uf = 4, max_shift = [20,20], and max_dev = [8,8] typically gives good results.
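As a concrete illustration, those empirical values translate into an options struct along these lines (a starting point, not a prescription):

```matlab
% starting point for a 512 x 512 FOV, ~1 um pixels, 30 Hz two-photon data
options_nonrigid = NoRMCorreSetParms( ...
    'd1',512,'d2',512, ...
    'grid_size',[128,128], ...   % patch size; smaller handles more non-rigid motion
    'overlap_pre',[32,32], ...   % overlap between neighboring patches
    'mot_uf',4, ...              % upsampling factor of the motion field
    'max_shift',[20,20], ...     % maximum rigid shift per patch (pixels)
    'max_dev',[8,8]);            % maximum deviation of each patch from the rigid shift
```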

- My file is too big to load in memory. What can I do?

Instead of passing a dataset already loaded in RAM as the first input Y to normcorre or normcorre_batch, Y can simply be a pointer to where the file is located (e.g., Y = 'big_file.tif'). Supported file types include .tif, .h5 and .mat memory mapped files. However, make sure that when setting the options struct, options.d1 and options.d2 correspond to the actual dimensions of the FOV, since size(Y,1) and size(Y,2) will not return them now that Y is a string.

Similarly, the registered dataset can and should be saved directly to the hard drive by modifying options.output_type. Supported output file types include .tif, .h5 and .mat memory mapped files. See also the motion-correction-on-large-datasets entry on the wiki.
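A sketch of this out-of-memory workflow, assuming a large TIFF stack; the filenames, the use of imfinfo to recover the FOV size, and the specific output_type / h5_filename values are illustrative, so check the options supported by your version:

```matlab
Y = 'big_file.tif';                       % pass the file name instead of the loaded data
info = imfinfo(Y);                        % recover FOV dimensions without loading the movie
options = NoRMCorreSetParms( ...
    'd1',info(1).Height,'d2',info(1).Width, ...       % must match the actual FOV size
    'grid_size',[128,128],'mot_uf',4, ...
    'output_type','hdf5', ...             % write the registered movie directly to disk
    'h5_filename','big_file_registered.h5');          % assumed output file name

[~,shifts,template] = normcorre_batch(Y,options);     % registered data is saved to the .h5 file
```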

- Can I apply this algorithm to 1p micro-endoscopic data?

Yes, the simple approach of 1) high-pass spatial filtering the data, 2) estimating motion on the filtered data, and 3) applying the estimated shifts to the original data seems to work quite well. See the script demo_1p.m for details.
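A minimal sketch of that three-step recipe, loosely following demo_1p.m; the kernel sizes and the simple subtraction-based high-pass filter below are illustrative, so refer to the demo for the exact filter used there:

```matlab
% Yf: raw 1p movie (d1 x d2 x T), already loaded in memory and cast to single
gSig = 7;  gSiz = 17;                                     % illustrative kernel sizes (pixels)
psf  = fspecial('gaussian', gSiz, gSig);
Y_hp = Yf - imfilter(Yf, psf, 'symmetric');               % 1) high-pass filtered copy

options = NoRMCorreSetParms('d1',size(Yf,1),'d2',size(Yf,2), ...
    'grid_size',[64,64],'mot_uf',4,'max_shift',[20,20]);

[~,shifts] = normcorre_batch(Y_hp, options);              % 2) estimate motion on the filtered data
M = apply_shifts(Yf, shifts, options);                    % 3) apply the shifts to the original data
```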

- I have more questions.

Please use the Gitter channel for questions. If you believe you have found a bug, open an issue on GitHub.