
add FacemapInterface #752

Draft · wants to merge 27 commits into main
Conversation

@bendichter (Contributor) commented on Feb 18, 2024:

relates to #188

@bendichter (Contributor, Author) commented:

Doing a little more research, it looks like this is handling the old MATLAB-based format, which is the format used by the Higley Lab. Here is an explanation of the fields (a loading sketch follows the list):

MATLAB output:

  • nX,nY: cell arrays of number of pixels in X and Y in each video taken simultaneously
  • sc: spatial downsampling constant used
  • ROI: [# of videos x # of areas] - areas to be included for multivideo SVD (in downsampled reference)
  • eROI: [# of videos x # of areas] - areas to be excluded from multivideo SVD (in downsampled reference)
  • locROI: location of small ROIs (in order running, ROI1, ROI2, ROI3, pupil1, pupil2); in downsampled reference
  • ROIfile: in which movie is the small ROI
  • plotROIs: which ROIs are being processed (these are the ones shown on the frame in the GUI)
  • files: all the files you processed together
  • npix: array of number of pixels from each video used for multivideo SVD
  • tpix: array of number of pixels in each view that was used for SVD processing
  • wpix: cell array of which pixels were used from each video for multivideo SVD
  • avgframe: [sum(tpix) x 1] average frame across videos computed on a subset of frames
  • avgmotion: [sum(tpix) x 1] average motion across videos computed on a subset of frames
  • motSVD: cell array of motion SVDs [components x time] (in order: multivideo, ROI1, ROI2, ROI3)
  • uMotMask: cell array of motion masks [pixels x components]
  • runSpeed: 2D running speed computed using phase correlation [time x 2]
  • pupil: structure of size 2 (pupil1 and pupil2) with 3 fields: area, area_raw, and com
  • thres: pupil sigma used
  • saturation: saturation levels (array in order running, ROI1, ROI2, ROI3, pupil1, pupil2); only saturation levels for pupil1 and pupil2 are used in the processing, others are just for viewing ROIs
Note: an ROI is [1x4]: [y0 x0 Ly Lx].

source
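For orientation, here is a minimal sketch (not from this PR) of how the MATLAB output described above could be inspected in Python. It assumes the struct is saved under the variable name "proc" and that the file is not in MATLAB v7.3 format (which `scipy.io.loadmat` cannot read; h5py would be needed instead); the file name is a placeholder.

```python
from scipy.io import loadmat

# Sketch: inspect the MATLAB-based Facemap output described above.
# "face_proc.mat" is a placeholder name; Facemap saves results as *_proc.mat.
mat = loadmat("face_proc.mat", squeeze_me=True, struct_as_record=False)
proc = mat["proc"]  # assumption: the struct is saved under the variable name "proc"

# Spatial layout of the multivideo SVD
print(proc.nX, proc.nY)  # pixels in X/Y for each simultaneously recorded video
print(proc.sc)           # spatial downsampling constant

# Motion SVDs, in order: multivideo, ROI1, ROI2, ROI3
mot_svd = proc.motSVD

# Pupil results: fields area, area_raw, and com for pupil1 and pupil2
for pupil in proc.pupil:
    print(pupil.area.shape, pupil.com.shape)
```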

@alessandratrapani self-assigned this on Feb 20, 2024
@alessandratrapani (Contributor) commented:
stub version here
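The linked stub is not reproduced here, but as a rough sketch, a neuroconv data interface typically takes this shape (subclassing BaseDataInterface; method bodies are placeholders, and the actual stub in the linked branch may differ):

```python
from typing import Optional

from pynwb import NWBFile

from neuroconv.basedatainterface import BaseDataInterface


class FacemapInterface(BaseDataInterface):
    """Sketch of a stub interface for the MATLAB-based Facemap output (*_proc.mat)."""

    def __init__(self, file_path: str, verbose: bool = True):
        # file_path: path to the Facemap *_proc.mat file
        super().__init__(file_path=file_path, verbose=verbose)

    def add_to_nwbfile(self, nwbfile: NWBFile, metadata: Optional[dict] = None):
        # A real implementation would parse the proc struct (fields listed in the
        # comment above) and write pupil traces, running speed, and motion SVDs.
        raise NotImplementedError
```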

@@ -18,6 +18,7 @@
 from neuroconv import NWBConverter
 from neuroconv.datainterfaces import (
     DeepLabCutInterface,
+    FacemapInterface,
A reviewer (Contributor) commented on this diff:
Can you add tests here, @alessandratrapani? We have our test data repository here; could you make a small example file, or ask for small example data that we can add to our testing suite?
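A hypothetical minimal test along the lines requested above might look like this; the testing-data path and the example file name are placeholders, since the Facemap example file does not exist yet:

```python
from pathlib import Path

from neuroconv.datainterfaces import FacemapInterface

# Placeholder: local clone of the testing-data repository; the Facemap example
# file requested above would need to be added there first.
BEHAVIOR_DATA_PATH = Path("/path/to/behavior_testing_data")


def test_facemap_interface(tmp_path):
    file_path = BEHAVIOR_DATA_PATH / "Facemap" / "example_proc.mat"  # hypothetical file
    interface = FacemapInterface(file_path=str(file_path))

    nwbfile_path = tmp_path / "facemap.nwb"
    interface.run_conversion(nwbfile_path=str(nwbfile_path), overwrite=True)
```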

@alessandratrapani changed the title from "add FacemapInterface, which currently only supports eye tracking" to "add FacemapInterface" on May 21, 2024
@CodyCBakerPhD marked this pull request as draft on May 21, 2024, 14:37
@CodyCBakerPhD linked an issue on Aug 19, 2024 that may be closed by this pull request
Successfully merging this pull request may close these issues:

  • [Feature]: add facemap conversion