
[Ideas] Ways to improve the ophys module #1

Open
CodyCBakerPhD opened this issue Sep 29, 2023 · 16 comments

@CodyCBakerPhD
Member

i) Create a separate neurodata type for OpticChannel, rather than having optic channels be special objects generated by the ImagingPlane

This would make it possible to reuse the same optic channels across different ImagingPlanes.

ii) A generic MicroscopySeries to replace OnePhotonSeries and TwoPhotonSeries. I also saw a recent talk in which 3P was gaining popularity; we have also used OnePhotonSeries for light-sheet data before, and that association raised eyebrows.
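A minimal sketch of how (i) and (ii) could fit together, using plain Python dataclasses. All class names here are hypothetical stand-ins for the proposal, not the existing PyNWB `OpticalChannel` / `ImagingPlane` / `TwoPhotonSeries` classes:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the proposal, not an existing NWB schema:
# OpticChannel as a standalone type that multiple imaging planes can link
# to, and one generic MicroscopySeries replacing the per-modality series.

@dataclass
class OpticChannel:
    name: str
    description: str
    emission_lambda: float  # nm

@dataclass
class ImagingPlane:
    name: str
    optic_channels: List[OpticChannel]  # linked, not owned

@dataclass
class MicroscopySeries:
    name: str
    imaging_plane: ImagingPlane
    modality: str  # "1p", "2p", "3p", "light-sheet", "widefield", ...

# The same channel object can now be reused across planes:
green = OpticChannel("green", "GCaMP emission channel", 513.0)
plane_a = ImagingPlane("plane_a", [green])
plane_b = ImagingPlane("plane_b", [green])
series = MicroscopySeries("scan", plane_a, modality="light-sheet")
```

Because the channel is a standalone object, both planes link to the identical instance rather than each generating its own private copy.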

@CodyCBakerPhD
Member Author

@alessandratrapani @weiglszonja @pauladkisson This is the first issue in which to compile all our ideas and pain points for improving the ophys representations.

@pauladkisson
Member

pauladkisson commented Oct 2, 2023

Ideas that I have come across:

  • BackgroundResponseSeries to represent background 'neuropil' activity without calling it an 'ROI'.
  • Volumetric sampling rate vs. frame sampling rate for depth imaging.
  • Having multiple optical channels on a single ImagingPlane does not match the acquisition metadata.
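For the second bullet, the two rates reduce to a simple relationship. A hedged sketch, ignoring flyback/settling frames; `volumetric_rate` is a hypothetical helper, not an existing API:

```python
# In depth (volumetric) imaging, one volume is completed every n_planes
# frames, so the volumetric sampling rate differs from the per-frame rate.
# Hypothetical helper for illustration; flyback frames are ignored.
def volumetric_rate(frame_rate_hz: float, n_planes: int) -> float:
    return frame_rate_hz / n_planes

# e.g. a 30 Hz frame rate swept over 10 depth planes yields 3 Hz per volume
print(volumetric_rate(30.0, 10))  # → 3.0
```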

@CodyCBakerPhD
Member Author

BackgroundResponseSeries

Would that link to a separate PlaneSegmentation table too?

@CodyCBakerPhD
Member Author

  • emission_lambda depends on the indicator and is used to represent the filter used by the optic channel
  • The full emission information is a spectrum, not a scalar, but it can be determined from the indicator using external resources
  • Whereas excitation_lambda is a property of the optic channel
  • one subject could express multiple indicators, so indicator should be a list attached to the Subject
  • but it should maybe be its own neurodata type so that it can link to the optic channel, since an optic channel exists to target the emission from a specific indicator (not several indicators); perhaps the optic channel field would be target_indicator
  • multiple indicators are excited simultaneously by the same excitation lambda
  • a more modular neurodata type, perhaps Microscope, a subtype of Device, could have properties describing laser or LED characteristics
  • excitation strength can be time-dependent, as seen in fibre photometry; this is now more commonly used with a single indicator but multiple optic channels, e.g. to discriminate functional vs. anatomical signals
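The bullets above can be sketched as follows, with Indicator as its own type, a list of expressed indicators on the Subject, and an optional target_indicator link on the channel. All type names are hypothetical, not an existing NWB schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch: Indicator as a standalone type, Subject carrying a
# list of expressed indicators, and OpticChannel optionally targeting one.

@dataclass
class Indicator:
    label: str  # e.g. "GCaMP6f"; the full emission spectrum is
                # determinable from this label via external resources

@dataclass
class Subject:
    subject_id: str
    indicators: List[Indicator] = field(default_factory=list)

@dataclass
class OpticChannel:
    name: str
    excitation_lambda: float  # a property of the channel, in nm
    target_indicator: Optional[Indicator] = None  # a link, not a copy

gcamp = Indicator("GCaMP6f")
tdtom = Indicator("tdTomato")
subject = Subject("mouse-01", [gcamp, tdtom])  # multiple expressed indicators
functional = OpticChannel("green", 920.0, target_indicator=gcamp)
anatomical = OpticChannel("red", 920.0, target_indicator=tdtom)
# both indicators are excited simultaneously by the same excitation lambda
```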

@CodyCBakerPhD
Member Author

Current draft proposal:

OpticChannel is a new neurodata type that can exist independently of the ImagingPlane

  • it has filter information; how much is still up for discussion (manufacturer, model number)
  • main property describes the spectrum being filtered

Microscope is a new neurodata type, subtype of Device

  • has attributes describing the excitation pattern used (potentially time-varying), similar to a command voltage, or a scalar if no variation was used

ImagingPlane is modified to only describe physical space, either of disjoint planes or contiguous volume

MicroscopySeries is a new neurodata type that describes 1P, 2P, 3P, light-sheet, widefield, confocal images

  • a regular scan line rate (regular because of the reliability of piezo designs) is available as an attribute for 2P/3P but would be unset for other modalities
  • a regular plane acquisition rate is likewise available for any volumetric series, describing the delay between plane acquisitions relative to the start of acquisition of each frame
  • the unit attribute should be removed because it is otherwise always set to n.a. or a.u. (quantal flux can be calculated after the fact, but raw imaging would never have physical units)
  • resolution is not relevant to microscopy; it is more of an ephys-relevant value
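The draft proposal above could be sketched as follows. A minimal illustration under the stated assumptions; every name here (Microscope, MicroscopySeries, the rate attributes) is hypothetical, not a finalized schema:

```python
from dataclasses import dataclass
from typing import Optional, Sequence

# Hypothetical sketch of the draft proposal: Microscope as a Device
# subtype carrying the excitation pattern, and one generic MicroscopySeries
# with optional per-modality rates, with no unit/resolution attributes.

@dataclass
class Device:
    name: str

@dataclass
class Microscope(Device):
    # a time-varying excitation pattern (similar to a command voltage),
    # or a single value if no variation was used
    excitation_pattern: Sequence[float]

@dataclass
class MicroscopySeries:
    name: str
    microscope: Microscope
    scan_line_rate_hz: Optional[float] = None          # 2P/3P only
    plane_acquisition_rate_hz: Optional[float] = None  # volumetric only
    # note: no `unit` (raw imaging is a.u.) and no `resolution` attribute

two_photon = MicroscopySeries(
    "functional_scan",
    Microscope("scope", [0.5]),
    scan_line_rate_hz=15_000.0,
)
widefield = MicroscopySeries("widefield_scan", Microscope("scope2", [0.5]))
```

Leaving the rate attributes optional lets one series type cover 1P, 2P, 3P, light-sheet, widefield, and confocal without per-modality subclasses.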

@weiglszonja

SegmentationImages should also have a link to the plane segmentation they belong to. Currently they are added to an Images container, and the only way to determine which plane they belong to is by relying on the name of the container.
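A sketch of that suggestion: summary images carrying an explicit link to their PlaneSegmentation instead of encoding the relationship in a container name. Hypothetical types, not the current NWB Images container:

```python
from dataclasses import dataclass

# Hypothetical sketch: an explicit link replaces name-based matching.

@dataclass
class PlaneSegmentation:
    name: str

@dataclass
class SegmentationImages:
    name: str
    plane_segmentation: PlaneSegmentation  # explicit link, not a name match

seg = PlaneSegmentation("plane_0_segmentation")
imgs = SegmentationImages("summary_images", plane_segmentation=seg)
# the association survives any renaming of the container itself
```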

@CodyCBakerPhD
Member Author

@weiglszonja This would apply to all other summary images as well, right?

@weiglszonja

@CodyCBakerPhD Yes, like what we see for Pinto with the contrast, PCA and vasculature "masks".

@alessandratrapani
Collaborator

looping @h-mayorquin in

@pauladkisson
Member

Hey guys, I found a nice paper on future trends in microscopy. There are a few new developments that I think we should watch for, such as barcode labeling, which uses multiple fluorophores per molecule to let scientists look at more markers simultaneously.

@pauladkisson
Member

Based on the discussion in ndx-holographic-stimulation, should we also include changes in the Ogen module? Or should we limit those to a separate extension?

@CodyCBakerPhD
Member Author

If we're extending ogen to include ogen from a 2P system, and the metadata for all of that overlaps, then probably yes.

@CodyCBakerPhD
Member Author

From NeurodataWithoutBorders/helpdesk#64 (reply in thread) posted by @ehennestad

I would argue that the indicator should not be coupled to an optical channel. True, optical channels should be optimised to capture signal only from the indicator of interest and block out other indicators, but this might not always hold (bleedthrough is a common problem). Furthermore, multiple indicators are present in an imaging plane regardless of the configuration (filters) of the optical channel and regardless of whether an optical channel is active. On the other hand, I understand that the imaging plane is an abstraction and that, in the ideal case, it is coupled to a "perfect" optical channel and thus captures only one indicator. I don't have a strong opinion either way; I was just curious what common practice is.

@ehennestad

ehennestad commented Feb 3, 2024

I really liked this suggestion though:

but, should maybe be its own neurodata type so that it can link to the optic channel since an optic channel exists to target the emission from a specific indicator (not several indicators); perhaps the optic channel field would be target_indicator

I still think it makes the most sense to define the indicator somewhere else (e.g. associated with the subject) and keep the optical channel independent from it. The optical channel is the same from one session to the next, independent of what kind of subject or indicator is being imaged.

@weiglszonja

@alessandratrapani @pauladkisson FYI:
Multichannel volumetric imaging NEP nwb-extensions/nwbep-review#3

@CodyCBakerPhD
Member Author

Something else that has come up in my latest conversion is a slightly irregular Z-axis (depth) on each frame acquisition, such that a regular grid_spacing, as we have now, does not exactly capture that descriptor of the volume.
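One way to capture this would be to store explicit per-plane depths rather than a single regular step. A hedged sketch with made-up measurements; `plane_depths_um` is illustrative only:

```python
# Illustrative only: an irregular Z axis stored as explicit per-plane
# depths (µm). Consecutive steps are unequal, so a single regular
# grid_spacing value cannot represent this volume exactly.
plane_depths_um = [0.0, 9.8, 20.1, 29.7, 40.2]  # measured, not uniform
steps = [b - a for a, b in zip(plane_depths_um, plane_depths_um[1:])]
# the step sizes vary (roughly 9.6-10.5 µm here), unlike a regular grid
```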
