Running decoding_in_time yields different output every time #9
Adding the output of the function with 'verbose = True':
This repeats 12 times (12it).
Hi @JensBlack, the
good luck with your decoding journey!
Thanks a bunch! I really appreciate your input.
Hi @lposani,
Thank you for your previous help with #8.
I have continued on my journey with Decodanda and tried the "decoding_in_time" function.
My questions to you:
I am assuming that all these plots show values from the same distribution of possible performances.
See below for details...
The setup:
1 session of 1 freely moving mouse in an arena for approx. 1.5 h. The mouse explores and sleeps, hence the labels (awake and sleep, [0, 1]) to address this simple binary case. We recorded Ca2+ activity from a region in the brain (in this session, 61 neurons at 20 Hz). The calcium activity is z-scored.
The dummy hypothesis:
Is the activity predictive of the state change between awake and sleep or vice versa?
Implementation:
Generating pseudo trials and time_from_onset
Each behavior onset is captured and used as the center of a pseudo trial, so that the trials are eligible for the calculation of "time_from_onset". For this calculation, I generate a range of numbers (negative and positive) with each onset = 0 and the distance between two onsets split into a pre-onset period (-half to 0) and a post-onset period (0 to half).
This should be in line with your example and the "time_from_onset" generated in the synthetic data.
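For concreteness, here is a minimal numpy sketch of how I build that variable (function and variable names are my own, not Decodanda's):

```python
import numpy as np

def time_from_onset(n_samples, onsets):
    """For each sample, the signed distance (in samples) to its trial's
    onset: negative pre-onset, 0 at the onset, positive post-onset.
    Midpoints between consecutive onsets split each interval into a
    post-onset half and the next trial's pre-onset half."""
    t = np.empty(n_samples, dtype=int)
    onsets = sorted(onsets)
    # trial boundaries: start, midpoints between onsets, end
    edges = [0] + [(a + b) // 2 for a, b in zip(onsets, onsets[1:])] + [n_samples]
    for onset, lo, hi in zip(onsets, edges, edges[1:]):
        t[lo:hi] = np.arange(lo, hi) - onset
    return t
```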
This is a visualization of the above. The onsets (vertical grey lines) are at 0 (black horizontal line), while the trial pre-onset and post-onset times range from negative to positive values (blue line).
There are a total of 76 state changes (onsets) with 38 per class (awake/sleep).
The distribution of pre-onset and post-onset timesteps:
The red-area indicates the time boundaries I am interested in (see code below).
Decodanda code
The output of 5 different runs of the same code (none of the input variables changed):
Given that there is no legend or caption for the output of this function, my assumption is that the plot depicts the average performance of k-fold cross-validated models (blue dots) across each time bin (blue line) vs. the performance on shuffled labels (black line, with the grey area showing +/- SD).
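To make my reading of the plot concrete, here is a generic numpy-only sketch of that comparison for a single time bin (a toy nearest-centroid decoder, not Decodanda's internals; all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(76, 61))   # pseudo-trials x neurons, one time bin
y = np.repeat([0, 1], 38)       # awake / sleep labels

def cv_accuracy(X, y, k=5, rng=rng):
    """Mean accuracy of a nearest-class-centroid decoder over k random folds."""
    idx = rng.permutation(len(y))
    scores = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        mu0 = X[train][y[train] == 0].mean(axis=0)
        mu1 = X[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(X[fold] - mu1, axis=1)
                < np.linalg.norm(X[fold] - mu0, axis=1)).astype(int)
        scores.append((pred == y[fold]).mean())
    return np.mean(scores)

real = cv_accuracy(X, y)  # one "blue dot" per time bin
null = np.array([cv_accuracy(X, rng.permutation(y))  # shuffled-label null
                 for _ in range(25)])                # black line +/- grey SD
```

Note that in such a procedure both the fold assignment and the label shuffles are stochastic, so repeated runs without a fixed seed naturally yield slightly different curves, which may be what the issue title describes.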
Addendum (just for clarity):
Given your previous input, the decoding results for the same underlying data look like this (with the previously suggested trial scheme of chunking each bout into 20-sample pieces, i.e., 1 second of data):
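For reference, the chunking scheme above can be sketched in numpy as follows (my own sketch of the suggested scheme, not the actual code used):

```python
import numpy as np

def chunk_bouts(labels, chunk=20):
    """Split each contiguous bout of a constant label into consecutive
    chunks of `chunk` samples (20 samples = 1 s at 20 Hz). Returns a
    trial id per sample; leftover samples shorter than a full chunk
    are marked -1 and excluded."""
    labels = np.asarray(labels)
    trial = np.full(labels.shape, -1, dtype=int)
    bout_edges = np.flatnonzero(np.diff(labels) != 0) + 1
    starts = np.concatenate(([0], bout_edges))
    stops = np.concatenate((bout_edges, [len(labels)]))
    tid = 0
    for s, e in zip(starts, stops):
        for c in range(s, e - chunk + 1, chunk):
            trial[c:c + chunk] = tid
            tid += 1
    return trial
```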