\section{Introduction}
Modern systems neuroscientists have access to the activity of many thousands to potentially millions of neurons via multi-photon calcium imaging and high-density silicon probes \cite{stringer2019high,demas2021high,jun2017fully,steinmetz2021neuropixels}. Such interfaces provide a qualitatively different picture of brain activity than was achievable even a decade ago.
However, neural interfaces increasingly face a trade-off -- the number of neurons that can be accessed (capacity) is often far greater than the number that can be simultaneously monitored (bandwidth). For example, with 2-photon calcium imaging (2P; \textbf{Fig. 1a}, \textit{top}), hundreds to thousands of neurons are serially scanned by a laser that traverses the field of view, resulting in different neurons being sampled at different times within an imaging frame. As a consequence, a trade-off exists between the size of the field-of-view (and hence the number of neurons monitored), the sampling frequency, and the signal-to-noise ratio with which each neuron is sampled. Whereas current analysis methods treat 2P data as if all neurons within a field-of-view were sampled at the same time at the imaging frame rate, the fact that each neuron is sampled at staggered, known times within the frame could be exploited to increase the time resolution.
\begin{figure}[!b]
\centering
\includegraphics[width=0.8\textwidth]{figs/1_spacetime_schematic_adjusted.pdf}
\caption{Exploiting space-time trade-offs in neural interfaces using SBTT. (a) In 2-photon calcium imaging (top), individual neurons are serially scanned at a low frame rate, resulting in staggered sample times. In modern electrophysiological recordings (bottom), bandwidth or power constraints prevent simultaneous monitoring of all recording sites. (b) Observed neuronal activity reflects latent, low-dimensional dynamics (captured by the function $f$). (c) SBTT applied to a sequential autoencoder for inferring latent dynamics from neural population activity.}
\label{fig:spacetime_schematic}
\end{figure}
Electrophysiological interfaces face similar trade-offs (\textbf{Fig. 1a}, \textit{bottom}). With groundbreaking high-density probes such as Neuropixels and Neuroseeker \cite{jun2017fully,steinmetz2021neuropixels,raducanu2017time}, simultaneous monitoring of all recording sites is either not currently possible or limits the signal-to-noise ratio, so users typically monitor a selected subset of sites within a given recording session. For example, Neuropixels 2.0 probes contain up to 5120 electrodes, 384 of which can be recorded simultaneously \cite{steinmetz2021neuropixels}. In other situations, power constraints might make it preferable to restrict the number of channels that are simultaneously monitored, such as in wireless or fully-implanted applications where battery life and heat dissipation are key challenges \cite{miranda2010hermesd,borton2013implantable,simeral2021home}. As newer interfacing strategies provide a pathway to hundreds of thousands of channels for revolutionary brain-machine interfaces \cite{sahasrabuddhe2021argo,musk2019integrated}, neural data processing strategies that can leverage dynamic deployment of recording bandwidth might allow substantial power savings.
Solutions to these space-time trade-offs may come from the structure of neural activity itself. A large body of work suggests that the activity of individual neurons within a large population is not independent, but instead is coordinated through a lower-dimensional, latent state that evolves with stereotyped temporal structure (\textbf{Fig. 1b}). We can represent the state at time $t$ as a vector $\vct{x}_t \in \mathbb{R}^D$ that evolves according to dynamics captured by a function $f$ such that $\vct{x}_{t+1} \approx f(\vct{x}_t)$. Rather than directly observing the latent state $\vct{x}_t$, we observe neural activity that we represent as $\vct{y}_t \in \mathbb{R}^N$, where $\vct{y}_t \approx h(\vct{x}_t)$ for some function $h$. Because $f$ imposes significant structure on the trajectory of the $\vct{x}_t$'s, and because we typically expect the dimension $D$ of $\vct{x}_t$ to be far smaller than the number of possible observations $N$, one might expect it to be possible to estimate the $\vct{x}_t$'s without observing every neuron at every time step (i.e., by measuring only some of the elements of each $\vct{y}_t$), just as we generally infer latent states from only a fraction of the neurons in a given area. If so, principled exploitation of the space-time trade-off of neural interfaces might achieve higher-fidelity or more bandwidth-efficient characterization of neural population activity.
To our knowledge, no methods have demonstrated inference of dynamics from data in which the set of neurons being monitored changes dynamically at short intervals. To address this challenge, we introduce \textit{selective backpropagation through time} (SBTT; \textbf{Fig. 1c}), a method to train deep generative models of latent dynamics from data where the identity of observed variables varies from sample to sample. Here we explore applications of SBTT to state space modeling of neural population activity that obeys low-dimensional dynamics.
This paper is organized as follows. Section 2 provides an overview of related work. Section 3 details SBTT and its integration with sequential autoencoders for modeling neural population dynamics. Section 4 demonstrates the effectiveness of this solution in achieving more efficient and higher-fidelity inference of latent dynamics in applications to electrophysiological and calcium imaging data.
\section{Related work}
There is a long and rich literature on methods for system identification, particularly in the case of \emph{linear} dynamical systems. The last several years have witnessed a burst of activity in establishing a more robust theoretical understanding of when and how well these methods work. Particularly relevant to our approach,~\cite{Hardt2018} shows that under suitable conditions on the dynamical system, performing gradient descent on the reconstruction loss of observed data can provably recover the parameters of the system despite the nonconvexity of the problem. Additional guarantees are provided in~\cite{Simchowitz2018,Hazan2018,Oymak2019,Tsiamis2019,Lee2020} which make varying assumptions on the underlying dynamics and the observation function, the existence of an observable control input, and the stochasticity of the dynamical system. Adversarial noise models are further considered in~\cite{Simchowitz2019,Simchowitz2020}. We emphasize, however, that all of the above works limit their focus to \emph{linear} dynamical systems where the observations are \emph{fully sampled}, i.e., where all of $\vct{y}_t = \mtx{H} \vct{x}_t$ is measured for all $t$.
In the case of a linear observation model ($\vct{y}_t = \mtx{H} \vct{x}_t$) but where we observe only a subset of the elements of each $\vct{y}_t$, the problem is reminiscent of the \emph{low-rank matrix completion} problem~\cite{Davenport2016}. Specifically, by letting $\mtx{Y}$ and $\mtx{X}$ denote the matrices whose columns are given by the $\vct{y}_t$ and $\vct{x}_t$ respectively, we can write $\mtx{Y} = \mtx{H} \mtx{X}$. If $D \ll N$, this is a low-rank matrix, and hence could be recovered from a random sampling of $O(D \log N)$ elements of each column of $\mtx{Y}$~\cite{Davenport2016}. However, this strategy essentially assumes that there is no relationship between the $\vct{x}_t$ -- one would expect to obtain significant improvements by exploiting the dynamical structure among the $\vct{x}_t$ imposed by $f$. Indeed, in~\cite{Xu2016,Xu2017} the authors show that if the dynamics $f$ are \emph{known}, then it is possible to significantly reduce the sampling requirements. However, the question of \emph{learning} such an $f$ from undersampled observations has again not been addressed in this literature.
In some application domains, there have been hints in this direction. In particular, in the related contexts of recommendation systems~\cite{Hidasi2015,Wu2017} and student knowledge tracking~\cite{Piech2015,Xu2020} there have been successful empirical efforts aimed at learning dynamical systems for modeling how user preferences/knowledge change over time. While such approaches have also had to confront the issue of missing observations (items that are not rated or questions that are not answered), they are aided by the existence of rich sources of additional metadata (e.g., tags) that lead to fundamentally different approaches than what we take here.
Within our application domain, a variety of methods have been developed to infer latent dynamical structure from neural population activity on individual trials, including those based on Gaussian processes \cite{byron2009gaussian, duncker2019learning,zhao2017variational,wu2017gaussian}, linear \cite{macke2012empirical,gao2016linear,kao2015single} and switching linear dynamical systems \cite{petreska2011dynamical,linderman2017bayesian,glaser2020recurrent}, and nonlinear dynamical systems such as recurrent neural networks \cite{pandarinath2018inferring,keshtkaran2019enabling,she2020neural,keshtkaran2021large}, hidden Markov models \cite{hernandez2018novel}, neural ODEs \cite{kim21neuralode}, and transformers \cite{ye2021representation}. Variants of these methods accommodate cases where the particular observed neurons change over long time periods (e.g., over the course of days) \cite{pandarinath2018inferring,nonnenmacher2017extracting, kao2017leveraging}, but these are not appropriate for cases where neurons are intermittently sampled on short timescales. As described below, several of these methods would be amenable to using SBTT to adapt to intermittent sampling, as SBTT should be applicable to any neural network architecture that learns weights via backpropagation through time.
\section{Selective backpropagation through time}
\subsection{Overview}
SBTT is a learning rule for updating the weights of a neural network that allows backpropagation of loss for the portions of data that are present while preventing missing data from corrupting the gradient signal. The technique optimizes the model to reconstruct observed data while extrapolating to the unobserved data. The implementation of SBTT is related to other approaches that augment network inputs and cost functions to reflect different subsets of the data matrix across samples, in particular coordinated dropout \cite{keshtkaran2019enabling}, masked language modeling \cite{devlin2018bert}, and DeepInterpolation \cite{lecoq2020removing}. Though not designed for missing data, these previous approaches split fully-observed data into two portions: a portion that is provided at the input to the network, and a portion that is used to compute the loss at the output. SBTT uses a similar strategy to accommodate missing data, by zero-filling missing input points and aggregating only losses for observed data points at the output. To demonstrate SBTT, we provide code for a basic experiment using a sequential autoencoder and Lorenz dataset (\url{https://github.com/snel-repo/sbtt-demo}).
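The zero-fill and masked-loss recipe can be sketched in a few lines. The following is a minimal NumPy illustration with hypothetical helper names, not the released implementation:

```python
import numpy as np

def sbtt_loss_and_input(y, observed, predict):
    """Sketch of the SBTT data flow (illustrative, not the authors' API).

    y        : (T, N) data matrix; values at missing entries are arbitrary
    observed : (T, N) boolean mask, True where y was actually measured
    predict  : function mapping the zero-filled input to reconstructions
    """
    # Zero-fill missing entries before feeding the network
    y_in = np.where(observed, y, 0.0)
    y_hat = predict(y_in)
    # Aggregate squared error over observed entries only, so gradients
    # from unobserved points never reach the weights
    resid = (y_hat - y) * observed
    loss = 0.5 * np.sum(resid ** 2) / observed.sum()
    return loss, y_hat
```

The mask enters twice: once to zero-fill the input, and once to restrict the loss to observed entries.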
\subsection{Illustration with a simple linear dynamical system}
We begin by describing our approach in the context of a simple linear dynamical system. In the case where we have no (observable) inputs, we can model a linear dynamical system as
\begin{align*}
\vct{x}_{t+1} = \mtx{A} \vct{x}_t + \vct{w}_t \\
\vct{y}_{t} = \mtx{H} \vct{x}_t + \vct{z}_t.
\end{align*}
Here, $\vct{x} \in \mathbb{R}^D$ represents a hidden state, $\vct{y} \in \mathbb{R}^N$ represents our observations, and $\vct{w}_t$ and $\vct{z}_t$ represent noise. The matrix $\mtx{A}$ models the dynamics of the hidden state, and $\mtx{H}$ models the observation function of our system. In this setting, our task is to learn the parameters $\mtx{A}$ and $\mtx{H}$ given the observations $\vct{y}_0, \ldots, \vct{y}_{T-1}$ as well as the initial system state $\vct{x}_0$.
SBTT is a variation of standard backpropagation in which loss terms attributed to missing observations are ignored when computing updates. Concretely, consider a linear recurrent network that can learn this linear model using the least squares loss
\[ \mathcal{L} = \frac{1}{T}\sum_{t=0}^{T-1} \frac{1}{2}\| \vct{y}_t - \mtx{H} \vct{x}_t \|_2^2.
\]
If the observation vector $\vct{y}_t$ contains a missing entry at index $i$, the least squares loss would not contain the $(y_t^i - (\mtx{H}\vct{x}_t)^i)^2$ term, where the superscript $i$ denotes the $i$th index of a vector.
If $\vct{o}_t = \mtx{H}\vct{x}_t$ is taken to be the output of the recurrent network at time step $t$, then the loss with respect to the outputs of the network is
\begin{equation}
\frac{\partial \mathcal{L}}{\partial \vct{o}_t} = \frac{1}{T}(\vct{o}_t - \vct{y}_t ). \label{eq: loss output}
\end{equation}
SBTT requires that loss terms, and subsequently loss gradients, related to missing observations are ignored. This means that elements in the gradient vector \eqref{eq: loss output} are ignored and set to 0 at indices $i$ where the corresponding observations, $y_t^i$, are missing. This gradient is then backpropagated through time to obtain gradients with respect to the model parameters $\mtx{A}$ and $\mtx{H}$ as shown below
\begin{align*}
\frac{\partial \mathcal{L}}{\partial \mtx{H}} &= \sum_{t = 0}^{T-1} \frac{\partial \mathcal{L}}{\partial \vct{o}_t}(\vct{x}_t)^\intercal, \\
\frac{\partial \mathcal{L}}{\partial \mtx{A}} &= \sum_{t = 1}^{T-1} \frac{\partial \mathcal{L}}{\partial \vct{x}_t}\vct{x}_{t-1}^\intercal,
\end{align*}
where $\frac{\partial \mathcal{L}}{\partial \vct{x}_t}$ is computed recursively using backpropagation through time:
\[\frac{\partial \mathcal{L}}{\partial \vct{x}_t} = \mtx{A}^\intercal \frac{\partial \mathcal{L}}{\partial \vct{x}_{t+1}} + \mtx{H}^\intercal\frac{\partial \mathcal{L}}{\partial \vct{o}_t},\]
with the recursion initialized at the final time step, where $\frac{\partial \mathcal{L}}{\partial \vct{x}_{T-1}} = \mtx{H}^\intercal \frac{\partial \mathcal{L}}{\partial \vct{o}_{T-1}}$.
These parameters can then be updated using gradient descent.
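The forward pass and masked gradient recursions above can be implemented and checked numerically. The following NumPy sketch (our own illustrative code, with a noise-free forward pass for clarity) follows the equations term by term:

```python
import numpy as np

def sbtt_gradients(A, H, x0, Y, mask):
    """Sketch of SBTT for the linear system x_{t+1} = A x_t, o_t = H x_t.

    Y    : (T, N) observations; mask : (T, N) boolean, True where observed.
    Returns the masked loss and the gradients dL/dA and dL/dH.
    """
    T = Y.shape[0]
    # Forward pass: roll the state forward and collect outputs
    xs = [x0]
    for _ in range(T - 1):
        xs.append(A @ xs[-1])
    xs = np.stack(xs)                      # (T, D)
    os_ = xs @ H.T                         # (T, N)
    resid = (os_ - Y) * mask               # zero residual at missing entries
    loss = 0.5 / T * np.sum(resid ** 2)
    # dL/do_t with missing indices set to 0 -- the selective step
    dL_do = resid / T
    # Backward recursion: dL/dx_t = A^T dL/dx_{t+1} + H^T dL/do_t
    dL_dx = np.zeros_like(xs)
    dL_dx[T - 1] = H.T @ dL_do[T - 1]
    for t in range(T - 2, -1, -1):
        dL_dx[t] = A.T @ dL_dx[t + 1] + H.T @ dL_do[t]
    dL_dH = dL_do.T @ xs                   # sum_t (dL/do_t) x_t^T
    dL_dA = sum(np.outer(dL_dx[t], xs[t - 1]) for t in range(1, T))
    return loss, dL_dA, dL_dH
```

The masked gradients agree with finite differences of the masked loss, which is one way to sanity-check an SBTT implementation.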
\vspace{3mm}
\subsection{Integration with a deep generative model of neural population dynamics}
Here we demonstrate the use of SBTT with a recently developed framework for inferring nonlinear latent dynamics from neural population recordings. This framework, Latent Factor Analysis via Dynamical Systems (LFADS), is a sequential variational auto-encoder (SVAE), detailed in \cite{pandarinath2018inferring}. LFADS models single-trial latent dynamics by learning the initial state of the dynamical system, the dynamical rules that govern state evolution, and any time-varying inputs that cannot be explained by the dynamics (i.e., in the case of a non-autonomous dynamical system). Briefly, a bidirectional RNN encoder operates on the neural spiking sequence $\mathbf{y}(t)$ and produces a conditional distribution over initial condition $\mathbf{z}$, $Q(\mathbf{z}|\mathbf{y}(t))$. A Kullback-Leibler (KL) divergence penalty between the uninformative prior $P(\mathbf{z})$ and $Q(\mathbf{z}|\mathbf{y}(t))$ serves as a regularizer. The initial condition is then drawn from $Q(\mathbf{z}|\mathbf{y}(t))$ and mapped to an initial state for a generator RNN, which learns to approximate the dynamical rules underlying the neural data. A controller RNN takes as input the state of the generator at each time step, along with a time-varying encoding of $\mathbf{y}(t)$ (produced by a second bidirectional RNN encoder), and injects a time-varying input $\mathbf{u}(t)$ into the generator. Similar to $\mathbf{z}$, $\mathbf{u}(t)$ is drawn from a parameterized, time-varying distribution $Q(\mathbf{u}(t)|\mathbf{y}(t))$ produced by the controller. A second KL penalty is applied between $P(\mathbf{u}(t))$ and $Q(\mathbf{u}(t)|\mathbf{y}(t))$. At each time step, the generator state evolves with input from the controller and the controller receives delayed feedback from the generator. The generator states are linearly mapped to factors, which are in turn mapped to the firing rates of the neurons using a linear mapping followed by an exponential nonlinearity.
LFADS assumes a Poisson emission model for the observed spiking activity. The optimization objective combines the reconstruction cost of the observed spiking activity (i.e., the Poisson likelihood of the observed spiking activity given the rates produced by the generator network), the KL penalties described above, and L2 regularization penalties on the weights of the recurrent networks. During training, network weights are optimized using stochastic gradient descent and backpropagation through time.
The first step in applying SBTT to LFADS is to zero-fill the missing data before feeding it into the initial condition (IC) and controller input (CI) encoders. After passing the data through the remaining hidden layers, we use the resulting rate estimates to compute a reconstruction loss (Poisson negative log-likelihood) for each observed neuron-timepoint and aggregate by taking the mean. The modified reconstruction loss is combined with other losses as in the standard LFADS model. The network only optimizes for reconstruction of observed data and is free to interpolate at unobserved points.
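The modified reconstruction loss can be written compactly. Below is an illustrative NumPy sketch (the helper name is ours; constant terms in the likelihood that do not depend on the rates are dropped):

```python
import numpy as np

def masked_poisson_nll(rates, spikes, observed):
    """Poisson negative log-likelihood per neuron-timepoint, averaged over
    observed entries only, so unobserved points contribute no gradient.

    rates, spikes, observed : (T, N) arrays; observed is a boolean mask.
    """
    # Poisson NLL up to a constant in the rates: rate - k * log(rate)
    nll = rates - spikes * np.log(rates + 1e-8)
    return np.sum(nll * observed) / observed.sum()
```

In the full model this term replaces the standard reconstruction cost and is combined with the KL and L2 penalties as usual.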
Throughout this paper we use population-based training along with coordinated dropout, together known as AutoLFADS, to optimize our models \cite{keshtkaran2019enabling,keshtkaran2021large,jaderberg2017population}. This framework is essential for achieving reliably high-performing LFADS models, regardless of dataset statistics. Hyperparameters, search ranges, and training details are given in the supplement.
\section{Experiments}
\subsection{High performance with limited bandwidth on primate electrophysiological recordings}
\label{section: ephys_sparse}
A key target application of AutoLFADS with SBTT is to enable reduced sampling of electrodes: either to enable recording from larger populations of electrodes with limited bandwidth (such as with Neuropixels), or to reduce power consumption (such as for fully-implantable brain-machine interfaces). To investigate the performance of AutoLFADS models trained with SBTT, we started with a large and well-characterized dataset containing electrophysiological recordings from macaque primary motor and dorsal premotor cortex (M1/PMd) \cite{churchland2010cortical,maze_datarelease}. The data were collected during a delayed reaching task, in which the monkey made both straight and curved reaches from a center position, around virtual barriers (the maze), to one of 108 possible target positions. The dataset consisted of 2296 trials with 202 sorted units aligned to movement onset in a window from 250 ms before to 450 ms after this point. Spike counts were binned at 10 ms (70 bins). We held out 50 randomly selected units from modeling to use for evaluation of inferred latent factors. We simulated various missing data scenarios for the remaining 152 units by randomly masking a fraction of the observations at each time step for each trial (\textbf{Fig. 2a}, \textit{top}).
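The masking procedure just described can be sketched as follows (an illustrative NumPy version; array shapes and the fixed seed are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(n_trials, n_bins, n_units, drop_frac):
    """At each time step of each trial, mark a random fraction of the
    units as unobserved; True entries are the observed samples."""
    n_drop = int(round(drop_frac * n_units))
    mask = np.ones((n_trials, n_bins, n_units), dtype=bool)
    for i in range(n_trials):
        for t in range(n_bins):
            drop = rng.choice(n_units, size=n_drop, replace=False)
            mask[i, t, drop] = False
    return mask
```

Because a fresh subset is drawn at every time step, the identity of the observed units changes on the timescale of a single bin.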
For each of the masked datasets, we used AutoLFADS with SBTT to robustly train neural dynamics models. Latent factors and firing rates were inferred for all time steps, despite the missing (masked) observations. Even with 70\% dropped samples, the inferred firing rates showed structure comparable to the model of fully observed data (\textbf{Fig. 2a}, \textit{bottom}).
To determine whether the models were able to capture biologically relevant information from sparsely sampled data, we evaluated the inferred latent factors in terms of their ability to predict hand velocity (\textbf{Fig. 2b}) and the spiking activity of held-out units (\textbf{Fig. 2c}). As a baseline, we trained a Gaussian Process Factor Analysis (GPFA) model (40 latent dimensions, 20 ms bins) on the fully observed dataset \cite{byron2009gaussian,elephant18}. GPFA is a commonly-used and versatile method for extracting latent structure from neural population activity, and these parameters have been validated on this dataset in prior work \cite{pandarinath2018inferring}. We trained simple linear decoders to predict hand velocity from the inferred latent factors with an 80 ms delay (50/50, trial-wise train-test split), and evaluated using the coefficient of determination ($R^2$), averaged over the x- and y-dimensions. For AutoLFADS with SBTT, decoding performance showed a minimal decline until around 80\% of the data had been dropped, with some models outperforming the GPFA baseline using as little as 15\% of the original data (\textbf{Fig. 2b}). To measure how well the models captured the population structure, we trained generalized linear models (GLMs)~\cite{paninski2004maximum,Jas2020} to predict the spikes for the held-out units and evaluated fit quality using pseudo-$R^{2}$ ($pR^{2}$). Similar to the decoding results, we found that AutoLFADS with SBTT captured population structure significantly better than fully observed GPFA, and that the information content of the factors declined slowly until about 80\% missing samples (\textbf{Fig. 2c}). More detail on the $R^2$ and $pR^2$ metrics can be found in Supplement Section E.
To evaluate the importance of modeling latent dynamics for accurate inference with sparsely observed data, we also trained the Neural Data Transformer (NDT) with selective backpropagation on the same datasets \cite{ye2021representation}. We found that decoding performance from inferred firing rates declined faster than for AutoLFADS with SBTT, but NDT still outperformed GPFA with up to 40\% missing data (\textbf{Fig. 2b}).
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figs/2_sparse_maze_wNDT_adjusted.pdf}
\caption{SBTT allows inference of latent dynamics from M1/PMd electrophysiology data with sparse observations. (a) Spike count input and inferred rate output of LFADS for the same example trial with increasingly sparse observations. Masked data are shown in white, observed zeroes are shown in light purple, and nonzero spike counts are shown in darker shades. Units are sorted by timing of firing rate peaks for the fully sampled model. (b) Accuracy of linear hand velocity decoding from inferred latent factors. (c) Quality of GLM fits from inferred latent factors to 50 held-out units. $pR^{2}$ values for each held-out unit are normalized to the corresponding values achieved by the GPFA baseline. Points denote the median across all units. Shaded areas depict the 25th and 75th quantiles.}
\label{fig:sparse_maze}
\end{figure}
\subsection{Recovery of high frequency features in simulated 2P calcium imaging data}
High-frequency features of neural responses are generally assumed to be lost in 2P imaging due to limited scanning speeds and indicator kinetics. We hypothesized that some of the loss is actually due to standard 2P data processing, which discards information regarding sub-frame sampling time of individual neurons, and that SBTT could recover some of this information. The inherently staggered sampling of neurons due to raster scanning can be treated as a time series with missing values and higher temporal resolution than the frame rate. We tested SBTT on both simulated and real calcium imaging data. In both cases, we adapted AutoLFADS to better account for the statistics of deconvolved calcium activity (AutoLFADS-ZIG, see supplement) by substituting the underlying Poisson emission model with a Zero-Inflated Gamma distribution~\cite{wei2020zero}. In our experiments we compared three methods: AutoLFADS-ZIG with SBTT (ALFADS-SBTT), a standard frame-resolution version of AutoLFADS-ZIG without SBTT (ALFADS), and Gaussian smoothing of deconvolved calcium activity.
We generated artificial 2P data from a population of simulated neurons (278 neurons) whose firing rates were linked to the state of an underlying Lorenz system~\cite{zhao2017variational,sussillo2016lfads} (see supplement). To assess the ability to reconstruct latent dynamics at different frequencies, we simulated Lorenz systems with different speeds. For each Lorenz system we report the peak of the Z dimension power spectrum, which contains the most concentrated and highest frequencies. Fluorescence traces were simulated from the spike trains using an order-1 autoregressive model followed by a nonlinearity and injected with 4 sources of noise (see supplement). Firing rates were simulated with a sampling frequency of 100 Hz, and a ``location'' was randomly chosen for each simulated neuron, such that sampling times for different neurons were staggered to simulate 2P laser scan times. This produced fluorescence traces with one of three possible associated phases (0, 11, or 22 ms) and an overall sampling rate of 33 Hz. We deconvolved neural activity from the fluorescence traces using the OASIS algorithm~\cite{friedrich2017fast} as implemented in the CaImAn package~\cite{giovannucci_caiman_2019}.
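The staggered sampling just described can be recast as a missing-data time series at the higher rate. The following is an illustrative NumPy sketch (trial dimension omitted; function and variable names are ours, and the 3-phase layout follows the simulation above):

```python
import numpy as np

def stagger_to_sparse(frames, phase_idx, upsample):
    """Place each neuron's frame-rate samples at its scan phase on an
    upsampled time grid; all other time points are marked missing.

    frames    : (F, N) deconvolved activity at the frame rate
    phase_idx : (N,) integer scan phase in [0, upsample) for each neuron
    upsample  : sub-frame bins per frame (3 here: 0, 11, 22 ms at ~33 Hz)
    """
    F, N = frames.shape
    data = np.zeros((F * upsample, N))
    mask = np.zeros((F * upsample, N), dtype=bool)
    rows = np.arange(F) * upsample
    for n in range(N):
        data[rows + phase_idx[n], n] = frames[:, n]
        mask[rows + phase_idx[n], n] = True
    return data, mask
```

Each neuron is observed exactly once per frame, but at a phase determined by its position in the scan path, so the resulting (data, mask) pair has three times the temporal resolution of the frame rate.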
For ALFADS-SBTT we used the sub-frame phase information to generate intermittently-sampled data. In contrast, for both ALFADS and Gaussian smoothing, we discarded phase information and collapsed samples into a single time bin per frame, as is standard in 2p imaging data processing. To evaluate the performance in recovering the ground truth Lorenz states, we trained a mapping from the output of each method (i.e., the inferred event rates from ALFADS-SBTT and ALFADS, and smoothed deconvolved events by Gaussian smoothing; signals were interpolated to 100 Hz for the latter methods) to the ground truth Lorenz states using cross-validated ridge regression. We used $R^2$ between the true and inferred Lorenz states as a metric of performance.
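The evaluation mapping can be sketched as follows: a simplified, single-split ridge fit without an intercept term (the actual cross-validation and regularization settings are described in the text and supplement; `lam` here is an arbitrary illustrative choice):

```python
import numpy as np

def ridge_r2(X, Z, lam=1.0, train_frac=0.5):
    """Fit a ridge map from method output X (T, K) to ground-truth Lorenz
    states Z (T, M) on a train split, then report R^2 on the held-out split."""
    n_train = int(train_frac * len(X))
    Xtr, Ztr = X[:n_train], Z[:n_train]
    Xte, Zte = X[n_train:], Z[n_train:]
    # Closed-form ridge solution: (X^T X + lam I)^{-1} X^T Z
    W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ Ztr)
    pred = Xte @ W
    ss_res = np.sum((Zte - pred) ** 2)
    ss_tot = np.sum((Zte - Zte.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot
```

The same mapping is applied to each method's output, so differences in $R^2$ reflect differences in the information the methods recover rather than in the readout.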
The true and predicted Lorenz states for two example trials are illustrated in \textbf{Fig. 3a}. The performance of Gaussian smoothing and ALFADS dropped substantially for higher Lorenz state frequencies, while ALFADS-SBTT maintained reasonable estimates ($R^2 \approx 0.8$) up to 15 Hz (\textbf{Fig. 3a \& b}) and never dropped below $0.4$ in the range of tested frequencies.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figs/3_synth_calcium_adjusted.pdf}
\caption{SBTT improves inference of high-frequency dynamics from simulated 2P data with known dynamical structure. (a) True and inferred Lorenz latent states (X/Y/Z dimensions) for a single example trial from Lorenz systems simulated at two different frequencies (7 Hz and 15 Hz). Black: ground truth. Colored: inferred. (b) Performance in estimating the Lorenz Z dimension as a function of Lorenz speed, quantified by variance explained ($R^2$) for all three methods. The speed of the Lorenz dynamics was quantified by the peak location of the power spectrum of the Lorenz Z dimension (100 Hz sampling frequency).}
\label{fig:synth_calcium}
\end{figure}
\subsection{Improved representation of hand kinematics in mouse 2P calcium imaging data}
We next applied SBTT to real 2P calcium imaging data we collected from motor cortex in a mouse performing a forelimb water grab task. The dataset comprised 475 trials in which the mouse was cued by a tone to reach to a left or right spout and retrieve a droplet of water with its right forepaw. Pyramidal cells expressing the GCaMP6f calcium indicator were imaged with a two-photon microscope at a 31 Hz frame rate, and a subset of 439 modulated neurons within the field-of-view (FOV) were considered for analysis (FOV shown in \textbf{Fig. 4a}, \textit{left}; example calcium traces in \textbf{Fig. 4a}, \textit{right}). The mouse’s forepaw position was tracked in 3D at 150 Hz with stereo cameras and DeepLabCut~\cite{mathis2018deeplabcut}. Calcium events were deconvolved with OASIS~\cite{friedrich2017fast,giovannucci_caiman_2019}.
2P data for ALFADS-SBTT were processed analogously to the simulations, using neuron locations within the FOV to inform the intermittent sampling times. Trials represented a window spanning 200 ms before to 800 ms after the mouse's reach onset. This resulted in 100 time points per trial for ALFADS-SBTT, and 31 time points per trial for ALFADS and Gaussian smoothing. For both ALFADS-SBTT and ALFADS, trials were split into 80/20 train/validation.
To compare representations inferred by ALFADS-SBTT and ALFADS, we first evaluated how closely the single-trial event rates inferred for each neuron resembled that neuron's peri-stimulus time histogram (PSTH). PSTHs were calculated by taking the average of the Gaussian-smoothed deconvolved events across trials within each experimental condition. Because the mouse's reaches were not stereotyped to each spout (i.e., left or right), we subgrouped trials into 4 finer conditions based on forepaw Z position during the reach. ALFADS-SBTT single-trial event rates were more strongly correlated with neurons' PSTHs compared to those inferred by ALFADS (\textbf{Fig. 4b}).
We next decoded the mouse’s single-trial forepaw kinematics (position and velocity) based on each model’s output. Decoding was performed using ridge regression with 5-fold cross validation. We used $R^2$ between the true and predicted hand positions and velocities as a metric of performance. $R^2$ was averaged across XYZ behavioral dimensions and all 5 folds of the test sets. Decoding using ALFADS-SBTT inferred rates outperformed results from smoothing deconvolved events, or from the ALFADS inferred rates (\textbf{Fig. 4c}). Because the improvement of decoding performance for position is modest, we further assessed how the improvement was distributed as a function of temporal frequency. We computed the coherence between the true and decoded positions for each method (\textbf{Fig. 4d}). Consistent with the simulations, ALFADS-SBTT predictions showed higher coherence with true position than predictions from other methods, with improvements more prominent at higher frequencies (5--15 Hz).
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figs/4_real_calcium_adjusted.pdf}
\caption{SBTT improves inference of latent dynamics from mouse 2P calcium imaging data. (a) Left: an example field-of-view (FOV), colored by neurons. Right: calcium traces (dF/F) from a single trial for 5 example neurons. (b) Performance of capturing empirical PSTHs was quantified by computing the correlation coefficient r between the inferred single-trial event rates and empirical PSTHs, comparing ALFADS vs ALFADS-SBTT. Each point represents an individual neuron. (c) Decoding performance was quantified by computing the $R^2$ between the true and decoded position (left) and velocity (right) across all trials. (d) Quality of reconstructing the kinematics across frequencies was quantified by measuring coherence between the true and decoded position for all three methods.}
\label{fig:real_calcium}
\end{figure}
\subsection{Using high-bandwidth observations to improve performance in low-bandwidth conditions}
In implantable or wireless applications, using the device's full interface bandwidth might incur significant power costs, which would burden users with frequent battery recharging. However, it may be possible to leverage high-bandwidth recordings from limited time periods to learn models of latent dynamics, and then switch to low-bandwidth modes for subsequent long-term operation, in order to minimize ongoing power use. Such an approach is enabled by the stability of latent dynamics over months to years~\cite{pandarinath2018inferring,kao2017leveraging,gallego2020long}.
We tested these ideas on the same electrophysiological dataset described in section \ref{section: ephys_sparse}. After training AutoLFADS models on the fully sampled data, we retrained the initial condition and controller input encoders using SBTT on each of the sparsely sampled datasets. The weights for the rest of the network remained fixed. In this way, the dynamical rules learned from the fully sampled data are maintained, while the mappings from data to the initial conditions and controller inputs are adapted for sparse data. Retraining the encoding networks in this way (\textbf{Fig. 5}, “Retrained sparse”) maintained performance to high levels of missing data, outperforming AutoLFADS trained on fully observed data but run with missing data (\textbf{Fig. 5}, “Trained full”) or training directly on sparsely-sampled data (\textbf{Fig. 5}, “Trained sparse”, same as in \textbf{Fig. 2b}). These results show that dynamics models are learned most accurately on fully observed data, but that the learned dynamics can be used to model sparsely sampled data if models are adapted to the sparser domain using SBTT.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figs/5_retraining_maze_adjusted.pdf}
\caption{Retraining full-data LFADS encoders on sparse data improves decoding performance. (a) Hand velocity decoding performance as a function of dropped samples (as in \textbf{Fig. 2b}). “Trained full” indicates training on fully observed data and inference on sparse data. “Trained sparse” indicates training and inference on sparse data. “Retrained sparse” indicates training on fully observed data, followed by encoder retraining and inference on sparse data. (b) Spike count input and inferred rate output of LFADS. Conventions are as in \textbf{Fig. 2a}.}
\label{fig:retraining}
\end{figure}
\section{Discussion}
We introduced SBTT, a novel approach for learning latent dynamics from irregularly or sparsely sampled time series data. In experiments on real electrophysiology data from macaque motor cortex, we show that models trained with SBTT learn biologically relevant neural dynamics with up to 80\% masked training data. On data from a synthetic 2P calcium imaging simulation, we show that models trained with SBTT capture high frequency features of the latent dynamics that are not captured at frame resolution. We also show improved behavioral decoding performance on real 2P imaging data from mouse M1. Finally, we demonstrate that retraining the early layers of a full-data model on sparse datasets using SBTT can substantially improve decoding performance at the most challenging sparsity levels, outperforming models trained on the sparse data alone. Taken together, these results show that SBTT is a valuable technique for training models with irregularly or sparsely sampled time series data.
\subsection{Limitations}
Though we made an effort to characterize performance across multiple potential applications, it remains untested how this approach would generalize to other experimental settings (microscopes, calcium indicators, expression levels), model systems, and brain areas or tasks with more complex or higher-dimensional dynamics \cite{keshtkaran2021large}. Nevertheless, we are optimistic that these properties will extend to AutoLFADS models that use SBTT in other settings. Applications to brain-machine interfaces await incorporation of neural network-based dynamics models into closed-loop, real-time systems. We also note that hardware implementations of intermittent sampling for electrophysiology are still largely unexplored, and might incur time or power costs when switching between channels. This might change the point at which intermittent sampling is beneficial from a power or performance perspective. We hope that this work indicates new directions for future generations of recording hardware that focus on high interface capacities and rapid switching between contacts.
\subsection{Broader impact}
Our results could pave the way to substantially decreased power consumption for fully-implantable brain-machine interfaces. Ultimately, this should result in more reliable and less burdensome assistive devices for people with disabilities. Further, expanding the information that can be gathered through a given recording bandwidth has scientific implications, and could enable neuroscientists to ask new questions via larger-scale studies of the brain.
Like any resource-intensive technology, this technique has the potential to increase inequity by only benefiting those who can afford the most advanced neural interfaces. Efforts to deploy such technologies should weigh input from ethicists to ensure that everyone benefits from these scientific innovations \cite{klein2015engineering,goering2021recommendations}.
\begin{ack}
We thank M. Rivers and R. Vescovi for help with the real-time camera setup, and D. Sabatini for contributions to the behavioral control software. This work was supported by the Emory Neuromodulation and Technology Innovation Center (ENTICe), NSF NCS 1835364, NIH Eunice Kennedy Shriver NICHD K12HD073945, the Simons Foundation as part of the Simons-Emory International Consortium on Motor Control (CP), the Alfred P. Sloan Foundation (CP, MTK), NSF Graduate Research Fellowship DGE-2039655 (ARS), NSF NCS 1835390, The University of Chicago, the Neuroscience Institute at The University of Chicago (MTK), and a Beckman Young Investigators Award (AG). The work was also supported by the following collaborative awards (PI: Prof. Ellen Hess, Emory): NIH NINDS R21 NS116311, Imagine, Innovate and Impact (I3) Funds from the Emory School of Medicine and through the Georgia CTSA NIH UL1-TR002378, and a pilot grant from the Emory Udall Center of Excellence for Parkinson’s Research. The authors declare no competing interests.
\end{ack}
\newpage
\bibliographystyle{unsrtnat}
\section{Training the AutoLFADS models}
\subsection{LFADS architecture}
The architecture of LFADS is described in more detail in the original publication \cite{pandarinath2018inferring}. We used a dimension of 64 for the initial condition (IC) encoder, controller input (CI) encoder, initial condition, and controller. The controller output dimension was 2 and the generator dimension was 100. The latent factor dimensionality was 40 for the maze dataset and 100 for both calcium datasets.
\subsection{Hyperparameter tuning}
LFADS models benefit from appropriate hyperparameter (HP) tuning, as optimal HP combinations can vary from dataset to dataset \cite{keshtkaran2019enabling, keshtkaran2021large}. As mentioned in the main text, we use AutoLFADS \cite{keshtkaran2021large} to ensure appropriate HP tuning. The framework combines a regularization strategy (coordinated dropout; CD \cite{keshtkaran2019enabling}) with a large-scale framework for optimizing model hyperparameters (population-based training; PBT \cite{jaderberg2017population}). The details of these strategies are outlined in previous work; here we provide specifics for replicating our findings.
PBT trains many models in parallel, while using evolutionary algorithms to exploit and explore high-performing HP combinations. In our experiments, we used 18 workers on a local cluster with 9 NVIDIA GeForce RTX 2080 GPUs (i.e., generations of 18 models were trained in parallel). The following HPs were searched using PBT, with initial values sampled from distributions listed in parentheses: Adam learning rate (0.001 for all models), CD rate (0.5 for all models), dropout rate ($Uniform(0, 0.7)$), L2 penalty on the generator recurrent weights ($LogUniform(-5, -1)$), L2 penalty on the controller recurrent weights ($LogUniform(-5, -1)$), KL penalty on the controller output ($LogUniform(-6, -4)$), and KL penalty on the initial condition distribution ($LogUniform(-6, -4)$). After an initial ramping period (80 epochs), during which regularization penalties were linearly increased to full strength, we trained all models in intervals of 50 epochs (one generation). For all the datasets used in this paper, trials were split into 80/20 training and validation sets. At the end of each generation, models were scored using exponentially smoothed negative log-likelihood (NLL) on validation data and participants in the next generation were selected with a binary tournament. Models with worse scores copied the weights and mutated HPs from the winning models. Training was stopped when a fractional improvement of less than 0.001 (maze data) or less than 0 (simulated calcium and real calcium data) in best-in-generation score was achieved over 25 generations. The model checkpoints at epochs with the best smoothed validation NLL were used for subsequent analysis.
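The exploit step described above (binary tournament, weight copying, and HP mutation) can be sketched as follows. This is an illustrative sketch, not the actual AutoLFADS/PBT implementation; the dictionary fields and the `mutate` hook are assumptions.

```python
import random

def binary_tournament(population, mutate):
    """One PBT exploit step: each worker is paired with a random rival,
    and the worker with the better (lower) smoothed validation NLL wins.
    Losers copy the winner's weights and mutate the winner's HPs."""
    next_gen = []
    for worker in population:
        rival = random.choice(population)
        winner = worker if worker["score"] <= rival["score"] else rival
        next_gen.append({
            "weights": winner["weights"],   # copy winning model weights
            "hps": mutate(winner["hps"]),   # perturb winning hyperparameters
            "score": None,                  # re-evaluated next generation
        })
    return next_gen
```

A real implementation would also handle the ramping period and the stopping criterion on best-in-generation improvement described above.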
\section{Zero-Inflated Gamma (ZIG) emissions model}
Recent work demonstrated that deconvolved calcium events in 2P data can be robustly modeled with a zero-inflated gamma distribution~\cite{wei2020zero}. We therefore replaced the Poisson emissions model of LFADS, which links latent factors to observed events, with the ZIG model. Concretely, a ZIG distribution is a two-component mixture model that combines a gamma distribution to model the continuous-valued deconvolved events and a point mass that represents the probability of zero events (missed spikes~\cite{wei2020zero}):
\[y_n(t) \sim (1-q_n(t))\cdot\delta(0) + q_n(t)\cdot \mathrm{Gamma}(\alpha_n(t),k_n(t),loc_n),
\]
where $y_n(t)$ is the distribution of observed deconvolved events, $\alpha_n(t)$ and $k_n(t)$ are the scale and shape parameters of the gamma distribution, and $q_n(t)$ denotes the probability of non-zeros, for neuron $n$ at time $t$. The location parameter $loc_n$ of the gamma distribution for neuron $n$ was fixed as the minimum nonzero deconvolved event ($s_{min}$) for that neuron. We modified LFADS so it infers the three time-varying parameters ($\alpha_n(t)$, $k_n(t)$, and $q_n(t)$) for each neuron. This is achieved through linear transformation of the factors followed by a trainable, scaled sigmoid nonlinearity. The outputs of the sigmoid for $\alpha_n(t)$ and $k_n(t)$ are scaled by positive parameters (one for each neuron) that are optimized alongside network weights. An L2 penalty is applied between the scaling factors and a PBT-searchable prior to prevent extreme values. The training objective is to minimize the negative log-likelihood of the deconvolved events given the inferred parameters:
\[-\sum_{n}\sum_{t} \log p(y_n(t)|\mathrm{ZIG}(\hat{\alpha}_n(t), \hat{k}_n(t), \hat{q}_n(t)))
\]
The event rate for neuron $n$ at time $t$ was estimated by taking the mean of the inferred ZIG distribution: $\hat{q_n}(t)\cdot(\hat{k_n}(t)\cdot\hat{\alpha_n}(t) + s_{min})$.
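The ZIG mean and log-density under this parameterization can be sketched as below; this is a hedged illustration using `scipy.stats.gamma` (shape $k$, scale $\alpha$, location $s_{min}$), not the model's actual training code.

```python
import numpy as np
from scipy import stats

def zig_mean(q, k, alpha, s_min):
    """Mean of the zero-inflated gamma: P(nonzero) times the mean of a
    Gamma(shape=k, scale=alpha) shifted by the location s_min."""
    return q * (k * alpha + s_min)

def zig_logpdf(y, q, k, alpha, s_min, eps=1e-8):
    """Log-density of one observation y: a point mass at zero with
    weight (1 - q), and a shifted gamma with weight q."""
    if y <= 0:
        return np.log(1 - q + eps)
    return np.log(q + eps) + stats.gamma.logpdf(y, a=k, scale=alpha, loc=s_min)
```

The `zig_mean` expression matches the event-rate estimate $\hat{q}_n(t)\cdot(\hat{k}_n(t)\cdot\hat{\alpha}_n(t) + s_{min})$ given above.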
\section{Calcium simulations}
\subsection{Simulation pipeline}
Synthetic data were generated with underlying dynamics that follow a Lorenz system, as described in previous work~\cite{sussillo2016lfads,zhao2017variational}. Lorenz parameters were set to standard values ($\sigma$: 10, $\rho$: 28, and $\beta$: 8/3) and $\Delta t$ was set to 0.01. We generated Lorenz systems with various speeds and frequency peaks by downsampling the original Lorenz states. We simulated a population of 278 neurons with firing rates taken as linear projections of the Lorenz state variables using random weights, followed by an exponential nonlinearity. Scaling factors were applied so that the baseline firing rate for all neurons was 3 spikes/sec. We simulated rates for 32 conditions and sampled spikes for 60 trials per condition. Each condition was obtained by starting the Lorenz system with a random initial state vector and running it for 900ms.
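The generation recipe above can be sketched as follows. The forward-Euler step and the readout weight scale are illustrative choices, not the exact simulation code.

```python
import numpy as np

def simulate_lorenz(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                    init=None, rng=None):
    """Forward-Euler integration of the Lorenz system with the standard
    parameters used above; each condition starts from a random state."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty((n_steps, 3))
    x[0] = rng.standard_normal(3) if init is None else init
    for t in range(1, n_steps):
        px, py, pz = x[t - 1]
        dx = sigma * (py - px)
        dy = px * (rho - pz) - py
        dz = px * py - beta * pz
        x[t] = x[t - 1] + dt * np.array([dx, dy, dz])
    return x

def rates_from_latents(latents, n_neurons, base_rate=3.0, rng=None):
    """Random linear readout followed by an exponential nonlinearity,
    rescaled so the average rate is base_rate spikes/s."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.standard_normal((latents.shape[1], n_neurons)) * 0.05
    rates = np.exp(latents @ w)
    return rates * (base_rate / rates.mean())
```

Spikes per bin could then be drawn with `np.random.poisson(rates * dt)` for each of the 60 trials per condition.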
Generating realistic calcium traces from the synthetic spike trains followed a multi-step process. First, independent Gaussian noise ($s.d. = 0.1$) was added to each spike in the spike train to model the variability in spike amplitudes observed in real calcium data. Next, we modeled the calcium concentration dynamics ($c(t)$) as an autoregressive process of order 1:
\begin{equation}
c(t) = \gamma c(t-1)+s(t)
\end{equation}
where $s(t)$ represents the number of spikes at time $t$ and $\gamma \sim \mathbf{U}(0.93, 0.95)$ is the autoregressive coefficient, drawn uniformly to account for variability across neurons in calcium imaging movies. Subsequently, we computed the noiseless fluorescence signals by passing the calcium dynamics through a nonlinear transformation estimated from the literature~\cite{dana2019high} for the calcium indicator GCaMP6f. Because this nonlinearity distorts the relationship between spike size and trace size, we rescaled the trace using min-max normalization. Finally, Gaussian noise ($\sim \mathbf{N}(0,sn)$) and Poisson noise (simulated as Gaussian with mean 0 and variance proportional to the signal amplitude at each time point via a constant $d$) were added to the normalized traces. The simulated fluorescence signals were deconvolved using OASIS parameterized with an order 1 autoregressive model and $s_{min}=0.1$ ($s_{min}$ corresponds to the location parameter of a ZIG distribution)~\cite{friedrich2017fast}.
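A minimal sketch of these trace-generation steps (spike-amplitude jitter, AR(1) calcium dynamics, normalization, and the two noise sources) is below. The literature-derived GCaMP6f nonlinearity is omitted here (identity), and the default parameter values are assumptions matching the text.

```python
import numpy as np

def synth_fluorescence(spikes, gamma=0.94, amp_sd=0.1, sn=0.3, d=0.3, rng=None):
    """Generate a noisy fluorescence trace from a spike-count train:
    jittered spike amplitudes -> AR(1) calcium -> min-max normalization ->
    additive Gaussian plus signal-dependent (Poisson-like) noise.
    The GCaMP6f nonlinearity from the paper is omitted (identity)."""
    rng = np.random.default_rng() if rng is None else rng
    # jitter amplitudes of nonzero spikes (s.d. = amp_sd)
    s = spikes + (spikes > 0) * rng.normal(0.0, amp_sd, size=spikes.shape)
    # AR(1) calcium concentration: c(t) = gamma * c(t-1) + s(t)
    c = np.zeros_like(s, dtype=float)
    for t in range(len(s)):
        c[t] = (gamma * c[t - 1] if t > 0 else 0.0) + s[t]
    # min-max normalize the (would-be nonlinearly transformed) trace
    f = (c - c.min()) / (c.max() - c.min() + 1e-12)
    # Gaussian noise plus Poisson-like noise with signal-dependent variance
    noise = (rng.normal(0.0, sn, size=f.shape)
             + rng.normal(0.0, 1.0, size=f.shape) * np.sqrt(d * np.abs(f)))
    return f + noise
```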
The noise level associated with each fluorescence trace is a crucial parameter. High noise levels lead to very poor spike detection and very low noise levels enable a near-perfect reconstruction of the spike train. In order to select a fair level of noise we matched the SNR distributions of the simulated data to that of real data from motor cortex. SNR was estimated as the ratio between the noise level of the fluorescence signal estimated by OASIS and the largest detected spike inferred by OASIS. We found that a truncated normal distribution of noise levels for Gaussian and Poisson noise best matched the SNR distributions. More precisely, for each neuron, $sn=d$ was sampled independently from a truncated normal distribution $\mathbf{N}(0.3, 0.02)$ truncated below 0.09. We also measured the correlation coefficient $r$ between the deconvolved events and ground truth spikes. With the above noise settings, the mean $r$ was 0.32, which is consistent with standard benchmarks~\cite{berens2018spikefinder} for OASIS. It is worth noting that real data feature a broad range of noise levels that depend on the imaging conditions, depth, expression level, laser power and other factors. In our setting the goal was not to investigate all possible noise conditions, but rather to provide simulated data whose properties roughly matched the features of the real calcium imaging data used in this paper.
\subsection{Mapping to ground truth Lorenz states}
\label{section: lorenz_mapping}
The A-ZIG-SBTT and A-ZIG-FR models output inferred calcium event rates at 100 Hz and 33 Hz, respectively, whereas Gaussian smoothing outputs 33 Hz smoothed deconvolved events. To evaluate performance in recovering the 100 Hz ground truth Lorenz states, the 33 Hz outputs were linearly interpolated to 100 Hz.
Since our goal was to quantify modeling performance by estimating the underlying Lorenz states, we trained a mapping from the output of each model to the ground truth Lorenz states using ridge regression. First, we split the trials into training (80\%) and test (20\%) sets. We used the training set to optimize the regularization coefficient using 5-fold cross-validation, and used the optimal regularization coefficient to train the mapping on the full training set. We then quantified state estimation performance by applying this trained mapping to the test set and calculating the coefficient of determination ($R^2$) between the true and predicted Lorenz states. We repeated the above procedure five times with train/test splits drawn from the data in an interleaved fashion. We reported the mean $R^2$ across the repeats, such that all reported numbers reflect held-out performance.
The same cross-validated ridge regression procedure was used for the real calcium data, i.e., to train decoders to predict positions and velocities from the inferred event rates produced by each model.
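The cross-validated ridge procedure above can be sketched with scikit-learn, where `RidgeCV` performs the inner 5-fold search over the regularization coefficient; the alpha grid is an assumption.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def ridge_map_score(X, Y, alphas=np.logspace(-4, 4, 9), seed=0):
    """Fit a ridge map from model outputs X to targets Y (e.g., Lorenz
    states or kinematics): 80/20 train/test split, inner 5-fold CV over
    the regularization strength, then R^2 on the held-out 20%."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(
        X, Y, test_size=0.2, random_state=seed)
    model = RidgeCV(alphas=alphas, cv=5).fit(X_tr, Y_tr)
    return r2_score(Y_te, model.predict(X_te), multioutput="uniform_average")
```

The full procedure would repeat this over five interleaved train/test splits and report the mean $R^2$.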
\section{Real 2P Calcium imaging}
We tested SBTT with real 2P calcium data that we collected from motor cortex in a mouse performing a forelimb water grab task. These data have not been published previously, and thus we provide detailed experimental methods below.
\subsection{Surgical procedures}
All procedures were approved by the Animal Care and Use Committee at the institution where the experiments were performed. One male Ai148D transgenic mouse (TIT2L-GC6f-ICL-tTA2; Jackson Laboratory) was used and underwent a single surgery. The mouse was injected subcutaneously with dexamethasone (8 mg/kg) 24 hours and 1 hour before surgery. The mouse was anesthetized with 2-2.5\% inhaled isoflurane gas, then injected intraperitoneally with a ketamine-medetomidine mixture (60 mg/kg ketamine, 0.25 mg/kg medetomidine), and maintained on a low level of supplemental isoflurane (0-1\%) if it showed any signs that the depth of anesthesia was insufficient. Meloxicam was also administered subcutaneously (2 mg/kg) at the beginning of the surgery and for 1-3 subsequent days. The scalp was shaved, cleaned, and resected, the skull was cleaned and the wound margins glued to the skull with tissue glue (VetBond, 3M), and a 3 mm circular craniotomy was made with a 3 mm biopsy punch centered over the left CFA/S1 border. The coordinates for the center of CFA were taken to be 0.4 mm anterior and 1.6 mm lateral of bregma. Virus (AAV9-CaMKII-Cre, stock $2.1\times10^{13}$ particles/nL, 1:1 dilution in PBS, Addgene) was pressure injected (NanoJect III, Drummond Scientific) at multiple sites near the target site, with 140 nL injected at each of two depths per site (250 and 500 µm below the pia) over 5 minutes each. The craniotomy was then sealed with a custom cylindrical glass plug (3 mm diameter, 660 µm depth; Tower Optical) bonded (Norland Optical Adhesive 61, Norland) to a round coverslip and glued in place. A small craniotomy was also made using a dental drill over right CFA at 0.4 mm anterior and -1.6 mm lateral of bregma, where 140 nL of AAVretro-tdTomato (stock $1.02\times10^{13}$ particles/nL, Addgene) was injected at 300 µm below the pia. This injection labeled cells in left CFA projecting to the contralateral cortex. Here, this labeling was used solely for stabilizing the imaging plane (see below).
A custom laser-cut titanium head bar was affixed to the skull with black dental acrylic. The animal was allowed to recover at least 3 days before water restriction.
\subsection{Behavioral task}
The water grab task was a variant of a previously-reported water reaching task~\cite{galinanes2018directional}. This task was performed by a water-restricted, head-fixed mouse, with the forepaws beginning on metal paw rests and the hindpaws and body supported by an acrylic tube enclosure. After the mouse held the paw rests for 700-900 ms, a tone was played by stereo speakers and a droplet of water appeared at one of two water spouts positioned on either side of the snout. The tone's pitch indicated the location of the water, with a 4000 Hz tone indicating left and a 7000 Hz tone indicating right. The tone lasted 500 ms or until the mouse made contact with the correct water spout. The mouse could grab the water droplet and bring it to its mouth to drink any time after the tone began. Both the paw rests and spouts were wired with capacitive touch sensors (Teensy 3.2, PJRC). Good contact with the correct spout produced an inter-trial interval of 3-6 s, while failure to make contact (or insufficiently strong contact) with the spout produced an inter-trial interval of 20 s. Because the touch sensors required good contact from the paw, this setup encouraged complex contacts with the spouts. The mouse was trained to make all reaches with the right paw and to keep the left paw on the paw rest during reaching. Training took approximately two weeks, though the behavior continued to solidify for at least two more weeks. Data presented here were collected after 6-8 weeks’ experience with the task. Touch event monitoring and task control were performed at 60 Hz.
Behavior was also recorded using a pair of cameras (BFS-U3-16S2M-CS, FLIR; varifocal lenses COZ2813CSIR2, Computar) mounted 150 mm from the right paw rest at 10 degrees apart to enable 3D triangulation. Infrared illuminators enabled behavioral imaging. Cameras were synchronized and recorded at 150 frames per second with real-time image cropping and JPEG compression, and streamed to one HDF5 file per camera. The knuckles and wrist of the reaching paw were tracked in each camera using DeepLabCut~\cite{mathis2018deeplabcut} and triangulated into 3D using camera calibration parameters obtained from the MATLAB Stereo Camera Calibration toolbox~\cite{heikkila1997four,zhang2000flexible}. To screen the tracked markers for quality we created distributions of all inter-marker distances in 3D across every labeled frame and identified frames with any inter-marker distance exceeding the 99.9th percentile of its respective distribution as problematic. Trials with more than one problematic frame in the period of -200 ms to 800 ms after the raw reach onset were discarded (where reach onset was taken as the first 60 Hz tick after the paw rest touch sensor fell below contact threshold). The kinematics of all trials that passed this screening procedure were visualized to confirm quality. Forepaw centroid marker kinematics were obtained by averaging the kinematics of all paw markers, locking them to behavioral events and then smoothing using a Gaussian filter (15 ms s.d.). To obtain velocity and acceleration, centroid data were numerically differentiated with MATLAB’s \texttt{diff} function and then smoothed again using a Gaussian filter (15 ms s.d.).
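The percentile-based frame screening above can be sketched as follows. This is a Python sketch of a procedure that was originally run in MATLAB; the array layout is an assumption.

```python
import numpy as np

def flag_bad_frames(markers, pct=99.9):
    """markers: (n_frames, n_markers, 3) triangulated 3D positions.
    Flag frames where any pairwise inter-marker distance exceeds the
    99.9th percentile of that pair's distance distribution."""
    n_markers = markers.shape[1]
    bad = np.zeros(markers.shape[0], dtype=bool)
    for i in range(n_markers):
        for j in range(i + 1, n_markers):
            d = np.linalg.norm(markers[:, i] - markers[:, j], axis=1)
            bad |= d > np.percentile(d, pct)
    return bad
```

Trials with more than one flagged frame in the analysis window would then be discarded, as described above.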
\subsection{Two-photon imaging}
Calcium imaging was performed with a Neurolabware two-photon microscope and pulsed Ti:sapphire laser (Vision II, Coherent). Depth stability of the imaging plane was maintained using a custom plugin that acquired an image stack at the beginning of the session (1.4 µm spacing), then compared a registered rolling average of the red-channel data to each plane of the stack. If sufficient evidence indicated that a different plane was a better match to the image being acquired, the objective was automatically moved to compensate.
Offline, images were run through Suite2p~\cite{pachitariu2017suite2p} to perform motion correction, ROI detection, and fluorescence extraction from both ROIs and neuropil. ROIs were manually curated using the Suite2p GUI. We then subtracted the neuropil signal scaled by 0.7~\cite{chen2013ultrasensitive}. Neuropil-subtracted ROI fluorescence was then detrended by performing a running 10th percentile operation, smoothing with a Gaussian (20s s.d.), then subtracting the result from the trace. This result was fed into OASIS~\cite{friedrich2017fast} using the ‘thresholded’ method, AR1 event model, and limiting the tau parameter to be between 300 and 800 ms. Neurons were discarded if they did not meet a minimum signal-to-noise (SNR) criterion. To compute SNR, we took the fluorescence at each time point when OASIS identified an “event” (non-zero), computed (fluorescence - neuropil) / neuropil, and computed the median of the resulting distribution. ROIs were excluded if this value was less than 0.05. To put events on a more useful scaling, for each ROI we found the distribution of event sizes, smoothed the distribution (\texttt{ksdensity} in MATLAB, with an Epanechnikov kernel and log transform), found the peak of the smoothed distribution, and divided all event sizes by this value. This rescales the peak of the distribution to have a value of unity. Data from one mouse (one session) were used (439 neurons, 475 trials).
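The event-size rescaling above can be sketched in Python, with `scipy.stats.gaussian_kde` (Gaussian kernel) standing in for MATLAB's \texttt{ksdensity} with an Epanechnikov kernel, so the located peak may differ slightly.

```python
import numpy as np
from scipy.stats import gaussian_kde

def normalize_event_sizes(events):
    """Rescale deconvolved events so the mode of the smoothed
    distribution of nonzero event sizes (estimated in the log domain,
    mirroring the log transform above) sits at 1."""
    sizes = events[events > 0]
    log_sizes = np.log(sizes)
    kde = gaussian_kde(log_sizes)
    grid = np.linspace(log_sizes.min(), log_sizes.max(), 512)
    peak = np.exp(grid[np.argmax(kde(grid))])  # mode of smoothed density
    return events / peak
```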
\subsection{Modeling with frame and sub-frame resolution}
To prepare data for A-ZIG-SBTT and A-ZIG-FR, the deconvolved events were normalized by $s_{min}$ so that the minimal event size was 0.1 across all neurons. The deconvolved events for individual neurons had a sampling rate equal to the frame rate (31.08 Hz). For modeling with A-ZIG-SBTT, the deconvolved events were assigned into 10 ms bins using the timing of individual measurements for each neuron to achieve sub-frame resolution (i.e., 100 Hz). For A-ZIG-FR and Gaussian smoothing, the deconvolved events were assigned into a single time bin per frame (i.e., 32.17 ms bins) to mimic standard processing of 2P imaging data, where the sub-frame timing of individual measurements is discarded. Trials were created by aligning the data to 200 ms before and 800 ms after reach onset (100 time points per trial for A-ZIG-SBTT, and 31 time points per trial for A-ZIG-FR and Gaussian smoothing). Failed trials (latency to contact with correct spout $>20$ s), or trials where the grab to the incorrect spout occurred before the grab to the correct spout, were discarded.
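The sub-frame binning step can be sketched as follows for a single neuron. Representing unobserved bins as NaN is an illustrative convention for marking missing samples, not necessarily how the model ingests them.

```python
import numpy as np

def subframe_bin(events, sample_times, trial_start, n_bins, bin_ms=10.0):
    """Place one neuron's deconvolved events into 10 ms bins at their
    measured (sub-frame) times, in seconds; bins without a measurement
    stay NaN so they can be treated as unobserved."""
    binned = np.full(n_bins, np.nan)
    for value, t in zip(events, sample_times):
        b = int((t - trial_start) / (bin_ms / 1000.0))
        if 0 <= b < n_bins:
            binned[b] = value
    return binned
```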
\subsection{Evaluating against empirical PSTHs}
The continuous range of reaching behavior was discretized into groups for trial-averaging. Trials were sorted into four groups based on the Z dimension of hand position. The hand position was obtained by smoothing the centroid marker position with a Gaussian filter (40 ms s.d.). The time window in which hand Z position was used to split trials was chosen to give good separation between subgroups of hand trajectories: a window of 30 ms to 50 ms after reach onset was used for both the left and right conditions. For each of the left and right conditions, the 55 trials with the lowest and highest Z positions were selected as group 1 and group 2, respectively; trials with middle-range Z positions were discarded.
To assess how well the models' inferred event rates recapitulated the empirical PSTHs on single trials, empirical PSTHs were computed by trial-averaging smoothed deconvolved events (40 ms kernel s.d., 32.1729 ms bins) within each of the 4 subgroups of trials. Event rates inferred from A-ZIG-SBTT were first downsampled from 100 Hz to 31.0821 Hz with an antialiasing filter applied, to match the sampling frequency (i.e., the frame rate) of the original deconvolved signals. The correlation coefficient ($r$) was computed between inferred single-trial event rates and the corresponding empirical PSTHs for all active neurons for both methods (i.e., calculated on rates concatenated across all trials within the four subgroups; one $r$ for each neuron). Active neurons were defined as neurons that had more than 40 nonzero events across all trials from all 4 subgroups in the time window of 200 ms before to 800 ms after reach onset.
\subsection{Decoding hand kinematics}
A-ZIG-SBTT inferred rates, A-ZIG-FR inferred rates, and smoothed deconvolved events (Gaussian kernel 40 ms s.d.) were used to decode hand position and velocity using ridge regression. The hand position and velocity were obtained as described above and binned at 10 ms (i.e., 100 Hz). The A-ZIG-FR rates and smoothed deconvolved events were resampled to 100 Hz using linear interpolation. For simplicity, we did not include a lag between the neural data and kinematics; however, additional analyses confirmed that adding a lag did not alter the results (data not shown). Trials with an interval between water presentation and reach onset that was longer than a threshold (400 ms) were discarded due to potential variations in behavior (e.g., inattention). The data were aligned to 50 ms before and 350 ms after reach onset. The decoder was trained and tested using the same cross-validated ridge regression approach described in section \ref{section: lorenz_mapping}. The coefficient of determination ($R^{2}$) was computed and averaged across x-, y-, and z-kinematics.
\subsection{Coherence analysis}
Coherence was computed between the true and predicted kinematics (window: 200 ms before and 500 ms after reach onset) across all trials and across all x-, y-, and z-dimensions using magnitude-squared coherence (MATLAB: \texttt{mscohere}). The power spectral density estimation parameters within \texttt{mscohere} were specified to ensure a robust calculation on single-trial activity: Hanning windows of 35 time steps (i.e., 350 ms) were used for the FFT and window size, with 25 time steps (i.e., 250 ms) of overlap between windows. Coherence was evaluated at 18 frequencies evenly spaced between 0 Hz and half of the sampling frequency (i.e., 100 Hz). The interval between frequencies was determined by the window size passed to \texttt{mscohere}. In Fig. 4d, only coherence between 0 and 15 Hz is plotted because coherence dropped to low values for all methods above 15 Hz.
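An equivalent computation in Python uses `scipy.signal.coherence` with parameters mirroring the \texttt{mscohere} settings above (35-sample Hann windows, 25 samples of overlap, 100 Hz sampling), which likewise yields 18 evenly spaced frequencies.

```python
import numpy as np
from scipy.signal import coherence

def kinematic_coherence(true_kin, decoded_kin, fs=100.0):
    """Magnitude-squared coherence between true and decoded kinematics:
    Hann windows of 35 samples (350 ms) with 25 samples (250 ms) of
    overlap, matching the mscohere settings described above."""
    f, cxy = coherence(true_kin, decoded_kin, fs=fs,
                       window="hann", nperseg=35, noverlap=25)
    return f, cxy
```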
\section{Evaluation metrics}
\subsection{Coefficient of determination}
Decoding performance was quantified using the coefficient of determination ($R^2$) between true and predicted hand velocities as implemented in \texttt{sklearn} \cite{pedregosa2011scikit}. We compute $R^2$ for $x$- and $y$-dimensions separately, and then average.
$$R^2(y, \hat{y}) = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}$$
\subsection{Pseudo-$R^2$}
Evaluating our models using kinematic decoding alone only assesses the quality of the behaviorally relevant projections of the population state, which may represent a small fraction of the total variance of the population. To obtain a more complete picture of inference quality, we also characterize how well the inferred population state predicts the spiking activity of held-out neurons. Since spikes are Poisson-distributed, we fit a Poisson GLM (i.e., with exponential link function) to predict the firing rate of each held-out neuron based on the inferred population state. To quantify the performance of these models, we use pseudo-$R^2$, a likelihood-based metric similar to the coefficient of determination. We use an implementation from \texttt{pyglmnet} \cite{Jas2020}, where $\ln\hat{L}$ is the estimated log-likelihood, $S$ is a model that predicts the spike counts, $M_{GLM}$ is the GLM, and $M_{null}$ is a model that predicts the mean count.
$$pR^2 = 1 - \frac{\ln \hat{L}(S) - \ln \hat{L}(M_{GLM})}{\ln \hat{L}(S) - \ln \hat{L}(M_{null})}$$
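The metric can also be computed directly from Poisson log-likelihoods, as sketched below. Here $S$ is taken as the saturated model that predicts the observed counts themselves; this reading of the definition is an assumption about the \texttt{pyglmnet} convention, not a copy of its implementation.

```python
import numpy as np
from scipy.special import gammaln

def poisson_ll(y, mu, eps=1e-12):
    """Poisson log-likelihood of counts y under predicted rates mu."""
    mu = np.clip(mu, eps, None)
    return np.sum(y * np.log(mu) - mu - gammaln(y + 1))

def pseudo_r2(y, mu_glm):
    """Pseudo-R^2 as defined above: S predicts the counts themselves
    (saturated model), M_null predicts the mean count."""
    ll_sat = poisson_ll(y, y)                                  # ln L(S)
    ll_glm = poisson_ll(y, mu_glm)                             # ln L(M_GLM)
    ll_null = poisson_ll(y, np.full_like(y, y.mean(), dtype=float))
    return 1.0 - (ll_sat - ll_glm) / (ll_sat - ll_null)
```

A perfect predictor gives $pR^2 = 1$ and the mean-count predictor gives $pR^2 = 0$, mirroring the ordinary coefficient of determination.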
\newpage
\bibliographystyle{unsrtnat}