Incorporating structural knowledge into unsupervised deep learning for two-photon imaging data
Published in bioRxiv, 2021
Live imaging techniques, such as two-photon imaging, promise novel insights into cellular activity patterns at high spatio-temporal resolution. While current deep learning approaches typically focus on specific supervised tasks in the analysis of such data, we investigate how structural knowledge can be incorporated into an unsupervised generative deep learning model directly at the level of the video frames. We exemplify the proposed approach with two-photon imaging data from hippocampal CA1 neurons in mice, where we account for spatial structure with convolutional neural network components, disentangle the neural activity of interest from the neuropil background signal with separate foreground and background encoders, and model gradual temporal changes by imposing smoothness constraints. Taken together, our results illustrate how such architecture choices facilitate a modeling approach that combines the flexibility of deep learning with the benefits of domain knowledge, providing an interpretable, purely image-based model of activity signals from live imaging data.