Peer-reviewed article
Abstract: Linking network structure to function is a long-standing issue in neuroscience. An outstanding example is the cerebellum: its structure has been known in great detail for decades, yet the full range of computations it performs remains unknown. This reflects a need for new, systematic methods to characterize the computational capacities of the cerebellum. In the present work, we apply a method borrowed from the field of machine learning to evaluate the computational capacity and the working memory of a prototypical cerebellum model. The model that we study is a reservoir computing rate model of the cerebellar granular layer, in which granule cells form a recurrent inhibitory network and Purkinje cells are modelled as linear trainable readout neurons. It was introduced in [2, 3] to demonstrate how the recurrent dynamics of the granular layer is needed to perform typical cerebellar tasks (e.g., timing-related tasks). The method, described in detail in [1], consists of feeding the model with a random time-dependent input signal and then quantifying how well a complete set of functions of the input signal (each function representing a different type of computation) can be reconstructed by taking a linear combination of the neuronal activations. We conducted simulations with 1000 granule cells. Relevant parameters were optimized within a biologically plausible range using a Bayesian learning approach. Our results show that the prototypical cerebellum model can compute both linear functions, as expected from previous work, and, surprisingly, highly nonlinear functions of its input (specifically, Legendre polynomial functions up to the 10th degree). Moreover, the model has a working memory of the input up to 100 ms in the past. These two properties are essential for performing typical cerebellar functions, such as fine-tuning nonlinear motor control tasks or, we believe, even higher cognitive functions.
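The capacity-measurement procedure described above can be sketched in a minimal, self-contained form. The sketch below assumes a generic tanh rate reservoir rather than the actual cerebellar granular-layer model, and all parameter values (network size, spectral radius, delays, polynomial degrees) are illustrative, not those used in the study: a random input drives the network, and a linear readout is trained to reconstruct delayed inputs (working memory) and Legendre polynomials of the input (nonlinear capacity), with the squared correlation serving as the capacity score.

```python
# Illustrative sketch of the information-processing-capacity method of [1].
# The reservoir here is a generic tanh rate network, NOT the cerebellar model;
# all parameters are assumptions chosen for demonstration.
import numpy as np

rng = np.random.default_rng(0)

N, T, washout = 200, 5000, 500          # network size, timesteps, discarded transient
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1
w_in = rng.uniform(-1.0, 1.0, N)

u = rng.uniform(-1.0, 1.0, T)           # random time-dependent input signal
x = np.zeros((T, N))
for t in range(1, T):                   # recurrent rate dynamics
    x[t] = np.tanh(W @ x[t - 1] + w_in * u[t])

X = x[washout:]                         # neuronal activations after washout

def capacity(target):
    """Squared correlation between the target function of the input and
    its best linear reconstruction from the neuronal activations."""
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.corrcoef(X @ w, target)[0, 1] ** 2

# Working memory: reconstruct the input delayed by d steps.
memory = [capacity(np.roll(u, d)[washout:]) for d in range(1, 20)]

# Nonlinear capacity: reconstruct Legendre polynomials of the input.
nonlinear = [capacity(np.polynomial.legendre.Legendre.basis(d)(u)[washout:])
             for d in range(1, 6)]
```

Summing such scores over a complete orthogonal family of target functions (as in [1]) yields the total capacity; plotting the scores against delay or polynomial degree reveals the memory span and the degree of nonlinearity the network supports.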