Peer-reviewed article
Abstract: Reservoir computing is a bio-inspired computing paradigm for processing time-dependent signals. The performance of its hardware implementations is comparable to state-of-the-art digital algorithms on a series of benchmark tasks. The major bottleneck of these implementations is the readout layer, which relies on slow, offline post-processing. A few analogue solutions have been proposed, but all suffered from a noticeable decrease in performance due to the added complexity of the setup. Here, we propose the use of online training to solve these issues. We study the applicability of this method using numerical simulations of an experimentally feasible reservoir computer with an analogue readout layer. We also consider a nonlinear output layer, which would be very difficult to train with traditional methods. We show numerically that online learning makes it possible to circumvent the added complexity of the analogue layer and to obtain the same level of performance as with a digital layer. This work paves the way towards high-performance, fully analogue reservoir computers through online training of the output layers.
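To illustrate the idea of training a reservoir readout online rather than by offline post-processing, the sketch below implements a minimal echo state network whose linear readout weights are adapted sample by sample with a least-mean-squares (LMS) gradient rule. All reservoir sizes, weight scalings, the learning rate, and the toy delayed-signal task are illustrative assumptions, not the authors' exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 100                                          # number of reservoir nodes (assumed)

W_in = rng.uniform(-0.5, 0.5, n_res)                 # input weights (single input channel)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # spectral radius < 1 (echo-state property)

def train_online(u, y_target, eta=1e-3):
    """Drive the reservoir and adapt the readout weights at every time step (LMS rule)."""
    x = np.zeros(n_res)                              # reservoir state
    w_out = np.zeros(n_res)                          # readout weights, learned online
    mse = []
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in * u[t])             # reservoir state update
        y = w_out @ x                                # linear readout
        e = y_target[t] - y                          # instantaneous error
        w_out += eta * e * x                         # stochastic gradient (LMS) step
        mse.append(e ** 2)
    return w_out, np.array(mse)

# Toy usage: learn to reproduce a delayed copy of a random input signal.
u = rng.uniform(0.0, 0.5, 1000)
y_target = np.roll(u, 3)
w_out, mse = train_online(u, y_target)
print("mean squared error over the last 100 steps:", mse[-100:].mean())
```

Because the weights are updated during operation, no reservoir states need to be stored for a separate offline regression, which is the property the abstract exploits to replace the digital readout with an analogue one.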