Reservoir computing is a neuromorphic architecture that may offer viable solutions to the growing energy costs of machine learning. In software-based machine learning, computing performance can be readily reconfigured to suit different computational tasks by tuning hyperparameters. This critical functionality is missing in ‘physical’ reservoir computing schemes that exploit nonlinear and history-dependent responses of physical systems for data processing. Here we overcome this issue with a ‘task-adaptive’ approach to physical reservoir computing. By leveraging a thermodynamical phase space to reconfigure key reservoir properties, we optimize computational performance across a diverse task set. We use the spin-wave spectra of the chiral magnet Cu2OSeO3, which hosts skyrmion, conical and helical magnetic phases, providing on-demand access to different computational reservoir responses. The task-adaptive approach is applicable to a wide variety of physical systems, which we demonstrate in other chiral magnets through above room-temperature operation in Co8.5Zn8.5Mn3 and near room-temperature operation in FeGe.
Physical separation between processing and memory units in conventional computer architectures causes substantial energy waste due to the repeated shuttling of data, known as the von Neumann bottleneck. To circumvent this, neuromorphic computing1,2, which draws inspiration from the brain to provide integrated memory and processing, has attracted a great deal of attention as a promising future technology. Reservoir computing3,4,5 is a type of neuromorphic architecture with complex recurrent pathways (the ‘reservoir’) that maps input data to a high-dimensional space. Weights within the reservoir are randomly initialized and fixed, and only the small one-dimensional weight vector that connects the reservoir to the output requires optimization, using computationally cheap linear regression. As such, reservoir computing can achieve powerful neuromorphic computation at a fraction of the processing cost relative to other schemes, for example, deep neural networks, in which the whole weight network (often comprising millions of weights) must be trained6.
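The training scheme described above can be illustrated with a minimal software echo state network: the recurrent reservoir weights are drawn at random and never updated, a nonlinear, history-dependent state is accumulated, and only the linear readout is fitted by ridge regression. This is a generic sketch of the reservoir computing principle, not the physical implementation reported here; all sizes, scalings and the toy task (predicting a phase-shifted sine wave) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES = 1, 100  # input and reservoir dimensions (illustrative choices)

# Fixed random weights: these are never trained.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
# Rescale so the spectral radius is below 1, giving a fading memory
# of past inputs (the 'history-dependent response').
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the reservoir with the input sequence u, collecting the
    nonlinear high-dimensional states that the readout will see."""
    x = np.zeros(N_RES)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict a phase-shifted copy of a sine-wave input.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t), np.sin(t + 0.5)

X = run_reservoir(u)

# Discard an initial washout period so transients do not bias the fit.
washout = 200
X_fit, y_fit = X[washout:], y[washout:]

# Only this linear readout is trained, via cheap ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X_fit.T @ X_fit + ridge * np.eye(N_RES),
                        X_fit.T @ y_fit)

mse = np.mean((X_fit @ W_out - y_fit) ** 2)
```

In a physical reservoir, `run_reservoir` is replaced by the intrinsic dynamics of the material (here, spin-wave responses of a chiral magnet), while the readout regression remains the only trained component.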