Talk

Towards quantum extreme learning and reservoir computing on utility-scale digital quantum processors

Abstract

Quantum Extreme Learning Machines (QELMs) exploit the rich dynamics and high dimensionality of Hilbert spaces, together with design principles inspired by Reservoir Computing, to offer a promising path toward large-scale quantum machine learning. However, most prior work has focused on analog implementations or numerical simulations, and practical deployment is often hindered by device noise and concentration-of-measure effects.

In this work, we propose a scalable QELM architecture tailored for state-of-the-art digital quantum processors. Our design combines theoretical rigor with experimental feasibility, aiming to preserve model expressivity while mitigating the impact of observable concentration and shot noise. Central contributions include a practical hyperparameter tuning strategy that identifies optimal operating regimes balancing robustness and processing capacity, as well as a local signal-to-noise ratio optimization method based on eigentask analysis.
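To give a flavor of the eigentask idea referenced above, here is a minimal toy sketch (not the authors' implementation; all variable names, the i.i.d. shot-noise model, and the SNR threshold are illustrative assumptions). Eigentask analysis finds linear combinations of measured readout features ordered by signal-to-noise ratio, by solving a generalized eigenproblem between the feature Gram matrix and the sampling-noise covariance:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Hypothetical toy data: K = 8 readout features measured on N = 200 inputs,
# each estimated from a finite number of shots (names are illustrative).
K, N, shots = 8, 200, 100
true_features = rng.normal(size=(N, K))            # noiseless expectation values
noisy_features = true_features + rng.normal(scale=1 / np.sqrt(shots), size=(N, K))

# Second-moment (Gram) matrix of the measured features ...
G = noisy_features.T @ noisy_features / N
# ... and an estimate of the per-feature shot-noise covariance
# (i.i.d. shot noise assumed for this toy example).
V = np.eye(K) / shots

# Eigentasks: generalized eigenproblem G y = beta^2 V y.  A large beta^2
# means the feature combination y carries signal well above sampling noise.
beta_sq, Y = eigh(G, V)
order = np.argsort(beta_sq)[::-1]                  # sort by decreasing SNR
eigentasks = noisy_features @ Y[:, order]

# Keep only combinations whose SNR exceeds a threshold before training the
# linear readout, discarding noise-dominated directions.
snr_threshold = 1.0
kept = eigentasks[:, np.sort(beta_sq)[::-1] > snr_threshold]
```

In this picture, the linear readout of the QELM is trained on the retained high-SNR eigentasks rather than on the raw, shot-noise-corrupted observables.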

We validate our approach on paradigmatic benchmark tasks, demonstrating strong performance and noise resilience using up to 124 qubits on IBM Quantum processors. By tracking key indicators such as output variability and expressivity, we uncover a universal regime that generalizes across tasks and system sizes.