Ankur Agrawal, Saekyu Lee, et al.
ISSCC 2021
Analog in-memory computing is an emerging paradigm designed to efficiently accelerate deep neural network workloads. Recent advancements have focused on either inference or training acceleration. However, a unified analog in-memory technology platform, capable of on-chip training, weight retention, and long-term inference acceleration, has yet to be reported. This work presents an all-in-one analog AI accelerator that combines these capabilities to enable energy-efficient, continuously adaptable AI systems. The platform leverages an array of analog filamentary conductive-metal-oxide (CMO)/HfOx resistive switching memory cells (ReRAM) integrated into the back-end-of-line (BEOL). The array demonstrates reliable resistive switching with voltage amplitudes below 1.5 V, compatible with advanced technology nodes. Its multi-bit capability (over 32 stable states) and low programming noise (down to 10 nS) enable a nearly ideal weight-transfer process, more than an order of magnitude better than other memristive technologies. Inference performance is validated through matrix-vector multiplication simulations on a 64×64 array, achieving a root-mean-square error lower by a factor of 20 at 1 second and by a factor of 3 at 10 years after programming, compared to the state of the art. Training accuracy closely matching the software equivalent is achieved across different datasets. The CMO/HfOx ReRAM technology lays the foundation for efficient analog systems that accelerate both inference and training of deep neural networks.
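The kind of evaluation the abstract describes can be illustrated with a minimal simulation: weights are stored as differential conductance pairs, each cell's programmed conductance is perturbed by Gaussian programming noise, and the analog matrix-vector product is scored by its root-mean-square error against the ideal digital result. The sketch below is not from the paper; the 64×64 array size and the 10 nS noise figure follow the abstract, while the conductance range, the differential weight mapping, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64              # array dimension (64x64, as in the abstract)
G_MAX = 100e-6      # assumed maximum cell conductance, 100 uS (hypothetical)
SIGMA_PROG = 10e-9  # programming noise, 10 nS (from the abstract)

# Ideal weights in [-1, 1], mapped to a differential conductance pair
# so that W = (G_plus - G_minus) / G_MAX.
W = rng.uniform(-1.0, 1.0, size=(N, N))
G_plus = np.clip(W, 0, None) * G_MAX
G_minus = np.clip(-W, 0, None) * G_MAX

# Programming noise: each cell lands near, not exactly at, its target.
G_plus += rng.normal(0.0, SIGMA_PROG, size=(N, N))
G_minus += rng.normal(0.0, SIGMA_PROG, size=(N, N))

x = rng.uniform(-1.0, 1.0, size=N)  # input activation vector

y_ideal = W @ x                              # software reference result
y_analog = ((G_plus - G_minus) @ x) / G_MAX  # analog MVM readout

rmse = np.sqrt(np.mean((y_analog - y_ideal) ** 2))
print(f"MVM RMS error with 10 nS programming noise: {rmse:.2e}")
```

Under this mapping, lower programming noise translates directly into lower MVM error, which is why the reported 10 nS figure supports the near-ideal weight transfer claimed in the abstract; modeling retention over 10 years would additionally require a conductance-drift term, which is omitted here.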
Bert J. Offrein, Jacqueline Geler-Kremer, et al.
IEDM 2020
Chai Wah Wu
ISCAS 2020
Dominik Metzler
PESM 2023