Duixian Liu, Xiaoxiong Gu, et al.
AP-S/URSI 2023
Analog Non-Volatile Memory-based accelerators offer high-throughput and energy-efficient Multiply-Accumulate operations for the large Fully-Connected layers that dominate Transformer-based Large Language Models. We describe architectural, wafer-scale testing, chip-demo, and hardware-aware training efforts towards such accelerators, and quantify the unique raw-throughput and latency benefits of Fully- (rather than Partially-) Weight-Stationary systems.
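The fully- versus partially-weight-stationary distinction in the abstract can be illustrated with a small sketch. This is not from the paper; the function names, tile size, and weight-load counting are illustrative assumptions, showing only why keeping all weights resident in analog memory removes per-inference weight movement.

```python
# Hypothetical sketch (not from the paper): contrasting fully- vs
# partially-weight-stationary execution of a fully-connected layer
# y = W @ x on a crossbar-style accelerator.

def matvec(W, x):
    """Plain multiply-accumulate: y[i] = sum_j W[i][j] * x[j]."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def fully_weight_stationary(W, x):
    # All weights are programmed into the analog memory once;
    # every inference reuses them, so no weight movement per call.
    weight_loads = 0
    return matvec(W, x), weight_loads

def partially_weight_stationary(W, x, tile_rows=2):
    # Only a tile of rows fits on-chip at a time; each tile must be
    # (re)loaded before its MACs, adding latency and energy per call.
    y, weight_loads = [], 0
    for r in range(0, len(W), tile_rows):
        tile = W[r:r + tile_rows]
        weight_loads += sum(len(row) for row in tile)  # elements moved
        y.extend(matvec(tile, x))
    return y, weight_loads

W = [[1, 2], [3, 4], [5, 6], [7, 8]]
x = [1, -1]
y_full, loads_full = fully_weight_stationary(W, x)
y_part, loads_part = partially_weight_stationary(W, x)
assert y_full == y_part          # same result either way
assert loads_full == 0           # fully weight-stationary: no reloads
assert loads_part == 8           # partial: every element moved once
```

Both paths compute the same product; they differ only in how many weight elements must be moved per inference, which is the raw-throughput and latency gap the abstract quantifies.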
Manuel Le Gallo, Riduan Khaddam-Aljameh, et al.
Nature Electronics
Christoph Hagleitner, Charles Johns, et al.
IEEE JVA Symposium 2023
Tobias Webel, Phillip Restle, et al.
ISSCC 2025