
Enabling Spill-Free Compilation via Affine-Based Live Range Reduction Optimization

Abstract

AI accelerators employ dataflow architectures to achieve impressive peak compute performance (TOPS) and processing efficiency (TOPS/W). Typically, dataflow architectures use wide data paths to connect off-chip memory to dense compute arrays (via a hierarchy of on-chip memories and vector register files), enabling efficient data movement with reuse as well as efficient compute. Such architectures often possess an independent, lightweight control path for loading programs and initializing registers, and lack traditional architectural features such as instruction caches and execution stacks. This poses a unique challenge to the compiler, which must generate code for complex compute kernels that fits within an instruction buffer and must allocate a limited set of scalar registers without any support for spilling to memory.

This paper contributes a significant step towards spill-free compilation and proposes a live range reduction optimization based on affine expression propagation analysis. Our solution performs a global, compiler-directed analysis that models variable values as affine expressions of in-scope variables, enabling safe symbolic re-materialization of values at their use sites from nearby live variables without introducing new operations. This shortens variable lifetimes and significantly reduces register pressure without incurring binary-size or execution-time overhead. The static nature and regular memory access patterns of AI applications make them well suited to the proposed optimization. We demonstrate the effectiveness of the technique in the context of the IBM Spyre accelerator and its compiler. Our results over a range of AI workloads spanning transformer and CNN models demonstrate spill-free code generation, with most of the workloads requiring less than 50% of the available registers.
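
To illustrate the idea, the minimal C sketch below (hypothetical variable names, not the accelerator ISA; it assumes the affine expression can be folded into the consuming access so that no new operation is emitted) shows how a value whose definition is affine in still-live variables can be re-materialized at its use site instead of being kept in a register across a long, register-hungry region:

    #include <stdint.h>

    /* Before: 'row_base' is defined ahead of a long, register-hungry loop and
     * has its only use after it, so it stays live (pinned in a scalar register)
     * across the entire loop body.                                            */
    void kernel_before(int32_t *dst, const int32_t *src, int row, int W) {
        int row_base = row * W + 8;      /* affine in 'row' and 'W'            */
        int acc = 0;
        for (int i = 0; i < W; i++) {    /* long body, high register pressure  */
            acc += src[i] * src[i];
        }
        dst[row_base] = acc;             /* sole use, far from the definition  */
    }

    /* After live range reduction: the analysis records row_base == row*W + 8
     * and re-materializes it symbolically at the use site from 'row' and 'W',
     * which are live there anyway, so no register is tied up across the loop.
     * The affine expression is assumed to fold into the address computation of
     * the store, so no additional operation is introduced.                    */
    void kernel_after(int32_t *dst, const int32_t *src, int row, int W) {
        int acc = 0;
        for (int i = 0; i < W; i++) {
            acc += src[i] * src[i];
        }
        dst[row * W + 8] = acc;          /* symbolic re-materialization        */
    }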