Data encoding remains a fundamental bottleneck in quantum machine learning: amplitude encoding of high-dimensional classical vectors into quantum states generally incurs a gate cost exponential in the number of qubits. In this work, we propose a pre-trained tensor-train (TT) encoding network (Pre-TT-Encoder) that substantially reduces the computational complexity of amplitude encoding while preserving essential data structure. The Pre-TT-Encoder exploits low-rank TT decompositions learned from classical data, yielding state-preparation procedures whose cost is polynomial in the number of qubits and the TT-ranks. We provide a theoretical analysis of the encoding complexity and establish fidelity bounds that quantify the trade-off between TT-rank and approximation error. Empirical evaluations on a classical dataset (MNIST) and a quantum-native dataset (semiconductor quantum dot measurements) demonstrate that our approach achieves substantial gains in encoding efficiency over direct amplitude encoding and PCA-based dimensionality reduction, while maintaining competitive accuracy in downstream variational quantum circuit classification tasks. The proposed method highlights the role of tensor networks as scalable intermediaries between classical data and quantum processors.
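To make the rank/fidelity trade-off described above concrete, the sketch below implements a standard truncated TT-SVD in NumPy: a length-2^n amplitude vector is reshaped into an n-way tensor and decomposed into TT cores under a rank cap, and the reconstruction overlap serves as a proxy for encoding fidelity. This is a minimal illustration only, not the paper's Pre-TT-Encoder; the function names `tt_svd` and `tt_reconstruct`, the `max_rank` parameter, and the toy data are our own assumptions for exposition.

```python
import numpy as np

def tt_svd(vec, n_qubits, max_rank):
    """Decompose a length-2**n_qubits vector into tensor-train (TT) cores
    via sequential truncated SVDs (the standard TT-SVD procedure)."""
    cores = []
    tensor = vec.reshape([2] * n_qubits)   # one mode of size 2 per qubit
    rank_prev = 1
    mat = tensor.reshape(rank_prev * 2, -1)
    for _ in range(n_qubits - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        rank = min(max_rank, len(s))        # truncate to the TT-rank budget
        cores.append(u[:, :rank].reshape(rank_prev, 2, rank))
        mat = np.diag(s[:rank]) @ vt[:rank] # carry the remainder forward
        rank_prev = rank
        mat = mat.reshape(rank_prev * 2, -1)
    cores.append(mat.reshape(rank_prev, 2, 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into a flat vector to measure the error."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape(-1)

# Toy usage: approximate a random normalized 8-qubit amplitude vector.
rng = np.random.default_rng(0)
psi = rng.standard_normal(2**8)
psi /= np.linalg.norm(psi)
cores = tt_svd(psi, n_qubits=8, max_rank=4)
approx = tt_reconstruct(cores)
approx /= np.linalg.norm(approx)
fidelity = abs(psi @ approx)                # overlap |<psi|psi_TT>|
print(f"TT-rank 4 fidelity: {fidelity:.4f}")
```

The rank cap is the knob the abstract's fidelity bounds refer to: a smaller `max_rank` gives fewer and smaller cores, and hence cheaper state preparation, at the price of a lower reconstruction overlap; raising the cap recovers fidelity at polynomially growing cost in the TT-ranks.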