Jihun Yun, Peng Zheng, et al.
ICML 2019
In sparse learning, the squared Euclidean distance is a popular choice for measuring the approximation quality. However, the use of other forms of parametrized loss functions, including asymmetric losses, has generated research interest. In this paper, we perform sparse learning using a broad class of smooth piecewise linear quadratic (PLQ) loss functions, including robust and asymmetric losses that are adaptable to many real-world scenarios. The proposed framework also supports heterogeneous data modeling by allowing different PLQ penalties for different blocks of residual vectors (split-PLQ). We demonstrate the impact of the proposed sparse learning in image recovery, and apply the proposed split-PLQ loss approach to tag refinement for image annotation and retrieval.
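To make the loss family concrete, here is a minimal sketch of two smooth PLQ losses of the kind the abstract describes — the symmetric Huber loss (robust) and an asymmetric quantile-Huber variant — plus a toy split-PLQ evaluation that applies a different penalty to each block of residuals. All names, signatures, and the block interface are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of smooth piecewise linear-quadratic (PLQ) losses.
# Function names and the split-PLQ interface are assumptions for illustration.

def huber(r, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails (robust to outliers)."""
    a = abs(r)
    return 0.5 * r * r if a <= delta else delta * (a - 0.5 * delta)

def quantile_huber(r, tau=0.7, delta=1.0):
    """Asymmetric PLQ loss: weights positive residuals by tau and
    negative ones by (1 - tau), smoothed near zero like Huber."""
    w = tau if r >= 0 else (1.0 - tau)
    return w * huber(r, delta)

def split_plq(residuals, blocks):
    """Split-PLQ idea: a (possibly different) PLQ penalty per residual block.
    `blocks` is a list of (slice, loss_fn) pairs -- an assumed interface."""
    return sum(sum(loss(residuals[i]) for i in range(s.start, s.stop))
               for s, loss in blocks)

# Toy heterogeneous residual vector: Huber on the first block,
# asymmetric quantile-Huber on the second.
r = [0.2, -3.0, 0.5, 4.0]
total = split_plq(r, [(slice(0, 2), huber),
                      (slice(2, 4), lambda x: quantile_huber(x, tau=0.9))])
```

The asymmetry in `quantile_huber` is what lets the fit penalize over- and under-estimation differently, which the abstract motivates for real-world scenarios such as tag refinement.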
Weizhong Zhu, Jason Pelecanos
ICASSP 2016
Asaf Rendel, Raul Fernandez, et al.
ICASSP 2016
Karthikeyan Natesan Ramamurthy, Kush R. Varshney, et al.
SSP 2014