Generative OpenMax for multi-class open set classification
Zongyuan Ge, Sergey Demyanov, et al.
BMVC 2017
Anti-sparse representation has recently been considered for unsupervised hashing due to its remarkable robustness to binary quantization error. We relax the existing spread property [4, 22] for anti-sparse solutions into a new Relaxed Spread Property (RSP) that demands milder conditions. We then propose a novel Transformed Anti-Sparse Hashing (TASH) model to overcome several major bottlenecks that have significantly limited the effectiveness of anti-sparse hashing models. TASH jointly learns a dimension-reduction transform, a dictionary, and the anti-sparse representations in a unified formulation. We conduct extensive experiments on real datasets in practical settings and demonstrate the highly promising performance of TASH.
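As a rough illustration of the anti-sparse hashing idea described in this abstract, the following minimal Python sketch (not the authors' TASH implementation; the transform P and dictionary D are assumed to be already learned, and all names are hypothetical) computes an l-infinity-regularized, i.e. anti-sparse, code for a transformed sample by proximal gradient descent and binarizes it by sign to obtain hash bits. The l-infinity penalty spreads energy across coefficients, which is what makes the subsequent sign quantization robust.

import numpy as np

def project_l1_ball(v, radius=1.0):
    # Euclidean projection onto the l1 ball (Duchi et al., 2008); assumes ||v||_1 > radius.
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    # Proximal operator of lam * ||.||_inf via Moreau decomposition.
    if np.sum(np.abs(v)) <= lam:
        return np.zeros_like(v)
    return v - lam * project_l1_ball(v / lam, 1.0)

def anti_sparse_code(x, D, lam=0.5, n_iters=200):
    # Minimize 0.5 * ||x - D z||_2^2 + lam * ||z||_inf by proximal gradient descent.
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ z - x)
        z = prox_linf(z - grad / L, lam / L)
    return z

# Hypothetical usage: P (dimension-reduction transform) and D (dictionary) are
# assumed to be already learned; the hash code is the sign pattern of the
# anti-sparse representation.
rng = np.random.default_rng(0)
P = rng.standard_normal((32, 128))
D = rng.standard_normal((32, 64))
x = rng.standard_normal(128)
z = anti_sparse_code(P @ x, D, lam=0.5)
bits = (z > 0).astype(np.uint8)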
Zhangyang Wang, Ding Liu, et al.
IJCNN 2017
Yijun Huang, Qiang Meng, et al.
Journal of Biomedical Informatics
Bo Liu, Ji Liu, et al.
IJCAI 2016