Rie Kubota Ando
CoNLL 2006
Recently, sample complexity bounds have been derived for problems involving linear functions, such as neural networks and support vector machines. In many of these theoretical studies, the concept of covering numbers played an important role. It is thus useful to study covering numbers for linear function classes. In this paper, we investigate two closely related methods to derive upper bounds on these covering numbers. The first method, already employed in some earlier studies, relies on Maurey's lemma; the second uses techniques from the mistake-bound framework in online learning. We compare results from these two methods, as well as their consequences in some learning formulations.
Arthur Nádas
IEEE Transactions on Neural Networks
Bing Zhang, Mikio Takeuchi, et al.
NAACL 2025
Arnold L. Rosenberg
Journal of the ACM