Jihun Yun, Aurelie Lozano, et al.
NeurIPS 2021
Several works in implicit and explicit generative modeling empirically observed that feature-learning discriminators outperform fixed-kernel discriminators in terms of the sample quality of the models. We provide separation results between probability metrics with fixed-kernel and feature-learning discriminators using the function classes F2 and F1 respectively, which were developed to study overparametrized two-layer neural networks. In particular, we construct pairs of distributions over hyper-spheres that cannot be discriminated by the fixed-kernel (F2) integral probability metric (IPM) and Stein discrepancy (SD) in high dimensions, but that can be discriminated by their feature-learning (F1) counterparts. To further study the separation, we provide links between the F1 and F2 IPMs and sliced Wasserstein distances. Our work suggests that fixed-kernel discriminators perform worse than their feature-learning counterparts because their corresponding metrics are weaker.
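A minimal sketch, assuming NumPy and PyTorch, of the contrast the abstract describes: a fixed Gaussian-kernel MMD stands in for an F2-type fixed-kernel discrepancy, and a small two-layer critic trained to separate the samples acts as a crude F1-IPM proxy. The sphere sampler, the one-direction perturbation, the network width, and all hyperparameters are illustrative choices, not the paper's construction.

```python
# Illustrative sketch only: fixed-kernel (F2-type) vs. feature-learning
# (F1-type) discrepancies on two distributions over a hypersphere.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
d, n = 64, 256

def sphere_samples(n, d, bump=0.0):
    """Unit-sphere samples; `bump` tilts mass toward the first coordinate axis."""
    x = rng.normal(size=(n, d))
    x[:, 0] += bump
    return x / np.linalg.norm(x, axis=1, keepdims=True)

p = sphere_samples(n, d)            # base distribution on the hypersphere
q = sphere_samples(n, d, bump=1.0)  # perturbed along a single direction

def gaussian_mmd(x, y, sigma=1.0):
    """Biased MMD^2 estimate with a fixed Gaussian kernel (F2-type stand-in)."""
    def k(a, b):
        sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def learned_ipm(x, y, width=128, steps=300):
    """Train a two-layer ReLU critic to maximize E_p[f] - E_q[f]
    (a crude F1-IPM proxy; weight decay loosely caps the critic's norm)."""
    x, y = (torch.tensor(a, dtype=torch.float32) for a in (x, y))
    f = nn.Sequential(nn.Linear(x.shape[1], width), nn.ReLU(), nn.Linear(width, 1))
    opt = torch.optim.Adam(f.parameters(), lr=1e-2, weight_decay=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = -(f(x).mean() - f(y).mean())  # ascend the mean discrepancy
        loss.backward()
        opt.step()
    return float(f(x).mean() - f(y).mean())

print("fixed-kernel MMD^2:          ", gaussian_mmd(p, q))
print("feature-learning IPM (proxy):", learned_ipm(p, q))
```

Increasing the dimension d while shrinking the perturbation illustrates the regime the paper studies, where the fixed-kernel value decays but a trained critic can still pick out the discriminating direction.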
Owen Cornec, Rahul Nair, et al.
NeurIPS 2021
Gosia Lazuka, Andreea Simona Anghel, et al.
SC 2024
Nishad Gothoskar, Marco Cusumano-Towner, et al.
NeurIPS 2021