
On Sampling Random Features From Empirical Leverage Scores: Implementation and Theoretical Guarantees

2019-03-20
Shahin Shahrampour, Soheil Kolouri

Abstract

Random features provide a practical framework for large-scale kernel approximation and supervised learning. It has been shown that data-dependent sampling of random features using leverage scores can significantly reduce the number of features required to achieve optimal learning bounds. Leverage scores introduce an optimized distribution for features based on an infinite-dimensional integral operator (depending on the input distribution), which is impractical to sample from. Focusing on empirical leverage scores in this paper, we establish an out-of-sample performance bound, revealing an interesting trade-off between the approximated kernel and the eigenvalue decay of another kernel in the domain of random features defined based on the data distribution. Our experiments verify that the empirical algorithm consistently outperforms vanilla Monte Carlo sampling, and with a minor modification the method is even competitive with supervised data-dependent kernel learning, without using the output (label) information.
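To make the idea concrete, here is a minimal, hypothetical sketch of empirical leverage-score sampling for random Fourier features. All names, sizes, and the bandwidth are illustrative assumptions, not the paper's actual implementation: a pool of candidate frequencies is drawn from the vanilla Monte Carlo proposal, each candidate's empirical (ridge) leverage score is computed from the training data, and the final features are resampled according to the resulting distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical sizes): n points in d dimensions.
n, d = 200, 5
X = rng.standard_normal((n, d))

# Pool of candidate random Fourier frequencies for a Gaussian kernel
# (the vanilla Monte Carlo proposal; all constants are illustrative).
pool_size, M, lam = 500, 50, 1e-3
W = rng.standard_normal((pool_size, d))        # frequencies ~ N(0, I)
b = rng.uniform(0.0, 2.0 * np.pi, pool_size)   # random phases

# Feature map: column j is the feature for frequency w_j over all data.
Phi = np.sqrt(2.0 / pool_size) * np.cos(X @ W.T + b)

# Empirical (ridge) leverage score of candidate j:
#   l_j proportional to phi_j^T (K_hat + n*lam*I)^{-1} phi_j,
# where K_hat = Phi Phi^T is the approximated kernel matrix.
K_hat = Phi @ Phi.T
G = np.linalg.solve(K_hat + n * lam * np.eye(n), Phi)
scores = np.einsum('ij,ij->j', Phi, G)   # diagonal of Phi^T (...)^{-1} Phi
probs = scores / scores.sum()

# Resample M features from the pool according to the leverage distribution.
idx = rng.choice(pool_size, size=M, replace=True, p=probs)

# Data-dependent random feature map, importance-weighted so that the
# kernel approximation stays unbiased under the new sampling distribution.
Z = np.cos(X @ W[idx].T + b[idx]) / np.sqrt(M * pool_size * probs[idx])
print(Z.shape)  # (200, 50)
```

The importance weights in the last step compensate for sampling from the leverage distribution instead of the original proposal; in practice `Z` would then be fed to a standard linear method (e.g. ridge regression) in place of the full kernel.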

URL

http://arxiv.org/abs/1903.08329

PDF

http://arxiv.org/pdf/1903.08329
