P.S. Gopalakrishnan, Dimitri Kanevsky, et al.
IEEE Trans. Inf. Theory
The language model probabilities are estimated by an empirical Bayes approach in which a prior distribution for the unknown probabilities is itself estimated through a novel choice of data. The predictive power of the model thus fitted is compared, by means of its experimental perplexity [1], to the model as fitted by the Jelinek-Mercer deleted estimator and as fitted by the Turing-Good formulas for probabilities of unseen or rarely seen events.
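As a rough illustration of the kind of comparison the abstract describes, the sketch below evaluates two classic smoothing schemes, Jelinek-Mercer-style interpolation and a simplified Good-Turing-style estimate, by test-set perplexity on a toy unigram model. It is not the paper's empirical Bayes method; the corpus, the interpolation weight `lam`, and all function names are assumptions made for this example.

```python
# Illustrative sketch (assumptions, not the paper's method): compare two
# smoothing schemes for a unigram language model by test-set perplexity.
import math
from collections import Counter

def jelinek_mercer_unigram(counts, total, vocab_size, lam=0.9):
    """P(w) interpolating the MLE with a uniform distribution
    (deleted-interpolation style); `lam` would normally be tuned
    on held-out data rather than fixed as here."""
    def prob(w):
        mle = counts.get(w, 0) / total
        return lam * mle + (1.0 - lam) / vocab_size
    return prob

def good_turing_unigram(counts, total, vocab_size):
    """Toy Good-Turing-style estimate: the mass of singletons (N1/N) is
    spread over unseen words; seen counts are used unadjusted, so the
    result is only approximately normalized. For illustration only."""
    n1 = sum(1 for c in counts.values() if c == 1)
    unseen = max(vocab_size - len(counts), 1)
    p_unseen = (n1 / total) / unseen if total else 1.0 / vocab_size
    def prob(w):
        c = counts.get(w, 0)
        return c / total if c > 0 else p_unseen
    return prob

def perplexity(prob, test_tokens):
    """2 raised to the average negative log2 probability of the test tokens."""
    log_sum = sum(-math.log2(prob(w)) for w in test_tokens)
    return 2.0 ** (log_sum / len(test_tokens))

if __name__ == "__main__":
    train = "the cat sat on the mat the dog sat on the log".split()
    test = "the cat sat on the rug".split()
    counts = Counter(train)
    vocab = set(train) | set(test)  # closed vocabulary for the toy example
    jm = jelinek_mercer_unigram(counts, len(train), len(vocab))
    gt = good_turing_unigram(counts, len(train), len(vocab))
    print("Jelinek-Mercer perplexity:", round(perplexity(jm, test), 2))
    print("Good-Turing perplexity:   ", round(perplexity(gt, test), 2))
```

Lower perplexity on held-out text indicates better predictive power, which is the criterion the abstract uses to compare the fitted models.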
Arthur Nádas
IEEE Transactions on Acoustics, Speech, and Signal Processing
Stephen Silverman, Arthur Nádas
ISIT 1991
Toby G. Rossman, Ekaterina I. Goncharova, et al.
Mutation Research - Fundamental and Molecular Mechanisms of Mutagenesis