Oral Prelim: Nhat Ho, Convergence rate of matrix-variate parameter estimation in finite mixture models and a novel approach to assessing bicycle-motor crash data
Understanding mixture models has recently become a focal point of statistical research. Not only do mixture models provide numerous ways to combine relatively simple models into richer classes of statistical models, but they are also known to behave consistently under identifiability conditions on the class of kernel density functions, together with suitable well-posedness conditions on the estimated parameters. However, most consistency results for mixture models have so far concentrated on the convergence behavior of the data density under the maximum likelihood approach, or on the convergence behavior of the posterior distribution of the data density in the Bayesian setting.

In the first part of this talk, we study the strong identifiability condition and the convergence behavior of mixing measures, in terms of the Wasserstein distance, in finite mixture models that include covariance matrices. The strong identifiability condition is used to establish a lower bound on the Hellinger distance in terms of the Wasserstein distance, and it is shown in the paper to be satisfied by many classes of density functions, ranging from location-covariance classes to location-scale-shape-covariance classes. Convergence rates of mixing measures are established for both symmetric and skewed classes of density functions. Simulations are carried out to illustrate the convergence rates of the mixing measures as well as the relationship between the Hellinger and Wasserstein distances.

In the last part of this talk, we propose a fresh look at bicycle-motor crash data, which has been used to determine the factors that significantly reduce the risk of bicycle-motor crashes. Mixture models appear to be very powerful tools for exploring various aspects of these data.
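As a concrete illustration of the distance featured in the talk (not part of the talk itself): for mixing measures with one-dimensional atoms, the first-order Wasserstein distance between two discrete measures can be computed directly with SciPy. The atom locations and mixing weights below are hypothetical values chosen only for demonstration.

```python
# Sketch: W1 distance between two discrete mixing measures
#   G  = 0.3*delta(-1) + 0.7*delta(2)
#   G' = 0.5*delta(-1) + 0.5*delta(2.5)
# Atoms and weights are made up for illustration; the talk's setting
# involves matrix-variate (covariance) parameters, not scalars.
from scipy.stats import wasserstein_distance

atoms_g = [-1.0, 2.0]    # atom locations (component parameters) of G
weights_g = [0.3, 0.7]   # mixing proportions of G
atoms_h = [-1.0, 2.5]    # atom locations of G'
weights_h = [0.5, 0.5]   # mixing proportions of G'

# In 1-D, W1 equals the integral of |F_G - F_G'| over the real line.
w1 = wasserstein_distance(atoms_g, atoms_h, weights_g, weights_h)
print(w1)  # ≈ 0.85
```

Here W1 = 0.2 * 3 + 0.5 * 0.5 = 0.85, matching the area between the two step CDFs.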