BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//132.216.98.100//NONSGML kigkonsult.se iCalcreator 2.20.4//
BEGIN:VEVENT
UID:20250916T161819EDT-3225kNFUVA@132.216.98.100
DTSTAMP:20250916T201819Z
DESCRIPTION:Sample-Efficient Learning of Mixture Distributions.\n\nLearning about a probability distribution from a sample generated by that distribution is a fundamental task. We consider PAC learning of probability distributions (a.k.a. density estimation)\, where we are given an i.i.d. sample generated from an unknown target distribution\, and want to output a distribution that is close to the target. Given that the target distribution can be approximated by a member of some predetermined class of distributions\, we analyze how large a sample should be so that one can find a distribution that is close to the target in total variation distance. Determining this sample complexity with respect to an arbitrary class of distributions is a fundamental open problem.\n\nIn particular\, we improve the best known upper bounds for learning a variety of mixture classes\, including mixtures of Gaussian distributions over R^n. Furthermore\, we introduce a novel method for learning distributions via a form of compression. Using this generic framework\, we provide the first tight bound (up to logarithmic factors) for learning the class of mixtures of axis-aligned Gaussians. This is joint work with Abbas Mehrabian and Shai Ben-David.\n
DTSTART:20171206T150000Z
DTEND:20171206T160000Z
LOCATION:Room 1205\, Burnside Hall\, 805 rue Sherbrooke Ouest\, Montreal\, QC\, H3A 0B9\, CA
SUMMARY:Hassan Ashtiani\, University of Waterloo
URL:/mathstat/channels/event/hassan-ashtiani-university-waterloo-283164
END:VEVENT
END:VCALENDAR