BEGIN:VCALENDAR VERSION:2.0 PRODID:-//132.216.98.100//NONSGML kigkonsult.se iCalcreator 2.20.4// BEGIN:VEVENT UID:20250703T064242EDT-0413LfU7aU@132.216.98.100 DTSTAMP:20250703T104242Z DESCRIPTION:Title: Neyman-Pearson classification: parametrics and sample size requirement.\n\nAbstract: The Neyman-Pearson (NP) paradigm in binary classification seeks classifiers that achieve a minimal type II error while keeping the prioritized type I error controlled under some user-specified level alpha. This paradigm arises naturally in applications such as severe disease diagnosis and spam detection\, where people have clear priorities between the two error types. Recently\, Tong\, Feng and Li (2018) proposed a nonparametric umbrella algorithm that adapts all scoring-type classification methods (e.g.\, logistic regression\, support vector machines\, random forest) to respect the given type I error (i.e.\, the conditional probability of classifying a class 0 observation as class 1 under the 0-1 coding) upper bound alpha with high probability\, without specific distributional assumptions on the features and the responses. Universal as the umbrella algorithm is\, it demands an explicit minimum sample size requirement on class 0\, which is often the scarcer class\, as in rare disease diagnosis applications. In this work\, we employ the parametric linear discriminant analysis (LDA) model and propose a new parametric thresholding algorithm\, which does not need the minimum sample size requirement on class 0 observations and thus is suitable for small-sample applications such as rare disease diagnosis. Leveraging both the existing nonparametric and the newly proposed parametric thresholding rules\, we propose four LDA-based NP classifiers for both low- and high-dimensional settings.
On the theoretical front\, we prove NP oracle inequalities for one proposed classifier\, where the rate for the excess type II error benefits from the explicit parametric model assumption. Furthermore\, as NP classifiers involve a sample splitting step on class 0 observations\, we construct a new adaptive sample splitting scheme that can be applied universally to NP classifiers\, and this adaptive strategy reduces the type II error of these classifiers. The proposed NP classifiers are implemented in the R package nproc.\n DTSTART:20200228T210000Z DTEND:20200228T220000Z LOCATION:Room 1104\, Burnside Hall\, 805 rue Sherbrooke Ouest\, Montreal\, QC\, H3A 0B9\, CA SUMMARY:Yang Feng\, NYU URL:/mathstat/channels/event/yang-feng-nyu-320895 END:VEVENT END:VCALENDAR