Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.07.06.548028v1?rss=1
Authors: Xu, S., Ackerman, M. E.
Abstract:

Background: Compared to traditional supervised machine learning approaches that employ fully labeled samples, positive-unlabeled (PU) learning techniques aim to classify unlabeled samples based on a smaller proportion of known positive examples. This more challenging modeling goal reflects many real-world scenarios in which negative examples are not available, posing direct challenges to assessing prediction accuracy and robustness. While several studies have evaluated predictions learned from only definitive positive examples, few have investigated whether correct classification of a high proportion of known positive (KP) samples from among unlabeled samples can act as a surrogate indicator of model performance.

Results: In this study, we report a novel methodology combining multiple established PU learning-based strategies to evaluate the potential of KP samples to accurately classify unlabeled samples without using ground truth positive and negative labels for validation. To address model robustness, we report the first application of the permutation test in PU learning. Multivariate synthetic datasets and real-world high-dimensional benchmark datasets were employed to validate the proposed pipeline with varied underlying ground truth class label compositions among the unlabeled set and different proportions of KP examples. Comparisons between model performance with actual and permuted labels could be used to distinguish reliable from unreliable models.

Conclusions: As in fully supervised machine learning, permutation testing offers a means to set a baseline no-information-rate benchmark in the context of semi-supervised PU learning inference tasks, against which model performance can be compared.
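The core idea described in the abstract can be illustrated with a minimal sketch: train a naive PU baseline (treating unlabeled samples as negatives), score how many held-out known positives it recovers, and compare that score against the same pipeline run with permuted labels to estimate a no-information baseline. This is a hypothetical illustration of the general approach, not the authors' actual pipeline; the dataset, classifier, and scoring choices below are assumptions for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary data; ground truth y is used only to select known positives.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# PU setup: mark half of the true positives as "known positive" (s=1);
# everything else is unlabeled (s=0).
pos_idx = np.flatnonzero(y == 1)
kp = rng.choice(pos_idx, size=len(pos_idx) // 2, replace=False)
s = np.zeros_like(y)
s[kp] = 1

def kp_recall(X, s, rng):
    """Hold out a fifth of the known positives, train a naive PU baseline
    (unlabeled treated as negative), and return the fraction of held-out
    known positives the model recovers."""
    kp_idx = np.flatnonzero(s == 1)
    held = rng.choice(kp_idx, size=max(1, len(kp_idx) // 5), replace=False)
    s_train = s.copy()
    s_train[held] = 0  # hide the held-out KPs among the unlabeled
    clf = LogisticRegression(max_iter=1000).fit(X, s_train)
    return clf.predict(X[held]).mean()

real_score = kp_recall(X, s, rng)

# Permutation baseline: shuffle the KP labels and rerun the same pipeline
# to estimate the no-information rate for this metric.
perm_scores = [kp_recall(X, rng.permutation(s), rng) for _ in range(20)]

print(f"real KP recall: {real_score:.2f}, "
      f"permuted mean: {np.mean(perm_scores):.2f}")
```

If the score on actual labels clearly exceeds the permuted distribution, the model is learning signal rather than noise; scores comparable to the permuted baseline flag an unreliable model, mirroring the comparison described in the Results section.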
Copyright belongs to the original authors. Visit the link for more info.
Podcast created by Paper Player, LLC