Accounting for Label Shift of Positive Unlabeled Data under Selection Bias

Open Access | Sep 2025

References

  1. Bekker, J. and Davis, J. (2020). Learning from positive and unlabeled data: A survey, Machine Learning 109: 719–760, DOI: 10.1007/s10994-020-05877-5.
  2. Elkan, C. (2001). The foundations of cost-sensitive learning, Proceedings of the 17th International Joint Conference on Artificial Intelligence, Seattle, USA, pp. 973–978.
  3. Garg, S., Wu, Y., Balakrishnan, S. and Lipton, Z.C. (2020). A unified view of label shift estimation, Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, Canada, pp. 1–11.
  4. González, P., Castaño, A., Chawla, N. and Coz, J. (2017). A review on quantification learning, ACM Computing Surveys 50(5): 1–74.
  5. Iyer, A., Nath, S. and Sarawagi, S. (2014). Maximum mean discrepancy for class ratio estimation: Convergence bounds and kernel selection, Proceedings of the 31st International Conference on Machine Learning, Beijing, China, pp. 230–238.
  6. Kiryo, R., Niu, G., du Plessis, M.C. and Sugiyama, M. (2017). Positive-unlabeled learning with non-negative risk estimator, Proceedings of the International Conference on Neural Information Processing Systems, NIPS’17, Long Beach, USA, pp. 1674–1684.
  7. Lipton, Z.C., Wang, Y. and Smola, A.J. (2018). Detecting and correcting for label shift with black box predictors, Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, pp. 3128–3136.
  8. Mielniczuk, J. (1985). Estimation of number of errors in case of repetitive quality control, Probability and Mathematical Statistics 6(2): 131–136.
  9. Na, B., Kim, H., Song, K., Joo, W., Kim, Y.-Y. and Moon, I. (2020). Deep generative positive-unlabeled learning under selection bias, Proceedings of the 29th ACM International Conference on Information and Knowledge Management, CIKM’20, New York, NY, USA, pp. 1155–1164.
  10. Saerens, M., Latinne, P. and Decaestecker, C. (2002). Adjusting the outputs of a classifier to new a priori probabilities: A simple procedure, Neural Computation 14(1): 21–41.
  11. Vaz, A., Izbicki, R. and Stern, R. (2019). Quantification under prior probability shift: The ratio estimator and its extensions, Journal of Machine Learning Research 20: 1–33.
  12. Wainwright, M. (2019). High-Dimensional Statistics: A Non-Asymptotic Viewpoint, Cambridge University Press, Cambridge.
  13. Wawrzeńczyk, A. and Mielniczuk, J. (2022). Revisiting strategies for fitting logistic regression for positive and unlabeled data, International Journal of Applied Mathematics and Computer Science 32(2): 299–309, DOI: 10.34768/amcs-2022-0022.
  14. Wawrzeńczyk, A. and Mielniczuk, J. (2023). One-class classification approach to variational learning from biased positive unlabelled data, Proceedings of the European Conference on Artificial Intelligence, ECAI’23, Cracow, Poland, pp. 1720–1727.
  15. Wawrzeńczyk, A. and Mielniczuk, J. (2024). Augmented prediction of a true class for positive unlabeled data under selection bias, Proceedings of the European Conference on Artificial Intelligence, ECAI’24, Santiago de Compostela, Spain, pp. 2725–2733.
  16. Ye, C., Tsuchida, R., Petersson, L. and Barnes, N. (2024). Label shift estimation for class-imbalance problem: A Bayesian approach, IEEE/CVF 2024, Seattle, USA.
DOI: https://doi.org/10.61822/amcs-2025-0036 | Journal eISSN: 2083-8492 | Journal ISSN: 1641-876X
Language: English
Page range: 507 - 517
Submitted on: Dec 2, 2024
Accepted on: May 11, 2025
Published on: Sep 8, 2025
Published by: University of Zielona Góra
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2025 Jan Mielniczuk, Adam Wawrzeńczyk, published by University of Zielona Góra
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.