
The Proportion for Splitting Data into Training and Test Set for the Bootstrap in Classification Problems

Open Access | Jun 2021

References

  1. Breiman, L. (1995), "Better Subset Regression Using the Nonnegative Garrote", Technometrics, Vol. 37 No. 4, pp. 373-384. DOI: 10.1080/00401706.1995.10484371
  2. Breiman, L. (1992), "The Little Bootstrap and Other Methods for Dimensionality Selection in Regression: X-fixed Prediction Error", Journal of the American Statistical Association, Vol. 87 No. 419, pp. 738-754. DOI: 10.1080/01621459.1992.10475276
  3. Breiman, L. (1996), "Bagging Predictors", Machine Learning, Vol. 24 No. 2, pp. 123-140. DOI: 10.1007/BF00058655
  4. Grubinger, T., Zeileis, A., Pfeiffer, K. (2014), "evtree: Evolutionary Learning of Globally Optimal Classification and Regression Trees in R", Journal of Statistical Software, Vol. 61 No. 1, pp. 1-29. DOI: 10.18637/jss.v061.i01
  5. Efron, B. (1979), "Bootstrap Methods: Another Look at the Jackknife", The Annals of Statistics, Vol. 7 No. 1, pp. 1-26. DOI: 10.1214/aos/1176344552
  6. Efron, B., Tibshirani, R. (1997), "Improvements on Cross-Validation: The .632+ Bootstrap Method", Journal of the American Statistical Association, Vol. 92, pp. 548-560. DOI: 10.1080/01621459.1997.10474007
  7. Hoerl, A. E., Kennard, R. W. (1970), "Ridge Regression: Applications to Nonorthogonal Problems", Technometrics, Vol. 12 No. 1, pp. 69-82. DOI: 10.1080/00401706.1970.10488635
  8. Iannarilli, F., Rubin, P. (2003), "Feature Selection for Multiclass Discrimination via Mixed-Integer Linear Programming", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25 No. 6, pp. 779-783. DOI: 10.1109/TPAMI.2003.1201827
  9. James, G., Witten, D., Hastie, T., Tibshirani, R. (2013), An Introduction to Statistical Learning, Springer Texts in Statistics, Vol. 103, Springer. DOI: 10.1007/978-1-4614-7138-7
  10. Kim, B., Shin, S. (2019), "Principal Weighted Logistic Regression for Sufficient Dimension Reduction in Binary Classification", Journal of the Korean Statistical Society, Vol. 48 No. 2, pp. 194-206. DOI: 10.1016/j.jkss.2018.11.001
  11. Krstajic, D., Buturovic, L. J., Leahy, D. E., Thomas, S. (2014), "Cross-validation Pitfalls When Selecting and Assessing Regression and Classification Models", Journal of Cheminformatics, Vol. 6, Article No. 10. DOI: 10.1186/1758-2946-6-10
  12. MacKinnon, J. (2002), "Bootstrap Inference in Econometrics", The Canadian Journal of Economics, Vol. 35 No. 4, pp. 615-645. DOI: 10.1111/0008-4085.00147
  13. Maldonado, S., Pérez, J., Weber, R., Labbé, M. (2014), "Feature Selection for Support Vector Machines via Mixed Integer Linear Programming", Information Sciences, Vol. 279, pp. 163-175. DOI: 10.1016/j.ins.2014.03.110
  14. Pampel, F. (2000), Logistic Regression: A Primer, Sage University Papers Series on Quantitative Applications in the Social Sciences, 07-132, Sage Publications, Thousand Oaks, CA.
  15. Pedregosa, F. et al. (2011), "Scikit-learn: Machine Learning in Python", Journal of Machine Learning Research, Vol. 12, pp. 2825-2830.
  16. Velliangiri, S., Alagumuthukrishnan, S., Joseph, S. (2019), "A Review of Dimensionality Reduction Techniques for Efficient Computation", Procedia Computer Science, Vol. 165, pp. 104-111. DOI: 10.1016/j.procs.2020.01.079
  17. Vrigazova, B. (2018), "Nonnegative Garrote as a Variable Selection Method in Panel Data", International Journal of Computer Science and Information Security, Vol. 16 No. 1.
  18. Vrigazova, B., Ivanov, I. (2019), "Optimization of the ANOVA Procedure for Support Vector Machines", International Journal of Recent Technology and Engineering, Vol. 8 No. 4. DOI: 10.35940/ijrte.D7375.118419
  19. Vrigazova, B., Ivanov, I. (2020a), "The Bootstrap Procedure in Classification Problems", International Journal of Data Mining, Modelling and Management, Vol. 12 No. 4. DOI: 10.1504/IJDMMM.2020.111400
  20. Vrigazova, B., Ivanov, I. (2020b), "Tenfold Bootstrap Procedure for Support Vector Machines", Computer Science, Vol. 21 No. 2, pp. 241-257. DOI: 10.7494/csci.2020.21.2.3634
  21. Wong, T. (2015), "Performance Evaluation of Classification Algorithms by k-fold and Leave-one-out Cross-validation", Pattern Recognition, Vol. 48 No. 9, pp. 2839-2846. DOI: 10.1016/j.patcog.2015.03.009
  22. Yeturu, K. (2020), "Chapter 3 - Machine Learning Algorithms, Applications, and Practices in Data Science", in Rao, A. S. R. S. and Rao, C. R. (Eds.), Handbook of Statistics, Vol. 43, Elsevier, pp. 81-206. DOI: 10.1016/bs.host.2020.01.002
DOI: https://doi.org/10.2478/bsrj-2021-0015 | Journal eISSN: 1847-9375 | Journal ISSN: 1847-8344
Language: English
Page range: 228 - 242
Submitted on: Aug 12, 2020
Accepted on: Mar 15, 2021
Published on: Jun 4, 2021
Published by: IRENET - Society for Advancing Innovation and Research in Economy
In partnership with: Paradigm Publishing Services
Publication frequency: 2 issues per year

© 2021 Borislava Vrigazova, published by IRENET - Society for Advancing Innovation and Research in Economy
This work is licensed under the Creative Commons Attribution 4.0 License.