
A dynamic model of classifier competence based on the local fuzzy confusion matrix and the random reference classifier

Open Access | Mar 2016

References

  1. Bache, K. and Lichman, M. (2013). UCI machine learning repository, http://archive.ics.uci.edu/ml.
  2. Berger, J.O. (1985). Statistical Decision Theory and Bayesian Analysis, Springer-Verlag, New York, NY, DOI: 10.1007/978-1-4757-4286-2.
  3. Bishop, C. (1995). Neural Networks for Pattern Recognition, Clarendon Press/Oxford University Press, Oxford/New York, NY.
  4. Blum, A. (1998). On-line algorithms in machine learning, in A. Fiat and G.J. Woeginger (Eds.), Developments from a June 1996 Seminar on Online Algorithms: The State of the Art, Springer-Verlag, London, pp. 306–325, DOI: 10.1007/BFb0029575.
  5. Breiman, L. (1996). Bagging predictors, Machine Learning 24(2): 123–140, DOI: 10.1007/BF00058655.
  6. Breiman, L., Friedman, J., Olshen, R. and Stone, C. (1984). Classification and Regression Trees, Wadsworth and Brooks, Monterey, CA.
  7. Cover, T. and Hart, P. (1967). Nearest neighbor pattern classification, IEEE Transactions on Information Theory 13(1): 21–27, DOI: 10.1109/TIT.1967.1053964.
  8. Dai, Q. (2013). A competitive ensemble pruning approach based on cross-validation technique, Knowledge-Based Systems 37(9): 394–414, DOI: 10.1016/j.knosys.2012.08.024.
  9. Demšar, J. (2006). Statistical comparisons of classifiers over multiple data sets, The Journal of Machine Learning Research 7: 1–30.
  10. Devroye, L., Györfi, L. and Lugosi, G. (1996). A Probabilistic Theory of Pattern Recognition, Springer, New York, NY, DOI: 10.1007/978-1-4612-0711-5.
  11. Didaci, L., Giacinto, G., Roli, F. and Marcialis, G.L. (2005). A study on the performances of dynamic classifier selection based on local accuracy estimation, Pattern Recognition 38(11): 2188–2191, DOI: 10.1016/j.patcog.2005.02.010.
  12. Dietterich, T.G. (2000). Ensemble methods in machine learning, Proceedings of the 1st International Workshop on Multiple Classifier Systems, MCS'00, Cagliari, Italy, pp. 1–15.
  13. Dunn, O.J. (1961). Multiple comparisons among means, Journal of the American Statistical Association 56(293): 52–64, DOI: 10.1080/01621459.1961.10482090.
  14. Fraz, M.M., Remagnino, P., Hoppe, A., Uyyanonvara, B., Rudnicka, A.R., Owen, C.G. and Barman, S. (2012). An ensemble classification-based approach applied to retinal blood vessel segmentation, IEEE Transactions on Biomedical Engineering 59(9): 2538–2548, DOI: 10.1109/TBME.2012.2205687.
  15. Freund, Y. and Schapire, R. (1996). Experiments with a new boosting algorithm, Machine Learning: Proceedings of the 13th International Conference, Bari, Italy, pp. 148–156.
  16. Friedman, M. (1940). A comparison of alternative tests of significance for the problem of m rankings, The Annals of Mathematical Statistics 11(1): 86–92, DOI: 10.2307/2235971.
  17. Gama, J. (2010). Knowledge Discovery from Data Streams, 1st Edn., Chapman & Hall/CRC, London.
  18. Giacinto, G. and Roli, F. (2001). Dynamic classifier selection based on multiple classifier behaviour, Pattern Recognition 34(9): 1879–1881, DOI: 10.1016/S0031-3203(00)00150-3.
  19. Holm, S. (1979). A simple sequentially rejective multiple test procedure, Scandinavian Journal of Statistics 6(2): 65–70.
  20. Hsieh, N.-C. and Hung, L.-P. (2010). A data driven ensemble classifier for credit scoring analysis, Expert Systems with Applications 37(1): 534–545, DOI: 10.1016/j.eswa.2009.05.059.
  21. Huenupán, F., Yoma, N.B., Molina, C. and Garretón, C. (2008). Confidence based multiple classifier fusion in speaker verification, Pattern Recognition Letters 29(7): 957–966, DOI: 10.1016/j.patrec.2008.01.015.
  22. Jurek, A., Bi, Y., Wu, S. and Nugent, C. (2013). A survey of commonly used ensemble-based classification techniques, The Knowledge Engineering Review 29(5): 551–581, DOI: 10.1017/S0269888913000155.
  23. Kittler, J. (1998). Combining classifiers: A theoretical framework, Pattern Analysis and Applications 1(1): 18–27, DOI: 10.1007/BF01238023.
  24. Ko, A.H., Sabourin, R. and Britto, Jr., A.S. (2008). From dynamic classifier selection to dynamic ensemble selection, Pattern Recognition 41(5): 1718–1731, DOI: 10.1016/j.patcog.2007.10.015.
  25. Kuncheva, L.I. (2004). Combining Pattern Classifiers: Methods and Algorithms, 1st Edn., Wiley-Interscience, New York, NY.
  26. Kuncheva, L.I. and Rodríguez, J.J. (2014). A weighted voting framework for classifiers ensembles, Knowledge and Information Systems 38(2): 259–275, DOI: 10.1007/s10115-012-0586-6.
  27. Kurzynski, M. (1987). Diagnosis of acute abdominal pain using three-stage classifier, Computers in Biology and Medicine 17(1): 19–27, DOI: 10.1016/0010-4825(87)90030-8.
  28. Kurzynski, M., Krysmann, M., Trajdos, P. and Wolczowski, A. (2014). Two-stage multiclassifier system with correction of competence of base classifiers applied to the control of bioprosthetic hand, IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2014, Limassol, Cyprus, DOI: 10.1109/ICTAI.2014.98.
  29. Kurzynski, M. and Wolczowski, A. (2012). Control system of bioprosthetic hand based on advanced analysis of biosignals and feedback from the prosthesis sensors, Proceedings of the 3rd International Conference on Information Technologies in Biomedicine, ITIB 12, Kamień Śląski, Poland, pp. 199–208.
  30. Mamoni, D. (2013). On cardinality of fuzzy sets, International Journal of Intelligent Systems and Applications 5(6): 47–52, DOI: 10.5815/ijisa.2013.06.06.
  31. Plumpton, C.O. (2014). Semi-supervised ensemble update strategies for on-line classification of FMRI data, Pattern Recognition Letters 37: 172–177, DOI: 10.1016/j.patrec.2013.03.029.
  32. Plumpton, C.O., Kuncheva, L.I., Oosterhof, N.N. and Johnston, S.J. (2012). Naive random subspace ensemble with linear classifiers for real-time classification of FMRI data, Pattern Recognition 45(6): 2101–2108, DOI: 10.1016/j.patcog.2011.04.023.
  33. R Core Team (2012). R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, http://www.R-project.org/.
  34. Rokach, L. (2010). Ensemble-based classifiers, Artificial Intelligence Review 33(1–2): 1–39, DOI: 10.1007/s10462-009-9124-7.
  35. Rokach, L. and Maimon, O. (2005). Clustering methods, Data Mining and Knowledge Discovery Handbook, Springer Science + Business Media, New York, NY, pp. 321–352.
  36. Rousseeuw, P. (1987). Silhouettes: A graphical aid to the interpretation and validation of cluster analysis, Journal of Computational and Applied Mathematics 20(1): 53–65, DOI: 10.1016/0377-0427(87)90125-7.
  37. Schölkopf, B. and Smola, A.J. (2001). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, Cambridge, MA.
  38. Tahir, M.A., Kittler, J. and Bouridane, A. (2012). Multilabel classification using heterogeneous ensemble of multi-label classifiers, Pattern Recognition Letters 33(5): 513–523, DOI: 10.1016/j.patrec.2011.10.019.
  39. Tsoumakas, G., Katakis, I. and Vlahavas, I. (2010). Random k-labelsets for multi-label classification, IEEE Transactions on Knowledge and Data Engineering 99(1): 1079–1089, DOI: 10.1109/TKDE.2010.164.
  40. Valdovinos, R. and Sánchez, J. (2009). Combining multiple classifiers with dynamic weighted voting, in E. Corchado et al. (Eds.), Hybrid Artificial Intelligence Systems, Lecture Notes in Computer Science, Vol. 5572, Springer, Berlin/Heidelberg, pp. 510–516, DOI: 10.1007/978-3-642-02319-4_61.
  41. Ward, J. (1963). Hierarchical grouping to optimize an objective function, Journal of the American Statistical Association 58(301): 236–244, DOI: 10.1080/01621459.1963.10500845.
  42. Wilcoxon, F. (1945). Individual comparisons by ranking methods, Biometrics Bulletin 1(6): 80–83, DOI: 10.2307/3001968.
  43. Woloszynski, T. (2013). Classifier competence based on probabilistic modeling (ccprmod.m) at Matlab Central File Exchange, http://www.mathworks.com/matlabcentral/fileexchange/28391-a-probabilistic-model-of-classifier-competence.
  44. Woloszynski, T. and Kurzynski, M. (2011). A probabilistic model of classifier competence for dynamic ensemble selection, Pattern Recognition 44(10–11): 2656–2668, DOI: 10.1016/j.patcog.2011.03.020.
  45. Woloszynski, T., Kurzynski, M., Podsiadlo, P. and Stachowiak, G.W. (2012). A measure of competence based on random classification for dynamic ensemble selection, Information Fusion 13(3): 207–213, DOI: 10.1016/j.inffus.2011.03.007.
  46. Wolpert, D.H. (1992). Stacked generalization, Neural Networks 5(2): 241–259, DOI: 10.1016/S0893-6080(05)80023-1.
  47. Wozniak, M., Graña, M. and Corchado, E. (2014). A survey of multiple classifier systems as hybrid systems, Information Fusion 16(1): 3–17, DOI: 10.1016/j.inffus.2013.04.006.
DOI: https://doi.org/10.1515/amcs-2016-0012 | Journal eISSN: 2083-8492 | Journal ISSN: 1641-876X
Language: English
Page range: 175 - 189
Submitted on: Nov 10, 2014 | Published on: Mar 31, 2016
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2016 Pawel Trajdos, Marek Kurzynski, published by University of Zielona Góra
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.