Graph-based generation of a meta-learning search space
Open Access | Sep 2012

References

  1. Bensusan, H., Giraud-Carrier, C. and Kennedy, C.J. (2000). A higher-order approach to meta-learning, in J. Cussens and A. Frisch (Eds.), Proceedings of the Work-in-Progress Track at the 10th International Conference on Inductive Logic Programming, Springer-Verlag, Berlin/Heidelberg, pp. 33-42.
  2. Brazdil, P., Giraud-Carrier, C., Soares, C. and Vilalta, R. (2009). Metalearning: Applications to Data Mining, Springer, Berlin/Heidelberg, DOI: 10.1007/978-3-540-73263-1.
  3. Brazdil, P., Soares, C. and da Costa, J.P. (2003). Ranking learning algorithms: Using IBL and meta-learning on accuracy and time results, Machine Learning 50(3): 251-277, DOI: 10.1023/A:1021713901879.
  4. Chan, P. and Stolfo, S.J. (1996). On the accuracy of metalearning for scalable data mining, Journal of Intelligent Information Systems 8(1): 5-28, DOI: 10.1023/A:1008640732416.
  5. Czarnowski, I. and Jędrzejowicz, P. (2011). Application of agent-based simulated annealing and tabu search procedures to solving the data reduction problem, International Journal of Applied Mathematics and Computer Science 21(1): 57-68, DOI: 10.2478/v10006-011-0004-3.
  6. Duch, W. and Grudziński, K. (1999). Search and global minimization in similarity-based methods, International Joint Conference on Neural Networks, Washington, DC, USA, p. 742.
  7. Duch, W. and Itert, L. (2003). Committees of undemocratic competent models, Proceedings of the Joint International Conference on Artificial Neural Networks (ICANN) and the International Conference on Neural Information Processing (ICONIP), Istanbul, Turkey, pp. 33-36.
  8. Duch, W., Wieczorek, T., Biesiada, J. and Blachnik, M. (2004). Comparison of feature ranking methods based on information entropy, Proceedings of International Joint Conference on Neural Networks, Budapest, Hungary, pp. 1415-1420.
  9. Frank, A. and Asuncion, A. (2010). UCI machine learning repository, University of California, School of Information and Computer Science, Irvine, CA, http://archive.ics.uci.edu/ml.
  10. Grąbczewski, K. and Jankowski, N. (2011). Saving time and memory in computational intelligence system with machine unification and task spooling, Knowledge-Based Systems 24(5): 570-588, DOI: 10.1016/j.knosys.2011.01.003.
  11. Guyon, I. (2003). NIPS 2003 workshop on feature extraction, http://www.clopinet.com/isabelle/Projects/NIPS2003.
  12. Guyon, I. (2006). Performance prediction challenge, http://www.modelselect.inf.ethz.ch.
  13. Guyon, I., Gunn, S., Nikravesh, M. and Zadeh, L. (Eds.) (2006). Feature Extraction: Foundations and Applications, Springer, Berlin/Heidelberg, DOI: 10.1007/978-3-540-35488-8.
  14. Jankowski, N., Duch, W. and Grąbczewski, K. (Eds.) (2011). Meta-learning in Computational Intelligence, Studies in Computational Intelligence, Vol. 358, Springer, Berlin/Heidelberg.
  15. Jankowski, N. and Grąbczewski, K. (2005). Heterogenous committees with competence analysis, in N. Nedjah, L. Mourelle, M. Vellasco, A. Abraham and M. Köppen (Eds.), 5th International Conference on Hybrid Intelligent Systems, Rio de Janeiro, Brazil, IEEE Press, New York, NY, pp. 417-422, DOI: 10.1109/ICHIS.2005.54.
  16. Jankowski, N. and Grąbczewski, K. (2007). Handwritten digit recognition-Road to contest victory, IEEE Symposium Series on Computational Intelligence, IEEE Press, New York, NY, pp. 491-498.
  17. Jankowski, N. and Grochowski, M. (2004). Comparison of instances selection algorithms I: Algorithms survey, in L. Rutkowski, J. Siekmann, R. Tadeusiewicz and L.A. Zadeh (Eds.), Artificial Intelligence and Soft Computing, Lecture Notes in Artificial Intelligence, Vol. 3070, Springer-Verlag, Berlin/Heidelberg, pp. 598-603.
  18. Jankowski, N. and Grochowski, M. (2005). Instances selection algorithms in the conjunction with LVQ, in M.H. Hamza (Ed.), Artificial Intelligence and Applications, ACTA Press, Innsbruck, pp. 453-459.
  19. Kadlec, P. and Gabrys, B. (2008). Learnt topology gating artificial neural networks, IEEE World Congress on Computational Intelligence, Hong Kong, China, pp. 2605-2612.
  20. Kohonen, T. (1986). Learning vector quantization for pattern recognition, Technical Report TKK-F-A601, Helsinki University of Technology, Espoo.
  21. Kordík, P. and Černý, J. (2011). Self-organization of supervised models, in N. Jankowski, W. Duch and K. Grąbczewski (Eds.), Meta-learning in Computational Intelligence, Studies in Computational Intelligence, Vol. 358, Springer, Berlin/Heidelberg, pp. 179-223, DOI: 10.1007/978-3-642-20980-2_6.
  22. Korytkowski, M., Nowicki, R., Rutkowski, L. and Scherer, R. (2011). AdaBoost ensemble of DCOG rough-neuro-fuzzy systems, in P. Jędrzejowicz, N.T. Nguyen and K. Hoang (Eds.), ICCCI (1), Lecture Notes in Computer Science, Vol. 6922, Springer, Berlin/Heidelberg, pp. 62-71, DOI: 10.1007/978-3-642-23935-9_6.
  23. Łęski, J. (2003). A fuzzy if-then rule-based nonlinear classifier, International Journal of Applied Mathematics and Computer Science 13(2): 215-223.
  24. Peng, Y., Flach, P., Soares, C. and Brazdil, P. (2002). Improved dataset characterisation for meta-learning, 5th International Conference on Discovery Science, Luebeck, Germany, pp. 141-152.
  25. Pfahringer, B., Bensusan, H. and Giraud-Carrier, C. (2000). Meta-learning by landmarking various learning algorithms, International Conference on Machine Learning, Stanford, CA, USA, pp. 743-750.
  26. Prodromidis, A. and Chan, P. (2000). Meta-learning in distributed data mining systems: Issues and approaches, in H. Kargupta and P. Chan (Eds.), Book on Advances of Distributed Data Mining, AAAI Press, Menlo Park, CA.
  27. Scherer, R. (2010). Designing boosting ensemble of relational fuzzy systems, International Journal of Neural Systems 20(5): 381-388, DOI: 10.1142/S0129065710002528.
  28. Scherer, R. (2011). An ensemble of logical-type neuro-fuzzy systems, Expert Systems with Applications 38(10): 13115-13120, DOI: 10.1016/j.eswa.2011.04.117.
  29. Smith-Miles, K.A. (2008). Towards insightful algorithm selection for optimization using meta-learning concepts, IEEE World Congress on Computational Intelligence, Hong Kong, China, pp. 4117-4123.
  30. Todorovski, L. and Dzeroski, S. (2003). Combining classifiers with meta decision trees, Machine Learning Journal 50(3): 223-249, DOI: 10.1023/A:1021709817809.
  31. Troć, M. and Unold, O. (2010). Self-adaptation of parameters in a learning classifier system ensemble machine, International Journal of Applied Mathematics and Computer Science 20(1): 157-174, DOI: 10.2478/v10006-010-0012-8.
  32. Witten, I.H. and Frank, E. (2005). Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann, Amsterdam.
DOI: https://doi.org/10.2478/v10006-012-0049-y | Journal eISSN: 2083-8492 | Journal ISSN: 1641-876X
Language: English
Page range: 647-667
Published on: Sep 28, 2012
Published by: University of Zielona Góra
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2012 Norbert Jankowski, published by University of Zielona Góra
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.