Learning the naive Bayes classifier with optimization models
By: Sona Taheri and Musa Mammadov
Open Access | Dec 2013

Abstract

Naive Bayes is among the simplest probabilistic classifiers. It often performs surprisingly well in many real-world applications, despite the strong assumption that all features are conditionally independent given the class. In the standard learning process for this classifier with a known structure, class probabilities and conditional probabilities are estimated from training data, and the values of these probabilities are then used to classify new observations. In this paper, we introduce three novel optimization models for the naive Bayes classifier in which both the class probabilities and the conditional probabilities are treated as variables. The values of these variables are found by solving the corresponding optimization problems. Numerical experiments are conducted on several real-world binary classification data sets, where continuous features are discretized using three different methods. The performance of these models is compared with that of the naive Bayes classifier, tree-augmented naive Bayes, support vector machines, C4.5 and the nearest-neighbor classifier. The results demonstrate that the proposed models can significantly improve the performance of the naive Bayes classifier while maintaining its simple structure.
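The standard learning process the abstract contrasts against can be sketched as follows: class priors and per-feature conditional probabilities are estimated from counts in the training data, and a new observation is assigned the class with the highest (log-)posterior. This is a minimal illustration with discrete features and Laplace smoothing; the function names are ours, and it shows only the count-based baseline, not the paper's optimization models, in which these probabilities become decision variables.

```python
from collections import defaultdict
import math

def train_nb(X, y, alpha=1.0):
    """Estimate class priors and per-feature conditional probabilities
    from discrete training data, with Laplace smoothing parameter alpha."""
    classes = sorted(set(y))
    n_features = len(X[0])
    priors = {c: (sum(1 for t in y if t == c) + alpha)
                 / (len(y) + alpha * len(classes))
              for c in classes}
    # counts[c][j][v] = number of class-c examples with feature j equal to v
    counts = {c: [defaultdict(float) for _ in range(n_features)] for c in classes}
    totals = {c: [0] * n_features for c in classes}
    values = [set() for _ in range(n_features)]   # observed values per feature
    for xi, ci in zip(X, y):
        for j, v in enumerate(xi):
            counts[ci][j][v] += 1
            totals[ci][j] += 1
            values[j].add(v)

    def cond(c, j, v):
        # Smoothed estimate of P(feature j = v | class c)
        return (counts[c][j][v] + alpha) / (totals[c][j] + alpha * len(values[j]))

    return classes, priors, cond

def predict_nb(x, classes, priors, cond):
    """Classify x by maximum log-posterior under the independence assumption."""
    best, best_score = None, float("-inf")
    for c in classes:
        score = math.log(priors[c]) + sum(
            math.log(cond(c, j, v)) for j, v in enumerate(x)
        )
        if score > best_score:
            best, best_score = c, score
    return best
```

For example, training on the four points `[(0,0), (0,1), (1,0), (1,1)]` with labels `[0, 0, 1, 1]` and classifying `(0,0)` returns class 0, since feature 0 alone separates the classes.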

DOI: https://doi.org/10.2478/amcs-2013-0059 | Journal eISSN: 2083-8492 | Journal ISSN: 1641-876X
Language: English
Page range: 787 - 795
Published on: Dec 31, 2013
Published by: University of Zielona Góra
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2013 Sona Taheri, Musa Mammadov, published by University of Zielona Góra
This work is licensed under the Creative Commons License.

Volume 23 (2013): Issue 4 (December 2013)