

Decision trees created during the learning process are often overgrown. They meet the problem of overfitting to the training dataset: some nodes are created to fit single objects of this dataset and seem to be redundant. The existence of such nodes distorts the power of generalization which should characterize the classifier. It also extends the classification time, which is especially important in large trees. While working with our classifier, C-fuzzy random forest, we met the same problem. We noticed that the C-fuzzy decision trees and Cluster–context fuzzy decision trees which are part of the created forest are too large and overfitted to the training dataset. The objective of this work is to reduce the trees’ size using several popular and widely known pruning methods, adapted to our classifier, and to check the way pruning influences classification accuracy and computation time. Five pruning methods were adjusted to the mentioned kinds of trees and examined: Reduced Error Pruning (REP), Pessimistic Error Pruning (PEP), Minimum Error Pruning (MEP), Critical Value Pruning (CVP) and Cost-Complexity Pruning. C-fuzzy random forests with unpruned trees and with trees constructed using each of these pruning methods were created. The evaluation of the created forests was performed on eleven discrete decision class datasets (forest with C-fuzzy decision trees) and two continuous decision class datasets (forest with Cluster–context fuzzy decision trees). Our experiments show that pruning trees in C-fuzzy random forest in general reduces computation time and improves classification accuracy. Generalizing, the best classification accuracy improvement was achieved using CVP for discrete decision class problems and REP for continuous decision class datasets, but for each dataset different pruning methods work well. The method which pruned trees the most was PEP and the fastest one was MEP. However, there is no pruning method which fits best for all datasets; the pruning method should be chosen individually according to the given problem. There are also situations where it is better to leave trees unpruned.

Pruning decision trees is a way to decrease their size in order to reduce classification time and improve (or at least maintain) classification accuracy. In this paper, the idea of applying different pruning methods to C-fuzzy decision trees and Cluster–context fuzzy decision trees in C-fuzzy random forest is presented. C-fuzzy random forest is a classifier which we created and are continuing to improve. This solution is based on the fuzzy random forest and uses C-fuzzy decision trees or Cluster–context fuzzy decision trees, depending on the variant.
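
To make the general pruning procedure concrete, the sketch below shows Reduced Error Pruning, the simplest of the five examined methods, applied to a plain crisp decision tree in Python. It is only an illustration of the technique, not the fuzzy-tree implementation used in C-fuzzy random forest: the `Node` structure, its `split` routing function, the assumption that every node stores the training labels that reached it, and the use of a held-out pruning set are all assumptions made for this example.

```python
from collections import Counter

class Node:
    """Plain crisp decision tree node, used only to illustrate REP;
    the trees in C-fuzzy random forest are fuzzy and more involved."""
    def __init__(self, split=None, children=None, labels=None):
        self.split = split          # callable: object -> child index; None for a leaf
        self.children = children or []
        self.labels = labels or []  # training labels that reached this node (assumed non-empty)

    def is_leaf(self):
        return not self.children

    def majority(self):
        return Counter(self.labels).most_common(1)[0][0]

    def predict(self, x):
        node = self
        while not node.is_leaf():
            node = node.children[node.split(x)]
        return node.majority()

def error_count(node, objects):
    """Misclassifications of (x, y) pairs by the subtree rooted at node."""
    return sum(node.predict(x) != y for x, y in objects)

def reduced_error_pruning(node, pruning_set):
    """Bottom-up REP: replace a subtree with a majority-class leaf whenever
    the leaf misclassifies no more pruning-set objects than the subtree."""
    if node.is_leaf():
        return node
    # Route each pruning object to the child it falls into and prune children first.
    buckets = [[] for _ in node.children]
    for x, y in pruning_set:
        buckets[node.split(x)].append((x, y))
    node.children = [reduced_error_pruning(child, bucket)
                     for child, bucket in zip(node.children, buckets)]
    candidate_leaf = Node(labels=node.labels)
    if error_count(candidate_leaf, pruning_set) <= error_count(node, pruning_set):
        return candidate_leaf   # pruning does not hurt on the held-out set
    return node
```

A subtree is replaced by a majority-class leaf whenever that leaf performs at least as well on the held-out set, which is what makes REP both simple and dependent on a representative pruning sample; the other four methods differ mainly in how they estimate the error of a candidate leaf.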
