Decision Trees
How do we measure the quality of a split, and which split is best? The split is chosen by minimizing a cost function: SSE (sum of squared errors) for regression trees, Gini impurity for classification trees.
For classification, the Gini cost function is used, which indicates how "pure" the leaf nodes are (how mixed the training data assigned to each node is).
Gini = sum(p_k * (1 - p_k)), where the sum runs over the classes k and p_k is the proportion of training instances of class k in the node; 0 means a perfectly pure node.
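To make the formula concrete, here is a minimal sketch of computing Gini impurity for a candidate split; the helper names (`gini`, `split_gini`) and the toy labels are illustrative, not from these notes.

```python
from collections import Counter

def gini(labels):
    """Gini = sum(p_k * (1 - p_k)) over the classes k present in `labels`."""
    n = len(labels)
    if n == 0:
        return 0.0
    return sum((c / n) * (1 - c / n) for c in Counter(labels).values())

def split_gini(left_labels, right_labels):
    """Weighted Gini of a split: lower means purer child nodes."""
    n = len(left_labels) + len(right_labels)
    return (len(left_labels) / n) * gini(left_labels) + \
           (len(right_labels) / n) * gini(right_labels)

# A pure node scores 0; a 50/50 node scores 0.5.
print(gini(["a", "a", "a"]))                    # 0.0
print(gini(["a", "a", "b", "b"]))               # 0.5
print(split_gini(["a", "a"], ["b", "b", "a"]))  # ~0.267, weighted impurity of the split
```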
Early stopping - allowing as few as 1 sample per leaf node overfits; requiring a minimum of 5-10 samples per leaf works well.
Pruning - evaluate what happens if leaf nodes are removed; if removing a node causes a big drop in accuracy on held-out data, we need it, otherwise prune it (see the sketch below).
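A sketch of both ideas with scikit-learn: `min_samples_leaf` implements the early-stopping rule, and cost-complexity pruning (`ccp_alpha`) is scikit-learn's built-in pruning, used here as a stand-in for the evaluate-and-remove-leaves procedure described above. The dataset and split sizes are arbitrary.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Early stopping: require at least 5 samples per leaf instead of 1.
stopped = DecisionTreeClassifier(min_samples_leaf=5, random_state=0)
stopped.fit(X_train, y_train)

# Pruning: compute the effective alphas, then keep the tree that scores best
# on validation data, i.e. prune as long as accuracy does not drop much.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)
best = max(
    (DecisionTreeClassifier(ccp_alpha=a, random_state=0).fit(X_train, y_train)
     for a in path.ccp_alphas),
    key=lambda tree: tree.score(X_val, y_val),
)
print("leaves after pruning:", best.get_n_leaves(),
      "val acc:", best.score(X_val, y_val))
```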
KDTREE
RANDOM FOREST
How to deal with imbalanced data in Random Forest (both approaches are sketched below) -
One approach is cost-sensitive learning: weight the classes so that errors on the minority class cost more.
The other is a sampling technique: over-sample the minority class or under-sample the majority class so the training data is balanced.
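A sketch of both approaches, assuming scikit-learn; the synthetic imbalanced dataset is illustrative. `class_weight="balanced_subsample"` covers cost-sensitive learning, and manual under-sampling with `sklearn.utils.resample` covers the sampling technique (libraries such as imbalanced-learn package this up, but plain resampling shows the idea).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample

# 95% / 5% class split to simulate imbalance.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

# 1) Cost-sensitive learning: weights inversely proportional to class
#    frequency; "balanced_subsample" recomputes them per bootstrap sample.
cost_sensitive = RandomForestClassifier(class_weight="balanced_subsample",
                                        random_state=0)
cost_sensitive.fit(X, y)

# 2) Sampling: under-sample the majority class to the minority class size
#    (over-sampling the minority class is the mirror-image alternative).
X_maj, y_maj = X[y == 0], y[y == 0]
X_min, y_min = X[y == 1], y[y == 1]
X_maj_down, y_maj_down = resample(X_maj, y_maj, n_samples=len(y_min),
                                  replace=False, random_state=0)
X_bal = np.vstack([X_maj_down, X_min])
y_bal = np.concatenate([y_maj_down, y_min])
sampled = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
```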
EXTRA TREES