
AdaBoost

AdaBoost is short for Adaptive Boosting. It is generally used with short decision trees.

What Makes AdaBoost Different?

AdaBoost is similar to Random Forests in that the final prediction is aggregated from many decision trees. However, there are three main differences that make AdaBoost unique:

  1. First, AdaBoost creates a forest of stumps rather than full trees. A stump is a tree made of only one node (a single split on one feature) and two leaves.
  2. Second, the stumps that are created are not equally weighted in the final decision (final prediction). Stumps that make more errors get less say in the final vote (see the weighting sketch after this list).
  3. Lastly, the order in which the stumps are made is important, because each stump aims to reduce the errors that the previous stump(s) made.
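
To make the second point concrete, here is a minimal Python sketch of the commonly used stump-weighting formula, alpha = 0.5 * ln((1 - error) / error): a stump with low weighted error gets a large positive say, a coin-flip stump gets roughly zero, and a stump worse than chance gets a negative say. The function name amount_of_say is purely illustrative, and the exact constant varies between AdaBoost variants.

```python
import numpy as np

def amount_of_say(weighted_error, eps=1e-10):
    """Illustrative stump weight: 0.5 * ln((1 - error) / error)."""
    e = np.clip(weighted_error, eps, 1 - eps)  # keep the log well-defined
    return 0.5 * np.log((1 - e) / e)

print(amount_of_say(0.10))  # ~ +1.10: accurate stump, large say
print(amount_of_say(0.50))  # ~  0.00: coin-flip stump, almost no say
print(amount_of_say(0.90))  # ~ -1.10: worse than chance, negative say
```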

How Does AdaBoost Work?

  • AdaBoost is implemented by combining several weak learners into a single strong learner.
  • The weak learners in AdaBoost consider a single input feature and produce a single-split decision tree called a decision stump. Every observation is weighted equally when the first decision stump is drawn.
  • The results from the first decision stump are analyzed, and any observations that are incorrectly classified are assigned higher weights.
  • A new decision stump is then drawn, treating the observations with higher weights as more significant. Again, any misclassified observations are given higher weight, and this process repeats until a set number of stumps has been built or the training error stops improving (see the sketch after this list).
  • AdaBoost can be used for both classification and regression problems; however, it is more commonly used for classification.
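
The reweight-and-refit loop described above can be sketched in a few lines of Python. The sketch below is a simplified version of two-class ("discrete") AdaBoost, not a production implementation: it assumes labels encoded as -1/+1, uses scikit-learn's DecisionTreeClassifier(max_depth=1) as the stump, and the helper names adaboost_fit and adaboost_predict are made up for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Train discrete AdaBoost with decision stumps; y must be in {-1, +1}."""
    y = np.asarray(y)
    n_samples = len(y)
    w = np.full(n_samples, 1.0 / n_samples)          # start with equal weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)  # a single-split stump
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum() / w.sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)        # this stump's "say"
        w = w * np.exp(-alpha * y * pred)            # up-weight the mistakes
        w = w / w.sum()                              # renormalise
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(X, stumps, alphas):
    """Sign of the weighted vote over all stumps."""
    total = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(total)
```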

Here’s a mathematical explanation you can go through: https://towardsdatascience.com/a-mathematical-explanation-of-adaboost-4b0c20ce4382
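
In practice you rarely need to write the loop yourself: scikit-learn provides AdaBoostClassifier (and AdaBoostRegressor for regression), which boosts depth-1 decision trees by default. Below is a minimal usage sketch on synthetic data, which stands in for whatever dataset you are working with.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data (a stand-in for a real dataset)
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# scikit-learn boosts depth-1 decision trees (stumps) by default
model = AdaBoostClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```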
