Reading: «Naive Bayes for Machine Learning» ―Jason Brownlee
Reading: «Machine Learning With Random Forests And Decision Trees» ―Scott Hartshorn
Random Forest is a type of machine learning algorithm, typically used to categorize things based on their data or attributes.
Machine learning ―what is it?
But at a high level, a lot of different types of machine learning are intended to do the same thing: start with examples of something that you know, and use those to develop a pattern so you can recognize those characteristics in other data that you don’t know as much about.
There is a plethora of different techniques applicable to different types of problems. The details of how algorithms work differ between algorithms, but most of them are used in similar ways.
The Basics of Most Machine Learning
1. Start with a set of data.
2. Train your machine learning algorithm.
3. Get a set of data that you want to know the answer to.
4. Pass that data through your trained algorithm and find the result.
Typically, how step 2, the training, is done is what makes machine learning algorithms different from each other.
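The four steps above can be sketched with scikit-learn's RandomForestClassifier (the library the book's examples use). The fruit measurements and labels here are invented for illustration; each row is [length_cm, width_cm].

```python
# A minimal sketch of the four-step workflow, using scikit-learn.
# The fruit data is invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# 1. Start with a set of data (features plus known answers).
X_train = [[9.0, 3.0], [8.5, 3.2], [7.0, 7.5], [6.8, 7.0], [4.0, 4.2], [3.8, 4.0]]
y_train = ["banana", "banana", "grapefruit", "grapefruit", "apple", "apple"]

# 2. Train the machine learning algorithm.
model = RandomForestClassifier(n_estimators=16, random_state=0)
model.fit(X_train, y_train)

# 3. Get a set of data that you want to know the answer to.
X_new = [[8.8, 3.1]]

# 4. Pass that data through the trained algorithm and read off the result.
print(model.predict(X_new))
```

Steps 1, 3, and 4 look roughly the same for most classifiers; swapping the class in step 2 is usually all it takes to try a different algorithm.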
The Random Forest is a bit of a Swiss Army Knife of machine learning algorithms. It can be applied to a wide range of problems and be fairly good at all of them. However, it might not be as good as a specialized algorithm at any given specific problem.
Random Forests are simply a collection of Decision Trees that have been generated using a random subset of data.
The name “Random Forest” comes from combining the randomness that is used to pick the subset of data with having a bunch of decision trees, hence a forest.
A Decision Tree is simply a step-by-step process for deciding which category something belongs to. It can be drawn as a flow chart of questions to go through to determine, for instance, what type of fruit something is.
To use the decision tree, you start at the top and begin hitting each decision in series. At each point, you need to make a choice on which way to go between exactly two options. Eventually you reach the bottom and have a decision as to the outcome, in this case what type of fruit something is.
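That top-to-bottom walk can be written out as a plain chain of if/else questions. The thresholds and fruit types below are invented for illustration; a real tree learned from data would pick its own questions and cut points.

```python
# A hypothetical hand-written decision tree for fruit. Each "if" is one
# decision point with exactly two ways to go; falling all the way through
# the chain is reaching the bottom of the tree.
def classify_fruit(length_cm, width_cm):
    # First decision: is the fruit long and narrow?
    if length_cm / width_cm > 2.0:
        return "banana"
    # Second decision: among roundish fruit, is it large?
    if width_cm > 6.0:
        return "grapefruit"
    # Bottom of the tree: the remaining category.
    return "apple"

print(classify_fruit(9.0, 3.0))   # prints "banana"
```

Each call answers exactly one question per level and always ends at a leaf, which is all a decision tree does at prediction time.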
A Random Forest is made up of a number of decision trees, each of which have been generated slightly differently.
In order to generate those decision trees you need a starting data set. That starting data set needs to have both features and results.
Results are the final answer that you get when you are trying to categorize something.
Features are information about the item that you can use to distinguish different results from each other.
When you pass the data set with both features and results into the Random Forest generation algorithm, it will examine the features and determine which ones to use to generate the best decision tree.
One important thing to know about features is that whatever features you use to train the Random Forest model, you need to use the same features on your data when you use it.
What Random Forests, Decision Trees, and other types of machine learning algorithms are most useful for is taking high-dimensional data, or a large quantity of data, and making sense of it.
You can get the Python code that ran the classification and generated this plot here. http://www.fairlynerdy.com/randomforestexamples/
A more important difference is how the Decision Tree chose to split off the Bananas. We did it with a single diagonal line. The decision tree used two lines, one horizontal and one vertical.
The decision tree works by picking a criterion and a threshold. The criterion specifies which feature to split on, for instance length or width, and a single criterion is always used. The threshold specifies where to split, i.e. what value to split at.
The best a decision tree can do is to split on one feature, then on the other, and repeat that process.
But the decision tree classifier would take quite a few steps to do it, and end up with a stair step like plot, as shown below.
This ends up being one way to improve the results of a random forest. If you see a relationship, like a ratio, between different criteria in the data you can make it easier for the code by making the ratio its own value.
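The ratio trick above can be sketched in a few lines. If the true boundary between classes is diagonal (say, bananas are anything with length/width greater than 2), adding the ratio as its own column lets a single axis-aligned split find it instead of a stair-step of cuts. The numbers and the 2.0 cutoff are invented for illustration.

```python
# Invented fruit measurements: lengths and widths in cm.
lengths = [9.0, 8.5, 7.0, 4.0, 8.0, 3.5]
widths  = [3.0, 3.2, 7.5, 4.2, 3.0, 3.6]

# Original features: a tree can only split on length OR width, so a
# diagonal boundary has to be approximated with many axis-aligned cuts.
X_original = [[l, w] for l, w in zip(lengths, widths)]

# Engineered features: append length/width as a third column, so one
# split at 2.0 on that column captures the diagonal relationship.
X_with_ratio = [[l, w, l / w] for l, w in zip(lengths, widths)]

for row in X_with_ratio:
    print(row)
```

The same idea applies to any known relationship between features: precompute it as a column so the tree can split on it directly.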
Another difference between how the computer classified the data and how we would do it is the level of detail. Eventually we reached a point where we gave up classifying the data.
The computer never threw up its hands and stopped. By default it continues until every single piece of data is split into a category that is 100% pure.
Overfitting means that we are drawing conclusions that are too fine-grained from the data that we have.
There are a couple ways to control overfitting in decision trees.
One way is to limit the number of splits that the decision tree makes.
Even though we have set a random seed, the different number of splits means the branches are analyzed in a different order, which changes the random state at different branches.
We could keep adding splits, but it becomes hard to distinguish exactly where they are all made.
If you want to generate these yourself, you can find the code I used here, http://www.fairlynerdy.com/randomforestexamples/ and the parameter that you need to change is max_depth, an example of which is shown below
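A minimal sketch of that parameter, assuming scikit-learn's DecisionTreeClassifier (the invented fruit data stands in for the book's data set):

```python
# Limiting the number of splits with max_depth, using scikit-learn.
# The fruit data is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

X = [[9.0, 3.0], [8.5, 3.2], [7.0, 7.5], [6.8, 7.0], [4.0, 4.2], [3.8, 4.0]]
y = ["banana", "banana", "grapefruit", "grapefruit", "apple", "apple"]

# max_depth=2 stops the tree after two levels of splits, so it cannot
# keep subdividing until every single leaf is 100% pure.
shallow_tree = DecisionTreeClassifier(max_depth=2, random_state=42)
shallow_tree.fit(X, y)
print(shallow_tree.get_depth())
```

Without `max_depth`, the tree grows until every leaf is pure; with it, growth simply stops at the given number of levels.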
The other way to limit overfitting is to only split a branch of the decision tree if there are a certain number of data points on it.
We might decide that if we don't have at least 6 pieces of data on a branch, then we shouldn't split, because we might be overfitting.
These limits can be set using different parameters. In Python you could do it as shown below.
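A sketch of that second limit, assuming scikit-learn: `min_samples_split=6` means a branch with fewer than 6 data points is left alone rather than split further. The fruit data is invented for illustration.

```python
# Limiting splits by the number of data points on a branch.
from sklearn.tree import DecisionTreeClassifier

X = [[9.0, 3.0], [8.5, 3.2], [7.0, 7.5], [6.8, 7.0], [4.0, 4.2], [3.8, 4.0]]
y = ["banana", "banana", "grapefruit", "grapefruit", "apple", "apple"]

# The root has 6 samples, so it may split once; every resulting branch
# has fewer than 6 samples, so no further splits are allowed, even
# though some leaves are left impure.
tree = DecisionTreeClassifier(min_samples_split=6, random_state=42)
tree.fit(X, y)
print(tree.get_depth(), tree.get_n_leaves())
```

The result is a deliberately under-grown tree: it trades some purity on the training data for conclusions that are less likely to be overfit.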
While an individual decision tree is useful, it does have some limitations. One of the most severe is the tendency for decision trees to overfit their data.
In many real world examples, it can be challenging to classify things based on the data that is given.
A decision tree will not smooth out those anomalies.
A decision tree might break the data down into very small, specialized ranges that work for your data, but not for any random fruit that might come in.
Random Forests attempt to fix this problem by using multiple decision trees and averaging the results.
Random Forests therefore generate their decision trees using randomly selected subsets of the full data set.
There are overlapping shades of different colors. What this is representing is that this random forest was generated with 16 different decision trees. Each of those 16 different decision trees was generated with a slightly different set of data. Then the results for each of the decision trees were combined.
For some areas, all of the decision trees reached the same conclusion, so the colors are not shaded between multiple colors. For other areas, different decision trees generated different results, so there are overlapping colors.
If multiple decision trees are giving you different results, which result should you use? Random Forests solve this problem with voting.
There are two different ways the voting can work.
The first is just to count all the votes by all the decision trees and take the highest count as the solution.
The other way is to count all the votes and return a result based on the ratio of the votes.
More mathematically, the second answer that could be generated is a weighting of all the results.
Note – what is actually occurring is just an averaging of all the results of all the trees.
If you just want the most votes for the most common category, you can use the predict(X) function.
If you want to return the weightings for all the different trees, you can use the predict_proba(X) function.
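The two voting modes can be sketched side by side with scikit-learn's RandomForestClassifier (the fruit data is invented for illustration):

```python
# predict vs. predict_proba: majority vote vs. vote ratios.
from sklearn.ensemble import RandomForestClassifier

X = [[9.0, 3.0], [8.5, 3.2], [7.0, 7.5], [6.8, 7.0], [4.0, 4.2], [3.8, 4.0]]
y = ["banana", "banana", "grapefruit", "grapefruit", "apple", "apple"]

forest = RandomForestClassifier(n_estimators=16, random_state=0)
forest.fit(X, y)

# Majority vote: each of the 16 trees votes, the most common answer wins.
print(forest.predict([[8.8, 3.1]]))

# Vote ratios: one probability per category, averaged over the trees,
# in the order given by forest.classes_. The row sums to 1.
print(forest.classes_)
print(forest.predict_proba([[8.8, 3.1]]))
```

Note that predict_proba averages the per-tree results, so for fully grown trees with pure leaves it matches the fraction of trees voting for each category.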
The data on each of them is all selected from the same source, but on average they each only have 63.2% of the original data set. This is known as bootstrapping, and is covered in the next section.
Because the decision trees are built with different data, they will not all be the same. This is one area of the randomness in a random forest.
There are two ways that randomness is inserted into a Random Forest.
One is based on what data is selected for each tree; the other is based on how the criteria for the splits are chosen.
All of the Decision Trees in a Random Forest use a slightly different set of data.
The final result is based on the votes from all the decision trees.
The outcome of this is that anomalies tend to get smoothed over, since the data causing the anomalies will be in some of the decision trees, but not all of them, while the data that is more general will be in most if not all of the trees.
When generating each tree, that tree has a unique set of data. That set is generated from a random subset of all of the available data, with replacement.
This technique is known as bootstrapping.
Each of the trees uses a set of data that is the same size of the original data set.
If you run this random sampling enough times, with large enough data sets, you will find that on average 63.2% of the original data set ends up in each tree's sample. Since each tree's sample is the same size as the original data set, the remaining 36.8% of it consists of duplicate entries.
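That 63.2% figure is easy to check with a quick simulation: sample n items from a set of n with replacement and count how many distinct originals appear. (Mathematically, the expected fraction is 1 - (1 - 1/n)^n, which approaches 1 - 1/e ≈ 0.632 as n grows.)

```python
# Simulating bootstrap sampling to verify the ~63.2% coverage claim.
import random

random.seed(0)
n = 1000          # size of the original data set (and of each sample)
trials = 200      # number of bootstrap samples to average over

fractions = []
for _ in range(trials):
    # Draw n items with replacement; duplicates are allowed.
    sample = [random.randrange(n) for _ in range(n)]
    # Fraction of the original items that appear at least once.
    fractions.append(len(set(sample)) / n)

print(sum(fractions) / trials)   # close to 0.632
```

Each bootstrap sample is the same size as the original data set, but roughly a third of its entries are repeats of items already drawn.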
Since most Random Forests contain anywhere from a few dozen to several hundred trees, it is likely that each piece of data is included in at least some of the trees.
The other way that a random forest adds randomness to a decision tree is deciding which feature to split the tree on.
Within any given feature, the split will be located at the location which maximizes the information gain on the tree, i.e. the best location.
if the decision tree evaluates multiple features, it will pick the best location in all the features that it looks at when deciding where to make the split.
if all of the trees looked at the same features, they would be very similar.
The way that Random Forest deals with that is to not let the trees look at all of the features.
At any given branch in the decision tree, only a subset of the features are available for it to classify on.
Other branches, even higher or lower branches on the same tree, will have different features that they can classify on.
By default, a Random Forest will use the square root of the number of features as the maximum number of features it will consider at any given branch.
The next decision branch gets to choose between two features independently of what any previous branches evaluated.
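In scikit-learn this behavior is controlled by the `max_features` parameter; `"sqrt"` gives the square-root rule described above. The data below is invented, with four features per fruit so that each split considers only two of them.

```python
# Feature subsetting in a Random Forest: each split draws a fresh random
# subset of sqrt(n_features) features to choose from. The four invented
# columns are [length_cm, width_cm, length/width ratio, weight_g].
from sklearn.ensemble import RandomForestClassifier

X = [
    [9.0, 3.0, 3.00, 120],
    [8.5, 3.2, 2.66, 118],
    [7.0, 7.5, 0.93, 250],
    [6.8, 7.0, 0.97, 245],
    [4.0, 4.2, 0.95, 80],
    [3.8, 4.0, 0.95, 78],
]
y = ["banana", "banana", "grapefruit", "grapefruit", "apple", "apple"]

# With 4 features, max_features="sqrt" means each branch sees only 2 of
# them, drawn independently of what earlier branches saw.
forest = RandomForestClassifier(n_estimators=16, max_features="sqrt",
                                random_state=0)
forest.fit(X, y)
print(forest.max_features)
```

Because different branches see different feature subsets, the trees end up genuinely different from each other even when they are built from similar data.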