Random forests, or random decision forests, are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. The first algorithm for random decision forests was created by Tin Kam Ho using the random subspace method, which, in Ho's formulation, is a way to implement the "stochastic discrimination" approach to classification proposed by Eugene Kleinberg. Leo Breiman's later paper on random forests also offers the first theoretical result for the method, in the form of a bound on the generalization error that depends on the strength of the individual trees in the forest and the correlation between them. Ho's idea of random subspace selection was influential in the design of random forests: Ho showed that forests of trees splitting with oblique hyperplanes can gain accuracy as they grow without suffering from overfitting, as long as the forests are randomly restricted to be sensitive only to selected feature dimensions.
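The aggregation step described above (mode for classification, mean for regression) can be sketched in a few lines of R. This is a toy illustration only; the vectors below stand in for the outputs of three hypothetical trees and are not produced by a real forest.

```
# Toy illustration of aggregating individual tree outputs
# (three hypothetical trees; values are made up for the example)
tree_preds_class <- c("setosa", "versicolor", "setosa")  # classification outputs
tree_preds_reg   <- c(4.2, 3.9, 4.5)                     # regression outputs

# Classification: the forest predicts the mode (majority vote)
vote <- names(which.max(table(tree_preds_class)))  # "setosa"

# Regression: the forest predicts the mean of the trees' outputs
avg <- mean(tree_preds_reg)  # 4.2
```

In practice, packages such as `randomForest` perform this aggregation internally when `predict()` is called on a fitted forest.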


```
# Load the randomForest package (install.packages("randomForest") if needed)
library(randomForest)

# Combine predictors and target into one training data frame
x <- cbind(x_train, y_train)

# Fit a random forest of 500 trees predicting Species
fit <- randomForest(Species ~ ., data = x, ntree = 500)
print(fit)  # shows the OOB error estimate and confusion matrix

# Predict on held-out data
predicted <- predict(fit, x_test)
```