An introduction to recursive partitioning: rationale, application, and characteristics of classification and regression trees, bagging, and random forests

Psychol Methods. 2009 Dec;14(4):323-48. doi: 10.1037/a0016973.

Abstract

Recursive partitioning methods have become popular and widely used tools for nonparametric regression and classification in many scientific fields. Random forests in particular, which can deal with large numbers of predictor variables even in the presence of complex interactions, have been applied successfully in genetics, clinical medicine, and bioinformatics within the past few years. High-dimensional problems are common not only in genetics, but also in some areas of psychological research, where only a few subjects can be measured because of time or cost constraints, yet a large amount of data is generated for each subject. Random forests have been shown to achieve high prediction accuracy in such applications and to provide descriptive variable importance measures that reflect the impact of each variable through both main effects and interactions. The aim of this work is to introduce the principles of the standard recursive partitioning methods as well as recent methodological improvements, to illustrate their usage for low- and high-dimensional data exploration, and to point out limitations of the methods and potential pitfalls in their practical application. Application of the methods is illustrated with freely available implementations in the R system for statistical computing.
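As a rough illustration of the kind of analysis the abstract describes, the following minimal R sketch fits a single classification tree and a random forest with permutation-based variable importance. The specific packages (rpart, randomForest), the built-in iris example data, and all parameter values are assumptions chosen for illustration; they are not taken from the article, which may rely on other implementations (e.g., the party package).

    # Minimal sketch (assumed packages: rpart, randomForest; example data: iris)
    library(rpart)          # classification and regression trees (CART-style)
    library(randomForest)   # random forests with variable importance measures

    set.seed(42)            # make bootstrap and variable sampling reproducible

    # Single classification tree
    tree <- rpart(Species ~ ., data = iris)
    print(tree)             # text display of the fitted splits

    # Random forest with permutation-based variable importance
    rf <- randomForest(Species ~ ., data = iris, ntree = 500, importance = TRUE)
    print(rf)               # out-of-bag error estimate and confusion matrix
    importance(rf)          # per-variable importance (permutation and Gini)
    varImpPlot(rf)          # plot of the importance measures

The out-of-bag error shown by print(rf) and the importance scores from importance(rf) correspond to the prediction accuracy and descriptive variable importance measures mentioned in the abstract.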

MeSH terms

  • Data Interpretation, Statistical
  • Humans
  • Models, Psychological*
  • Psychology / methods*
  • Psychology / statistics & numerical data*
  • Regression Analysis
  • Statistics, Nonparametric*