SMOTE for Learning from Imbalanced Data: Progress and Challenges, Marking the 15-year Anniversary

Fernández Hilario, Alberto Luis; García López, Salvador; Herrera Triguero, Francisco; Chawla, Nitesh V.

The Synthetic Minority Oversampling Technique (SMOTE) preprocessing algorithm is considered the "de facto" standard in the framework of learning from imbalanced data. This is due to its simplicity in the design of the procedure, as well as its robustness when applied to different types of problems. Since its publication in 2002, SMOTE has proven successful in a variety of applications from several different domains. SMOTE has also inspired several approaches to counter the issue of class imbalance, and has significantly contributed to new supervised learning paradigms, including multilabel classification, incremental learning, semi-supervised learning, and multi-instance learning, among others. It is a standard benchmark for learning from imbalanced data and is featured in a number of different software packages, from open source to commercial. In this paper, marking the fifteen-year anniversary of SMOTE, we reflect on the SMOTE journey, discuss the current state of affairs with SMOTE and its applications, and identify the next set of challenges to extend SMOTE for Big Data problems.

Citation: Fernández Hilario, A.L. [et al.]. SMOTE for Learning from Imbalanced Data: Progress and Challenges, Marking the 15-year Anniversary. Journal of Artificial Intelligence Research 61 (2018) 863-905. http://hdl.handle.net/10481/56411
ISSN: 1076-9757 (print), 1943-5037 (online)
Language: English
License: Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 Spain, http://creativecommons.org/licenses/by-nc-nd/3.0/es/ (open access)
Publisher: AI Access Foundation
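To make the abstract's description concrete, the core idea of SMOTE can be sketched as follows: each synthetic minority example is created by linearly interpolating between a minority-class sample and one of its k nearest minority-class neighbours. This is a minimal NumPy illustration of that idea, not the authors' reference implementation; the function name `smote_sketch` and its parameters are illustrative only.

```python
import numpy as np

def smote_sketch(X_min, n_synthetic, k=5, rng=None):
    """Illustrative sketch of SMOTE's core interpolation step.

    X_min       : (n, d) array of minority-class samples (requires n > k)
    n_synthetic : number of synthetic samples to generate
    k           : number of nearest minority neighbours to consider
    """
    rng = np.random.default_rng(rng)
    n, d = X_min.shape
    # Pairwise Euclidean distances within the minority class only.
    dist = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)  # a sample is not its own neighbour
    # Indices of the k nearest minority neighbours of each sample.
    nn = np.argsort(dist, axis=1)[:, :k]
    synthetic = np.empty((n_synthetic, d))
    for i in range(n_synthetic):
        j = rng.integers(n)                     # pick a minority sample
        nb = X_min[nn[j, rng.integers(k)]]      # pick one of its k neighbours
        gap = rng.random()                      # interpolation factor in [0, 1)
        synthetic[i] = X_min[j] + gap * (nb - X_min[j])
    return synthetic
```

In practice one would use a maintained implementation such as `imblearn.over_sampling.SMOTE` from the imbalanced-learn package rather than hand-rolling the procedure; the sketch above only shows why each synthetic point lies on the segment between a minority sample and one of its neighbours.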