Follow the Regularized Leader
FTRL: the original paper by McMahan et al. (2013), "Ad Click Prediction: a View from the Trenches".
FTRL by Nicolò Campolongo: "The 'Follow the Regularized Leader' algorithm stems from the online learning setting, where the learning process is sequential. In this setting, an online player makes a decision in every round and suffers a loss."
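For orientation, the update the quote describes can be written in one line. This is the generic, textbook FTRL step rather than notation taken from the post itself: at round t, with g_1, ..., g_t the (linearized) losses observed so far and R a regularizer,

```latex
w_{t+1} \;=\; \operatorname*{arg\,min}_{w}\; \Big( R(w) + \sum_{s=1}^{t} \langle g_s, w \rangle \Big)
```

In words: instead of following the plain leader (the minimizer of past losses), the player minimizes past losses plus a regularizer, which stabilizes the predictions from round to round.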
A thorough Medium article by Dhiraj Reddy.
Keras on FTRL: "'Follow The Regularized Leader' (FTRL) is an optimization algorithm developed at Google for click-through rate prediction in the early 2010s. It is most suitable for shallow models with large and sparse feature spaces. The algorithm is described by McMahan et al., 2013. The Keras version has support for both online L2 regularization (the L2 regularization described in the paper above) and shrinkage-type L2 regularization (which is the addition of an L2 penalty to the loss function)."
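A minimal sketch of wiring that optimizer into a shallow model via TensorFlow's Keras API (`tf.keras.optimizers.Ftrl`); the feature count and regularization strengths below are illustrative placeholders, not recommended values:

```python
import tensorflow as tf

# Assumed: a large, sparse feature space, the setting FTRL targets.
NUM_FEATURES = 10_000  # illustrative size

# A shallow, logistic-regression-style model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

optimizer = tf.keras.optimizers.Ftrl(
    learning_rate=0.05,
    l1_regularization_strength=0.01,              # encourages sparse weights
    l2_regularization_strength=0.01,              # online L2, as in the paper
    l2_shrinkage_regularization_strength=0.001,   # shrinkage-type L2 penalty on the loss
)

model.compile(optimizer=optimizer, loss="binary_crossentropy")
```

The two L2 arguments correspond to the two flavors mentioned in the Keras description: `l2_regularization_strength` is the online L2 from the paper, while `l2_shrinkage_regularization_strength` adds the shrinkage-type penalty to the loss function.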