Follow the Regularized Leader
by Nicolo Compolongo - "The “Follow the Regularized Leader” algorithm stems from the online learning setting, where learning proceeds sequentially: in each round, an online player makes a decision and then suffers a loss."
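Below is a minimal sketch of this round-by-round protocol for linearized losses with a quadratic regularizer, assuming nothing beyond NumPy; the class and method names (FTRL, play, update) are illustrative, not taken from any library.

```python
import numpy as np

class FTRL:
    """Follow the Regularized Leader with linearized losses and
    the quadratic regularizer R(x) = ||x||^2 / (2 * eta)."""

    def __init__(self, dim, eta=0.1):
        self.g_sum = np.zeros(dim)  # running sum of observed loss gradients
        self.eta = eta

    def play(self):
        # The FTRL decision minimizes <g_sum, x> + ||x||^2 / (2 * eta),
        # which has the closed form x = -eta * g_sum.
        return -self.eta * self.g_sum

    def update(self, grad):
        # After suffering the round's loss, record its gradient.
        self.g_sum += grad
```

Each round the player calls play(), observes a loss, and feeds the loss gradient at the played point to update().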
by Dhiraj Reddy - "“Follow The Regularized Leader” (FTRL) is an optimization algorithm developed at Google for click-through-rate prediction in the early 2010s. It is best suited to shallow models with large, sparse feature spaces. The algorithm is described by McMahan et al. The Keras version supports both online L2 regularization (the L2 regularization described in that paper) and shrinkage-type L2 regularization (the addition of an L2 penalty to the loss function)."