Book: Bandit Algorithms
Bandits for recommender systems by Eugene Yan
Rolling out multi-armed bandits for fast adaptive experimentation
Thompson sampling for multi-armed bandits
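As a rough illustration of the Thompson sampling idea referenced in the last link (this sketch is my own and is not drawn from any of the listed sources), a minimal Beta-Bernoulli version in Python: each arm keeps a Beta posterior over its win rate, we sample from every posterior, and we pull the arm with the highest sample. The reward rates in the demo are hypothetical.

```python
import random

def thompson_sampling(pull, n_arms, n_rounds, seed=0):
    """Beta-Bernoulli Thompson sampling.

    pull(arm) should return True (reward) or False (no reward).
    Returns per-arm (wins, losses) counts after n_rounds pulls.
    """
    rng = random.Random(seed)
    wins = [0] * n_arms
    losses = [0] * n_arms
    for _ in range(n_rounds):
        # Sample a plausible win rate from each arm's Beta(wins+1, losses+1)
        # posterior (uniform prior), then play the arm with the best sample.
        samples = [rng.betavariate(wins[a] + 1, losses[a] + 1)
                   for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        if pull(arm):
            wins[arm] += 1
        else:
            losses[arm] += 1
    return wins, losses

# Demo on a simulated bandit with made-up true reward rates.
true_rates = [0.2, 0.5, 0.8]
sim = random.Random(42)
wins, losses = thompson_sampling(lambda a: sim.random() < true_rates[a],
                                 n_arms=3, n_rounds=2000)
pulls = [w + l for w, l in zip(wins, losses)]
```

With enough rounds, the pull counts concentrate on the best arm while the worse arms are sampled only occasionally, which is the exploration/exploitation trade-off the linked posts discuss.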