Decision-making in the face of uncertainty is a significant challenge in machine learning, and the multi-armed bandit model is a commonly used framework to address it. This comprehensive and rigorous introduction to the multi-armed bandit problem examines all the major settings, including stochastic, adversarial, and Bayesian frameworks. A focus on both mathematical intuition and carefully worked proofs makes this an excellent reference for established researchers and a helpful resource for graduate students in computer science, engineering, statistics, applied mathematics, and economics. Linear bandits receive special attention as one of the most useful models in applications, while other chapters are dedicated to combinatorial bandits, ranking, non-stationary problems, Thompson sampling, and pure exploration. The book ends with a peek into the world beyond bandits, introducing partial monitoring and learning in Markov decision processes.
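For readers new to the model, here is a minimal sketch of one classical stochastic-bandit strategy, UCB1, which balances exploration and exploitation by adding a confidence bonus to each arm's empirical mean. The Bernoulli reward setup, the function name, and all parameters below are illustrative assumptions for this sketch, not code or notation taken from the book itself.

```python
import math
import random

def ucb1(means, horizon=10_000, seed=0):
    """Run UCB1 on Bernoulli arms with the given success probabilities."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k      # number of pulls per arm
    totals = [0.0] * k    # summed rewards per arm
    regret = 0.0
    best = max(means)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # pull each arm once to initialize the estimates
        else:
            # pick the arm maximizing empirical mean plus confidence bonus
            arm = max(range(k), key=lambda i: totals[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
        regret += best - means[arm]  # expected regret of this pull
    return counts, regret

counts, regret = ucb1([0.3, 0.5, 0.7])
print("pulls per arm:", counts)        # most pulls concentrate on the 0.7 arm
print("expected regret:", round(regret, 1))
```

Under the stochastic setting the book analyzes, strategies of this family incur regret that grows only logarithmically with the horizon, which is why the suboptimal arms above are pulled only a vanishing fraction of the time.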

