Approximate Dynamic Programming: Solving the Curses of Dimensionality, Second Edition

John Wiley & Sons · Ebook · 656 pages

About this ebook

Praise for the First Edition

"Finally, a book devoted to dynamic programming and written using the language of operations research (OR)! This beautiful book fills a gap in the libraries of OR specialists and practitioners."
–Computing Reviews

This new edition focuses on modeling and computation for complex classes of approximate dynamic programming problems.

Understanding approximate dynamic programming (ADP) is vital for developing practical, high-quality solutions to complex industrial problems, particularly when those problems involve making decisions in the presence of uncertainty. Approximate Dynamic Programming, Second Edition uniquely integrates four distinct disciplines (Markov decision processes, mathematical programming, simulation, and statistics) to demonstrate how to successfully approach, model, and solve a wide range of real-life problems using ADP.

The book continues to bridge the gap between computer science, simulation, and operations research, and now adopts the notation and vocabulary of reinforcement learning as well as stochastic search and simulation optimization. The author outlines the essential algorithms that serve as a starting point in the design of practical solutions for real problems. The three curses of dimensionality that impact complex problems are introduced, and detailed coverage of implementation challenges is provided. The Second Edition also features:

  • A new chapter describing four fundamental classes of policies for working with diverse stochastic optimization problems: myopic policies, look-ahead policies, policy function approximations, and policies based on value function approximations (contrasted in the toy sketch after this list)

  • A new chapter on policy search that brings together stochastic search and simulation optimization concepts and introduces a new class of optimal learning strategies

  • Updated coverage of the exploration-exploitation problem in ADP, now including a recently developed method for doing active learning in the presence of a physical state, using the concept of the knowledge gradient (illustrated in the second sketch after this list)

  • A new sequence of chapters describing statistical methods for approximating value functions, estimating the value of a fixed policy, and approximating value functions while searching for optimal policies
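
To make the four policy classes concrete, here is a minimal, hypothetical Python sketch (not code from the book) that contrasts them on a toy inventory problem. All names, parameter values, and the stand-in fitted value function are illustrative assumptions.

    # Toy inventory problem: state s = units on hand, decision x = units to
    # order, demand is uniform on 0..10, and the one-period profit is
    # price * min(s + x, demand) - cost * x.  Everything here is made up.
    PRICE, COST = 5.0, 3.0
    DECISIONS = range(11)   # we may order 0..10 units
    DEMANDS = range(11)     # demand is 0..10, equally likely

    def expected_profit(s, x):
        """Expected one-period profit of ordering x with s units on hand."""
        return sum(PRICE * min(s + x, d) - COST * x for d in DEMANDS) / len(DEMANDS)

    def myopic_policy(s):
        # Myopic: maximize the immediate expected reward, ignoring the future.
        return max(DECISIONS, key=lambda x: expected_profit(s, x))

    def lookahead_policy(s, horizon=3):
        # Look-ahead: optimize a few periods into the future using the mean
        # demand, then keep only the first-period decision.
        mean_d = sum(DEMANDS) / len(DEMANDS)
        def value(s, t):
            if t == horizon:
                return 0.0
            return max(PRICE * min(s + x, mean_d) - COST * x
                       + value(max(0, round(s + x - mean_d)), t + 1)
                       for x in DECISIONS)
        return max(DECISIONS,
                   key=lambda x: PRICE * min(s + x, mean_d) - COST * x
                   + value(max(0, round(s + x - mean_d)), 1))

    def pfa_policy(s, theta=8):
        # Policy function approximation: a parameterized order-up-to rule;
        # theta would be tuned by policy search rather than fixed by hand.
        return max(0, theta - s)

    def vfa_policy(s, v_bar):
        # Value function approximation: act greedily with respect to an
        # approximate downstream value of the post-decision inventory s + x.
        return max(DECISIONS, key=lambda x: expected_profit(s, x) + v_bar(s + x))

    # Compare the decision each policy class makes from the same state s = 3.
    v_bar = lambda s_post: 2.0 * s_post - 0.15 * s_post ** 2  # stand-in fitted VFA
    for policy in (myopic_policy, lookahead_policy, pfa_policy,
                   lambda s: vfa_policy(s, v_bar)):
        print(policy(3))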
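
And here is a small, self-contained illustration of the basic knowledge-gradient calculation for independent normal beliefs, the state-free setting from which the book's physical-state extension starts. This is a sketch under stated assumptions, not the book's implementation, and the numbers in the example are made up.

    import math

    # mu[x] and sigma[x] are the mean and standard deviation of our belief
    # about the value of alternative x; sigma_w is the known standard
    # deviation of the measurement noise.

    def normal_pdf(z):
        return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

    def normal_cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def knowledge_gradient(mu, sigma, sigma_w):
        """Return the index of the alternative the KG policy measures next."""
        kg = []
        for x in range(len(mu)):
            # How much measuring x would shrink our uncertainty about x.
            sigma_tilde = sigma[x] ** 2 / math.sqrt(sigma[x] ** 2 + sigma_w ** 2)
            # Normalized distance to the best of the *other* alternatives.
            best_other = max(mu[i] for i in range(len(mu)) if i != x)
            zeta = -abs(mu[x] - best_other) / sigma_tilde
            # Expected improvement in the value of the final decision.
            kg.append(sigma_tilde * (zeta * normal_cdf(zeta) + normal_pdf(zeta)))
        return max(range(len(mu)), key=lambda x: kg[x])

    # Alternative 1 looks best, but 2 is uncertain enough to be worth probing.
    print(knowledge_gradient(mu=[1.0, 2.0, 1.8], sigma=[0.1, 0.2, 1.0], sigma_w=0.5))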

The presented coverage of ADP emphasizes models and algorithms, focusing on applications and computation while also discussing the theoretical side of the topic, including proofs of convergence and rates of convergence. A related website features an ongoing discussion of the evolving fields of approximate dynamic programming and reinforcement learning, along with additional readings, software, and datasets.

Requiring only a basic understanding of statistics and probability, Approximate Dynamic Programming, Second Edition is an excellent book for industrial engineering and operations research courses at the upper-undergraduate and graduate levels. It also serves as a valuable reference for researchers and professionals who utilize dynamic programming, stochastic programming, and control theory to solve problems in their everyday work.

About the author

WARREN B. POWELL, PhD, is Professor of Operations Research and Financial Engineering at Princeton University, where he is founder and Director of CASTLE Laboratory, a research unit that works with industrial partners to test new ideas found in operations research. The recipient of the 2004 INFORMS Fellow Award, Dr. Powell has authored more than 160 published articles on stochastic optimization, approximate dynamic programming, and dynamic resource management.

Kobo eReaders рмкрм░рм┐ e-ink рмбрм┐рмнрм╛рмЗрм╕рмЧрнБрмбрм╝рм┐рмХрм░рнЗ рмкрмврм╝рм┐рммрм╛ рмкрм╛рмЗрмБ, рмЖрмкрмгрмЩрнНрмХрнБ рмПрмХ рмлрм╛рмЗрм▓ рмбрм╛рмЙрмирм▓рнЛрмб рмХрм░рм┐ рмПрм╣рм╛рмХрнБ рмЖрмкрмгрмЩрнНрмХ рмбрм┐рмнрм╛рмЗрм╕рмХрнБ рмЯрнНрм░рм╛рмирнНрм╕рмлрм░ рмХрм░рм┐рммрм╛рмХрнБ рм╣рнЗрммред рм╕рморм░рнНрмерм┐рмд eReadersрмХрнБ рмлрм╛рмЗрм▓рмЧрнБрмбрм╝рм┐рмХ рмЯрнНрм░рм╛рмирнНрм╕рмлрм░ рмХрм░рм┐рммрм╛ рмкрм╛рмЗрмБ рм╕рм╣рм╛рнЯрмдрм╛ рмХрнЗрмирнНрмжрнНрм░рм░рнЗ рмерм┐рммрм╛ рм╕рммрм┐рм╢рнЗрм╖ рмирм┐рм░рнНрмжрнНрмжрнЗрм╢рм╛рммрм│рнАрмХрнБ рмЕрмирнБрм╕рм░рмг рмХрм░рмирнНрмдрнБред