OPERATIONS, TECHNOLOGY, AND INFORMATION MANAGEMENT


Nonstationary Reinforcement Learning: The Blessing of (More) Optimism
Management Science, 69, 10, October 2023 | LINK TO PAPER


RUIHAO ZHU, ASSISTANT PROFESSOR
Cornell Peter and Stephanie Nolan School of Hotel Administration
Cornell SC Johnson College of Business, Cornell University


Co-authors
• Ruihao Zhu, Assistant Professor, Cornell Peter and Stephanie Nolan School of Hotel Administration, Cornell SC Johnson College of Business, Cornell University
• Wang Chi Cheung, National University of Singapore
• David Simchi-Levi, Massachusetts Institute of Technology


Summary


Motivated by operations research applications such as inventory control and real-time bidding, Zhu et al. consider undiscounted reinforcement learning in Markov decision processes under model uncertainty and temporal drifts. In this setting, both the latent reward and state transition distributions are allowed to evolve over time, as long as their respective total variations do not exceed certain variation budgets. They first develop the sliding window upper confidence bound for reinforcement learning with confidence-widening (SWUCRL2-CW) algorithm and establish its dynamic regret bound when the variation budgets are known. Then they propose the bandit-over-reinforcement learning (BORL) algorithm to adaptively tune the SWUCRL2-CW algorithm and achieve the same dynamic regret bound in a parameter-free manner. Finally, they conduct numerical experiments showing that the proposed algorithms achieve superior empirical performance compared with existing algorithms.
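As a rough illustration of the sliding-window idea behind SWUCRL2-CW, the sketch below keeps only the most recent W interactions when estimating rewards and transitions, so stale samples from a drifted environment are discarded. This is a minimal, illustrative sketch rather than the authors' algorithm: the class name, the window length W, and the generic Hoeffding-style radius are placeholder assumptions.

```python
# Minimal sketch of sliding-window estimation (not the paper's exact algorithm).
from collections import deque

import numpy as np


class SlidingWindowEstimator:
    """Keeps only the last W transitions and builds empirical estimates from them."""

    def __init__(self, n_states: int, n_actions: int, window: int):
        self.n_states = n_states
        self.n_actions = n_actions
        self.buffer = deque(maxlen=window)  # stores (s, a, r, s_next) tuples

    def record(self, s: int, a: int, r: float, s_next: int) -> None:
        """Add one observed interaction; the deque silently drops data older than W steps."""
        self.buffer.append((s, a, r, s_next))

    def estimates(self, s: int, a: int, delta: float = 0.05):
        """Windowed mean reward, windowed transition distribution, and a generic
        Hoeffding-style confidence radius computed from windowed counts only."""
        visits = [(r, s_next) for (si, ai, r, s_next) in self.buffer if si == s and ai == a]
        n = max(len(visits), 1)
        r_hat = sum(r for r, _ in visits) / n
        p_hat = np.zeros(self.n_states)
        for _, s_next in visits:
            p_hat[s_next] += 1.0 / n
        # Placeholder optimism radius; the paper's exact constants and terms differ.
        radius = np.sqrt(2.0 * np.log(1.0 / delta) / n)
        return r_hat, p_hat, radius
```

An optimistic planner would then search, within the confidence region around these windowed estimates, for the model with the highest value; the confidence-widening step discussed next enlarges the transition region beyond this radius.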


Under nonstationarity, historical data samples may falsely indicate that state transitions rarely happen. This presents a significant challenge when one tries to apply the conventional optimism-in-the-face-of-uncertainty principle to achieve a low dynamic regret bound. They overcome this challenge by proposing a novel confidence-widening technique that incorporates additional optimism into their learning algorithms, demonstrating how one can leverage special structures on the state transition distributions to achieve an improved dynamic regret bound in time-varying demand environments.
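As a rough sketch of the widening step (the notation below follows standard UCRL-style analyses and is an assumption, not the paper's exact formulation), the confidence region for the transition distribution at a state-action pair is enlarged by an extra slack beyond the usual sampling radius:

\[
  \mathcal{P}_t(s,a) \;=\;
  \Bigl\{\, p \in \Delta(\mathcal{S}) :
  \bigl\lVert p - \hat{p}_t(s,a) \bigr\rVert_1
  \;\le\; \sqrt{\tfrac{C \log(T/\delta)}{N_t(s,a)}} + \eta \,\Bigr\},
\]

where \(\hat{p}_t(s,a)\) and \(N_t(s,a)\) are the empirical transition distribution and visit count computed from the sliding window, and the widening parameter \(\eta > 0\) injects the additional optimism that guards against windowed samples understating how much the environment has drifted.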

