A comparison of control strategies applied to a pricing problem in retail

by Asbjørn N. Riseth, Jeff N. Dewynne, Chris L. Farmer

Released as an article.

(2017)

Abstract

When sales of a product are affected by randomness in demand, retailers can use dynamic pricing strategies to maximise their profits. In this article, the pricing problem is formulated as a stochastic optimal control problem, where the optimal policy can be found by solving the associated Bellman equation. The aim is to investigate Approximate Dynamic Programming algorithms for this problem. For realistic retail applications, modelling the problem and solving it to optimality is intractable. Practitioners therefore make simplifying assumptions and design suboptimal policies, but a thorough investigation of the relative performance of these policies is lacking. To better understand such assumptions, we simulate the performance of two algorithms on a one-product system. It is found that for more than half of the realisations of the random disturbance, the often-used, but approximate, Certainty Equivalent Control policy yields larger profits than an optimal, maximum expected-value policy. This approximate algorithm, however, performs significantly worse in the remaining realisations, which can colloquially be interpreted as the retailer taking a more risk-seeking attitude. Another policy, Open-Loop Feedback Control, is shown to work well as a compromise between Certainty Equivalent Control and the optimal policy.
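The comparison in the abstract can be sketched on a toy one-product system. Everything below (the Bernoulli sale model, horizon, and price grid) is an illustrative assumption, not the model from the paper: backward induction solves the Bellman equation exactly, while the Certainty Equivalent Control (CEC) policy replaces the random sale by its mean, re-solves the resulting deterministic problem at each state, and applies the first price.

```python
T = 5              # number of selling periods (toy assumption)
N0 = 3             # initial stock (toy assumption)
PRICES = [1.0, 2.0, 3.0]

def sale_prob(p):
    # toy demand model: probability of selling one unit at price p
    return max(0.0, 1.0 - 0.25 * p)

# --- optimal policy: backward induction on the Bellman equation ---
V = [[0.0] * (N0 + 1) for _ in range(T + 1)]
for t in range(T - 1, -1, -1):
    for n in range(N0 + 1):
        best = 0.0
        for p in PRICES:
            q = sale_prob(p) if n > 0 else 0.0
            best = max(best, q * (p + V[t + 1][n - 1]) + (1 - q) * V[t + 1][n])
        V[t][n] = best

# --- CEC: solve the deterministic mean-demand problem from (t, n),
#     apply only the first price of that open-loop plan ---
def det_value(t, x):
    # fractional "expected" inventory evolves by the mean sale
    if t == T or x <= 0:
        return 0.0
    return max(p * min(sale_prob(p), x) + det_value(t + 1, x - min(sale_prob(p), x))
               for p in PRICES)

def cec_price(t, n):
    best_p, best_v = PRICES[0], -1.0
    for p in PRICES:
        s = min(sale_prob(p), n)
        v = p * s + det_value(t + 1, n - s)
        if v > best_v:
            best_p, best_v = p, v
    return best_p

# evaluate the CEC policy under the true stochastic dynamics
W = [[0.0] * (N0 + 1) for _ in range(T + 1)]
for t in range(T - 1, -1, -1):
    for n in range(1, N0 + 1):
        p = cec_price(t, n)
        q = sale_prob(p)
        W[t][n] = q * (p + W[t + 1][n - 1]) + (1 - q) * W[t + 1][n]

print("optimal expected profit:", V[0][N0])
print("CEC expected profit:    ", W[0][N0])
```

By construction the CEC policy cannot beat the Bellman solution in expectation, which matches the abstract's finding that CEC trades a lower expected value for profit distributions that win on most individual realisations.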

Archived Files and Locations

application/pdf  353.4 kB
web.archive.org (webarchive)
arxiv.org (repository)
Type  article
Stage   submitted
Date   2017-10-05
Version   v1
Language   en
arXiv  1710.02044v1