Computing policy parameters for stochastic inventory control using stochastic dynamic programming approaches

Files
PhD_Thesis_AndreaVisentin.pdf (1008.15 KB)
Full Text E-thesis
Date
2020-08-29
Authors
Visentin, Andrea
Publisher
University College Cork
Abstract
The objective of this work is to introduce techniques for the computation of optimal and near-optimal inventory control policy parameters for the stochastic inventory control problem under Scarf’s setting. A common aspect of the solutions presented herein is the use of stochastic dynamic programming, a mathematical optimisation technique introduced by Bellman. Stochastic dynamic programming is hybridised with branch-and-bound, binary search, constraint programming and other computational techniques to develop innovative and competitive solutions. In this work, the classic single-item, single-location inventory control problem with penalty cost under independent stochastic demand is extended to model a fixed review cost, charged whenever the inventory level is assessed at the beginning of a period. This operation is costly in practice, and modelling it explicitly can lead to significant savings. The extension also makes it possible to model an order cancellation penalty charge.

The first contribution presented here is the first stochastic dynamic programming formulation that captures Bookbinder and Tan’s static-dynamic uncertainty control policy with penalty cost. Numerous techniques are available in the literature to compute such parameters; however, they all make assumptions about the demand probability distribution. This technique has many similarities to Scarf’s stochastic dynamic programming formulation and does not require any external solver. Memoisation and binary search techniques are deployed to improve computational performance. Extensive computational studies show that this new model has a tighter optimality gap than the state of the art.

The second contribution is the first procedure to compute cost-optimal parameters for the well-known (R, s, S) policy. Practitioners widely use this policy; however, the determination of its parameters is considered computationally prohibitive. A technique that hybridises stochastic dynamic programming and branch-and-bound is presented, alongside computational enhancements. Computing the optimal policy allows the determination of optimality gaps for future heuristics. The approach can solve instances of considerable size, making it usable by practitioners, and the computational study shows the cost reduction that such a system can provide.

Thirdly, this work presents the first heuristics for determining near-optimal parameters for the (R, s, S) policy. The first is an algorithm that formally models the (R, s, S) policy computation as a functional equation. The second is a heuristic formed by hybridising (R, S) and (s, S) policy parameter solvers. These heuristics can compute near-optimal parameters in a fraction of the time required by the exact methods, and they can be used to speed up the optimal branch-and-bound technique.
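To make the flavour of these formulations concrete, the sketch below is a minimal, illustrative memoised stochastic dynamic programming recursion for a single-item, periodic-review problem with fixed ordering, holding and penalty costs under discrete demand. It is not the thesis’s algorithm: the horizon, cost parameters, demand distributions and order-quantity cap are all assumptions chosen for the example, and in this form the recursion returns an expected cost rather than policy parameters.

```python
# Minimal illustrative sketch (assumed data, not the thesis's implementation):
# backward stochastic dynamic programming with memoisation for a single-item,
# periodic-review problem with fixed ordering cost K, holding cost h and
# penalty (shortage) cost p under discrete stochastic demand.
from functools import lru_cache

T = 4                                  # planning horizon in periods
K, h, p = 100.0, 1.0, 10.0             # ordering, holding and penalty costs
demand = {0: [(10, 0.5), (20, 0.5)],   # per-period demand: (value, probability)
          1: [(20, 0.5), (30, 0.5)],
          2: [(10, 0.5), (30, 0.5)],
          3: [(20, 0.5), (40, 0.5)]}
MAX_ORDER = 60                         # cap on the order quantity (assumption)

@lru_cache(maxsize=None)
def expected_cost(t, inventory):
    """Minimum expected cost from period t onward, given opening inventory."""
    if t == T:
        return 0.0
    best = float("inf")
    for q in range(MAX_ORDER + 1):     # enumerate candidate order quantities
        cost = K if q > 0 else 0.0
        for d, prob in demand[t]:
            closing = inventory + q - d
            stage = h * max(closing, 0) + p * max(-closing, 0)
            cost += prob * (stage + expected_cost(t + 1, closing))
        best = min(best, cost)
    return best

print(round(expected_cost(0, 0), 2))   # expected cost of an optimal plan
```

In the thesis, recursions of this kind are combined with binary search and branch-and-bound so that policy parameters such as (R, s, S), rather than a single cost figure, are recovered efficiently.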
The last contribution is a technique for encoding dynamic programming in constraint programming. Constraint programming provides the user with an expressive modelling language and delegates the search for a solution to a dedicated solver. The possibility of seamlessly encoding dynamic programming provides new modelling options, e.g. the computation of optimal (R, s, S) policy parameters. The performance in this specific application is not competitive with the other techniques proposed herein; however, the encoding opens up new connections between constraint programming and dynamic programming. It allows DP-based constraints to be deployed in modelling languages such as MiniZinc, and the computational study shows how this technique can outperform a similar encoding for mixed-integer programming.
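As a rough illustration of what encoding a dynamic programme in a constraint model can look like, the sketch below poses a small deterministic staged recursion as a constraint program using Google OR-Tools CP-SAT. The solver choice, the element-constraint encoding and all data are assumptions made for the example; the thesis’s encoding targets MiniZinc and addresses the stochastic inventory models above.

```python
# Illustrative sketch only (assumed toy data, not the thesis's encoding):
# a small deterministic staged DP expressed as a constraint program with
# Google OR-Tools CP-SAT, where element constraints play the role of the
# DP cost and transition tables.
from ortools.sat.python import cp_model

T = 4                       # number of stages (periods)
N = 5                       # number of states per stage
# cost[s][a]: cost of action a in state s; nxt[s][a]: successor state (toy data)
cost = [[(s + a) % 7 + 1 for a in range(N)] for s in range(N)]
nxt = [[(s + a) % N for a in range(N)] for s in range(N)]

model = cp_model.CpModel()
state = [model.NewIntVar(0, N - 1, f"state_{t}") for t in range(T + 1)]
action = [model.NewIntVar(0, N - 1, f"action_{t}") for t in range(T)]
stage_cost = [model.NewIntVar(0, 7, f"cost_{t}") for t in range(T)]

model.Add(state[0] == 0)    # initial state

for t in range(T):
    # Flatten the 2-D tables so a single element constraint can index them.
    idx = model.NewIntVar(0, N * N - 1, f"idx_{t}")
    model.Add(idx == state[t] * N + action[t])
    model.AddElement(idx, [c for row in cost for c in row], stage_cost[t])
    model.AddElement(idx, [s for row in nxt for s in row], state[t + 1])

model.Minimize(sum(stage_cost))

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print("optimal cost:", solver.ObjectiveValue())
    print("actions:", [solver.Value(a) for a in action])
```

Because the cost and transition tables are expressed as constraints rather than evaluated recursively, the constraint solver searches over state trajectories directly, which is the essence of embedding a dynamic programme in a constraint model.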
Keywords
Inventory control, Dynamic programming, Stochastic programming, Stochastic lot sizing
Citation
Visentin, A. 2020. Computing policy parameters for stochastic inventory control using stochastic dynamic programming approaches. PhD Thesis, University College Cork.