TY - CHAP
T1 - Analysis of a Stochastic Lot Scheduling Problem with Strict Due-Dates
AU - van Foreest, Nicolaas
AU - Wijngaard, Jacob
PY - 2017
Y1 - 2017
N2 - This chapter considers admission control and scheduling rules for a single-machine production environment. Orders arrive at a single machine and can be grouped into several product families. Each order has a family-dependent due-date, production duration, and reward. When an order cannot be served before its due-date, it has to be rejected. Moreover, when the machine switches production from one family to another, a setup time is incurred. The problem is to find long-run average optimal policies that accept or reject orders and schedule the accepted orders. To obtain insight into the optimal performance of the system, we model it as a Markov decision process (MDP). This formal description serves, at least, three tangible goals. First, for small-scale problems the optimal admission and scheduling policy can be obtained with, e.g., policy iteration. Second, simple heuristic policies can be formulated in terms of the concepts developed for the MDP, i.e., the states, actions, and (action-dependent) transition matrices. Finally, the simulator required to study the performance of heuristic policies for large-scale problems can be implemented directly as an MDP. Thus, the formal description of the system as an MDP has considerable spin-off beyond the mere numerical aspects of solving the MDP for small-scale systems.
AB - This chapter considers admission control and scheduling rules for a single-machine production environment. Orders arrive at a single machine and can be grouped into several product families. Each order has a family-dependent due-date, production duration, and reward. When an order cannot be served before its due-date, it has to be rejected. Moreover, when the machine switches production from one family to another, a setup time is incurred. The problem is to find long-run average optimal policies that accept or reject orders and schedule the accepted orders. To obtain insight into the optimal performance of the system, we model it as a Markov decision process (MDP). This formal description serves, at least, three tangible goals. First, for small-scale problems the optimal admission and scheduling policy can be obtained with, e.g., policy iteration. Second, simple heuristic policies can be formulated in terms of the concepts developed for the MDP, i.e., the states, actions, and (action-dependent) transition matrices. Finally, the simulator required to study the performance of heuristic policies for large-scale problems can be implemented directly as an MDP. Thus, the formal description of the system as an MDP has considerable spin-off beyond the mere numerical aspects of solving the MDP for small-scale systems.
U2 - 10.1007/978-3-319-47766-4_15
DO - 10.1007/978-3-319-47766-4_15
M3 - Chapter
SN - 978-3-319-47764-0
T3 - International Series in Operations Research and Management Science
SP - 429
EP - 444
BT - Markov Decision Processes in Practice
A2 - Boucherie, Richard
A2 - van Dijk, Nico M.
PB - Springer
CY - Cham
ER -