Dynamically changing sequencing rules with reinforcement learning in a job shop system with stochastic influences

Publication: Contributions to collected editions › Article in conference proceedings › Research › peer-reviewed


Sequencing operations can be difficult, especially under uncertain conditions. Applying decentralized sequencing rules has been a viable option; however, no single rule outperforms all others under varying system conditions. For this reason, reinforcement learning (RL) is used as a hyper-heuristic that selects a sequencing rule based on the current system status. Using multiple training scenarios with stochastic influences, such as varying inter-arrival times or customers changing the product mix, the advantages of RL are demonstrated. For evaluation, the trained agents are deployed in a generic manufacturing system. The best trained agent is able to dynamically adjust sequencing rules based on system performance, thereby matching and outperforming the presumed best static sequencing rules by approximately 3%. When the trained policy is applied to an unknown scenario, the RL hyper-heuristic is still able to change the sequencing rule according to the system status, thereby providing robust performance.
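The core idea of the abstract, an agent that picks a dispatching rule per decision point based on the observed system status, can be sketched with a tabular Q-learning agent. This is an illustrative sketch only, not the paper's implementation: the rule pool (FIFO, SPT, EDD), the state encoding as coarse queue/utilization buckets, and all hyperparameters are assumptions for the example.

```python
import random
from collections import defaultdict

# Hypothetical pool of sequencing rules (the paper's exact rule set is not listed here).
# Each rule orders a queue of jobs given as (job_id, processing_time, due_date) tuples.
RULES = {
    "FIFO": lambda queue: list(queue),                          # first in, first out
    "SPT":  lambda queue: sorted(queue, key=lambda j: j[1]),    # shortest processing time
    "EDD":  lambda queue: sorted(queue, key=lambda j: j[2]),    # earliest due date
}

class RuleSelectionAgent:
    """Tabular Q-learning hyper-heuristic: picks one sequencing rule per decision point."""

    def __init__(self, actions=tuple(RULES), alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # maps (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select_rule(self, state):
        # Epsilon-greedy choice over the rule pool.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # One-step Q-learning update toward reward + discounted best next value.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

# Usage: the system status is discretized into a coarse descriptor, e.g.
# (queue-length bucket, utilization bucket), and the chosen rule orders the queue.
agent = RuleSelectionAgent()
rule = agent.select_rule(state=("queue_high", "util_high"))
ordered = RULES[rule]([(1, 5.0, 20.0), (2, 2.0, 15.0), (3, 8.0, 10.0)])
```

In a simulation loop, the reward would come from a system-performance signal (e.g. negative mean tardiness over the next interval), so the agent learns which rule fits which system state.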
Translated title: Dynamic selection of sequencing rules with reinforcement learning in a job shop system with stochastic influences
Title: Proceedings of the 2020 Winter Simulation Conference, WSC 2020
Editors: K.-H. Bae, B. Feng, S. Kim, S. Lazarova-Molnar, Z. Zheng, T. Roeder, R. Thiesing
Number of pages: 11
Publisher: IEEE - Institute of Electrical and Electronics Engineers Inc.
Pages: 1608-1618
ISBN (electronic): 978-1-7281-9499-8
Publication status: Published - 14.12.2020
Event: Winter Simulation Conference 2020: Simulation Drives Innovation - Orlando, United States
Duration: 14.12.2020 - 18.12.2020