Dynamically changing sequencing rules with reinforcement learning in a job shop system with stochastic influences

Research output: Contributions to collected editions/works › Article in conference proceedings › Research › peer-review


Sequencing operations can be difficult, especially under uncertain conditions. Applying decentralized sequencing rules has been a viable option; however, no single rule exists that outperforms all others across varying system conditions. For this reason, reinforcement learning (RL) is used as a hyperheuristic to select a sequencing rule based on the current system status. The advantages of RL are demonstrated across multiple training scenarios that consider stochastic influences, such as varying interarrival times or customers changing the product mix. For evaluation, the trained agents are deployed in a generic manufacturing system. The best trained agent is able to dynamically adjust the sequencing rule to the system status, thereby matching and outperforming the presumed best static sequencing rules by roughly 3%. Even when the trained policy is applied to an unknown scenario, the RL heuristic is still able to change the sequencing rule according to the system status, thereby providing robust performance.
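The record does not include the authors' implementation; as a rough illustration of the mechanism the abstract describes, the following Python sketch shows how a tabular Q-learning agent could act as a hyperheuristic that picks one of several decentralized sequencing rules from a discretized system status. The rule set (FIFO, SPT, EDD), the state features, the reward handling, and all names are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (not the paper's implementation): a tabular Q-learning
    # hyperheuristic that selects a sequencing rule per decision point.
    import random
    from collections import defaultdict

    ACTIONS = ["FIFO", "SPT", "EDD"]  # assumed candidate sequencing rules

    def apply_rule(rule, queue):
        """Order waiting jobs according to the selected sequencing rule."""
        if rule == "FIFO":
            return sorted(queue, key=lambda j: j["arrival"])
        if rule == "SPT":  # shortest processing time first
            return sorted(queue, key=lambda j: j["proc_time"])
        return sorted(queue, key=lambda j: j["due_date"])  # EDD

    def observe_state(queue):
        """Discretize the system status, e.g. queue length and urgency."""
        n = min(len(queue), 9)
        urgent = min(sum(1 for j in queue if j["due_date"] < 10), 9)
        return (n, urgent)

    q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

    def select_rule(state, eps=0.1):
        """Epsilon-greedy selection over the sequencing rules."""
        if random.random() < eps:
            return random.choice(ACTIONS)
        return max(q_table[state], key=q_table[state].get)

    def update(state, rule, reward, next_state, alpha=0.1, gamma=0.95):
        """One-step Q-learning update after observing the reward."""
        best_next = max(q_table[next_state].values())
        q_table[state][rule] += alpha * (reward + gamma * best_next
                                         - q_table[state][rule])

    # Example decision point at a machine with three waiting jobs:
    queue = [{"arrival": 1, "proc_time": 5, "due_date": 8},
             {"arrival": 2, "proc_time": 2, "due_date": 20},
             {"arrival": 3, "proc_time": 7, "due_date": 6}]
    state = observe_state(queue)
    ordered = apply_rule(select_rule(state), queue)

In such a setup the reward would typically come from a system performance measure observed in the simulation (e.g. tardiness or throughput between decision points); the discretization above is only one possible way to encode the system status.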
Translated title of the contribution: Dynamische Auswahl von Reihenfolgeregeln mit bestärkendem Lernen in einer Werkstattfertigung mit stochastischen Einflüssen
Original language: English
Title of host publication: Proceedings of the 2020 Winter Simulation Conference, WSC 2020
Editors: K.-H. Bae, B. Feng, S. Kim, S. Lazarova-Molnar, Z. Zheng, T. Roeder, R. Thiesing
Number of pages: 11
Publisher: IEEE - Institute of Electrical and Electronics Engineers Inc.
Publication date: 14.12.2020
Pages: 1608-1618
Article number: 9383903
ISBN (Electronic): 978-1-7281-9499-8
DOIs
Publication status: Published - 14.12.2020
Event: Winter Simulation Conference - WSC 2020: Simulation Drives Innovation - Orlando, United States
Duration: 14.12.2020 - 18.12.2020
http://meetings2.informs.org/wordpress/wsc2020/