Einsatz von bestärkendem Lernen in der Reihenfolgeplanung mit dem Ziel der platzeffizienten Produktion

Research output: Contributions to collected editions/works › Article in conference proceedings › Research › peer-review

Standard

Einsatz von bestärkendem Lernen in der Reihenfolgeplanung mit dem Ziel der platzeffizienten Produktion. / Müller, Kristin; Heger, Jens.
Simulation in Produktion und Logistik 2025. ed. / Sebastian Rank; Mathias Kühn; Thorsten Schmidt. Dresden: Dresden University of Technology, 2025. 40 (ASIM-Mitteilung; No. 194), (Tagungsband ASIM-Fachtagung Simulation in Produktion und Logistik; Vol. 21).


Harvard

Müller, K & Heger, J 2025, Einsatz von bestärkendem Lernen in der Reihenfolgeplanung mit dem Ziel der platzeffizienten Produktion. in S Rank, M Kühn & T Schmidt (eds), Simulation in Produktion und Logistik 2025., 40, ASIM-Mitteilung, no. 194, Tagungsband ASIM-Fachtagung Simulation in Produktion und Logistik, vol. 21, Dresden University of Technology, Dresden, 21. ASIM-Fachtagung Simulation in Produktion und Logistik, Dresden, Germany, 24.09.25. https://doi.org/10.25368/2025.273, https://doi.org/10.25368/2025.233

APA

Müller, K., & Heger, J. (2025). Einsatz von bestärkendem Lernen in der Reihenfolgeplanung mit dem Ziel der platzeffizienten Produktion. In S. Rank, M. Kühn, & T. Schmidt (Eds.), Simulation in Produktion und Logistik 2025 Article 40 (ASIM-Mitteilung; No. 194), (Tagungsband ASIM-Fachtagung Simulation in Produktion und Logistik; Vol. 21). Dresden University of Technology. https://doi.org/10.25368/2025.273, https://doi.org/10.25368/2025.233

Vancouver

Müller K, Heger J. Einsatz von bestärkendem Lernen in der Reihenfolgeplanung mit dem Ziel der platzeffizienten Produktion. In Rank S, Kühn M, Schmidt T, editors, Simulation in Produktion und Logistik 2025. Dresden: Dresden University of Technology. 2025. 40. (ASIM-Mitteilung; 194). (Tagungsband ASIM-Fachtagung Simulation in Produktion und Logistik). doi: 10.25368/2025.273, 10.25368/2025.233

Bibtex

@inbook{3e8c1bdd43104b9eb421dc3ab545989a,
title = "Einsatz von best{\"a}rkendem Lernen in der Reihenfolgeplanung mit dem Ziel der platzeffizienten Produktion",
abstract = "Priority rules are often used in production planning and control for sequence planning of production orders to optimise production efficiency based on key figures such as order throughput time, machine utilisation or production output. Compared to priority rules, reinforcement learning agents are adaptive and can adjust to dynamic production conditions. This offers enormous potential for optimising production control. This study shows the successful implementation of a reinforcement learning agent that is trained with the Proximal Policy Optimisation algorithm. The agent is used to adjust the sequence of transport orders with the aim of achieving space-efficient production with short lead times. The results of the simulation study show an improvement in lead time and space efficiency compared to conventional priority rules.",
keywords = "Ingenieurwissenschaften",
author = "Kristin M{\"u}ller and Jens Heger",
year = "2025",
doi = "10.25368/2025.273",
language = "Deutsch",
isbn = "978-3-86780-806-4",
series = "ASIM-Mitteilung",
publisher = "Dresden University of Technology",
number = "194",
editor = "Sebastian Rank and Mathias K{\"u}hn and Thorsten Schmidt",
booktitle = "Simulation in Produktion und Logistik 2025",
address = "Deutschland",
note = "21. ASIM-Fachtagung Simulation in Produktion und Logistik ; Conference date: 24-09-2025 Through 25-09-2025",

}

RIS

TY - CHAP

T1 - Einsatz von bestärkendem Lernen in der Reihenfolgeplanung mit dem Ziel der platzeffizienten Produktion

AU - Müller, Kristin

AU - Heger, Jens

N1 - Conference code: 21

PY - 2025

Y1 - 2025

N2 - Priority rules are often used in production planning and control for sequence planning of production orders to optimise production efficiency based on key figures such as order throughput time, machine utilisation or production output. Compared to priority rules, reinforcement learning agents are adaptive and can adjust to dynamic production conditions. This offers enormous potential for optimising production control. This study shows the successful implementation of a reinforcement learning agent that is trained with the Proximal Policy Optimisation algorithm. The agent is used to adjust the sequence of transport orders with the aim of achieving space-efficient production with short lead times. The results of the simulation study show an improvement in lead time and space efficiency compared to conventional priority rules.

AB - Priority rules are often used in production planning and control for sequence planning of production orders to optimise production efficiency based on key figures such as order throughput time, machine utilisation or production output. Compared to priority rules, reinforcement learning agents are adaptive and can adjust to dynamic production conditions. This offers enormous potential for optimising production control. This study shows the successful implementation of a reinforcement learning agent that is trained with the Proximal Policy Optimisation algorithm. The agent is used to adjust the sequence of transport orders with the aim of achieving space-efficient production with short lead times. The results of the simulation study show an improvement in lead time and space efficiency compared to conventional priority rules.

KW - Ingenieurwissenschaften

UR - https://d-nb.info/1378976436/34

U2 - 10.25368/2025.273

DO - 10.25368/2025.273

M3 - Article in conference proceedings

SN - 978-3-86780-806-4

T3 - ASIM-Mitteilung

BT - Simulation in Produktion und Logistik 2025

A2 - Rank, Sebastian

A2 - Kühn, Mathias

A2 - Schmidt, Thorsten

PB - Dresden University of Technology

CY - Dresden

T2 - 21. ASIM-Fachtagung Simulation in Produktion und Logistik

Y2 - 24 September 2025 through 25 September 2025

ER -
