Adaptive and Dynamic Feedback Loops between Production System and Production Network based on the Asset Administration Shell

Research output: Journal contributions › Conference article in journal › Research › peer-review

Authors

  • Florian Stamer
  • Simon Maier
  • Sina Peukert
  • Gisela Lanza

In production networks, production must run efficiently across company boundaries, and companies must be able to react quickly as a single unit. Two trends influence this situation. On the one hand, ongoing servitization is leading to an increased offering of digital services in manufacturing; from the literature it is known that digital services let manufacturers, suppliers, and industrial customers interact more closely and more frequently within a production network. On the other hand, the concept of the digital twin is trending: it promises real-time prognosis and control of production systems. Although the digital twin concept itself can be vague, several technologies attempt to implement a digital twin of production. The asset administration shell (AAS) is one such technology and is drawing growing attention. Building on this initial situation, these two trends can be combined to create a feedback loop between the production system and the production network, and thus improve the overall efficiency of production networks. Based on this idea, the paper first presents an approach for systematically modelling such a feedback loop, oriented towards the business model concept. Second, a reference architecture is derived from the RAMI 4.0 standard; this reference architecture forms the basis for specific implementations. Third, a procedure is developed to implement a specific architecture, assuming the use of the asset administration shell. Finally, the approach is validated in a use case from the high-precision weighing industry.
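To illustrate the core idea of a system-to-network feedback loop, the sketch below shows in plain Python how live data exposed through an AAS-style submodel could trigger a decision at the network level. It is a minimal conceptual sketch only: all class names, property keys, and the utilization threshold are illustrative assumptions and are not taken from the paper's implementation or from any AAS SDK.

    # Hypothetical sketch: a production-system submodel (AAS-style) feeding a
    # network-level feedback loop. Names and thresholds are illustrative only.

    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class ProductionSubmodel:
        """Simplified stand-in for an AAS submodel exposing live system KPIs."""
        station_id: str
        properties: Dict[str, float] = field(default_factory=dict)

        def read(self, key: str) -> float:
            # Return a KPI value, defaulting to 0.0 if the property is absent.
            return self.properties.get(key, 0.0)


    @dataclass
    class NetworkPlanner:
        """Toy network-level consumer of system-level feedback."""
        reallocations: List[str] = field(default_factory=list)

        def react(self, station_id: str, utilization: float, threshold: float = 0.9) -> None:
            # If a station reports overload, the network reallocates orders.
            if utilization > threshold:
                self.reallocations.append(f"shift orders away from {station_id}")


    def feedback_loop(submodels: List[ProductionSubmodel], planner: NetworkPlanner) -> None:
        """One pass of the system-to-network feedback loop."""
        for sm in submodels:
            planner.react(sm.station_id, sm.read("utilization"))


    if __name__ == "__main__":
        stations = [
            ProductionSubmodel("press-01", {"utilization": 0.95}),
            ProductionSubmodel("mill-02", {"utilization": 0.60}),
        ]
        planner = NetworkPlanner()
        feedback_loop(stations, planner)
        print(planner.reallocations)  # ['shift orders away from press-01']

In the paper's setting, the submodel data would come from an actual asset administration shell and the planner would be a network-level planning function; the sketch only mirrors the direction of the information flow described in the abstract.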

Original language: English
Journal: Procedia CIRP
Volume: 112
Pages (from-to): 79-84
Number of pages: 6
ISSN: 2212-8271
DOIs
Publication status: Published - 2022
Externally published: Yes
Event: 15th CIRP Conference on Intelligent Computation in Manufacturing Engineering - ICME 2021 - Naples, Italy
Duration: 14.07.2021 - 16.07.2021
Conference number: 15

Bibliographical note

Publisher Copyright:
© 2022 The Author(s).

Research areas

  • Asset Administration Shell, Dynamic Interaction, IT Architecture, Production Network, Servitization
  • Engineering
