Luis Felipe Casas Murillo
Master's Thesis: "Experimental Digital Twin for Job Shops with Transportation Agents"
Project Type
Reinforcement Learning, Experimental Digital Twin, Job Shop Scheduling Problem with Transportation (JSPT)
Date
December 2022
Location
Aachen, Germany
Publication
Learning and Intelligent Optimization. LION 2023
A Reinforcement Learning Simulation Environment for Optimizing Flexible Assembly Shops with Multiple Agents.
Abstract: Factories today face high product customization, increasing product complexity, and the need for shorter lead times. Consequently, companies require Flexible Assembly Systems (FAS) to remain competitive in their markets. Within an FAS, shop floor layout planning and job scheduling must be performed repeatedly to assemble products efficiently. Both the Job Shop Scheduling Problem (JSSP) and layout planning are NP-hard combinatorial optimization problems, so optimal solutions cannot be found in a reasonable amount of time. Reinforcement Learning (RL) has been used successfully to partially solve such problems, and RL methods have outperformed the alternative methods they have been compared with. However, the training environments used in current state-of-the-art applications portray a static representation of today's FAS. In this work, a highly configurable environment is proposed for optimizing FAS with multiple RL agents, so that agents can be trained rapidly across different FAS configurations, i.e., different layouts and Job Shop Problem (JSP) instances. The environment was created using the Unity simulator and the Unity ML-Agents Toolkit. An RL approach was then implemented in the environment and trained on multiple instances of different FAS configurations. The trained agents were evaluated for scalability with respect to the number of agents, the number of jobs, and the size of the JSP instance, as well as for performance across different shop floor layouts. Additionally, Priority Dispatching Rules (PDR) were integrated into the environment and compared with the trained RL agents; the agents outperformed every PDR they were compared with.
Finally, the environment's visualization methods allowed for the analysis of agent behaviors, where the agents demonstrated noticeable preferences in actions while assembling the products.
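To give a concrete sense of the Priority Dispatching Rule baselines mentioned in the abstract, below is a minimal sketch of one classic PDR, Shortest Processing Time (SPT), applied greedily to a toy JSSP instance. The function name and data layout are illustrative assumptions, not code from the thesis, and the thesis environment simulates dispatching far more faithfully (in Unity, with transportation agents).

```python
def spt_makespan(jobs):
    """Greedy Shortest Processing Time dispatching on a JSSP instance.

    jobs: list of jobs, each a list of (machine, duration) operations
    that must run in order. Returns the resulting makespan.
    Note: this is a simplified one-op-at-a-time greedy, intended only
    to illustrate the idea of a dispatching rule.
    """
    n_machines = max(m for job in jobs for m, _ in job) + 1
    machine_free = [0] * n_machines   # time each machine becomes idle
    job_free = [0] * len(jobs)        # time each job's last op finishes
    next_op = [0] * len(jobs)         # index of each job's next operation

    while any(i < len(job) for i, job in zip(next_op, jobs)):
        # among jobs with remaining operations, dispatch the one whose
        # next operation has the shortest processing time
        ready = [j for j in range(len(jobs)) if next_op[j] < len(jobs[j])]
        j = min(ready, key=lambda k: jobs[k][next_op[k]][1])
        machine, duration = jobs[j][next_op[j]]
        start = max(machine_free[machine], job_free[j])  # respect both resources
        machine_free[machine] = job_free[j] = start + duration
        next_op[j] += 1

    return max(job_free)


# Toy instance: 2 jobs, 2 machines.
toy = [[(0, 3), (1, 2)],   # job 0: machine 0 for 3, then machine 1 for 2
       [(1, 2), (0, 2)]]   # job 1: machine 1 for 2, then machine 0 for 2
print(spt_makespan(toy))   # → 9
```

Other common PDRs (e.g., Longest Processing Time, Most Work Remaining) differ only in the `key` used to pick the next operation, which is what makes them convenient baselines for comparison against learned policies.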
Publication:
Gannouni, A., Casas Murillo, L.F., Kemmerling, M., Abdelrazeq, A., Schmitt, R.H. (2023). Experimental Digital Twin for Job Shops with Transportation Agents. In: Sellmann, M., Tierney, K. (eds) Learning and Intelligent Optimization. LION 2023. Lecture Notes in Computer Science, vol 14286. Springer, Cham. https://doi.org/10.1007/978-3-031-44505-7_25