Please use this identifier to cite or link to this item: http://localhost:8081/jspui/handle/123456789/19961
Title: LEARNING COLLABORATIVE BEHAVIOR IN CONSTRAINED MULTI-AGENT ENVIRONMENT
Authors: Hasan, Maram
Issue Date: Nov-2024
Publisher: IIT Roorkee
Abstract: Multi-agent reinforcement learning (MARL) has become a foundational framework for solving complex real-world problems that involve coordinated, collaborative behavior among multiple agents. This thesis advances the understanding and application of MARL by tackling critical challenges such as sparse rewards, limited communication, and operational and spatial constraints across diverse settings. The research begins by examining the essential issue of reward specification and its pivotal role in encouraging cooperative behavior among agents. By rigorously evaluating various reward structures with state-of-the-art MARL algorithms, the study offers insights into optimizing learning processes under sparse-reward conditions. Building on this foundation, the thesis introduces a novel reinforcement learning framework designed for multi-agent coordination in logistics and supply chain environments, particularly within warehouses. By combining hierarchical approaches with curiosity-driven intrinsic learning, this framework promotes implicit coordination among agents with diverse skill sets, leading to improved task performance compared to state-of-the-art MARL algorithms, even in the absence of communication. Furthermore, the thesis addresses the significant challenge of enhancing order-fulfillment efficiency in Robotic Mobile Fulfillment Systems (RMFS). It presents a hierarchical reinforcement learning framework with a novel exploration mechanism that navigates operational constraints and sparse rewards while enabling robust collaboration among robots. Empirical evaluations demonstrate substantial improvements in operational efficiency and order completion rates compared to established methods. Finally, the research proposes a strategy for exploration and coverage in complex environments.
Leveraging a multi-agent architecture that employs enriched state representations and prioritized experience replay, this strategy ensures comprehensive environmental coverage and efficient navigation, highlighting the potential of MARL in addressing exploratory tasks in new settings. In conclusion, this thesis significantly contributes to the field of reinforcement learning by developing comprehensive frameworks and strategies that enhance multi-agent coordination and collective operational efficiency. The findings have far-reaching implications for practical applications in logistics, robotics, and beyond, establishing MARL as a transformative approach in complex multi-agent systems.
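For readers unfamiliar with the curiosity-driven intrinsic learning the abstract refers to, a common formulation augments the sparse task reward with a bonus proportional to a forward model's prediction error, so agents still receive a learning signal when the extrinsic reward is zero. The sketch below illustrates this general idea only; the function names, the squared-error form, and the `beta` weight are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def curiosity_bonus(predicted_next_state, actual_next_state, beta=0.5):
    # Intrinsic reward as forward-model prediction error: states the
    # agent predicts poorly (i.e., has explored little) yield larger bonuses.
    error = np.sum((predicted_next_state - actual_next_state) ** 2)
    return beta * error

def shaped_reward(extrinsic, predicted_next_state, actual_next_state, beta=0.5):
    # Sparse extrinsic reward augmented with the curiosity bonus, so
    # exploration is rewarded even on steps where the task reward is zero.
    return extrinsic + curiosity_bonus(predicted_next_state, actual_next_state, beta)
```

As the forward model improves on familiar states, the bonus there decays, pushing agents toward unvisited regions.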
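The prioritized experience replay mentioned above samples stored transitions in proportion to how surprising they were (typically |TD error|^alpha) rather than uniformly. A minimal sketch of proportional prioritization, assuming precomputed TD errors; parameter names and the fixed seed are illustrative, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prioritized(td_errors, batch_size, alpha=0.6):
    # Proportional prioritized replay: transitions with larger absolute
    # TD error are replayed more often; alpha=0 recovers uniform sampling.
    priorities = np.abs(td_errors) ** alpha
    probs = priorities / priorities.sum()
    return rng.choice(len(td_errors), size=batch_size, p=probs)
```

Full implementations also apply importance-sampling weights to correct the bias this non-uniform sampling introduces.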
URI: http://localhost:8081/jspui/handle/123456789/19961
Research Supervisor/Guide: Niyogi, Rajdeep
metadata.dc.type: Thesis
Appears in Collections:DOCTORAL THESES (CSE)

Files in This Item:
File: 18911004_MARAM HASAN.pdf
Size: 6.02 MB
Format: Adobe PDF
