Please use this identifier to cite or link to this item:
http://localhost:8081/jspui/handle/123456789/18555

| Title: | LEARNING TO EXPLORE: A DEEP REINFORCEMENT APPROACH FOR AUTONOMOUS ROBOT DISCOVERY |
| Authors: | Pawar, Rohit Singh |
| Issue Date: | Jun-2024 |
| Publisher: | IIT, Roorkee |
| Abstract: | The capacity to explore autonomously is essential for mobile robots operating in unfamiliar environments. Conventional exploration strategies, such as frontier-based techniques, rely on geometric representations of the environment, which can be difficult to derive precisely from sensor data. This thesis presents a goal-driven exploration method based on deep reinforcement learning that enables a robot to explore an unknown environment on its own without building explicit maps. The central idea is to train a deep neural network policy that maps robot sensor observations directly to control commands that drive the robot toward frontier regions. The policy is learned by reinforcement learning with a reward function configured to promote travel into new areas while penalizing collisions and returns to previously visited locations; the reward combines terms for distance covered, the area newly revealed to the robot's sensors, and obstacle avoidance. The approach is evaluated in simulated environments of varying size and complexity, where the learned exploration policy is shown to navigate most of the unknown settings effectively. Because it does not depend on geometric mapping as frontier-based techniques do, the deep reinforcement learning strategy outperforms them. |
| URI: | http://localhost:8081/jspui/handle/123456789/18555 |
| Research Supervisor/ Guide: | Kumar, Neetesh |
| metadata.dc.type: | Dissertations |
| Appears in Collections: | MASTERS' THESES (CSE) |
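The reward shaping described in the abstract (terms for distance covered, newly revealed area, obstacle avoidance, and penalties for revisiting explored locations) might be sketched as below. The function name, weights, and penalty values are illustrative assumptions, not taken from the thesis:

```python
def exploration_reward(distance_travelled: float,
                       newly_observed_area: float,
                       collided: bool,
                       revisited: bool,
                       w_dist: float = 0.1,
                       w_area: float = 1.0,
                       collision_penalty: float = -10.0,
                       revisit_penalty: float = -0.5) -> float:
    """Illustrative per-step reward for goal-driven exploration.

    Positive terms encourage covering distance and revealing new area
    to the robot's sensors; negative terms discourage collisions and
    returning to previously visited locations. All weights are
    hypothetical placeholders.
    """
    reward = w_dist * distance_travelled + w_area * newly_observed_area
    if collided:
        reward += collision_penalty
    if revisited:
        reward += revisit_penalty
    return reward
```

In a training loop, this scalar would be returned by the simulator at each step and accumulated by the reinforcement learning algorithm; the relative magnitudes of the weights determine how strongly the policy favors coverage over caution.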
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 22535026_ROHIT SINGH PAWAR.pdf |  | 3.09 MB | Adobe PDF | View/Open |
