Please use this identifier to cite or link to this item: http://localhost:8081/jspui/handle/123456789/18861
Full metadata record
DC Field                 Value                                               Language
dc.contributor.author    Sharma, Ayush                                       -
dc.date.accessioned      2026-02-05T10:34:07Z                                -
dc.date.available        2026-02-05T10:34:07Z                                -
dc.date.issued           2024-06                                             -
dc.identifier.uri        http://localhost:8081/jspui/handle/123456789/18861  -
dc.guide                 Gupta, Manu Kumar                                   en_US
dc.description.abstract  Traffic Light Control (TLC) and the logistics Transportation Problem (TP) are two important and challenging problems in today's world. In this work we attempt to solve dynamic TLC using Q-Learning, and we provide feedback to the traffic light controller by solving an associated mathematical problem, the Transportation Problem. This feedback takes the form of the number of vehicles allotted to every source-destination pair. The traffic light controller aims to minimize average waiting time, whereas the Transportation Problem minimizes the total distance travelled, based on the topology of the network. We first simulate traffic flow on a synthetic 4x4 grid road network using the SUMO simulator, and then implement the Q-Learning algorithm for Intelligent Traffic Light Control (ITLC) based on the feedback given by the TP. We further provide a detailed analysis for the Dehradun city network. Our results are validated using Little's Law, and we observe that the average waiting time under Q-Learning is smaller than under Fixed Light control in both cases (the 4x4 grid model and the Dehradun city network). We further study objective functions related to tail probability, such as maximum waiting time, and our results demonstrate that a Fixed Light policy can beat the Q-Learning algorithm for such non-traditional objectives. We use state-of-the-art linear programming (LP) solvers to generate the optimal solution to the TP, and this solution is fed as an input over the network topology. We use the Eclipse SUMO simulation software to model and analyze traffic flow dynamics in this work. Intelligent Traffic Light Control is a basic need for controlling modern traffic scenarios. Because traffic is dynamic in nature, an algorithm is needed that can change its strategy/policy according to the needs of the environment. This brings reinforcement learning into the picture, where the agent learns by interacting with the environment: the agent takes an action in accordance with the algorithm employed, and the environment provides a reward after the action.  en_US
dc.language.iso          en                                                  en_US
dc.publisher             IIT, Roorkee                                        en_US
dc.title                 Q-LEARNING FOR INTELLIGENT TRAFFIC LIGHT CONTROL WITH FEEDBACK FROM OPTIMIZATION PROBLEM  en_US
dc.type                  Dissertations                                       en_US
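The agent-environment loop described in the abstract can be sketched as a minimal tabular Q-Learning example. This is an illustrative toy, not the thesis code: the state encoding (discretized queue length), the two-action phase control, the reward (negative queue length as a waiting-time proxy), and the hand-rolled dynamics are all assumptions standing in for the SUMO-based simulation.

```python
import random

# Tabular Q-Learning sketch for a single intersection (toy stand-in for SUMO).
# States: discretized queue length on one approach (0..4 vehicles).
# Actions: 0 = hold the current phase, 1 = switch to serve this queue.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
N_STATES, N_ACTIONS = 5, 2
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action, rng):
    """Toy dynamics: serving (action 1) drains 1-2 vehicles,
    holding (action 0) lets 0-1 vehicles accumulate."""
    if action == 1:
        nxt = max(0, state - rng.randint(1, 2))
    else:
        nxt = min(N_STATES - 1, state + rng.randint(0, 1))
    return nxt, -nxt  # reward: negative queue length (waiting-time proxy)

def choose_action(state, rng):
    if rng.random() < EPSILON:                                # explore
        return rng.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])   # exploit

rng = random.Random(0)
for episode in range(2000):
    state = rng.randrange(N_STATES)  # exploring starts: random initial queue
    for _ in range(10):
        action = choose_action(state, rng)
        nxt, reward = step(state, action, rng)
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt])
                                     - Q[state][action])
        state = nxt
```

After training, the greedy policy prefers serving the queue (action 1) in congested states, mirroring the abstract's claim that the learned controller reduces average waiting time relative to a fixed policy.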
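The Transportation Problem that supplies the controller's feedback can be illustrated on a toy balanced 2x2 instance: minimize total distance-weighted flow subject to supply and demand constraints. The supplies, demands, and distances below are made up, and the brute-force search is only for illustration; the thesis uses LP solvers on the actual network topology.

```python
from itertools import product

# Toy balanced Transportation Problem: 2 sources, 2 destinations.
# cost[i][j] is the distance from source i to destination j.
supply = [3, 2]
demand = [2, 3]
cost = [[4, 6],
        [5, 3]]

best_cost, best_plan = float("inf"), None
# In a balanced 2x2 instance, x[0][0] fixes the rest via the balance
# equations: x[0][1] = supply[0] - x[0][0], x[1][0] = demand[0] - x[0][0].
for x00 in range(min(supply[0], demand[0]) + 1):
    x = [[x00, supply[0] - x00],
         [demand[0] - x00, supply[1] - (demand[0] - x00)]]
    if any(v < 0 for row in x for v in row):
        continue  # infeasible allocation
    total = sum(cost[i][j] * x[i][j] for i, j in product(range(2), range(2)))
    if total < best_cost:
        best_cost, best_plan = total, x
```

The optimal plan (here `[[2, 1], [0, 2]]` with total cost 20) is the kind of source-destination vehicle allotment that, per the abstract, is fed back to the traffic light controller.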
Appears in Collections:MASTERS' THESES (MFSDS & AI)

Files in This Item:
File                       Description  Size     Format
22566005_AYUSH SHARMA.pdf               3.62 MB  Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.