IIIT-Delhi Institutional Repository

Adapting vehicular planning and communications for optimized driving

dc.contributor.author Pal, Mayank Kumar
dc.date.accessioned 2021-03-24T05:00:35Z
dc.date.available 2021-03-24T05:00:35Z
dc.date.issued 2020-07
dc.identifier.uri http://repository.iiitd.edu.in/xmlui/handle/123456789/845
dc.description.abstract Connected Autonomous Vehicles (CAVs) have long held the attention of the intelligent transportation systems community due to their promise of improving road safety and efficiency via increased perception. CAVs broadly rely on two components: (a) wireless technologies such as DSRC, WiFi, and 5G, to enable information exchange amongst the vehicles and the roadside infrastructure, and (b) a vehicle planner that uses this information, along with local information from the vehicle's sensors, to find a motion plan that maximizes the vehicle's driving utility. Most existing works on vehicular planning either assume no communications network or neglect network constraints and costs. On the other hand, works on vehicular networks ignore motion planning. In our work, we consider motion planning that adapts to the available communications resource. Further, by associating costs with communication, we adapt the use of the network to physical on-road constraints. We consider an on-road environment that consists of an autonomous (ego) vehicle, human-driven vehicles, and roadside infrastructure. The ego vehicle would like to optimize its driving utility by using information from its sensors and that obtained by querying the infrastructure over the constrained network, while being cognizant of the associated costs. We formulate the above as a reinforcement learning (RL) problem. The ego vehicle would like to learn a policy function which, at every decision instant, chooses (a) a motion planning action responsible for the longitudinal and lateral behavior of the ego vehicle and (b) a communications action that queries relevant information from the infrastructure. We use deep reinforcement learning to make the vehicle learn the optimal policy, in a model-free setting, using a custom-made simulator that integrates trace scenarios, communications, and reinforcement learning algorithms.
We demonstrate via simulations the ability of the ego vehicle to smartly choose communications and planning actions, achieving significant gains in driving utility from the use of communications. en_US
dc.language.iso en_US en_US
dc.publisher IIIT-Delhi en_US
dc.subject Connected Autonomous Vehicles, CAVs, Autonomous Vehicle, Intelligent Driver Model, Adaptive Cruise Control en_US
dc.title Adapting vehicular planning and communications for optimized driving en_US
dc.type Thesis en_US
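The abstract describes a policy that, at every decision instant, jointly picks a motion planning action and a communications action. A minimal sketch of such a joint action selection is given below; the specific action names, the epsilon-greedy rule, and the toy Q-values are illustrative assumptions, not details from the thesis (which uses a deep RL agent in a model-free setting).

```python
import random

# Hypothetical action sets (illustrative names, not from the thesis):
# planning actions control longitudinal/lateral behavior; communications
# actions decide what, if anything, to query from the infrastructure.
PLANNING_ACTIONS = ["accelerate", "decelerate", "keep_lane", "change_lane"]
COMM_ACTIONS = ["no_query", "query_traffic_ahead", "query_signal_phase"]

# Joint action space: the policy picks one planning action and one
# communications action at every decision instant.
JOINT_ACTIONS = [(p, c) for p in PLANNING_ACTIONS for c in COMM_ACTIONS]

def epsilon_greedy(q_values, epsilon=0.1, rng=random.Random(0)):
    """Return a joint-action index: explore with prob. epsilon, else exploit."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])

# Toy Q-values for a single state; a deep RL agent would instead produce
# these from a network conditioned on sensor data and queried information.
q = [0.0] * len(JOINT_ACTIONS)
q[JOINT_ACTIONS.index(("keep_lane", "no_query"))] = 1.0

idx = epsilon_greedy(q, epsilon=0.0)  # purely greedy for the demo
print(JOINT_ACTIONS[idx])  # -> ('keep_lane', 'no_query')
```

With a communication cost folded into the reward, an agent trained this way can learn to prefer `no_query` unless the queried information is worth its cost, which is the adaptation the abstract refers to.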

