ADOPEL: ADAPTIVE DATA COLLECTION PROTOCOL USING REINFORCEMENT LEARNING FOR VANETS
Abstract
Efficient propagation of information over a vehicular wireless network has long been the focus of the research community. However, few contributions have addressed vehicular data collection, and even fewer have applied learning techniques to such a highly dynamic networking environment. These learning approaches make the collection operation more reactive to node mobility and topology changes than traditional techniques, which simply adapt propositions originally designed for MANETs. To exploit the efficiency gains offered by learning techniques, an Adaptive Data cOllection Protocol using rEinforcement Learning (ADOPEL) is proposed for VANETs. The proposal is based on a distributed learning algorithm with a reward function that takes into account the delay and the number of aggregatable packets. The Q-learning technique gives vehicles the opportunity to optimize their interactions with a highly dynamic environment through their experience in the network. Compared to non-learning schemes, the proposal proves efficient and achieves a good tradeoff between delay and collection ratio.
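To illustrate the idea described in the abstract, the following is a minimal sketch of a distributed Q-learning update for next-hop selection during data collection, where the reward balances forwarding delay against aggregation opportunities. The parameter names (alpha, gamma, w_delay, w_aggr) and the exact reward form are illustrative assumptions, not the paper's precise formulation.

```python
from collections import defaultdict

class AdaptiveCollector:
    """Sketch of a per-vehicle Q-learning agent for choosing a collection next hop."""

    def __init__(self, alpha=0.5, gamma=0.8, w_delay=0.5, w_aggr=0.5):
        self.alpha = alpha           # learning rate (assumed value)
        self.gamma = gamma           # discount factor (assumed value)
        self.w_delay = w_delay       # weight penalising forwarding delay
        self.w_aggr = w_aggr         # weight favouring aggregatable packets
        self.q = defaultdict(float)  # Q[(state, next_hop)]

    def reward(self, delay, aggregatable):
        # Lower delay and more aggregatable packets yield a higher reward.
        return self.w_aggr * aggregatable - self.w_delay * delay

    def update(self, state, next_hop, delay, aggregatable, next_state, neighbours):
        # Standard Q-learning update applied to the chosen next hop.
        r = self.reward(delay, aggregatable)
        best_next = max((self.q[(next_state, n)] for n in neighbours), default=0.0)
        self.q[(state, next_hop)] += self.alpha * (
            r + self.gamma * best_next - self.q[(state, next_hop)]
        )

    def choose_next_hop(self, state, neighbours):
        # Greedy choice among current neighbours (exploration omitted for brevity).
        return max(neighbours, key=lambda n: self.q[(state, n)])
```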
DOI: https://doi.org/10.3844/jcssp.2014.2182.2193
Copyright: © 2014 Ahmed Soua and Hossam Afifi. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Keywords
- Data Collection
- Vehicular Ad Hoc Networks (VANETs)
- Reinforcement Learning
- Q-learning
- Collection Ratio
- Number of Hops