Transfer Learning in Attack Avoidance Games
- Edwin Torres and Fernando Lozano, University of Los Andes, Bogota, Colombia
Abstract
Knowledge transfer is a human capability that has been replicated in machine learning algorithms to improve learning performance. However, little success has been achieved in reinforcement learning tasks when a function approximator is needed to estimate the value functions. In this study, we present a new strategy to facilitate knowledge transfer when an agent learns to solve a sequence of tasks of increasing difficulty. We show that the task sequence is an effective way to segment the hypothesis space of the function approximator, allowing faster learning, especially in the last task of the sequence. Moreover, the sequence allows the design of a similarity function that helps the agent determine autonomously when it is most appropriate to apply the transfer. We empirically show that all tasks in the established ordering must be present to achieve the greatest improvement in learning time for the last task.
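To make the idea concrete, the sketch below is our own illustration (not code from the paper) of how a similarity threshold could gate weight transfer along a curriculum of increasingly difficult tasks. The task descriptors, the `similarity` function, the `THRESHOLD` value, and the `train` stub are all assumptions introduced for this example.

```python
import numpy as np

# Hypothetical descriptors for a sequence of increasingly difficult
# attack-avoidance tasks (the feature vectors are illustrative placeholders).
TASKS = {
    "one_slow_attacker":  np.array([1.0, 0.2, 0.0]),
    "one_fast_attacker":  np.array([1.0, 0.8, 0.0]),
    "two_fast_attackers": np.array([2.0, 0.8, 0.5]),
}

THRESHOLD = 0.7  # assumed similarity threshold for deciding to transfer


def similarity(a, b):
    """Cosine similarity between task descriptors (one possible choice)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def train(weights, episodes=100, seed=0):
    """Stand-in for the RL training loop; returns updated approximator weights.

    A real implementation would run reinforcement learning with a neural
    network function approximator on the given task.
    """
    rng = np.random.default_rng(seed)
    return weights + 0.01 * rng.standard_normal(weights.shape)


prev_desc = None
prev_weights = None
for name, desc in TASKS.items():
    # Transfer only when the new task is similar enough to the previous one;
    # otherwise initialise the approximator from scratch.
    transfer = prev_weights is not None and similarity(prev_desc, desc) >= THRESHOLD
    weights = prev_weights.copy() if transfer else np.zeros(8)
    weights = train(weights)
    print(f"{name}: transfer={'yes' if transfer else 'no'}")
    prev_desc, prev_weights = desc, weights
```

In this toy loop, each task inherits the previous task's weights only when the descriptor similarity clears the threshold, which is one simple way to let the agent decide autonomously whether transfer is appropriate.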
DOI: https://doi.org/10.3844/jcssp.2020.1465.1476
Copyright: © 2020 Edwin Torres and Fernando Lozano. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Keywords
- Reinforcement Learning
- Neural Networks
- Transfer Learning