Title: Learning on real robots from experience and simple user feedback
Authors: Quintía Vidal, Pablo; Iglesias Rodríguez, Roberto; Rodríguez González, Miguel Ángel; Vázquez Regueiro, Carlos
Date issued: 2013
Date available: 2018-11-14
Citation: Quintía Vidal, P., Iglesias Rodríguez, R., Rodríguez González, M., & Vázquez Regueiro, C. (2013). Learning on real robots from experience and simple user feedback. Journal of Physical Agents, 7(1), 57-65. doi: https://doi.org/10.14198/JoPha.2013.7.1.08
ISSN: 1888-0258
URI: http://hdl.handle.net/10347/17703
DOI: 10.14198/JoPha.2013.7.1.08
Type: journal article
Language: English
Rights: This document is under a Creative Commons Attribution 4.0 International license (CC BY 4.0), http://creativecommons.org/licenses/by/4.0/
Subjects: Autonomous robots; Reinforcement learning
Access: open access

Abstract: In this article we describe a novel algorithm that allows fast and continuous learning on a physical robot working in a real environment. The learning process is never stopped, and new knowledge gained from robot-environment interactions can be incorporated into the controller at any time. Our algorithm lets a human observer control the reward given to the robot, thus avoiding the burden of defining a reward function. Despite the highly non-deterministic reinforcement, the experimental results described in this paper show that the learning processes never stop and achieve fast robot adaptation to the diverse situations the robot encounters while moving in several environments.
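
The abstract describes reinforcement learning in which the reward signal is supplied by a human observer rather than a predefined reward function. A minimal sketch of what such human-in-the-loop reward might look like in tabular Q-learning follows; the update rule, function names, and parameters are illustrative assumptions for this sketch, not the authors' actual algorithm:

```python
# Hypothetical sketch: tabular Q-learning where the reward comes from a
# human observer instead of a hand-crafted reward function. All names and
# parameter values here are illustrative, not the paper's actual algorithm.

def q_update(q, state, action, reward, next_state, actions,
             alpha=0.2, gamma=0.9):
    """One Q-learning step; `reward` is whatever the human observer gave."""
    best_next = max(q.get((next_state, b), 0.0) for b in actions)
    old = q.get((state, action), 0.0)
    # Standard temporal-difference update toward reward + discounted best value.
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]

def human_reward():
    """Stand-in for user feedback, e.g. a key press (+ good, anything else bad)."""
    key = input("Reward the last action [+/-]: ")
    return 1.0 if key == "+" else -1.0
```

Because the human feedback is noisy and intermittent (the "highly non-deterministic reinforcement" the abstract mentions), the learning rate `alpha` averages over inconsistent rewards rather than trusting any single one, and learning can continue indefinitely since `q_update` can be called at any time during operation.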