Hierarchical Reinforcement Learning for Humanoids
Abhishek Warrier1, Arpit Kapoor2, T. Sujithra3

1Abhishek Warrier, Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur (Tamil Nadu), India.
2Arpit Kapoor, Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur (Tamil Nadu), India.
3Dr. T. Sujithra, Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur (Tamil Nadu), India.

Manuscript received on 18 April 2019 | Revised Manuscript received on 25 April 2019 | Manuscript published on 30 April 2019 | PP: 1070-1074 | Volume-8 Issue-4, April 2019 | Retrieval Number: D6455048419/19©BEIESP
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: The control of humanoid robots has always been difficult because humanoids are multi-body systems with many degrees of freedom. With the advent of deep reinforcement learning, such complex continuous-control tasks can now be learned directly, without explicit hand-tuning of controllers. However, most of these approaches focus only on achieving a stable walking gait, since teaching a higher-order task to a humanoid is extremely hard. Recent advances in Hierarchical Reinforcement Learning address this by breaking a complex task down into a hierarchy of sub-tasks, each of which is then learned. In this paper, we demonstrate how a hierarchical-learning-inspired approach can be used to teach a higher-order complex task, such as solving a maze, to a humanoid robot.
Keywords: Humanoids, Hierarchical Reinforcement Learning, Reinforcement Learning, Deep Reinforcement Learning.
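The hierarchical decomposition described in the abstract, where a high-level policy selects sub-goals and a low-level policy executes primitive actions toward them, can be illustrated with a minimal sketch. This is not the paper's implementation: the maze is a toy grid, the class names (`HighLevelPolicy`, `LowLevelController`) are hypothetical, and the learned walking gait is stood in for by a hand-coded one-cell step.

```python
class HighLevelPolicy:
    """High level of the hierarchy: picks the next sub-goal (waypoint)
    for the low-level controller to pursue. In a learned system this
    would itself be a trained policy; here it replays fixed waypoints."""

    def __init__(self, waypoints):
        self.waypoints = waypoints
        self.idx = 0

    def next_subgoal(self):
        goal = self.waypoints[self.idx]
        self.idx = min(self.idx + 1, len(self.waypoints) - 1)
        return goal


class LowLevelController:
    """Low level of the hierarchy: a locomotion primitive that moves
    one grid cell toward the current sub-goal. In the humanoid setting
    this role is played by the learned walking gait."""

    def step(self, pos, subgoal):
        x, y = pos
        gx, gy = subgoal
        if x != gx:
            x += 1 if gx > x else -1
        elif y != gy:
            y += 1 if gy > y else -1
        return (x, y)


def solve_maze(start, waypoints, max_steps=100):
    """Run the two-level control loop until the final waypoint is reached."""
    high = HighLevelPolicy(waypoints)
    low = LowLevelController()
    pos = start
    subgoal = high.next_subgoal()
    for _ in range(max_steps):
        if pos == waypoints[-1]:
            return pos
        if pos == subgoal:  # sub-task done; ask the high level for the next one
            subgoal = high.next_subgoal()
        pos = low.step(pos, subgoal)
    return pos


# The waypoints encode the maze corridor; the last waypoint is the exit.
final = solve_maze((0, 0), [(2, 0), (2, 3), (5, 3)])
print(final)  # (5, 3)
```

The point of the sketch is the division of labour: the high level reasons about *which* sub-task to solve next, while the low level only ever solves the simple "reach this nearby sub-goal" problem, which is what makes a higher-order task like maze solving tractable for a walking humanoid.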

Scope of the Article: Deep Learning