Reinforcement Learning Agents in Procedurally-generated Environments with Sparse Rewards

dc.contributor.author Nahirnyi, Oleksii
dc.date.accessioned 2022-07-22T10:01:28Z
dc.date.available 2022-07-22T10:01:28Z
dc.date.issued 2022
dc.identifier.citation Nahirnyi, Oleksii. Reinforcement Learning Agents in Procedurally-generated Environments with Sparse Rewards / Oleksii Nahirnyi; Supervisor: Dr. Pablo Maldonado; Ukrainian Catholic University, Faculty of Applied Sciences, Department of Computer Sciences. – Lviv, 2022. – 45 p. uk
dc.identifier.uri https://er.ucu.edu.ua/handle/1/3165
dc.description.abstract Solving sparse-reward environments is one of the most significant challenges for state-of-the-art (SOTA) Reinforcement Learning (RL). The recent use of sparse rewards in procedurally-generated environments (PGEs), which measure an agent's generalization capabilities more adequately via randomization, makes this challenge even harder. Despite some progress by recently developed exploration-based algorithms in MiniGrid PGEs, improving sample complexity remains an open research problem. We contribute by proposing a new formulation of exploratory intrinsic reward, grounded in a thorough review and categorization of existing methods in this area. An agent that optimizes an RL objective augmented with this formulation outperforms SOTA methods in some small- and medium-sized PGEs. uk
dc.language.iso en uk
dc.subject reinforcement learning uk
dc.subject exploration uk
dc.subject sparse rewards uk
dc.subject procedurally-generated environment uk
dc.subject intrinsic reward uk
dc.title Reinforcement Learning Agents in Procedurally-generated Environments with Sparse Rewards uk
dc.type Preprint uk
dc.status Published for the first time uk
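
The abstract names the approach (an exploratory intrinsic reward added to the RL objective) but does not give its form. Below is a minimal, hypothetical sketch of the general pattern it refers to, using a simple count-based bonus r_total = r_extrinsic + beta / sqrt(N(s)); the class names, the beta coefficient, and the count-based form are illustrative assumptions, not the thesis's actual formulation.

```python
# Hypothetical sketch of intrinsic-reward shaping for sparse-reward
# exploration; NOT the thesis's formulation, which the abstract does not give.
from collections import defaultdict
import math


class CountBasedBonus:
    """Count-based exploration bonus that decays as a state is revisited."""

    def __init__(self, beta: float = 0.1):
        self.beta = beta  # exploration coefficient (assumed value)
        self.visit_counts = defaultdict(int)

    def __call__(self, state_key) -> float:
        # Increment the visit count for this (hashable) state abstraction
        # and return a bonus proportional to 1 / sqrt(N(s)).
        self.visit_counts[state_key] += 1
        return self.beta / math.sqrt(self.visit_counts[state_key])


def shaped_reward(extrinsic: float, state_key, bonus: CountBasedBonus) -> float:
    """Total reward the agent optimizes: sparse extrinsic plus intrinsic bonus."""
    return extrinsic + bonus(state_key)


if __name__ == "__main__":
    bonus = CountBasedBonus(beta=0.1)
    # In a sparse-reward gridworld the extrinsic reward is 0 on most steps,
    # so the decaying intrinsic term is what drives early exploration.
    for step, state in enumerate([(0, 0), (0, 1), (0, 0), (0, 0)]):
        print(step, shaped_reward(0.0, state, bonus))
```

In sparse-reward PGEs the extrinsic reward is zero almost everywhere, so early in training the intrinsic term dominates and steers the policy toward unvisited states; as visit counts grow, the bonus decays and the extrinsic objective takes over.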

