dc.contributor.author | Shyshkin, Oleh | |
dc.date.accessioned | 2019-02-19T16:04:10Z | |
dc.date.available | 2019-02-19T16:04:10Z | |
dc.date.issued | 2019 | |
dc.identifier.citation | Shyshkin, Oleh. Music Generation Powered by Artificial Intelligence : Master Thesis : manuscript / Oleh Shyshkin ; Supervisor Juan Pablo Maldonado Lopez, Dr. ; Ukrainian Catholic University, Department of Computer Sciences. – Lviv : [s.n.], 2019. – 26 p. : ill. | uk |
dc.identifier.uri | http://er.ucu.edu.ua/handle/1/1337 | |
dc.language.iso | en | uk |
dc.subject | Music Generation | uk |
dc.subject | TCN based models | uk |
dc.subject | PerformanceRNN | uk |
dc.title | Music Generation Powered by Artificial Intelligence | uk |
dc.type | Preprint | uk |
dc.status | Published for the first time | uk |
dc.description.abstracten | Music is an essential part of human life today. Despite the long history of the phenomenon, people continue to explore it and expand its horizons. Over the last ten years the quality of computer-generated music has improved significantly: state-of-the-art machine learning models such as PerformanceRNN can perform music close to a human level. However, such systems still struggle with long-term music generation. In this work, we apply a TCN model to the music generation task and evaluate the quality of the generated music. We show that TCN-based models perform significantly better than a baseline model on long-term music generation, although they have their own weaknesses in musicality and timing. We also discuss possible options for resolving these issues. | uk |
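The abstract names TCN-based models but the record itself contains no code. The following is a minimal, hypothetical sketch of the core TCN building block (a dilated causal 1D convolution with a residual connection), assuming PyTorch; the layer sizes, class name CausalConvBlock, and all hyperparameters are illustrative and not the architecture evaluated in the thesis.

```python
# Hypothetical sketch of a TCN-style block, assuming PyTorch; not the thesis's model.
import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 2, dilation: int = 1):
        super().__init__()
        # Left-pad so the convolution never sees future time steps (causality).
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time), e.g. a sequence of encoded MIDI events.
        out = self.act(self.conv(nn.functional.pad(x, (self.pad, 0))))
        return x + out  # residual connection helps with very long sequences

# Stacking blocks with exponentially growing dilation gives a large receptive field,
# which is what lets a TCN capture long-range musical structure.
tcn = nn.Sequential(*[CausalConvBlock(64, dilation=2 ** i) for i in range(6)])
events = torch.randn(1, 64, 512)          # dummy batch: 512 time steps, 64 channels
assert tcn(events).shape == events.shape  # output is aligned step-for-step with input
```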