Show simple item record
dc.contributor.author | Onyshchak, Oleh | |
dc.date.accessioned | 2020-01-29T15:03:32Z | |
dc.date.available | 2020-01-29T15:03:32Z | |
dc.date.issued | 2020 | |
dc.identifier.citation | Onyshchak, Oleh. Image Recommendation for Wikipedia Articles : Master Thesis : manuscript rights / Oleh Onyshchak ; Supervisor: Miriam Redi ; Ukrainian Catholic University, Department of Computer Sciences. – Lviv : [s.n.], 2020. – 39 p. : ill. | uk |
dc.identifier.uri | http://er.ucu.edu.ua/handle/1/1920 | |
dc.language.iso | en | uk |
dc.subject | Image Recommendation | uk |
dc.subject | Unimodal Representation | uk |
dc.subject | Intermediate Representation | uk |
dc.title | Image Recommendation for Wikipedia Articles | uk |
dc.type | Preprint | uk |
dc.status | Published for the first time | uk |
dc.description.abstracten | Multimodal learning, i.e., simultaneous learning from different data sources such as audio, text, and images, is a rapidly emerging field of Machine Learning. It is also considered learning at the next level of abstraction, which will allow us to tackle more complicated problems such as generating cartoons from a plot or recognizing speech from lip movement. In this paper, we introduce a baseline model that recommends the most relevant images for a Wikipedia article based on state-of-the-art multimodal techniques. We also introduce the Wikipedia multimodal dataset, containing more than 36,000 high-quality articles. | uk |