Semi-supervised feature sharing for efficient video segmentation

dc.contributor.author Ponomarchuk, Anton
dc.date.accessioned 2019-02-19T15:35:52Z
dc.date.available 2019-02-19T15:35:52Z
dc.date.issued 2019
dc.identifier.citation Ponomarchuk, Anton. Semi-supervised feature sharing for efficient video segmentation : Master Thesis : manuscript / Anton Ponomarchuk ; Supervisor Andrey Luzan ; Ukrainian Catholic University, Department of Computer Sciences. – Lviv : [s.n.], 2019. – 28 p. : ill. uk
dc.identifier.uri http://er.ucu.edu.ua/handle/1/1335
dc.language.iso en uk
dc.subject Semi-supervised feature sharing uk
dc.subject Video semantic segmentation uk
dc.subject Loss function and accuracy uk
dc.title Semi-supervised feature sharing for efficient video segmentation uk
dc.type Preprint uk
dc.status Published for the first time uk
dc.description.abstracten In robot sensing and autonomous driving domains, producing precise semantic segmentation masks for images greatly helps with understanding the environment and, as a result, interacting with it better. These tasks usually involve images with more than two object classes. Moreover, semantic segmentation should be done within a short time. Almost all approaches to this task use a heavyweight end-to-end deep neural network or external blocks such as GRU [14], LSTM [25], or optical flow [1]. In this work, we provide a deep neural network architecture that learns to extract global high-level features and propagate them across images depicting the same video scene, speeding up image processing. We provide a propagation strategy that requires no external blocks. We also provide a loss function for training such a network on a dataset in which the vast majority of images have no segmentation mask. uk
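The thesis itself defines the semi-supervised loss; as a rough illustration only, training on a video where most frames lack masks can be handled by a cross-entropy term averaged solely over the annotated frames, so unlabeled frames contribute nothing to the supervised signal. All names, shapes, and the NumPy formulation below are assumptions, not the author's implementation:

```python
import numpy as np

def masked_ce_loss(logits, labels, has_mask):
    """Pixel-wise cross-entropy averaged only over frames with ground truth.

    logits:   (T, C, H, W) per-frame class scores (hypothetical layout)
    labels:   (T, H, W) integer class ids (arbitrary where has_mask is False)
    has_mask: (T,) bool -- True for the sparse annotated frames
    """
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    T, C, H, W = logits.shape
    # gather the predicted probability of the true class at every pixel
    t = np.arange(T)[:, None, None]
    h = np.arange(H)[None, :, None]
    w = np.arange(W)[None, None, :]
    p_true = probs[t, labels, h, w]                      # (T, H, W)
    per_frame = -np.log(p_true + 1e-12).mean(axis=(1, 2))  # (T,)
    # unlabeled frames are simply excluded from the average
    return per_frame[has_mask].mean()
```

In a full semi-supervised setup this supervised term would typically be combined with an unsupervised consistency term on the propagated features; that part is specific to the thesis and is not sketched here.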

