Global Motion Understanding in Large-Scale Video Object Segmentation


dc.contributor.author Fedynyak, Volodymyr
dc.date.accessioned 2024-02-14T10:11:49Z
dc.date.available 2024-02-14T10:11:49Z
dc.date.issued 2023
dc.identifier.citation Fedynyak, Volodymyr. Global Motion Understanding in Large-Scale Video Object Segmentation / Volodymyr Fedynyak; Supervisor: Roman Riazantsev; Ukrainian Catholic University, Department of Computer Sciences. – Lviv: 2023. – 36 p.: ill. uk
dc.identifier.uri https://er.ucu.edu.ua/handle/1/4422
dc.language.iso en uk
dc.title Global Motion Understanding in Large-Scale Video Object Segmentation uk
dc.type Preprint uk
dc.status Published for the first time uk
dc.description.abstracten In this thesis, we show that transferring knowledge from other domains of video understanding, combined with large-scale learning, can improve the robustness of Video Object Segmentation (VOS) under complex circumstances. Namely, we focus on integrating global scene motion knowledge to improve large-scale semi-supervised Video Object Segmentation. Prior works on VOS mostly rely on direct comparison of semantic and contextual features to perform dense matching between current and past frames, overlooking the actual motion structure. On the other hand, the Optical Flow Estimation task aims to approximate the scene motion field, exposing global motion patterns that are typically undiscoverable during all-pairs similarity search. We present WarpFormer, an architecture for semi-supervised Video Object Segmentation that exploits existing knowledge in motion understanding to conduct smoother propagation and more accurate matching. Our framework employs a generic pretrained Optical Flow Estimation network whose prediction is used to warp both past frames and instance segmentation masks to the current frame domain. Consequently, the warped segmentation masks are refined and fused together, aiming to inpaint occluded regions and eliminate artifacts caused by flow field imperfections. Additionally, we employ the novel large-scale MOSE 2023 dataset to train the model on various complex scenarios. Our method demonstrates strong performance on the DAVIS 2016/2017 validation sets (93.0% and 85.9%), DAVIS 2017 test-dev (80.6%) and YouTube-VOS 2019 validation (83.8%) that is competitive with alternative state-of-the-art methods while using a much simpler memory mechanism and instance understanding logic. uk
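The warping step described in the abstract, using a flow field to bring a past-frame segmentation mask into the current frame's coordinates, can be sketched as a simple backward warp with nearest-neighbour sampling. This is a minimal NumPy illustration under assumed conventions (flow stored as per-pixel (dx, dy) offsets from the current frame back to the past frame), not the thesis's actual implementation, which uses a pretrained flow network and a learned refinement/fusion module:

```python
import numpy as np

def warp_mask(mask, flow):
    """Backward-warp a past-frame mask to the current frame.

    mask: (H, W) integer array, instance labels in the past frame.
    flow: (H, W, 2) float array; flow[y, x] = (dx, dy) offset from
          current-frame pixel (x, y) to its source in the past frame.
    Nearest-neighbour sampling keeps instance labels discrete.
    """
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Source coordinates in the past frame, clamped to the image bounds.
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return mask[src_y, src_x]

# Example: an object occupying the leftmost column of a 4x4 mask,
# with a uniform flow saying "each current pixel came from one pixel
# to its left" -- the warped mask shifts the object one pixel right.
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:, 0] = 1
flow = np.zeros((4, 4, 2), dtype=np.float32)
flow[..., 0] = -1.0
warped = warp_mask(mask, flow)
```

In the full method, several past frames and masks are warped this way, then refined and fused to fill occluded regions the flow cannot resolve.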

