dc.description.abstract (en):
In this thesis, we show that transferring knowledge from other domains of video understanding, combined with large-scale learning, can improve the robustness of Video Object Segmentation (VOS) under complex circumstances. Specifically, we focus on integrating global scene motion knowledge to improve large-scale semi-supervised Video Object Segmentation. Prior works on VOS mostly rely on direct comparison of semantic and contextual features to perform dense matching between current and past frames, overlooking the actual motion structure. In contrast, the Optical Flow Estimation task aims to approximate the scene motion field, exposing global motion patterns that are typically undiscoverable by all-pairs similarity search. We present WarpFormer, an architecture for semi-supervised Video Object Segmentation that exploits existing knowledge in motion understanding to achieve smoother propagation and more accurate matching. Our framework employs a generic pretrained Optical Flow Estimation network whose predictions are used to warp both past frames and instance segmentation masks into the current frame's domain. The warped segmentation masks are then refined and fused together to inpaint occluded regions and eliminate artifacts caused by flow field imperfections. Additionally, we employ the novel large-scale MOSE 2023 dataset to train the model on a variety of complex scenarios. Our method demonstrates strong performance on the DAVIS 2016/2017 validation sets (93.0% and 85.9%), the DAVIS 2017 test-dev set (80.6%), and the YouTube-VOS 2019 validation set (83.8%), competitive with state-of-the-art alternatives while using a much simpler memory mechanism and instance-understanding logic.