dc.contributor.author | Babin, Igor | |
dc.date.accessioned | 2024-08-22T09:52:15Z | |
dc.date.available | 2024-08-22T09:52:15Z | |
dc.date.issued | 2024 | |
dc.identifier.citation | Babin Igor. Image inpainting in latent space. Ukrainian Catholic University, Faculty of Applied Sciences, Department of Computer Sciences. Lviv 2024, 41 p. | uk |
dc.identifier.uri | https://er.ucu.edu.ua/handle/1/4662 | |
dc.language.iso | en | uk |
dc.subject | Image inpainting | uk |
dc.subject | latent space | uk |
dc.title | Image inpainting in latent space | uk |
dc.type | Preprint | uk |
dc.status | Published for the first time | uk |
dc.description.abstracten | This thesis introduces a framework in which training image autoencoders by applying losses in latent space improves the quality of decodings and can significantly decrease training time. Furthermore, within this framework, we propose a mask guidance mechanism that combines mask features and image features in the early phases of image encoding, which supports the encoder in reconstructing the image embeddings. These methods are demonstrated in the context of the ill-posed image inpainting problem, which seeks to reconstruct regions of the image that have been occluded by a mask. We show that latent loss application results in more naturally inpainted textures when used in a state-of-the-art inpainting architecture. We also show that a mask-controlled embedding gives superior results across every common inpainting metric when compared to a state-of-the-art approach, which provides mask conditioning only in the image space. The final component of our study involves the visualization of latent space to highlight damaged areas of features that need refinement. | uk |
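The abstract describes the two ideas (a loss applied in latent space and early mask guidance) only at a high level. The sketch below is a minimal, hypothetical illustration of how such a latent-space reconstruction loss with simple mask conditioning might look in PyTorch; it is not the thesis's actual code, and all module names, channel layouts, and shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for the encoder; the real model would be far larger.
# Input: 3 image channels + 1 mask channel (a simplified form of early
# mask guidance, where mask features join image features at encoding time).
encoder = nn.Conv2d(4, 8, kernel_size=3, padding=1)

def latent_space_loss(encoder, image, mask):
    """Compare embeddings of the masked and the clean image in latent space."""
    # Encode the masked image together with the mask channel.
    z_pred = encoder(torch.cat([image * (1 - mask), mask], dim=1))
    with torch.no_grad():
        # Target embedding comes from the clean image; no gradient through it.
        z_target = encoder(torch.cat([image, torch.zeros_like(mask)], dim=1))
    # The reconstruction loss is computed on latent features, not raw pixels.
    return F.l1_loss(z_pred, z_target)

# Minimal usage: a random image batch with a rectangular hole.
image = torch.rand(2, 3, 64, 64)
mask = torch.zeros(2, 1, 64, 64)
mask[:, :, 16:48, 16:48] = 1.0
loss = latent_space_loss(encoder, image, mask)
loss.backward()
```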