Video Coding with Spatio-Temporal Texture Synthesis
In this context, the goal of this PhD is to investigate the coding of spatio-temporal textures from a perceptual point of view, namely to obtain a rendering of the decoded texture that is visually consistent with the original. In conventional video coding, the texture information is carried in the residual prediction error, which is transformed and quantized. In this thesis, we aim to better represent spatio-temporal textures using adapted perceptual models, while taking coding constraints into account, including compatibility with conventional coding systems.
Video coding using spatio-temporal texture synthesis
The proposed system introduces a spatio-temporal correspondence matching method that ensures each pixel of the input image is bijectively mapped to a reference pixel.
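As an illustration of what correspondence matching can look like, the sketch below performs brute-force patch matching: for every pixel of the input frame it searches a small window in the reference frame for the patch with the lowest sum of squared differences. The function name, patch size, and search radius are illustrative assumptions, not the actual method of the proposed system, and this simple sketch does not by itself enforce the bijectivity constraint mentioned above (that would require additional uniqueness constraints on the mapping).

```python
import numpy as np

def match_correspondences(input_img, ref_img, patch=3, search=4):
    """Brute-force patch matching (illustrative sketch, not the paper's
    method): for each pixel of input_img, find the best-matching pixel in
    ref_img within a local search window by minimizing the sum of squared
    differences between patches.
    Returns an (H, W, 2) array of (row, col) coordinates into ref_img."""
    h, w = input_img.shape
    r = patch // 2
    # Pad both images so every pixel has a full patch around it.
    pin = np.pad(input_img, r, mode="edge")
    pref = np.pad(ref_img, r, mode="edge")
    corr = np.zeros((h, w, 2), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            p = pin[y:y + patch, x:x + patch]
            best, best_yx = np.inf, (y, x)
            # Scan a small window around the co-located reference pixel.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ry, rx = y + dy, x + dx
                    if 0 <= ry < h and 0 <= rx < w:
                        q = pref[ry:ry + patch, rx:rx + patch]
                        cost = np.sum((p - q) ** 2)
                        if cost < best:
                            best, best_yx = cost, (ry, rx)
            corr[y, x] = best_yx
    return corr
```

For example, matching a frame against a shifted copy of itself should recover the shift for interior pixels; real systems replace the exhaustive search with faster approximate schemes such as PatchMatch-style propagation.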
To extract visually salient features, we construct a spatio-temporal saliency map by analyzing the video using a combined bottom-up and top-down visual saliency model.
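To make the saliency step concrete, the sketch below combines a classic bottom-up spatial cue (the spectral-residual method of Hou and Zhang) with a simple temporal cue (normalized frame difference). This is a hedged illustration under assumed choices: the function names, the 3x3 smoothing kernel, and the weighting parameter `alpha` are all assumptions, and the top-down component of the model described above is omitted here.

```python
import numpy as np

def spectral_residual_saliency(img):
    """Bottom-up spatial saliency via the spectral-residual method:
    subtract a locally smoothed log-amplitude spectrum from the original,
    reconstruct with the original phase, and square the result.
    Output is scaled to [0, 1]."""
    f = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # Local average of the log spectrum via a 3x3 box filter.
    k = 3
    pad = np.pad(log_amp, k // 2, mode="edge")
    smooth = np.zeros_like(log_amp)
    for dy in range(k):
        for dx in range(k):
            smooth += pad[dy:dy + log_amp.shape[0], dx:dx + log_amp.shape[1]]
    smooth /= k * k
    residual = log_amp - smooth
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

def spatiotemporal_saliency(prev_frame, frame, alpha=0.5):
    """Illustrative spatio-temporal map: blend spatial spectral-residual
    saliency with a temporal term (normalized absolute frame difference).
    alpha weights the spatial cue against the temporal one (assumption)."""
    spatial = spectral_residual_saliency(frame)
    temporal = np.abs(frame - prev_frame)
    if temporal.max() > 0:
        temporal /= temporal.max()
    return alpha * spatial + (1 - alpha) * temporal
```

In a full model, this bottom-up map would be modulated by top-down cues (e.g. task- or object-driven priors) before being used to steer the texture coding.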