Attention-based Fusion for Multi-source Human Image Generation - Archive ouverte HAL
Conference Papers, Year: 2019

Attention-based Fusion for Multi-source Human Image Generation

Abstract

We present a generalization of the person-image generation task in which a human image is generated conditioned on a target pose and a set X of source appearance images. In this way, we can exploit multiple, possibly complementary, images of the same person that are usually available at training and at testing time. The solution we propose is mainly based on a local attention mechanism that selects relevant information from the different source image regions, avoiding the need to build a specific generator for each cardinality of X. The empirical evaluation of our method shows the practical interest of addressing the person-image generation problem in a multi-source setting.
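As a rough illustration of the idea summarized in the abstract, the sketch below shows one way an attention-based fusion over a variable-size set of source feature maps could look in PyTorch. The module name, layer sizes, shapes, and the softmax-weighted per-location sum are illustrative assumptions, not the authors' exact architecture; the only property it shares with the paper's setting is that a single module handles any cardinality of X.

```python
# Minimal sketch (assumed, not the authors' model): per-location softmax
# attention over a variable-size list of source feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiSourceAttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Scores each source location by comparing it with the target-pose features.
        self.score = nn.Conv2d(2 * channels, 1, kernel_size=1)

    def forward(self, sources, target):
        # sources: list of N tensors, each (B, C, H, W), one per source image
        # target:  (B, C, H, W) features encoding the target pose
        scores = [self.score(torch.cat([s, target], dim=1)) for s in sources]
        scores = torch.stack(scores, dim=0)      # (N, B, 1, H, W)
        weights = F.softmax(scores, dim=0)       # attention over the N sources
        stacked = torch.stack(sources, dim=0)    # (N, B, C, H, W)
        return (weights * stacked).sum(dim=0)    # fused features, (B, C, H, W)


if __name__ == "__main__":
    fusion = MultiSourceAttentionFusion(channels=64)
    srcs = [torch.randn(2, 64, 32, 32) for _ in range(3)]  # any number of sources
    tgt = torch.randn(2, 64, 32, 32)
    print(fusion(srcs, tgt).shape)  # torch.Size([2, 64, 32, 32])
```

Because the attention weights are computed per source and normalized across the set, the same parameters serve sets of any size, which is the practical point of the multi-source formulation.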

Dates and versions

hal-02369194, version 1 (18-11-2019)

Cite

Stéphane Lathuilière, Enver Sangineto, Aliaksandr Siarohin, Nicu Sebe. Attention-based Fusion for Multi-source Human Image Generation. IEEE Winter Conference on Applications of Computer Vision, Mar 2020, Snowmass Village, United States. ⟨hal-02369194⟩