
Attention-based Fusion for Multi-source Human Image Generation

Abstract: We present a generalization of the person-image generation task in which a human image is generated conditioned on a target pose and a set X of source appearance images. In this way, we can exploit multiple, possibly complementary images of the same person, which are usually available at both training and testing time. The solution we propose is mainly based on a local attention mechanism that selects relevant information from different source image regions, avoiding the need to build a specific generator for each cardinality of X. The empirical evaluation of our method shows the practical interest of addressing the person-image generation problem in a multi-source setting.
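The core idea — fusing a variable-size set of source features with per-location attention weights, so the same model handles any cardinality of X — can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation: the function names, the dot-product scoring, and the feature shapes are assumptions; the paper uses learned encoders and a learned local attention mechanism.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(target_feat, source_feats):
    """Fuse K source feature maps into one, weighted per spatial
    location by similarity to the target-pose features.

    target_feat : (H, W, C) features for the target pose
    source_feats: (K, H, W, C) encoded source appearance images
    returns     : (H, W, C) fused appearance features

    Hypothetical scoring: dot-product similarity; the paper's
    attention is learned, but the fusion shape is the same and,
    crucially, independent of K.
    """
    # Per-location similarity between the target and each source: (K, H, W)
    scores = np.einsum('hwc,khwc->khw', target_feat, source_feats)
    # Normalize over the K sources, so weights at each (h, w) sum to 1
    weights = softmax(scores, axis=0)
    # Attention-weighted sum over sources: (H, W, C)
    return np.einsum('khw,khwc->hwc', weights, source_feats)
```

With K = 1 the weights are identically 1 and the single source passes through unchanged, which is the degenerate single-source setting; any K > 1 is handled by the same code path without a cardinality-specific generator.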
Contributor: Stéphane Lathuilière
Submitted on: Monday, November 18, 2019 - 6:37:08 PM
Last modified on: Wednesday, November 3, 2021 - 6:17:46 AM



  • HAL Id: hal-02369194, version 1
  • arXiv: 1905.02655


Stéphane Lathuilière, Enver Sangineto, Aliaksandr Siarohin, Nicu Sebe. Attention-based Fusion for Multi-source Human Image Generation. IEEE Winter Conference on Applications of Computer Vision (WACV), Mar 2020, Snowmass Village, United States. ⟨hal-02369194⟩


