pith. machine review for the scientific record.

arxiv: 1610.05586 · v2 · submitted 2016-10-18 · 💻 cs.CV

Recognition: unknown

Deep Identity-aware Transfer of Facial Attributes

Authors on Pith: no claims yet
classification 💻 cs.CV
keywords attribute, network, transfer, image, facial, diat, identity-aware, mask
Original abstract

This paper presents a deep convolutional network model for Identity-Aware Transfer (DIAT) of facial attributes. Given a source input image and a reference attribute, DIAT aims to generate a facial image that exhibits the reference attribute while keeping the same or a similar identity to the input image. The model consists of a mask network and an attribute transform network, which work in synergy to generate a photo-realistic facial image with the reference attribute. Because the reference attribute may relate only to some parts of the image, the mask network is introduced to avoid incorrect editing of attribute-irrelevant regions. The estimated mask is then used to combine the input and transformed images into the transfer result. For joint training of the transform and mask networks, we incorporate an adversarial attribute loss, an identity-aware adaptive perceptual loss, and a VGG-FACE-based identity loss. Furthermore, a denoising network serves as a perceptual regularizer to suppress artifacts in the transfer result, while an attribute ratio regularization constrains the size of the attribute-relevant region. DIAT provides a unified solution for several representative facial attribute transfer tasks, e.g., expression transfer, accessory removal, age progression, and gender transfer, and can be extended to other face enhancement tasks such as face hallucination. The experimental results validate the effectiveness of the proposed method. Even for identity-related attributes (e.g., gender), DIAT obtains visually impressive results by changing the attribute while retaining most identity-aware features.
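The mask-guided combination the abstract describes — the transformed image supplies the attribute-relevant pixels and the input image supplies the rest — can be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: the function and variable names are ours, and in DIAT the mask and the transformed image are produced by the learned mask and transform networks. The ratio penalty is a simple squared-error stand-in for the paper's attribute ratio regularization.

```python
import numpy as np

def combine_with_mask(input_img: np.ndarray,
                      transformed_img: np.ndarray,
                      mask: np.ndarray) -> np.ndarray:
    """Blend transformed and input images with a soft mask in [0, 1].

    Pixels where mask is near 1 (attribute-relevant region) come from the
    transformed image; pixels where mask is near 0 stay unchanged.
    """
    return mask * transformed_img + (1.0 - mask) * input_img

def attribute_ratio_penalty(mask: np.ndarray, target_ratio: float) -> float:
    """Penalize masks whose mean activation strays from a target ratio,
    constraining the size of the attribute-relevant region (hypothetical
    squared-error form; the paper's exact regularizer may differ)."""
    return float((mask.mean() - target_ratio) ** 2)

# Toy example: a 4x4 grayscale "image" where the mask selects the top half.
inp = np.zeros((4, 4))        # source image (all zeros)
out = np.ones((4, 4))         # transformed image (all ones)
mask = np.zeros((4, 4))
mask[:2, :] = 1.0             # attribute-relevant region: top two rows

blended = combine_with_mask(inp, out, mask)
# Top half is taken from the transformed image, bottom half from the input.
```

With a binary mask this reduces to a per-pixel select; during training the mask is soft, so gradients flow to both networks through the blend.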

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. LatRef-Diff: Latent and Reference-Guided Diffusion for Facial Attribute Editing and Style Manipulation

cs.CV · 2026-04 · unverdicted · novelty 6.0

    LatRef-Diff replaces semantic directions in diffusion models with latent and reference-guided style codes, uses a hierarchical style modulation module, and applies forward-backward consistency training to achieve stat...

  2. AttDiff-GAN: A Hybrid Diffusion-GAN Framework for Facial Attribute Editing

cs.CV · 2026-04 · unverdicted · novelty 5.0

    AttDiff-GAN decouples attribute manipulation via feature-level adversarial learning and guides diffusion generation with the edited features, plus PriorMapper and RefineExtractor modules, to achieve more accurate edit...