A Survey on Face Generation Using Deep Learning
Keywords:
Deep learning frameworks (e.g., TensorFlow, PyTorch), Image-to-image translation, Image synthesis, Unsupervised learning
Abstract
The field of deep learning has seen remarkable advances in the generation of realistic, high-quality human faces. This survey provides an overview of the techniques and methodologies employed in face generation using deep learning models. Face generation primarily relies on Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs); these models can synthesize lifelike faces, enabling a wide range of applications in computer vision, entertainment, and beyond.
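As a concrete illustration of the adversarial setup mentioned above, the sketch below pairs a small DCGAN-style generator with a discriminator in PyTorch (one of the frameworks named in the keywords). The 64x64 resolution, 100-dimensional latent vector, and channel widths are illustrative assumptions, not details taken from any of the surveyed papers.

```python
# Minimal DCGAN-style generator/discriminator sketch for 64x64 face images.
# Latent size, channel widths, and resolution are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 100  # assumed size of the noise vector z


class Generator(nn.Module):
    """Maps a noise vector z to a 3x64x64 RGB image via transposed convolutions."""
    def __init__(self, latent_dim: int = LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 512, 4, 1, 0, bias=False),  # -> 4x4
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),         # -> 8x8
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),         # -> 16x16
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),          # -> 32x32
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),            # -> 64x64
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z.view(z.size(0), -1, 1, 1))


class Discriminator(nn.Module):
    """Scores an image as real or generated (single logit per image)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),  # -> 32x32
            nn.Conv2d(64, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),                    # -> 16x16
            nn.Conv2d(128, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),                    # -> 8x8
            nn.Conv2d(256, 512, 4, 2, 1, bias=False),
            nn.BatchNorm2d(512), nn.LeakyReLU(0.2, True),                    # -> 4x4
            nn.Conv2d(512, 1, 4, 1, 0, bias=False),                          # -> 1x1 logit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).view(-1)


if __name__ == "__main__":
    g, d = Generator(), Discriminator()
    z = torch.randn(8, LATENT_DIM)
    fake_faces = g(z)       # (8, 3, 64, 64), values in [-1, 1]
    scores = d(fake_faces)  # (8,) real/fake logits
    print(fake_faces.shape, scores.shape)
```

Training would alternate discriminator and generator updates with a binary cross-entropy loss on the logits; techniques such as progressive growing (Karras et al., 2018) extend this basic recipe to higher-resolution face synthesis.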
References
- Image-to-Image Translation with Conditional Adversarial Networks (Pix2Pix) by Isola et al. (2016)
- CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks by Zhu et al. (2017)
- Progressive Growing of GANs for Improved Quality, Stability, and Variation by Karras et al. (2018)
- StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation by Choi et al. (2018)
- SPADE: Semantic Image Synthesis with Spatially-Adaptive Normalization by Park et al. (2019)
- MUNIT: Multimodal Unsupervised Image-to-Image Translation by Huang et al. (2018)
License
Copyright (c) IJSRCSEIT

This work is licensed under a Creative Commons Attribution 4.0 International License.