Understanding Text to Image Generation Techniques
In recent years, Generative Adversarial Networks (GANs) have been successful at generating realistic, detailed images, with applications in data augmentation, astronomy, and photo editing. This survey gives a brief introduction to, and reviews recent improvements in, GANs that generate images from text descriptions. We present their effectiveness, their improvements, and the scope for further research and discussion on this topic.

Introduction: Text-to-image (T2I) generation is the process of generating images from text descriptions. The text description is an input to the model and guides it to create the corresponding image. Text-to-image generation is a form of descriptive design and multimodal learning, and it drives research into simplification and idea generation from text; it addresses the core problem of producing computer-aided designs from a simple text description. Thus, every T2I approach is built around a GAN conditioned on text: unlike a vanilla GAN or DCGAN, it generates an image based on the condition given to it. Recent improvements in GANs target both global and local image structure, to sharpen different parts of the image, and better semantic fidelity to the given input text.
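The conditioning idea described above can be sketched in a few lines: the generator receives a noise vector concatenated with a text embedding, so the same network produces different images for different descriptions. This is a minimal, illustrative sketch in plain Python; the embedding function, layer sizes, and single linear layer are hypothetical stand-ins for the deep encoder and deconvolutional generator used in real T2I GANs.

```python
import random

NOISE_DIM = 8   # hypothetical noise-vector size
TEXT_DIM = 4    # hypothetical text-embedding size
IMG_DIM = 16    # flattened "image" size, for illustration only

def embed_text(description, dim=TEXT_DIM):
    """Toy deterministic text embedding (a stand-in for a real
    learned text encoder used in T2I models)."""
    vec = [0.0] * dim
    for i, ch in enumerate(description):
        vec[i % dim] += ord(ch) / 1000.0
    return vec

def generator(z, text_embedding, weights):
    """One linear layer mapping [z ; text] -> image, standing in for
    the deep upsampling generator of a text-conditioned GAN."""
    x = z + text_embedding  # concatenation: this is the conditioning step
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

random.seed(0)
in_dim = NOISE_DIM + TEXT_DIM
weights = [[random.uniform(-1, 1) for _ in range(in_dim)]
           for _ in range(IMG_DIM)]
z = [random.gauss(0, 1) for _ in range(NOISE_DIM)]

# Same noise, different descriptions -> different generated outputs.
img_a = generator(z, embed_text("a small red bird"), weights)
img_b = generator(z, embed_text("a yellow flower"), weights)
print(len(img_a))
```

In a trained model the discriminator would also receive the text embedding, so it rejects images that are realistic but do not match the description; that matching pressure is what distinguishes a conditional GAN from a vanilla GAN or DCGAN.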
Keywords: image generation techniques, GAN, T2I, text description, semantic technology
Cite this Article: Apoorv Khanduri, Piyush Kumar, Ashish Joshi, Rahul Kumar. Understanding Text to Image Generation Techniques. Journal of Software Engineering Tools & Technology Trends. 2020; 7(1): 18–24p.