Creating Artificial Avians: A Novel Neural Network Generates Realistic Bird Pictures from Text using Common Sense

Summary: Researchers in China have developed a new neural network that generates high-quality bird images from textual descriptions, using common-sense knowledge to enhance the generated image at three different levels of resolution and achieving scores competitive with other neural network methods. The network is a generative adversarial network trained on a dataset of bird images and text descriptions, with the goal of promoting the development of text-to-image synthesis.

Source: Intelligent Computing

In an effort to generate high-quality images based on text descriptions, a group of researchers in China built a generative adversarial network that incorporates data representing common-sense knowledge.

Their method uses common sense both to clarify the starting point for image generation and to enhance specific features of the generated image at three different levels of resolution. The network was trained using a dataset of bird images and text descriptions.

The generated bird images achieved competitive scores when compared with those produced using other neural network methods.

The group’s research was published Feb. 20 in Intelligent Computing, a Science Partner Journal.

Given that “a picture is worth a thousand words,” the shortcomings of the currently available text-to-image frameworks are hardly surprising. If you want to generate an image of a bird, the description you give to a computer might include its size, the color of its body and the shape of its beak. To produce an image, the computer must still decide many details about how to display the bird, such as which way the bird is facing, what should be in the background and whether its beak is open or closed.

If the computer had what we think of as common-sense knowledge, it would make decisions about depicting unspecified details more successfully. For example, a bird might stand on one leg or two legs, but not three.

When quantitatively measured against its predecessors, the authors’ image generation network achieved competitive scores using metrics that measure fidelity and distance from real images. Qualitatively, the authors characterize the generated images as generally consistent, natural, sharp and vivid.
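The metrics referred to are, in all likelihood, the two standard measures for this task: the Inception Score, which gauges fidelity, and the Fréchet Inception Distance (FID), which measures distance from real images. The article does not include the authors’ evaluation code; the following is a minimal sketch of computing FID with the open-source torchmetrics library, using placeholder image batches.

    # Illustrative only: computing FID with torchmetrics; not the authors' code.
    # Requires the torch-fidelity package (torchmetrics' image extra).
    import torch
    from torchmetrics.image.fid import FrechetInceptionDistance

    fid = FrechetInceptionDistance(feature=2048)

    # Placeholder batches of 256 x 256 RGB images as uint8 tensors in [0, 255].
    real_images = torch.randint(0, 256, (8, 3, 256, 256), dtype=torch.uint8)
    fake_images = torch.randint(0, 256, (8, 3, 256, 256), dtype=torch.uint8)

    fid.update(real_images, real=True)   # accumulate statistics from real images
    fid.update(fake_images, real=False)  # accumulate statistics from generated images
    print(float(fid.compute()))          # lower values mean generated images are closer to real ones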

“We firmly believe that the introduction of common sense can greatly promote the development of text-to-image synthesis,” the research article concludes.

The authors’ neural network for generating images from text consists of three modules. The first one enhances the text description that will be used to generate the image. ConceptNet, a data source that represents general knowledge for language processing as a graph of related nodes, was used to retrieve pieces of common-sense knowledge to be added to the text description.
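ConceptNet exposes its knowledge graph through a public web API at api.conceptnet.io. The authors’ retrieval pipeline is not detailed in this article, but a minimal sketch of the kind of lookup involved might look like the following, which fetches weighted relations for a single concept.

    # A sketch of querying ConceptNet's public API; not the authors' pipeline.
    import requests

    def related_facts(concept, limit=5):
        """Return up to `limit` (relation, neighbor, weight) triples for a concept."""
        url = f"https://api.conceptnet.io/c/en/{concept}"
        edges = requests.get(url, params={"limit": limit}).json()["edges"]
        return [(e["rel"]["label"], e["end"]["label"], e["weight"]) for e in edges]

    for fact in related_facts("bird"):
        print(fact)  # e.g. a ('CapableOf', 'fly', ...) style triple; weights indicate confidence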

Image caption: This shows an AI-generated bird. The image is in the public domain.

The authors added a filter to reject useless knowledge and select the most relevant knowledge. To randomize the generated images, they added some statistical noise. The input to the image generator thus consists of the original text description, analyzed as a sentence and as separate words, plus selected bits of common-sense knowledge from ConceptNet, plus noise.
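As a rough illustration of this filtering step, one could score each retrieved knowledge embedding against the sentence embedding, keep the best matches, and append noise. The sketch below makes that concrete in PyTorch; the dimensions and the cosine-similarity scoring rule are assumptions for illustration, not the authors’ implementation.

    # Hypothetical relevance filter: keep the top-k knowledge embeddings that
    # best match the sentence embedding, then append noise for randomization.
    import torch
    import torch.nn.functional as F

    def build_generator_input(sentence_emb, knowledge_embs, k=3, noise_dim=100):
        # sentence_emb: (d,); knowledge_embs: (n, d) retrieved from ConceptNet
        scores = F.cosine_similarity(knowledge_embs, sentence_emb.unsqueeze(0), dim=1)
        top = knowledge_embs[scores.topk(k).indices]   # keep the most relevant facts
        noise = torch.randn(noise_dim)                 # randomizes the generated image
        return torch.cat([sentence_emb, top.flatten(), noise])

    z = build_generator_input(torch.randn(256), torch.randn(10, 256))
    print(z.shape)  # torch.Size([1124]) = 256 + 3*256 + 100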

The second module generates images in multiple stages. Each stage corresponds to an image size, starting with a small image of 64 x 64 pixels and increasing to 128 x 128 and then 256 x 256. The module relies on the authors’ “adaptive entity refinement” unit, which incorporates common-sense knowledge of the details needed for each size of image.
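A schematic skeleton of such a coarse-to-fine generator is sketched below: each stage doubles the resolution and fuses in a stage-specific knowledge vector, standing in for the paper’s adaptive entity refinement unit. The layer choices are illustrative assumptions, not the published architecture.

    # Schematic multi-stage refinement (64 -> 128 -> 256); not the authors' network.
    import torch
    import torch.nn as nn

    class Stage(nn.Module):
        def __init__(self, ch=64):
            super().__init__()
            self.up = nn.Upsample(scale_factor=2, mode="nearest")
            self.refine = nn.Conv2d(ch * 2, ch, 3, padding=1)  # stand-in for "adaptive entity refinement"

        def forward(self, feat, knowledge):
            # feat: (B, ch, H, W); knowledge: (B, ch) stage-specific knowledge vector
            feat = self.up(feat)  # double the spatial resolution
            k = knowledge[:, :, None, None].expand(-1, -1, feat.size(2), feat.size(3))
            return torch.relu(self.refine(torch.cat([feat, k], dim=1)))

    feat = torch.randn(1, 64, 64, 64)          # initial 64 x 64 feature map
    for stage in [Stage(), Stage()]:           # two refinements: 128, then 256
        feat = stage(feat, torch.randn(1, 64))
    print(feat.shape)                          # torch.Size([1, 64, 256, 256])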

The third module examines generated images and rejects those that do not match the original description. The system is a “generative adversarial network” because it has this third part that checks the work of the generator. Since the authors’ network is “common-sense driven,” they call their network CD-GAN.
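In a conditional GAN of this kind, the discriminator typically scores an image together with its text embedding, so mismatched pairs can be penalized during training. The following is a minimal sketch of such a matching discriminator for the smallest (64 x 64) image size; the architecture is an assumption for illustration, not taken from the paper.

    # Illustrative text-conditional discriminator; not the authors' architecture.
    import torch
    import torch.nn as nn

    class MatchDiscriminator(nn.Module):
        def __init__(self, text_dim=256):
            super().__init__()
            self.img_net = nn.Sequential(      # downsample a 64 x 64 image to a vector
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Flatten(), nn.Linear(64 * 16 * 16, 256),
            )
            self.score = nn.Linear(256 + text_dim, 1)  # joint image-text matching score

        def forward(self, image, text_emb):
            return self.score(torch.cat([self.img_net(image), text_emb], dim=1))

    d = MatchDiscriminator()
    logit = d(torch.randn(1, 3, 64, 64), torch.randn(1, 256))  # higher for matching pairs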

CD-GAN was trained using the Caltech-UCSD Birds-200-2011 dataset, which catalogs 200 bird species using 11,788 specially annotated images.
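For reference, the standard release of this dataset ships its 11,788 images in one folder per species, so it can be loaded with off-the-shelf tools; the paired text descriptions come from a separate annotation set not shown here. A minimal loading sketch, assuming the usual directory layout:

    # Loading CUB-200-2011 images with torchvision, assuming the standard layout.
    from torchvision import datasets, transforms

    tfm = transforms.Compose([transforms.Resize(256),
                              transforms.CenterCrop(256),
                              transforms.ToTensor()])
    cub = datasets.ImageFolder("CUB_200_2011/images", transform=tfm)
    print(len(cub), "images across", len(cub.classes), "species")  # 11788 across 200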

Guokai Zhang of Tianjin University performed the experiments and wrote the manuscript. Ning Xu of Tianjin University contributed to the conception of the study. Chenggang Yan of Hangzhou Dianzi University performed the data analyses. Bolun Zheng of Hangzhou Dianzi University and Yulong Duan of the 30th Research Institute of CETC contributed significantly to analysis and manuscript preparation. Bo Lv of the 30th Research Institute of CETC and An-An Liu of Tianjin University helped perform the analysis with constructive discussions.

About this artificial intelligence research news

Author: Lucy Day Werts
Source: Intelligent Computing
Contact: Lucy Day Werts – Intelligent Computing
Image: The image is in the public domain

Original Research: Open access.
“CD-GAN: Commonsense-driven Generative Adversarial Network with Hierarchical Refinement for Text-to-Image Synthesis” by Ning Xu et al. Intelligent Computing


Abstract

CD-GAN: Commonsense-driven Generative Adversarial Network with Hierarchical Refinement for Text-to-Image Synthesis

Synthesizing vivid images from descriptive texts is gradually emerging as a frontier cross-domain generation task. However, a single sentence is clearly inadequate for accurately generating a high-quality image, owing to the information asymmetry between modalities, which requires external knowledge to balance the process.

Moreover, the limited description of the entities in the sentence cannot guarantee semantic consistency between the text and the generated image, resulting in deficient details in both the foreground and the background.

Here, we propose a commonsense-driven generative adversarial network that generates photo-realistic images based on entity-related commonsense knowledge.

The commonsense-driven generative adversarial network contains two key commonsense-based modules: (a) entity semantic augmentation, designed to enhance entity semantics with common sense to reduce the information asymmetry, and (b) adaptive entity refinement, used to generate the high-resolution image guided by various pieces of commonsense knowledge at multiple stages to maintain text-image consistency.

We demonstrated extensive synthetic cases on the widely used CUB-birds (Caltech-UCSD Birds-200-2011) dataset, where our model achieves competitive results compared with other state-of-the-art models.
