Animals' likelihood of detecting stimulus changes slowly shifts over the course of about ten minutes. Researchers found that neural population activity in visual area V4 and prefrontal cortex slowly drifted together with these behavioral fluctuations. The slow drift acts as an impulsivity signal.
When processing information about motion, neurons in the brain's ventral intraparietal area can flexibly switch between reference frames. The findings could inform the development of neural prosthetics designed for motion control.
People are oblivious to change when color is removed from their peripheral vision. The research suggests the brain likely fills in much of our perceptual experience of seeing the entire visual scene in color.
Prediction errors play a role in dynamic perceptual events that unfold within fractions of a second. The findings support the hypothesis that visual perception results from a decision process.
Visually represented information is a functional part of conceptual knowledge. The extent of these visual representations is influenced by visual experience.
Study provides new evidence supporting the theory that perceptual limitations are caused by correlated noise in neural activity.
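A toy simulation can make the correlated-noise idea concrete: pooling responses across many neurons averages away each neuron's independent noise, but any noise component shared across the population never averages out and so caps how precise perception can be. The sketch below is illustrative only; the neuron counts and noise levels are assumed, not taken from the study.

```python
# Toy simulation (not from the study) of why correlated noise limits perception:
# averaging over many neurons removes independent noise, but the shared
# (correlated) component never averages away.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1000
n_trials = 5000
signal = 1.0                     # true stimulus-driven response per neuron

shared_sd = 0.5                  # correlated noise, common to all neurons on a trial
private_sd = 2.0                 # independent noise, unique to each neuron

shared = rng.normal(0, shared_sd, size=(n_trials, 1))            # one value per trial
private = rng.normal(0, private_sd, size=(n_trials, n_neurons))  # one value per neuron
responses = signal + shared + private

population_estimate = responses.mean(axis=1)  # pool across the whole population
print(f"Private noise sd after pooling: {private_sd / np.sqrt(n_neurons):.3f}")
print(f"Observed estimate sd:           {population_estimate.std():.3f}")
# The observed sd stays near shared_sd (~0.5) no matter how many neurons are
# pooled, so the correlated component sets a floor on perceptual precision.
```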
Understanding how the brain decides what to pay attention to is key to understanding how prediction plays a role in autism.
Reward does not improve visual perceptual learning unless it is followed by a good night's sleep.
Researchers have identified a circuit in the brains of fruit flies that enables them to see in color. The network is similar to the one underlying human color vision. The findings could aid the development of AI technologies.
New computer software sheds light on animal visual processing and perception.
Crashes in visual processing occur when neurons processing one image are tasked with processing another too quickly, leaving one or both images unable to reach conscious awareness.
EmoNet, a new convolutional neural network, can accurately decode images into eleven distinct emotional categories. After training the AI on more than 25,000 images, researchers demonstrated that image content alone is sufficient to predict the category and valence of human emotional responses.
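As a rough illustration of the kind of setup the summary describes, rather than the authors' actual EmoNet architecture, training data, or code, the sketch below repurposes an ImageNet-pretrained CNN backbone with an eleven-way emotion-category head. All class names, parameters, and sizes here are assumptions for illustration.

```python
# Minimal, hypothetical sketch of an image-to-emotion-category classifier
# built on a pretrained CNN backbone (not the published EmoNet model).
import torch
import torch.nn as nn
from torchvision import models

NUM_EMOTION_CATEGORIES = 11  # category count taken from the summary above


class EmotionClassifier(nn.Module):
    def __init__(self, num_categories: int = NUM_EMOTION_CATEGORIES):
        super().__init__()
        # Reuse an ImageNet-pretrained backbone as a feature extractor.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        in_features = backbone.fc.in_features
        backbone.fc = nn.Identity()          # drop the original 1000-way head
        self.backbone = backbone
        # New head: one logit per emotion category.
        self.classifier = nn.Linear(in_features, num_categories)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        features = self.backbone(images)     # (batch, in_features)
        return self.classifier(features)     # (batch, num_categories) logits


if __name__ == "__main__":
    model = EmotionClassifier()
    dummy_batch = torch.randn(4, 3, 224, 224)   # four RGB images, 224x224
    probs = torch.softmax(model(dummy_batch), dim=1)
    print(probs.shape)                          # torch.Size([4, 11])
```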