Summary: Contextual information, especially space and sequence, contributes to the distortion of perception in short-term memory.
Source: Goethe University Frankfurt
We learned it as children: to cross the street in exemplary fashion, we must first look to the left, then to the right, and finally once more to the left. If we see a car and a cyclist approaching when we first look to the left, this information is stored in our short-term memory. During the second glance to the left, our short-term memory reports: bicycle and car were there before, they are the same ones, they are still far enough away. We cross the street safely.
This is, however, not at all true. Our short-term memory deceives us. When looking to the left the second time, our eyes see something completely different: the bicycle and the car do not have the same colour anymore because they are just now passing through the shadow of a tree, they are no longer in the same location, and the car is perhaps moving more slowly. The fact that we nonetheless immediately recognise the bicycle and the car is due to the fact that the memory of the first leftward look biases the second one.
Scientists at Goethe University, led by psychologist Christoph Bledowski and doctoral student Cora Fischer, reconstructed the traffic situation – very abstractly – in the laboratory: participants were asked to remember the motion direction of green or red dots moving across a monitor. In each trial, the participant saw two moving dot fields in short succession and subsequently had to report the motion direction of one of these fields. In additional tests, both dot fields were shown simultaneously next to each other. All participants completed numerous successive trials.
The Frankfurt scientists were especially interested in the mistakes the participants made and how these mistakes were systematically connected across successive trials. If, for example, the observed dots moved in the direction of 10 degrees and in the following trial in the direction of 20 degrees, most participants reported 16 to 18 degrees for the second trial. If, however, 0 degrees was correct for the following trial, they reported 2 to 4 degrees. The direction of the previous trial therefore distorted the perception of the following one – “not very much, but systematically,” says Christoph Bledowski. He and his team extended previous studies by investigating the influence of contextual information about the dot fields, such as colour, spatial position (right or left) and sequence (shown first or second). “In this way we more closely approximate real situations, in which we acquire different types of visual information from objects,” Bledowski explains. This contextual information, especially space and sequence, contributes significantly to the distortion of successive perception in short-term memory. First author Cora Fischer says: “The contextual information helps us to differentiate among different objects and consequently to integrate information of the same object through time.”
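The attraction effect described above – reports pulled a few degrees toward the previous trial’s direction – can be sketched as a toy simulation. The pull strength, noise level and 60-degree tuning window below are illustrative assumptions, not parameters from the study:

```python
import random

def simulate_trials(n_trials=10000, bias_strength=0.2, noise_sd=5.0):
    """Toy model of serial dependence: each response is nudged toward
    the previous trial's motion direction (assumed pull, not the study's fit)."""
    random.seed(42)
    errors = {}  # mean signed report error, binned by previous-minus-current direction
    prev = None
    for _ in range(n_trials):
        true_dir = random.uniform(0, 360)
        # noisy report of the current direction
        response = true_dir + random.gauss(0, noise_sd)
        if prev is not None:
            # signed circular difference between previous and current direction
            diff = (prev - true_dir + 180) % 360 - 180
            # attraction only for similar directions (illustrative 60-degree window)
            if abs(diff) < 60:
                response += bias_strength * diff
            err = (response - true_dir + 180) % 360 - 180
            bin_key = round(diff / 10) * 10
            errors.setdefault(bin_key, []).append(err)
        prev = true_dir
    return {k: sum(v) / len(v) for k, v in errors.items()}

bias = simulate_trials()
# For differences near +10 degrees the mean error is positive (pulled toward
# the previous direction), near -10 degrees it is negative, and for large
# differences the bias vanishes - the pattern the article describes.
```

This reproduces the qualitative signature reported in the article: small systematic errors toward the preceding trial, not a large distortion.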
What does this mean for our traffic situation? “Initially, it doesn’t sound good if our short-term memory reflects something different from what we physically see,” says Bledowski. “But if our short-term memory were unable to do this, we would see a completely new traffic situation when we looked to the left a second time. That would be quite confusing, because a different car and a different cyclist would have suddenly appeared out of nowhere. The slight ‘blurring’ of our perception by memory ultimately leads us to perceive our environment, whose appearance is constantly changing due to motion and light changes, as stable. In this process, the current perception of the car, for example, is only affected by the previous perception of the car, but not by the perception of the cyclist.”
About this neuroscience research article
Source: Goethe University Frankfurt
Media Contact: Christoph Bledowski – Goethe University Frankfurt
Image Source: The image is in the public domain.
Original Research: Open access. “Context information supports serial dependence of multiple visual objects across memory episodes” by Cora Fischer, Stefan Czoschke, Benjamin Peters, Benjamin Rahm, Jochen Kaiser, and Christoph Bledowski. Nature Communications. doi:10.1038/s41467-020-15874-w
Context information supports serial dependence of multiple visual objects across memory episodes
Serial dependence is thought to promote perceptual stability by compensating for small changes of an object’s appearance across memory episodes. So far, it has been studied in situations that comprised only a single object. The question of how we selectively create temporal stability of several objects remains unsolved. In a memory task, objects can be differentiated by their to-be-memorized feature (content) as well as accompanying discriminative features (context). We test whether congruent context features, in addition to content similarity, support serial dependence. In four experiments, we observe a stronger serial dependence between objects that share the same context features across trials. Apparently, the binding of content and context features is not erased but rather carried over to the subsequent memory episode. As this reflects temporal dependencies in natural settings, our findings reveal a mechanism that integrates corresponding content and context features to support stable representations of individualized objects over time.