
How AI is Reshaping Human Thought and Decision-Making

Summary: A new study introduces “System 0,” a cognitive framework in which artificial intelligence (AI) enhances human thinking by processing vast amounts of data, complementing our natural intuition (System 1) and analytical thinking (System 2). However, this external thinking system poses risks, such as over-reliance on AI and a potential loss of cognitive autonomy.

The study emphasizes that while AI can assist in decision-making, humans must remain critical and responsible in interpreting its outputs. The researchers call for ethical guidelines to ensure that AI enhances human cognition without diminishing our ability to think independently.

Key Facts:

  • “System 0” refers to AI as an external thinking tool that complements human cognition.
  • Over-reliance on AI risks reducing human autonomy and critical thinking.
  • Ethical guidelines and public education are crucial for responsible AI use in decision-making.

Source: Università Cattolica del Sacro Cuore

The interaction between humans and artificial intelligence is shaping a new thinking system, a new cognitive scheme, external to the human mind, but capable of enhancing its cognitive abilities.

This is called System 0, which operates alongside the two models of human thought: System 1, characterized by intuitive, fast, and automatic thinking, and System 2, a more analytical and reflective type of thinking.

However, System 0 introduces an additional level of complexity, radically altering the cognitive landscape in which we operate, and could thus mark a monumental step forward in the evolution of our ability to think and make decisions.

It will be our responsibility to ensure that this progress is used to enhance our cognitive autonomy without compromising it.

This is reported by the prestigious scientific journal Nature Human Behaviour, in an article titled “The case for human-AI interaction as System 0 thinking” by a team of researchers led by Professor Giuseppe Riva, director of the Humane Technology Lab at Università Cattolica’s Milan campus and the Applied Technology for Neuropsychology Lab at Istituto Auxologico Italiano IRCCS, Milan, and by Professor Mario Ubiali from Università Cattolica’s Brescia campus.

The study was carried out together with Massimo Chiriatti from the Infrastructure Solutions Group, Lenovo, in Milan, Professor Marianna Ganapini from the Philosophy Department at Union College, Schenectady, New York, and Professor Enrico Panai from the Faculty of Foreign Languages and Language of Science at Università Cattolica’s Milan campus.

A new form of external thinking

Just as an external drive lets us store data that are not held on our computer and work by connecting it to any PC, artificial intelligence, with its vast processing and data-handling capabilities, can act as an external circuit to the human brain, capable of enhancing it. Hence the idea of System 0, which is essentially a form of “external” thinking that relies on the capabilities of AI.

By managing enormous amounts of data, AI can process information and provide suggestions or decisions based on complex algorithms. However, unlike intuitive or analytical thinking, System 0 does not assign intrinsic meaning to the information it processes.

In other words, AI can perform calculations, make predictions, and generate responses without truly “understanding” the content of the data it works with.

Humans, therefore, have to interpret the results produced by AI on their own and give them meaning. It’s like having an assistant that efficiently gathers, filters, and organizes information but still requires our intervention to make informed decisions. This cognitive support provides valuable input, but the final control must always remain in human hands.

The risks of System 0: loss of autonomy and blind trust

“The risk,” professors Riva and Ubiali emphasize, “is relying too much on System 0 without exercising critical thinking. If we passively accept the solutions offered by AI, we might lose our ability to think autonomously and develop innovative ideas. In an increasingly automated world, it is crucial that humans continue to question and challenge the results generated by AI.”

Furthermore, transparency and trust in AI systems represent another major dilemma. How can we be sure that these systems are free from bias or distortion and that they provide accurate and reliable information?

“The growing trend of using synthetic or artificially generated data could compromise our perception of reality and negatively influence our decision-making processes,” the professors warn.

The professors also note that AI could even hijack our capacity for introspection, the uniquely human act of reflecting on one’s own thoughts and feelings.

However, with AI’s advancement, it may become possible to rely on intelligent systems to analyze our behaviors and mental states.

This raises the question: to what extent can we truly understand ourselves through AI analysis? And can AI replicate the complexity of subjective experience?

Despite these questions, System 0 also offers enormous opportunities, the professors point out. Thanks to its ability to process complex data quickly and efficiently, AI can support humanity in tackling problems that exceed our natural cognitive capacities.

Whether solving complex scientific issues, analyzing massive datasets, or managing intricate social systems, AI could become an indispensable ally.

To leverage the potential of System 0, the study’s authors suggest it is urgent to develop ethical and responsible guidelines for its use.

“Transparency, accountability, and digital literacy are key elements to enable people to critically interact with AI,” they warn.

“Educating the public on how to navigate this new cognitive environment will be crucial to avoid the risks of excessive dependence on these systems.”

The future of human thought

They conclude: If left unchecked, System 0 could interfere with human thinking in the future.

“It is essential that we remain aware and critical in how we use it; the true potential of System 0 will depend on our ability to guide it in the right direction.”

About this AI and human cognition research news

Author: Nicola Cerbino
Source: Università Cattolica del Sacro Cuore
Contact: Nicola Cerbino – Università Cattolica del Sacro Cuore
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“The case for human-AI interaction as System 0 thinking” by Giuseppe Riva et al. Nature Human Behaviour


Abstract

The case for human-AI interaction as System 0 thinking

The rapid integration of artificial intelligence (AI) tools into our daily lives is reshaping how we think and make decisions.

We propose that data-driven AI systems, by transcending individual artefacts and interfacing with a dynamic, multiartefact ecosystem, constitute a distinct psychological system.

We call this ‘system 0’, and position it alongside Kahneman’s system 1 (fast, intuitive thinking) and system 2 (slow, analytical thinking).

System 0 represents the outsourcing of certain cognitive tasks to AI, which can process vast amounts of data and perform complex computations beyond human capabilities.

It emerges from the interaction between users and AI systems, which creates a dynamic, personalized interface between humans and information.
