The Future and Fears: Public Perception of AI Explored

Summary: A new study seeks to understand the public perception of artificial intelligence (AI) and software in general. The online survey aims to gather insights into people’s hopes, fears, and general sentiments towards AI.

The research intends to shed light on the public’s stance on immediate concerns such as racist and sexist biases in AI. By seeking public opinions, the researchers aim to influence the ethical and responsible development of AI and software.

Key Facts:

  1. The study, led by Lero and University College Cork, aims to gauge public opinion about AI and software, focusing on hopes, fears, and perceptions.
  2. Immediate concerns, such as racist and sexist biases being programmed into AI systems, are among the topics the survey seeks to explore.
  3. The goal is to understand the public’s views on making software more responsible and ethical, thereby influencing the future development of AI.

Source: Lero

Will artificial intelligence (AI) end civilization?

Researchers at Lero, the Science Foundation Ireland Research Centre for Software, and University College Cork are seeking the public’s help in determining what people believe and know about AI and software more generally.


Psychologist Dr Sarah Robinson, a senior postdoctoral researcher with Lero, is asking members of the public to take part in a ten-minute anonymised online survey to establish what people’s hopes and fears are for AI and software in general.

“As the experts debate, little attention is given to what the public thinks – and the debate is raging. Some AI experts express concern that others prioritise imagined apocalyptic scenarios over immediate concerns – such as racist and sexist biases being programmed into machines.

“As software impacts all our lives, the public is a key stakeholder in deciding what responsibility for software should mean. So, that’s why we want to find out what the public is thinking,” added the UCC-based researcher.

Dr Robinson said that, for example, human rights abuses are happening through AI and facial recognition software.

“Research by my Lero colleague Dr Abeba Birhane and others found that data used to train some AI is contaminated with racist and misogynist language. As AI becomes widespread, the use of biased data may lead to harm and further marginalisation for already marginalised groups.

“While there is a lot in the media about AI, especially ChatGPT, and what kind of world it is creating, there is less information about how the public perceives the software all around us, from social media to streaming services and beyond.

“We are interested in understanding the public’s point of view – what concerns does the public have, what are their priorities in terms of making software responsible and ethical, and what thoughts and ideas do they have to make this a reality?” outlined Dr Robinson.

Participants in the survey will be asked for their views and possible concerns on a range of issues and topics, with the aim of clarifying public opinion on critical questions. Lero is asking members of the public to donate ten minutes of their time to this short survey.

About this artificial intelligence research news

Author: Nicola Corless
Source: Lero
Contact: Nicola Corless – Lero
Image: The image is credited to Neuroscience News
