Researchers Identify 6 Challenges Humans Face With Artificial Intelligence

Summary: Study identifies six factors humans must overcome to ensure artificial intelligence is trustworthy, safe, reliable, and compatible with human values.

Source: University of Central Florida

A University of Central Florida professor and 26 other researchers have published a study identifying the challenges humans must overcome to ensure that artificial intelligence is reliable, safe, trustworthy and compatible with human values.

The study, “Six Human-Centered Artificial Intelligence Grand Challenges,” was published in the International Journal of Human-Computer Interaction.

Ozlem Garibay ’01MS ’08PhD, an assistant professor in UCF’s Department of Industrial Engineering and Management Systems, was the lead researcher for the study. She says that the technology has become more prominent in many aspects of our lives, but it also has brought about many challenges that must be studied.

For instance, the coming widespread integration of artificial intelligence could significantly impact human life in ways that are not yet fully understood, says Garibay, who works on AI applications in material and drug design and discovery, and how AI impacts social systems.

The six challenges Garibay and the team of researchers identified are:

  • Challenge 1, Human Well-Being: AI should be able to identify opportunities to benefit human well-being, and it should be designed to support the user’s well-being during interaction.
  • Challenge 2, Responsible: Responsible AI refers to the concept of prioritizing human and societal well-being across the AI lifecycle. This ensures that the potential benefits of AI are leveraged in a manner that aligns with human values and priorities, while also mitigating the risk of unintended consequences or ethical breaches.
  • Challenge 3, Privacy: The collection, use and dissemination of data in AI systems should be carefully considered to ensure protection of individuals’ privacy and to prevent harmful use against individuals or groups.
  • Challenge 4, Design: Human-centered design principles for AI systems should use a framework that can inform practitioners. This framework would distinguish between AI with extremely low risk, AI with no special measures needed, AI with extremely high risks, and AI that should not be allowed.
  • Challenge 5, Governance and Oversight: A governance framework that considers the entire AI lifecycle from conception to development to deployment is needed.
  • Challenge 6, Human-AI interaction: To foster an ethical and equitable relationship between humans and AI systems, it is imperative that interactions be predicated upon the fundamental principle of respecting the cognitive capacities of humans. Specifically, humans must maintain complete control over and responsibility for the behavior and outcomes of AI systems.

The study, which was conducted over 20 months, comprises the views of 26 international experts who have diverse backgrounds in AI technology.

“These challenges call for the creation of human-centered artificial intelligence technologies that prioritize ethicality, fairness and the enhancement of human well-being,” Garibay says.


“The challenges urge the adoption of a human-centered approach that includes responsible design, privacy protection, adherence to human-centered design principles, appropriate governance and oversight, and respectful interaction with human cognitive capacities.”

Overall, these challenges are a call to action for the scientific community to develop and implement artificial intelligence technologies that prioritize and benefit humanity, she says.

The group of 26 experts includes National Academy of Engineering members and researchers from North America, Europe and Asia who have broad experiences across academia, industry and government. The group also has diverse educational backgrounds in areas ranging from computer science and engineering to psychology and medicine.

Their work also will be featured in a chapter in the book, Human-Computer Interaction: Foundations, Methods, Technologies, and Applications.

Five UCF faculty members co-authored the study:

  • Gavriel Salvendy, a university distinguished professor in UCF’s College of Engineering and Computer Science and the founding president of the Academy of Science, Engineering and Medicine of Florida.
  • Waldemar Karwowski, a professor and chair of the Department of Industrial Engineering and Management Systems and executive director of the Institute for Advanced Systems Engineering at the University of Central Florida.
  • Steve Fiore, director of the Cognitive Sciences Laboratory and professor with UCF’s cognitive sciences program in the Department of Philosophy and Institute for Simulation & Training.
  • Ivan Garibay, an associate professor in industrial engineering and management systems and director of the UCF Artificial Intelligence and Big Data Initiative.
  • Joe Kider, an associate professor at the Institute for Simulation and Training (IST), School of Modeling, Simulation and Training, and a co-director of the SENSEable Design Laboratory.

Garibay received her doctorate in computer science from UCF and joined UCF’s Department of Industrial Engineering and Management Systems, part of the College of Engineering and Computer Science, in 2020.

About this artificial intelligence research news

Author: Robert Wells
Source: University of Central Florida
Contact: Robert Wells – University of Central Florida
Image: The image is in the public domain

Original Research: Open access.
“Six Human-Centered Artificial Intelligence Grand Challenges” by Ozlem Garibay et al. International Journal of Human-Computer Interaction


Six Human-Centered Artificial Intelligence Grand Challenges

Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood.

Negative unintended consequences abound including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making.

We present six grand challenges for the scientific community to create AI technologies that are human-centered, that is, ethical, fair, and enhance the human condition.

These grand challenges are the result of an international collaboration across academia, industry and government and represent the consensus views of a group of 26 experts in the field of human-centered artificial intelligence (HCAI).

In essence, these challenges advocate for a human-centered approach to AI that (1) is centered in human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting humans’ cognitive capacities.

We hope that these challenges and their associated research directions serve as a call for action to conduct research and development in AI that serves as a force multiplier towards more fair, equitable and sustainable societies.
