Human-Centered Artificial Intelligence

Human-Centered Artificial Intelligence (HCAI) is a recent extension of artificial intelligence that shifts attention to the human aspects of technology. HCAI is a set of processes for designing applications that are reliable, safe, and trustworthy,[1][2] and that effectively serve people's needs. These processes extend user experience design methods such as user observation and interviews, and further include discussions with stakeholders, usability testing, iterative refinement, and continuing evaluation of systems in use that employ AI and machine learning algorithms. Human-Centered AI manifests in products designed to amplify, augment, empower, and enhance human performance; such products combine high levels of human control with high levels of automation. HCAI research also covers governance structures, including safety cultures within organizations and independent oversight by experienced groups that review plans for new projects, continuously evaluate usage, and retrospectively analyze failures.[3]

The rise of HCAI is visible in topics such as explainable AI, transparency, audit trails, oversight, regulation, fairness, trustworthiness, and controllability.

Influential HCAI writers include Ruha Benjamin, Kate Crawford, Virginia Dignum, Timnit Gebru, Jaron Lanier, Cathy O'Neil, Cynthia Rudin, Stuart J. Russell, Ben Shneiderman, Brian Cantwell Smith, and Shoshana Zuboff.

Defining the Human in HCAI

It is important to consider the user perspective when designing computer systems. In the section "Problems, Paradoxes and Overlooked Social Realities", Kling and Star[4] argue that SAP is not a human-centered application but an organization-centered one: although SAP is adaptable, many organizations change the way their employees work to accommodate such applications. Kling and Star also state that a large, workable computer system needs the support of a strong socio-technical infrastructure. The bottom line is that a good human-centered application is a three-way partnership between designers, users, and social scientists.

To get researchers thinking about whether their systems are human-centered, Kling and Star[5] recommend that they understand the goals of the system: if the end user's needs are met, the system is human-centered. A design process is human-centered when it does not freeze at a single development stage and takes into account the complexity of human decision-making. The social relationship between the system and humans must be considered, and the relationship between stakeholders and the design process, as in the SAP example, should be understood before work begins on a new system.

Shneiderman takes a different approach to HCAI, proposing a two-dimensional framework with four quadrants: high human control with high computer automation, low human control with high computer automation, high human control with low computer automation, and low human control with low computer automation. Shneiderman argues that the best HCAI systems combine high human control with high computer automation; such systems are reliable and trustworthy, with elevators and cameras as examples. At the other extreme, systems with dangerously excessive human control or excessive computer automation are unreliable. A recent example is the Boeing 737 MAX, whose MCAS system applied excessive automation while relying on faulty sensor data.[6]
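The two axes of the framework can be illustrated with a short sketch. The following Python example is a minimal, hypothetical encoding: the class, the numeric ratings, and the 0.5 threshold are illustrative assumptions, not part of Shneiderman's framework; only the quadrant labels and the example systems come from the text above.

```python
# Minimal sketch of the two-dimensional HCAI framework (illustrative only).
from dataclasses import dataclass


@dataclass
class SystemProfile:
    """Hypothetical ratings of a system on Shneiderman's two axes (0.0-1.0)."""
    name: str
    human_control: float
    computer_automation: float

    def quadrant(self, threshold: float = 0.5) -> str:
        control = "High" if self.human_control >= threshold else "Low"
        automation = "High" if self.computer_automation >= threshold else "Low"
        return f"{control} Human Control / {automation} Computer Automation"


# Elevators and cameras pair high control with high automation (the
# reliable, trustworthy quadrant); the 737 MAX MCAS paired excessive
# automation with little pilot control. The numeric scores are made up.
for system in (
    SystemProfile("Elevator", human_control=0.9, computer_automation=0.9),
    SystemProfile("Camera", human_control=0.8, computer_automation=0.9),
    SystemProfile("737 MAX MCAS", human_control=0.2, computer_automation=0.95),
):
    print(f"{system.name}: {system.quadrant()}")
```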

Importance of Unbiased Datasets

Datasets strongly shape the behavior of machine learning models. Since most machine learning models are trained and evaluated on static datasets, they can absorb the societal biases present within training data.[7] These biases may be amplified when the context in which a dataset was created and collected differs from the model's deployment context, and dataset consumers may not have insight into the dataset's background and intended usage.

Dataset creators, on the other hand, have better knowledge of the context of the data and its underlying assumptions. To mitigate unintended behaviors in machine learning models, the creation of datasheets for datasets has therefore been suggested as a possible solution.[7] These datasheets would detail the creators' motivation, the data collection process, and suggested uses of the data. Although the content of datasheets may vary with factors such as the domain and organizational workflows, a datasheet would generally present a list of questions for data creators that elicits information and increases transparency about the data's characteristics.
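A datasheet can also be made machine-readable, as in the minimal Python sketch below. The fields shown are a simplified assumption loosely following the sections proposed by Gebru et al.;[7] the actual proposal is a much longer list of questions covering motivation, composition, collection, preprocessing, uses, distribution, and maintenance.

```python
# Minimal, simplified sketch of a "datasheet for a dataset"; the real
# proposal in [7] asks many more questions than the fields shown here.
from dataclasses import dataclass, field


@dataclass
class Datasheet:
    motivation: str                # Why was the dataset created?
    collection_process: str        # How was the data gathered?
    composition: str               # What do the instances represent?
    recommended_uses: list[str] = field(default_factory=list)
    uses_to_avoid: list[str] = field(default_factory=list)


# Hypothetical example: a datasheet for a resume corpus.
resume_sheet = Datasheet(
    motivation="Study the language of job applications in one company.",
    collection_process="Ten years of resumes submitted to a single employer.",
    composition="One plain-text document per applicant, identifiers removed.",
    recommended_uses=["Research on resume language"],
    uses_to_avoid=["Automated hiring decisions without human review"],
)
print(resume_sheet.uses_to_avoid)
```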

Counterexamples of Neglecting Human Factors in AI

Counterexamples show that applying AI or ML technology without attention to human factors can cause serious problems: the algorithms may bring unfairness and bias to the stakeholders involved. Below are a few examples of AI technologies that exhibit gender or ethnicity bias or otherwise neglect human factors.

An AI recruiting tool showed gender bias against women in resume screening. Every year, large technology companies receive thousands of resumes for the positions they are filling, and some decided to use AI/ML recruiting tools to help screen them. Such a tool uses resumes that humans have already selected as its training dataset, learning how the features of each human-selected resume relate to the likelihood of an interview. During training, based on the resumes originally screened by recruiters, the algorithm learns the pattern of the ideal candidate those recruiters were looking for when deciding whom to interview. The bias appears when the screening tool is applied to new resumes that humans have never seen: the system selects preferred candidates according to what it learned during training. The results showed that even though "gender" was never explicitly input into the system as a training feature, the technology favored candidates who described themselves using masculine language such as "executed" and "captured", and it preferred candidates who graduated from male-dominated colleges. In other words, the AI system picked up traits correlated with gender.[8]
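This proxy effect can be demonstrated with a toy model. The sketch below is not the actual recruiting system; the four synthetic resumes, their labels, and the word choices are fabricated for illustration. It shows how a classifier trained on past screening decisions assigns weight to gender-correlated words even though no gender feature exists.

```python
# Toy illustration of proxy bias in resume screening (synthetic data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Past screening decisions: the recruiters who produced the labels
# happened to favor resumes using "masculine" verbs.
resumes = [
    "executed the project roadmap and captured market share",
    "executed a migration plan for core services",
    "collaborated with the team and supported release planning",
    "organized community outreach and supported onboarding",
]
interviewed = [1, 1, 0, 0]  # 1 = human recruiter granted an interview

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, interviewed)

# No "gender" column exists, yet the learned weights concentrate on
# proxy words such as "executed", which then drive future screening.
for word, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{word:15s} {weight:+.3f}")
```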

AI also shows bias in the health care industry, where biased systems could put women's lives at risk. For decades, cardiovascular diseases were mainly considered men's conditions, so data points were primarily collected from male patients. Self-diagnosis apps that incorporate such AI algorithms may therefore suggest different levels of urgency for the same symptoms: when female patients describe their pain, the app may attribute it to non-urgent conditions and recommend scheduling a routine visit, while male users with the same symptoms are told to contact their doctors immediately because of a potential heart attack. But women also suffer heart attacks, and this gender bias can lead to fatal outcomes. In addition, the Berlin Institute of Health has noted that many medical algorithms are based on data from U.S. military personnel, among whom women in some areas represent only 6%.[9]
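The effect of such underrepresentation can be sketched with a toy evaluation. All numbers below are fabricated for illustration only: they show how a model can look acceptable overall while missing most heart attacks in the underrepresented group, which is why subgroup metrics such as per-group recall matter.

```python
# Toy sketch: per-group recall exposes bias that overall accuracy hides.
def recall_by_group(records):
    """records: iterable of (group, had_heart_attack, model_flagged)."""
    stats = {}
    for group, truth, flagged in records:
        if truth:  # recall only counts actual heart-attack cases
            hits, total = stats.get(group, (0, 0))
            stats[group] = (hits + int(flagged), total + 1)
    return {group: hits / total for group, (hits, total) in stats.items()}


# Fabricated evaluation records: a model trained mostly on male
# patients flags "typical" male presentations but misses female ones.
evaluation = [
    ("male", True, True), ("male", True, True),
    ("male", True, True), ("male", True, False),
    ("female", True, False), ("female", True, False),
    ("female", True, True),
]
print(recall_by_group(evaluation))
# {'male': 0.75, 'female': 0.333...} -- the same model is far less
# safe for the group underrepresented in the training data.
```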

Workshops

A series of workshops on HCAI topics was conducted by the U.S. National Institute of Standards and Technology.[10] Established conferences such as CHI ran workshops on Human-Centered Machine Learning in 2016[11] and 2019,[12] NeurIPS ran a workshop on Human-Centered AI,[13] and the Human-Computer Interaction International conference[14] held a day-long set of Special Thematic Sessions on Human-Centered AI in 2021.[15]

Academic research groups

Academic research groups at leading universities have emerged to cover human-centered topics such as ethics, trustworthiness, autonomy, policy, and responsibility. The international participation and diverse approaches are represented by key labs such as the Berkman Klein Center for Internet & Society (Harvard University, U.S.), the Centre for AI Technology for Humankind (National University of Singapore),[16] the Human-Centered AI (HAI) Institute (Stanford University, U.S.),[17] the Center for Human-Compatible Artificial Intelligence (University of California, Berkeley, U.S.), and the Institute for Ethics in AI (University of Oxford, U.K.).[18]

Industry research groups

Another indicator of the strength of the Human-Centered AI movement is the commitment of major technology companies, as shown by IBM Research's Human-Centered AI team,[19] Google's People and AI Research (PAIR) group,[20] and Microsoft's Responsible AI resources.[21]

Policy initiatives

Since Human-Centered AI has profound impacts on society, non-governmental organizations and civil society groups have arisen to shape policy responses by governmental and regulatory bodies. Leading examples are the European Commission's International Outreach for a Human-Centric Artificial Intelligence initiative (InTouchAI.eu),[22] the Center for AI and Digital Policy,[23] the AI Now Institute, and ForHumanity.[24]

References

  1. "Human-Centered AI". https://hcai.site/. 
  2. Shneiderman, Ben (2020-03-23). "Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy". International Journal of Human–Computer Interaction 36 (6): 495–504. doi:10.1080/10447318.2020.1741118. ISSN 1044-7318. 
  3. Shneiderman, Ben (2022). Human-Centered AI. Oxford University Press. ISBN 978-0-19-284529-0. OCLC 1258219484. http://worldcat.org/oclc/1258219484. 
  4. Kling, R., & Star, S. L. (n.d.). Human Centered Systems in the Perspective of Organizational and Social Informatics. http://www.ifp.uiuc.edu/nsfllcs/.
  5. Kling, R., & Star, S. L. (n.d.). Human Centered Systems in the Perspective of Organizational and Social Informatics. http://www.ifp.uiuc.edu/nsfllcs/.
  6. Shneiderman, B. (2020). Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy. International Journal of Human-Computer Interaction, 36(6), 495–504. https://doi.org/10.1080/10447318.2020.1741118
  7. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2021. Datasheets for datasets. Commun. ACM 64, 12 (December 2021), 86–92. https://doi.org/10.1145/3458723
  8. Dastin, Jeffrey. "Amazon scraps secret AI recruiting tool that showed bias against women". https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G. 
  9. Niethammer, Carmen. "AI Bias Could Put Women’s Lives At Risk - A Challenge For Regulators". https://www.forbes.com/sites/carmenniethammer/2020/03/02/ai-bias-could-put-womens-lives-at-riska-challenge-for-regulators/?sh=4474459e534f. 
  10. "NIST AI Workshops & Events". 16 June 2020. https://www.nist.gov/artificial-intelligence/nist-ai-workshops-events. 
  11. "EAVI - Embodied AudioVisual Interaction Group - Goldsmiths". http://hcml2016.goldsmithsdigital.com/. 
  12. "Human-Centered Machine Learning Perspectives Workshop.". https://gonzoramos.github.io/hcmlperspectives/. 
  13. "HCAI Human Centered AI workshop at NeurIPS 2021". https://sites.google.com/view/hcai-human-centered-ai-neurips/home. 
  14. "HCI International 2021". https://2021.hci.international/index.html. 
  15. "HCII2021 Special Thematic Sessions on 'Human-Centered AI' | HCI International 2021". https://2021.hci.international/Human-Centered_AI_Thematic_Sessions.html. 
  16. "Welcome to AITH : Homepage". https://bschool.nus.edu.sg/aith/. 
  17. "Home". https://hai.stanford.edu/home. 
  18. "The Ethics in AI Institute". https://www.schwarzmancentre.ox.ac.uk/ethicsinai. 
  19. "Human-Centered AI". 9 February 2021. https://research.ibm.com/teams/human-centered-ai. 
  20. "People and AI Research (PAIR)". Google. https://pair.withgoogle.com/. 
  21. "Responsible AI Resources". Microsoft. https://www.microsoft.com/en-us/ai/responsible-ai-resources. 
  22. "International outreach for AI | Shaping Europe's digital future" (in en). https://digital-strategy.ec.europa.eu/en/policies/international-outreach-ai. 
  23. "Center for AI and Digital Policy". https://www.caidp.org/. 
  24. "ForHumanity". https://forhumanity.center/. 