Tuesday, September 26, 2023


Book Review: Balancing Human and AI Control to Achieve Meaningful Customer Engagement

Human-Centered AI
by Ben Shneiderman

Avi Parush
Israel Institute of Technology

The introduction of smart systems, automation, and autonomy is strong evidence of the proliferation of artificial intelligence (AI) technologies. In every facet of our lives, AI is increasingly present through various service systems: targeted, personalized, and anticipatory marketing techniques rooted in big-data analytics; chatbots that provide automated service, sometimes at critical customer touchpoints; and more.

We assume that AI technologies within service systems will increase the customer’s engagement with the service itself and the company providing that service. Yet, paradoxically, one of the biggest concerns about the proliferation of AI is that people will gradually lose control, becoming less involved and disengaged. Is AI’s use for human engagement a double-edged sword?

It is in scrutinizing this question that I value the timely significance of the book, Human-Centered AI, by Professor Ben Shneiderman from the University of Maryland.1 It voices a critical call for AI to be human-centered and provides a constructive and practical guide on how to go about it. The book is organized logically: starting with AI’s fundamental concepts and its philosophical and practical underpinnings, introducing Shneiderman’s human-centered AI (HCAI) framework, and moving on to the implications of AI model design, a discussion of AI governance structures, and a future agenda.

The book offers insightful lessons and practical takeaways, too many for me to cover in this short review. I would like instead to focus on what I consider to be the key aspects and most important ideas: the HCAI framework and its broader implications, especially in the context of AI for customer engagement. I offer an interpretative commentary, rather than a traditional review, which revolves around several facets of human engagement with AI as a way to view Human-Centered AI.


When I first heard about human-centered AI, I had a feeling of déjà vu. I was reminded of the birth and evolution of human-centered design (HCD) in human-computer interaction (HCI). Shneiderman has been one of the torchbearers for incorporating the human in the HCI equation. Is HCAI then a natural evolution of HCD, or are we in need of a paradigm shift? How disruptive will AI be when it comes to tackling the challenges of taking a human-centered approach to the design and development of any technology that affects people?

Shneiderman’s HCAI framework and its broader implications suggest that the transition to human-centeredness in AI requires a more fundamental change than the movement towards HCD.

Figure 1: Shneiderman’s Two-dimensional HCAI Framework (reproduced from the book with permission)

Balanced engagement

Quite early in the book, the author mentions ‘human control’ in HCI. I was immediately reminded of a series of public debates on user interface design between professors Shneiderman and Pattie Maes about a quarter century ago. Shneiderman has been a long-time proponent of direct manipulation, or giving people control and predictability in user interfaces, and thus more human control in HCI, while Maes was in favor of giving software agents a larger role in the interaction – that is, more autonomy to act on the user’s behalf given the ever-increasing proliferation of choices.2

The HCAI framework reframes our thinking about human control vs. computer automation. One of the key points Shneiderman makes in this book is that more control to one does not necessarily mean less control to the other; this is not a zero-sum game. It is also not the unidimensional scale of automation and autonomy that commonly appears in contemporary thinking about the evolution of AI and intelligent systems, which holds that there are certain tasks only humans are good at and others that computers can do better. The lines have blurred. Tasks that typically fall in the realm of humans, like medical diagnoses and driving a car, are increasingly being automated.

In a way, the book is a call to shift away from the good old MABA-MABA (men-are-better-at vs. machines-are-better-at) way of thinking and to reframe the allocation of functions in human-AI systems.

The HCAI framework suggests a more balanced, win-win view of the degree of control and engagement by humans and AI software in the design, implementation, and use of intelligent systems. The book depicts human control and computer automation as two dimensions: a vertical axis running from low to high levels of human control, and a horizontal axis running from low to high levels of computer automation.

The HCAI framework calls for an appropriate balance between human control and computer automation. The idea is to design the system in such a way that it strikes a balance between human capabilities and software automation and autonomy.

The framework also suggests that the balance is not static and prescriptive, but rather dynamic and adaptive. It depends on a variety of factors and circumstances, such as human and technology capabilities and the need for more − or less − active human involvement and engagement. The implication of such a balance for AI-based customer engagement is that it should be bidirectional.
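As a thought experiment of my own (not from the book), the two-dimensional framework can be sketched in code: human control and computer automation are independent axes rather than a zero-sum trade-off, and the appropriate operating point shifts with context. All names and weights below are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class OperatingPoint:
    """A point on Shneiderman's two-dimensional HCAI grid (both axes 0.0-1.0)."""
    human_control: float        # vertical axis: low to high human control
    computer_automation: float  # horizontal axis: low to high computer automation

def recommended_point(task_risk: float, human_expertise: float) -> OperatingPoint:
    """Hypothetical policy: riskier tasks call for more human control,
    while automation can independently stay high -- raising one axis
    does not require lowering the other."""
    human = min(1.0, 0.5 + 0.5 * task_risk)             # more risk -> more human control
    automation = min(1.0, 0.4 + 0.6 * human_expertise)  # expertise lets automation assist more
    return OperatingPoint(human_control=human, computer_automation=automation)

# A safety-critical task with an expert operator lands in the
# "high human control AND high computer automation" quadrant.
point = recommended_point(task_risk=0.9, human_expertise=0.8)
```

The point of the sketch is only structural: the two arguments feed the two axes independently, mirroring the framework's rejection of a single human-versus-machine scale.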

It is not only the AI that drives the engagement; it is also the extent of human involvement and control, and specifically meaningful control. The concept of meaningful human control appears several times in the book, often in the context of lethal autonomous robots. Borrowing it, we can speak of meaningful customer engagement: shifting some of the control to the customer through greater customer awareness, understanding, and influence over an AI-driven engagement.

Figure 2: A Proposal Adapted from Shneiderman’s HCAI Framework

Multilevel and continuous engagement

Human-Centered AI is a call to go beyond HCD approaches to design and testing, to go beyond considering only the end user, and beyond the specific context of the usage or the presence of AI. It urges us to go beyond user experience and usability, and even beyond designing explainable AI (XAI), to make it more human-centered.

Indeed the book suggests that the implications of this question are far wider and potentially more critical than those of our struggle for human-centeredness in HCI. The author urges us to consider reliability, safety, trustworthiness, fairness, and ethics, as well as organizational and governance issues. He calls for multilevel and continuous human involvement and engagement in the entire lifecycle of the AI system.

This involvement starts with individuals and extends to organizations, regulatory bodies, agencies, nations, and societies. Involvement and engagement are not only incorporated in the design, development, and deployment of the AI, but also run throughout the lifecycle of the system, embodied in governance structures that emphasize organizational and business aspects, regulation, and oversight. Human-centeredness in AI is an ephemeral constant: It should always be there, yet it is ever changing.

Multi-faceted engagement

The human-centeredness of AI, according to Shneiderman, is also about the reliability, trustworthiness, fairness, responsibility, ethics, and safety of AI systems. All of these are tightly linked. Yet, a quick look at online search trends at the time of writing this review reveals this: There is a lot of interest in ‘responsible AI,’ which is good news. The disconcerting news is that there is far less interest in ‘safe AI.’

I would like to pause here and dwell for a moment on this neglected safety question. Safe AI is becoming a critical challenge. Mica Endsley, the former U.S. Air Force chief scientist, has said that “autonomy systems that drive vehicles (whether fully autonomous or not) should have to pass driving tests just like people do.”3

Shneiderman’s emphasis on safety and, importantly, safety within organizations, their business strategies, and their culture, is a significant and timely component of the shift needed in our thinking about humans and AI. In AI-driven customer engagement, such as customer targeting and personalization, it is critical to mitigate errors, failures, and accidents.

Implementing human-centered AI can ensure a relevant transparency through which customers can recognize and understand system errors and failures, like irrelevant targeting or a privacy breach.

HCAI future research agenda

In both Part 3 (Design Metaphors) and Part 5 (Where do We Go from Here?), Shneiderman proposes scientific and practical ways to use the key messages of the book as we move forward. One important way forward is to focus on the challenge of metrics. What are the metrics for better AI systems? Shneiderman addresses such challenges particularly with respect to assessing trustworthiness of AI systems (chapter 25 in Part 5).

His message is that any future scientific research and applied innovation will have to use valid and reliable metrics to measure and assess the extent to which the AI system is human-centered. This in turn will allow us to implement adequate governance structures, ensuring that AI systems are reliable, trustworthy, ethical, safe, and human-centered.

In AI-driven customer interactions, the degree of meaningful customer engagement is a potentially important metric. In the spirit of human-centered AI and balanced engagement, this metric could assess the impact of engagement in which some control is shifted to the customer and then balanced by some AI control, all while ensuring that the organization meets its strategic goals.
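To make the idea concrete, here is one hypothetical way such a metric could be composed, weighting the three facets of meaningful engagement discussed above (customer awareness, understanding, and influence). The function name, the 0-to-1 scale, and the equal weights are my own illustration, not a recommendation from the book.

```python
def meaningful_engagement_score(awareness: float,
                                understanding: float,
                                influence: float) -> float:
    """Hypothetical composite metric on a 0-1 scale combining the three
    facets of meaningful customer engagement. Equal weights are an
    assumption for illustration only."""
    for v in (awareness, understanding, influence):
        if not 0.0 <= v <= 1.0:
            raise ValueError("each facet must be in [0, 1]")
    return (awareness + understanding + influence) / 3.0

# Example: a customer who sees the AI at work (high awareness) but has
# little real influence over it scores lower on meaningful engagement.
score = meaningful_engagement_score(0.9, 0.6, 0.3)
```

In practice an organization would need to validate each facet's measurement and choose weights that reflect its strategic goals; the sketch only shows that the construct is, in principle, measurable.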

Closing and practical takeaway

I started this commentary with the following conundrum: AI can make people gradually lose control, becoming less involved and less engaged. Is AI, then, a double-edged sword with regard to human engagement?

Shneiderman’s HCAI proposal implies that the use of AI in customer engagement should adhere to an appropriate balance between human and automated control so as to drive meaningful customer engagement. I propose adjusting the HCAI approach, in keeping with the schematic diagram above.

The ultimate goal for HCAI in building meaningful customer engagement is that it be transparent, reliable, safe, trustworthy, fair, responsible, and ethical. To achieve these goals, consider my adaptation of the HCAI framework for a balanced engagement.

Creating human engagement with systems in general, and specifically with service systems, is mostly about drawing people in and keeping them loyal and engaged.

In Human-Centered AI, Shneiderman suggests strategies and tactics for implementing the HCAI framework. I find several of these particularly relevant to AI-driven customer engagement: sound software engineering and human-centered design methods; appropriate governance and management, including timely and proportionate interventions; and independent oversight throughout the system’s lifecycle. These measures help managers enact the HCAI framework and achieve their goals.

Shneiderman’s HCAI argues that human engagement with systems should not be just a result of employing AI technologies, but an inherent part of the system. Human engagement should be built into AI’s conception, design, development, and deployment, ensuring that humans play a meaningful and active role in AI’s entire lifecycle.

Author Bio

Avi Parush

Avi Parush earned his PhD in Experimental Psychology from McGill University, Montreal in 1984. He is an associate professor in the Faculty of Data and Decision Sciences at the Technion, Israel, as well as an emeritus professor at Carleton University, Ottawa and an adjunct professor at Queen’s University, Kingston. His current research focuses on teamwork in complex and critical situations, human factors in healthcare, human-robot interaction, human behavior relative to autonomous vehicles, and conventional and simulation-based training.


  1. Shneiderman, B. Human-Centered AI. Oxford University Press, 2022.
  2. Shneiderman, B. and Maes, P. “Direct Manipulation vs. Interface Agents.” Interactions, pp. 42-61, November & December issue, 1997.
  3. Endsley, M. LinkedIn post, May 15, 2022.