Florida State University
Artificial Intelligence for Sustainable Value Creation is an edited volume whose authors offer a detailed and insightful exploration of both the possibilities and the challenges of widespread use of artificial intelligence (AI). It analyzes the effects of AI on business and society, drawing on what we already know about managing information systems, strategy, and marketing, and reexamining that knowledge in the context of AI. The book’s contributors explore how human-centric AI systems create value for organizations, discussing three main categories: ethical value, societal value, and business value.
We have already seen, and so do not need to imagine, what happens when algorithms devoid of human ethics are tuned to optimize consumer engagement. Such algorithms have succeeded in producing massive engagement, but they have also harmed societal welfare and fomented polarization and alienation.1
Making AI human-centric, as these authors insist we must, would be a promising start for any AI designed to foster consumer engagement. If you are a science fiction fan, you may by now have been reminded of Asimov’s First Law of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Similarly, an AI trained for consumer engagement should follow the dictum “first, do no harm.”
In Chapter 2, Yihyun Lim takes a similar Asimovian tack, contemplating a values-driven design for AI. Lim notes that human values need to be deeply embedded in design, which is not so easy, because these values must be carefully elicited from stakeholders. What values are precious to those who act and engage on behalf of the brand or with respect to it?
One implication here is obvious: few consumers hope or intend to become outraged and polarized in service to some brand’s pursuit of profit.2 Instead, Lim suggests that an AI designed to increase engagement or other organizational goals should do so with an eye to such values as protection, acceptance, assistance, and acknowledgement.
Platforms vis-à-vis other business models
Chapter 3 addresses more ordinary business concerns as Omar El Sawy, Milan Miric, and Margherita Pagani point out that AI generally exists in the context of some platform business model. This context is especially important because customer engagement tends to take place on a platform. Just as the rich are not like you and me, platforms are clearly not like other business models.
The authors go on to enumerate the ways in which they are different, including network effects, in which more users equal more value; a community focus; and the need for governance mechanisms. They also point out that engagement platforms and AI can actually complement each other, as when network effects accelerate data collection.
What remains less clear, however, is how factors like the need for governance relate to the AI imperative of maximizing efficiency, utility, and predictability. El Sawy et al. propose configurational theories as a way of understanding the dynamics, but it is unclear how these might enlarge our understanding of how AI and platform governance could coexist, unless it is at the expense of one over the other.
And if governance is important, which it surely is, should we insist that AI design adhere to the principles of common governance? Put another way, if I am going to have my attention engaged by an AI more clever than I, should I not have some say in this process?
In Chapter 5, Christine Balagué touches upon many of the ethical issues noted above, and adds a concern about the opacity of AI algorithms, their black-boxiness, as it were. It would be interesting to know whether opacity is part of what makes clickbait so engaging. To put it another way, if an AI suggests that I read a bit of content x and then carefully describes in detail how it predicted that I would be drawn to x, would the explanation make x less attractive to me?
Balagué also painstakingly documents the problem of ethnic and racial discrimination. Lacking moral judgment, an AI trained merely to boost engagement would be happy to push hateful material for those who might be drawn to it. This, too, is a platform governance issue. Neither I nor the authors are prepared to offer any easy solutions but conceptualizing it as a governance issue is a good start.4
In conclusion, several of the book’s contributors suggest that there are important factors companies should consider as they go merrily about training their AI models to foster customer engagement. Unfortunately, the consequences of ignoring these factors have already been realized on Twitter and its less prominent brethren.
More generally, we might benefit from thinking of engagement as one of a set of behaviors that generate customer value. In this light, the perils of AI running amok in its efforts to generate engagement are a special case in our broader efforts to optimize the lifetime value of customers.3
Artificial Intelligence for Sustainable Value Creation sheds a bright light on how human-centric artificial intelligence could create sustainable value for customers and society. It also suggests practical ways for us to develop ethical customer engagement.
Any way you look at it, it is ever more apparent that the fabled Silicon Valley admonishment to move fast and break things has driven AI to do just that with respect to consumer engagement. This book offers some hope that academics, industry, and enlightened regulators might take the implications seriously and begin to mitigate the breakage all around us.
Charles Hofacker is the Persis E. Rockwood Professor of Marketing at Florida State University’s College of Business. His research investigates the intersection of marketing and information technology and has appeared in the Journal of Marketing Research, Journal of the Academy of Marketing Science, Psychometrika, Management Science, Journal of Advertising Research and more. Hofacker is the moderator of ELMAR, an electronic newsletter and community platform for academic marketing with more than 8,000 subscribers.
- Berman, Ron and Zsolt Katona (2020), “Curation Algorithms and Filter Bubbles in Social Networks,” Marketing Science, 39 (2), 296-316.
- Hollebeek, Linda D., David E. Sprott, Valdimar Sigurdsson, and Moira K. Clark (2022), “Social Influence and Stakeholder Engagement Behavior Conformity, Compliance, and Reactance,” Psychology & Marketing, 39 (1), 90-100.
- Libai, Barak, Yakov Bart, Sonja Gensler, Charles F. Hofacker, Andreas Kaplan, Kim Kötterheinrich, and Eike Benjamin Kroll (2020), “Brave New World? On AI and the Management of Customer Relationships,” Journal of Interactive Marketing, 51, 44-56.
- Zeng, Helen Shuxuan, Brett Danaher, and Michael D. Smith (2022), “Internet Governance through Site Shutdowns: The Impact of Shutting Down Two Major Commercial Sex Advertising Sites,” Management Science, 68 (11), 8234-8248.