Alan Schulman
UpperRight
Stacey Lynn Schulman
Hi: Human Insight
Alan Schulman and Stacey Lynn Schulman describe how generative AI can make classic, iconic musical styles feel new again, helping people to engage with artists who are gone, but not forgotten. This technology could fuel consumer engagement with brands that invest in musical cultures and branding.
Since the earliest days of written and recorded music, many of history’s greatest composers, musicians, and performing artists have created their own unique sound, a signature style or approach that is instantly recognizable and identified with them. Most of these legendary artists also left behind unfinished works or died far too young, leaving us to wonder what they might have produced had they continued to develop and expand their craft.
In recent years, technologists have tasked artificial intelligence and machine learning (AI/ML) with replicating, reimagining, and expanding upon the work of these historic artists – how might their sound have evolved had they lived on?
AI is already creating content, particularly imagery and text. However, people have only recently begun to explore generative music through machine learning. With this in mind, we set out to assess how AI-generated works in the style of iconic musicians would engage consumers. With so many brands trying to establish relevance with youth culture, expanding the musical library of icons among engaged fans could create new opportunities to ignite consumers’ passion across multiple life stages.
We explored how listeners react to AI-generated music that endeavors to interpret and advance the signature sound of several world-renowned artists. By analyzing widely available consumer sentiment as well as data drawn from our independent research, we explored the delicate boundaries between art and technology and designed a framework for assessing how AI-generated music can achieve the aesthetic and commercial saliency that consumers expect from human musicians. Our findings will have a broad impact on consumer engagement with brands that invest in associating with musical fan cultures and sonic branding.
AI, creativity, and the human element
Assessing AI for its propensity to engage consumers is complicated in the artistic realm. Art and its value are notoriously debated by human critics, audiences, and creators. In our effort to develop a framework on which to evaluate AI-generated music, we drew upon this fertile ground, unearthing the characteristics that underpin most artistic debates. Whether comparing fine art vs. comic books, rock music vs. pop, or the works of different periods of an artist’s development, four characteristics come up repeatedly: authenticity, emotional vibrance, experiential/experimental triggers of creativity, and engagement with the audience. These concepts are woven together such that the mere act of creation and consumption of art are seen as uniquely defining the human condition.
Artificial intelligence quite simply fails the sniff test when applied to reproducing the essence of our humanness.
With respect to creative pursuits, artificial intelligence suffers from an unfortunate moniker. AI quite simply fails the sniff test when applied to what is considered the essence of our humanness. In a world that is becoming increasingly automated, creativity is the safehouse humans return to in defense of their value and unique contributions to the universe. Whether you believe creativity is defined by the end-product or by the process of creation itself,1 many believe that the value to both the creator and the audience is in the representation and evocation of something machines will never be: deeply emotional and even irrational.
Humans are biased evaluators of AI’s artistic contributions largely because of the importance we ascribe to emotion in the act of creation. If algorithms do not feel, can machines truly create?
Moreover, will consumers engage with and value AI creations as they have human ones?
Selecting AI-generated music to study
We examined two different and discrete sets of AI-generated music encompassing various renowned composers, musicians, and musical genres. These compositions were part of two distinct projects using different AI technologies. Both projects are publicly available online:
The Lost Tapes of the 27 Club2 is an AI project developed by Over the Bridge, a Toronto-based organization that set out to raise awareness about mental health issues in the music community, a group that has struggled with mental health at a rate far exceeding the general adult population.
To draw attention to this issue, the organization used AI to imagine and create what the 27 Club, a group of world-renowned musicians who all died at just twenty-seven years old, might have created if their lives had been longer. The club includes such legends as Jimi Hendrix, Kurt Cobain, Jim Morrison and Amy Winehouse.
Researchers used Google’s machine learning platform Magenta to analyze the previously recorded music of each musician in order to form the basis for generating each new track.3 The AI was fed rhythms, melodies, and lyrics from the artists. This data allowed the AI to produce new songs in the style of the artists themselves, to give a sense of what they might have created had they lived longer.
Once the compositional elements were in place, a production company arranged the musical parts and hired human sound-alike vocalists to record the vocal performances. The Lost Tapes of the 27 Club were distributed publicly on YouTube.4
Jukebox AI5 is a neural network that generates music, including vocals, in a variety of genres and in the styles of renowned artists. To create the generative model, the Jukebox team crawled the web to curate a dataset of 1.2 million songs (600,000 of which are in English), along with their lyrics and metadata from LyricWiki.6 The metadata includes the artist, genre, and year of the songs, along with common moods or playlist keywords associated with them. The model can learn over time and generate new music in whatever style the user chooses.7
Our research
We designed our quantitative research to mitigate any human bias regarding AI as an artistic creator (not simply used as a tool). Could these AI-generated compositions perform as well as those generated by their human counterparts when assessed across the four characteristics of artistic value – authenticity, emotion, creativity, and engagement? If AI’s musical efforts are to be accepted and embraced by consumers, they will need to go beyond innovative combinations of sounds and motifs.
We partnered with Veritonic, an audio intelligence company, to focus our study on eight specific artists and sixteen musical assets from the selected AI projects.8 For each artist we studied an original seminal composition that would be known to both casual listeners and fans (control) and an AI-generated composition that emulated their style (test). We collected responses from 1,381 casual listeners and fans in the United States from Aug. 17 to 20, 2022.
Veritonic measures audio assets – advertising creatives, podcasts, musical performances, and sonic signatures – and has amassed a large and useful normative database worldwide. The company’s ability to measure human-centric attributes at a second-by-second level of granularity in real time made it an ideal research partner. Its team mapped our defined characteristics of artistic value to the appropriate set of attributes from Veritonic’s platform.
We also composed several follow-up questions to assess how respondents perceived the value of AI-generated music and thus its potential market viability. Unlike the online social postings of the two AI projects we were exploring, which specifically called attention to the music’s origin, we deliberately omitted any mention of AI until after the songs had been evaluated. We recruited discrete groups of roughly 150-200 online respondents, classifying them as either casual listeners (“I like his/her/their music but prefer others more”) or fans (“one of my favorite artists”) of each emulated artist. Each respondent then listened to ninety seconds of audio from each song, original and AI-composed, beginning at the first vocal element, with play order rotated to reduce positional bias. Respondents could score the songs at any time, with multiple responses allowed, using the following Veritonic attributes: authentic, excited, familiar, happy, innovative, likable, trustworthy, and unique.
These attributes are captured in every Veritonic analysis and therefore could be compared to the company’s normative database of thousands of creative tests that include songs, long form sonic DNAs, music beds, mnemonics and jingles. We then mapped the Veritonic attributes to our characteristics of artistic value. Each value corresponded to two or three attributes.
A summary of the research design and assets follows:
Artist | Assets/Songs (Control) | Assets/Songs (AI/Test) | AI |
---|---|---|---|
Amy Winehouse | Back to Black | Man I Know | Magenta AI |
Kurt Cobain | Come As You Are | Drowned in the Sun | Magenta AI |
The Doors | Riders on the Storm | The Roads are Alive | Magenta AI |
Jimi Hendrix | Hey Joe | You’re Going to Kill Me | Magenta AI |
Ella Fitzgerald | Cheek to Cheek | Song in the Style of Ella Fitzgerald | Jukebox AI |
Elvis Presley | I Can’t Help Falling in Love With You | Song in the Style of Elvis Presley | Jukebox AI |
Frank Sinatra | Come Fly With Me | Song in the Style of Frank Sinatra | Jukebox AI |
Katy Perry | Last Friday Night | Song in the Style of Katy Perry | Jukebox AI |
Veritonic Attributes | Corresponding Characteristic of Artistic Value |
---|---|
Authentic Trustworthy Familiar | Authenticity |
Excited Happy Likable | Emotion |
Innovative Unique | Creativity |
Average Frequency per Second of All Attribute Interactions | Engagement |
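The attribute-to-characteristic mapping in the table above can be sketched in code. Note that how the two or three attribute scores combine into a single characteristic score is an assumption here (a simple mean); the article does not specify the roll-up formula.

```python
from statistics import mean

# Mapping taken from the table above. Keys are our characteristics of
# artistic value; values are the Veritonic attributes that feed them.
CHARACTERISTIC_ATTRIBUTES = {
    "Authenticity": ["authentic", "trustworthy", "familiar"],
    "Emotion": ["excited", "happy", "likable"],
    "Creativity": ["innovative", "unique"],
}

def characteristic_scores(attribute_scores):
    """Roll 0-100 attribute scores up into characteristic-level scores.

    Averaging is illustrative only; any weighting scheme could be
    substituted without changing the mapping itself.
    """
    return {
        characteristic: mean(attribute_scores[a] for a in attrs)
        for characteristic, attrs in CHARACTERISTIC_ATTRIBUTES.items()
    }

# Hypothetical 0-100 attribute scores for one track:
scores = {"authentic": 70, "trustworthy": 65, "familiar": 75,
          "excited": 60, "happy": 62, "likable": 64,
          "innovative": 58, "unique": 66}
rolled = characteristic_scores(scores)
```

Engagement is handled separately, since it is derived from interaction frequency rather than from the attribute scores themselves.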
We presented the songs twice to allow listeners ample opportunity to assess all the attributes. Veritonic calculates and defines the attribute score in the following way:
Data is interactively collected while an audio file is being played, measuring the magnitude of selected emotions and descriptors. Each of the Veritonic metrics is on a scale of 0-100.
To measure engagement, we used Veritonic’s engagement metric which is calculated differently from its attribute score:
A measure of the frequency of attribute interactions relative to asset duration. This is a metric that counts the number of ‘clicks’ respondents make on each of the various attributes on a second-by-second basis and then expresses a score as an average across all seconds.
For the purposes of this study, we have scaled the engagement scores to Veritonic’s attribute scores.
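The engagement calculation described above can be sketched as follows. The click-log format and the rescaling method are assumptions for illustration; Veritonic’s internal implementation is not public.

```python
def engagement_score(clicks, duration_seconds):
    """Average frequency of attribute interactions per second of audio.

    clicks: list of (second, attribute) tuples logged as a respondent
    tapped attributes while the track played. The score is the total
    click count averaged across every second of the asset.
    """
    if duration_seconds <= 0:
        raise ValueError("duration must be positive")
    return len(clicks) / duration_seconds

def scale_to_attribute_range(raw_scores):
    """Rescale raw engagement scores onto a 0-100 range so they can be
    compared with attribute scores (max-normalization is our assumption)."""
    hi = max(raw_scores)
    return [100 * s / hi for s in raw_scores] if hi else [0.0] * len(raw_scores)

# Hypothetical log for one 90-second listening session:
clicks = [(3, "authentic"), (3, "likable"), (10, "unique"), (42, "excited")]
raw = engagement_score(clicks, duration_seconds=90)  # 4 clicks over 90 s
```

A respondent who clicks often throughout the clip thus produces a higher score than one who clicks the same attributes only once, which is why engagement is reported separately from the 0-100 attribute magnitudes.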
As the audio segment plays, the respondent can select any of the attributes to describe their reactions in real time on a second-by-second basis. They may also revisit attributes throughout the segment. After listening, respondents answered a few follow-up questions so that researchers could assess how they perceived the quality of the song, their ability to link the song to the artist, and their intent to further engage with it as a fan, including by streaming or playlisting the track.
At the end of the study, respondents were told that AI had created one of the songs that they had evaluated. They were asked whether they would attribute the AI track to the artist it emulated and rate their level of interest in listening to more AI-generated audio.


Figures 1 and 2: Measuring the Four Characteristics of Artistic Value
What we learned
The AI’s compositions performed remarkably well among all respondents across all four characteristics of artistic value. AI-generated offerings scored ten to fourteen points below the artists’ original works, but equal to or above the norms for music in Veritonic’s global database. This is particularly promising given that the AI songs faced a high bar: they were completely new and unknown to the respondents, while the artists’ originals were drawn from their largest commercial successes.
The AI’s compositions performed remarkably well among all respondents across all four characteristics of artistic value.
We also observed differences in the respondents’ reactions to the works from the two AI generators. The Magenta AI tracks yielded a superior performance across all characteristics. The most striking differences were in the respondents’ ability to attribute the tracks to the artists whose work they emulated (69 percent likelihood with Magenta AI vs. 53 percent with Jukebox AI).
Since the Jukebox AI output is truly autogenerated, including composition, vocals, and lyrics, this result is not surprising, particularly given that its lyrics are occasionally incongruent. Its Frank Sinatra composition, for example, includes the lyrics, “It’s Christmas time and you know what that means… It’s hot tub time.” In contrast, Magenta AI’s tracks were composed by the AI, but executed and professionally produced in studios with soundalike human singers.
While it would be easy to attribute the Magenta advantage to a human element, it is worth noting that even though we observed differences between the two AI engines, the scores are not appreciably different. Human intervention may have played a role in this study, but it is not unreasonable to believe that future projects, or ones with broader scope, would yield more comparable results. Indeed, we observed greater differences between the responses of fans and casual listeners. As Figure 3 shows, fans responded more favorably than casual listeners to both the original tracks and the AI-generated tracks. Across the entire sample, fans made up roughly a third of our respondents, which might account for the lower overall scores when we applied weights.

These results are extremely encouraging for the future of AI-generated music. Self-described fans are likely to be intimately familiar with an artist’s style, perspective, and vocal idiosyncrasies. They can detect anomalies or inconsistencies more readily than casual listeners and would be more likely to rebuff imitations.
While listening to the music during the study, our respondents did not seem to factor these issues into their high scores for authenticity, emotion, or creativity. It was only after they learned of the AI’s involvement that fans became less confident of the tracks’ authenticity. Sixty-one percent agreed or strongly agreed that they could not tell the track was generated by AI. We attribute these differences to anti-AI bias.
Toward potential consumer engagement
While the Veritonic platform was able to measure engagement as a compounded interaction metric within a testing platform, true assessments of consumer engagement happen in the open marketplace. To evaluate our last artistic variable, engagement, we questioned consumer intent, a stronger indication of market viability.
To get a more realistic response, we asked these questions only after revealing that the songs were AI-generated. After listening to an AI-generated track, 54 percent of respondents indicated that they would like to hear more ‘songs like this,’ compared to 73 percent for the original tracks. Between a third and just over half of respondents said they would be likely to stream the song or add it to a playlist, compared to 67 percent for the original tracks (see Figures 4 and 5).

Qualitative findings and insights
Because both of the AI-generated projects described at the outset have enjoyed widespread distribution on social media platforms, we can understand a lot about listeners’ emotional reactions. Where quantitative data allowed us to control for fandom, listenership, and pre-determined variables, social commentary provided us with a more visceral sense of consumer engagement with the music and the possibilities for AI.
All of the AI-generated tracks of The Lost Tapes of the 27 Club have been posted to YouTube9 where the public has unlimited access and can offer qualitative comments on each individual track.
The AI tracks from Jukebox AI were posted to SoundCloud, which also offers unlimited public access for both listening and reposting, again with the opportunity to leave qualitative comments. Table 3 presents our findings for each track: the number of completed streams as well as qualitative measures based on ‘like’ or ‘thumbs up’ responses and comments. SoundCloud does not offer a ‘dislike’ or ‘thumbs down’ option.
AI Platform | Track Artist | Track Name | Listens | Likes | Dislikes | Comments |
---|---|---|---|---|---|---|
Magenta AI | Jimi Hendrix | You’re Going to Kill Me | 68,329 | 1,100 | 16 | 175 |
Magenta AI | Amy Winehouse | Man I Know | 146,203 | 3,400 | 37 | 415 |
Magenta AI | The Doors | The Roads are Alive | 50,817 | 84 | 45 | 198 |
Magenta AI | Nirvana | Drowned in the Sun | 87,222 | 424 | 103 | 208 |
JukeBox AI | Elvis Presley | Rock | 251,000 | 257 | n/a | 62 |
JukeBox AI | Frank Sinatra | Pop | 366,000 | 1,060 | n/a | 206 |
JukeBox AI | Katy Perry | Pop | 256,000 | 483 | n/a | 89 |
Figure 6 charts the relationship of three variables –
- Consumer Engagement in the form of listens (x-axis),
- Net Positive Sentiment expressed as likes or dislikes (y-axis), and
- The raw number of comments (size of the bubble).
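As a sketch, the chart’s inputs can be reconstructed from Table 3. Treating SoundCloud’s absent dislike counts as zero when computing net positive sentiment is our assumption:

```python
# Rows transcribed from Table 3:
# (platform, artist, listens, likes, dislikes, comments)
# SoundCloud offers no dislike option, so Jukebox rows use 0 by assumption.
tracks = [
    ("Magenta AI", "Jimi Hendrix", 68_329, 1_100, 16, 175),
    ("Magenta AI", "Amy Winehouse", 146_203, 3_400, 37, 415),
    ("Magenta AI", "The Doors", 50_817, 84, 45, 198),
    ("Magenta AI", "Nirvana", 87_222, 424, 103, 208),
    ("Jukebox AI", "Elvis Presley", 251_000, 257, 0, 62),
    ("Jukebox AI", "Frank Sinatra", 366_000, 1_060, 0, 206),
    ("Jukebox AI", "Katy Perry", 256_000, 483, 0, 89),
]

def bubble_points(rows):
    """Return (x, y, size) triples for a bubble chart:
    x = listens, y = net positive sentiment (likes minus dislikes),
    size = raw comment count."""
    return [(listens, likes - dislikes, comments)
            for _platform, _artist, listens, likes, dislikes, comments in rows]

points = bubble_points(tracks)
```

Any plotting library could then render these triples as the scatter of Figure 6, with the comment count scaled into marker area.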

Across an array of musical genres from traditional pop to grunge and psychedelic rock, most tracks accumulated massive consumer engagement, ranging from roughly 50,800 to 366,000 listens, along with substantial likes and comments. Most listeners found the generative samples to be positive enhancements or interpretations of the original artist’s sound. We also found a correlation between high levels of engagement and artists and genres that cross generations, such as Frank Sinatra and Elvis Presley as well as pop artist Katy Perry.
Emotion is perhaps the most difficult of our four characteristics to ascribe to AI because it is central to our human identity, and admitting that AI can evoke an emotional response challenges our sense of our own value. Nevertheless, some comments revealed that the AI did evoke emotional responses:
“Damn… We are now to the point where AI can make me feel something…”
Figure 7 offers a sampling of comments, organized by our four characteristics, with the source artist noted in parentheses. The responses demonstrate a healthy mix of awe and skepticism.

In light of these findings and the immense library and lexicon of music which has been digitized over the past several decades, the groundwork has been laid for new AI musicians to emerge.
As capabilities become more sophisticated, the commercial opportunities for consumer engagement in the music industry could expand exponentially as music discovery is no longer relegated to new or previously unknown artists, but to expanding the catalog of historical artists as well. Of course, the applications, ethics, and questions that these capabilities raise are equally numerous.
Looking ahead: The culturally curious will lead the way
These two AI platforms, when set to learn and reproduce the works and stylistic nuances of legendary musicians such as Jimi Hendrix, Kurt Cobain, and Jim Morrison, generated enormous consumer engagement through social media.
Today most listeners view The Lost Tapes of the 27 Club and other AI-generated music experiments as novelties. However, as AI/ML continues to grow and learn from an ever-increasing archive of music, these innovative applications demonstrate that we are on the cusp of what will be possible as AI is more extensively applied to music, providing entertainment and attracting consumer engagement.
The quantitative data suggest an openness to new AI-generated works, rather than the rejection that self-appointed aficionados or other cultural gatekeepers might have feared. Certainly, the comparative data from our four characteristics of artistic value demonstrate an acceptable level of accomplishment to upwards of two-thirds of music fans.
There can be no doubt that we will hear ever more refined renditions of what legendary composers might have produced had they lived longer. And from a business perspective, a wealth of new compositions wait to be rendered and monetized as AI/ML is applied to music for every purpose, from pure consumer entertainment to advertising, television, motion pictures, and other commercial enterprises. Finer slices of addressability across all media can be expected to increase demand for creative assets beyond human capacity, necessitating further evolution of creative iteration on demand.
There can be no doubt that we will hear ever more refined renditions of what legendary composers might have produced had they lived longer.
Within the music industry specifically, we can already see both artist-to-consumer and business-to-business applications for this technology. AI-generated works that can both imitate and iterate the compositions of human artists without copyright infringement will largely alleviate the cost of creative rights for business applications. Meanwhile, as copyright royalties dwindle, artists will look to AI to help them increase their earnings.
AI presents artists with an enormous opportunity to lean into the coming uniquity movement, in which artistic works can be created in pieces and then assembled in infinite combinations, producing bespoke masterworks for a premium price. For music fans, libraries and playlists are about to expand exponentially as ‘New Music Monday’ may include everything from the latest Billie Eilish single to the newest AI evolution of the Beatles catalog.
Finally, AI will not be working alone in the next iteration of creativity, especially as humans, avatars, and corporations are already intermingling in the metaverse. Creative humans are already exploring innovative applications that surprise and delight the culturally curious. Expect musicians and visual artists to stretch boundaries and adopt new AI partners as they strive to express the duality of life lived both actually and virtually.
Our findings suggest that consumer engagement with AI-generated music is all but certain to increase. We confidently expect the further refinement of this technology to expand this engagement, generating opportunities for both artists and marketers. For now, with the help of AI/ML, the legendary composers, artists, and musicians explored in our study may be gone, but their music, both actual and generative, will definitely not be forgotten.10
Author Bios

Alan Schulman is cofounder and managing partner of UpperRight. As a chief creative officer in marketing and a trained jazz musician, he is a sought-after advisor in applying music to brand ethos, content, and personality. He has received numerous awards in advertising and design and holds degrees in journalism and communications from Ohio State University and Northwestern University, and an MFA from Howard University. He is a voting member of the Recording Academy (Grammys).

Stacey Lynn Schulman is the founder of Hi: Human Insight which focuses on human aspects of data and strategy. Her career spans research leadership at global media and advertising agencies. She is routinely quoted as a recognized fan culture expert and was the first researcher inducted into the American Advertising Federation Hall of Achievement. She maintains a career as a Top 10 Billboard jazz vocalist and is a voting member of the Recording Academy (Grammys).
Endnotes
- Charley, Pease and Colton, 2012
- https://losttapesofthe27club.com/
- https://magenta.tensorflow.org/
- https://losttapesofthe27club.com/
- https://openai.com/research/jukebox
- https://lyrics.fandom.com/wiki/LyricWiki
- For a more comprehensive paper and breakdown of JukeBox AI and the many styles and artists it has been applied to, visit https://openai.com/research/jukebox Jukebox AI compositions can be heard on the Soundcloud® digital distribution platform here.
- https://www.veritonic.com/
- https://www.youtube.com/watch?v=zPkOtW5n8-E
- Detailed results of this study may be found at https://www.hihumaninsight.com/aimusicstudy