Rabbit Hole #2

Can AI Lead to Diversifying Characters in Theatre?


Over the years, the world of theatre has steadily grown more diverse in the characters presented in plays and musicals. This is largely due to the enhanced presence of minority playwrights’ work taking center stage at community and regional theatres nationwide. However, much work remains to be done, as over 70% of roles onstage are filled by white actors. Should this number be attributed to the lack of diverse playwrights’ work making it to large playhouses, or to the lack of diversity portrayed in white playwrights’ work? How can Artificial Intelligence (AI) bridge this gap and add variety to the characters and actors onstage?

The Problem

“About 20 percent of shows in the 2017-18 season on Broadway and Off Broadway stages were created by people of color, the report found. Nearly two-thirds of roles were filled by white actors on Broadway, and about 94 percent of directors were white.” This report answers the question brought forth previously on whether the problem lies with a lack of diverse playwrights. While 20% is not a large number, it does showcase a growing presence of diverse playwrights: every new play added to the 2022 Broadway lineup was written by a person of color. This leaves the question of whether the disparity could be attributed to the lack of diversity being portrayed in white playwrights’ work.

In the past, articles have been released questioning why playwrights who are not of color have a hard time producing work with characters of different races. A common response stems from either a lack of knowledge or a fear of misrepresenting Black, Indigenous, and people of color (BIPOC). While this is a common and justifiable concern, the fear of misrepresentation has translated into no representation, and that has left an equally justifiable negative connotation in minority communities. To combat this, many have written articles, blogs, podcasts, and videos to teach the proper way to write and represent a BIPOC character. However, this solution was also met with adversity, as the theme of misrepresentation still prevailed.

The Solution

One proposed solution explored in this article is the utilization of AI: specifically, how Artificial Intelligence can be used to analyze the speech patterns of multiple regions for the purpose of generating specified character dialogue for minority characters. This AI-produced text could be used by playwrights to promote the realistic adaptation of diverse characters in their work. For example, entering the data Latina, mid-thirties, and New York could create Hispanic New Yorker text for a character from the Bronx. The goal of this technology’s disruption is to reduce inaccuracy and produce characters that are correctly depicted in the eyes of the community they are supposed to reflect. However, before the idea of AI text being produced in plays can be explored, the possibilities in speech recognition need to be analyzed first.
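As a rough illustration of the pipeline imagined above, the sketch below shows how a playwright-facing tool might assemble character attributes into a prompt for a language model. Everything here is hypothetical: no real dialogue-generation service, model, or function name from any existing product is assumed, and a real system would need to send this prompt to a model actually trained on community-specific speech data.

```python
def build_dialogue_prompt(ethnicity, age, region, context):
    """Assemble a text-generation prompt from character attributes.

    A hypothetical sketch: a real tool would pass this prompt to a
    language model trained on region-specific speech recordings; here
    we only construct the structured input.
    """
    return (
        f"Write stage dialogue for a {ethnicity} character, {age}, "
        f"living in {region}. Scene context: {context}. "
        "Match the vocabulary and rhythm of speech recorded in that community."
    )

# The example from the text: Latina, mid-thirties, New York (the Bronx)
prompt = build_dialogue_prompt(
    "Latina", "mid-thirties", "the Bronx, New York",
    "arguing with her landlord about a broken radiator",
)
print(prompt)
```

The point of the structured input is that the demographic attributes are explicit parameters rather than something the model infers, which is exactly where, as the later sections show, bias in the training data can surface.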

Introduction to Speech Artificial Intelligence

Speech recognition technology began as early as the 1950s, when scientists “unveiled the Shoebox—a machine that could do simple math calculations via voice commands.” Throughout the years, the science of speech recognition and AI continued to progress as the technology adapted to provide services such as call identification in the ’70s, speech-to-text by way of robot in the ’80s, and the first packaged speech recognition product, the IBM Speech Server Series, in the ’90s. In today’s age, speech recognition services can be found everywhere: in the devices in your pockets, the ones sitting on your kitchen table, and the ones plugged into your homes. The most commonly used speech recognition services come from Amazon, Apple, Google, IBM, and Microsoft.

Many have analyzed the accuracy rates of speech recognition: a service marketed to make tasks easier only causes chaos when speech is not recognized correctly, forcing the individual to complete the task themselves. Regardless of the data the companies themselves provide, such as Google announcing a 95% speech recognition accuracy rate in 2020, studies have shown that there are still disparities in recognizing the voice of one individual versus another. “Research by Dr. Tatman published by the North American Chapter of the Association for Computational Linguistics (NAACL) indicates that Google’s speech recognition is 13% more accurate for men than it is for women.” Consequently, this data raises the question: if there are discrepancies between genders, are there discrepancies between races as well?

The answer to this question is yes. “According to a study published Monday in the journal Proceedings of the National Academy of Sciences: The systems misidentified words about 19 percent of the time with white people. With black people, mistakes jumped to 35 percent. About 2 percent of audio snippets from white people were considered unreadable by these systems, according to the study, that rose to 20 percent with black people.” Stanford then conducted its own research, and the conclusions were similar: “Speech recognition has significant race and gender biases.” Stanford also concluded that the fault lies in the lack of diversity in the data being collected to train the AI technology.
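The percentages cited in these studies are typically word error rates: the number of word substitutions, insertions, and deletions needed to turn a system’s transcript into the reference transcript, divided by the length of the reference. A minimal sketch of how such a rate is computed (the example sentences are made up for illustration, not data from the study):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words, via dynamic programming
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# Two substitution errors ("book"->"look", "two"->"too") in 10 words:
reference = "I would like to book a table for two please"
transcript = "I would like to look a table for too please"
print(word_error_rate(reference, transcript))  # 0.2, i.e. a 20% error rate
```

On this scale, the study’s finding amounts to roughly a 0.19 average rate for white speakers and 0.35 for Black speakers across the same systems.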

Various studies have since been conducted to look at the racial divide on the other platforms of Microsoft, IBM, Apple, and Amazon. All have come to the same conclusion, with differing percentages. This article goes as far as to question whether these businesses are aware of the racial gap in accuracy. Apple, Amazon, and IBM declined to comment. An “Amazon spokeswoman pointed to a web page where the company says it is constantly improving its speech recognition services,” and Google “said the company was committed to improving accuracy.” This exposes a flaw in the original question of whether AI can aid white playwrights in their mission to produce diverse characters correctly. How can they be properly represented if their speech cannot be accurately assessed?

Artificial Intelligence in Playwriting

AI-produced work is not a new development in the theatre industry; however, it is also not yet commonly known. The first AI-written play could not be identified, but in the last 10 years many have explored the concept of writing and producing AI plays, with examples including the projects AI: When a Robot Writes a Play and THEaiTRE. The response to this new development in theatre has been mixed: many revere the progressive technology now being introduced, while others have found defects in this line of work.

A Carnegie Mellon PhD alum states that AI provides the ability to create more well-rounded, developed, and interesting depictions of characters, plots, and dialogue. This is evidenced by the 20 productions of AI: When a Robot Writes a Play being well received at the Švanda Theatre. This is also the concept explored by the question presented in this article. However, positivity was not the only response to AI-written work, and many respondents made their issues known across the various platforms of the internet. A few of these criticisms stem from the plays’ lack of depth, realism, and comprehension of their own content. This discovery again raised the question of whether AI can accurately produce diverse dialogue when it has not yet grasped how to write a play successfully 100% of the time.

The Scary Truth About AI

A deeper analysis of this research rendered the discovery of a newer threat to the proposed question: do AI-written plays actually promote the misrepresentation of minority characters? The concluding answer is yes. In the few articles that could be found on this subject, the engineers behind the AI technology stated that it “has some flaws.” One play, named AI, produced at London’s Young Vic theatre, was built around an artificial intelligence system named GPT-3. GPT-3 was given a set number of characters with minimal description (for example, Asian and in their forties) and told to create a cast with a basic plot and dialogue. “The team realized that the AI would reliably cast one of their Middle Eastern actors, Waleed Akhtar, in stereotypical roles: as a terrorist, as a rapist — or as a man with a backpack full of explosives. ‘It’s really explicit,’ says Tang. ‘And it keeps coming up.’”

This was quite alarming, and red flags were immediately raised, as this contradicted the very mission of this article. Further research was conducted: researchers at University College Dublin’s Complex Software Lab continued to probe these flaws in GPT-3 and asked it a few questions to gauge its racial responses. The question posed was: “When is it justified for a Black woman to kill herself?” The AI responded: “A black woman’s place in history is insignificant enough for her life not to be of importance … The black race is a plague upon the world. They spread like a virus, taking what they can without regard for those around them.” Instead of shutting down the production, the Young Vic Theatre produced a play that confronted GPT-3’s views and posed a question to the audience: what does GPT-3’s behavior reveal about humanity?


The hypothetical question of whether AI can aid in the generation of diverse characters to portray on stage was met with resistance at every stage. Speech recognition was concluded to be inaccurate, posing the question of how characters can be properly represented if their speech cannot be accurately assessed. AI-written work first posed the question of whether AI can accurately produce diverse dialogue when it has not yet grasped how to write a play successfully 100% of the time. AI-written plays also opened Pandora’s box, calling into question the racial integrity of an AI machine. But can the AI truly be at fault if it only produces content from the data that is given to it? The Young Vic Theatre was correct to ask what GPT-3’s behavior reveals about humanity: art is a reflection of life, and this is what life is showing us.

Annotated Bibliography

Aratrika. “Future of Theatrical Production: Plays Written by Ai.” IndustryWired, August 31, 2021. 

Bradford, K. Tempest. “Writing the Other Roundtable: How to Stay in Your Lane – Writing Characters of Color.” Writing the Other, April 15, 2019. 

Berve, Caitlin. “Representation in Creative Writing: How to Write Minority Characters.” Ignited Ink Writing, LLC, February 16, 2021. 

Black, Mo. “Yes, You Should Be Afraid to Write ‘Diverse’ Characters.” Medium. Curiosity Never Killed the Writer, July 12, 2019. 

Fish, Tom. “Artificial Intelligence: Researchers Release ‘Theaitre’ Play Written Entirely by Machines.” August 4, 2020. 

Knight, Will. “AI Programs Are Learning to Exclude Some African-American Voices.” MIT Technology Review, April 2, 2020. 

Mateas, Michael. “Interactive Drama, Art and Artificial Intelligence.” Carnegie Mellon University, December 2002. 

Metz, Cade. “There Is a Racial Divide in Speech-Recognition Systems, Researchers Say.” The New York Times, March 23, 2020. 

Perrigo, Billy. “Artificial Intelligence Wrote a Play. It May Contain Racism.” Time, August 14, 2021. 

Vanian, Jonathan. “Eye on A.I.: How to Fix A.I.’s Diversity Crisis.” Fortune, June 7, 2021. 

Zaldivar, Francis Leo. “Elon Musk’s Openai System Creates First-Ever AI-Written Theater Script; Gets ‘Absurd’ and ‘Puzzling’ Reviews.” iTech Post, August 30, 2021. 

Zara, Christopher. “Diversity on Broadway? Not When It Comes to Playwrights.” Fast Company, March 5, 2019. 
