What is someone with a doctorate in Cultural Studies doing at an artificial intelligence (AI) institute? A lot, actually. As a Knowledge Mobilization Advisor at Concordia’s Applied AI Institute, I translate feminist principles into AI research and development strategies. If this surprises you, consider how my colleagues (in both worlds) felt. You work where? You studied what? The path from studying feminist stand-up comedy—highlighting the social and political implications of shared laughter—to being a core part of Concordia’s Applied AI Institute (AI2) [1] is not straightforward, but it highlights the importance of interdisciplinary AI research. The story of my arrival, told here, demonstrates why the perspective of a feminist scholar is important and necessary for the future of AI research.
Interdisciplinary Institute
First, the institute: created in 2022, AI2 emerged in response to the surge of AI research and development taking place at Concordia University in Montreal. As a “second-wave” institute, the “applied” part of its name matters: we focus on real-world applications of artificial intelligence rather than abstract or theoretical research. Interdisciplinary in design and leadership, the institute aims to unite AI development by coordinating research across Concordia’s four faculties in collaboration with business, civil society, governments, and other researchers. Our guiding principles shape the relationships we form, the collaborations we pursue, the training we provide, and the research we invest in. Yet we are also aware that the AI ecosystem is relatively homogeneous, that its key actors and decision-makers are predominantly men (including at the institute), and that the masculine cultures they engender compel 41% of women who enter the tech field to quit [6].
Comedy Scholar Walks Into an AI Institute
What drew me to working with the Applied AI Institute? The primary responsibilities listed in the job posting included: “Develop strategies to promote and amplify under-represented perspectives and knowledge.” This is precisely the work I was already committed to as a feminist comedy producer with the Hysterics Collective, where we work to “transform stages into inclusive spaces,” hiring gender minorities to perform and amplifying their voices and perspectives [10].
While this may seem unrelated to AI, comedy, you may have heard, is similarly exclusionary, with audiences frequently asserting that women aren’t funny, or at least not as funny as men. While men are considered authoritative representatives of the universal human experience, amassing diverse audiences, gender minorities are perceived as having niche repertoires that make sense only to other women or non-binary people. The perceived relatability of comics diminishes as social categories of identity overlap. Gender is a spectrum, not a binary, and social categories related to gender, gender identity and expression, sexual orientation, ability, physical appearance, body size, race, age, religion, and economic status intersect in myriad ways to erect barriers to participation, both within comedy and within science, technology, engineering, and mathematics (STEM).
There is a clear parallel between conceptualizations of comedy clubs and AI labs as masculine spaces, and of jokes and technology as things for men. As Sutko writes, “[AI] technology often gets equated with ‘men’s power,’ while women and girls are portrayed as less technologically skilled and less interested than their male counterparts. Such stereotypes can contribute to the gender gap in women’s participation in related fields” [14].
And there certainly is a gender gap in women’s participation in AI and related fields. In 2001, women made up 21% of the tech workforce in Canada; by 2023, that figure had risen to just 24%. Women of color represent 2.5% of the Canadian tech workforce, and 1.4% of the tech workforce is Indigenous [8], [15]. Intersectional data—statistics that consider multiple social categories of identity—are rare, preventing a more fine-grained analysis.
Historical analysis by Mar Hicks suggests that the masculinization of computer science (and, more recently, AI) can be traced to industrialization in the 20th century, when women were discouraged from pursuing paid work in a rapidly expanding post-war economy [5]. This produced a long-standing division of labor along gender lines, the effects of which remain embedded in STEM sectors.
Despite this masculinization, and despite the biases embedded within AI systems, these systems are often represented as neutral technologies. The organization Better Images of AI provides a thorough overview of the visual representations of AI used in stock images and media, explaining how disembodied brains and humanoid robots perpetuate myths of abstracted, rational entities. These images are decontextualized, removing AI technologies from the labs in which they are designed, the people creating them, the data used to train them, and their real-world implications. As a result, there is no one to hold accountable for real and potential harms, and no visibility into the homogeneity of the research teams behind the scenes.
Sense Making
As the Knowledge Mobilization Advisor at AI2, I work to facilitate interdisciplinary research and to create programming that invites varied perspectives to engage with key topics in AI. This, too, aligns with comedy studies. A stand-up performance is a meaningful mechanism for translating one’s personal experience into an accessible format, engaging diverse audiences in different ways. For something to register as funny, it has to make sense. Similarly, for AI research to be engaging to audiences, it has to make sense, to feel relevant. How do we translate focused research topics into accessible and engaging material for audiences? One answer is to prioritize interdisciplinarity: when researchers collaborate across disciplinary lines, they are required to develop a shared language, to translate their methods and theoretical frameworks, and to approach an object of study from different perspectives. As such, interdisciplinarity is a key feature of the research and working groups we fund. One such project, directed toward achieving gender equity, is the working group Affecting Machines.
Affecting Machines: A Tool-Box for Gender Equality
Early in my role at AI2, it was clear that gender equality was an important goal for the institute, and I was asked to propose a research project in response to a call for proposals from the Commission des partenaires du marché du travail directed toward reducing the barriers women face in entering STEM fields. The proposal was successful, and in late spring 2023, the working group and research project Affecting Machines was launched. Through this initiative, I led an interdisciplinary team of research assistants and community consultants to create resources and lead workshops related to gender equality in AI.
The name Affecting Machines attempts to communicate the interactional relationships we form with machines: how we both affect and are affected by them as we engage with them. The research enacted a community-based action methodology to collaboratively establish meaningful trajectories for the project. I began by inviting community organizations, researchers, and administrators with shared mandates, all invested in more equitable tech sectors, to participate. Through a series of meetings, we collaboratively identified key issues and established a framework for how to proceed, committing to an intersectional approach.
Following a series of discussions, proposals, and feedback, we established three points of intervention into the AI pipeline where we could encourage gender equality: Representation, Professional Development, and Research. Committed to developing practical resources, we sought to increase the representation of gender minorities in the field, both to showcase their often side-lined contributions and to provide role models to those who tend to self-select out of STEM at a young age. For those who decide to pursue STEM/AI as a profession, we developed hiring and onboarding best practices aimed at reducing the percentage of gender minorities who ultimately leave the field. Finally, we developed a set of normative principles to guide research in the field. The following is a breakdown of these resources.
Gendered Representations
The contributions of gender minorities are underrepresented in STEM fields, despite a long history of involvement: in the early 1940s, women often worked as “human computers,” and the ENIAC Six [7] were the group of women who programmed ENIAC, the first general-purpose electronic computer. To counter this erasure, we created a set of trading cards [12], shown in Figure 1, that depict the contributions of gender minorities, and, taking inspiration from the Cyberfeminism Index [13], we mapped ten influential moments on a timeline. These function as both a visual and a discursive intervention into narratives of AI research, development, and deployment, and into the cultures these narratives uphold. Resources that visually represent and map out the field (ideally) serve to counter the perceived inevitability of AI and computing as masculine, heteronormative fields.
These resources formed the basis for a participatory workshop where we played with and gamified this deck through a series of hands-on activities. Figure 2 shows participants at the workshop using these cards. We also invited feedback and provided opportunities to reflect on the pedagogical value of these cards.
Professional Development
Compounding the gender gap is a confidence gap, especially pronounced in technical fields, which leads gender minorities to pass on jobs they are qualified for. Gender equity and equitable hiring practices aim to eliminate gender bias, ensuring equal opportunities for all candidates. To this end, we created hiring and onboarding best practices designed to invite gender minorities into the workplace and to create an environment where they can flourish.
Normative Principles for Equitable Research
Similar to the Open Data Charter [2] and Data Feminism [4], we worked to establish a set of normative principles that can be employed to address gender inequity. As Yarger et al. [16] wrote, “computer scientists are often not trained to consider social issues in context,” and this enables the reproduction of stereotypes present in datasets. The normative principles provide guidelines for mitigating this risk. First, they call for fostering a welcoming and inclusive AI work environment that adheres to the values of equality, respect, reflexivity, and accountability. Second, they highlight the need to critically re-evaluate machine learning (ML)/AI research practices and their social impacts to ensure more inclusive and socially responsible AI systems. These principles formed the basis for another participatory workshop, where attendees were given scenarios and asked to apply the principles in groups; Figures 3 and 4 show participants working through these principles and applying them to real-world scenarios. To anchor this discussion, Erin Hassard of Women on Web contributed a case study demonstrating how the organization’s work is affected by algorithmic suppression.
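To give a concrete sense of what the second principle can look like in practice, consider a simple audit of who appears in a training dataset before any model is built. The Python sketch below is a minimal, hypothetical illustration, not part of the Affecting Machines toolkit; the representation_audit helper, the “gender” field name, the reference shares, and the tolerance threshold are all assumptions chosen for demonstration.

```python
from collections import Counter

def representation_audit(records, group_key, reference_shares, tolerance=0.05):
    """Flag groups whose share of a dataset falls short of a reference share.

    records: iterable of dicts, each carrying a demographic label under group_key.
    reference_shares: dict mapping each group label to its expected population share.
    tolerance: allowed shortfall before a group is flagged as underrepresented.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed_share": round(observed, 3),
            "expected_share": expected,
            "underrepresented": (expected - observed) > tolerance,
        }
    return report

# Illustrative usage with made-up records (all labels and numbers are hypothetical):
sample = (
    [{"gender": "woman"}] * 18
    + [{"gender": "man"}] * 78
    + [{"gender": "nonbinary"}] * 4
)
print(representation_audit(sample, "gender", {"woman": 0.50, "man": 0.48, "nonbinary": 0.02}))
```

An audit like this does not fix a dataset on its own, but it makes representational gaps visible early, which is exactly the point at which the principles ask researchers to intervene.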
What’s Next?
Since we have yet to solve gender inequality, Affecting Machines remains an active working group within the institute, and we are turning our focus inward. More specifically, we are working to achieve gender equality within the institute itself, for example, by finding ways to be proactive about hiring a diverse tech team. We have already met important goals, such as hiring a Responsible AI Coordinator who works closely with our AI Adoption Team to ensure that responsible AI is embedded into the process from development to deployment rather than treated as a post-hoc consideration.
Additionally, in partnership with Software Engineering for AI (CREATE SE4AI), we launched a Gender Equity Mentoring in Artificial Intelligence (GEMinAI) Program [9]. Through this program, we connect women and gender non-conforming undergraduate and graduate students with AI professionals for support, encouragement, and assistance along their career journey. Our first year has been a success, and we are looking forward to launching again in Fall 2024.
Achieving gender equality and responsible AI will not be easy, but as an institute, we are committed to continued progress. I am proud of our accomplishments, and I look forward to our continued efforts on this front.
Author Information
Lindsay Rodgers is the Knowledge Mobilization Advisor at the Applied AI Institute (AI2), Concordia University, Montreal, QC H3G 1M8, Canada. She has experience in qualitative and participatory action-based research methods, and she is always thinking about the interpretive frameworks we use to make sense of our experiences. Rodgers holds a PhD from Queen’s University, Kingston, ON, Canada, where her dissertation performed an interdisciplinary analysis of the discursive structures and affective politics of stand-up comedy. She is a member of the Hysterics Collective. Email: lindsay.rodgers@concordia.ca.