Algorithmic Decision-Making Systems and Risk: An Intersectional Approach
Doctoral thesis

Date
2025
Collections
PhD theses (TN-ISØP)
Original version
Algorithmic Decision-Making Systems and Risk: An Intersectional Approach by Joel Tyler Alba, Stavanger: University of Stavanger, 2025 (PhD thesis UiS, no. 823)
Abstract
In theory, technological advancements shield society against risks. Whether detecting imminent disasters, facilitating climate change adaptation, advancing healthcare recognition systems, or unlocking new sociotechnical innovations, such progress is designed to mitigate adverse effects.
As we enter a new age of human-machine synergy, we are witnessing extraordinary developments in our ability to internalize, process, and analyze data. Never has this phenomenon been more evident than with the introduction of Algorithmic Decision-Making Systems (ADMS). However, an unfortunate reality accompanies the increasing embedding of algorithmic systems within our decision-making arenas: as Artificial Intelligence (AI) systems continue to evolve in sync with our decision-making processes, society faces new, unprecedented risks that upend traditional risk mitigation techniques.
Despite a growing understanding of algorithmic systems and the interplay between society and technology, policymakers, technocrats, researchers, and civil society continue to rely on, for lack of a better word, ‘outdated’ societal safety and risk management frameworks. I maintain that this reliance on outdated frameworks leads to misunderstood, mischaracterized, and mismanaged algorithmic risks in the social arena. This thesis argues that the onset and prevalence of ADMS, in conjunction with past risk mitigation frameworks, gives rise to a new risk typology. I define these risks as ‘the existential risks of ADMS.’
Notably, I do not frame the existential risks of ADMS in terms of human extinction à la Beck (1992) or Giddens (1990). Instead, I argue that the existential risks of ADMS are a phenomenon in which latent biases become entrenched within ADMS, exacerbating discriminatory practices and attitudes. This project contends that the existential risks of ADMS manifest as targeted prejudice via unfair policy provisions, leading to a loss of community, identity, and sociopolitical power and agency for marginalized communities.
Evidence of this algorithmic exacerbation is seen across various fields, from predictive policing, healthcare, and fiduciary sectors to recidivism and facial recognition (Angwin et al., 2016; Bartlett et al., 2022; Celi et al., 2022; Chapman et al., 2022; Miron et al., 2020; Perkowitz, 2021; Shapiro, 2017, 2019). If left unaddressed, those marginalized may face further democratic disenfranchisement, perpetuating historical and structural inequities (Danks & London, 2017; Gloria, 2021; Jackson, 2021; Resseguier, 2023; Richardson, 2021; Varona & Suarez, 2023).
Though current research focuses on the technical aspects of ADMS, including quantitative analyses of bias and the algorithmic principles of Fairness, Accountability, Transparency, and Ethics (FATE), I believe an alternative lens exists for investigating these systemic injustices (Baum, 2017; Datta et al., 2016; Désigaud, 2021; Jarrahi et al., 2023; Mehrabi et al., 2019; Starke et al., 2022).
While these studies offer valuable insights into the complexity of “Black Box” technologies, they do not sufficiently address the broader societal and historical implications behind ADMS integration (Klugman, 2021). This project argues that intersectional analysis may provide a more holistic framework for understanding and addressing the existential risks posed by ADMS.
To bridge this technical-intersectional gap, this dissertation is predominantly rooted in theoretical exploration, supplemented by a selection of empirical case studies. Through these methodologies, I contend that adopting an intersectional approach enables policymakers to move beyond purely quantitative understandings of ADMS. The goal is to demonstrate that an intersectional lens not only reveals the multidimensional impacts of ADMS but also provides a critical framework for reshaping risk governance to account for the lived experiences of those disproportionately affected.
Unfortunately, contemporary risk management and societal safety frameworks are not constructed to identify or address the underlying historical and institutionalized power dynamics that buttress ADMS. Much like the “shift toward resilience” (Aven, 2020, p. 188), this project argues that ADMS may be reconceptualized through a broader lens, emphasizing a more comprehensive understanding of risks, their effects, and the risk perceptions of those in the crosshairs of algorithmic harms.
At the time of writing, little to no research has adopted and operationalized intersectional analysis within risk management and societal safety frameworks, especially with respect to ADMS. Therefore, by drawing on intersectionality, risk theory, and the Pressure and Release (PAR) frameworks, this project proposes that ADMS need not be assessed through a purely technical lens.
To assist in this endeavor, this project explores the following research questions in four papers (See Part II):
• How do risk theory, the Risk Society, intersectionality, and posthumanism interlink to challenge traditional conceptions of ADMS?
• How has civil society evolved in response to the advent and proliferation of the Risk Society?
• How can a decolonial-intersectional lens be applied to Algorithmic Decision-Making Systems (ADMS) to reveal and address the risks perpetuated by algorithmic power structures and processes?
• How can the Pressure and Release (PAR) frameworks be adapted to better situate and locate the societal vulnerabilities underlying ADMS existential risks?
Current understandings of ADMS do not consider these research questions and, therefore, reside within an intersectionality-free paradigm. As such, this thesis generates new insights to address the existential risks of ADMS by systematically analyzing ADMS through an intersectional lens. The accumulated work provides theoretical and empirical recommendations to address these existential risks by proposing new forms of algorithmic governance, decolonial interpretations of ADMS, and emancipatory decision-making practices.
The research culminates in adapting the PAR frameworks to re-envision the existential risks of ADMS as complex amplifications of societal vulnerabilities already present in our institutions. These arguments are developed through a dynamic re-understanding of the algorithmic and societal structures, norms, and practices that systematically disadvantage certain groups based on their identities, social location, and hegemonic conceptions of the self, knowledge, processes, power, and agency. I challenge traditional ADMS conceptions and posit how intersectional analysis may lay bare the underlying Root Causes, Dynamic Pressures, and Unsafe Conditions that serve to exacerbate entrenched biases and modes of discrimination (Blaikie et al., 2004; Collins, 1990; Richardson, 2021; Wisner et al., 2012).
I demonstrate that the adapted PAR framework unlocks actionable insights and identifies hidden governance opportunities so that policymakers can make risk-informed decisions to build trust. By understanding the existential risks of ADMS as complex amplifications of societal vulnerabilities, policymakers, researchers, and civil society may be able to strengthen the tenets of algorithmic FATE while also instituting equitable ADMS policies.
Recognizing the existential risks of ADMS as byproducts of societal vulnerabilities requires new approaches to identifying, assessing, and governing the foundational structures that interlink to amplify and/or attenuate algorithmic risks. This project asserts that risk governance in the 21st century requires intersectionally focused societal safety and risk management frameworks to manage ADMS existential risks.
Of note, I do not claim, nor am I under the impression, that this project is a panacea for algorithmic injustice. The goal is to provide a new lens for understanding how and why these institutional injustices continue to plague our marginalized and most vulnerable communities. Simply put, this project aims to add new knowledge on identifying, managing, and governing ADMS risks outside of purely technical analyses.
That said, further research is required to determine the practical applicability of the project when addressing the existential risks of ADMS. In particular, introducing the adapted PAR frameworks in various ADMS sectors should provide a more tangible understanding of its potential utility in establishing a more equitable decision-making system. Ideally, this project’s operationalization of intersectional analysis within societal safety and risk management frameworks may spur new conceptions of how society adopts these technologies into our decision-making arenas.
Description
PhD thesis in Risk management and societal safety
Has parts
Paper I: Alba, J.T., & Scharffscher, K. (2024). Intersectionality’s New Frontier: Artificial Intelligence, Risk Society, and the Post-Humanist Self. Constellations Journal, under review. Not included in the repository.
Paper II: Alba, J.T. (2023). Intersectionality Incarnate: A Case Study of Civil Society, Social Capital, and its Metamorphosis. Journal of Civil Society, 1-33. https://doi.org/10.1080/17448689.2023.2226253
Paper III: Alba, J.T. (2024). Insights into Algorithmic Decision-Making Systems via a Decolonial-Intersectional Lens: A Cross-Analysis Case Study. Digital Society: Ethics, Socio-Legal and Governance of Digital Technology. https://doi.org/10.1007/s44206-024-00144-9
Paper IV: Alba, J.T., & Scharffscher, K. (2024). Risk Governance in the 21st Century: Addressing the Existential Risks of Algorithmic Decision-Making Systems. AI & Society, undergoing second round of review. Not included in the repository.
Publisher
University of Stavanger, Norway
Series
PhD thesis UiS;823