Collective Trust? Comparative Commons-Centric Design for AI and Algorithmic Trust
Trust has emerged as a central concern in contemporary societies due in part to the influential, yet opaque, role played by algorithmic decision-making. This heightened significance of trust and trustworthiness is evident in official discourses and policies surrounding Artificial Intelligence (AI), not only in Western societies but also globally. There is a growing agenda to foster trust in AI among the general public, as emphasized by state-funded research in the United States and calls from the European Commission for trustworthy algorithms from a societal perspective. These demands give rise to more fundamental questions regarding the nature of trust and the role of digital tools: What does it mean for trust to be built and experienced collectively in democratic societies? How can and should the role and potential of algorithmic tools be conceptualized in shaping, and potentially reconditioning, trust?
This project’s thesis is that “collective trust” is ontologically different from private trust (individual-to-individual), and essential to democratic practice. New and re-emerging experiments with direct democracy – often developing in tandem with global networked communications and intelligent computing – require practices of decision-making, sharing of authority, and security/access control that can help to conceptualize collective trust in design, and can perhaps be further supported or developed through algorithmic tools. At the same time, horizontal community groups and social movements are often stymied in their democratic practices at the layer of digital security, where power bottlenecks around the holder of the passwords or the managers of the social media accounts, and struggles ensue. More challenges arise as these groups scale up across distances, work remotely, or automate decisions, requiring the use of digital tools for a form of consensus-minded decision-making that generally does not rely on binary logic. As projects in collective self-governance continue to advance, globalize, and digitize, these problems will only become more acute and a greater threat to progressive democratic practice. While research proliferates on “trustworthy AI” from a marketing perspective, and algorithmic governance research considers how AI can simplify or inform existing bureaucratic processes, this research project seeks to understand the co-formation of collective trust, democratic practice, and algorithmic decision-making.
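The password bottleneck described above can be contrasted with threshold (“k-of-n”) access designs, in which no single member can unlock a shared resource alone. The sketch below is only an illustrative toy, not the COLBAC design or any scheme proposed in this project; all names and the approval-counting mechanism are assumptions for exposition (production designs would rely on threshold cryptography rather than a plain counter):

```python
# Illustrative k-of-n quorum gate for a shared resource.
# Toy sketch only: real horizontal access-control designs would use
# threshold cryptography (e.g., secret sharing), not an approval counter.

class QuorumGate:
    def __init__(self, members, threshold):
        # A group of members and the number of distinct approvals
        # required before the shared resource is unlocked.
        if not 1 <= threshold <= len(members):
            raise ValueError("threshold must be between 1 and len(members)")
        self.members = set(members)
        self.threshold = threshold
        self.approvals = set()

    def approve(self, member):
        # Only recognized group members may contribute an approval.
        if member not in self.members:
            raise PermissionError(f"{member} is not a group member")
        self.approvals.add(member)

    def is_open(self):
        # Access is granted only when enough distinct members concur,
        # so no single password-holder can gatekeep the group.
        return len(self.approvals) >= self.threshold
```

For example, a four-person collective with a threshold of three stays locked after two approvals and opens on the third, distributing the authority that would otherwise pool around one credential-holder.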
Jessica Feldman’s research project bridges theory and practice, combining comparative philosophy, ethnographic work, and values-in-design analysis, to contribute to her work on collective-based access control design, a related artistic project, and a forthcoming book on trust and algorithms.
Feldman, J. 2022. “The Street, the Square, and the Net: How Urban Activists Make and Use Digital Networks.” In Data Justice and the Right to the City, edited by Morgan Currie, Callum McGregor, and Jeremy Knox. Edinburgh: Edinburgh University Press.
Gallagher, K., S. Torres-Arias, N. Memon, and J. Feldman. 2021. “COLBAC: Shifting Cybersecurity from Hierarchical to Horizontal Designs.” In New Security Paradigms Workshop (NSPW ’21), 13–27. New York: Association for Computing Machinery. https://doi.org/10.1145/3498891.3498903.
Feldman, J. 2020. “Listening and Falling Silent: Towards Technics of Collectivity.” Sociologica 14 (2): 5–12. https://doi.org/10.6092/issn.1971-8853/11286.