May 6th 2021, 1:45pm-3:15pm ET
Digital Culture and Algorithmic Governance
Chair: Sara Bannerman (McMaster University)
Format: Pre-recorded with live Q&A
Alex Mayhew (Western University)
Paper Title: Social Selection of Algorithms
Algorithms are increasingly used to govern complex decisions, such as criminal sentencing and insurance premiums. This growing influence has brought the question of algorithmic bias to prominence. If the data we generate to power these algorithms captures our prejudices, then it is little surprise that the algorithms reproduce those same prejudices. Worse still, most algorithms are currently black boxes, leaving this bias hidden.
One potential response to this challenge is Explainable AI (XAI): often these are algorithms that analyze other algorithms and explain their ‘reasoning’, exposing the hidden bias and enabling us to respond. While this is a promising approach, it poses its own challenges. Any XAI system would itself be an algorithm, subject to prejudiced data and biased outcomes.
But the case of XAI reveals another challenge. Like any software, XAI systems will increasingly exist as a population, with each new generation preferentially derived from the particular versions in use in the previous one. This creates an evolutionary environment in which selection is influenced by nebulous social measures, such as user satisfaction or mollification. Cognitive science has shown that humans typically prefer coherence over truth. This could lead XAI to optimize for what is convincing rather than what is true, without anyone intending such an outcome.
As computer system designers well know, the machine does what you tell it to do, not what you want it to do. Here, however, the ‘telling’ is not an intentional act. Compounding the problem, such a system can still generate results that are superficially acceptable to its stakeholders. The evolutionary perspective can be helpful in framing and understanding some of the challenges surrounding XAI.
Fenwick McKelvey and Robert Hunt (Concordia University) (Robert Hunt presenting)
Paper Title: Algorithmic Regulation in the Era of Platform Governance
In 2017, YouTube re-built its artificially intelligent (AI) recommendation algorithm “to maximize users’ engagement over time by predicting which recommendations would expand their tastes and get them to watch not just one more video but many more” (Roose, 2019). According to the New York Times, Reinforce, YouTube's new reinforcement learning-driven recommender, changed which videos the site suggested to viewers and arguably led them to watch more extreme videos. We argue that deploying Reinforce was as much an act of cultural and media policymaking as an act of programming. Platforms rely on AI algorithms to filter, rank, recommend, sort, classify, and promote information and content. Unlike the debatable but public policies that motivate governments, these black-boxed algorithmic regulations are driven by inscrutable, profit-oriented optimizations, leaving this emerging area of cultural policy largely unaccountable.
Our paper provides a framework for evaluating the barriers to holding algorithms accountable as instruments of cultural policy. Drawing on cultural studies’ use of circuits and moments to interpret culture, we identify three moments—input, code, and context—to evaluate how different algorithms act as cultural and media policy. These moments do not simply offer the chance to make algorithmic governance transparent, but provide opportunities to situate algorithms within larger systems of power and structural inequity. Building on our analytical framework, we conclude by offering recommendations for policymakers and other stakeholders to begin to address the algorithmic regulation of culture. We provide suggestions for governments (either national or international governmental organizations), cultural institutions (such as civil society, independent public media, or unions of cultural workers) and technology and media firms (such as content platforms and social media companies).
James Meese (RMIT University)
Paper Title: News, algorithms and regulatory responses in Australia and Europe
The ability of social media platforms to independently adjust their recommender systems and preference certain types of content over others has been an issue of ongoing concern, particularly with respect to journalism. Scholars, policymakers and the media industry have grown increasingly worried about the critical gatekeeping role that platforms play, and about whether the provision of platform metrics influences how journalism is produced (Tandoc Jr. 2014). In recent years, this concern has turned into action. Regional groupings and individual countries have introduced a number of interventions to regulate recommender systems and other forms of algorithmic distribution.
The leading reform agenda has arguably emerged from the European Union through the proposed Digital Services Act. The Act requires specified platforms to outline “the main parameters of their recommender system and the options for users to modify or influence those parameters” (Helberger 2021). Australia has also embraced a form of enforced transparency through its News Media Bargaining Code. The reform introduced a standard requiring platforms to give news outlets advance notification of algorithm changes that would affect referral traffic to news content. Other interventions address algorithmic distribution from the perspective of media diversity. Germany and Australia have both introduced non-discrimination requirements that require platforms deemed large enough not to unfairly discriminate between news outlets (Nelson and Jaursch 2020). Elsewhere, in Russia, a reform attempts to make algorithmic systems responsible for the circulation of ‘fake news’, but this law operates within a fraught political context. In this paper, I outline these reforms, develop a taxonomy of the dominant regulatory approaches and assess their effectiveness.
Charlotte Panneton (Western University)
Paper Title: Regulating Twitch.tv: Prefiguring the Policy Implications of Game-Oriented Live-Streaming
Amazon’s Twitch.tv is a gaming-oriented, online live-streaming platform that enables users to broadcast/stream themselves playing video games and interact with viewers for free and in real time. Twitch is branded as an interest-specific alternative to traditional broadcast media, catering to niche audiences centered around video games and e-sports.
However, the emergence and growth of Twitch, and platforms like it, have challenged existing understandings of ownership, intellectual property and fair use related to user-generated content. Game-oriented live-streaming platforms complicate the legal and regulatory precedents set by more generalized video-sharing platforms such as YouTube. This is due, first, to the position they occupy within the video game industry and, second, to their technical features. Twitch has become central to the operation of competitive gaming (e-sports) and to the promotional initiatives of game developers and publishers. Further, the live-streaming affordances of these platforms pose their own difficulties with regard to content regulation and moderation.
The aim of this presentation is to highlight the multitude of legal ‘grey areas’ involved in the use of gaming-oriented live-streaming platforms. The presentation will describe the commercial and technical dimensions of Twitch, addressing its ownership structure, its partnerships within the video game industry, and its user activity. Ultimately, its goal is to problematize notions of intellectual property and copyright as they manifest on Twitch, and to incite a discussion about the potential role of cultural policy amid the rise of these complex media platforms.
Sara Bannerman, Emmanuel Appiah, Fizza Kulvi, and Charnjot Shokar (McMaster University)
Paper Title: Platform lobbying and relational sovereignty in Canada
This paper examines the relationship between digital platforms and the Canadian government. Drawing on the concepts of relationality and relational sovereignty outlined in part one, it examines the complex interactions between the Canadian government and digital platforms (Amazon, Facebook, Google and its sister company Sidewalk Labs, Netflix, and Twitter), highlighting the ways that digital platforms are simultaneously the objects of regulation, stakeholders and lobbyists in shaping regulation, and tools used in regulation and government service provision. Our examination is based on records contained in the Canadian lobbying registry and a large corpus of government documents obtained via access to information requests; our method is outlined in part two. In part three, we present an overview of the interactions between the Canadian government and digital platforms from 2008 to the present, drawing on records in the lobbying registry and our corpus of access to information documents. We find that many areas of federal regulation and governance are increasingly mediated by digital platforms. We note growing numbers of meetings and engagements between foreign digital platforms and government officials. We see that platforms provide government with the knowledge required to regulate or govern (Beretta 2020, 142). We find that platforms influence regulations, or play a role in regulation, beyond simple lobbying.