Peer-reviewed publications

Roos, C. (2026). Resisting Big Tech: Countergovernance and the future of AI democracy. In N. A. Smuha, V. Hendrickx, & J. Petroons (Eds.), Blog symposium 2026 (Law, Ethics and Policy of AI Blog, KU Leuven) (p. 34). https://www.law.kuleuven.be/ai-summer-school/blogpost/Blogposts/symposium-on-ai-and-democracy_law-ethics-and-policy-of-ai-blog_march.pdf

This article examines Big Tech as a form of political power that reshapes democratic life and the governance of artificial intelligence. It argues that contemporary AI governance regimes privilege corporate and state interests while failing to protect collective rights, particularly those of marginalized communities. Drawing on empirical cases from Brazil, Canada, and the United States, the article demonstrates how platform infrastructures, weakened content moderation, data extractivism, and military entanglements reinforce structural inequalities and expand corporate influence beyond traditional state boundaries. Building on agonistic democratic theory and the concept of countergovernance, the article proposes a shift from consensus-driven and design-centered approaches toward institutionalized forms of contestation, oversight, and collective judgment. It highlights how civil society mobilizations, legal actions, and grassroots organizing can challenge corporate dominance, disrupt political fatalism, and reclaim democratic agency in AI governance. The analysis further engages with the concept of AI countergovernance to emphasize the need to address not only technological systems, but also the political and economic infrastructures that sustain them.

Melo, C. de O., & Roos, C. (2024, April). Desenhando organizações com equidade: Inovações de gênero para além do 50:50 [Designing organizations with equity: Gender innovations beyond 50:50]. Computação Brasil. https://doi.org/10.5753/compbr.2021.44.4436

This article examines the limitations of gender parity approaches in organizational contexts, arguing that numerical targets such as the “50:50” model are insufficient to achieve substantive gender equity. Drawing on empirical data from the Brazilian technology sector and interdisciplinary research on gender and work, the article demonstrates how structural barriers emerge across the entire career pipeline, from early socialization and education to hiring, retention, and advancement, resulting in persistent inequalities despite formal commitments to diversity. The analysis highlights how organizational cultures, recruitment practices, and evaluation systems often reproduce gender biases, contributing to exclusionary environments that disproportionately affect women, particularly those from marginalized backgrounds. It engages with the concept of gendered innovations to propose a shift toward redesigning organizational structures and processes through an intersectional lens, incorporating diverse lived experiences into decision-making and institutional design. The article further emphasizes the role of transparency, accountability, and public policy frameworks, such as gender mainstreaming, in fostering more sustainable and systemic change. It concludes that advancing gender equity requires coordinated efforts across organizations, ecosystems, and governance structures, moving beyond symbolic representation toward transformative institutional redesign.

Media and policy publications (selection)

Roos, C. (2025, May 26). Gendered disinformation as infrastructure: How tech billionaires shape political power. Tech Policy Press. https://www.techpolicy.press/gendered-disinformation-as-infrastructure-how-tech-billionaires-shape-political-power/

This article conceptualizes gendered disinformation as a sociotechnical infrastructure embedded in the political economy of digital platforms. It argues that gendered disinformation operates through the intersection of misogyny, platform design, and political influence, functioning as a strategic mechanism to silence women, particularly those from marginalized groups, and restrict their participation in democratic spaces. Drawing on empirical examples from Brazil and transnational cases, the article demonstrates how platform architectures, algorithmic amplification, and monetization models actively sustain and profit from gender-based harassment and coordinated disinformation campaigns. It further examines how recent shifts in content moderation policies by major technology companies have contributed to the normalization of harmful content under the guise of neutrality and free expression. By positioning tech billionaires as political actors who shape both the boundaries and conditions of public debate, the article highlights the structural role of platforms in reinforcing democratic inequalities. It concludes that addressing gendered disinformation requires moving beyond content moderation toward a structural reconfiguration of platform governance, including stronger regulatory frameworks, accountability mechanisms, and a critical reassessment of the attention economy.

Roos, C. (2025, January 24). IA não pode ser sustentável se o impacto dos data centers não for reconhecido [AI cannot be sustainable if the impact of data centers goes unacknowledged]. Exame. https://exame.com/bussola/ia-nao-pode-ser-sustentavel-se-o-impacto-dos-data-centers-nao-for-reconhecido/

This article examines the environmental and social impacts of data centers as a critical yet often overlooked dimension of artificial intelligence governance. It argues that current narratives of “sustainable AI” obscure the material infrastructures that sustain large-scale computational systems, particularly their intensive consumption of water and energy and their disproportionate effects on vulnerable communities. Drawing on global examples, including cases from the United States, Chile, and China, the article highlights how the expansion of data infrastructures can exacerbate existing environmental inequalities and reproduce patterns of resource extraction. Building on emerging debates in AI ethics, the article engages with the concept of a “third wave” of AI ethics to advocate for a broader framework that integrates environmental justice and social responsibility into the assessment of technological systems. It also critiques corporate sustainability practices, such as the use of renewable energy certificates, which may create misleading impressions of environmental responsibility while masking the actual impact of operations. The article concludes that achieving truly sustainable AI requires stronger regulatory frameworks, greater transparency in resource consumption, and governance approaches that center environmental justice and community impact. Without addressing the material and ecological foundations of AI, claims of sustainability risk reinforcing existing inequalities rather than mitigating them.

Roos, C. (2025, March 14). Quando a liberdade se torna uma ferramenta de exclusão [When freedom becomes a tool of exclusion]. Nexo Jornal. https://www.nexojornal.com.br/redes-sociais-regulacao-liberdade-censura-2

This article examines how the discourse of freedom of expression has been increasingly mobilized as a mechanism of exclusion in contemporary digital and political contexts. It argues that the deregulation of digital platforms, particularly in the United States, combined with state-led attacks on diversity, academic freedom, and media independence, contributes to the systematic silencing of marginalized voices and the erosion of democratic debate. Drawing on recent policy developments and global trends, the article shows how language itself becomes a site of political struggle, where terms related to gender, equity, and human rights are strategically removed or reframed to legitimize exclusionary agendas. The analysis highlights the role of digital platforms in amplifying disinformation and polarization, particularly through the weakening of content moderation and the monetization of misogynistic and anti-LGBTQIA+ content. It further demonstrates how these dynamics are embedded in broader structures of surveillance capitalism, where corporate actors shape information flows and the conditions of public discourse. The article concludes that the defense of freedom of expression requires rethinking digital governance beyond a binary opposition between regulation and censorship. Instead, it calls for robust regulatory frameworks, democratic oversight, and participatory mechanisms capable of addressing structural inequalities and ensuring that digital spaces remain inclusive, plural, and accountable.

Roos, C. (2025, January 22). O futuro da inteligência artificial e a necessidade da ética relacional para uma governança inclusiva [The future of artificial intelligence and the need for relational ethics for inclusive governance]. IT Forum. https://itforum.com.br/noticias/inteligencia-artificial-etica-governanca-inclusiva/

This article examines the limitations of data-driven approaches to artificial intelligence by critically engaging with the process of datification and its implications for social inequality. It argues that the transformation of complex human experiences into quantifiable data often obscures contextual realities and reproduces historical patterns of exclusion embedded in datasets. As a result, AI systems risk reinforcing structural inequalities rather than mitigating them, challenging assumptions of algorithmic neutrality and objectivity. Building on debates in AI ethics, the article advocates for a shift from technical approaches to fairness toward a relational ethics framework grounded in care, inclusion, and social justice. It highlights the importance of participatory governance models, such as the proposal of an “AI Public Body” by Jude Browne, which seeks to incorporate the perspectives of communities most affected by AI systems into decision-making processes. By emphasizing deliberative and inclusive forms of governance, the article reframes AI as a sociopolitical infrastructure that requires democratic accountability. The article concludes that developing more equitable AI systems requires integrating ethical reflection with structural change, ensuring that governance frameworks address lived experiences, power asymmetries, and the broader societal impacts of technological systems.

Media inquiries

Available for interviews, expert commentary, and speaking engagements on topics related to AI governance, gender and democracy, platform power, and inclusive digital futures.