21 February 2025
Digital platforms empower civic engagement and activism, but they also pose serious risks, such as government surveillance, targeted cyberattacks, and sophisticated disinformation tactics. Ransomware attacks on healthcare systems, government networks, and infrastructure illustrate how cyber threats can disrupt essential services and national security. Disinformation campaigns, amplified by AI-generated deepfakes and bot-driven misinformation, have been used to shape political narratives, weaken trust in democratic institutions, and incite social divisions.
Our latest research brief, ‘Behind the Lens: Exploring the Problematic Intersection of Surveillance, Cyber Targeting, and Disinformation’, examines the complex relationship between digital technologies and their misuse in surveillance, cyberattacks, and disinformation campaigns. This joint study, written by Erica Harper, Jonathan Andrew, Florence Foster, Joshua Niyo, Beatrice Meretti and Catherine Sturgess, details how the increasing reliance on digital systems has made them primary targets and tools for controlling societies, with deep implications for human rights, human agency and global security.
Using global examples, the authors highlight the role of technology companies in regulating these threats and emphasize the need for a balanced approach that preserves digital freedoms while implementing safeguards. The research brief concludes by outlining policy recommendations for governments to enforce rights-based regulations, for private companies to enhance transparency and ethical oversight, and for civil society to advocate for digital rights.
This report is part of the Academy’s broader work related to new technologies, digitalization, and big data. Our research in this domain explores whether these new developments are compatible with existing rules and whether international human rights law and IHL continue to provide the level of protection they should.
Our research brief 'Neurotechnology - Integrating Human Rights in Regulation' examines the human rights challenges posed by the rapid development of neurotechnology.
The Geneva Human Rights Platform contributed to key discussions on AI, human rights, and sustainable digital governance at the World Economic Forum 2025.
Co-hosted with the ICRC, this event aims to enhance the capacity of academics to teach and research international humanitarian law, while also equipping policymakers with an in-depth understanding of ongoing legal debates.
Participants in this training course will be introduced to the major international and regional instruments for the promotion of human rights, as well as international environmental law and its implementation and enforcement mechanisms.
This training course will delve into the means and mechanisms through which national actors can best coordinate their human rights monitoring and implementation efforts, enabling them to strategically navigate the UN human rights system and use the various mechanisms available in their day-to-day work.
This project addresses the human rights implications stemming from the development of neurotechnology for commercial, non-therapeutic ends, and is based on a partnership between the Geneva Academy, the Geneva University Neurocentre and the UN Human Rights Council Advisory Committee.
The Geneva Human Rights Platform collaborates with a series of actors to reflect on the implementation of international human rights norms at the local level and to propose solutions to improve local uptake of recommendations and decisions taken by Geneva-based human rights bodies.