Ihor SAMOKHODSKYI, Ukraine
PROJECT: Weaponised Algorithms: Auditing AI in the Age of Conflict and Propaganda

Ihor Samokhodskyi is a public policy strategist and AI governance practitioner focused on the intersection of AI and policy – building applied AI tools for public policy and training policy teams to integrate AI into everyday decision-making.

With over a decade of experience at the intersection of technology, law, and governance, he works on turning AI from an abstract policy topic into operational capacity for institutions. Ihor leads the ICT Policy Team at the Better Regulation Delivery Office (BRDO), where he has contributed to Ukraine’s alignment with the EU AI Act and broader digital legislation under wartime conditions.

His work combines policy design with hands-on system building. He has led the development of AI-powered tools that audit court decisions for bias and inconsistencies, built automated transparency channels for judicial data, and created AI-assisted legal translation workflows that accelerate EU integration. Across these initiatives, his focus is on low-resource, high-impact AI adoption that institutions can realistically deploy.

Ihor is the founder of Policy Genome, a platform developing applied AI governance tools and practical training for policymakers, regulators, journalists, and civil society across Europe. Through Policy Genome, he works with teams to design AI workflows, quality control mechanisms, and safeguards against misuse – moving AI governance from compliance language to daily practice.

Through his EaP Fellowship project, Weaponised Algorithms, Ihor is developing a replicable AI audit methodology to detect algorithmic bias, cross-language narrative drift, and propaganda in AI systems – supporting journalists, researchers, and civic actors in protecting information ecosystems during conflict.
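To make "cross-language narrative drift" concrete, here is a toy sketch of the idea (an illustration for this write-up, not the project's actual methodology): a model is asked the same politically sensitive question in two languages, the answers are translated into a common language, and a crude lexical distance flags cases where the model tells a different story depending on the language. The example strings and the Jaccard-distance metric are assumptions; a real audit would use semantic embeddings and human review.

```python
# Toy sketch: flag cross-language narrative drift by comparing a model's
# answers to the same question asked in two languages (both already
# translated into English). Token-set Jaccard distance is used here as a
# deliberately simple drift signal.

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into a set of word tokens."""
    return set(text.lower().split())

def drift_score(answer_a: str, answer_b: str) -> float:
    """Return 0.0 for identical token sets, 1.0 for fully disjoint answers."""
    a, b = tokenize(answer_a), tokenize(answer_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Hypothetical answers to one question, asked in English vs. Ukrainian:
# a high score suggests the model's framing depends on the query language.
en_answer = "the city was shelled by invading forces"
uk_answer = "the city was damaged during the conflict"
print(round(drift_score(en_answer, uk_answer), 2))  # → 0.7
```

In practice the interesting output is not any single score but the pattern of scores across many question–language pairs, which is what a replicable audit protocol would document.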

He collaborates internationally with partners including the German Marshall Fund, GIZ, and RUSI, and serves on the Public Council of Ukraine’s Ministry of Digital Transformation.

His audit algorithm and research on AI-enabled cognitive warfare have gained international attention, highlighting the security implications of language-conditioned AI behavior for European institutions.

“AI systems are the new infrastructure of reality – they shape what societies see, believe, and do. Governance must move beyond abstract theory into operational tools. My work is about building the practical systems that allow democracies to audit AI, understand its biases, and integrate it responsibly into the daily work of governing.” – Ihor SAMOKHODSKYI


Weaponised Algorithms: Auditing AI in the Age of Conflict and Propaganda

Fellowship Summary: Auditing how leading AI models respond to politically sensitive questions on conflicts and wars, disseminating the bias patterns found, and producing a replicable method for CSOs and journalists to hold AI systems accountable in fragile information environments.

Updates coming soon!

Fellowship Programs: 2025
Country: Ukraine
Areas of Interest: Advocacy, Awareness raising, Capacity development
Topics: Civic tech & digital transformation, Mitigating misinformation
Project duration: October 2025 - March 2026