Hi, I'm Fazl Barez
AI safety and interpretability researcher.
I'm a Research Fellow at the University of Oxford, leading research in AI safety and interpretability. Through collaboration across academia, AGI labs, AI Safety Institutes, government organizations, and the public sector, I work to accelerate progress in AI safety research and translate technical insights into practical solutions that benefit society. My research has helped shape both academic directions and industry safety practices.
My academic affiliations include the Centre for the Study of Existential Risk (CSER) and the Kruger AI Safety Lab (KASL) at the University of Cambridge, the Digital Trust Centre at Nanyang Technological University, and the School of Informatics at the University of Edinburgh. As a member of ELLIS (the European Laboratory for Learning and Intelligent Systems), I contribute to advancing AI safety research across Europe. I also serve as a research consultant at Anthropic.
Prior to my current roles, I served as Co-Director and Advisor at Apart Research and as a Technology and Security Policy Fellow at RAND Corporation. Earlier, I developed interpretability solutions at Amazon and The DataLab, built recommender systems at Huawei, and created financial forecasting tools at RBS Group.