Fazl Barez

AI Safety and Interpretability Researcher

With over five years of experience in artificial intelligence research, I specialize in developing advanced yet interpretable machine learning systems. I have previously consulted for think tanks, charities, and education and financial companies.

Consulting Expertise

  • Custom ML solution architecture - Collaborating with organizations to design impactful ML pipelines that are robust, accurate, and transparent.
  • Algorithmic auditing - Performing rigorous assessments of AI systems to identify issues related to bias, interpretability, and ethical risks.
  • AI governance and oversight - Advising companies on responsible AI practices and helping craft internal policies and best practices.
  • Thought leadership - Staying current with the latest developments in AI safety and algorithmic transparency by actively publishing, reviewing papers, and running workshops.

As an experienced researcher, I have served as a reviewer for conferences including NeurIPS, ICLR, and ICML. I also mentor promising young academics looking to enter this exciting field.

I welcome inquiries from forward-thinking companies seeking to integrate safe and trustworthy AI.

Get in touch if you'd like to work together.