MI²RedTeam

MI²RedTeam analyses machine and deep learning predictive models through the lens of AI explainability, fairness, security and human trust. We develop methods and tools for explanatory model analysis and apply them in practice.

MI²RedTeam is a group of researchers experienced in explainable AI (XAI) who rigorously evaluate AI solutions to improve their transparency and security. We apply state-of-the-art methods and introduce new ones to tailor our analysis to the specific predictive task.

We openly collaborate on various topics related to explainable and interpretable machine learning. Feel free to reach out to us with research ideas and development opportunities. We help organizations better understand the vulnerabilities of their AI systems and take steps to mitigate them.

Our current core research topics include:

  • [ARES] Robustness of explanations and explanations for model robustness
  • [xSurvival] Explanatory analysis of machine learning survival models
  • [Large Model Analysis] Explanatory analysis of large models, e.g. transformers

Methods and methodologies introduced by our team:

Tools developed by our team:

  • DALEX, breakDown, auditor & modelStudio for explainable machine learning in R
  • dalex for explainable and fair machine learning in Python (a usage sketch follows this list)
  • survex for explaining machine learning survival models
  • fairmodels for fairness analysis of machine learning classification models
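
As a flavor of how these packages are used, the sketch below wraps a fitted classifier in a dalex Explainer and requests dataset-level, instance-level, and fairness explanations. It is a minimal, illustrative example rather than the team's official tutorial: the scikit-learn dataset, the model choice, and the synthetic protected attribute used in the fairness check are assumptions made for this snippet.

```python
# Minimal, illustrative dalex workflow; the dataset, model, and the synthetic
# `protected` attribute below are assumptions for this sketch, not part of the
# team's materials.
import numpy as np
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Wrap the fitted model in an Explainer, the central object in dalex
explainer = dx.Explainer(model, X, y, label="random forest")

# Dataset-level explanation: permutation-based variable importance
vi = explainer.model_parts()
print(vi.result.head())

# Instance-level explanation: break-down attributions for one observation
bd = explainer.predict_parts(X.iloc[[0]])
print(bd.result.head())

# Fairness audit; `protected` here is a synthetic attribute created purely for
# illustration (a real analysis would use an actual sensitive feature)
protected = np.where(X["mean radius"] > X["mean radius"].median(), "a", "b")
fobject = explainer.model_fairness(protected=protected, privileged="a")
fobject.fairness_check()
```

The same explainer-centred design carries over to the R packages: DALEX creates the explainer object, while survex and fairmodels extend the workflow to survival models and fairness audits, respectively.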

Applications supported by our team:

This initiative is generously supported by the following institutions.