MI²RedTeam
MI²RedTeam analyses machine and deep learning predictive models through the lens of AI explainability, fairness, security and human trust. We develop methods and tools for explanatory model analysis and apply them in practice.
MI²RedTeam is a group of researchers experienced in explainable AI (XAI) who rigorously evaluate AI solutions to improve their transparency and security. We apply state-of-the-art methods and introduce new ones to tailor our analysis to the specific predictive task.
We openly collaborate on topics related to explainable and interpretable machine learning, so feel free to reach out to us with research ideas and development opportunities. We help organizations better understand the vulnerabilities of their AI systems and take steps to mitigate them.
Our current core research topics include:
- [ARES] Attack-Resistant Explanations towards Secure AI, i.e. a critical evaluation of state-of-the-art analysis techniques
- [xSurvival] Explanatory analysis of machine learning survival models
- [Large Model Analysis] Explanatory analysis of large models, e.g. transformers
Methods and methodologies introduced by our team:
- Evaluating explanations of vision transformers
- InteractiveEMA towards human-model interaction in explainable machine learning for tabular data
- SurvSHAP(t) for time-dependent analysis of machine learning survival models
- LIMEcraft for human-guided visual explanations of deep neural networks
- Fooling PD & Manipulating SHAP for stress-testing widely-applied explanation methods
- Checklist towards responsible deep learning on medical images
- SAFE for lifting the interpretability-performance trade-off via automated feature engineering
- WildNLP for stress-testing deep learning models in NLP
- Explanatory Model Analysis towards comprehensive examination of predictive models
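To give a sense of the stress-testing line of work, here is a toy, self-contained sketch of why partial dependence (PD) can be manipulated: PD is averaged over a background dataset, so for models with feature interactions, whoever controls that dataset also steers the curve. This is only a simplified illustration of the idea behind Fooling PD, not the optimization-based attack from the paper, and the data and model below are synthetic assumptions.

```python
# Toy illustration (all data synthetic): PD depends on the background data,
# so for a model with interactions the explanation can be steered without
# touching the model. This is NOT the Fooling PD algorithm itself.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Synthetic data with an interaction: the effect of x1 depends on x2.
n = 2000
x1 = rng.normal(size=n)
x2 = rng.normal(loc=1.0, size=n)
y = x1 * x2 + 0.1 * rng.normal(size=n)
X = np.column_stack([x1, x2])

model = GradientBoostingRegressor(random_state=0).fit(X, y)

def partial_dependence(model, background, feature, grid):
    # Plain PD: average prediction with `feature` fixed at each grid value.
    values = []
    for v in grid:
        Xv = background.copy()
        Xv[:, feature] = v
        values.append(model.predict(Xv).mean())
    return np.array(values)

grid = np.linspace(-2, 2, 9)
pd_honest = partial_dependence(model, X, feature=0, grid=grid)

# "Poisoned" explanation: the model is untouched, but PD is computed on a
# background subset chosen to reverse the apparent effect of x1
# (here simply the rows with the smallest x2 values).
poisoned = X[np.argsort(X[:, 1])[:400]]
pd_poisoned = partial_dependence(model, poisoned, feature=0, grid=grid)

print("PD trend of x1, full background:    ", round(pd_honest[-1] - pd_honest[0], 2))
print("PD trend of x1, poisoned background:", round(pd_poisoned[-1] - pd_poisoned[0], 2))
# Same model, opposite story: explanations themselves need auditing.
```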
Tools developed by our team:
- DALEX, breakDown, auditor & modelStudio for explainable machine learning in R
- dalex for explainable and fair machine learning in Python (a usage sketch follows this list)
- survex dedicated to explaining machine learning survival models
- fairmodels for fairness analysis of machine learning classification models
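As a taste of how these tools fit together, below is a minimal sketch of a typical dalex workflow in Python on synthetic data; the dataset, column names, and the random forest model are illustrative assumptions rather than part of any MI²RedTeam project.

```python
# Minimal dalex sketch on synthetic data (all names below are illustrative).
import dalex as dx
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "income": rng.normal(5000, 1500, n),
    "gender": rng.choice(["male", "female"], n),
})
# Hypothetical binary target, loosely tied to income so the plots are non-trivial.
df["default"] = ((df["income"] < 4000) | (rng.random(n) < 0.2)).astype(int)

X = df[["age", "income"]]
y = df["default"]
model = RandomForestClassifier(random_state=0).fit(X, y)

# Wrap the fitted model in a unified explainer object.
explainer = dx.Explainer(model, X, y, label="random forest")

# Model-level (global) explanations.
explainer.model_parts().plot()                  # permutation-based importance
explainer.model_profile(type="partial").plot()  # partial dependence profiles

# Prediction-level (local) explanations for a single observation.
explainer.predict_parts(X.iloc[[0]], type="shap").plot()

# Fairness analysis across groups defined by the protected attribute.
fobject = explainer.model_fairness(protected=df["gender"], privileged="male")
fobject.fairness_check()   # reports which group-fairness criteria are violated
fobject.plot()
```

The same Explainer object drives both model-level and prediction-level explanations, mirroring the workflow described in Explanatory Model Analysis; the DALEX package in R follows the same grammar.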
Applications supported by our team:
- In medicine, we analyzed hundreds of models predicting, among others: survival in uveal melanoma eye cancer, survival in sepsis, type of lung cancer, lung cancer risk in screening, lung cancer mortality, COVID-19 mortality, hospital length of stay, and progression of Alzheimer’s disease.
- In credit scoring, we analyzed the transparency, auditability, and explainability of machine learning models.
- In football analytics, we analyzed expected goal models for performance analysis.
- …
This initiative is generously supported by the following institutions.