Risk Mitigation Jupyter Notebooks

Our mission at Holistic AI is to reduce risks connected to AI and data projects.

We introduce here the risk mitigation roadmaps, a set of guides that will help you mitigate some of the most common AI risks. A roadmap outlines the technical risk and presents potential solutions, usually composed of two or more steps. Each roadmap is accompanied by a Jupyter notebook available in this repository.

How to navigate the Roadmaps

We can think of AI risks as being divided into five areas: Efficacy, Robustness, Privacy, Bias and Explainability. For each of these verticals, we have created a guide explaining how to measure and mitigate the corresponding risk. The guides are linked below:

  • Efficacy: Risk that the system underperforms relative to its use case (see the sketch after this item).

      Improving generalisation through model validation

      Hyperparameter optimisation
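
For flavour, here is a minimal sketch of what the efficacy roadmaps cover: estimating generalisation with k-fold cross-validation and tuning hyperparameters with a small grid search. It assumes a scikit-learn workflow; the toy dataset, model and parameter grid are placeholders rather than the notebooks' actual code.

```python
# Minimal illustrative sketch (not the notebook code): cross-validated model
# validation plus a small hyperparameter grid search with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Estimate generalisation with 5-fold cross-validation rather than a single split.
baseline = RandomForestClassifier(random_state=0)
print("CV accuracy:", cross_val_score(baseline, X_train, y_train, cv=5).mean())

# Hyperparameter optimisation: exhaustive search over a small (assumed) grid,
# scored by cross-validation on the training set only.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5,
)
grid.fit(X_train, y_train)
print("Best parameters:", grid.best_params_)
print("Held-out accuracy:", grid.score(X_test, y_test))
```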

  • Robustness: Risk that the system fails in response to changes or attacks (see the sketch after this item).

      Handling dataset shift

      Adversarial training for robustness
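
As a flavour of the robustness roadmaps, the sketch below checks for dataset (covariate) shift with a "domain classifier": if a model can reliably tell training rows from new rows, the feature distribution has drifted. The synthetic data, model choice and AUC reading are illustrative assumptions, not the notebooks' code.

```python
# Minimal illustrative sketch: detecting covariate shift with a domain classifier.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, size=(1000, 5))  # original feature distribution
X_new = rng.normal(loc=0.5, size=(1000, 5))    # shifted "production" data

# Label the origin of each row and ask a classifier to separate the two sets.
X_all = np.vstack([X_train, X_new])
origin = np.concatenate([np.zeros(len(X_train), dtype=int),
                         np.ones(len(X_new), dtype=int)])

auc = cross_val_score(
    GradientBoostingClassifier(), X_all, origin, cv=5, scoring="roc_auc"
).mean()

# AUC near 0.5 means the two sets are indistinguishable; well above 0.5 signals shift.
print(f"Domain-classifier AUC: {auc:.2f}")
```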

  • Privacy: Risk that the system is susceptible to leakage of personal or critical data (see the sketch after this item).

      Data minimization techniques
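
As one concrete data-minimisation step, the sketch below keeps only the most informative features before training, so fewer attributes need to be collected and stored. It assumes a scikit-learn pipeline on a toy dataset; the selector and the choice of k are placeholders.

```python
# Minimal illustrative sketch: keep only the features the model actually needs.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Pipeline that scales, keeps the 10 most informative of 30 columns, then fits.
pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),
    LogisticRegression(max_iter=1000),
)
print("Accuracy with 10/30 features:", cross_val_score(pipe, X, y, cv=5).mean())
```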

  • Bias: Risk that the system treats individuals or groups unfairly (see the sketch after this item).

      Measuring Bias and Discrimination

      Mitigating Bias and Discrimination
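
For illustration, the sketch below computes two widely used group-fairness metrics, statistical parity difference and disparate impact, from model decisions and a binary protected attribute. The toy arrays and the 0.8 rule of thumb in the comment are illustrative assumptions, not the notebooks' code.

```python
# Minimal illustrative sketch: two common group-fairness metrics from predictions.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])                     # model decisions
group = np.array(["a", "a", "a", "b", "b", "b", "a", "b", "b", "a"])  # protected attribute

rate_a = y_pred[group == "a"].mean()  # selection rate for group a
rate_b = y_pred[group == "b"].mean()  # selection rate for group b

print("Statistical parity difference:", rate_b - rate_a)  # 0 is parity
print("Disparate impact ratio:", rate_b / rate_a)         # 1 is parity; below 0.8 is a common flag
```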

  • Explainability: Risk that the system may not be understandable to its users and developers (see the sketch after this item).

      Documentation for improved explainability of ML models

      Extracting explanations from ML models
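
As a flavour of explanation extraction, the sketch below uses model-agnostic permutation importance: it measures how much held-out performance drops when each feature is shuffled, and ranks features accordingly. The dataset and model are placeholders, and this is one possible approach rather than the notebooks' exact method.

```python
# Minimal illustrative sketch: model-agnostic explanations via permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features by mean importance drop and print the top five.
ranking = result.importances_mean.argsort()[::-1][:5]
for i in ranking:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```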
