
Project

MAMMOth

Multi-Attribute, Multimodal Bias Mitigation in AI Systems (MAMMOth) is a 36-month project (November 2022 – October 2025) co-funded by the Horizon Europe Programme of the European Union under the call "Tackling gender, race and other biases in AI" (HORIZON-CL4-2021-HUMAN-01-24, RIA).

Objectives

Redefine bias based on multiple (protected) characteristics instead of a single attribute.

Create standardized AI solutions to address bias across all phases of AI system development.

Develop and advance new technologies to evaluate and mitigate AI bias.

Ensure reliability, traceability and explainability of AI solutions.

Increase availability and deployment of unbiased and bias-preventing AI solutions.

Increase awareness of and skills for preventing AI bias, and promote the uptake of the MAMMOth solutions by involving affected stakeholders.

Advance the European approach to excellence in AI.

Study AI biases on a case-by-case basis, providing insights into high-risk applications.

Expected Outcomes

A single entry point to access bias evaluation and mitigation solutions.

Provide suggestions to developers about the underlying protected characteristics.

Develop tailored approaches to specific domains, exploring similarities and differences across a range of sectors.

Create new methods to identify and mitigate bias that go beyond single protected characteristics.

Improve financial inclusion: Identify the socio-economic factors (gender, ethnicity, orientation) that influence credit scoring and debt repayment, and help build AI models that do not reflect historical biases.

Improve social inclusion: Improve the accuracy of AI algorithms for identity authentication by taking into consideration additional characteristics and combinations of attributes, such as age, skin colour and ethnicity.

Engage with underrepresented groups and relevant CSOs (civil society organisations) to identify areas of bias.

Increase awareness and provide information about the types of bias and fairness definitions across the MAMMOth use cases.

Improve understanding of bias-mitigation techniques and the qualitative evaluation of results.

Provide a venue where developers, civil society organisations and the community can gather and exchange important information.

Provide training to technology providers and the education sector on understanding algorithmic and societal bias in AI and on integrating ethical AI strategies into their own practices.

Target groups

Scientists & Researchers

Technology & Service Providers

Underrepresented groups

Social Innovation Sector

Policymakers

Society