
EXUS – MAMMOth: Financial Case

EXUS specializes in debt collections and recovery technologies. Through its flagship product, EXUS Financial Suite (EFS), EXUS helps financial institutions and utility companies manage credit risk along the whole lifecycle of accounts. The company continues to innovate in the field by generating knowledge for EFS through participation in several research projects and by investing significant resources into R&D, ensuring that the product remains competitive and addresses customers’ needs. Through these R&D projects, EXUS has developed proprietary knowledge and leading-edge features within EFS, giving it a competitive edge in the market. EXUS’ commitment to research and development is critical to maintaining a leading position in the financial sector.

In recent times, EXUS has been looking deeply into an issue prevalent in today’s AI-powered world: algorithmic bias. Addressing these concerns is not merely an ethical mandate but an integral part of EXUS’ operational excellence; the company is committed to understanding, identifying, and mitigating this systematic bias, which can have a profound impact on decision-making.

As part of this goal, EXUS is participating in MAMMOth (Multi-Attribute, Multimodal Bias Mitigation in AI Systems, https://mammoth-ai.eu/project/), where, along with a multidisciplinary consortium, it is addressing issues related to algorithmic bias across all stages of development. EXUS participates both as a technical partner, developing the MAMMOth toolkit and integrating the MAMMOth algorithms, and as a use-case provider, examining bias within existing datasets and ways to detect and mitigate it.
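As a rough illustration of what a dataset-level bias check can look like (this is a minimal sketch on hypothetical data, not the MAMMOth toolkit's actual implementation), one common starting point is the demographic parity difference: the gap in positive-outcome rates between groups in historical loan decisions.

```python
# Illustrative sketch of a dataset-level bias check on synthetic data.
# The data and group names are hypothetical; this is NOT MAMMOth code.

def demographic_parity_difference(outcomes):
    """outcomes: list of (group, approved) pairs.

    Returns (gap, rates) where `rates` maps each group to its approval
    rate and `gap` is the largest difference between any two groups.
    """
    totals, approved = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Synthetic loan decisions: (applicant group, loan approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

gap, rates = demographic_parity_difference(decisions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
print(gap)    # 0.5 -> a large gap flags the dataset for closer review
```

A large gap does not by itself prove discrimination (legitimate factors may differ between groups), but it is a cheap first signal that a dataset or model warrants deeper auditing with more refined fairness metrics.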

Within MAMMOth, the team is tapping into invaluable insights from underrepresented communities to shape AI systems, with the help of IASIS (https://www.iasismed.eu/), DAF (https://donaactiva.org/), and DDG (https://www.diversitygroup.lt/en/), organized by UNIBO (https://www.unibo.it/en).

The unique perspectives and experiences of underrepresented communities, such as LGBTQ+ individuals and immigrants, can significantly inform and shape the development of AI solutions related to debt repayment. These groups often face distinct financial challenges and barriers, which can be overlooked in AI models built without their input. By actively seeking their insights, we can create more inclusive and equitable solutions that address a wider range of financial scenarios and debt repayment issues. Furthermore, engaging these communities is not just about gathering data; it is also an opportunity to inform them about debt management and the role of AI in these processes. Ultimately, the goal is a two-way exchange of information that will not only enrich the development of AI solutions but also empower these communities with increased financial literacy, contributing to a more inclusive and fairer financial ecosystem. This proactive stance aligns with EXUS’ vision of not just creating efficient financial solutions for institutions, but demonstrating how technology can be a tool for equitable change.

Insights were gathered through workshops organized during an initial co-creation phase.

In the scope of the MAMMOth Horizon project, the NGO IASIS conducted three workshops aiming to educate participants on AI biases and their impact on use cases, with a specific focus on loan requests. Participants initially had limited knowledge of AI but recognized their regular use of narrow AI. Feedback on AI in banking systems was mostly negative, citing concerns about the loss of personal touch, employment, and biases, although some positive comments highlighted the benefits of digitalization. Participants emphasized the need for human oversight and fair outcomes when utilizing AI in banking. Continued research is crucial for addressing biases, creating equitable AI systems, and gaining the trust of these communities.

In parallel, another co-creation workshop, conducted by DDG on May 25, 2023, analyzed the perception and sentiment of non-EU citizens, particularly migrants and temporary protection receivers, towards AI in the financial sector and identity verification. The respondents acknowledged their routine interaction with AI through digital platforms, although they expressed a mixed understanding of AI’s objectivity. A unanimous belief was observed that AI-based credit scoring would largely favor white males, indicating concerns about biases within AI algorithms. The dominant sentiment regarding AI involvement in decision-making was skepticism, emphasizing issues of transparency and AI’s inability to comprehend human factors. Participants’ discomfort with being evaluated by AI due to factors like migration status was noted, highlighting the fear of AI perpetuating inherent biases against marginalized groups.

Despite recognizing certain benefits, the participants extensively discussed potential pitfalls of AI in the financial sector, such as data privacy issues, cybersecurity threats, job losses, and the opacity of AI processes. The workshop further illuminated diverse interpretations of fairness in AI decisions, with participants expressing doubts about the financial sector’s commitment to fairness. They also highlighted the role of various factors, including design and data quality, in determining the fairness of AI decisions. Aligning with the vision of ethical, equitable, and transparent AI integration in the financial sector, the findings strongly underline the need for continuous research, robust dialogue, and policy development, especially involving marginalized groups.

Associació Forum Dona Activa (DAF) organized a co-creation workshop with groups underrepresented in AI research to analyze bias in AI, with a focus on gender inequality. DAF works with women’s communities, including vulnerable women and entrepreneurs. Nineteen women (among them migrants, professionals, and entrepreneurs) participated in the workshop, which focused on the analysis of the finance/loan and identity verification use cases. In the discussion of the financial use case, participants affirmed that an AI system can help evaluate a request for a loan to buy a house and other requests to a bank, but they were concerned that gender bias might appear in ways such as: non-approval of a credit or bank loan for women, more difficulty accessing a mortgage, a shorter repayment term (reflecting a perceived risk of non-payment), a limited offer of savings or investment products, and a lack of funding to start or develop a business. Participants identified the importance of data evaluation, algorithmic transparency, and AI regulations and standards in order to advance gender and social equality.

The workshops conducted by IASIS, DDG, and DAF highlighted the skepticism of marginalized groups about AI’s objectivity and its potential to perpetuate inherent biases. To address these concerns, it is necessary to increase transparency in AI processes, enhance algorithmic fairness, and regulate AI applications in the financial sector. Continuous research and dialogue are necessary to build trust and develop ethical AI systems.
