Fairness and bias in AI have become increasingly pertinent as AI-based decision support systems find widespread application across industries, the public and private sectors, and policy-making. These systems guide decisions in critical societal domains such as hiring, university admissions, loan approvals, medical diagnosis, and crime prediction. Given the rise in societal inequalities and intersectional discrimination, it is crucial to prevent AI systems from reproducing these injustices and instead to work toward mitigating them. As we leverage automated decision support systems to formalize, scale, and expedite processes, we are presented with both the opportunity and the obligation to reassess existing procedures for the better: to identify, diagnose, and rectify existing injustices rather than perpetuate them. Establishing trust in these systems requires that domain experts and stakeholders have confidence in the decisions they support. Despite the increased focus on this area in recent years, there is still no comprehensive understanding of how concepts of bias and discrimination should be interpreted in the context of AI.

Moreover, fairness and bias in AI are deeply intertwined with the principles of inclusion, cultural representation, and responsible AI. To mitigate bias, AI systems must be inclusive by design, ensuring that diverse perspectives and underrepresented groups are meaningfully involved throughout development. Cultural representation is essential to avoid marginalizing certain communities, ensuring that AI systems respect diverse social contexts and do not perpetuate harmful stereotypes. Identifying socio-technical solutions to bias and discrimination that are both realistically achievable and ethically justified remains an ongoing challenge. Accounting for the role of generative AI and the evolving legal landscape, such as the EU AI Act, will be critical in advancing these discussions and shaping the future of ethical AI implementation.
This workshop serves as a platform for exchanging ideas, presenting findings, and exploring preliminary work on all facets of fairness and bias in AI. Topics include, but are not limited to:
Bias and Fairness by Design:
Fairness measures and metrics
Counterfactual reasoning
Metric learning
Impossibility results
Multi-objective strategies for fairness, explainability, privacy, class imbalance, rare events, etc.
Federated learning
Resource allocation
Personalized interventions
Debiasing strategies for data, algorithms, and procedures
Human-in-the-loop approaches
Methods to Audit, Measure, and Evaluate Bias and Fairness:
Auditing methods and tools
Benchmarks and case studies
Standards and best practices
Explainability, traceability, data and model lineage
Visual analytics and HCI for understanding and auditing bias and fairness
Software engineering approaches
Legal perspectives on fairness and bias
Social and critical perspectives on fairness and bias
Inclusive AI and cultural representation