AI-based decision support systems are increasingly deployed in industry, in the public and private sectors, and in policymaking to guide decisions in important societal spheres, including hiring, university admissions, loan granting, medical diagnosis, and crime prediction. As our society faces a dramatic increase in inequalities and intersectional discrimination, we need to prevent AI systems from amplifying this phenomenon and instead ensure that they help mitigate it. As we use automated decision support systems to formalize, scale, and accelerate processes, we have the opportunity, as well as the duty, to revisit the existing processes for the better: rather than perpetuating existing patterns of injustice, we can detect, diagnose, and repair them. For these systems to be trusted, domain experts and stakeholders must be able to trust the decisions they produce. Despite the growing body of work in this area in recent years, we still lack a comprehensive understanding of how pertinent concepts of bias and discrimination should be interpreted in the context of AI, and of which socio-technical options for combating bias and discrimination are both realistically possible and normatively justified.