Algorithmic Fairness in HRM: Balancing AI-Driven Decision Making with Inclusive Workforce Practices
Abstract
The growing adoption of Artificial Intelligence (AI) in Human Resource Management (HRM) has transformed how organizations recruit, assess, promote, and retain employees. While AI offers efficiency and opportunities for data-driven decision-making, it also raises urgent concerns about fairness, inclusiveness, and accountability. This systematic literature review covers 201 studies on algorithmic fairness in HRM published since 2010. Results show that research focuses mainly on recruitment and selection, mostly using natural language processing, machine learning classifiers, and chatbots, with gender and racial bias the most commonly examined. Functional areas such as performance assessment, promotion, retention, and training have received comparatively little attention, yet they present serious challenges related to transparency, cultural bias, and unequal access. A review of mitigation strategies shows that in-processing methods are the most widely adopted, while governance frameworks and human oversight prove critical to sustaining fairness. Quality evaluation reveals uneven methodological rigor, with a significant share of studies lacking transparency about their datasets and fairness measures. The review identifies the need for standardized evaluation methods, interdisciplinary collaboration, and fairness-by-design principles to align algorithmic tools with the goals of diversity, equity, and inclusion. Ultimately, responsible AI in HRM requires balancing the efficiency of the technology against its ethical demands in order to ensure fair and inclusive workforce practices.