Global MPM Insight Vol.5

AI Innovation in Public Sector Human Resource Management: Possibilities and Threats

AI Innovation in the Recruitment Process: Balancing Efficiency and Fairness

The recruitment and hiring process is one of the areas where AI adoption is most prevalent. Drawn by AI’s capacity to rapidly search vast talent pools and to select high-caliber candidates aligned with organizational values through consistent, objective evaluation, government agencies in many countries have moved proactively to introduce it. For instance, ten government agencies in Singapore implemented an AI-driven application pre-screening service, automating repetitive tasks such as résumé reviews, document screening, and candidate Q&A sessions. The system also used chatbots to administer written examinations and automate scoring. Through these measures, the agencies achieved significant milestones: they efficiently processed over 3,000 applications, realized cost savings of approximately €44,000, and reduced recruiter labor by more than 150 days (OECD, 2025). HM Revenue & Customs (HMRC) in the UK automated the entire video interview process for entry-level recruitment using an AI platform called Outmatch. Applicants recorded video responses to six AI-generated interview questions, and the platform then analyzed these recordings and assigned scores. This approach has proven effective in efficiently screening large applicant pools against consistent evaluation standards (OECD, 2025). Such innovative cases illustrate AI’s potential to make public sector recruitment both faster and more objective. However, the use of AI at the recruitment stage poses significant challenges to public values. The most prominent concerns are algorithmic bias and discrimination. There have already been many reported cases in which AI recruiting tools learned biases embedded in historical data and produced unfavorable evaluations for certain groups.
For example, the global IT company Amazon discontinued a project after it was revealed that its internally developed AI recruitment engine operated to the disadvantage of female applicants (Dastin, 2018). The same risks exist in the public sector. Concerns have surfaced that AI-driven screening may disadvantage applicants with disabilities, migrants, and the digitally marginalized, and that it may undermine the transparency and public acceptance of recruitment processes when AI conducts evaluations in ways that hiring managers cannot fully explain (Lee et al., 2023). In practice, starting around 2019, a number of public institutions in Korea, including the Korea International Cooperation Agency (KOICA) and KEPCO KDN, introduced AI-enabled document screening tools and AI interviewers into their recruitment processes. However, an information-disclosure lawsuit filed against those institutions by digital rights advocacy groups in 2020 revealed that many public agencies had adopted unverified AI tools without securing sufficient information about the systems’ operational logic, potential bias, or personal information protection measures (Institute for Digital Rights et al., 2022). To mitigate these risks, normative responses are rapidly emerging worldwide. The European Union (EU) classifies AI used in human resource management (HRM) functions such as recruitment, selection, performance evaluation, and promotion as “high-risk,” imposing obligations such as explainability, transparency, conformity assessments, and human oversight. Notably, to strengthen transparency in public sector use of high-risk AI, the EU mandates that the relevant system be registered in the EU high-risk AI database (EU, 2024).
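The bias mechanism described above, in which an algorithm absorbs disparities embedded in historical hiring data and reproduces them for new applicants, can be illustrated with a minimal, self-contained sketch. Every record, weight, and threshold here is invented for illustration; real screening systems are far more complex, but the failure mode is the same. The adverse-impact check at the end applies the common “four-fifths rule” heuristic:

```python
from collections import defaultdict

# Hypothetical historical data (invented): (group, qualification score, was hired).
# Past hiring favored group "A" independently of qualifications.
history = [
    ("A", 0.9, True), ("A", 0.7, True), ("A", 0.6, True), ("A", 0.5, False),
    ("B", 0.9, True), ("B", 0.8, False), ("B", 0.7, False), ("B", 0.6, False),
]

# "Training": the model absorbs each group's historical hire rate as a prior.
hires, totals = defaultdict(int), defaultdict(int)
for group, _, hired in history:
    totals[group] += 1
    hires[group] += int(hired)
prior = {g: hires[g] / totals[g] for g in totals}  # A: 0.75, B: 0.25

def score(group, qualification):
    # Naive screening score: blends merit with the biased historical prior.
    return 0.5 * qualification + 0.5 * prior[group]

# Two applicants with identical qualifications receive different outcomes.
threshold = 0.7
outcome = {g: score(g, 0.8) >= threshold for g in ("A", "B")}

# Adverse-impact screen (the "four-fifths rule"): compare group selection rates.
selection_rate = {g: float(selected) for g, selected in outcome.items()}
impact_ratio = selection_rate["B"] / max(selection_rate["A"], 1e-9)
print(outcome, impact_ratio)  # group B is screened out despite equal merit
```

The point of the sketch is that the disparity never has to be programmed in; it enters entirely through the training data, which is why the audit obligations described above focus on data provenance and outcome testing rather than on the code alone.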
In the UK, the Department for Science, Innovation and Technology (DSIT) uses its Data and AI Ethics Framework to provide tailored self-assessment guides, helping public HR officials assess ethical risks at each stage of recruitment while also supporting the verification of technical specifications (DSIT, 2024). South Korea is likewise developing AI ethics standards for personnel administration, as the Ministry of Personnel Management incorporates specific principles for AI use into its Fair Recruitment Guidelines.

Personnel and Performance Management: The Bright and Dark Sides of Data-Driven Decision-Making

The potential use of AI technology is also garnering significant attention in the realms of personnel and performance management. By analyzing vast administrative datasets and work-performance records, AI can build models of an organization’s human resources, enabling data-driven personnel strategies such as optimizing assignments by matching employees’ capabilities to job requirements, analyzing performance patterns, predicting promotion candidate pools, and identifying talent at high risk of turnover or attrition (Tambe et al., 2019). Because AI can proactively inform personnel officials about optimal assignments, likely future high performers, and departments facing elevated attrition risk, it represents a remarkable innovation. In fact, according to the U.S. Office of Personnel Management (OPM), roughly 75% of 24 major federal agencies are beginning to use AI-enabled predictive analytics to support HRM, particularly in areas such as turnover and attrition, competency analysis, and performance management (OPM, 2024). Illustratively, the U.S.
Department of State (DOS) has developed and operates an AI-enabled career-path exploration tool that offers employees tailored career-mobility options and capacity-building opportunities; the Department of Homeland Security (DHS) is applying AI to visualize and flag anomalies in HR data, including hiring and turnover trends; and the Small Business Administration (SBA) has introduced an AI simulation coach to provide performance feedback (OPM, 2024). However, using AI in personnel and performance management demands an even more cautious approach than its use in recruitment (Cappelli & Rogovsky, 2023). The core challenge is that “performance” is difficult to define and translate into data. For AI to generate accurate predictions, it must be fed precise and sufficient data on employee characteristics, job requirements, and performance indicators; yet few public institutions have systematically accumulated all three types of data (OECD, 2025). Moreover, human performance involves numerous qualitative elements and contexts that are hard to quantify, so AI models relying solely on existing metrics are inherently limited. Finally, if algorithms trained on historically biased data are used to identify future high performers and recommend promotion candidates, past discrimination may be repeated today. Accordingly, over-reliance on AI for performance evaluation risks misjudging employees on meaningless or biased indicators and, ultimately, producing unfair personnel outcomes. An even greater concern is the potential for privacy infringement and the erosion of trust arising from AI-enabled workplace monitoring and control. Recently, some companies have employed AI to analyze employees’ working-hour patterns and the content of their emails and messaging apps to generate productivity scores or even predict misconduct.
If such digital surveillance technologies were introduced into the public sector, they would raise serious concerns about violations of constitutionally protected privacy and labor rights, regardless of whether the stated purpose is to strengthen discipline or enhance civil servants’ performance. Notably, the U.S. Consumer Financial Protection Bureau (CFPB) has expressed strong concern over employer practices that use AI to monitor workers and, through “black-box” algorithms, score behavior or predict the likelihood of attrition or union activity (CFPB, 2024). The ILO similarly warns that blind reliance on data-driven AI in HRM can erode fairness, transparency, and trust (Berg & Johnston, 2025). Moreover, such applications of AI risk eroding organizational commitment and morale so profoundly that they could ultimately nullify the efficiency gains the technology is meant to deliver.
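The turnover- and attrition-prediction analytics discussed above can be made concrete with a deliberately simple sketch. The records, feature weights, and threshold below are all hypothetical; a real system would be trained on actual data, validated, and audited for bias, and the point of the design shown here is that the score only flags cases for human review rather than automating any personnel decision:

```python
# Illustrative attrition-risk sketch (all values invented). Each record holds
# years since last promotion, average weekly overtime hours, and an
# engagement survey score (1-5).
employees = {
    "emp-001": {"years_since_promotion": 6, "overtime": 12, "engagement": 2.1},
    "emp-002": {"years_since_promotion": 1, "overtime": 3,  "engagement": 4.4},
    "emp-003": {"years_since_promotion": 4, "overtime": 9,  "engagement": 3.0},
}

# Hypothetical weights standing in for a trained model's coefficients.
WEIGHTS = {"years_since_promotion": 0.08, "overtime": 0.03, "engagement": -0.15}
BASELINE = 0.4  # assumed baseline risk level

def attrition_risk(record):
    # Linear risk score clipped to [0, 1]; higher means higher assumed risk.
    raw = BASELINE + sum(WEIGHTS[k] * v for k, v in record.items())
    return max(0.0, min(1.0, raw))

# Surface only high-risk cases, and only for *human* follow-up: the score
# prompts a conversation, it does not trigger an automated decision.
REVIEW_THRESHOLD = 0.6
flagged = {emp: attrition_risk(rec)
           for emp, rec in employees.items()
           if attrition_risk(rec) > REVIEW_THRESHOLD}
print(sorted(flagged))
```

Even in this toy form, the caveats from the text are visible: the score is only as meaningful as the chosen features and weights, and an employee whose real reasons for leaving are not captured by these three metrics will simply be mis-scored.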