Global MPM Insight Vol.5
AI Innovation in Public Sector Human Resource Management: Possibilities and Threats

To address these threats, some countries have introduced AI ethics guidelines tailored to the HR field, requiring personnel officials to evaluate AI use through the lens of digital ethics. For instance, France’s Directorate General for Administration and the Civil Service (DGAFP) has released a strategic plan that includes an HRM-specific framework for the responsible use of AI in public service personnel management. The framework outlines three core principles - protection of staff and the administration, human oversight, and ethics - while also identifying ethically unacceptable AI use cases and offering detailed guidance for the implementation of AI applications in the HR domain (DGAFP, 2024).

Given these considerations, the implementation of AI in personnel and performance management should be preceded by rigorous personnel data quality control and thorough verification of algorithmic bias. Moreover, even as AI is integrated into personnel and performance management, it is essential to maintain an appropriate division of roles between human managers and AI while ensuring human oversight of the technology. In essence, AI should serve as a decision-support tool for HR professionals rather than a replacement for human judgment. Indeed, a survey of U.S. public-sector personnel organizations found that 65% of HR professionals view a hybrid approach combining AI with human judgment as the most desirable model (McGraw, 2024).

A Balanced Approach to Innovation

In the age of AI, public sector personnel administration stands at a crossroads of profound transformation.
Across the entire HR lifecycle, from recruitment to promotion and compensation, AI is introducing new tools and solutions, creating meaningful opportunities to streamline talent identification, enhance the objectivity of decision-making, and offer more tailored support for employees’ career development. However, AI also reveals the inherent limitations of data and algorithms; if poorly governed, it could erode the long-standing public sector principles of fairness, transparency, equity, and accountability. Biased AI systems can automate discrimination, and AI driven by “black-box” algorithms risks becoming a tool for evading accountability.

In this context, HR professionals must bear in mind - as Barry Bozeman’s Public Value Failure theory underscores - that even if economic efficiency is enhanced, it remains a clear administrative failure if equity or procedural justice is undermined (Bozeman, 2007). In essence, personnel officials in the era of AI administration must secure both innovation and public value.

In this regard, the Strategic Triangle model for effective public value creation proposed by Professor Mark Moore of Harvard University (Moore, 1995) provides a useful lens for personnel officials seeking a balanced approach to AI. In defining the value objectives of AI-driven personnel administration, personnel officials must explicitly incorporate fairness, transparency, equity, and accountability alongside efficiency and economy. Furthermore, they must secure and manage the operational capabilities required to realize these values within administrative processes, and they must garner legitimacy and support from the diverse stakeholders involved in personnel administration.

Building on these frameworks, several concrete principles can be proposed.
First, the principle of ultimate human responsibility should be codified: it must be made clear that AI is merely an assistant and that the final decision-maker is human, ensuring that procedures for human review and approval of AI decisions remain in place.

Second, robust data governance must be established: because AI judgments depend on training data, institutions must ensure the quality of personnel data and protect diversity within personnel datasets.

Third, agencies must establish the capacity to validate and control algorithms by conducting verification both before and after AI deployment, continuously monitoring for bias during operation, and retaining the ability to promptly modify or, if necessary, eliminate algorithms when issues arise.

Fourth, personnel officials’ competencies should be strengthened through education and support, including AI literacy training and practical guidance that encourages practitioners to reflect on AI-enabled personnel administration through a public value lens.

Finally, a normative consensus on AI-driven personnel administration must be forged. This requires establishing governance structures that reflect the voices of diverse internal and external stakeholders through expanded participation. It also necessitates strengthening transparency and explainability - not only clearly notifying candidates and employees of AI usage but also articulating key decision-making criteria both internally and externally. In addition, procedures must be established for raising objections to AI-enabled processes, procedures, and outcomes. Only then can public support for AI personnel administration be secured from key stakeholders, including staff, applicants, policymakers, and citizens.

When supported by these principles, the public sector can harness AI responsibly and wisely to achieve innovative personnel administration that earns the trust of the public.

References

Berg, J., & Johnston, H. (2025).
AI in human resource management: The limits of empiricism (ILO Working Paper No. 154). International Labour Organization. https://doi.org/10.54394/NMSH7611

Bozeman, B. (2007). Public Values and Public Interest: Counterbalancing Economic Individualism. Georgetown University Press.

Cappelli, P., & Rogovsky, N. G. (2023). Artificial intelligence in human resource management: A challenge for the human-centred agenda? (ILO Working Paper No. 95). International Labour Organization.

Consumer Financial Protection Bureau. (2024, October 24). CFPB takes action to curb unchecked worker surveillance [Press release]. https://www.consumerfinance.gov/about-us/newsroom/cfpb-takes-action-to-curb-unchecked-worker-surveillance/

Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

Department for Science, Innovation and Technology. (2024). Responsible AI in recruitment: Guidance. GOV.UK. https://assets.publishing.service.gov.uk/media/65fda1b9f1d3a0001132ae5b/Responsible_AI_in_Recruitment.pdf

Direction générale de l’administration et de la fonction publique (DGAFP). (2024). Strategy for the use of artificial intelligence in human resources. https://www.fonction-publique.gouv.fr/files/files/publications/publications-dgafp/guide-strategie-usage-intelligence-artificielle-en.pdf

European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L series. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

McGraw, M. (2024, July/August). Exploring the public sector impact of AI. Public Eye Magazine, 6–7.

Moore, M. H. (1995). Creating Public Value: Strategic Management in Government.
Harvard University Press.

OECD. (2025). Governing with artificial intelligence: The state of play and way forward in core government functions. OECD Publishing. https://doi.org/10.1787/795de142-en

Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial intelligence in human resources management: Challenges and a path forward. California Management Review, 61(4), 15–42. https://doi.org/10.1177/0008125619867910

U.S. Office of Personnel Management. (2024). FY 2024 human capital reviews. https://www.opm.gov/policy-data-oversight/fy-2024-human-capital-reviews/print/

MINBYUN-Lawyers for a Democratic Society, Digital Information Committee; Institute for Digital Rights; Korean Progressive Network Center (Jinbonet). (2022, July 7). [Joint commentary] Partial victory in AI interview information disclosure lawsuit confirms severe lack of accountability in public institutions. https://www.minbyun.or.kr/?p=52432

Lee, H. S., Jang, H. J., Choi, D. W., Keum, J. D., Park, J. S., Oh, D. J., Shin, K. G., Ha, H. S., & Heo, H. J. (2023). Artificial intelligence (AI) and public value in the public sector: An exploration of recruitment processes. National Research Council for Economics, Humanities and Social Sciences (NRC) Cooperative Research Series 23-08-01. https://sky.kipa.re.kr/%24/10210/contents/7137811