Automating hiring processes with the help of AI promises not only to significantly reduce hiring costs for companies and to identify more suitable employees, but also to reduce HR bias, creating a fairer application system. While this seems like a financial and economic dream for employers, job seekers could also look forward to a more efficient and less biased process. Yet, policymakers at EU level have already called for caution when using AI in these sensitive areas. In the Artificial Intelligence Act proposal from 2021, and explicitly in the Briefing on the Artificial Intelligence Act from 2022, AI systems used for employment purposes are classified as 'High-Risk AI'. This means that such AI systems are generally allowed to be deployed but are subject to the tight regulations outlined in the Act.
This paper first outlines the criteria for a system to be classified as 'High-Risk AI' or as a 'Prohibited AI Practice', as well as the regulations that come with each classification. The benefits and problems of automated AI hiring are then summarized, with a focus on the retrieval of personal data and on emotional profiling to evaluate applicants. Based on this assessment, it is argued that the generic classification of such systems as 'High-Risk' falls short and that certain practices, especially those creating personality profiles, need to be moved to the prohibited category to protect applicants. Furthermore, it is advocated that only job-related skills be tested, in an easy-to-supervise fashion, while personal and behavioral traits remain outside the scope of these practices.
Tim Rein studied physics, philosophy and computational linguistics in Heidelberg before crossing the small channel to do a Physics master's in Cambridge. Through these studies, he has begun to explore the topic of AI from both a technical and an ethical perspective. Writing about AI policy allows him to combine the two areas and address the important challenges that come with the unfolding AI revolution.