dc.description.abstracten |
This thesis proposes a methodology for assessing demographic biases in hiring
systems powered by artificial intelligence (AI), evaluates existing bias mitigation
techniques, and conducts a comparative analysis between English and Ukrainian
at all stages. Our study highlights the importance of Responsible AI practices in
shaping fair and equitable hiring processes.
We initiated this research by creating a dataset of anonymized CVs and job
descriptions. We then developed a robust framework for benchmarking AI-assisted
hiring systems to evaluate potential biases across a range of categories known as
protected groups. Having detected biases across these groups, we experimented
with known pre- and post-processing mitigation techniques to reduce the level of
bias.
Our results show that bias mitigation remains a complex and multifaceted
challenge. While certain strategies demonstrated positive results, none fully
eliminated bias in AI-assisted hiring.
Our work is a foundational step towards fostering fairness and inclusivity within
AI-driven recruitment systems. We aim to continue this research, exploring novel
approaches to mitigating bias and promoting equitable hiring practices. |
uk |