What AI, machine learning and neural networks mean for the hiring process
I heard a great question recently: “If you have a business process that requires a lot of person-to-person interaction, is this because it’s what your customers value, or is it because you don’t offer an equally efficient digital alternative?”
When it comes to assessment centers, wholly digital alternatives are currently hard to find. The assessment industry employs many humans, from consultants and coaches to HR professionals and assessors, and we rely on professional judgment to synthesize complex information for decision making; per O*NET, 66% of industrial-organizational (I/O) psychology professionals report having "a lot of freedom" to make decisions. Taking some of those decisions off our collective plate and automating assessment centers for selection, development, and HR decision-making seems like a great idea… at first blush.
AI doesn't get tired, it role-plays consistently, and it never has a bad hair day. Its evaluations carry no halo or horn effect. AI can apply the same standard to every participant, regardless of population, reducing rater bias and the legal exposure that comes with it. However, using technology to score participants comes with its own set of concerns.
Challenges exist when considering the long-term implications of AI replacing human assessors. If we use AI evaluation alongside human assessors, the rote, first-level assessor jobs in the I/O field will be the first to go. As the technology improves and can handle more complex conversations, mid-level assessors could be replaced as well. This raises the question: will executive assessments also be machine role-played and scored? If not, and executive assessments remain in the realm of humans while first- and mid-level assessments become machine-based, we will eventually run short of trained assessors with the experience required to step into the executive assessor role, because the jobs that build that experience will have been outsourced to our machine overlords.
Another concern is the ethics of using big data in hiring decisions and ongoing monitoring. In 2015, the Charlotte-Mecklenburg (NC) Police Department worked with a group of data scientists to develop a neural network to predict police officer performance. The network found that the best indicator of future performance was past performance: officers with grievances on file were more likely to receive additional grievances in the future. The algorithm went a step further and identified personal stressors correlated with bad behavior. If an officer had recently divorced, or had gone into serious debt, the algorithm flagged them as more likely to commit misconduct in the future. Like employees of any kind, police officers are likely to see job performance suffer when there is trouble in their personal lives. Officers flagged with these stressors would be called into HR to discuss their work.
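The Charlotte-Mecklenburg model itself was never published, but the kind of predictor described above can be sketched in a few lines. The features, data, and tiny logistic model below are invented stand-ins, chosen only to show how a "past grievances predict future grievances" pattern falls out of training:

```python
import math

# A toy risk model in the spirit of the system described above.
# Hypothetical features per officer: [past_grievances, recently_divorced, serious_debt].
# Data and model are invented stand-ins; the real system was a neural network
# trained on department records.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, lr=0.1, epochs=500):
    """Fit a tiny logistic model with stochastic gradient descent."""
    w, b = [0.0] * len(rows[0]), 0.0
    for _ in range(epochs):
        for x, target in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - target
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Synthetic records: label 1 means misconduct later occurred.
X = [[3, 0, 0], [0, 1, 0], [2, 1, 1], [0, 0, 0], [1, 0, 1], [0, 0, 1]]
y = [1, 0, 1, 0, 1, 0]

w, b = train(X, y)

# Score a hypothetical officer with four past grievances and a recent divorce.
risk = sigmoid(sum(wi * xi for wi, xi in zip(w, [4, 1, 0])) + b)
print(f"learned grievance weight: {w[0]:.2f}, predicted risk: {risk:.2f}")
```

Because the toy labels track past grievances almost perfectly, the trained weight on that feature comes out strongly positive, the same "past performance predicts future performance" pattern the department's network reportedly found.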
Two years into the program, the police union revolted against it, noting that these HR meetings frequently turned adversarial and that the program amounted to "thought police": officers were being confronted over crimes they had not committed. The department pulled the plug on the program.
Not all of this is doom and gloom, however. There are solutions to these problems (and others) on the horizon. Better education about machine learning, and about how these programs actually work, will help open up the "black box" of neural network decisions. Today, neural network and machine learning outputs are notoriously difficult to explain. As the field advances and scientists become more adept at explaining how a machine reaches a decision, that clarity and transparency could defuse some of the backlash seen when neural networks are deployed.
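One family of techniques is already chipping away at that black box. Permutation importance, sketched below with an invented stand-in model, explains any predictor by shuffling one input at a time and measuring how much accuracy drops; inputs the model ignores show no drop at all:

```python
import random

random.seed(0)

# Stand-in "black box": in practice this would be a trained neural network.
# Here it simply keys on the first feature, so the explanation is easy to verify.
def model(x):
    return 1 if x[0] >= 2 else 0

# Toy dataset: two features per row, with labels the model predicts correctly.
X = [[3, 0], [0, 1], [2, 1], [0, 0], [4, 0], [1, 1]]
y = [1, 0, 1, 0, 1, 0]

def accuracy(rows):
    return sum(model(x) == label for x, label in zip(rows, y)) / len(y)

baseline = accuracy(X)

def permutation_importance(col, repeats=20):
    """Average accuracy lost when column `col` is randomly shuffled."""
    total_drop = 0.0
    for _ in range(repeats):
        shuffled = [row[:] for row in X]
        column = [row[col] for row in shuffled]
        random.shuffle(column)
        for row, value in zip(shuffled, column):
            row[col] = value
        total_drop += baseline - accuracy(shuffled)
    return total_drop / repeats

importances = [permutation_importance(c) for c in range(2)]
print(importances)  # the feature the model relies on shows the larger drop
```

The appeal of this approach is that it treats the model as a sealed unit, so the same explanation works whether the predictor is a simple rule, as here, or a deep neural network.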
The role of I/O psychologists and HR professionals will shift as work changes to include AI-assisted processes. Human skill at synthesizing complex information for decision making is currently unrivaled by computers, and ongoing human monitoring of AI programs is critical to ensure that bias, discrimination, and adverse impact do not creep into a selection or development program.
AI-only interviewing for technical jobs is already happening. Such technology opens the door for many more candidates to be considered for a job, which could potentially lead to a better job fit for both candidates and the company, along with increased diversity.
The unique advantage that humans will always have over technology is our interpersonal relationships. That being said, humans should not be afraid to use technology to make our lives easier. In the field of assessments, technology-based assessments with a human component, be it assessors or an interview with HR for job fit, will likely provide the best mix of AI advances and a human touch.