Over the last several years, breakthroughs in machine learning have transformed data analysis across the economy, from medical image analysis and diagnosis to complex image generation to smart assistants like Amazon Alexa. Shaker's data scientists are hard at work extending these exciting new capabilities to the world of pre-employment assessment and simulation.
Throughout the history of psychometric assessment, psychologists have relied mostly on structured responses like Likert-type and multiple choice questions. These responses are often scored and combined linearly to yield scale scores that are intended to be descriptive or predictive of various outcomes. Multiple regression, itself an example of machine learning, and other tools continue to be key to helping us understand how measured responses relate to job performance, turnover, and other outcomes of business interest.
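As a minimal sketch of the idea, the snippet below fits an ordinary least squares regression that linearly combines a few Likert-type item scores to predict a performance rating. The data and item weights are entirely hypothetical, invented for illustration only.

```python
import numpy as np

# Hypothetical toy data: five candidates answer three Likert-type
# items (scored 1-5), and each has a job-performance rating.
responses = np.array([
    [4, 5, 3],
    [2, 1, 2],
    [5, 4, 4],
    [3, 3, 3],
    [1, 2, 1],
], dtype=float)
performance = np.array([4.5, 1.8, 4.8, 3.1, 1.5])

# Multiple regression: find weights so that a linear combination of
# item scores best predicts performance (ordinary least squares).
X = np.column_stack([np.ones(len(responses)), responses])  # add intercept
weights, *_ = np.linalg.lstsq(X, performance, rcond=None)

# Predicted scale scores for each candidate.
predicted = X @ weights
```

In practice the fitted weights, not the predictions on the training sample, are the useful output: they describe how each structured item relates to the outcome of interest.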
The holy grail of job simulations, though, has always been unstructured responses: free-form answers that candidates speak or type. Traditional analysis and scoring techniques have not been able to adequately make sense of unstructured data, and basic natural language processing techniques like keyword scoring can provide only so much explanatory power.
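To make the limitation concrete, here is a minimal sketch of keyword scoring. The keyword list and weights are hypothetical; the point is that the method counts surface matches and has no grasp of meaning, so a paraphrase that avoids the listed words scores zero.

```python
# Hypothetical keyword weights for scoring a free-text response.
KEYWORDS = {"team": 1.0, "listen": 1.5, "deadline": 1.0, "customer": 1.5}

def keyword_score(response: str) -> float:
    """Sum the weights of keywords found in a lowercased response."""
    tokens = response.lower().split()
    return sum(KEYWORDS.get(t.strip(".,!?"), 0.0) for t in tokens)

score = keyword_score("I listen to the customer and keep my team on deadline.")
# A synonymous answer ("I pay attention to clients...") would score 0.0,
# which is exactly the weakness of keyword-based approaches.
```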
While neural networks have been around for a long time, it was not until these networks began to be stacked together to create what is now called deep learning that the field achieved such a dramatic uptick in its ability to automatically make sense of unstructured data.
Deep learning is now a (very rapidly) growing class of algorithms that is allowing researchers to make vast strides in understanding and applying all sorts of unstructured information, from text and speech to images and more. Techniques such as Generative Adversarial Networks and Capsule Network architectures are now being developed that promise even greater advances.
At Shaker, our data science team is actively researching how we can best apply deep learning techniques such as Bidirectional Long Short-Term Memory networks (BiLSTMs) to new unstructured Virtual Job Tryout exercises. Our findings so far indicate that these algorithms can match the validity of human subject matter expert ratings of open-ended responses. This is a remarkable finding: for the first time, a machine can score human utterances as accurately as actual humans can!
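The core idea behind a bidirectional recurrent scorer can be sketched in a few lines. This is a drastically simplified, untrained toy (random weights, tiny dimensions, stand-in word embeddings), not Shaker's actual model: an LSTM reads the response word by word in each direction, the two final hidden states are concatenated, and a linear readout maps them to a score.

```python
import numpy as np

rng = np.random.default_rng(0)
H, D = 4, 3  # hidden size and embedding size (toy values)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_params():
    # One stacked weight matrix for the four gates (input, forget, cell, output).
    return (rng.standard_normal((4 * H, D + H)) * 0.1, np.zeros(4 * H))

def lstm_run(params, xs):
    """Run an LSTM over a sequence of vectors; return the final hidden state."""
    W, b = params
    h, c = np.zeros(H), np.zeros(H)
    for x in xs:
        z = W @ np.concatenate([x, h]) + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

# Stand-in embeddings for a five-word candidate response.
xs = [rng.standard_normal(D) for _ in range(5)]

# Bidirectional: read the sequence forward and backward, then concatenate.
fwd, bwd = lstm_params(), lstm_params()
features = np.concatenate([lstm_run(fwd, xs), lstm_run(bwd, xs[::-1])])

# A linear readout maps the combined features to a single numeric score.
w_out = rng.standard_normal(2 * H) * 0.1
score = float(w_out @ features)
```

In a real system the weights would be trained against expert ratings, and the embeddings would come from a learned vocabulary rather than random vectors.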
Our data scientists are confident that algorithms will soon be able to predict job performance better than human experts, and significantly better than traditional structured-response assessments.