Throughout the history of psychometric assessment, researchers have focused on discrete measurement modalities such as multiple-choice biographical data, scale-based personality inventories, and various mental ability measures. Researchers and practitioners have generally studied each modality individually, attempting to show that individual constructs are reliable and correlate significantly with some outcome measure. As sample sizes and, correspondingly, statistical computing power have grown, researchers have also been able to examine complex effects such as interactions between constructs, though even these analyses are most often conducted within a single assessment modality (e.g., studying the interaction between two personality scales).
Now, as computing and networking technologies continue to evolve at an exponential rate, nearly any assessment modality imaginable can be studied and implemented (Greene, 2011; Trull, 2007). As a result, novel assessment techniques are being developed faster than ever before. Simulations are at the forefront of modern assessment design and, as work samples, are known to be highly predictive of job performance (Schmidt & Hunter, 1998), yet as a category they remain quite heterogeneous.
This presentation will contribute to the growing literature in this area by examining two aspects of simulation design:
- Whether measuring respondent behaviors during the simulation rather than just responses to embedded questions might confer a psychometric advantage.
- How simulation measures may interact with other types of assessment measures to better predict outcomes of interest.
Ultimately, as computerized simulations of the real world increase in complexity and fidelity, our science can move from creating measures of theoretical constructs to measuring actual behavior. In time, we may transcend the need to measure proxy variables and theoretical constructs altogether and instead use more direct measures of behavioral outcomes.
Simulations offer a venue for asking many new questions that were not previously feasible to ask. What can we learn by observing how a person interacts with an environment, be it virtual or real? For example:
- What do errant mouse clicks tell us?
- What do we learn when a respondent changes her answer?
- Can we learn something about a candidate’s personality by whether she skips over questions she doesn’t know or spends extra time trying to get each question correct?
- Does repeating an example item during simulation instructions tell us anything about an individual’s personality?
In short, by simulating environments, psychologists can expand the measurement space from the traditional question-and-answer format to a broad range of non-question-based behavioral measures. As simulations grow in complexity and fidelity, we may be able to move away from explicit questions and toward measuring how an individual interacts with a particular environment; in other words, measuring that individual's choices and behaviors (Hornke & Kersting, 2006). At that point, our measurement focus can be on anything and everything that can be measured, from how a person moves through space to what they do after making a decision.
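To make this concrete, the sketch below illustrates one way a simulation front end might capture such non-question behavioral events as a timestamped stream and reduce them to candidate features (errant clicks, answer changes, skips, instruction replays, response latencies). The class and method names (`BehavioralLog`, `record`, `summarize`) and the feature definitions are our own illustrative assumptions, not an existing library API or a validated scoring scheme.

```python
# A minimal sketch of capturing non-question behavioral events during a
# simulated assessment. All names here are hypothetical illustrations.
import time
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BehavioralLog:
    """Timestamped event stream for one respondent's session."""
    events: list = field(default_factory=list)

    def record(self, kind: str, item_id: Optional[str] = None, **details) -> None:
        # Every interaction (click, answer change, skip, instruction replay)
        # becomes a timestamped event, not just a final item response.
        self.events.append({"t": time.monotonic(), "kind": kind,
                            "item": item_id, **details})

    def summarize(self) -> dict:
        """Reduce the raw event stream to candidate behavioral features."""
        counts = defaultdict(int)
        for e in self.events:
            counts[e["kind"]] += 1
        # Latency between first viewing an item and first answering it,
        # a possible proxy for deliberation versus impulsivity.
        first_view, latencies = {}, []
        for e in self.events:
            if e["kind"] == "item_view" and e["item"] not in first_view:
                first_view[e["item"]] = e["t"]
            elif e["kind"] == "answer" and e["item"] in first_view:
                latencies.append(e["t"] - first_view.pop(e["item"]))
        return {
            "errant_clicks": counts["errant_click"],
            "answer_changes": counts["answer_change"],
            "skips": counts["skip"],
            "instruction_replays": counts["replay_example"],
            "mean_first_answer_latency": (sum(latencies) / len(latencies)
                                          if latencies else None),
        }

# Example session: the simulation front end would call record() per event.
log = BehavioralLog()
log.record("item_view", "q1")
log.record("errant_click", "q1", x=412, y=88)
log.record("answer", "q1", value="B")
log.record("answer_change", "q1", old="B", new="C")
log.record("skip", "q2")
log.record("replay_example")
print(log.summarize())
```

Which of these derived features carry reliable, criterion-relevant variance is, of course, an empirical question.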
While one simulation-based measurement advance is thus a gradual move away from construct measurement toward direct behavior measurement, another breakthrough occurs as practitioners combine simulations with other assessment types. This allows researchers to measure interactions and other complex effects across measurement modalities. For example, we may find that a person with a high score on a multitasking simulation performs better in a particular job, but only if that person also scores high on attention to detail. Alternatively, perhaps multitasking scores are predictive only for people whose work background involves multitasking. This type of cross-modality research is in its infancy, but it should grow as researchers integrate more measures into online assessment experiences.
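As a sketch of how such a cross-modality hypothesis might be tested, the following moderated regression fits a simulation score, a personality scale, and their interaction as predictors of performance. The variable names and the synthetic data (with an interaction effect deliberately built in) are invented for demonstration; a real study would substitute observed scores.

```python
# A minimal sketch of a cross-modality moderated regression:
# does a multitasking simulation score predict performance differently
# depending on attention to detail? Data here are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "multitask": rng.normal(size=n),   # simulation-based score (hypothetical)
    "detail": rng.normal(size=n),      # personality scale score (hypothetical)
})
# Simulated criterion with a genuine interaction effect built in.
df["performance"] = (0.2 * df.multitask + 0.2 * df.detail
                     + 0.3 * df.multitask * df.detail
                     + rng.normal(scale=1.0, size=n))

# The multitask:detail term carries the cross-modality interaction effect.
model = smf.ols("performance ~ multitask * detail", data=df).fit()
print(model.summary().tables[1])
```

A significant `multitask:detail` coefficient would indicate that the simulation score's validity depends on the personality scale, which is precisely the kind of effect that single-modality research cannot detect.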
Psychological science has a hundred-year history of theory and research, but we feel it has barely begun to tap its potential to explain and predict human behavior. Traditionally, researchers in our field have focused on finding that elusive "holy grail" measurement construct that can predict unprecedented amounts of variance in job performance. We believe this pursuit is ill-fated. Instead, researchers should explore ways to measure human characteristics with increasing fidelity and, in particular, move beyond paper-and-pencil measures of abstract constructs to direct behavioral measurement, especially in virtual environments. Furthermore, especially given the vast amounts of data increasingly captured by modern organizations, our science needs to focus less on the direct effects of measured variables and more on complex, cross-modality effects. The latter represents a tremendous area of untapped opportunity for exploration.
Greene, R. L. (2011). Some considerations for enhancing psychological assessment. Journal of Personality Assessment, 93(3), 198-203. doi:10.1080/00223891.2011.558879
Hornke, L. F., & Kersting, M. (2006). Optimizing quality in the use of web-based and computer-based testing for personnel selection. In D. Bartram & R. K. Hambleton (Eds.), Computer-based testing and the Internet: Issues and advances (pp. 149-162). New York, NY: John Wiley & Sons.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-274. doi:10.1037/0033-2909.124.2.262
Trull, T. J. (2007). Expanding the aperture of psychological assessment: Introduction to the special section on innovative clinical assessment technologies and methods. Psychological Assessment, 19(1), 1-3. doi:10.1037/1040-3590.19.1.1