Most hiring managers think they can spot talent in a 30-minute conversation. They can't. Decades of research consistently show that unstructured interviews are among the weakest tools for predicting who will actually perform well on the job. The test for cognitive ability, by contrast, has been one of the most studied and debated selection tools in the history of industrial psychology. And in 2022, a major study forced the entire field to rethink how strong that tool really is.
Here's the situation HR professionals face: you need a reliable way to predict who will succeed before they start working. Resumes don't tell you much. References are unreliable. And gut instinct is worse than either. Cognitive ability testing offers something better, but only if you understand what it can and can't do.
This article walks through what cognitive ability tests actually measure, what 100 years of research says about their effectiveness, where the science has shifted in recent years, and how to use these tests fairly and effectively in your hiring process.
What Does a Test for Cognitive Ability Actually Measure?
A test for cognitive ability measures how well a person can reason, solve problems, learn new information, and think abstractly. The U.S. Office of Personnel Management defines these tests as assessments of abilities involved in thinking, including reasoning, perception, memory, verbal ability, mathematical ability, and problem solving.
The construct being measured is often called General Mental Ability, or GMA. The idea goes back to psychologist Charles Spearman, who proposed in 1904 that a single general factor, which he called "g," underpins all cognitive performance. A person who scores well on verbal reasoning tends to also score well on numerical reasoning and spatial reasoning. That shared variance across different mental tasks is what GMA captures.
There are several common types of cognitive ability tests used in workplace settings. Verbal reasoning tests measure the ability to understand and work with written information. Numerical reasoning tests assess the ability to interpret and use numerical data. Logical reasoning tests look at the capacity to draw conclusions from abstract patterns. Some tests combine all three into a single general cognitive ability score.
Popular tests used in hiring include the Wonderlic Personnel Test, the Criteria Cognitive Aptitude Test (CCAT), the Predictive Index Learning Indicator, and the Thomas International General Intelligence Assessment. These tests typically run between 12 and 50 minutes and contain between 20 and 50 questions. They are designed as speeded tests, meaning most candidates won't finish every question within the time limit. That's deliberate: the time pressure reveals how quickly someone can process and apply information.
The Research That Made Cognitive Testing King
For decades, the gold-standard reference for hiring research was a 1998 meta-analysis by Schmidt and Hunter, published in Psychological Bulletin. Their work synthesized 85 years of data on 19 different selection methods and concluded that GMA was the single best predictor of job performance. The validity coefficient they reported was .51 for medium-complexity jobs. That's a strong correlation by any measure in the social sciences.
To put that number in context: a validity of .51 means that cognitive ability test scores account for roughly 26 percent of the variance in job performance (.51 squared). In plain terms, people who score higher on these tests tend to perform meaningfully better at work. Schmidt and Hunter found this relationship held across hundreds of jobs and thousands of workers.
Their research also showed that when you combine a GMA test with an integrity test, the combined validity reaches .65. Pairing GMA with a structured interview produces a combined validity of .63. These are among the highest validity coefficients reported for any combination of selection tools.
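How do two tools combine into those figures? The standard machinery is the two-predictor multiple correlation formula. Here is a minimal sketch in Python; the .41 integrity-test validity comes from Schmidt and Hunter's tables, while the intercorrelation values are assumptions for illustration (they reported integrity tests correlating near zero with GMA, which is what reproduces the .65).

```python
import math

def combined_validity(r1: float, r2: float, r12: float) -> float:
    """Multiple correlation R of a criterion with two predictors,
    given their validities r1, r2 and intercorrelation r12."""
    r_sq = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return math.sqrt(r_sq)

# GMA (.51) plus an integrity test (.41, per Schmidt and Hunter);
# integrity correlates near zero with GMA, so r12 = 0 here.
print(round(combined_validity(0.51, 0.41, 0.00), 2))  # 0.65

# GMA (.51) plus a structured interview (.51); an intercorrelation
# of about .30 (an illustrative assumption) reproduces the .63.
print(round(combined_validity(0.51, 0.51, 0.30), 2))  # 0.63
```

The takeaway is mechanical: the less a second tool correlates with the first, the more new information it adds, which is why a near-zero-correlated integrity test lifts .51 all the way to .65.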
The practical message was clear and simple: if you could use only one test to predict job performance, a cognitive ability test was your best bet. And if you combined it with a structured interview or an integrity test, your predictions improved even further. This finding shaped hiring practices across thousands of organizations for over two decades.
The 2022 Correction That Changed Everything
Then came Sackett, Zhang, Berry, and Lievens.
In 2022, Sackett and colleagues published a paper in the Journal of Applied Psychology that shook the foundations of personnel selection research. The title tells you what they found: "Revisiting Meta-Analytic Estimates of Validity in Personnel Selection: Addressing Systematic Overcorrection for Restriction of Range." Paul Sackett himself called it the most important paper of his career.
What did they find? The earlier meta-analyses had applied statistical corrections for something called range restriction in a way that inflated validity estimates. Range restriction happens when the people in your study are more similar to each other than the broader population. If you only study employees who were already hired (and therefore already passed some screening), you're looking at a restricted range of ability. Statistical corrections are supposed to account for this, but Sackett's team showed that these corrections were applied too broadly, inflating the numbers.
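To make the mechanics concrete, here is a minimal sketch of the standard direct range restriction correction (Thorndike's Case II). The observed validity and the u ratios below are purely illustrative, not values from any of the studies discussed; the point is that the assumed u drives how far upward the observed correlation gets adjusted.

```python
import math

def correct_range_restriction(r_obs: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction.
    r_obs: validity observed in the restricted (incumbent) sample.
    u: ratio of applicant-pool SD to incumbent SD (u >= 1)."""
    return (r_obs * u) / math.sqrt(1 + r_obs**2 * (u**2 - 1))

r_obs = 0.25  # illustrative observed validity among incumbents
for u in (1.0, 1.3, 1.7):
    print(u, round(correct_range_restriction(r_obs, u), 3))
# 1.0 -> 0.25 (no correction), 1.3 -> 0.318, 1.7 -> 0.402
```

Sackett's team argued, in effect, that earlier meta-analyses had been plugging in u values larger than the data justified, so the corrected validities came out too high.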
The corrected validity for cognitive ability tests dropped from .51 to .31. That's a meaningful difference: in variance terms, the share of job performance explained falls from roughly 26 percent to under 10 percent. A validity of .31 still represents a useful predictor of job performance. But it's no longer the dominant force it was believed to be.
More importantly, Sackett's team found that structured interviews now emerged as the strongest predictor of job performance, with a validity of .42. Job knowledge tests (.40), empirically keyed biodata (.38), and work sample tests (.33) also outperformed cognitive ability tests in the revised estimates.
The Society for Industrial and Organizational Psychology summarized it this way: while Schmidt and Hunter positioned cognitive ability as the focal predictor, with others evaluated in terms of their incremental validity over cognitive ability, one might now propose structured interviews as the focal predictor against which others are evaluated.
Related: 18 Reasons Every Organisation Needs Psychometric Tests
What This Means for HR Professionals
Let me be direct: this doesn't mean you should throw out cognitive ability tests. A validity of .31 is still meaningful. It still outperforms references, years of experience, and education level as a predictor of job performance. And the research on cognitive ability predicting training success remains strong, with validity estimates around .67 across multiple meta-analyses.
What it does mean is that cognitive ability tests should not be your only tool. They should not even be your primary tool in isolation. The revised research tells us that the strongest selection systems combine multiple methods. A structured interview paired with a cognitive ability test will predict job performance better than either tool alone.
I've spent over 25 years in HR consulting, and this is the mistake I see most often: organizations adopt a single test and treat it as a complete solution. They give candidates a cognitive ability test, rank everyone by score, and hire from the top down. That's not what the research supports, and it never was. Even Schmidt and Hunter argued for combining GMA tests with other measures.
Related: HR Analytics Data: A Guide For HR Managers
The Adverse Impact Problem
Any honest discussion of cognitive ability testing must address adverse impact. This is where the conversation gets uncomfortable, but avoiding it doesn't help anyone.
Research consistently shows that traditional cognitive ability tests produce score differences across racial and ethnic groups that are larger than those produced by other valid selection methods. Specifically, scores for White test takers have typically been about 1.0 standard deviation higher than scores for Black test takers on traditional GMA tests. Hispanic test takers fall somewhere in between.
These score differences translate directly into hiring differences. When cognitive ability tests are used in a top-down selection process, where the highest scorers get hired first, minority candidates are disproportionately screened out. The U.S. Office of Personnel Management acknowledges this directly, noting that cognitive ability tests typically produce racial and ethnic differences larger than other valid predictors of job performance.
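To see how a 1.0 standard deviation score gap becomes a hiring gap, here is a minimal sketch assuming normally distributed scores with equal spread in both groups; the cutoff placement is an illustrative assumption.

```python
from scipy.stats import norm

d = 1.0        # assumed mean gap in SD units on the test
cutoff = 0.0   # cutoff placed at the higher-scoring group's median

rate_high = norm.sf(cutoff)      # pass rate, higher-scoring group: ~0.50
rate_low = norm.sf(cutoff + d)   # pass rate, lower-scoring group: ~0.16

print(round(rate_high, 2), round(rate_low, 2),
      round(rate_low / rate_high, 2))  # 0.5 0.16 0.32
```

Move the cutoff higher and the ratio gets worse, which is why top-down selection, which effectively pushes the cutoff to the top of the distribution, produces the most severe adverse impact.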
Here's what makes this especially problematic now. Before Sackett's 2022 revision, the validity-diversity tradeoff seemed unavoidable but at least defensible. The argument went: cognitive tests are so much more valid than other methods that we have to accept the adverse impact because the alternatives predict less. With the revised validity estimate of .31, that argument weakens considerably. Structured interviews (.42) predict better and produce smaller group differences. Work sample tests (.33) predict similarly and also show smaller group differences.
The practical implication is significant. Organizations now have less justification for relying heavily on cognitive ability tests alone, both from a scientific validity standpoint and from a fairness standpoint.
Modern Approaches to Cognitive Testing
The field isn't standing still. Researchers are working on what they call modern intelligence tests, designed to reduce group differences while maintaining predictive power.
Goldstein and colleagues studied modern cognitive tests that present novel, unfamiliar problems that test takers must solve without relying on prior experience or accumulated knowledge. Across six studies in public safety organizations like police and fire departments, these modern tests predicted job performance and training success while substantially reducing the score gap between Black and White test takers.
The logic behind these newer tests is sound. Traditional cognitive tests often include items that draw on vocabulary, cultural knowledge, or educational content. Someone who attended well-resourced schools in a particular cultural context will have an advantage on those items, and that advantage doesn't necessarily reflect their underlying reasoning ability. By stripping out knowledge-dependent content and focusing on pure problem solving with novel stimuli, modern tests aim to measure cognitive ability more directly.
Research published in the Journal of Intelligence found that the size of racial score differences varies depending on how intelligence is measured. This supports the position that some of the group differences we see on traditional tests reflect the characteristics of the measurement tool, not just the underlying ability being measured.
Game-based assessments represent another emerging approach. These assessments use interactive tasks resembling video games to measure cognitive functions like working memory, attention, and processing speed. Early evidence suggests they may produce smaller group differences than traditional paper-and-pencil tests, though the research base is still developing.
How to Use Cognitive Ability Tests Properly
Based on what the research evidence actually shows, here is how I recommend organizations use cognitive ability tests in their selection process.
First, do not use a cognitive ability test as your only selection method. The strongest approach combines multiple tools. Start with a structured interview as your primary method, given the revised validity evidence. Add a cognitive ability test to capture learning potential and reasoning ability. Include a work sample test or job knowledge assessment when candidates have relevant experience. Consider personality or integrity measures for additional insight.
Second, use cognitive tests as a screen rather than a rank-order tool. Instead of hiring strictly from the top score down, set a minimum threshold that represents the cognitive ability needed for the role. Everyone who meets that threshold moves to the next stage. This approach reduces adverse impact compared to top-down selection while still ensuring candidates have sufficient cognitive ability for the job.
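A minimal sketch of the difference between the two decision rules; the candidate names, scores, hiring quota, and cutoff are all hypothetical.

```python
def top_down(scores: dict, n: int) -> list:
    """Hire the n highest scorers outright."""
    return sorted(scores, key=scores.get, reverse=True)[:n]

def threshold_screen(scores: dict, cutoff: float) -> list:
    """Advance everyone at or above the cutoff; final choices among
    them are made on interviews, work samples, and other evidence."""
    return [name for name, s in scores.items() if s >= cutoff]

scores = {"Ana": 34, "Ben": 29, "Chi": 31, "Dev": 27, "Eve": 33}
print(top_down(scores, 2))           # ['Ana', 'Eve']
print(threshold_screen(scores, 28))  # ['Ana', 'Ben', 'Chi', 'Eve']
```

The threshold screen produces a larger qualified pool, and final decisions within that pool can then weigh the other evidence in your process.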
Related: Human Resource Best Practices
Third, choose your test carefully. Look for tests that have been validated for the types of roles you're hiring for. Check whether the test publisher provides evidence of adverse impact and what steps they've taken to minimize it. Modern cognitive tests that use novel problems rather than knowledge-dependent content may produce smaller group differences. Ask your test provider for their technical manual and review the validity evidence and group difference data yourself.
Fourth, match the test to job complexity. Cognitive ability tests are more predictive for complex jobs than for simple ones. The 2016 update of Schmidt and Hunter's research showed GMA validity ranging from .74 for professional and managerial jobs down to .39 for unskilled jobs. If you're hiring for a role that requires continuous learning, complex decision making, and problem solving, cognitive testing adds more value. For more routine roles, the benefit is smaller and other tools may be more appropriate.
Fifth, document everything. Keep records of your validation studies, your selection rationale, and your adverse impact analyses. If you ever face a legal challenge, having evidence that your test is job related and that you've considered alternatives to reduce adverse impact puts you in a much stronger position.
Cognitive Ability and Training Success
One area where cognitive ability tests remain especially strong is predicting training performance. Even after the Sackett revisions, the validity of cognitive tests for training outcomes hovers around .67. That's a strong relationship by any standard in organizational research.
This makes intuitive sense. Training requires you to absorb new information, understand concepts you haven't encountered before, and apply knowledge in unfamiliar situations. Those are exactly the abilities that cognitive tests measure.
If your role requires extensive onboarding, continuous learning, or adaptation to new technologies and processes, cognitive ability becomes an even more relevant predictor. A candidate might interview brilliantly and have impressive experience, but if they struggle to learn new systems or adapt to changing procedures, they won't perform well in a role that demands rapid skill acquisition.
For organizations that invest heavily in training, the return on investment from cognitive testing is straightforward: you're identifying the people most likely to benefit from that training investment. The financial implications are real. Training an employee who can't absorb the material costs just as much as training one who can, but the return on that spending is dramatically different.
Related: Psychometric Tests are a Must-Have
The Debate Isn't Over
The Sackett findings triggered an intense academic debate that continues today. Several researchers have pushed back on the revised estimates. Cucina, Oh, and others have argued that concurrent validity studies (which study current employees rather than applicants) do experience meaningful range restriction due to occupational sorting. People tend to self-select into jobs that match their cognitive ability level. Employers retain those who can handle the cognitive demands of their roles. This sorting process restricts the range of ability within any job, even among current employees.
A 2024 study published in Intelligence looked at cognitive ability's prediction of hands-on job proficiency in military settings, using objective performance measures rather than supervisor ratings. Its findings suggested that cognitive ability's validity may be higher than .31 when job performance is measured through direct observation rather than subjective ratings.
Another important critique is that much of the data underlying both the original and revised validity estimates is quite old. Woods (2024) pointed out that nearly 19% of the primary studies in the meta-analyses date from before 1940, and less than 1% were conducted after 2000. Jobs have changed enormously over that period, and so has the understanding of what constitutes good job performance. Whether validity estimates based primarily on mid-20th-century data generalize to today's knowledge work economy is a question worth asking.
Where does this leave us? The honest answer is that the true validity of cognitive ability tests for predicting job performance probably falls somewhere between .31 and .51. The exact number depends on how you correct for range restriction, what criterion you use to measure performance, and what type of job you're looking at. For complex, learning-intensive roles, the validity is likely closer to the higher end. For simpler roles measured by supervisor ratings, it's likely closer to the lower end.
Practical Checklist for Implementing Cognitive Tests
If you're considering adding a cognitive ability test to your selection process, or if you already use one and want to improve your approach, work through these questions.
Does the job require learning and adaptation? If yes, cognitive testing adds clear value. If the job is highly routine with minimal learning demands, the benefit is smaller.

Do you have validation evidence? Either conduct your own validation study or ensure the test publisher has validation data for roles similar to yours.

Have you analyzed adverse impact? Run the numbers. Calculate selection rates for different groups and check whether the four-fifths rule is met (a minimal calculation appears after this checklist). If it isn't, consider alternatives or adjustments to your selection process.

Are you using multiple selection methods? A cognitive test should be one piece of a broader process that includes structured interviews, work samples where appropriate, and possibly personality or integrity measures.

How are you using the scores? Top-down ranking creates more adverse impact than a threshold approach. Set a minimum score and treat everyone above it as qualified, then make final decisions based on other factors.

Have you considered modern test formats? Newer tests using novel problems and reduced verbal or cultural content may predict equally well with smaller group differences.

Is your test provider transparent? They should be able to show you validity data, reliability coefficients, and group difference statistics. If they can't, find a different provider.
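Here is a minimal sketch of the four-fifths check mentioned above; the applicant and hire counts are hypothetical, and a real analysis should follow the Uniform Guidelines and use appropriately large samples.

```python
def adverse_impact_ratio(hired_a: int, pool_a: int,
                         hired_b: int, pool_b: int) -> float:
    """Ratio of the lower selection rate to the higher one.
    Under the four-fifths rule, a ratio below 0.8 flags
    potential adverse impact."""
    rate_a, rate_b = hired_a / pool_a, hired_b / pool_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical counts: 30 of 100 applicants hired in one group,
# 12 of 80 in another.
ratio = adverse_impact_ratio(30, 100, 12, 80)
print(round(ratio, 2), "flagged" if ratio < 0.8 else "passes")  # 0.5 flagged
```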
Related: The Predictive Index Assessments: A Brief Guide
Where We Go From Here
The story of cognitive ability testing is a story about science correcting itself. For 25 years, the field operated on the understanding that cognitive ability was the undisputed champion of selection tools. The 2022 revisions challenged that. But they didn't invalidate cognitive testing. They put it in proper context.
Cognitive ability tests remain useful, scientifically supported tools for predicting job performance and, especially, training success. They work best when combined with structured interviews and job-specific assessments. They work worst when used in isolation as a rank-order device with no consideration of adverse impact.
The organizations that will hire best in the coming years are the ones that stay current with the research, combine multiple evidence based tools, and continuously monitor whether their selection process is both valid and fair. That means reading the studies, not just the vendor brochures. It means testing your assumptions against data, not tradition. And it means being willing to change your approach when the evidence says you should.
A test for cognitive ability is a powerful tool. But like any powerful tool, its value depends entirely on the skill of the person using it.