
Pearson TalentLens 2026: Raven's APM, Watson-Glaser & BMCT guide

Updated April 2026 · 13 min read · Pearson assessment division

Provider: Pearson TalentLens (psychometric arm of Pearson plc)
Key tests: Raven's APM, Raven's APM-III, Watson-Glaser, BMCT-II, Adaptive Matrices
Format: Mixed: classical fixed-form (Raven's classic), adaptive (APM-III, Adaptive Matrices)
Used by: UK Civil Service Fast Stream, Network Rail, BP, Shell, Lloyd's of London, ScottishPower, MI5/MI6
Defining feature: Raven's matrices are the gold-standard abstract reasoning test in psychology

Pearson TalentLens delivers the most academically respected cognitive tests in commercial use. Raven's Progressive Matrices was developed by John C. Raven in 1936 and has been used in psychometric research for ninety years. The Watson-Glaser Critical Thinking Appraisal dates from 1925 and remains the standard test for legal and policy roles. These are not gimmicky modern assessments — they're the foundational tests psychology and HR have validated across decades.

Raven's Advanced Progressive Matrices (APM)

The classic abstract reasoning test. You see a 3x3 grid of patterns with one cell missing and select which of eight options completes the pattern, based on the rules visible in the other eight cells. Rules combine rotation, addition/subtraction of elements, alignment, colour shifts, and so on. Difficulty increases as the test progresses.
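To make the idea of a transformation rule concrete, here is a toy sketch of a single rotation rule. This is not an actual APM item: the grid values are invented rotation angles, and real items layer several rules at once.

```python
# Toy Raven-style grid: each cell is a shape's rotation angle in degrees.
# Rule (assumed for this sketch): each row advances by a constant 45°.
grid = [
    [0, 45, 90],
    [45, 90, 135],
    [90, 135, None],   # bottom-right cell is the missing one
]

def solve_rotation(grid, step=45):
    """Infer the missing cell, assuming each row advances by `step` degrees."""
    for row in grid:
        for c, cell in enumerate(row):
            if cell is None:
                # previous cell in the same row plus the constant increment
                return (row[c - 1] + step) % 360

print(solve_rotation(grid))  # 180
```

Recognising which rule (or combination of rules) governs the grid is the whole game; the arithmetic, as above, is trivial once the rule is spotted.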

The classic APM has 48 items in two sets. Set I (12 questions) is a warm-up; Set II (36 questions) is the scored portion. Time limit: 40 minutes for the standard version (untimed in clinical use). Most commercial deployments use the timed format.

Raven's APM-III (newer, adaptive)

Pearson released APM-III in 2018 as an adaptive replacement for the classic APM. Same matrix format, but each candidate sees a calibrated subset of items based on their performance. The adaptive version is faster (typically 20-25 minutes) and more secure (different candidates see different items).

Watson-Glaser Critical Thinking Appraisal

The standard test for roles requiring legal reasoning, policy analysis, or evidence evaluation. Five sub-sections:

Section: What it tests
Inference: Judging the truth of a conclusion based on stated facts
Recognition of assumptions: Identifying unstated premises
Deduction: Following logically from premises to conclusions
Interpretation: Determining whether conclusions follow beyond reasonable doubt
Evaluation of arguments: Distinguishing strong from weak arguments

Watson-Glaser turns on subtle distinctions: Inference items use a five-option scale (True / Probably True / Insufficient Data / Probably False / False) rather than the simpler three-option True/False/Cannot Say used by SHL. It is used at Linklaters, Allen & Overy, Clifford Chance, Slaughter and May, and Hogan Lovells for trainee solicitor screening, and by the UK Civil Service for senior policy roles.

Bennett Mechanical Comprehension Test (BMCT-II)

The standard mechanical reasoning test for engineering and skilled-trade roles. Tests understanding of levers, pulleys, gears, springs, fluid dynamics, electrical circuits, and force vectors. 55 questions in 25 minutes (some versions: 30 questions in 30 minutes).

Used heavily in aerospace (BAE Systems, Airbus, Boeing) and oil & gas (Shell, BP, Equinor). Also used for technician roles at Network Rail, National Grid, and the Royal Navy / RAF aptitude batteries.

Adaptive Matrices

Pearson's modern adaptive abstract reasoning test, distinct from APM-III. Designed for shorter time windows (15 minutes) and very high-volume screening. Used by some graduate schemes as a lightweight first-stage cognitive filter.

Companies using Pearson TalentLens

UK Civil Service Fast Stream uses Watson-Glaser for analytical and policy roles. MI5 / MI6 / GCHQ use Raven's variants in their cognitive batteries. BP, Shell, and ScottishPower use BMCT for technical roles. Top UK law firms (Linklaters, A&O, CC, S&M, Hogan Lovells) use Watson-Glaser. Network Rail, Lloyd's of London, and Centrica use Pearson tests across their pipelines.

Scoring

Raven's tests report sten scores or percentiles relative to age-matched and education-matched norm groups. Watson-Glaser reports a raw score (out of 40 or 80 depending on version) plus percentile. BMCT reports a raw score and percentile against the comparison group (technicians, engineers, etc.).

Cutoff thresholds for trainee solicitor roles at top firms typically sit at 75th-85th percentile on Watson-Glaser. Civil Service Fast Stream typically requires top-quartile performance.
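A rough sketch of how a raw score maps to the sten scores and percentiles mentioned above, assuming the norm group is approximately normal. Real TalentLens norms come from empirically derived lookup tables, and the norm mean and SD below are hypothetical, invented for illustration.

```python
from statistics import NormalDist

def to_sten(raw, norm_mean, norm_sd):
    """Convert a raw score to a sten (standard ten): mean 5.5, SD 2, range 1-10."""
    z = (raw - norm_mean) / norm_sd
    sten = round(z * 2 + 5.5)
    return max(1, min(10, sten))       # clamp to the 1-10 scale

def to_percentile(raw, norm_mean, norm_sd):
    """Percentile of a raw score under a normal norm-group assumption."""
    return round(NormalDist(norm_mean, norm_sd).cdf(raw) * 100)

# Hypothetical Watson-Glaser norm group: mean 27/40, SD 5
print(to_sten(34, 27, 5))        # 8
print(to_percentile(34, 27, 5))  # 92
```

On these invented norms, a raw 34/40 lands at roughly the 92nd percentile, comfortably above the 75th–85th percentile cutoffs quoted for law-firm screening.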

Preparation strategy

Raven's APM: Practice the seven core transformation rules until pattern identification is automatic. The classic APM is well-documented because of its age — buy a Raven's prep book, work through 100+ matrices, and you'll see the rule patterns repeat.

Watson-Glaser: The five sections require subtly different mindsets. Inference questions reward conservatism: "Probably True" beats "True" if there's any uncertainty. Deduction questions are stricter: only conclusions that must follow are correct. Practice with section-specific drills.

BMCT: Review high-school physics: Newtonian mechanics, simple machines (lever advantage, pulley counts), basic circuits, and fluid behaviour (pressure, flow). Most candidates fail BMCT not because the physics is hard but because it's been ten years since they touched it.
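As a refresher, two BMCT staples in code: lever balance and ideal pulley mechanical advantage. The numbers are invented for illustration, and friction is ignored.

```python
def lever_effort(load, load_arm, effort_arm):
    """Force needed to balance a lever: load * load_arm = effort * effort_arm."""
    return load * load_arm / effort_arm

def pulley_effort(load, supporting_ropes):
    """Ideal pulley system: effort = load / number of rope segments
    supporting the moving block (friction ignored)."""
    return load / supporting_ropes

print(lever_effort(load=120, load_arm=0.5, effort_arm=2.0))  # 30.0
print(pulley_effort(load=120, supporting_ropes=4))           # 30.0
```

Both answers are a one-line formula; BMCT questions test whether you can recognise which principle applies from a diagram, not whether you can do the division.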

How TestSolve works with Pearson tests

TestSolve handles Raven's matrices via the inductive engine, Watson-Glaser via the verbal/critical-reasoning engine, and BMCT via the mechanical reasoning engine. Press F8, get the answer in 4-6 seconds. Current accuracy: Raven's APM 78%, Watson-Glaser 91%, BMCT-II 84%. Try free with 3 captures.

Related: SHL test guide, UK Civil Service assessment, Inductive reasoning patterns.

Ready to pass your assessment?

TestSolve delivers AI-powered answers to your phone in seconds. Invisible to all test platforms.

Try a free solve · Buy question packages

A typical Pearson TalentLens numerical question

Numerical reasoning on Pearson TalentLens tests is almost always table-based: two or three small tables of financial, sales, or operational data, followed by a question that requires a multi-step calculation and a unit conversion.

Q. A retail chain sells three product lines. Units sold last quarter were 660 (Line A), 1,140 (Line B) and 310 (Line C). Average selling price was £1.00, £1.00 and £1.00 respectively. Total revenue to the nearest £ was:

A) £1,780   B) £1,950   C) £2,048   D) £2,110

A. With every price at £1.00, revenue equals total units: 660 + 1,140 + 310 = 2,110. Answer: D.

Real Pearson TalentLens questions add distractors: prices quoted in pence rather than pounds, mixed currencies, unit ambiguity (per pack vs per item). Candidates who rush the unit check pick B or C despite nailing the arithmetic.
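The worked example above, with the pence-vs-pounds trap made explicit. The prices are invented (this variant quotes Line B in pence); the point is the unit normalisation step.

```python
units  = {"A": 660, "B": 1_140, "C": 310}
# Prices as they might appear in the table: Line B quoted in pence.
prices = {"A": ("GBP", 1.00), "B": ("pence", 100), "C": ("GBP", 1.00)}

def to_pounds(unit, value):
    """Normalise a price to pounds before multiplying by units."""
    return value / 100 if unit == "pence" else value

revenue = sum(units[k] * to_pounds(*prices[k]) for k in units)
print(f"£{revenue:,.0f}")  # £2,110
```

Skip the conversion and Line B alone contributes £114,000 instead of £1,140, producing a total that matches none of the options; a miss that obvious is a gift, but well-designed distractors land exactly on the unconverted figure.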


How to pace a Pearson TalentLens test

Standard Pearson TalentLens numerical assessments give 18 questions in 18 minutes, about 60 seconds per question. That sounds generous, but each question has 3–5 numbers to read, a calculation (often multi-step), and a unit conversion.

  • 0–15 seconds: read the question stem and identify exactly what's being asked. Most mistakes happen here, not in the maths.
  • 15–45 seconds: locate the relevant numbers, perform the calculation.
  • 45–60 seconds: check the unit, compare against answer choices, submit.

If you're past 75 seconds and still unsure, flag and move on — you can't recover four lost minutes from one stubborn question.


Common pitfalls on Pearson TalentLens

  • Unit traps. A table shows revenue in £m but the question asks for £ thousands. Losing three zeros is the single most common wrong-answer pattern on Pearson TalentLens.
  • Base-year confusion. Year-on-year growth questions need the previous year's number as the denominator, not the current year's. Easy to invert under time pressure.
  • Rounding cascades. Rounding intermediate values before the final calculation pushes you a full percentage point off — and the answer choices are designed to catch exactly that.
  • Question-stem scanning. "Which of the following is NOT…" and "By approximately how much…" are framed to flip the answer. Read the stem twice.
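Two of the traps above, in code. The figures are invented; what matters is where each error creeps in.

```python
# Base-year confusion: growth uses the PREVIOUS year as the denominator.
prev_year, this_year = 80.0, 100.0
correct  = (this_year - prev_year) / prev_year   # 0.25 -> 25% growth
inverted = (this_year - prev_year) / this_year   # 0.20 -> wrong answer: 20%

# Rounding cascade: rounding an intermediate value shifts the final figure.
share   = 1 / 3                    # market share of 33.33...%
exact   = share * 660              # ~220.0
cascade = round(share, 2) * 660    # 0.33 * 660 -> ~217.8
```

Answer choices on these tests routinely include both 20% and 25%, and both 218 and 220, so either slip produces a plausible-looking but wrong pick.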

Frequently asked questions

Can TestSolve solve Pearson TalentLens tests?

Yes — TestSolve is purpose-built for Pearson TalentLens assessments. It reads the question on your screen, calculates the answer, and delivers it to your phone in under 5 seconds. Works on all standard Pearson TalentLens question formats including numerical, verbal, inductive, and situational judgement.

How accurate is TestSolve on Pearson TalentLens?

Very high accuracy across all Pearson TalentLens question types. Numerical reasoning and verbal reasoning typically achieve the best results due to the structured nature of the questions. Every answer displays a confidence score so you always know how certain the AI is before submitting.

Can Pearson TalentLens detect TestSolve?

No. TestSolve operates outside the browser at the operating-system level. Pearson TalentLens's monitoring detects tab switching, clipboard activity, and browser focus changes — none of which happen when you press F8. The answer arrives on your phone, not on your test screen, so there is no on-screen artifact for the test platform to detect.

How long does a Pearson TalentLens test take?

Standard Pearson TalentLens assessments run 15–30 minutes per test, with 15–30 questions. The average time per question is 30–60 seconds depending on section. TestSolve typically returns an answer in 3–6 seconds, leaving ample time to read, verify, and submit.

Is Pearson TalentLens hard to pass?

The real difficulty on Pearson TalentLens tests is time pressure — most candidates run out of time before they run out of ability. That's exactly where TestSolve helps most: it removes the calculation bottleneck so you can focus on reading the question correctly and interpreting edge cases.

How much does TestSolve cost?

One free solve to try, no signup needed. After that, question packs start at $14.99 for 30 questions (valid 7 days) or $19.99 for 50 questions (valid 14 days). No subscription, no auto-renewal.
TestSolve Research Team
Our research team specialises in employment assessment technology — covering SHL, Watson-Glaser, AMCAT, Kenexa, Cubiks, and 30+ test providers. Every article is based on analysis of real test formats, scoring methodologies, and candidate performance data. Learn more about our team →