Neurocognitive testing is an increasingly important part of occupational medicine, not least because of an ageing workforce and the increasing prevalence of chronic medical conditions with neuropsychiatric sequelae. Most cognitive screening tests are relatively simple to undertake; their interpretation, however, is more nuanced than simply calculating a score. Misunderstanding the results of cognitive tests could lead to over- or underdiagnosis, with important implications for treatment, insurance and employment [1]. This review offers guidance on the conduct and interpretation of cognitive assessment tools, based on recent research and personal clinical experience.

There is no ‘best cognitive test’; guidelines are vague as to which tests should be performed, with no consensus on a preferred testing strategy. This is frustrating for the clinical team and leads to substantial heterogeneity in the tests employed [2]. In fact, a single tool that is appropriate for every situation is unrealistic: neuropsychological disorders are multifaceted and a ‘one-size-fits-all’ approach to testing does patients a disservice.

There is a substantial and increasing variety of cognitive assessment tools available, varying in purpose, length and the cognitive domains covered [3]. The choice of assessment tool needs to be guided by several factors: how much time is available, who will administer the test, their level of training and so on. Before choosing a test, the user must be clear on the purpose of the assessment, whether they are looking for very basic triage or a detailed multi-domain neuropsychological assessment that may inform a diagnostic formulation. The Alzheimer’s Society have produced a useful cognitive testing toolkit for health professionals [4]. Rather than name a single cognitive assessment of choice, they offer a range of options for differing settings. Occupational health practitioners should choose a few tests from this resource and become familiar with them.

Cognitive tests offer more than pass/fail; research around cognitive test properties has tended to focus on test accuracy, i.e. how well a test can detect dementia or other cognitive syndromes. No cognitive test offers both perfect sensitivity and specificity; indeed, the two measures tend to be inversely related. The optimal trade-off should be guided by the purpose of testing and the implications of an erroneous result. If the plan is to offer basic screening, where an abnormal result will be followed up in specialist services, then higher sensitivity may be preferred (fewer false negatives). If the test is to determine suitability for work, then a more specific test may be appropriate (fewer false positives) [5].
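To make this trade-off concrete, the following is a minimal sketch, not drawn from the source article, using purely illustrative figures for sensitivity, specificity and prevalence, of how the choice between a more sensitive screen and a more specific test changes the expected numbers of missed cases and false alarms per 1,000 people assessed.

```python
# Illustrative sketch only: the sensitivity, specificity and prevalence
# figures below are hypothetical and chosen purely to show the trade-off.

def errors_per_1000(sensitivity, specificity, prevalence, n=1000):
    """Expected false negatives and false positives when testing n people."""
    with_condition = n * prevalence
    without_condition = n - with_condition
    false_negatives = with_condition * (1 - sensitivity)      # missed cases
    false_positives = without_condition * (1 - specificity)   # wrongly flagged
    return round(false_negatives), round(false_positives)

prevalence = 0.10  # hypothetical 10% prevalence in the tested population

# A 'sensitive' screen (few missed cases, more people wrongly flagged)...
print("Sensitive screen:", errors_per_1000(0.95, 0.75, prevalence))
# ...versus a 'specific' test (fewer wrongly flagged, more missed cases).
print("Specific test:  ", errors_per_1000(0.75, 0.95, prevalence))
```

With these invented figures, the sensitive screen misses around 5 cases but wrongly flags about 225 unaffected people, whereas the specific test misses around 25 cases but flags only around 45; this is why the consequence of a positive result should drive the choice of test.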

The accuracy of a test is not absolute and will vary by the condition of interest. The Montreal Cognitive Assessment (MoCA) is increasingly used as a dementia assessment but was initially developed to detect mild cognitive impairment, rather than frank dementia; we should not assume that test properties will be similar for the two conditions [6]. Sometimes, when the same test is used in differing populations, the threshold score used to define a positive test is altered to suit the new population. Again using the MoCA as an example, the threshold used in stroke care is lower than the traditional threshold score [7].
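As a rough illustration of why thresholds are adjusted between populations, the sketch below uses invented scores and cut-offs, not figures from the MoCA validation literature, to show how moving the cut-off on the same set of scores shifts sensitivity and specificity.

```python
# Hypothetical example: scores out of 30 for people with and without
# cognitive impairment in some new population. Values are invented
# purely to illustrate the effect of changing the cut-off.
impaired_scores = [14, 17, 19, 20, 21, 22, 23, 24]
unimpaired_scores = [21, 23, 24, 25, 26, 27, 28, 29]

def sens_spec(cutoff, impaired, unimpaired):
    """A score below the cut-off counts as a positive (abnormal) test."""
    sensitivity = sum(s < cutoff for s in impaired) / len(impaired)
    specificity = sum(s >= cutoff for s in unimpaired) / len(unimpaired)
    return sensitivity, specificity

for cutoff in (26, 22):  # higher vs lower cut-off (both illustrative)
    sens, spec = sens_spec(cutoff, impaired_scores, unimpaired_scores)
    print(f"cut-off <{cutoff}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Lowering the cut-off in this toy example trades some sensitivity for specificity; in practice, a revised threshold is derived from validation studies in the relevant population rather than chosen by hand.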

Any cognitive assessment that offers more than basic screening will segment the testing into various neuropsychological domains. This can be particularly useful when considering the specific skills required for a person’s occupation. Taking the Addenbrooke’s Cognitive Examination-III (ACE-III) [8] as an example, the test provides individual scores for the domains of attention, orientation, memory, verbal fluency, language and visuospatial ability. Since many tests offer this useful level of detail, it is oddly reductionist to only ever consider whether a person’s aggregate score across the component sections is greater or less than a ‘normal’ value. Looking at total scores can be useful, but often the pattern of domain-specific impairment is more enlightening, as the sketch below illustrates. Consider a person with repeated falls and tremor, whose work colleagues comment that they seem ‘slow’. Testing with the MoCA gives a borderline normal total score, but points are lost on attention and executive function tasks, a pattern that may be seen with cognitive problems in the context of Parkinson’s disease.
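A minimal sketch of the total-versus-domain point follows; the domain names loosely mirror the ACE-III, but the scores, maximums and the 80%-of-maximum flagging rule are invented for illustration and are not published cut-offs.

```python
# Hypothetical ACE-III-style result: the total looks 'borderline normal',
# but domain-level scores reveal a selective pattern of impairment.
# All numbers and the 80%-of-maximum rule are illustrative only.
domain_scores = {
    "attention": (12, 18),      # (score, maximum)
    "memory": (24, 26),
    "fluency": (7, 14),
    "language": (25, 26),
    "visuospatial": (15, 16),
}

total = sum(score for score, _ in domain_scores.values())
maximum = sum(mx for _, mx in domain_scores.values())
print(f"Total: {total}/{maximum}")

# Flag domains scoring below 80% of their maximum (an arbitrary demo rule).
flagged = [name for name, (score, mx) in domain_scores.items() if score < 0.8 * mx]
print("Domains to review:", flagged)
```

Here the aggregate of 83/100 might sit close to a typical cut-off, yet the attention and fluency subscores point to the sort of selective pattern described above.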

Accuracy is not the only important test property; evidence reviews of the most popular cognitive tests suggest that there are no substantial differences in accuracy between commonly used tests, albeit the balance of sensitivity and specificity may differ. However, other properties of these tests do differ, and it is worth considering these factors before committing to a particular test strategy. Perhaps the most important of these are test feasibility and acceptability. More sophisticated multi-domain cognitive tests offer greater granularity of assessment, but this comes at the cost of greater time and training requirements. Formal neuropsychological batteries can take several hours and such lengthy testing may not be possible for clinicians or their patients. There are also economic and resource implications to consider. The once ubiquitous Folstein Mini-Mental State Examination (MMSE) is now less popular due to copyright enforcement [9].

Very short (<5 min) cognitive screens are available, such as Hodkinson’s Abbreviated Mental Test [4]. These tests do not usually pose feasibility or acceptability issues, but they are limited by ceiling effects (the majority of people achieve full marks), cover fewer cognitive domains and have poorer responsiveness to change over time. Thus, short tests are far from perfect, but if you only have 5 min to spare, it may be better to perform one of these tests well than to rush a more detailed assessment.

Test results need to be interpreted in context; interpretation should always take account of the individual patient. The cognitive test is only part of a clinical assessment that will also include history and physical examination. Clinicians would not make a diagnosis of angina based on an electrocardiogram alone, and the same should be true of cognitive tests and dementia.

An important consideration is ‘baseline’ cognitive function. Assessment is usually prompted by a concern over a change in cognition. Those coming from a high educational background may perform sufficiently well on cognitive screening tests to reach the threshold ‘normal’ value, even when there has been substantial cognitive decline. Likewise, a person with limited literacy and numeracy may lose points on cognitive testing with no real change in cognition.

Other comorbidities may also confound test performance; for example, someone with arthritis and synovitis of the hand may struggle to complete timed pencil-and-paper tests, but this is clearly not related to cognition. The differential diagnosis of an abnormal cognitive test should always include the ‘Five Ds’: while the test could indicate dementia or delirium, depression (or other mood disorders), deafness (or other sensory issues) and dysphasia (or other communication problems, including whether English is the first language) should all be considered. Depression and anxiety may be particularly important in the workplace. A common ‘vicious cycle’ scenario is concern over performance, resulting in employer performance management, which in turn causes mood problems that may exacerbate the performance issues. Dissecting cognitive decline from mood disorder may well fall to the occupational medicine team.

It is important to get a collateral history; the common patient-facing tests only assess cognitive function at the time of testing, whereas it is more useful to assess change in cognitive function over time. The person being tested may have little insight into such changes, so obtaining a collateral history from a family member or caregiver is an essential but often neglected component of the assessment. Several questionnaires are available that help structure and operationalize this informant interview. The Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) compares the patient now versus 10 years ago and has been found to have high sensitivity in detecting dementia [10].

Keep it functional; if one considers cognitive testing using the WHO International Classification of Functioning, most tests tell us about impairment only. In practice, clinicians are often interested in how cognitive issues manifest at the levels of activity (formerly disability) and societal participation (formerly handicap). Here we need a tool to assess the functional impact of cognitive impairment. Structured tools are available to describe performance in basic and extended activities of daily living, and these can complement cognitive testing to give a more holistic assessment. In the ward setting, clinicians will often directly assess a patient’s performance in tasks such as dressing and meal preparation. Assessing a person performing usual tasks in their place of work, complemented by reports from work colleagues, could be an equivalent assessment. There is no structured cognitive tool for this situation; rather, this form of assessment relies on the experience and skill of the specialist in occupational medicine. This could be an area for further development.

References

1. Brown J. The use and misuse of short cognitive tests in the diagnosis of dementia. J Neurol Neurosurg Psychiatry 2015;86:680–685.
2. Menon R, Larner AJ. Use of cognitive screening instruments in primary care: the impact of national dementia directives (NICE/SCIE, National Dementia Strategy). Fam Pract 2011;28:272–276.
3. Harrison JK, Noel-Storr AH, Demeyere N, Reynish EL, Quinn TJ. Outcomes measures in a decade of dementia and mild cognitive impairment trials. Alzheimers Res Ther 2016;8:48.
4. Alzheimer’s Society. Helping You to Assess Cognition: A Practical Toolkit for Clinicians. https://www.alzheimers.org.uk/dementia-professionals/resources-professionals/publications/assessing-cognition-older-people-toolkit (June 2018, date last accessed).
5. Takwoingi Y, Quinn TJ. Review of Diagnostic Test Accuracy (DTA) studies in older people. Age Ageing 2018;47:349–355.
6. Davis DH, Creavin ST, Yip JL, Noel-Storr AH, Brayne C, Cullum S. Montreal Cognitive Assessment for the diagnosis of Alzheimer’s disease and other dementias. Cochrane Database Syst Rev 2015:CD010775.
7. Quinn TJ, Elliott E, Langhorne P. Cognitive and mood assessment tools for use in stroke. Stroke 2018;49:483–490.
8. Noone P. Addenbrooke’s Cognitive Examination-III. Occup Med (Lond) 2015;65:418–420.
9. Newman JC, Feldman R. Copyright and open access at the bedside. N Engl J Med 2011;365:2447–2449.
10. Harrison JK, Fearon P, Noel-Storr AH, McShane R, Stott DJ, Quinn TJ. Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) for the diagnosis of dementia within a secondary care setting. Cochrane Database Syst Rev 2015:CD010772.
