
Development of robust normative data for the neuropsychological assessment of Greek older adults

Published online by Cambridge University Press:  29 January 2024

Xanthi Arampatzi
Affiliation:
Lab of Cognitive Neuroscience, School of Psychology, Aristotle University of Thessaloniki, Thessaloniki, Greece Athens Alzheimer’s Association, Athens, Greece
Eleni S. Margioti
Affiliation:
Lab of Cognitive Neuroscience, School of Psychology, Aristotle University of Thessaloniki, Thessaloniki, Greece Athens Alzheimer’s Association, Athens, Greece
Lambros Messinis
Affiliation:
Lab of Cognitive Neuroscience, School of Psychology, Aristotle University of Thessaloniki, Thessaloniki, Greece
Mary Yannakoulia
Affiliation:
Department of Nutrition and Dietetics, Harokopio University, Athens, Greece
Georgios Hadjigeorgiou
Affiliation:
School of Medicine, University of Cyprus, Nicosia, Cyprus
Efthimios Dardiotis
Affiliation:
School of Medicine, University of Thessaly, Larissa, Greece
Paraskevi Sakka
Affiliation:
Athens Alzheimer’s Association, Athens, Greece
Nikolaos Scarmeas
Affiliation:
1st Department of Neurology, Aiginition Hospital, National and Kapodistrian University of Athens Medical School, Athens, Greece Department of Neurology, Taub Institute for Research in Alzheimer’s Disease and the Aging Brain, The Gertrude H. Sergievsky Center, Columbia University, New York, USA
Mary H. Kosmidis*
Affiliation:
Lab of Cognitive Neuroscience, School of Psychology, Aristotle University of Thessaloniki, Thessaloniki, Greece
*
Corresponding author: M. H. Kosmidis; Email: kosmidis@psy.auth.gr

Abstract

Objective:

Normative data for older adults may be tainted by inadvertent inclusion of undiagnosed individuals at the very early stage of a neurodegenerative process. To avoid this pitfall, we developed norms for a cohort of older adults without MCI/dementia at 3-year follow-up.

Methods:

A randomly selected sample of 1041 community-dwelling individuals (age ≥ 65) received a full neurological and neuropsychological examination on two occasions [mean interval = 3.1 (SD = 0.9) years].

Results:

Of these, 492 participants (Group 1; 65–87 years old) were without dementia on both evaluations (CDR = 0 and MMSE ≥ 26); their baseline data were used for norms development. Group 2 (n = 202) met the aforementioned criteria at baseline only, but not at follow-up. Multiple linear regressions, with demographic characteristics as predictors and raw test scores as dependent variables, were conducted separately for each test variable to derive regression-based normative formulae. Standardized scaled scores and stratified discrete norms were also calculated. Group 2 performed worse than Group 1 on most tests (p-values ranging from < .001 to .021). Education was associated with all test scores, age with most, and sex effects were consistent with the literature.

Conclusions:

We provide a model for developing sound normative data for widely used neuropsychological tests among older adults, untainted by potential early, undiagnosed cognitive impairment, reporting regression-based, scaled, and discrete norms for use in clinical settings to identify cognitive decline in older adults. Additionally, our co-norming of a variety of tests may enable intra-individual comparisons for diagnostic purposes. The present work addresses the challenge of developing robust normative data for neuropsychological tests in older adults.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of International Neuropsychological Society

Introduction

Normative data for neuropsychological tests are commonly used in clinical assessments to evaluate an individual’s cognitive abilities relative to a healthy comparison group. When dealing with older adults, however, healthy comparison groups may inadvertently include individuals at the very early preclinical stages of a neurodegenerative disorder, such as Mild Cognitive Impairment (MCI) or dementia. Inclusion of such cases in a standardization sample may lower normative standards and potentially lead to erroneous classification of impaired individuals as cognitively healthy (McCaffrey & Westervelt, 1995; Oosterhuis et al., 2016; Salthouse & Tucker-Drob, 2008). Thus, sensitive measures and methods of detection are needed to ensure accurate early identification of such disorders.

Given the insidious nature of neurodegenerative disorders among older individuals, cross-sectional studies in this age group, by their nature, cannot supply adequate information about possible cases with early underlying pathology. Instead, longitudinal studies are needed to confirm, on follow-up, the non-neurological status of participants in the original normative databases. Thus, ideally, a normative study of older adults should include a longitudinal investigation to allow the ex post facto exclusion of any individuals from the initial assessment who later developed dementia, ensuring that the normative data are based on healthy individuals. Few studies to date have developed such robust normative data (i.e., Alden et al., 2022; Kramer et al., 2020) based on samples of older adults who were clinically normal across several assessments, yielding norms that were more sensitive to early detection of memory decline than existing norms.

Another critical issue in developing normative data, which may influence diagnostic accuracy, relates to the norming procedure. Discrete norms are the most widely available and frequently used. Particular demographic characteristics (e.g., age, educational level, sex) have been found to influence performance on most neuropsychological tests, and they are routinely controlled for in discrete normative databases (Lezak et al., 2012; Mitrushina et al., 2005; Strauss et al., 2006). Traditional discrete norms tables typically provide data broken down by age and education, and sometimes sex (if this factor was found to influence performance on the particular test in question). Alternatively, others provide cutoff scores with or without corrective factors (e.g., adding 1 or 2 points to an individual’s score for low education or old age before comparing with normative data) (Duff & Ramezani, 2015; Heaton et al., 2004). An individual’s scores are then compared with the appropriate normative group (i.e., people in the same age and education range) to determine whether his or her score deviates from that expected based on the performance of a healthy cohort with similar demographic characteristics.

Although quite popular, discrete norms have some limitations. One is that normative groups are typically divided based on age and educational level, and these divisions may sometimes be arbitrary. Another limitation is that performance designation can shift depending on which age band is used, even though the raw score remains the same. For example, a 65-year-old compared with a normative group of 65- to 80-year-olds may be at an advantage, whereas the same score compared with the immediately preceding age band (e.g., 50–65) might indicate impairment. Furthermore, in some cases, published normative data classify older adults as a single age group with an age range of 60–90 years. Given the rapid decline in most cognitive abilities that comes with age, an individual’s performance could be either overestimated or underestimated depending on where it falls within such a broad age range (overestimating impairment at the low end and underestimating it at the high end). Finally, stratified discrete norms have the disadvantage of requiring large sample sizes for reliable results, due to statistical limitations (Duff et al., 2010; Oosterhuis et al., 2016).

An alternative approach is to derive continuous norms using multiple regression equations (Parmenter et al., 2010). This method, now commonly used in test development, is the Standard Regression-Based change score approach and has been used extensively in healthy as well as clinical samples (Brandt & Benedict, 2001; Martin et al., 2002). Regression-based normative formulae simultaneously correct, via multivariate regression analyses, for any demographic variables that may influence neuropsychological performance (Temkin et al., 1999) and, thus, may be more sensitive than discrete norms in detecting late-life cognitive decline (Duff & Ramezani, 2015). Another benefit of regression-based or continuous norms is that they require smaller samples than are needed for discrete norms (Oosterhuis et al., 2016).

Therefore, our initial goal in undertaking the present study was to verify whether a group of individuals who demonstrably were not at an early stage of a neurodegenerative process would in fact yield stricter normative criteria (i.e., higher expected scores) than a group of older individuals who later developed MCI or dementia and, thus, may already have been demonstrating subtle indications of decline. To our knowledge, this has only been done to date for single tests (i.e., the Rey Auditory Verbal Learning Test in Alden et al., 2022; the California Verbal Learning Test in Kramer et al., 2020). Yet clinical assessments typically rely on a battery of tests from which to draw potential diagnostic conclusions. Thus, we used a comprehensive test battery assessing five major cognitive domains simultaneously. If this was confirmed, our next goal was to develop normative data based on the former, untainted group, applying regression-based normative formulae that simultaneously correct for all relevant demographic variables (age, level of education, and sex), as well as traditional discrete norms (stratified by the aforementioned demographic variables, where relevant), to compare the relative utility of each (based on patient examples) and provide clinicians with a choice for their clinical practice or research purposes.

Methods

Participants

Participants were recruited for the Hellenic Longitudinal Investigation of Aging and Diet (HELIAD), a longitudinal, epidemiologic, population-based study in Greece, and were randomly selected from municipality rosters (the city of Larissa and surrounding rural regions, as well as Marousi, a suburb of Athens). Procedures and study details have been described previously (Bougea et al., 2019; Dardiotis et al., 2014; Kosmidis et al., 2018; Vlachos et al., 2020). Briefly, of the randomly selected individuals 65 years of age and older, we were able to contact 3552, of whom 2004 (58% women) agreed to participate. No exclusion criteria were applied besides age. About two-thirds of the total study sample had a low level of education (some primary education).

All participants provided written informed consent before their participation. The present research received approval by the ethics committees of the University of Thessaly and the National and Kapodistrian University of Athens and was conducted in accordance with the Helsinki Declaration.

Procedure

Cognitive status was evaluated on two occasions to eliminate the possibility of subsequent cognitive decline not yet detected at the baseline evaluation. At baseline, neurologists conducted a standard physical and neurological examination, including a structured face-to-face interview covering medical, neurological, psychiatric, and family history, as well as activities of daily living, social and leisure activities, and lifestyle patterns. The neurologists assigned a global summary score to each participant, namely, a Clinical Dementia Rating (CDR) score (Morris, 1993). Additionally, neuropsychologists administered a comprehensive neuropsychological test battery, which included tests of orientation, memory (verbal and non-verbal), language (naming, verbal fluency, comprehension, and repetition), attention/processing speed, executive functioning, visuospatial perception, and a gross estimate of intellectual abilities (vocabulary test).

Consensus diagnostic meetings were held to determine the potential presence of dementia (based on Diagnostic and Statistical Manual of Mental Disorders-IV-TR criteria; APA, 2000) and MCI (based on National Institute of Neurological and Communicative Disorders and Stroke and Alzheimer’s Disease and Related Disorders Association criteria (McKhann et al., 1984) and the criteria of the International Working Group on MCI). Participants were reassessed after a mean time interval of approximately 3.1 (range: 1.3–7.3) years, repeating the baseline neurological and neuropsychological assessment. After the follow-up examination, a consensus meeting was held to revise the participants’ initial clinical diagnosis, if necessary.

Participants included in the present analyses met the following criteria: (1) they were considered not to have dementia at the baseline diagnostic evaluation, based on two indices: a value of 0 on the CDR and a value of ≥ 26 on the Mini-Mental State Examination (MMSE; Folstein et al., 1975); (2) they had returned for the follow-up evaluation and had CDR and MMSE ratings (regardless of their scores per se); and (3) they had completed the neuropsychological battery at least on the initial assessment. Illiterate individuals with no formal schooling (determined by self-report) were excluded from the present analyses and have been presented separately (Mandyla et al., 2022), as they constitute not only a quantitatively but also a qualitatively different group warranting specialized attention.
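For illustration only, the following minimal sketch expresses these inclusion criteria as a filter over a hypothetical participant table; the column names (cdr_baseline, mmse_baseline, cdr_followup, mmse_followup, education_years) are our assumptions and do not come from the HELIAD dataset.

```python
# Illustrative sketch only: the inclusion criteria described above expressed as a
# filter over a hypothetical participant table. Column names are assumptions.
import pandas as pd

def select_normative_sample(df: pd.DataFrame) -> pd.DataFrame:
    """Keep participants without dementia at baseline (CDR = 0, MMSE >= 26) who
    returned for follow-up with CDR/MMSE ratings and had some formal schooling."""
    no_dementia_baseline = (df["cdr_baseline"] == 0) & (df["mmse_baseline"] >= 26)
    has_followup_ratings = df["cdr_followup"].notna() & df["mmse_followup"].notna()
    literate = df["education_years"] > 0  # illiterate participants analyzed separately
    return df[no_dementia_baseline & has_followup_ratings & literate]
```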

Neuropsychological assessment

For the purpose of the overall investigation, we compiled tests from multiple sources to provide a comprehensive assessment of cognitive functioning, memory, language, attention/processing speed, executive functioning and motor programing, visuospatial ability, and a gross estimate of general intelligence. The battery comprised the following tests (all those containing verbal items were Greek versions):

  a. Medical College of Georgia Complex Figure Test (MCG; Lezak et al., 2004), consisting of a complex line drawing; variables of interest: correct number of items on copy and percentage of items copied correctly, immediate and delayed recall and percentage of items recalled on each condition relative to the copy score, and recognition (a condition created for the present study). We used version 1 (of the four versions available) during the first assessment and version 2 on the second assessment.

  b. Greek Verbal Learning Test (GVLT; Vlahou et al., 2013), a list-learning test of semantically related items (16 in total, from 4 semantic categories); variables of interest: correct number of words on the first learning trial, the total of five learning trials, immediate and delayed recall (on both free and cued recall conditions), learning curve, and encoding, consolidation, and retrieval deficit indices. We used test version A on the initial assessment and one of the three alternate forms (specifically, B) on the second assessment.

  c. Verbal fluency: semantic (objects) and phonological verbal fluency (letter A) (Kosmidis et al., 2004); variables of interest: number of words, clusters, and switches produced for each condition. We used alternate forms in the two assessments.

  d. Subtests of the Boston Diagnostic Aphasia Examination-Short Form, specifically, the Boston Naming Test-Short Form (15 items) and selected items from the Complex Ideational Material subtest to assess verbal comprehension, plus repetition of three multisyllabic words and three short phrases selected to be challenging to reproduce (Tsapkini et al., 2010); variables of interest: number correct on naming, comprehension, and repetition.

  e. Trail Making Test (TMT), Parts A and B (Vlahou & Kosmidis, 2002); variables of interest: time to completion on each part and their ratio (TMT B:TMT A). For those who could not complete a part within 5 minutes (maximum time = 300 s), the procedure was discontinued (Spreen & Strauss, 1991).

  f. Anomalous Sentence Repetition (original Greek version created for the present study, based on a description in Lezak et al., 2004), consisting of 14 commonly used sayings, each of which included an incorrect word, to be repeated as given; variable of interest: number of correct items.

  g. Graphical Sequence Test (Lezak et al., 2004), consisting of six requests to write or draw specific sequences of objects, shapes, or numbers; variable of interest: number of correct items.

  h. Recitation of Months (forward and backward) (Östberg et al., 2008; Tardif et al., 2013); variables of interest: time to completion forward, backward, and their ratio (backward:forward).

  i. Motor Programming (Lezak et al., 2004), consisting of a congruent (20 trials of tapping the table the same number of times as the examiner, specifically once or twice) and an incongruent (20 trials of tapping the opposite number of times, namely once for two taps and twice for one) condition; variables of interest: number of correct responses on the congruent and incongruent conditions.

  j. Judgment of Line Orientation (JLO; Benton et al., 1983), abbreviated for the present study (10 items, every third item of the original test); variable of interest: number of correct pairs.

  k. Clock Drawing Test (CDT), free-drawn version (Bozikas et al., 2008; Freedman et al., 1994); variable of interest: total score.

  l. Vocabulary: a Greek multiple-choice vocabulary test (Giaglis et al., 2010); variable of interest: number of correct responses.

Data analyses

Student’s t-tests were used to compare mean scores between two groups. Group 1 comprised participants without dementia at the baseline assessment, as defined above (CDR = 0 and MMSE ≥ 26), who also continued to meet these criteria at follow-up; this group was used in the present analyses to generate the normative data. Group 2 comprised participants who were without dementia only at the baseline examination (CDR = 0 and MMSE ≥ 26) but did not meet both criteria at the follow-up assessment (i.e., had a CDR > 0 or an MMSE < 26). To indicate the size of the differences between the two groups, we also estimated Cohen’s d effect sizes, considering values of 0.2–0.3 as small, approximately 0.5 as medium, and 0.8–1.0 as large.
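As a hedged illustration of this comparison step (not the study’s actual analysis code), the following Python sketch computes a Student’s t-test and a pooled-SD Cohen’s d for one test variable; the simulated score arrays and their parameters are assumptions.

```python
# Minimal sketch of a two-group comparison: Student's t-test plus Cohen's d
# using the pooled standard deviation. Simulated data are illustrative only.
import numpy as np
from scipy import stats

def cohens_d(group1: np.ndarray, group2: np.ndarray) -> float:
    """Cohen's d based on the pooled standard deviation of the two groups."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2)
    return (group1.mean() - group2.mean()) / np.sqrt(pooled_var)

group1 = np.random.default_rng(0).normal(40, 10, 492)  # hypothetical raw scores, Group 1
group2 = np.random.default_rng(1).normal(35, 10, 202)  # hypothetical raw scores, Group 2
t, p = stats.ttest_ind(group1, group2)                 # Student's t-test (equal variances)
print(f"t = {t:.2f}, p = {p:.3f}, d = {cohens_d(group1, group2):.2f}")
```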

Multiple regression analyses were performed with the neuropsychological test scores as the dependent variables. The independent variables were age at baseline assessment, age squared, education, and sex. The significance of the age-squared term was tested in the model to evaluate nonlinear effects on performance (Parmenter et al., 2010). If an independent variable had a nonsignificant regression coefficient, it was dropped from the model, and the model was re-estimated with the remaining variables. The procedure was repeated until all remaining predictors had regression weights significantly different from zero (Oosterhuis et al., 2016). Coefficients of determination (R²) were reported as a measure of the variation explained by each model. Statistical analyses were performed using SPSS software version 29.0. [We applied linear analyses, instead of the nonlinear analyses proposed by Kornak et al. (2019), to retain the original data and avoid potential manipulation of data and variables where outliers may exist, and because existing software is not readily available for the simultaneous estimation of nonlinear trends.] Traditional norms were derived by dividing our sample into discrete age and educational-level categories and computing the mean (M) and standard deviation (SD), as well as percentile rankings. Performance was classified as normal if it fell above a cutoff of 1.5 SD below the mean, which corresponds to the 7.5th percentile (Petersen & Morris, 2005). Also, raw scores of all variables of interest were adjusted and converted to scaled standardized scores (M = 10, SD = 3) using the cumulative frequency distribution.
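The backward-elimination regression step described above could be sketched as follows. This is an illustrative reconstruction under simulated data, not the authors’ SPSS procedure; the variable names, the simulated coefficients, and the rule of dropping the least significant predictor one at a time are our assumptions.

```python
# Hedged sketch of regression-based norming: regress a raw test score on age,
# age squared, education, and sex, then iteratively drop the least significant
# predictor until all remaining coefficients are significant (p < .05).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "age": rng.integers(65, 88, 492),
    "education": rng.integers(1, 22, 492),
    "sex": rng.integers(0, 2, 492),          # 0 = male, 1 = female
})
df["age_sq"] = df["age"] ** 2
df["score"] = 60 - 0.2 * df["age"] + 0.8 * df["education"] + rng.normal(0, 10, 492)

predictors = ["age", "age_sq", "education", "sex"]
while True:
    X = sm.add_constant(df[predictors])
    model = sm.OLS(df["score"], X).fit()
    pvals = model.pvalues.drop("const")
    if pvals.max() <= 0.05 or len(predictors) == 1:
        break
    predictors.remove(pvals.idxmax())        # drop the least significant predictor

print(model.summary().tables[1])             # remaining coefficients
print(f"Adjusted R^2 = {model.rsquared_adj:.2f}, "
      f"SD residual = {np.sqrt(model.mse_resid):.2f}")
```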

Finally, we applied each type of normative data to two case examples to illustrate their relative utility in detecting potential cognitive impairment in one patient with known dementia and another with known MCI.

Results

Group comparisons

At baseline, the overall study sample comprised 1984 participants, of whom 1089 were also examined at follow-up. Of those examined at both assessments, 492 were considered without dementia at both time points (Group 1; age range: 65–87 years, education range: 1–21 years) and constituted the main sample of this study. A further 395 participants with baseline and follow-up assessments did not meet the criteria for inclusion (based on CDR and MMSE scores) at baseline and were thus excluded. Finally, 202 participants who were initially classified as without dementia at the baseline assessment did not meet the criteria at the follow-up assessment (they had either a CDR > 0 or an MMSE < 26); they were included in a comparison group (Group 2; age range: 65–88 years, education range: 1–18 years) only for the purpose of establishing the need (or lack thereof) for basing the normative data on Group 1. Table 1 lists the demographic and clinical characteristics of Groups 1 and 2.

Table 1. Demographic and clinical characteristics during the baseline evaluation for Group 1 and Group 2

GDS = Geriatric Depression Scale, HADS = Hospital Anxiety and Depression Scale.

Chi-squared test used for categorical variables; t-tests for continuous variables.

Data on continuous variables are presented as M ± SD; data on categorical variables are presented as number of occurrences.

A group comparison of all neuropsychological test variables yielded significantly lower mean scores for Group 2 relative to Group 1 on almost all scores (p-values ranging from < .001 to .021; see Table 2), with moderate effect sizes. Only one very simple test variable, MCG Complex Figure recognition (Cohen’s d = 0.16, p > .05), did not differentiate the groups (the small effect size suggested no meaningful difference). Overall, these findings indicate that, at baseline, Group 2 demonstrated poorer performance than Group 1, suggesting an already existing subtle decline across tests in Group 2 several years before the follow-up assessment.

Table 2. Group comparisons (means and SDs) on neuropsychological test variables and between-group effect sizes (Cohen’s d)

T-test used for all continuous variables.

Association between demographic factors and neuropsychological test performance

Subsequent analyses were based only on Group 1. The effects of age, education, and sex were investigated for each test, and significant linear associations were found, as presented in Table 3. As indicated by the adjusted R², these demographic variables explained, on average, about 17% of the variance in the test scores included in the neuropsychological battery, with a range of 4% to 56%. Education was significantly associated with all test scores, with higher education being related to better performance. Age was not significantly related to the CDT, TMT B, the ratio of TMT B to TMT A, vocabulary, comprehension, recitation of months, phonological verbal fluency, visual recognition memory, the motor programing congruent condition, the first learning trial of the GVLT, or two of the GVLT deficit indices (consolidation and retrieval); for the remaining test variables, greater age was related to poorer performance. Sex was significantly related to scores on the GVLT, the MCG (copy and delayed recall, but not immediate recall), the JLO, the CDT, verbal fluency, and Anomalous Sentence Repetition, but not to any other test variables. Specifically, we found a female advantage on GVLT learning and recall scores, the copy condition of the MCG, verbal fluency (both phonological and semantic), and Anomalous Sentence Repetition, but a male advantage on MCG immediate recall percentage, delayed recall, delayed recall percentage, and the JLO. No other significant sex effects were found. The age-squared term was not significant in any of the regression equations and thus was not included in further analyses.

Table 3. Regression-based equations to calculate adjusted scores

All F tests are significant at p < .001. Subtest scores are raw scores.

a Standard error of the estimate.

b Age is in years; sex is coded as 0 = male, 1 = female (male sex is used as the reference); education is coded in years of education.

c β = 0 if p > 0.05.

Calculation of regression-based normative equations

Normative data are presented as regression-based algorithms to adjust test scores for age, education, and sex variables according to the following equation:

$$\text{Test Score}_{\text{predicted}} = \text{constant} + \beta_{\text{age}}(\text{age}) + \beta_{\text{sex}}(\text{sex}) + \beta_{\text{education}}(\text{education})$$

To determine size of deviation from the predicted score, we calculated the following:

$$(\text{Test score}_{\text{actual}} - \text{Test score}_{\text{predicted}})\,/\,\text{SD}_{\text{residual}}$$

Individual equations are listed in Table 3 and are also available in an Excel file for clinical use with individual examinees [available in compressed form (https://www.7-zip.org/) at https://www.psy.auth.gr/faculty/kosmidis/NormsCalculationFINAL.7z]. Based on this procedure, we present two case examples. Case A is a 76-year-old woman with 6 years of education and a diagnosis of dementia; her predicted performance on the GVLT sum of five learning trials can be calculated from the information in Table 3. Specifically, the expected score = 44.135 + (−0.212 × 76) + (7.478 × 1) + (0.752 × 6) = 40.013. Her actual score, however, was 15, clearly below what was predicted based on her demographic characteristics. To determine whether the actual score indicated impairment, we calculated (15 − 40.013) / 10.04 = −2.49, suggesting that, indeed, the difference was significant. Similarly, Case B, a 74-year-old man with 2 years of education and a diagnosis of MCI, had a predicted score = 44.135 + (−0.212 × 74) + (7.478 × 0) + (0.752 × 2) = 29.951; his actual score was also 15, clearly below the predicted value, with (15 − 29.951) / 10.04 = −1.49 placing his performance at the impairment cutoff.
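The two case computations above can be restated as a small helper function; this is merely a convenience sketch using the Table 3 coefficients quoted in the text for the GVLT sum of five learning trials (constant = 44.135, β_age = −0.212, β_sex = 7.478, β_education = 0.752, SD residual = 10.04), and the function name is ours.

```python
# Worked restatement of the two case examples, using the coefficients reported
# in the text for the GVLT sum of five learning trials.
def gvlt_learning_z(actual: float, age: float, sex: int, education: float) -> float:
    """Deviation (in residual-SD units) of an observed GVLT total learning score
    from the demographically predicted score; sex coded 0 = male, 1 = female."""
    predicted = 44.135 + (-0.212 * age) + (7.478 * sex) + (0.752 * education)
    return (actual - predicted) / 10.04

print(round(gvlt_learning_z(15, age=76, sex=1, education=6), 2))  # Case A: -2.49
print(round(gvlt_learning_z(15, age=74, sex=0, education=2), 2))  # Case B: -1.49
```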

Calculation of discrete normative data: raw and standard scores

Discrete normative data, with corresponding raw test scores, means (SDs), and percentiles, are presented in table format in Appendix A. These data are stratified by four age bands (65–69, 70–74, 75–79, and 80–87 years) and three levels of education (1–6, 7–12, and 13+ years). Where sex had an effect on performance, separate tables for men and women were included. All tables include means and standard deviations by cell, percentile rankings, and cutoff scores for easy comparison to a target patient. These tables present norms based on raw data. While for most test variables individual scores corresponded to a specific percentile ranking, on a few variables the same score corresponded to several percentile rankings. For example, for both women and men in some age/education cells, a score of 100% (or 36) on the MCG copy condition, as well as on the Sentence Comprehension items of the BDAE, corresponded with performance at the 25th, 50th, 75th, and 90th percentiles (see Appendix A).
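For readers who wish to reproduce this kind of stratification, the following sketch (with simulated, assumed data rather than the study sample) shows how cell means, SDs, and percentile cutoffs can be tabulated with the age and education bands used in Appendix A.

```python
# Illustrative generation of discrete norms tables: stratify by age and education
# bands and compute cell means, SDs, and selected percentiles, including the
# 7.5th-percentile impairment cutoff (approximately -1.5 SD). Data are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "age": rng.integers(65, 88, 492),
    "education": rng.integers(1, 22, 492),
})
df["score"] = 60 - 0.2 * df["age"] + 0.8 * df["education"] + rng.normal(0, 10, 492)

age_bands = pd.cut(df["age"], bins=[64, 69, 74, 79, 87],
                   labels=["65-69", "70-74", "75-79", "80-87"])
edu_bands = pd.cut(df["education"], bins=[0, 6, 12, 30],
                   labels=["1-6", "7-12", "13+"])

discrete_norms = (
    df.groupby([age_bands, edu_bands], observed=True)["score"]
      .agg(M="mean",
           SD="std",
           p7_5=lambda s: s.quantile(0.075),
           p25=lambda s: s.quantile(0.25),
           p50="median",
           p75=lambda s: s.quantile(0.75))
)
print(discrete_norms.round(1))
```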

Using our previous case examples, Case A’s score of 15 on the GVLT sum of five learning trials for women would be −2.16 SD below the mean for her age and educational-level group (on page 7 of Appendix A, see the eighth column from the left on the relevant table), which is also below the cutoff for determining impaired performance (below the 7.5th percentile). For Case B, his score of 15 on the same variable was also below the 7.5th percentile (see the fifth column from the left on the relevant table) and −2.92 SD below the mean for men in his age group and educational level, thus also impaired.

Standard scores (SS) were calculated by taking the raw score of each variable of interest and transforming it to a common scale (M = 10, SD = 3) using the cumulative frequency distribution. This approach allows for the organization and presentation of normative data and facilitates comparisons between different neuropsychological tests. Conversion of raw scores to standard scores is listed in table format in Appendix B. Again, referring to the previous case examples, for both Case A and Case B, their GVLT sum of learning trials standard score would be below SS = 2 (in the low range).
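One way to implement such a cumulative-frequency conversion is sketched below; this is our illustrative interpretation (a rank-based normalization to M = 10, SD = 3), not necessarily the exact procedure used in the study, and the raw score array is an assumption.

```python
# Hedged sketch: convert raw scores to scaled scores (M = 10, SD = 3) from their
# cumulative proportions in the normative sample.
import numpy as np
from scipy import stats

def scaled_scores(raw_scores: np.ndarray) -> np.ndarray:
    """Map each raw score to a scaled score via its mid-rank cumulative proportion."""
    n = len(raw_scores)
    cum_prop = (stats.rankdata(raw_scores, method="average") - 0.5) / n
    return 10 + 3 * stats.norm.ppf(cum_prop)   # rescale to M = 10, SD = 3

raw_scores = np.random.default_rng(3).normal(40, 10, 492).round()  # simulated scores
print(np.round(scaled_scores(raw_scores)[:5]))
```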

Discussion

In the present study, we found that ensuring that normative samples of older individuals are demonstrably untainted by an early neurodegenerative process yielded stricter criteria: mean values were higher in those demonstrably without dementia at follow-up than in those who, at follow-up, no longer met the criteria for this category. This was the case for almost all neuropsychological test variables, with the exception of one very simple task. More specifically, we found that, at the baseline assessment, Group 1 performed better than Group 2 on almost all tests. This finding justified our efforts to secure such an untainted group as a robust normative standard, as it suggests that individuals in Group 2 were already presenting subtle cognitive decline a few years before their diagnosis of MCI or dementia. We also found an education effect on all test variables, an age effect on some, and a sex effect on a few. Thus, we included these demographic characteristics in the calculation of normative data according to commonly used approaches. Regression-based normative data appeared to be more sensitive to the level of impairment, given demographic factors, than discrete norms. Specifically, the former differentiated the level of performance of two cases (one with diagnosed MCI and at the impairment cutoff, and the other with diagnosed dementia and clearly deficient) with the same score but different ages, sexes, and years of education. In contrast, when these same cases were compared to the discrete norms tables, both fell below the cutoff percentile and in the deficient range based on cell means and SDs. Moreover, standardized scaled scores overestimated the severity of impaired performance, most likely because they were based on the total Group 1 sample and did not consider the effects of age, sex, and education. This highlights the importance of accounting for factors that influence cognitive performance when developing norms, as well as the preferability of regression-based norms.

Older adults present greater variability in their neuropsychological performance than younger cohorts (Duff et al., 2010) and more rapid rates of cognitive decline with aging (Hänninen et al., 1996), thus necessitating normative data adjusted for this population to distinguish normal changes from those reflective of an underlying neurodegenerative disease. Through the procedure described in the present study, we addressed frequent criticisms related to the interpretation of the neuropsychological performance of healthy older individuals, questioning the validity of data that may be, unbeknownst to both experimenter and research participant, tainted by undiagnosed early neurocognitive decline. By studying individuals from a population-based epidemiological study, we ensured our sample’s representativeness of the population. By choosing, among these individuals, those who were considered to be without MCI or dementia at two assessment points several years apart, we confirmed that our sample was not in the early, and as yet undetected, phase of a neurodegenerative process. Thus, our sample provided a unique opportunity to ensure the ex post facto exclusion of preclinical MCI or dementia cases from the normative data within a large, population-based sample of older individuals. Excluding individuals from the final normative sample whose cognitive status no longer justified their characterization as without MCI or dementia at follow-up yielded stricter criteria for the determination of cognitive impairment than is typical in most normative studies.

Additional concerns relative to the accuracy and utility of normative data, especially for older individuals, relate to the method of norm development. Given the more accelerated rate of cognitive decline typically observed among healthy older individuals relative to their younger counterparts, we maximized the sensitivity of our normative data to the effects of demographic characteristics often found to affect neuropsychological performance. Thus, we explored the influence of age, education, and sex on neuropsychological performance on each test variable. Indeed, we found that these characteristics were associated with many neuropsychological variables. Specifically, education was associated with performance across the board, wherein increased education was related to better neuropsychological performance. Age was associated with most variables (verbal and visual memory and learning, visuospatial perception, language, attention/processing speed, and executive functioning), but not all; where an association existed, it was inverse, with increasing age being related to poorer performance. Finally, the association of sex with neuropsychological performance was typical of that in the literature (female advantage on verbal learning and recall, figure copy, attention/processing speed, verbal fluency, and inhibition, and male advantage on visual memory (immediate recall percentage, delayed recall, and delayed recall percentage) and visuospatial perception of orientation). Consequently, these demographic variables were included in the analyses when developing the regression-based normative data and were also used in the discrete norms to break down our sample into five-year age groups, three educational-level groups and sex (where indicated by group differences).

In the course of exploring the optimal procedure for enhancing the utility of normative data, we co-normed a comprehensive battery of commonly used neuropsychological tests based on this sample of older adults without dementia. Such studies are limited in number, especially for this age group (Duff, 2016). For example, a study in New Zealand (Callow et al., 2015) developed normative data for a large sample of individuals aged 45–85 who were tested at two time points; this, however, was limited to the Addenbrooke’s Cognitive Examination-Revised, a brief screening test. Another study, conducted in the US (Roberts et al., 2008), developed normative data for ten commonly used neuropsychological tests, but only for two relatively broad age groups: 70–79 and 80–89 years old. Both studies included some participants who were tested over the telephone, and both produced traditional, discrete norms. In contrast, our co-norming of a comprehensive battery of tests may aid in process-based interpretation of neuropsychological performance, as clinicians and researchers can view and evaluate an individual’s scores on one test or cognitive domain relative to another and, accordingly, make sense of the individual’s overall pattern of performance and its consistency (or lack thereof) with expected patterns of cognitive strengths and weaknesses. Thus, the present neuropsychological battery may be used flexibly in clinical practice.

Our study has some limitations. The variance explained by our regression models varied, averaging about 17%. This relatively modest value indicates that other demographic or non-demographic variables might also be potential predictors of test performance. Another issue concerning both types of norms relates to the small number of individuals in the normative group who were 80 years of age or older, a consequence of our sample’s age distribution. For the regression-based norms, this would reduce precision; for the discrete norms, it may lead to overestimating or underestimating cognitive decline among the very old. Moreover, despite our large sample, on several tests the discrete norms yielded scores that were equated to more than one percentile level. This raises concerns regarding the interpretation of performance levels, as well as the sensitivity of the particular variables in differentiating abilities. Finally, we compared types of norms using only two individual case examples and one specific neuropsychological test variable, thus limiting our ability to generalize to other test variables. A larger sample and an extensive list of test variables should be used to confirm the preferability of one type of norms over the other for each test variable, as these may vary in either a systematic or a nonsystematic way.

The current study presents a method for ensuring the relative purity of normative data for a comprehensive, flexible neuropsychological test battery composed of well-known tests applied in everyday clinical practice and research studies, specifically for older individuals. This has the potential to increase accuracy in identifying individuals who are in the early, as yet undiagnosed, stage of a neurodegenerative process. While our data may be specific to Greek-speaking populations, they provide a template for similar future studies in other languages and cultures. Additionally, they may aid cross-national comparisons in international collaborations. In any event, the investigation and refinement of approaches that augment the accuracy of neuropsychological assessment for the aging population, such as differentiating the effects of normal aging from early neurodegenerative processes, is certainly a necessary and worthwhile endeavor.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/S1355617723011499.

Acknowledgements

We would like to acknowledge the contributions of Dr Eleni Aretouli.

Funding statement

The present study was funded by the following (awarded to NS): IIRG-09-133014 from the Alzheimer’s Association and 189 10,276/8/9/2011 from the ESPA-EU program Excellence Grant which is co-funded by the European Social Fund and Greek National resources and ΔY2β/oικ.51657/14.4.2009 from the Ministry for Health and Social Solidarity (Greece). Additional funding (awarded to MHK): Reinforcement of Research Activities at AUTh, Action D: Reinforcement of Research Activities in the Humanities, Research Committee, Aristotle University of Thessaloniki.

Competing interests

None.

References

Alden, E. C., Lundt, E. S., Twohy, E. L., Christianson, T. J., Kremers, W. K., Machulda, M. M., Jack, C. R. Jr, Knopman, D. S., Mielke, M. M., Petersen, R. C., & Stricker, N. H. (2022). Mayo normative studies: A conditional normative model for longitudinal change on the Auditory Verbal Learning Test and preliminary validation in preclinical Alzheimer’s disease. Alzheimer’s & Dementia (Amsterdam, Netherlands), 14(1), e12325. https://doi.org/10.1002/dad2.12325
American Psychiatric Association (2000). Diagnostic and statistical manual of mental disorders: DSM-IV-TR. American Psychiatric Association.
Benton, A. L., Hamsher, K. D., Varney, N., & Spreen, O. (1983). Contributions to neuropsychological assessment: A clinical manual. Oxford University Press.
Bougea, A., Maraki, M. I., Yannakoulia, M., Stamelou, M., Xiromerisiou, G., Kosmidis, M. H., Ntanasi, E., Dardiotis, E., Hadjigeorgiou, G., Sakka, P., Anastasiou, C. A., Stefanis, L., & Scarmeas, N. (2019). Higher probability of prodromal Parkinson’s disease is related to lower cognitive performance. Neurology, 92(19), e2261–e2272. https://doi.org/10.1212/WNL.0000000000007453
Bozikas, V. P., Giazkoulidou, A., Hatzigeorgiadou, M., Karavatos, A., & Kosmidis, M. H. (2008). Do age and education contribute to performance on the clock drawing test? Normative data for the Greek population. Journal of Clinical and Experimental Neuropsychology, 30(2), 199–203. https://doi.org/10.1080/13803390701346113
Brandt, J., & Benedict, R. H. B. (2001). Hopkins Verbal Learning Test-Revised: Professional manual. Psychological Assessment Resources.
Callow, L., Alpass, F., Leathem, J., & Stephens, C. (2015). Normative data for older New Zealanders on the Addenbrooke’s Cognitive Examination-Revised. New Zealand Journal of Psychology, 44(3), 29–41.
Dardiotis, E., Kosmidis, M. H., Yannakoulia, M., Hadjigeorgiou, G. M., & Scarmeas, N. (2014). The Hellenic Longitudinal Investigation of Aging and Diet (HELIAD): Rationale, study design, and cohort description. Neuroepidemiology, 43(1), 9–14. https://doi.org/10.1159/000362723
Duff, K. (2016). Demographically corrected normative data for the Hopkins Verbal Learning Test-Revised and Brief Visuospatial Memory Test-Revised in an elderly sample. Applied Neuropsychology, 23(3), 179–185. https://doi.org/10.2144/000114329
Duff, K., Beglinger, L. J., Moser, D. J., Paulsen, J. S., Schultz, S. K., & Arndt, S. (2010). Predicting cognitive change in older adults: The relative contribution of practice effects. Archives of Clinical Neuropsychology, 25(2), 81–88. https://doi.org/10.1093/arclin/acp105
Duff, K., & Ramezani, A. (2015). Regression-based normative formulae for the Repeatable Battery for the Assessment of Neuropsychological Status for older adults. Archives of Clinical Neuropsychology, 30(7), 600–604. https://doi.org/10.1093/arclin/acv052
Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). “Mini-mental state”: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12(3), 189–198. https://doi.org/10.1016/0022-3956(75)90026-6
Freedman, M., Leach, L., Kaplan, E., Winocur, G., Shulman, K. I., & Delis, D. C. (1994). Clock drawing: A neuropsychological analysis. Oxford University Press.
Giaglis, G., Kyriazidou, S., Paraskevopoulou, E., Tascos, N., & Kosmidis, M. H. (2010). Evaluating premorbid level: Preliminary findings regarding the vulnerability of scores on cognitive measures in patients with MS. Journal of the International Neuropsychological Society, 16(Suppl 1), 159.
Hänninen, T., Koivisto, K., Reinikainen, K. J., Helkala, E.-L., Soininen, H., Mykkänen, L., Laakso, M., & Riekkinen, P. J. (1996). Prevalence of ageing-associated cognitive decline in an elderly population. Age and Ageing, 25(3), 201–205. https://doi.org/10.1093/ageing/25.3.201
Heaton, R. K., Miller, S. W., Taylor, M. J., & Grant, I. (2004). Revised comprehensive norms for an expanded Halstead-Reitan Battery: Demographically adjusted neuropsychological norms for African American and Caucasian adults. Psychological Assessment Resources.
Kornak, J., Fields, J., Kremers, W., Farmer, S., Heuer, H. W., Forsberg, L., Brushaber, D., Rindels, A., Dodge, H., Weintraub, S., … Rosen, H., & the ARTFL/LEFFTDS Consortium (2019). Nonlinear Z-score modeling for improved detection of cognitive abnormality. Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring, 11(1), 797–808. https://doi.org/10.1016/j.dadm.2019.08.003
Kosmidis, M. H., Vlachos, G., Dardiotis, E., Yannakoulia, M., Hadjigeorgiou, G. M., & Scarmeas, N. (2018). Prevalence of dementia and its subtypes in Greece. Alzheimer Disease & Associated Disorders, 32(3), 232–239. https://doi.org/10.1097/WAD.0000000000000249
Kosmidis, M. H., Vlahou, C. H., Panagiotaki, P., & Kiosseoglou, G. (2004). The verbal fluency task in the Greek population: Normative data, and clustering and switching strategies. Journal of the International Neuropsychological Society, 10(2), 164–172. https://doi.org/10.1017/S1355617704102014
Kramer, A. O., Casaletto, K. B., Umlauf, A., Staffaroni, A. M., Fox, E., You, M., & Kramer, J. H. (2020). Robust normative standards for the California Verbal Learning Test (CVLT) ages 60–89: A tool for early detection of memory impairment. The Clinical Neuropsychologist, 34(2), 384–405. https://doi.org/10.1080/13854046.2019.1619838
Lezak, M. D., Howieson, D. B., Bigler, E. D., & Tranel, D. (2012). Neuropsychological assessment (5th ed.). Oxford University Press.
Lezak, M. D., Howieson, D. B., & Loring, D. W. (2004). Neuropsychological assessment (4th ed.). Oxford University Press.
Mandyla, M. A., Yannakoulia, M., Hadjigeorgiou, G., Dardiotis, E., Scarmeas, N., & Kosmidis, M. H. (2022). Identifying appropriate neuropsychological tests for uneducated/illiterate older individuals. Journal of the International Neuropsychological Society, 28(8), 862–875. https://doi.org/10.1017/S1355617721001016
Martin, R., Sawrie, S., Gilliam, F., Mackey, M., Faught, E., Knowlton, R., & Kuzniekcy, R. (2002). Determining reliable cognitive change after epilepsy surgery: Development of reliable change indices and standardized regression-based change norms for the WMS-III and WAIS-III. Epilepsia, 43(12), 1551–1558. https://doi.org/10.1046/j.1528-1157.2002.23602
McCaffrey, R. J., & Westervelt, H. J. (1995). Issues associated with repeated neuropsychological assessments. Neuropsychology Review, 5(3), 203–221. https://doi.org/10.1007/BF02214762
McKhann, G., Drachman, D., Folstein, M., Katzman, R., Price, D., & Stadlan, E. M. (1984). Clinical diagnosis of Alzheimer’s disease: Report of the NINCDS-ADRDA Work Group under the auspices of Department of Health and Human Services Task Force on Alzheimer’s Disease. Neurology, 34(7), 939–944.
Mitrushina, M., Boone, K. B., Razani, J., & D’Elia, L. F. (2005). Handbook of normative data for neuropsychological assessment (2nd ed.). Oxford University Press.
Morris, J. C. (1993). The Clinical Dementia Rating (CDR). Neurology, 43(11), 2412. https://doi.org/10.1212/WNL.43.11.2412-a
Oosterhuis, H. E. M., van der Ark, L. A., & Sijtsma, K. (2016). Sample size requirements for traditional and regression-based norms. Assessment, 23(2), 191–202. https://doi.org/10.1177/1073191115580638
Östberg, P., Fernaeus, S. E., Bogdanović, N., & Wahlund, L. O. (2008). Word sequence production in cognitive decline: Forward ever, backward never. Logopedics Phoniatrics Vocology, 33(3), 126–135. https://doi.org/10.1080/14015430801945794
Parmenter, B. A., Testa, S. M., Schretlen, D. J., Weinstock-Guttman, B., & Benedict, R. H. B. (2010). The utility of regression-based norms in interpreting the Minimal Assessment of Cognitive Function in Multiple Sclerosis (MACFIMS). Journal of the International Neuropsychological Society, 16(1), 6–16. https://doi.org/10.1017/S1355617709990750
Petersen, R. C., & Morris, J. C. (2005). Mild cognitive impairment as a clinical entity and treatment target. Archives of Neurology, 62(7), 1160–1163. https://doi.org/10.1001/archneur.62.7.1160
Roberts, R. O., Geda, Y. E., Knopman, D. S., Cha, R. H., Pankratz, V. S., Boeve, B. F., Ivnik, R. J., Tangalos, E. G., Petersen, R. C., & Rocca, W. A. (2008). The Mayo Clinic Study of Aging: Design and sampling, participation, baseline measures, and sample characteristics. Neuroepidemiology, 30(1), 58–69. https://doi.org/10.1159/000115751
Salthouse, T. A., & Tucker-Drob, E. M. (2008). Implications of short-term retest effects for the interpretation of longitudinal change. Neuropsychology, 22(6), 800–811. https://doi.org/10.1037/a0013091
Spreen, O., & Strauss, E. (1991). A compendium of neuropsychological tests: Administration, norms, and commentary. Oxford University Press.
Strauss, E., Sherman, E. M. S., & Spreen, O. (2006). A compendium of neuropsychological tests: Administration, norms, and commentary (3rd ed.). Oxford University Press.
Tardif, M., Roy, M., Remi, B., Laforce, R., Verret, L., Fortin, M.-P., Houde, M., & Poulin, S. (2013). P4-103: Months backward test as a reliable predictor of cognitive decline in mild Alzheimer’s disease. Alzheimer’s & Dementia, 9(4), P741–P742. https://doi.org/10.1016/j.jalz.2013.05.1493
Temkin, N. R., Heaton, R. K., Grant, I., & Dikmen, S. S. (1999). Detecting significant change in neuropsychological test performance: A comparison of four models. Journal of the International Neuropsychological Society, 5(4), 357–369. https://doi.org/10.1017/S1355617799544068
Tsapkini, K., Vlahou, C. H., & Potagas, C. (2010). Adaptation and validation of standardized aphasia tests in different languages: Lessons from the Boston Diagnostic Aphasia Examination-Short Form in Greek. Behavioural Neurology, 22(3-4), 111–119. https://doi.org/10.3233/BEN-2009-0256
Vlachos, G. S., Kosmidis, M. H., Yannakoulia, M., Dardiotis, E., Hadjigeorgiou, G., Sakka, P., Ntanasi, E., Stefanis, L., & Scarmeas, N. (2020). Prevalence of mild cognitive impairment in the elderly population in Greece: Results from the HELIAD study. Alzheimer Disease & Associated Disorders, 34(2), 156–162. https://doi.org/10.1097/WAD.0000000000000361
Vlahou, C. H., & Kosmidis, M. H. (2002). The Greek Trail Making Test: Preliminary norms for clinical and research use. Psychology: The Journal of the Hellenic Psychological Society (in Greek), 9(3), 336–352.
Vlahou, C. H., Kosmidis, M. H., Dardagani, A., Tsotsi, S., Giannakou, M., Giazkoulidou, A., Zervoudakis, E., & Pontikakis, N. (2013). Development of the Greek Verbal Learning Test: Reliability, construct validity, and normative standards. Archives of Clinical Neuropsychology, 28(1), 52–64. https://doi.org/10.1093/arclin/acs099