
information for practice

news, new scholarship & more from around the world



Inter‐rater reliability of the Conversational Assessment of Neurocognitive Dysfunction

Background

Cognitive assessment through communication has been the focus of recent studies because conventional cognitive tests are often perceived as invasive by older people. Although the Conversational Assessment of Neurocognitive Dysfunction is designed to assess cognitive function non-invasively, its inter-rater reliability remains unclear. The current study investigated the reliability of the Conversational Assessment of Neurocognitive Dysfunction.

Methods

The Conversational Assessment of Neurocognitive Dysfunction was administered by four clinical psychologists, who evaluated 38 older people with and without cognitive dysfunction. One clinical psychologist scored the assessment based on face-to-face conversation with participants, while the other three scored it from audio recordings of those conversations made with a digital voice recorder. All clinical psychologists were blinded to the results of other conventional cognitive tests and to details of participants' daily living activities.

Results

The univariate correlations of the Conversational Assessment of Neurocognitive Dysfunction scores among evaluators ranged from 0.61 to 0.79, all of which were significant (P < 0.001). The intraclass correlation coefficient was 0.64 for agreement (P < 0.001, 95% CI: 0.53–0.79) and 0.67 for consistency (P < 0.001, 95% CI: 0.45–0.77). The Conversational Assessment of Neurocognitive Dysfunction scores of all evaluators were significantly associated with conventional cognitive tests such as the Mini-Mental State Examination (P < 0.001).
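The results report two variants of the intraclass correlation coefficient: one for absolute agreement and one for consistency. To illustrate the distinction (this is not the study's own analysis, and the rating data below are hypothetical), here is a minimal sketch of the standard two-way ICC formulas, where agreement penalizes systematic offsets between raters while consistency does not:

```python
def icc_two_way(scores):
    """Two-way single-measure ICCs for a subjects-by-raters score matrix.

    Returns (agreement, consistency) using the usual ANOVA mean squares:
    rows = subjects, columns = raters.
    """
    n = len(scores)        # number of subjects
    k = len(scores[0])     # number of raters
    grand = sum(x for row in scores for x in row) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_total - ss_rows - ss_cols                    # residual

    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))

    consistency = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)
    agreement = (ms_r - ms_e) / (
        ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n
    )
    return agreement, consistency


# Hypothetical example: rater 2 scores every subject one point higher.
# The raters are perfectly consistent, but do not agree in absolute terms.
shifted = [[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]]
agreement, consistency = icc_two_way(shifted)
print(f"agreement={agreement:.3f}, consistency={consistency:.3f}")
```

With a constant offset between raters, consistency stays at 1.0 while agreement drops, which is why the two coefficients in the study can differ even for the same set of ratings.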

Conclusions

These findings suggest that the Conversational Assessment of Neurocognitive Dysfunction has moderate to good inter-rater reliability and high concurrent validity as a cognitive assessment tool, and that it would be useful in clinical practice.

Read the full article ›

Posted in: Journal Article Abstracts on 07/04/2023 | Link to this post on IFP |


© 1993-2025 Dr. Gary Holden. All rights reserved.
