Reliability of Trachoma Clinical Grading - Assessing Grading of Marginal Cases
   
Author   Rahman S.A., Yu S.N., Amza A., Gebreselassie S., Kadri B., Baido N., Stoller N.E., Sheehan J.P., Porco T.C., Gaynor B.D., Keenan J.D., Lietman T.M.
Source   PLOS Neglected Tropical Diseases - 2014 - Volume: 8 - Issue: 5
Abstract   Background: Clinical examination of trachoma is used to justify intervention in trachoma-endemic regions. Currently, field graders are certified by determining their concordance with experienced graders using the kappa statistic. Unfortunately, trachoma grading can be highly variable, and there are cases where even expert graders disagree (borderline/marginal cases). Prior work has shown that inclusion of borderline cases tends to reduce apparent agreement, as measured by kappa. Here, we confirm those results and assess the performance of trainees on these borderline cases by calculating their reliability error, a measure derived from the decomposition of the Brier score. Methods and Findings: We trained 18 field graders using 200 conjunctival photographs from a community-randomized trial in Niger and assessed inter-grader agreement using kappa as well as reliability error. Three experienced graders scored each case for the presence or absence of trachomatous inflammation - follicular (TF) and trachomatous inflammation - intense (TI). A consensus grade for each case was defined as the one given by a majority of experienced graders. We classified cases into a unanimous subset if all 3 experienced graders gave the same grade. For both TF and TI grades, the mean kappa for trainees was higher on the unanimous subset; inclusion of borderline cases reduced apparent agreement by 15.7% for TF and 12.4% for TI. When we assessed the breakdown of the reliability error, we found that our trainees tended to over-call TF grades and under-call TI grades, especially in borderline cases. Conclusions: The kappa statistic is widely used for certifying trachoma field graders. Exclusion of borderline cases, which even experienced graders disagree on, increases apparent agreement with the kappa statistic. Graders may agree less when exposed to the full spectrum of disease. Reliability error allows for the assessment of these borderline cases and can be used to refine an individual trainee's grading. © 2014 Rahman et al.
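The two quantities the abstract relies on, Cohen's kappa and the reliability term of the Brier score's Murphy decomposition, are standard measures and can be illustrated with a short sketch. The code below is not taken from the paper; it is a minimal illustration assuming binary trainee grades (0 = sign absent, 1 = sign present) compared against a 0/1 consensus grade, and the function and array names are hypothetical.

import numpy as np

def cohens_kappa(rater_a, rater_b):
    # Chance-corrected agreement between two raters giving 0/1 grades.
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    p_obs = np.mean(a == b)                                   # observed agreement
    labels = np.unique(np.concatenate([a, b]))
    p_exp = sum(np.mean(a == c) * np.mean(b == c) for c in labels)  # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

def brier_decomposition(trainee_grades, consensus_grades):
    # Murphy decomposition of the Brier score into reliability,
    # resolution, and uncertainty; 'reliability' is the miscalibration term.
    f = np.asarray(trainee_grades, dtype=float)    # 0/1 grades given by the trainee
    o = np.asarray(consensus_grades, dtype=float)  # 0/1 consensus of experienced graders
    n = len(f)
    base_rate = o.mean()
    reliability = 0.0
    resolution = 0.0
    for level in np.unique(f):                     # group cases by the grade assigned
        mask = f == level
        n_k = mask.sum()
        obs_freq = o[mask].mean()                  # consensus frequency within the group
        reliability += n_k * (level - obs_freq) ** 2
        resolution += n_k * (obs_freq - base_rate) ** 2
    reliability /= n
    resolution /= n
    uncertainty = base_rate * (1.0 - base_rate)
    brier = np.mean((f - o) ** 2)
    return brier, reliability, resolution, uncertainty

Under this decomposition, Brier score = reliability - resolution + uncertainty. Systematic over-calling of TF or under-calling of TI inflates the reliability (miscalibration) term within the corresponding grade group, which is why, as the abstract notes, reliability error can localize a trainee's errors on borderline cases rather than only summarizing overall agreement as kappa does.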
Address   F.I. Proctor Foundation, San Francisco, CA, United States; Programme Nationale des Soins Oculaire, Niamey, Niger; The Carter Center Ethiopia, Addis Ababa, Ethiopia; Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, United States; Department of Epidemiology and Biostatistics, University of California, San Francisco, San Francisco, CA, United States
 
     
   