Interpreting the Validity of a High-Stakes Test in Light of the Argument-Based Framework: Implications for Test Improvement
   
Authors: Darabi Bazvand Ali, Ahmadi Alireza
Source: Journal of Research in Applied Linguistics - 2020 - Volume: 11 - Issue: 1 - Pages: 66-88
Abstract: The validity of large-scale assessments may be compromised, partly due to their content inappropriateness or construct underrepresentation. Few validity studies have focused on such assessments within an argument-based framework. This study analyzed the domain description and evaluation inference of the PhD Entrance Exam of ELT (PEEE) sat by PhD examinees (n = 999) in 2014 in Iran. To track evidence for domain definition, the test content was scrutinized by applied linguistics experts (n = 12). As for the evaluation inference, the reliability and differential item functioning (DIF) of the test were examined. Results indicated that the test is biased because (1) the test tasks are not fully represented in the PhD course objectives, (2) the test is reliable mainly for high-ability test-takers (IRT analysis), and (3) four items are flagged for nonnegligible DIF (logistic regression [LR] analysis). Implications for language testing and assessment are discussed, and some possible suggestions are offered.
Keywords: Argument-Based Validity; Differential Item Functioning (DIF); ELT
Address: University of Human Development, College of Languages, English Department, Iraq; Shiraz University, Department of Foreign Languages and Linguistics, Iran
Email: arahmadi@shirazu.ac.ir
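
Note: As a companion to the abstract's mention of logistic-regression DIF screening, the following is a minimal illustrative sketch of the standard procedure (ability-only model vs. ability + group + interaction, compared with a likelihood-ratio test and a Nagelkerke R-squared effect size). The synthetic data, variable names, and flagging thresholds are assumptions for illustration only and do not reproduce the authors' actual analysis.

# Minimal illustrative sketch (not the authors' code): logistic-regression
# DIF screening of a single dichotomous item. Data here are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 999                                    # examinees, as in the study
ability = rng.normal(size=n)               # proxy for total score (matching criterion)
group = rng.integers(0, 2, size=n)         # 0 = reference group, 1 = focal group
# Synthetic responses to one item, with a small group effect built in.
true_logit = 0.8 * ability + 0.5 * group - 0.2
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

def fit_logit(*cols):
    # Fit a logistic regression of the item response on the given predictors.
    X = sm.add_constant(np.column_stack(cols))
    return sm.Logit(y, X).fit(disp=0)

m_base = fit_logit(ability)                          # ability only
m_full = fit_logit(ability, group, ability * group)  # + group and interaction

def nagelkerke_r2(m):
    cox_snell = 1 - np.exp(2 * (m.llnull - m.llf) / n)
    return cox_snell / (1 - np.exp(2 * m.llnull / n))

lr_chi2 = 2 * (m_full.llf - m_base.llf)              # 2-df likelihood-ratio statistic
delta_r2 = nagelkerke_r2(m_full) - nagelkerke_r2(m_base)

# Common guideline (Jodoin & Gierl): delta R^2 < .035 counts as negligible DIF.
flagged = (lr_chi2 > 5.99) and (delta_r2 >= 0.035)
print(f"LR chi2 = {lr_chi2:.2f}, delta R^2 = {delta_r2:.4f}, flagged: {flagged}")

In practice this screening is run item by item, and an item is reported as showing nonnegligible DIF only when both the significance test and the effect-size criterion are met.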
 
     
   