:: Search published articles ::
Showing 5 results for Rating

Pourya Baghaii Moghadam, Reza Pishghadam
Volume 11, Issue 1 (3-2008)
Abstract

Local independence of test items is an assumption of all Item Response Theory (IRT) models: items in a test should not be related to each other once the measured trait is taken into account. Sharing a common passage, which is prevalent in reading comprehension tests, cloze tests and C-Tests, is a potential source of local item dependence (LID). It is argued in the literature that LID results in biased parameter estimation and affects the unidimensionality of the test. This study examines the effects of violating the local independence assumption on person measures in a C-Test. A C-Test battery comprising four passages, each containing 25 blanks, was analysed twice. First, each gap was treated as an independent item and Rasch's (1960) dichotomous model was employed. In the second analysis, each passage was treated as a super-item and Andrich's (1978) rating scale model was used. For each person, two ability measures were estimated, one from the dichotomous analysis and one from the polytomous analysis. The differences between the two measures, after being brought onto the same scale, are compared and the implications are discussed.
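For orientation, the two models named here have standard forms, stated below in conventional notation for the reader's convenience (the formulas are not quoted from the article). In the dichotomous Rasch model, the probability that person n with ability \theta_n correctly fills gap i of difficulty b_i is

    P(X_{ni} = 1) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}

In Andrich's rating scale model, the probability of score k in {0, ..., m} on super-item (passage) i, with category thresholds \tau_1, ..., \tau_m shared across passages, is

    P(X_{ni} = k) = \frac{\exp \sum_{j=1}^{k} (\theta_n - b_i - \tau_j)}{\sum_{l=0}^{m} \exp \sum_{j=1}^{l} (\theta_n - b_i - \tau_j)}

where the empty sum for k = 0 is taken as zero.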
Alireza Ahmadi
Volume 12, Issue 1 (3-2009)
Abstract

This article investigated and compared the consistency of self- and peer-assessment as alternatives to teacher assessment. Thirty sophomores majoring in TEFL were asked to assess their classmates' as well as their own speaking ability in a conversation class, after being taught how to do so with a rating scale of speaking. They did the rating twice during the term: the first rating was carried out in the 8th and 9th weeks, and the second at the end of the term (weeks 15 and 16). The results indicated that self- and peer-assessments were not significantly related at the end of the term and only loosely, though significantly, related in the middle of the term. Both self- and peer-assessment showed consistency over time; peer-assessment, however, enjoyed higher consistency.
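As a rough illustration of the kind of analysis summarized above (the article does not publish code; all variable names and data values below are hypothetical), the relatedness of self- and peer-ratings at each time point, and their consistency across time, can be checked with Pearson correlations:

from scipy.stats import pearsonr

# Hypothetical per-student speaking scores; invented for illustration only.
self_mid = [14, 12, 16, 11, 15, 13, 17, 12]   # self-ratings, weeks 8-9
peer_mid = [13, 12, 15, 12, 14, 14, 16, 11]   # peer-ratings, weeks 8-9
self_end = [15, 13, 16, 12, 16, 14, 17, 13]   # self-ratings, weeks 15-16
peer_end = [12, 15, 13, 16, 12, 17, 14, 15]   # peer-ratings, weeks 15-16

r_mid, p_mid = pearsonr(self_mid, peer_mid)   # self vs. peer, mid-term
r_end, p_end = pearsonr(self_end, peer_end)   # self vs. peer, end of term
r_self, _ = pearsonr(self_mid, self_end)      # consistency of self-assessment
r_peer, _ = pearsonr(peer_mid, peer_end)      # consistency of peer-assessment

print(f"mid-term self-peer r = {r_mid:.2f} (p = {p_mid:.3f})")
print(f"end-of-term self-peer r = {r_end:.2f} (p = {p_end:.3f})")
print(f"consistency: self r = {r_self:.2f}, peer r = {r_peer:.2f}")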
Parviz Maftoon, Kourosh Akef
Volume 12, Issue 2 (9-2009)
Abstract

The purpose of the present study was to develop appropriate scoring scales for each of the defined stages of the writing process, and to determine to what extent these scales can reliably and validly assess the performance of EFL learners on an academic writing task. Two hundred and two students' writing samples were collected after step-by-step, process-oriented essay writing instruction. Four stages of the writing process were operationally defined: generating ideas (brainstorming), outlining (structuring), drafting, and editing. Each collected writing sample included the scripts a student writer produced at each stage. Through a detailed analysis of the samples by three raters, the features marking strong or weak points in the scripts were identified, and the scripts were categorized into four levels of performance. Descriptive statements, or descriptors, were then written for each identified feature to represent the specified level of performance; these descriptors formed a rating scale for each stage of the writing process. Finally, four rating sub-scales (brainstorming, outlining, drafting, and editing) were designed for the corresponding stages, and the three raters used them to rate the 202 collected writing samples. The resulting scores were subjected to statistical analysis. The high inter-rater reliability estimate (0.895) indicated that the rating scales produce consistent results, and an analysis of variance (ANOVA) showed no significant difference among the ratings of the three raters. Factor analysis suggested that at least three constructs (language knowledge, planning ability, and idea-creation ability) could underlie the variables measured by the rating scale.
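The abstract reports an inter-rater reliability of 0.895 and an ANOVA across the three raters without naming the exact coefficient; the sketch below assumes Cronbach's alpha computed over raters, one common choice, and uses invented scores, so it illustrates the procedure rather than reproducing the study's analysis:

import numpy as np
from scipy.stats import f_oneway

# Simulated stand-in for the 202 writing samples scored by three raters;
# all numbers are invented for illustration.
rng = np.random.default_rng(0)
true_quality = rng.normal(50, 10, size=202)
scores = np.column_stack([true_quality + rng.normal(0, 4, size=202)
                          for _ in range(3)])      # shape: (202 samples, 3 raters)

def cronbach_alpha(ratings):
    # Treat each rater as an "item": alpha = k/(k-1) * (1 - sum(var_i)/var(total)).
    k = ratings.shape[1]
    item_variances = ratings.var(axis=0, ddof=1).sum()
    total_variance = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

alpha = cronbach_alpha(scores)                             # inter-rater reliability
F, p = f_oneway(scores[:, 0], scores[:, 1], scores[:, 2])  # rater main effect
print(f"alpha = {alpha:.3f}, ANOVA across raters: F = {F:.2f}, p = {p:.3f}")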
Parviz Birjandi, Masood Siyyari
Volume 13, Issue 1 (3-2010)
Abstract

Self-assessment and peer-assessment are two means of realizing the goals of educational assessment and learner-centered education. Despite the many arguments in favor of their educational benefits, they have not become common practice in educational settings, mainly because teachers do not trust the pedagogical value and reliability of learners' self- and peer-assessment. With these points in mind, this study investigated the effect of carrying out self- and peer-assessment over time on the paragraph-writing performance and the self- and peer-rating accuracy of a sample of Iranian English-major students. To this end, students in two experimental groups wrote eleven paragraphs over eleven sessions, each of which was then self- or peer-rated. The findings indicated that self- and peer-assessment are indeed effective in improving not only the students' writing performance but also their rating accuracy. A comparison of the two methods showed, however, that peer-assessment was more effective than self-assessment in improving writing performance, while neither method outdid the other in improving rating accuracy.
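The abstract does not define its index of rating accuracy; one common operationalization, sketched below with invented numbers, is the mean absolute deviation of a learner's ratings from a criterion (for example, teacher) rating, where a shrinking deviation across sessions indicates improving accuracy:

def rating_accuracy(learner_ratings, criterion_ratings):
    # Mean absolute deviation from the criterion; lower values = more accurate.
    diffs = [abs(l - c) for l, c in zip(learner_ratings, criterion_ratings)]
    return sum(diffs) / len(diffs)

# Hypothetical scores for one student's eleven paragraphs vs. teacher ratings.
self_scores = [12, 13, 11, 14, 13, 15, 14, 15, 16, 15, 16]
teacher_scores = [10, 12, 11, 13, 13, 14, 14, 15, 15, 15, 16]
print(rating_accuracy(self_scores, teacher_scores))  # about 0.55 for these values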
Volume 18, Issue 2 (9-2015)
Abstract

In this study, the researcher used the many-facet Rasch measurement model (MFRM) to detect two pervasive rater errors among peer-assessors rating EFL essays, and compared the ratings of peer-assessors with those of teacher assessors to gain a clearer picture of peer rating behavior. To that end, a fully crossed design was used in which all peer-assessors rated all the essays written by MA students enrolled in two Advanced Writing classes at two private universities in Iran. The peer-assessors used a 6-point analytic rating scale to evaluate the essays on 15 assessment criteria. The results of the Facets analyses showed that, as a group, the peer-assessors exhibited neither a central tendency effect nor a halo effect; individual peer-assessors, however, showed both effects to varying degrees. Further, the ratings of peer-assessors and those of teacher assessors did not differ significantly.
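In its usual three-facet form for a design like this one (a textbook statement of the model, not quoted from the article), the MFRM expresses the log-odds that essay n receives category k rather than k-1 on criterion i from rater j as

    \ln\left( \frac{P_{nijk}}{P_{nij(k-1)}} \right) = B_n - D_i - C_j - F_k

where B_n is the ability of the essay writer, D_i the difficulty of the assessment criterion, C_j the severity of the peer-assessor, and F_k the step difficulty of scale category k. Within this framework, a central tendency effect appears as overuse of the middle categories of the 6-point scale, and a halo effect as unexpectedly uniform ratings across the 15 criteria.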



Iranian Journal of Applied Linguistics