:: Search published articles ::
Showing 17 results for Assessment

Alireza Ahmadi,
Volume 12, Issue 1 (3-2009)
Abstract

This article investigated and compared the consistency of self- and peer-assessment as alternatives to teacher assessment. Thirty sophomores majoring in TEFL were asked to assess their classmates’ as well as their own speaking ability in a conversation class. They were taught how to do this using a speaking rating scale. They did the rating twice during the term: the first rating was carried out during the 8th and 9th weeks, and the second at the end of the term (weeks 15 and 16). The results of the study indicated that self- and peer-assessment were not significantly related at the end of the term and only loosely, though significantly, related in the middle of the term. Both self- and peer-assessment were consistent over time; however, peer assessment showed higher consistency.
Parviz Maftoon, Kourosh Akef,
Volume 12, Issue 2 (9-2009)
Abstract

The purpose of the present study was to develop appropriate scoring scales for each of the defined stages of the writing process, and to determine to what extent these scoring scales can reliably and validly assess the performance of EFL learners on an academic writing task. Two hundred and two students’ writing samples were collected after step-by-step, process-oriented essay writing instruction. Four stages of the writing process – generating ideas (brainstorming), outlining (structuring), drafting, and editing – were operationally defined. Each collected writing sample included the student writers’ scripts produced in each stage of the writing process. Through a detailed analysis of the collected writing samples by three raters, the features that highlighted the strong or weak points in the student writers’ samples were identified, and the student writers’ scripts were categorized into four levels of performance. Descriptive statements were then made for each identified feature to represent the specified level of performance. These descriptive statements, or descriptors, formed rating scales for each stage of the writing process. Finally, four rating sub-scales – brainstorming, outlining, drafting, and editing – were designed for the corresponding stages of the writing process. Subsequently, the designed rating scales were used by the three raters to rate the 202 collected writing samples. The scores thus obtained were subjected to statistical analyses. The high inter-rater reliability estimate (0.895) indicated that the rating scales could produce consistent results. An analysis of variance (ANOVA) indicated no significant difference among the ratings assigned by the three raters. Factor analysis suggested that at least three constructs – language knowledge, planning ability, and idea creation ability – could underlie the variables measured by the rating scale.
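The consistency checks described in this abstract can be illustrated with a minimal sketch. This is not the study’s actual analysis: the three raters’ scores are simulated here, and a simple average pairwise Pearson correlation stands in for the (unspecified) inter-rater reliability estimate, alongside a one-way ANOVA across raters.

import numpy as np
from scipy import stats

# Simulated scores from three hypothetical raters over the same 202 writing samples.
rng = np.random.default_rng(1)
true_quality = rng.normal(loc=10, scale=2, size=202)
rater_a = true_quality + rng.normal(scale=0.8, size=202)
rater_b = true_quality + rng.normal(scale=0.8, size=202)
rater_c = true_quality + rng.normal(scale=0.8, size=202)

# Inter-rater consistency: average pairwise Pearson correlation among the raters
# (an illustrative stand-in; the abstract does not say which estimate was used).
pairs = [(rater_a, rater_b), (rater_a, rater_c), (rater_b, rater_c)]
inter_rater = np.mean([stats.pearsonr(x, y)[0] for x, y in pairs])

# One-way ANOVA: do the three raters differ in the average scores they assign?
f_stat, p_value = stats.f_oneway(rater_a, rater_b, rater_c)

print(f"mean pairwise inter-rater r = {inter_rater:.3f}")
print(f"ANOVA across raters: F = {f_stat:.2f}, p = {p_value:.3f}")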
Parviz Birjandi, Masood Siyyari,
Volume 13, Issue 1 (3-2010)
Abstract

Self-assessment and peer-assessment are two means of realizing the goals of educational assessment and learner-centered education. Although there are many arguments in favor of their educational benefits, they have not become common practice in educational settings, mainly because teachers do not trust the pedagogical value and reliability of learners’ self- and peer-assessment. With regard to these points, this study investigated the effect of doing self- and peer-assessment over time on the paragraph writing performance and the self- and peer-rating accuracy of a sample of Iranian English-major students. To do so, eleven paragraphs were written over eleven sessions and then self- or peer-rated by the students in two experimental groups. The findings indicated that self- and peer-assessment are indeed effective in improving not only the writing performance of the students but also their rating accuracy. When the effects of self- and peer-assessment on the participants’ writing performance and rating accuracy were compared, however, peer-assessment turned out to be more effective than self-assessment in improving the students’ writing performance. In addition, neither assessment method outdid the other in improving the students’ rating accuracy.
Reza Pishghadam, Elyas Barabadi,
Volume 15, Issue 1 (3-2012)
Abstract

The main purpose of this study was to construct and validate a computerized version of dynamic assessment (C-DA) and examine its effectiveness in enhancing reading comprehension. Feasibility and concern for the psychometric properties of testing are issues that have limited the use of DA approaches; in this study, C-DA is offered as a solution for overcoming such limitations. To this end, a software package named Computerized Dynamic Reading Test (CDRT) was developed, capable of providing test takers with strategy-based hints. For each test taker, the software assigns two scores: a non-dynamic score, based on the test taker's first try at each item, and a dynamic score, based on the average number of hints they employed. One hundred and four university students took the test. The findings indicated that, while observing the psychometric standards of testing, namely reliability and validity, C-DA was useful both in improving students' reading comprehension ability and in obtaining information about their potential for learning, which goes over and above their initial performance level. While some test takers made the best use of the hints and could enhance their comprehension of the text, others could not use them to their advantage. The information obtained from DA enables teachers to provide students with more individualized and consequently more effective instruction.
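The dual scoring scheme described above can be sketched as follows. This is only an illustration under stated assumptions: the CDRT software’s actual weighting of hints is not specified in the abstract, so a hypothetical fixed penalty per hint is used, and the ItemResult record and score_test function are invented for the example.

from dataclasses import dataclass

@dataclass
class ItemResult:                 # hypothetical record for one test item
    correct_first_try: bool       # answered correctly on the first attempt
    hints_used: int               # strategy-based hints taken before answering

def score_test(items, hint_penalty=0.25):
    # Non-dynamic score: credit only for items answered correctly on the first try.
    non_dynamic = sum(1 for it in items if it.correct_first_try)
    # Dynamic score: partial credit that shrinks with the amount of mediation
    # (hints) the test taker needed; the penalty per hint is an assumption.
    dynamic = sum(max(0.0, 1.0 - hint_penalty * it.hints_used) for it in items)
    return non_dynamic, dynamic

items = [ItemResult(True, 0), ItemResult(False, 1), ItemResult(False, 3)]
print(score_test(items))          # -> (1, 2.0)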
Zia Tajeddin, Mohammad Hossein Keshavarz, Amir Zand-Moghadam,
Volume 15, Issue 2 (9-2012)
Abstract

The aim of the present study was to investigate the effect of task-based language teaching (TBLT) on EFL learners’ pragmatic production, metapragmatic awareness, and pragmatic self-assessment. To this end, 75 homogeneous intermediate EFL learners were randomly assigned to three groups: two experimental groups and one control group. The 27 participants in the pre-task, post-task pragmatic focus group (experimental group one) received pragmatic focus on five speech acts in the pre-task and post-task phases. The 26 participants in the scaffolded while-task group (experimental group two) only received pragmalinguistic and sociopragmatic feedback and scaffolding during task completion. In contrast, the 22 participants in the mainstream task-based group (control group) were not provided with any sort of pragmatic focus. The EFL learners’ pragmatic production, metapragmatic awareness, and pragmatic self-assessment were measured using a written discourse completion task (WDCT), a metapragmatic awareness questionnaire, and a pragmatic self-assessment questionnaire. The findings showed that the three groups enhanced their pragmatic production to almost the same degree by the end of the treatment. Furthermore, the results revealed the development of metapragmatic awareness among the EFL learners in the two experimental groups only. In addition, the two experimental groups managed to develop their pragmatic self-assessment more than the control group. It can therefore be concluded that the use of tasks within the framework of TBLT, with or without pragmatic focus in any of the three phases, helps EFL learners develop pragmatic production, while the development of metapragmatic awareness and pragmatic self-assessment can be attributed to pragmatic focus and feedback.
, ,
Volume 17, Issue 1 (4-2014)
Abstract

This paper reports on a study that investigated the effect of self-assessment on a group of English-as-a-foreign-language (EFL) students’ goal orientation. To this end, 57 EFL students participated in a seven-week course. The participants were divided into an experimental and a control group. At the beginning and at the end of the semester, both groups completed a goal-orientation questionnaire. The participants in the experimental group also completed a bi-weekly self-assessment questionnaire throughout the semester. The data were analyzed using a multivariate analysis of covariance (MANCOVA). The findings revealed that the students’ learning goal orientation improved significantly in the experimental group, suggesting that practicing self-assessment on a formative basis boosts EFL students’ learning goal orientation.

,
Volume 18, Issue 2 (9-2015)
Abstract

In this study, the researcher used the many-facet Rasch measurement model (MFRM) to detect two pervasive rater errors among peer-assessors rating EFL essays. The researcher also compared the ratings of peer-assessors to those of teacher assessors to gain a clearer understanding of peer-assessors’ ratings. To that end, the researcher used a fully crossed design in which all peer-assessors rated all the essays written by MA students enrolled in two Advanced Writing classes at two private universities in Iran. The peer-assessors used a 6-point analytic rating scale to evaluate the essays on 15 assessment criteria. The results of the Facets analyses showed that, as a group, the peer-assessors exhibited neither a central tendency effect nor a halo effect; however, individual peer-assessors showed varying degrees of both effects. Further, the ratings of peer-assessors and those of teacher assessors were not statistically significantly different.
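The two rater effects probed above can be illustrated with a rough descriptive heuristic; this is not the many-facet Rasch (Facets) analysis the study used. In the sketch below, ratings are simulated, a rater’s unusually small score spread stands in for central tendency, and an unusually high average inter-criterion correlation stands in for halo.

import numpy as np

# Hypothetical ratings: 10 peer-assessors x 30 essays x 15 criteria on a 1-6 scale.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 7, size=(10, 30, 15)).astype(float)

for r in range(ratings.shape[0]):
    rater = ratings[r]                       # (essays, criteria) for one rater
    # Central tendency (heuristic): a small spread of ratings around the middle
    # of the 6-point scale suggests over-use of central categories.
    spread = rater.std()
    # Halo (heuristic): a high average correlation among the 15 criteria
    # suggests the rater is not differentiating the assessment criteria.
    corr = np.corrcoef(rater.T)              # 15 x 15 criterion correlations
    mean_r = corr[~np.eye(15, dtype=bool)].mean()
    print(f"rater {r}: rating SD = {spread:.2f}, mean inter-criterion r = {mean_r:.2f}")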

Rajab Esfandiari, Razieh Nouri,
Volume 19, Issue 2 (9-2016)
Abstract

Professionalism requires that language teachers be assessment literate so as to assess students’ performance more effectively. However, assessment literacy (AL) has remained a relatively unexplored area. Given the centrality of AL in educational settings, in the present study, we identified the factors constituting AL among university instructors and examined the ways English Language Instructors (ELIs) and Content Instructors (CIs) differed on AL. A researcher-made, 50-item questionnaire was constructed and administered to both groups: ELIs (N = 155) and CIs (N = 155). A follow-up interview was conducted to validate the findings. IBM SPSS (version 21) was used to analyse the data quantitatively. Results of exploratory factor analysis showed that AL included three factors: theoretical dimension of testing, test construction and analysis, and statistical knowledge. Further, results revealed statistically significant differences between ELIs and CIs in AL. Qualitative results showed that the differences were primarily related to the amount of training in assessment, methods of evaluation, purpose of assessment, and familiarity with psychometric properties of tests. Building on these findings, we discuss implications for teachers’ professional development.
Gholam Reza Kiany, Monireh Norouzi,
Volume 19, Issue 2 (9-2016)
Abstract

Performance assessment is increasingly considered a key concept in teacher education programs worldwide. Accordingly, in Iran, a national assessment system was proposed by Farhangian University to assess the professional competencies of its ELT graduates. Concerns regarding the validity and authenticity of traditional measures of teachers' competencies motivated us to devise a localized performance assessment scheme. The present study therefore aimed to develop a performance assessment scheme to be used as a benchmark for assessing the professional competencies of ELT graduates of this university. To this end, three assessment tasks and rating scales were developed, piloted, and administered. Next, Haertel's participatory approach was employed to set passing standards for the assessment tasks as well as the whole assessment scheme. Analysis of the data revealed inter-rater and intra-rater reliability coefficients of 0.85 and 0.89, respectively. The validity of the assessment scheme was also confirmed by experts' judgments, based to a large extent on the correspondence between target-domain and test-domain skills. Based on the results, the proposed assessment scheme proved more efficient and reliable than traditional tests with regard to the following dimensions: a) higher degrees of reliability and validity of the assessment scheme aimed at the improvement of licensure and program development; b) stronger evidence for inter-/intra-rater reliability and consistency of scoring; and c) an optimized and systematic procedure for setting passing standards based on the consensus of experts' judgments. It is believed that further development of the proposed assessment scheme will unlock its potential to be used as a large-scale teacher assessment model for Farhangian University.
Zohreh Zafarani, Parviz Maftoon,
Volume 19, Issue 2 (9-2016)
Abstract

This study investigates the effect of dynamic assessment (DA) on L2 writing achievement when applied via blogging as a Web 2.0 tool, as well as which pattern of interaction is more conducive to learning in such an environment. The results indicate that using weblogs to provide mediation contributes to the enhancement of overall writing performance, vocabulary and syntactic complexity, and the quantity of information presented in a single paragraph. That is to say, DA procedures are applicable via Web 2.0 tools and are advantageous to L2 learners’ writing, suggesting that L2 practitioners and instructors should actively consider integrating Web 2.0 technology into the L2 education system using DA. Moreover, the collaborative pattern of interaction, as compared to the expert/novice, dominant/passive, and dominant/dominant patterns, is found to be more conducive to fostering writing achievement in the asynchronous computer-mediated communication environment.
Although the use of verbal protocols is growing in oral assessment, research on raters’ verbal protocols is rather rare, and the few existing studies did not use a mixed-methods design. Therefore, this study investigated the possible impacts of rater training on novice and experienced raters’ application of a specified set of standards in rating. To meet this objective, the study made use of verbal protocols produced by 20 raters who scored 300 test takers’ oral performances, and the data were analyzed both qualitatively and quantitatively. The outcomes demonstrated that, through the training program, the raters were able to concentrate more on linguistic, discourse, and phonological features; as a result, their agreement increased, particularly among the inexperienced raters. The analysis of verbal protocols also revealed that training raters in how to apply a well-defined rating scale can foster its valid and reliable use. Different groups of raters approach the task of rating in different ways, which cannot be explored through purely statistical analysis; thus, think-aloud verbal protocols can shed light on the less visible sides of the issue and add to the validity of oral language assessment. Moreover, since the results of this study showed that inexperienced raters can produce protocols of higher quality and quantity in the use of macro and micro strategies to evaluate test takers’ performances, there is no evidence on the basis of which decision makers should exclude inexperienced raters solely because of their lack of experience.

Marzieh Souzandehfar,
Volume 21, Issue 1 (4-2018)
Abstract

For the first time, this study combined models and principles of authentic assessment from two parallel fields, applied linguistics and general education, to investigate the authenticity of the TOEFL iBT speaking module. The study consisted of two major parts: task analysis and task survey. Using Bachman and Palmer’s (1996) definition of authenticity, the task analysis examined the degree of correspondence between the characteristics of the speaking module tasks in the TOEFL iBT test and those of target language use (TLU) tasks. In the task survey, a Likert-scale questionnaire of authenticity was developed by the researcher based on Herrington and Herrington’s (1998, 2006) four criteria of authentic assessment. The questionnaire was emailed to 120 subjects who had already taken the test in order to elicit their attitudes towards the degree of authenticity of the speaking section tasks. The results of the task analysis revealed a limited correspondence between the characteristics of the test tasks and those of the TLU tasks. However, the results of the task survey indicated that, except for one factor (indicators), most of the test takers had a positive view toward the authenticity of the speaking module tasks in terms of the three other factors (context, student factor, task factor).

Shohreh Esfandiari, Kobra Tavassoli,
Volume 22, Issue 2 (9-2019)
Abstract

This study investigated the comparative effect of using self-assessment vs. peer-assessment on young EFL learners’ performance on selective and productive reading tasks. To do so, 56 young learners were selected from among 70 students in four intact classes based on their performance on the A1 Movers Test. The participants were then randomly divided into two groups, self-assessment and peer-assessment. The reading section of a second A1 Movers Test was adapted into a reading test containing 20 selective and 20 productive items, which was used as the pretest and posttest; this adapted test was piloted and its psychometric characteristics were checked. In the self-assessment group, the learners assessed their own performance after each reading task, while in the peer-assessment group, the participants checked their peers’ performance in pairs. The data were analyzed through repeated-measures two-way ANOVA and MANOVA. The findings indicated that both self-assessment and peer-assessment are effective in improving young EFL learners’ performance on selective and productive reading tasks. Further, neither assessment method outdid the other in improving students’ performance on either task type. These findings have practical implications for EFL teachers and materials developers, who can use both assessment methods to encourage learners to perform better on reading tasks.

Masoomeh Taghizadeh, Golnar Mazdayasna, Fatemeh Mahdavirad,
Volume 23, Issue 2 (9-2020)
Abstract

In the educational setting of Iran, language assessment literacy (LAL) is still an underexplored issue. This paper investigated the development of LAL among EFL students taking a language assessment course at state universities in Iran. The three components of LAL (i.e., knowledge, skills, and principles) were the focus of the inquiry. To collect the required data, a questionnaire comprising 83 Likert items and a set of open-ended questions was developed, and responses from 92 course instructors were collected. The teaching and assessment practices of two course instructors were also observed throughout an academic semester. SPSS (version 26) was used to analyze the data. Findings revealed that these courses mainly focused on knowledge and skills, overlooking the principles of assessment. Adherence to traditional assessment approaches, use of inappropriate teaching materials, and lack of practical work in assessment also characterized the investigated courses. The paper concludes with suggestions for better designing language assessment courses to increase the assessment literacy of English graduates, who will probably enter teaching contexts after graduation.
Seyyed Mahdi Modarres Mosadegh, Mohammad Rahimi,
Volume 24, Issue 1 (3-2021)
Abstract

IELTS preparation courses have gained significant popularity in Iran in the past decade. Although teachers in this exam-oriented context have started to use formative assessment to improve their writing instruction, their knowledge and beliefs about assessment for learning remain largely unknown. This mixed-methods study investigated Iranian IELTS teachers’ beliefs and knowledge about four main aspects of formative assessment of writing in preparation courses for IELTS Writing Task 2. Thirty-nine IELTS teachers answered a 23-item questionnaire focusing on four areas: feedback, self-assessment, peer-assessment, and using assessment results to plan day-to-day classes, indicating how frequently they use such techniques. In the next stage, six of the teachers sat for an interview to explain their reasons for using or not using such techniques. The results showed that the teachers had good feedback literacy and made use of some self-assessment techniques, such as rubric orientation, but did not value or know enough about how to involve their students in their own learning process. The teachers seemed to overestimate their role in their students’ learning while considering the students somewhat incapable of monitoring their own progress and achievement, which is a crucial aspect of formative assessment. These findings have implications for teacher professional development and for further formative assessment programs to be conducted in Iran.
 
Natasha Pourdana, Payam Nour,
Volume 26, Issue 1 (3-2023)
Abstract

Given inconclusive evidence for the differential impacts of portfolio assessment (PA) on genre-based writing improvement and learner engagement, this study examined 46 EFL undergraduates’ descriptive and narrative writing performances in a 12-week PA design. Teacher feedback points were collected from consecutive formative assessments of the students’ descriptive and narrative writing according to the genre-specific indicators in the West Virginia Department of Education descriptive writing rubric and the Smarter Balanced narrative writing rubric, respectively. The statistical results showed a significant impact of PA on improving accurate word choice and grammar, development, and organization of ideas in the students’ session-wise descriptive writing, with no sign of improvement in their performance on the post-test descriptive writing. Further, a positive impact of PA was supported by improvement in elaboration of narrative, language and vocabulary, organization, and conventions in the students’ session-wise narrative writing, as well as in their performance on the post-test narrative writing. Qualitative data on students’ engagement in PA were collected through inductive content analysis of their reflective journals. Students’ self-reports were schematized, and their level of engagement was interpreted in terms of their approval of the usefulness and novelty of PA, the frequent mismatch between student self-assessment and teacher feedback in both quality and quantity, the sensitivity of teacher feedback to some writing features over others, the applicability of teacher feedback to the revision process, and an overall positive perception of writing improvement.

Kobra Tavassoli, Marjan Oskouiefar, Masoumeh Ghamoushi,
Volume 26, Issue 2 (9-2023)
Abstract

This study aimed to investigate the impact of mobile-assisted learning-oriented assessment (LOA) on the writing ability of English as a Foreign Language (EFL) learners. A total of 60 intermediate Iranian EFL learners were selected through convenience sampling and divided randomly into two groups: control and experimental. Both groups completed pretests and posttests, and the experimental group received nine 90-minute sessions focused on teaching descriptive essay writing using LOA syllabi and mobile applications related to the tasks. The control group followed a traditional writing syllabus without any LOA-related treatments. Both groups used the Adobe Connect mobile application for their online classes. Two open-ended questions were administered to the experimental group at the beginning and end of the course to measure their attitudes toward mobile-assisted language learning (MALL). The data were analyzed using a repeated-measures two-way ANOVA, revealing that mobile-assisted LOA significantly improved the EFL learners’ writing ability. The results of the two open-ended questions indicated that the learners had a positive attitude toward MALL in general but a somewhat negative attitude toward online classes. The findings have important implications for teachers, materials developers, and teacher educators.


Iranian Journal of Applied Linguistics