Paper Information

Document type: Academic journal
Journal: 교과교육학연구, 이화여자대학교 교과교육연구소, Vol. 17, No. 1
Publication year: January 2013
Pages: 199-215 (17 pages)

Abstract

This paper explores the rating performance of high school EFL teachers in national curriculum-specific writing assessment by comparing the rater reliability and ratings of experienced and novice raters. The experienced raters had completed the government-authorized writing test rater certification program and had two or three years of rating experience, whereas the novice raters were new teachers who lacked rater training and rating experience. The two groups of teacher-raters rated 410 samples written by high school students on four assessment aspects: task completion, content, organization, and language use. Results reveal the discernible influence of rater background on ratings. The experienced teacher-raters' rating experience and their understanding of the target test task and the rating rubric, acquired through the rater certification process, resulted in better rater reliability and significantly lower ratings than those of the novice teacher-raters. Variations among assessment aspects were also noted: for example, lower reliability in content and organization, and lower ratings by the experienced teacher-raters in organization and language use. The findings imply that test-takers may receive different ratings depending on who rates their writing; thus, securing credible, experienced raters is essential in a high-stakes writing test.
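
The reliability and severity comparison summarized above can be made concrete with a small illustration. The sketch below is not the study's actual analysis (the abstract does not state which statistic was used); it merely computes a Pearson correlation between two raters' scores as one common inter-rater reliability index and compares the groups' mean ratings, using invented scores for a single assessment aspect.

    # Hypothetical sketch, not the paper's procedure: compare two rater groups
    # on inter-rater reliability (Pearson r) and rating severity (mean score).
    from statistics import mean
    from math import sqrt

    def pearson(x, y):
        """Pearson correlation between two raters' scores on the same samples."""
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Invented scores on one aspect (0-5 scale) for six writing samples.
    experienced_r1 = [3, 2, 4, 3, 2, 3]
    experienced_r2 = [3, 2, 4, 2, 2, 3]
    novice_r1 = [4, 3, 5, 4, 3, 4]
    novice_r2 = [3, 4, 4, 5, 2, 4]

    print("experienced reliability:", round(pearson(experienced_r1, experienced_r2), 2))
    print("novice reliability:", round(pearson(novice_r1, novice_r2), 2))
    print("experienced mean rating:", round(mean(experienced_r1 + experienced_r2), 2))
    print("novice mean rating:", round(mean(novice_r1 + novice_r2), 2))

In this toy setup a higher correlation within the experienced pair and a lower group mean would correspond to the abstract's finding of better reliability and more severe (lower) ratings among certified, experienced raters.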

Table of Contents

No information available.
