OECD Reviews of Evaluation and Assessment in Education: Student Assessment in Turkey
Chapter 4. Ensuring national examinations and assessments support learning
Abstract
In Turkey, the national examinations at the end of lower and upper secondary provide an important selection function for a limited number of prestigious high school and bachelor’s programmes. While the examinations fulfil this function transparently and provide results that are trusted nationally, their educational value could be enhanced. This chapter provides suggestions for incremental changes to the national examinations, so that over time they better encourage students to develop the deeper knowledge and skills upon which the national curriculum is based. It also provides suggestions for how the country’s new national assessment, ABİDE, can be developed to best support improved learning outcomes and enhanced assessment literacy among teachers.
Introduction
Turkey’s central examinations at the end of Grades 8 and 12 are designed to place students, based on ability, in different upper secondary and tertiary programmes and institutions. However, differences in the quality of learning options lead to a significant mismatch between the large number of students who want to attend the best high schools and bachelor’s degree programmes and the places available. The pressure on high school and tertiary places results in teachers and students devoting considerable time to preparing for the central examinations. Since the examinations are based primarily on multiple-choice items that assess knowledge recall in discrete domains, this results in a narrowing of student learning and less attention to the acquisition of more complex competencies. Developing students’ ability to draw on their knowledge and skills across different subject areas to demonstrate competencies like effective communication, critical thinking and problem solving is central to Turkey’s curriculum and the country’s development as a modern knowledge economy.
In recent years, and most recently during the course of this review1, Turkey has sought to reform its examinations to reduce some of their distorting consequences for teaching and learning. This chapter provides further recommendations for the design and questions used in the examinations so that selection into upper secondary and tertiary education is as objective and fair as possible in the current context. It also sets out options to better align examinations with national goals for universal completion of upper secondary, notably through the introduction of an examination that helps certify student learning at the end of compulsory education by including some externally examined components.
This chapter also provides recommendations on how Turkey might address the disparities between the demand for and supply of quality upper secondary and tertiary pathways, which put significant pressure on the education system. Addressing these disparities would help to create a more inclusive education system that offers students a range of quality learning paths reflecting their interests and strengths, as well as national skills needs. The chapter also looks at how Turkey can make better use of learning data to reduce inequities in school quality and give students a fairer chance – and more genuine choice – in accessing different pathways. Here, an important measure will be fully developing and implementing the national assessment, ABİDE, to provide reliable information on learning at the system, school and student levels to support more equal standards. New initiatives launched in association with Turkey’s new education vision for 2023 – namely, the Student Learning Achievement Monitoring assessment, the Turkish Language Skills Study and the Common Examinations project2 – also hold promise in this regard.
Context and main features of examinations and national assessment
Responsibilities for examinations and assessment
Two organisations are responsible for central examinations and assessments: the General Directorate of Measurement, Evaluation and Examination Services in the Ministry of National Education (MoNE); and the Measurement, Selection and Placement Centre (ÖSYM), a body outside MoNE (see Figure 1.1, Chapter 1).
The General Directorate of Measurement, Evaluation and Examination Services is supporting national assessment capacity
In 2014, the Directorate of Measurement, Evaluation and Examination Services became a separate General Directorate in the ministry, having previously been a unit within a wider directorate. With this structural change, the directorate has taken on a greater role in the analysis and use of assessment data across the education system in Turkey. The directorate continues to be responsible for central examinations, notably at the end of Grade 8. It now also undertakes projects to develop assessment capacity and instruments. This has included developing: a National Assessment of Student Learning – Akademik Becerilerin İzlenmesi ve Değerlendirilmesi (ABİDE); a project to enhance teachers’ capacity for classroom assessment; and the establishment and co‑ordination of pilot assessment centres in all provincial directorates to help improve provincial directorates’ and schools’ assessment capacity. The directorate has also recently introduced Student Learning Achievement Monitoring assessments, and conducts other examinations for selection and promotion for some public professions and private bodies.
The central examination at the end of compulsory education is the responsibility of the ÖSYM, which is separate from the ministry
The ÖSYM is responsible for designing, administering and overseeing the marking of the university placement examination in Grade 12 and placing students in tertiary programmes. The Placement Centre is separate from the ministry but reports to the Minister of National Education. The centre is also responsible for the automated university placement system that uses student scores from the university placement examination to allocate students to tertiary education institutions and programmes. Previous OECD analysis highlighted the location of responsibility for the university placement examination in a body that is largely independent of the ministry (OECD, 2007[1]). Many steps are taken to ensure alignment between the school curriculum and the examination at the end of upper secondary, including protocols signed between the Placement Centre and the ministry, and clear examination specifications based on the curriculum. However, the separation of the institution responsible for the examination from the ministry can create a risk of misalignment. The location of responsibility for the university placement examination outside the ministry is all the more notable as this is currently the only national standardised examination at the end of secondary education. There is no examination in Turkey to certify learning on completion of compulsory schooling, such as exists in most other OECD countries.
National examinations
There have been multiple reforms to the examination at the end of Grade 8 for high school placement
Turkey has historically had some form of examination at the end of Grade 8 to determine student placement into the country’s different types of high school. In recent decades, the country has implemented reforms to the examination and the placement system for high schools motivated by a desire to create a fair and transparent system for student selection, while also helping to reduce disparities in performance and quality across school types.
Prior to 2008, a centralised examination at the end of Grade 8 was used for high school placement. This was later changed to students taking centralised examinations at the end of Grades 6, 7 and 8. However, this led to constant pressure on students throughout lower secondary, and the system of a single centralised examination at the end of Grade 8 was reinstated. In 2013, a new student placement system, Temel Öğretimden Ortaöğretime Geçiş Sistemi ‒ Transition from Elementary Schools to Secondary Schools Exam (TEOG), was introduced and was still in place at the time of the OECD review team’s mission to Turkey in May 2017. Under the TEOG, a single, aggregate score was calculated for each student based on two centralised examinations in Grade 8 set by the General Directorate of Measurement, Evaluation and Examination Services (MEES) and their marks from classroom assessments. The score was used to place students by automatically matching them to high school places based on their TEOG score and preferences (see Table 4.1). The system was perceived to be fair and transparent, with public confidence in its integrity.
Table 4.1. Recent changes to the Grade 8 examination
| | TEOG, 2013-17 | New examination for selective high schools, 2018 |
|---|---|---|
| When | Two exam sessions: 1st semester (November) and 2nd semester (April). | One exam session at the end of the school year (June). |
| Purpose | Determine placement in all high schools. | Determine placement in a minority of the most prestigious high schools and programmes. |
| Eligibility | Compulsory for all Grade 8 students. | Optional for all Grade 8 students. |
| Components | Two centralised examinations (70% of final mark). Classroom assessment marks from Grades 6, 7 and 8 (30% of final mark). | One centralised examination across two sessions. |
| Content assessed | National curriculum. | National curriculum. |
| Subjects covered | Examinations: Turkish, mathematics, sciences, culture of religion and knowledge of ethics, history of the Turkish Republic, and a foreign language. Subjects are weighted to reflect their relative weight in the timetable. Classroom assessments: all subjects. | One “verbal” examination: Turkish, culture of religion and knowledge of ethics, history of the Turkish Republic, and a foreign language. One “numeric” examination: mathematics and sciences. Turkish, mathematics and sciences carry a weighted coefficient of 4; history of the Turkish Republic, religion and ethics, and a foreign language carry a coefficient of 1. |
| Format | Examinations: 20 multiple-choice items (4 options, 1 correct answer) in 40 minutes. Classroom assessments: determined by the teacher within the scope of national regulations. | Two booklets with multiple-choice questions. Verbal examination: 50 questions in 75 minutes. Numeric examination: 40 questions in 60 minutes. |
| Item development | Teachers engaged on short-term contracts. | Teachers engaged on short-term contracts. |
| Marking | Students record their answers on an answer sheet that is scanned and processed using optical mark reading software. Incorrect and missing responses are scored as zero, i.e. there is no penalty for guessing. | Students record their answers on an answer sheet that is scanned and processed using optical mark reading software. It has not yet been decided whether there will be a penalty for guessing. |
| Grading | Examination: a student’s scores across the 2 centralised examinations are used to calculate an average mark out of 700. Classroom assessment: students score a maximum of 100 marks for each grade, based on their average from all subjects. An average mark out of 100 is calculated on the basis of their scores from Grades 6, 7 and 8. Final TEOG score: a student’s marks from the examinations and classroom assessments are used to calculate an aggregate mark with a maximum of 500. There is no minimum mark required to “pass”. | A student’s raw score is converted to a standard score for placement. There is no minimum mark required to “pass”. |
However, the TEOG also had a series of negative consequences for teaching, learning and equity. It created significant competition for school places, with students and teachers devoting a great deal of time to examination preparation rather than to the development of the broader competencies emphasised in the curriculum. It also reinforced disparities in educational access, with a disproportionate share of students from higher socio-economic groups attending the most selective schools. The few students who did not achieve a high enough mark in the TEOG were unable to gain a high school place and had to continue their compulsory education through distance learning in an open high school. These consequences informed the decision to end the TEOG in 2017.
Since September 2018, the vast majority of Turkish students starting high school have been placed in a school within their local area
Under the new placement system, Grade 8 students select five schools in their local area that they wish to attend. A centralised and automated placement system operated by the General Directorate of Measurement, Evaluation and Examination Services places students according to their preferences and geographic proximity. Given that many students want to attend the “best” schools, i.e. those with previously high TEOG entrance scores, it is likely that there will not be enough places in a student’s preferred schools. In this case, the directorate applies oversubscription criteria to determine the student’s placement. Factors likely to be included in the criteria are: school proximity; having sisters or brothers at the same school, or having already attended the school; the student’s previous academic success; attendance; and date of birth (with priority given to younger students).
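In mechanism-design terms, oversubscription criteria of this kind amount to lexicographic tie-breaking: applicants to a full school are ordered by the first criterion, with ties broken by the second, and so on. The sketch below illustrates this in Python. It is purely illustrative: the field names, the exact ordering of the criteria and their direction are assumptions for the purpose of the example, not the ministry's published rules.

```python
from dataclasses import dataclass

# Illustrative sketch of lexicographic oversubscription tie-breaking.
# Field names and criterion ordering are assumed, not official.

@dataclass
class Applicant:
    name: str
    distance_km: float      # proximity to the school (closer is better)
    has_sibling: bool       # sibling at, or prior attendance of, the school
    gpa: float              # previous academic success (higher is better)
    attendance_rate: float  # share of days attended (higher is better)
    birth_year: int         # priority to younger students (later year first)

def rank_applicants(applicants, capacity):
    """Order applicants lexicographically by the oversubscription
    criteria and admit up to `capacity` of them."""
    ordered = sorted(
        applicants,
        key=lambda a: (
            a.distance_km,        # 1. closer first
            not a.has_sibling,    # 2. sibling / prior attendance first
            -a.gpa,               # 3. stronger academic record first
            -a.attendance_rate,   # 4. better attendance first
            -a.birth_year,        # 5. younger (later birth year) first
        ),
    )
    return ordered[:capacity]

applicants = [
    Applicant("A", 2.0, False, 85.0, 0.97, 2004),
    Applicant("B", 2.0, True, 80.0, 0.95, 2004),
    Applicant("C", 1.0, False, 70.0, 0.90, 2005),
]
admitted = rank_applicants(applicants, capacity=2)
print([a.name for a in admitted])  # ['C', 'B']
```

Because Python's `sorted` compares tuples element by element, each criterion only comes into play when all earlier criteria are tied, which is exactly the behaviour a transparent oversubscription rule needs.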
A minority of school places are determined by a centralised examination
As of September 2018, student placement in the most prestigious schools and programmes (approximately 10% of all high school places) is determined by student performance on a centralised examination set by the General Directorate of Measurement, Evaluation and Examination Services. The selective school places are distributed more or less equally across the country and are chosen by the provincial education directorates according to criteria set by the ministry. They include schools and programmes in high demand, where students had to achieve high TEOG scores to gain entry in the past, as well as a range of different schools and programmes, including some competitive vocational and technical programmes like computing and mechanics. The entrance examination is similar to the previous TEOG examination in design and assesses similar content (see Table 4.1). Sample questions suggest that it also includes more questions set in real-world contexts designed to assess higher-order skills.
The examination at the end of Grade 12 serves as the gatekeeper for tertiary education institutions and programmes
Grade 12 students, as well as those who have previously completed compulsory education, are eligible to take the university placement examination. This examination was also reformed during the course of this review3, although there is significant continuity with the previous system (see Table 4.2). In both its previous and new forms, the examination has two stages. The first stage examination (Yükseköğretime Geçiş Sınavı, YGS) has been replaced by the Basic Proficiency Test (Temel Yeterlilik Testi, TYT). It comprises four multiple-choice tests in Turkish, mathematics, social sciences and science that students take in one sitting at the end of the school year. Students must achieve a minimum score of 150 to be placed on a short course tertiary programme and 180 to be placed on a four-year bachelor’s programme.
Table 4.2. Changes to the Grade 12 examinations
| | Student Selection and Placement System (ÖSYS), 2017 | Higher Education Institutions Examination, 2018 |
|---|---|---|
| When | 1st stage YGS in spring semester. 2nd stage LYS in June (end of school year). | 1st stage TYT and 2nd stage AYT taken over 1 weekend in June (end of school year). |
| Purpose | 1st stage: determines student eligibility for tertiary programmes. Students must meet a minimum threshold to progress to the 2nd stage. 2nd stage: determines placement on a bachelor’s programme and institution. Students who wish to study a bachelor’s programme with a foreign language component are also required to take the foreign language test. | Remains the same. |
| Eligibility | Optional for students completing compulsory education. No limits on repetition. | Remains the same. |
| Components | Two centralised examinations taken separately. Classroom assessment marks from Grades 9 to 12. | Remains the same. |
| Content assessed | 1st stage: national curriculum in Grades 9 and 10. 2nd stage: full high school curriculum. | Remains the same. |
| Subjects covered | 1st stage (YGS): Turkish (40 questions), basic mathematics (40 questions), science (40 questions) and social sciences and liberal arts (40 questions). 2nd stage (LYS): 5 tests in key subject areas corresponding to various academic profiles: Turkish language and literature; mathematics; sciences; social sciences; foreign languages. Students choose which tests to take, depending on the bachelor’s programme they wish to study. Foreign language test. | 1st stage (TYT): Turkish (40 questions); basic mathematics (40 questions); sciences (20 questions); social sciences and liberal arts (20 questions). 2nd stage (AYT): 4 tests in key subject areas corresponding to various academic profiles: Turkish language and literature and social sciences (40 questions); mathematics (40 questions); social sciences (40 questions); and sciences (40 questions). Students take 2, 3 or 4 tests depending on the score type they wish to calculate and the tertiary education programme they wish to study. Foreign language test. |
| Format | Predominantly multiple choice (5 options, 1 correct answer). YGS: 160 items to be answered in 160 minutes. LYS: mathematics (80 items in 135 minutes); sciences (90 questions in 135 minutes); literature and geography (80 questions in 135 minutes); social sciences (90 questions in 135 minutes); foreign language test (80 questions in 120 minutes). In recent years, the LYS included a small number of short, constructed response items. | All multiple-choice questions. TYT: 120 items to be answered in 135 minutes. AYT: 160 items to be answered in 180 minutes. Foreign language test: 80 items to be answered in 120 minutes. |
| Marking | Automatic marking using optical mark reading (OMR) technology. One-fourth of the number of incorrect answers is subtracted from the number of correct answers to obtain a candidate’s raw score in a test. | Remains the same. |
| Grading | 1st stage: students score a maximum of 500 marks. A minimum of 180 is required to be eligible for the 2nd stage and a minimum of 150 is required to be placed on a short course tertiary programme. 2nd stage: a student’s university placement score is calculated based on the average of their YGS (40%) and LYS and foreign language examinations (60%), and their classroom assessment mark. To calculate the classroom assessment grade, the student’s average mark from Grades 9 to 12 is calculated out of a maximum of 100 and multiplied by the coefficient 0.12. Different weights are applied to the LYS scores depending on the bachelor’s degree being applied for. | Remains the same. |
The second stage examination, the Field Qualification Tests (Alan Yeterlilik Testleri, AYT, previously the LYS), also takes place at the end of the school year. Students can choose to take up to four tests in the subjects relevant to the fields of study they wish to pursue at university. The tests now comprise only multiple-choice questions, although the 2017 LYS had included a small number of short, constructed response items. A student’s results from the first stage, TYT (40%), and the second stage, AYT (60%), are combined with their grade point average (the average of their final grades) from Grades 9 to 12 to calculate a final placement score, with the marks from classroom assessments contributing a minimum of 30 and a maximum of 60 points overall. After students have received their results, they indicate their preferences for up to 24 programmes and institutions via a centralised system, which automatically assigns applicants to bachelor’s programmes based on: the applicant’s preferences; the applicant’s placement score; and the number of places available on each programme of study.
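The scoring and assignment mechanics described above can be sketched as follows. The one-fourth guessing penalty and the 40%/60% TYT/AYT weighting come from the text; how the Grade 9-12 grade point average is converted into the 30-60 placement points is not specified, so the linear mapping below is purely an assumption, and the assignment loop is a simplified score-order sketch of the centralised matching, not ÖSYM's actual implementation.

```python
# Hedged sketch of the university placement mechanics. The guessing penalty
# and stage weights follow the text; the GPA-to-points mapping and the
# assignment procedure are simplifying assumptions for illustration only.

def raw_score(correct, incorrect):
    """Raw test score: one-fourth of incorrect answers is subtracted."""
    return correct - incorrect / 4.0

def placement_score(tyt, ayt, gpa):
    """Combine TYT (40%) and AYT (60%) with a GPA contribution of
    30-60 points. ASSUMPTION: linear mapping of GPA (0-100) to points."""
    gpa_points = 30.0 + 30.0 * gpa / 100.0
    return 0.4 * tyt + 0.6 * ayt + gpa_points

def assign(applicants, capacities):
    """Score-based assignment sketch: in descending placement-score order,
    give each applicant the highest-ranked programme on their preference
    list (up to 24 in the real system) that still has a free place."""
    placements = {}
    for name, score, prefs in sorted(applicants, key=lambda a: -a[1]):
        for programme in prefs:
            if capacities.get(programme, 0) > 0:
                capacities[programme] -= 1
                placements[name] = programme
                break
    return placements

applicants = [
    ("Ayse", 430.0, ["medicine", "law"]),
    ("Mehmet", 410.0, ["medicine", "engineering"]),
    ("Zeynep", 390.0, ["law", "engineering"]),
]
capacities = {"medicine": 1, "law": 1, "engineering": 1}
print(assign(applicants, capacities))
# {'Ayse': 'medicine', 'Mehmet': 'engineering', 'Zeynep': 'law'}
```

Because places are consumed in strict score order, every applicant receives the best programme on their list still available when their turn comes, which mirrors the transparency of the real system even though the actual algorithm may differ in detail.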
The 2018 examination reforms aimed to reduce pressure on students
The decision to organise the 2018 examination over a single weekend aimed to reduce pressure and disruption for students; the university placement examinations previously took place over five days. The overall number of questions in both stages was also slightly smaller than in previous years, giving students more time than before to answer each question.
According to the ÖSYM, a key change following the 2018 examination reform was that the first stage examination, the TYT, would focus on assessing basic skills and students’ ability to use and apply knowledge in different contexts. Sample questions shared publicly by the centre suggest that some questions, especially in the mathematics tests, did assess more basic skills in real‑world settings (in contrast to the very difficult, more abstract mathematics assessed in the previous YGS tests). Overall, however, the 2018 samples suggest significant continuity with the previous items and the skills assessed. The OECD review team was informed that a new approach to item development has since been introduced to improve the quality of items by situating tasks in authentic contexts and assessing higher-order thinking skills. While these changes sound highly promising, the review team was not in a position to analyse the new item types.
Competition for a place on a bachelor’s programme is intense
The university placement system is based on criteria that are objective, efficient and, of paramount importance, transparent. However, it is also a very competitive process. While over 2 million students on average take the university placement examination each year, in 2017 there were places in bachelor’s programmes for fewer than a quarter of candidates (MoNE, 2014[2]). This leaves the remaining students attending open education, distance learning undergraduate courses, or the short cycle courses that account for around half of all tertiary enrolments in Turkey. The latter are generally provided by vocational and technical tertiary institutions, which are perceived to be of poor quality and carry a lower status than bachelor’s programmes (World Bank, 2007[3]). Students in short cycle programmes are also more likely to drop out: a third leave before completion, compared with just 6% of bachelor’s degree students (OECD, 2016[4]).
The competition for tertiary places means that students, parents and teachers devote significant time and energy to examination preparation. In 2015, 15-year-olds in Turkey spent more time in after-school study than their peers in any other OECD country (OECD, 2016[5]). It was also reported to the OECD review team that some students in the final years of upper secondary choose to leave their regular high school to attend the distance learning courses provided by open high schools with a flexible schedule so that they can focus more on examination preparation. The desire to obtain a high score in the university placement examination also means that more than half of the examination’s candidates each year are repeaters, who are retaking the examination in the hope of being able to enrol in a more prestigious tertiary education course or institution.
Open examinations are available for those who have followed alternative educational routes
“Open examinations” are available for students in open lower secondary schools and open high schools, as well as students who have followed alternative educational routes, such as those who had previously dropped out of school or studied overseas. Students in open schools are required to pass these examinations to continue their education in the open system. However, to re-enter formal secondary education, students must be below the upper age limit of 18 years and need to apply to the local provincial education directorate for permission. To enter formal tertiary education, students must complete the standard university entrance examinations.
Assessments
A new national assessment, ABİDE, has the potential to fill an important gap in reliable data on learning outcomes
In 2016, the General Directorate of Measurement, Evaluation and Examination Services developed and administered a new national assessment – Akademik Becerilerin İzlenmesi ve Değerlendirilmesi - National Assessment of Student Learning (ABİDE) – to a sample of Grade 8 students to provide nationally representative results. The directorate administered the sample assessment to Grade 4 students in April 2018. Up until 2013, Turkey had conducted sample national assessments in Grades 4, 5, 6, 7, 8 and 11, called Situation Assessment Studies, to monitor achievement and collect some background information, such as students’ socio-economic status. These assessments enabled the ministry to compare performance across regions and different types of schools; however, they were discontinued, and for the past five years Turkey has not reported any standardised data on student learning in basic education (OECD, 2007[1]; OECD, 2013[6]).
ABİDE aims to fill this gap by providing reliable, comparative data on learning outcomes measured against the national curriculum. The 2016 pilot assessed students in Turkish, mathematics, sciences and social sciences. It also included questionnaires for students, teachers and school leaders to gather information on background factors that may influence learning outcomes. The directorate has since published a national report on the results of the Grade 8 pilot, although it had not yet done so at the time of the OECD review visit. The directorate is also using ABİDE to trial a broader range of questions than those typically used in Turkey’s high-stakes central examinations. For example, the 2016 pilot included some open-ended, constructed response items, with the intention of considering the use of similar question types in the TEOG examination.
The OECD was informed of three more recent assessment initiatives intended to provide more reliable information on student progress nationwide, as well as to enhance the diagnostic information available to teachers and schools at the classroom level. These are the Student Learning Achievement Monitoring assessment, the Turkish Language Skills Study and the Common Examinations2 initiatives. While the intent of these initiatives appears very positive, they were introduced after the analysis for this review was completed and are therefore not addressed in this report.
Regular participation in international student assessments provides periodic information on student performance against international benchmarks
Turkey participates in two international student assessments – the OECD Programme for International Student Assessment (PISA) since 2003, and the International Association for the Evaluation of Educational Achievement (IEA) Trends in International Mathematics and Science Study (TIMSS). PISA assesses 15-year-olds’ science, reading and mathematical literacy and is conducted every 3 years, while TIMSS assesses students in Grades 4 and 8 in mathematics and science every 4 years. Turkey has participated in the Grade 8 assessment since 1999, and the Grade 4 assessment since 2011.
Both PISA and TIMSS assess students against international frameworks of the knowledge and skills that are important for full participation in modern economies. For PISA, this includes competencies such as the ability to apply knowledge and skills and to analyse, reason and communicate effectively to solve problems. Alongside the student assessments, both PISA and TIMSS collect information on the background factors that may influence student learning, through questionnaires for students on their motivation and perceptions of school and for school principals on the school learning environment. PISA also includes questionnaires for parents, while in TIMSS teachers share information on their education, professional development and teaching experience.
Other types of diplomas and certification
Turkish students completing compulsory education receive a diploma
Upon completing compulsory education, students may be awarded a High School Diploma or a Vocational and Technical High School Diploma depending on the type of high school they have attended. High school diplomas are aligned with Level 4 on Turkey’s National Qualifications Framework, which was published in 2015 and is benchmarked against the European Qualifications Framework (MYK, 2013[7]). The diploma, with an accompanying transcript, records students’ results in school-based assessments and includes a summative “diploma score”, with marks of 50 or over corresponding to a “pass”. Since the diploma is based entirely on results from classroom assessments, there is no standardised component to ensure that all students have reached the same minimum standards or to provide nationally comparable information on achievement. In some specialist areas such as computing, students in vocational and technical high schools can also apply to the Vocational Qualifications Authority to take an additional examination that leads to the awarding of a Vocational Qualification Certificate. Vocational Qualification Certificates are provided at different levels aligned with Turkey’s National Qualifications Framework (MYK, 2013[7]).
Policy issues
In the aftermath of the decision to end the TEOG examination, Turkey’s most urgent priority is to develop a high school placement system that can cope with the likely acute oversubscription in some schools, in a way that is efficient, transparent and objective. Over time, providing more information to guide student choice and improving flexibility between pathways will help to ensure a better match between demand and supply for high school places. At the end of compulsory education, the first priority should be to introduce an examination to help certify learning and achievement, and the second, to ensure that the examination for tertiary selection discriminates more effectively based on the competencies and higher-order skills that are important at this level. Finally, it is critical that Turkey addresses a major gap in national data on learning outcomes by fully implementing its pilot national assessment, ABİDE. Providing adequate time for the implementation and development of all these changes will be essential to their success.
Policy issue 4.1. Enhancing the school placement and selection process at the end of Grade 8
Reform of the examination and student placement system at the end of Grade 8 is closely related to the current high school offer in Turkey and the challenge the country is facing in managing the transition from an elite to a universal system of upper secondary education. Turkey has an established body of prestigious high schools – the science high schools and previously the Anatolian high schools – where entry is determined by examination results. The outcomes of students from these high schools are significantly better than those of students at other general, and vocational and technical high schools. In 2016, 54% of graduates from science high schools went on to study for a bachelor’s degree, compared with 23% of graduates from general high schools and just 8% of those from vocational and technical high schools (MoNE, 2016[8]). Students in the science and Anatolian high schools have also tended to benefit from more learning time devoted to core subjects like science and mathematics, as well as the most qualified teachers (Clark, 2014[9]). The wide variations in student learning experience and outcomes across the different types of high schools mean that placement at the end of Grade 8 carries very high stakes, putting considerable pressure on students and families to gain access to the best schools and fuelling a large system of private tutoring.
Over the past 15 years, Turkey has implemented a series of reforms to address these pressures and the inequities they create. In 2010, Turkey began converting all high schools into Anatolian high schools and introduced a common curriculum in Grades 9 and 10 in an attempt to reduce differences in school quality. Turkey’s Tenth Development Plan proposes to further reduce the variety of high school types and facilitate student mobility across programmes. Other policies to support greater equity have included the prohibition of private tutoring and the requirement that all students attend their local primary and lower secondary school.
These reforms have been complemented by successive changes to the placement mechanism at the end of Grade 8. The TEOG system introduced in 2013 aimed to create a fair, transparent and objective system by requiring all students to take the same central examination for high school entry. However, by making entry to all high schools dependent on examination results, the TEOG also exerted a powerful backwash effect on education, putting pressure on young learners and leading many schools, parents and students to focus on examination preparation. For these reasons, the TEOG was replaced in 2018 with a new system of high school placement, under which most students transition to high schools in their local area, based on proximity and their programme preferences.
In several respects, this most recent change to the high school placement system brings Turkey closer to practice in other OECD countries. Most OECD countries that offer different upper secondary programmes avoid relying heavily on the results of national examinations to determine student pathways. Instead, these countries use a range of both national and classroom assessment information, alongside other elements, to inform decisions that are largely taken at school level (see Box 4.1). However, while the intention of the current reform is positive, there are likely to be many challenges in the first years of implementation, not least oversubscription at the schools perceived to be the best. It will be essential that Turkey develops a transparent system to manage oversubscription; otherwise, student and parent opposition will make the new placement system vulnerable to reactive policy changes. This review provides suggestions for how this might be done, recognising that the effectiveness of any planned improvements to the examination system will depend heavily on the success of longer-term policies to create more equal standards of quality across Turkey’s high schools.
Box 4.1. Student placement in different secondary programmes in OECD countries
Thirty-two OECD countries place students in different programmes during lower or upper secondary education. On average across OECD countries, three different programmes are available to secondary students broadly corresponding to academic, vocational and technical programmes. However, there is wide variation across countries, from 7 programmes in the Netherlands to just 2 in Greece. Placement most frequently takes place at age 16 but occurs much earlier in a few countries, such as Austria and Germany, where students are placed in different pathways at age 10.
To determine student placements, most OECD countries complement results from central or national examinations with information from classroom-based assessments, and student and parent choice. To ensure that a range of evidence about student performance is combined with their interests and preferences to help identify the programmes best suited to the individual student, a number of countries use individual student plans or class/teacher councils.
Individual student plans
A number of OECD countries use individual student plans to collect evidence of student learning across a range of competencies. The plans are used as the basis for a discussion between parents, students and teachers about future educational pathways that will be best suited to the needs of a student. In Sweden, Individual Development Plans record students’ development in relation to their learning and personal goals, and identify the next steps that will help students to reach their goals. Teachers discuss the plans with students and their parents.
Teacher and class councils
Councils take into account a range of evidence about a student’s academic performance and interests to discuss which pathway will best meet the strengths and needs of an individual student. In France, class councils (conseil de classe) evaluate each student’s performance throughout the year. Each grade has a council that includes the school principal, teaching staff, two parents, two students and a guidance counsellor. The class council meets at least three times a year. It reviews each student’s academic performance and takes into account his or her interests and medical and social well-being in order to advise the student on pathways. Students can appeal against decisions made by the class council if they wish.
Sources: Ministry of Education and Research (2010[10]), Country Background Report for Sweden, https://www.oecd.org/sweden/45957739.pdf; OECD (2016[5]), PISA 2015 Results (Volume II), Policies and Practices for Successful Schools, http://dx.doi.org/10.1787/9789264267510‑en; Ministère de l'Éducation Nationale (2018[11]), Les Niveaux et les Etablissements d'Enseignement [Levels and Educational Institutions], http://www.education.gouv.fr/pid24/les-niveaux‑et‑les‑etablissements‑d‑enseignement.html; Eurydice (2018[12]), The Information Database on Education Systems in Europe, https://webgate.ec.europa.eu/fpfis/mwikis/eurydice/index.php/Countries (accessed 10 January 2018).
Recommendation 4.1.1. Ensure that the system for high school placement can manage demand
Following the 2018 reform, the likely mismatch between supply and demand for certain schools makes it critical that the criteria used to place students when their preferences are oversubscribed are transparent. The pressure will be particularly acute in the first few years after the adoption of the new system since the TEOG has shaped students’ and parents’ perceptions of school quality. Few students will actively choose to attend a school that had low average TEOG entrance scores in the past or a reputation for low transition rates to bachelor’s programmes.
Ensure that students and parents understand how the new placement system will operate
Under the new system, most Turkish students will be placed in one of their preferred high schools based on the area of residence. To ensure that this new system is transparent, Turkey will need to ensure that the criteria for high school placements are made publicly available well before students and parents are required to formally indicate their high school preferences. The ministry should also encourage lower secondary schools to organise information evenings for Grade 8 students and their parents to ensure that they have a clear understanding of how the placement system will operate.
Basing the new placement system on the area of residence is objective and is the factor most frequently considered systematically for admission to the schools that 15-year-olds attend across OECD countries (OECD, 2016[5]). However, in Turkey as in many other countries, not all schools are equal. Some students may be trapped in areas where schools are low quality, encouraging families to move to areas where schools are perceived to be good (UCL Institute of Education, 2015[13]). Turkey has introduced measures to try to reduce this practice, for example, by taking into account the number of semesters that a student has been attending their lower secondary school. However, the impact of such policies will remain limited in the absence of deliberate efforts to improve schools in disadvantaged communities and reduce the country’s very large geographic disparities.
Develop transparent criteria for oversubscribed schools
A particular challenge for Turkey will be allocating places when schools are oversubscribed. The ministry is considering applying transparent and objective criteria, such as whether a student has a sibling who attends or has attended the school, the student’s current rate of attendance and their date of birth. Turkey also plans to use more subjective criteria, such as students’ marks from classroom assessments. This will provide information to match students to pathways where they are more likely to do well based on their strengths and abilities. In many other OECD countries, a student’s classroom performance also informs their upper secondary pathway (see Box 4.1).
However, there is a risk that classroom marks effectively become another means to rank students, similar to the previous TEOG system. There are also challenges to using classroom assessment results accurately and reliably in Turkey. Teachers report that they find it difficult to assess students’ levels of learning accurately (Kan, 2017[14]). There are also few moderation practices, such as teachers coming together within and across schools to ensure that student work is assessed to a common standard. Perceived differences in school quality, limited opportunities for students to move across pathways and the relatively early age of selection also mean that high school placement carries very high stakes, leaving teachers vulnerable to pressure to inflate student marks.
Aware of these challenges, the ministry is considering reporting students’ classroom marks as broad categories for the purposes of high school placement (e.g. A, B, C, etc.), rather than the mark out of 100 points that school report cards use. This approach aims to avoid excessive competition focused on small differences in student marks and to create more heterogeneous school intakes. However, since there will still be challenges associated with the reliability and accuracy of classroom assessment marks, the oversubscription criteria should ensure that students’ classroom marks are balanced by the more objective and transparent factors like distance, having a sibling in a school and age. Another option to encourage objectivity and transparency would be to apply a lottery within the categories of academic performance to determine the students within each category who gain a place in oversubscribed schools. In the medium to long term, measures to improve teachers’ assessment skills (see Chapters 2 and 3) and stronger school-level moderation practices should help to make marks based on classroom assessment more reliable.
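To make the lottery-within-categories option concrete, the sketch below shows one way it could work: applicants are grouped by broad performance category, better categories are admitted first, and lots are drawn only within the category where places run out. This is a hypothetical illustration in Python, not an official specification; the function name, the letter categories and the use of a seeded random draw are all assumptions.

```python
import random

def place_oversubscribed(applicants, capacity, seed=None):
    """Allocate places at an oversubscribed school by lottery within
    broad performance categories ('A' best), so that small mark
    differences inside a category carry no weight.

    applicants: list of (student_id, category) pairs.
    Returns the list of admitted student_ids.
    """
    rng = random.Random(seed)  # a fixed, published seed makes the draw auditable
    admitted = []
    # Work through the categories from best ('A') to worst.
    for cat in sorted({c for _, c in applicants}):
        remaining = capacity - len(admitted)
        if remaining <= 0:
            break
        pool = [sid for sid, c in applicants if c == cat]
        if len(pool) <= remaining:
            admitted.extend(pool)  # the whole band fits
        else:
            # Band is larger than the remaining places: draw lots within it.
            admitted.extend(rng.sample(pool, remaining))
    return admitted
```

Publishing the procedure (and, for example, conducting the draw under observation) would reinforce the transparency that the oversubscription criteria are meant to provide.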
Recommendation 4.1.2. Provide more information to guide student choice, while improving flexibility between pathways
Developing a placement system that is both transparent and fair will mean providing more and better-quality information to students and their parents about the pathways that are likely to match their interests and abilities. It will also mean reducing the stakes associated with placements by creating more flexibility across pathways.
Develop resources, like the new e-portfolios and career guidance, to inform students’ high school preferences
Under the new placement system, student choice will be a more influential factor than in the past, so students will need more support to choose high school programmes that reflect their interests, abilities and future career opportunities. This is especially important for informing student demand for vocational and technical high schools, so that these are seen as a pathway to develop valued competencies rather than a second-choice option for those who do less well in academic tests. There are several measures that Turkey might consider to help students make more informed decisions on their high school choice:
Develop the e-portfolio. The pilot e-portfolio can be developed to document evidence of learning in different areas of knowledge and skills to inform students’ high school preferences. Using evidence of learning in this way encourages students to think about what their interests and strengths are, and how different high school programmes and ultimately tertiary and career options can best match these. Teachers will need training and guidance on the types of evidence that can be used to document different skills, knowledge and competencies (see Chapter 3).
Provide more support to schools on how to discuss options with students and parents. If Turkey is to move towards a more demand-driven system of school selection, guidance services in lower secondary schools will need to be significantly enhanced. Guidance counsellors will need more high‑quality training and access to reliable information about the types of careers associated with different programme options, in particular for vocational and technical pathways. The overall number of counsellors may also need to be expanded, since there is generally just one counsellor per high school at present.
While building a cadre of guidance counsellors will take time, other measures could be introduced relatively easily to help students better understand the options available. Schools can be encouraged to organise visits to and/or from different high schools in the catchment area. Some countries introduce careers guidance as a subject in the middle school curriculum, which is something Turkey might consider. Norway, for example, makes Selection for Education a mandatory subject throughout lower secondary education. It provides information about the programmes available, the main differences between them and the career possibilities for each. Another course, Working-Life Skills, gives students an introduction to real-work situations as a means to help inform decisions on vocational education programmes (OECD, 2011[15]).
Provide information on a wider range of high school outcomes. Effective guidance will require more information than is currently available on the outcomes of different high school programmes, beyond success in the tertiary placement examination. National labour market information systems can be helpful to signal the value of different programmes, as can information about the kinds of competencies that employers look for in new recruits (OECD, 2010[16]). However, local information and experiences – the testimonies of successful graduates from vocational and technical education and local employers, for example, and direct access to workplace experiences – are also important to steer choices.
Create greater flexibility across upper secondary programmes
Some flexibility for students across upper secondary programmes is important to reduce the stakes of initial selection and to ensure that students do not remain in a programme that does not reflect their interests or ability. In Turkey, all schools follow a common curriculum in Grades 9 and 10, in theory giving students the possibility to change tracks up until the end of Grade 10. However, in practice under TEOG, it was rare for a student in a vocational or technical programme to move to a general programme. A student wanting to change school or programme in Grade 9 or 10 was still required to meet the initial threshold set by a school’s average TEOG entrance score. There was no possibility to take into account a student’s performance after the TEOG score was calculated, such as their achievement in Grade 9 or 10, their motivation or their interests. This risks confining students who develop slightly later to a programme that limits their future options and does not reflect their potential. Most other OECD countries begin selection later than Turkey’s age of 13, most frequently at age 16, and provide greater opportunities to move between tracks. In the Netherlands, for example, students who study in a combined general/vocational lower secondary track can later transfer to a general upper secondary track if they meet admission requirements, such as minimum grades, that are set by upper general secondary schools (Akkerman et al., 2011[17]).
In Turkey, high schools might be given greater discretion and encouragement to accept students from different programmes up until the end of Grade 10, where spaces are available. The greatest demand for transfer is likely to be from vocational and technical high schools into general schools. Transparent, national guidance will be needed to help schools decide among applicants. Criteria might explicitly reference both a student’s performance in Grades 9 and 10, as measured by classroom assessment, and evidence of motivation for the programme that the student would like to study, for example through extra-curricular activities or assignments documented in their e-portfolio. While there is likely to be less demand for transfer into vocational and technical schools, measures might be taken to make students more aware of this option, such as timetabled discussions of learning and career opportunities and opportunities for work experience.
Structural changes to upper secondary schooling to create greater flexibility across pathways might also be considered. The ministry is already considering providing provincial education directorates with some autonomy to adapt school structures to local demand, such as being able to increase school capacity or change high school types. In the medium to longer term, Turkey might also consider:
Providing provincial education directorates with the option to create more comprehensive high schools. Some provinces in Turkey in rural or sparsely populated areas already have Multi-Program Anatolian High Schools that offer both vocational and general programmes within the same school. This reduces rigidity between tracks as it is easier for students to move across programmes and supports equity by bringing together students from a diverse range of backgrounds.
Delaying the age of selection by one or two years. In the future, Turkey might consider delaying selection until the end of Grade 9 or 10. At the end of lower secondary, students would continue on to their local high school, with selection taking place one or two years later. In 1999, Poland implemented a similar reform that raised the age of student selection from 14 to 15 years, with a significant positive impact on the learning outcomes of those students who had previously been placed in vocational options at age 14 (see Box 4.2).
Any change to the structure of schooling in Turkey would need to be carefully planned and gradually implemented. When Finland extended its comprehensive system until age 16, the change was implemented over a period of five years, from 1972 to 1977, in selected regions, beginning with the less populated areas. Similarly, when Sweden extended comprehensive education until age 16, this was implemented gradually, beginning with a few municipalities (OECD, 2012[18]).
Box 4.2. Poland’s reforms for a comprehensive lower secondary gymnasium
In 1999, Poland implemented reforms to provide equal educational opportunities for all students and improve education quality. The reforms created a new comprehensive school – the new lower secondary gymnasium – and a new education system structure of six years at primary school and three years at the new lower secondary gymnasium. This replaced the previous system where students had remained in primary school for eight years and were then tracked into different pathways based on their performance in the placement exams. The top 20% went into a general secondary lyceum that prepared them for entry into university. The bottom half went to vocational schools and the remaining students attended two-year technical secondary schools. Under the new system, all students follow the same common curriculum until the age of 15, extending comprehensive education by one year.
The structural reform was accompanied by a new core curriculum for the lower secondary gymnasium. Curriculum development was also decentralised to the local level to engage schools and teachers, and central examinations were used to monitor results.
Studies suggest that the reform helped to reduce performance differences between schools and improved the performance of the lowest-achieving students. In PISA 2000, 21% of students in Poland reached only the lowest of PISA’s competency levels, Level 1. Students in vocational schools performed significantly below those in general secondary schools, with nearly 70% of vocational students performing at the lowest literacy level. By 2003, Poland’s average student performance had improved, and the decline in performance differences between schools was the largest among all OECD countries. The trend continued in PISA 2006, where there was a 115-point improvement among those students who would previously have attended vocational schools but now received an additional year of general education in the new comprehensive lower secondary gymnasium.
Sources: OECD (2011[19]), Lessons from PISA for the United States, http://dx.doi.org/10.1787/9789264096660‑en; OECD (2012[18]), Equity and Quality in Education: Supporting Disadvantaged Students and Schools, http://dx.doi.org/10.1787/9789264130852-en.
Recommendation 4.1.3. Reduce any negative distortions created by the high school entrance examination
Retaining the competitive entrance examination for a minority of high schools will reinforce national perceptions of these schools’ prestige and sustain the inequities that result from competitive selection. However, societal and parental pressures make it very difficult for Turkey not to keep a minority of elite, academically selective schools.
Following the 2018 reform, approximately 10% of high school places will be determined by student performance on the new centralised entrance examination. While this accounts for a small minority of overall high school places, the ministry predicts that in the examination’s first years the vast majority of students, 1 million (out of a cohort of 1.2 million), will take it. In the future, as it becomes clear that only a minority of candidates are successful, the ministry expects that the number of candidates will fall. Other recommendations in this report should also, over time, help to reduce such extreme pressure. However, with such a high number of students taking the examination, at least in the short term, it is likely to exert significant influence on the education system, with students and parents devoting considerable time and resources to examination preparation. This makes it important that Turkey consider measures to mitigate negative impacts on equity and teaching and learning.
Consider measures to ensure that students from disadvantaged backgrounds have a fair chance of accessing selective schools
Turkey has put in place some measures to enhance access to selective high schools. The schools will be more or less equally distributed across the country. There will also be some diversity in the types of schools, which will include some high-demand vocational and technical programmes. However, these measures are unlikely to change the current situation whereby Turkey’s most prestigious high schools attract a disproportionate share of students from the most advantaged socio-economic backgrounds. Internationally, other academically selective public schools also enrol a disproportionately greater share of students from higher socio-economic backgrounds than the local population. Some of these systems have introduced measures aimed at reducing disparities in access, which Turkey might draw upon. One option would be to group students according to socio‑economic group (or an appropriate proxy) and offer places to those students who achieve the highest marks in the entrance examination within each socio‑economic group. Chicago’s selective schools operate a similar system, as do some schools in England (see Box 4.3).
Box 4.3. Efforts to mitigate the negative impacts for equity of selective public schools in England (United Kingdom) and the United States
In some states or local authorities in Australia, England (United Kingdom) and the United States, some secondary schools use performance in a competitive entrance examination, among other factors, to determine entry. Where these schools exist, there is frequently high demand for places. They also have a negative impact on equity – with most selective schools having a disproportionate share of students from advantaged socio‑economic backgrounds. In New South Wales, Australia, for example, over 70% of students in Sydney’s top 10 performing selective schools come from families in the top quarter of socio-economic groups (Ho, 2017[20]). Evidence from England finds that even when comparing students of similar ability, according to national assessment results, students from lower socio-economic groups are less likely to attend selective schools (UCL Institute of Education, 2015[13]).
Some of these selective schools have put in place measures to try to partially mitigate the negative impact of selection on equity:
Chicago, United States: In Chicago, 11 “selective enrolment high schools” offer accelerated programmes for able students. Admission is based on student preferences and their score, which is calculated using their performance in the selective entrance exam and their classroom results. Thirty percent of school places are allocated to students with the highest marks, regardless of their socio‑economic status. The remaining 70% of places are allocated to students across four different socio-economic “tiers” according to area of residence. The tiers are calculated based on: average family income; adults’ level of educational attainment; share of homes that are owner-occupied; share of single-parent households; share of the population speaking a language other than English; and the performance of schools in the local area. A share of the top performing students within each tier is offered a place. In 2013, approximately 40% of offers were made to students from the 2 most socio-economically disadvantaged tiers (FRB of Chicago, 2016[21]).
New York, United States: The “specialised high schools of New York City” are nine selective public high schools for academically and artistically gifted students. To enter, students in Grades 8 and 9 take the common Specialised High Schools Admissions Test. Students from disadvantaged backgrounds are provided with access to free lessons to help them prepare for the entrance examination.
Selected Local Education Authorities (LEAs) in England, United Kingdom: There are approximately 164 grammar schools in England, where student performance on the “11-plus” examination determines entry. Students can opt to take the “11-plus” examination in the last year of primary school, at the age of 11. To encourage enrolment among students from lower socio-economic backgrounds, some grammar schools develop relationships with the community and with school teachers to help identify high-ability students from lower socio‑economic backgrounds who may be interested in attending the school, and work with their parents to encourage them to apply. Some schools also provide assisted-place schemes that cover or subsidise costs for travel, uniforms and textbooks, while others try to ensure that around a quarter of their annual student intake is from more disadvantaged communities.
Sources: Ho, C. (2017[20]), “Angry Anglos and aspirational Asians: Everyday multiculturalism in the selective school system in Sydney”, https://doi.org/10.1080/01596306.2017.1396961; UCL Institute of Education (2015[13]), Research into the Impact of Selective Schooling and School Composition, Secondary School Size, and Academies and Free Schools for the Guernsey Education Department Research, https://www.gov.gg/CHttpHandler.ashx?id=97557&p=0; Barrow, L., L. Sartain and M. de la Torre (2016[22]), The Role of Selective High Schools in Equalizing Educational Outcomes: Heterogeneous Effects by Neighborhood Socioeconomic Status, University of Chicago Consortium on School Research.
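The Chicago-style allocation described in Box 4.3, where a fixed share of places goes to the top scorers overall and the remainder is split across socio-economic tiers, can be sketched in simplified form as follows. This is an illustrative Python sketch only: the function name, the equal split of tier places and the decision to ignore rounding leftovers are assumptions, not Chicago’s actual rules.

```python
def allocate_selective_places(applicants, capacity, merit_share=0.30, n_tiers=4):
    """Simplified Chicago-style allocation: a merit_share of places goes
    to the highest composite scores overall; the rest is split equally
    across socio-economic tiers and filled by within-tier score rank.
    Rounding leftovers are left unallocated in this sketch.

    applicants: list of (student_id, score, tier) with tier in 1..n_tiers.
    Returns the set of admitted student_ids.
    """
    by_score = sorted(applicants, key=lambda a: a[1], reverse=True)
    merit_places = int(capacity * merit_share)
    # 1) Merit places: the top scores regardless of tier.
    admitted = {sid for sid, _, _ in by_score[:merit_places]}
    # 2) Tier places: remaining capacity split equally across tiers,
    #    filled by score rank among students not yet admitted.
    per_tier = (capacity - merit_places) // n_tiers
    for t in range(1, n_tiers + 1):
        pool = [sid for sid, _, tier in by_score if tier == t and sid not in admitted]
        admitted.update(pool[:per_tier])
    return admitted
```

The key property, visible in the tier loop, is that a strong candidate from a disadvantaged tier competes only against peers in the same tier for the reserved places.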
Design the high school entrance examination to reduce negative backwash for teaching and learning
The examination’s design will be very similar to that of the previous TEOG (see Table 4.1), based exclusively on multiple-choice questions and largely assessing discrete knowledge in abstract contexts. As it develops the question items for the new Grade 8 examination, the ministry should consider including more complex items, based on real-world situations, that assess competencies and higher-order skills. Such competencies are easier to assess with open-ended questions. However, in Turkey, the need to process results for a very large number of students in a short period of time, coupled with the need to maintain the highest degree of objectivity, means that machine-scored multiple-choice items are likely to remain the predominant item type for the foreseeable future. There are ways, though, that these items might be improved to focus teaching and learning on more complex competencies.
One option would be to replace some of the existing items with multiple-choice questions that present students with more complex tasks in authentic contexts, similar to the item types used in the OECD Programme for International Student Assessment (PISA), for example. The first step would be the development and approval of new test specifications, including the weighting for items specifically targeted at measuring higher-order cognitive abilities (e.g. analysis, synthesis and complex problem solving) in authentic contexts. Schools, teachers and the general public would then need advance notice of the proposed change. At the same time, the Directorate of Measurement, Evaluation and Examinations would need to train item writers in the development and evaluation of high-quality innovative items, i.e. items that depart from the traditional format by, for example, using authentic data or previously unseen stimulus material to test competencies beyond recall or routine application of knowledge. Examples of the new item types with explanatory notes could then be published. Finally, the directorate would need to publish examples of complete tests to ensure that all stakeholders are fully informed before new test formats are introduced.
The directorate has recently taken steps in this direction. A number of the questions in the 2018 samples shared with the public are designed to assess higher-order competencies. In particular, the sample mathematics questions employ some multi-stage problem solving, which should require students to demonstrate higher-order cognitive skills. There has also been an effort to set questions in real-world contexts; however, it is important that these contexts are relevant and accessible for test takers and do not unfairly disadvantage (or advantage) certain groups, such as those who might already be familiar with the stimulus material. The OECD was informed that subsequent improvements have been made to the test items, both with regard to the authenticity of tasks and to the focus on higher-order thinking skills.
The General Directorate of Measurement, Evaluation and Examinations (MEES) should also consider including semi-objective constructed-response items, such as multiple matching, ordering and short-answer types, which can be processed automatically using character recognition technologies (see Recommendation 4.2.2). The directorate has also piloted, through ABİDE, the use of short-answer questions for Grade 8 students, with their responses scored on-screen by human markers. Human marking enables the use of items with more extended answers. However, while this approach may be a possibility for the future and deserves further investigation, the practical demands of processing a huge number of answers in a short time and the need to achieve complete agreement between markers to avoid appeals mean that it is unlikely to be a practical solution in the medium term.
Improve the discriminatory power of the examination
Following the 2018 reform, many more students will be taking the entrance examination than there are available places, making it critical that the examination discriminates effectively among students, especially at the top of the ability range. However, this will be difficult, given the limited number of items and the possibility of guessing correctly when responding to a multiple‑choice question. This was a recognised problem with the TEOG. The score distributions from the 2017 TEOG show that some sub-tests had limited psychometric properties; for example, the modal score for the Turkish language test was 100%, suggesting that it had little power to discriminate in the upper half of the ability range. In 2017, more students received top marks in the TEOG examination than there were places available in the top schools. It will be important for the directorate to pay closer attention to the discriminatory capacity of the new examination if it is to fulfil its ability-placement function effectively. Sample items should be pre-tested explicitly for this purpose. This should be combined with analysis of results after the examination to determine how effective the actual items were, with appropriate adjustments implemented to improve the examination’s discriminatory power over time.
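One standard statistic used when pre-testing items for discriminatory power is the upper-lower discrimination index, which compares how often the strongest and weakest candidates answer each item correctly. The sketch below illustrates the computation; it is an illustrative example rather than the directorate’s actual procedure, and the 27% group fraction is simply a common psychometric convention.

```python
def discrimination_index(responses, frac=0.27):
    """Upper-lower discrimination index for each test item:
    D = p(correct | top group) - p(correct | bottom group).
    Items with D near zero (or negative) fail to separate strong from
    weak candidates; values above roughly 0.3 are usually acceptable.

    responses: list of per-student lists of 0/1 item scores.
    """
    n = len(responses)
    k = max(1, int(n * frac))  # size of the upper and lower groups
    # Rank students by total score, best first.
    ranked = sorted(responses, key=sum, reverse=True)
    upper, lower = ranked[:k], ranked[-k:]
    n_items = len(responses[0])
    return [
        sum(s[i] for s in upper) / k - sum(s[i] for s in lower) / k
        for i in range(n_items)
    ]
```

An item that every candidate answers correctly, like the Turkish language sub-test described above with its modal score of 100%, would show an index near zero and flag itself for replacement before the live examination.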
Policy issue 4.2. Ensuring that examinations at the end of compulsory education effectively serve the functions of certification and selection
At the end of upper secondary education, examinations should fulfil two major functions. First, they should help provide each school leaver with certification of their achievements and ensure that they have met the minimum requirements for graduation. Second, they should provide the information necessary to inform decisions on further education and eligibility for tertiary education.
While Turkey has a school-leaving diploma for students who complete upper secondary education, this does not serve a qualification and certification function as effectively as it might. The diploma is based solely on classroom assessment marks without any standardised measure of achievement. Since classroom assessments are not centrally designed, administered or marked, they are perceived to provide less reliable information than central examinations. Consequently, they carry less credibility or “signalling value” for future employers and further education providers and have less utility for students with respect to advancing in work or in learning. For this reason, the majority of OECD countries (21) use central or national examinations at the end of upper secondary education to certify student achievement (OECD, 2015[23]). In Turkey, developing a recognised form of certification at this stage should be a priority and will be essential for achieving a more inclusive secondary school system.
Another concern is that, while the university placement examinations serve their primary administrative function of placing students in tertiary education programmes effectively and efficiently, they dominate teaching and learning throughout high school. Students, parents and teachers devote significant time and energy to preparing for an examination that focuses on knowledge recall and does not assess the higher-order competencies that are important for tertiary success. While it might be difficult to significantly reduce the influence of the examination, whose high stakes are accentuated in Turkey by the limited access to bachelor’s programmes, there are ways in which the university placement examination could be redesigned so that it has a less negative backwash effect on student learning.
Recommendation 4.2.1. Develop a national examination to help certify achievement at the end of compulsory education
The absence of any external examinations to certify student achievement against national curriculum standards at the end of upper secondary school is a notable gap in Turkey, especially as the country wants to ensure all students complete this level of education. At present, the only recognised signal of student achievement at the end of upper secondary is the university placement examination. However, the sole purpose of this examination is selection to tertiary education. It is by necessity discriminating, with nearly two-thirds (63%) of the examination’s candidates in 2016 failing to obtain sufficiently high marks to be placed in any kind of tertiary programme (MoNE, 2016[8]). This leaves the majority of candidates leaving high school with no meaningful recognition of what they have achieved. In contrast, the majority of OECD countries provide all students with the opportunity to certify their achievements and demonstrate that they have met the minimum requirements of compulsory schooling. The few OECD countries that, like Turkey, rely solely on classroom-based results for high school completion and provide a standardised examination only for entry to tertiary education – Korea, for example – not only have very different school and tertiary systems but have also recognised this arrangement as an obstacle to effective skills development (Kis and Park, 2012[24]).
Turkey should consider developing a national examination that gives all students the chance to gain recognition for their achievements at the end of upper secondary school. This would be a significant change that will require careful planning and communication to students, parents, teachers and schools. Great care and technical expertise would be needed at all stages of designing and implementing the new examination. The process of agreeing on the concept, building a model, piloting instruments and procedures, building public awareness and support, and finally implementing the system would take at least five years. For example, introducing a new matura examination system in Slovenia took six years from the initial policy decision in 1989 to the conduct of the first live examinations in 1995 (Bethell and Gabrščeek, 1996[25]). In introducing such an examination, Turkey would need to consider the following points.
Determine the main purpose of the school-leaving examination
In developing an examination for certification at the end of upper secondary education, a fundamental first question to address will be whether to have a single examination that serves the purpose of both high school completion and tertiary selection, or two separate examinations. In many OECD countries, the same examination is used to certify upper secondary completion and select students into tertiary education (see Box 4.4) (OECD, 2015[23]). Such dual-purpose systems have significant advantages. They allow all assessment instruments (e.g. test papers) to be closely aligned with the content and target outcomes of the school curriculum. They reduce pressure on students, who only have to prepare for one set of examinations rather than two. They also increase the efficiency of the system because examinations are administered once rather than twice. However, designing an examination system that serves both functions well is technically challenging because the examination items must produce score distributions which are sufficiently reliable across the full range of student ability. Some countries, such as Ireland, address this by offering examinations at different ability levels.
While this might be a model for Turkey to move towards in the longer term, there are several reasons why a two-stage approach might be both more feasible and desirable now. Students who reach this final stage of education in Turkey have very diverse levels of learning and face very different future opportunities. Around half will be graduating from vocational schools and only a minority of all students will go on to study a bachelor’s degree. In this context, it would be very hard to develop a single examination that certifies fairly the achievement of all school graduates while at the same time effectively identifying aptitude for tertiary-level study.
Moreover, Turkey already has a system of two-stage examinations at the end of high school. The TYT is open to all students and, as its name indicates, aims to assess “basic proficiencies”. With the changes outlined below to the way students are assessed ‒ including the introduction of a passing threshold based on demonstrated minimum proficiency in core subjects, and perhaps also some optional subjects – the TYT might be developed to help serve a certification function, while also remaining the first filter in the university placement system. Under this new system, the AYT could be retained as the final selection and placement tool for tertiary education. In this case, the AYT would only need to focus on discriminating effectively at the top of the ability range, meaning that the examination items could be modified and improved to better assess higher-order competencies (see Recommendation 4.2.2). Given the extent of overhaul to the high school entry examination, incremental reform to this high-stakes examination for university entry might be more feasible politically and socially.
The governance of this system would need to be addressed. At present, the Placement Centre is responsible for both the TYT and AYT, on the basis that their primary purpose is progressive selection for tertiary education. Developing the first examination into a school‑leaving style examination for certification at the end of upper secondary would require a change in this arrangement, with the General Directorate of Measurement, Evaluation and Examinations (MEES) in the ministry managing the examination design to ensure its alignment with both the curriculum and the broader intent of upper secondary education policy in Turkey.
Box 4.4. School leaving examinations in Lithuania and Ireland
In Lithuania, the matura examination held at the end of upper secondary education serves the dual purpose of certifying school completion and providing access to tertiary education, including state-funded places for tertiary education. The matura can be taken at the state-level in biology, chemistry, physics, geography, information technologies, mathematics, history and foreign languages, while locally-assessed school-level matura examinations can be taken in minority languages, arts, musicology and technology.
In order to complete upper secondary education, students must pass two matura examinations: a compulsory examination in Lithuanian language and literature and another examination in a subject chosen by the student. Students who wish to pursue tertiary education in universities must take the Lithuanian language and literature matura at the state-level. Certain tertiary education institutions may also set their own individual requirements for admission. In order to obtain state funding for tertiary education, students must pass three matura examinations in Lithuanian language and literature (state-level), mathematics (state-level) and a foreign language.
Ireland’s Leaving Certificate Examinations are final examinations taken at the end of the secondary school system. The Leaving Certificate serves two purposes: it certifies school completion and provides access to tertiary education. The examinations are available in a variety of subjects including Irish language, English language, mathematics, natural sciences, humanities and the arts. Students can take a combination of higher-level and ordinary-level examinations.
The grades from these examinations are converted into points, with the total number of points determining access to the tertiary education course the student has applied for. The Central Applications Office runs this university admissions process. Students can also opt to take two subjects, Irish language and mathematics, at the foundation level. The Irish language examination at this level earns zero points, and only certain institutions grant any points for mathematics at the foundation level. However, students can combine these foundation-level examinations with other higher- or ordinary-level examinations to collect enough points to gain access to certain universities.
In order to certify school completion, students must pass examinations at any level in five subjects. Students who meet this criterion are also able to access post-secondary non‑tertiary courses that usually last one year and, in many cases, provide access to tertiary education institutions.
Sources: OECD (2017[26]), Education in Lithuania, http://dx.doi.org/10.1787/9789264281486-en; Department of Education and Skills, Ireland (2018[27]), The Education System, Ireland, https://www.education.ie/en/The-Education-System/ (accessed 01 March 2018).
Determine which subjects will be assessed
In developing the structure and content of the examination system, Turkey will need to determine which subjects will be compulsory and which will be optional. In countries where a core of compulsory subjects is defined, a national language (state language or mother tongue) is almost always included, with mathematics as the next most common requirement (Hodgen et al., 2010[28]). These subjects represent the fundamental cognitive competencies that any student should master at a basic proficiency level by the end of schooling and provide an essential foundation for successful participation in the modern knowledge economy. This approach would also reflect the current organisation of Turkey’s national curriculum, where Turkish and mathematics are taken by all students until the end of upper secondary education. Providing certification would help to value and incentivise achievement in these core areas.
Beyond this, there is considerable variety in approaches globally and across the OECD in the subjects examined at the end of upper secondary education (Dufaux, 2012[29]). This reflects differences in how countries balance breadth and depth of learning at this stage of schooling and the extent of freedom given to students in their specialisation. Overall, however, there seems to have been a general trend in recent years towards reducing the number of compulsory subjects in favour of more student choice, and limiting the total number of examined subjects to address concerns of excessive pressure and overload. In a country like Turkey, which aims to develop a strong vocational education pathway, this is an important consideration. While having examination results in mathematics and Turkish could help to add weight to the certification that vocational and technical graduates receive, too many compulsory subjects could reduce the time for authentic vocational alternatives, where students can develop specialised expertise.
Developing the vocational track will also require developing credible assessments to certify skills acquired in specific technical and vocational domains. Turkey has started to do this, by developing specialised vocational and technical qualifications provided by the Vocational Qualifications Authority, but so far only a couple of sectors are covered.
Determine how the examination will be marked
Since the purpose of the school-leaving examination would be to ensure that students have met minimum standards, criterion-referenced marking would be most appropriate. To do this, Turkey might consider expressing results on a graded scale. Different options include numbers (e.g. 1-6), letters (e.g. A-F) or names (e.g. pass, pass with merit, etc.). The categories could also be used to create a threshold for access to short-course programmes (International Standard Classification of Education, ISCED 5) and to determine eligibility for the AYT examination. This would also help to ensure that the school-leaving examination does not encourage an excessive focus on individual marks, opening up the possibility of using a broader range of question types and teacher- or school-based assessments as part of the examination.
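A criterion-referenced scale of the kind discussed above amounts to mapping raw scores onto a small set of result categories with fixed cut-offs. The sketch below illustrates this; the band labels, cut-off scores and uses of the thresholds are assumptions for illustration, not a proposal for Turkey’s actual scale.

```python
# Illustrative only: cut-offs and labels are assumed, not an official scale.
BANDS = [  # (minimum raw score out of 100, result category)
    (85, "pass with distinction"),
    (70, "pass with merit"),
    (50, "pass"),
    (0,  "below threshold"),
]

def band(score):
    """Map a raw score to a criterion-referenced result category."""
    for cutoff, label in BANDS:
        if score >= cutoff:
            return label
    raise ValueError("score must be non-negative")

# A 'pass' or better could, for example, gate access to ISCED 5 short-course
# programmes or eligibility for the AYT (policy choices, not prescriptions).
print(band(72))  # "pass with merit"
```

Because each category covers a range of marks, small differences in raw scores no longer change a student’s result, which is what reduces the pressure on individual marks and makes room for less precisely scored assessment formats.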
Consider the types of questions that will be used
In the past, the high stakes associated with central examinations in Turkey and the associated need for objectivity and transparency meant that multiple-choice items capable of automatic scoring were preferred. However, the school leaving examination would carry lower stakes because students would only need to reach a minimum threshold to access short course tertiary programmes or to be eligible for the second-stage university placement examination (AYT). In this context, Turkey could use the school leaving examination to introduce more open question items. This would help to enhance the educational value and validity of the examination.
In the short term, this might include more open questions that are still capable of automatic scoring, such as semi-objective constructed-response items and questions set in real-world contexts (see Recommendation 4.1.3). In the future, a small number of manually marked items may also be incorporated. In the majority of OECD countries, graduation examinations include some open items and, in some cases, long essays (OECD, 2015[23]). While this requires manual marking, which is time-consuming and inevitably introduces a degree of subjectivity, these countries judge the ability to assess broader and higher-order competencies to outweigh any potential loss of reliability. Building understanding and trust in the new certification system in Turkey will be important for the country to move in this direction.
Consider including a small share of teacher-assessed work
Turkey will also need to decide which, if any, elements of a school-leaving examination will be assessed by teachers in schools. The challenges of achieving reliable marks from teacher-assessed work are considerable. This is a particular concern in Turkey given the difficulties teachers have expressed in being able to confidently exercise professional judgement to determine the level of student work according to national standards (Kan, 2017[14]).
However, written central examinations can only assess a limited part of the curriculum. In contrast, teacher assessments can use a broader range of assessment tools like projects, group work, experiments and presentations where students are required to draw on broader skills and competencies like communication, creativity and teamwork (OECD, 2013[30]). This is particularly important in Turkey, where these kinds of skills have so far been neglected by the assessment system, despite being introduced to the national curriculum over a decade ago. Teacher and school-based assessments also provide the opportunity for students who typically perform less well in a high-pressure examination setting to demonstrate their abilities. This is an important consideration for a school leaving examination that aims to provide all students with a fair chance to show what they know and can do.
In the short term, Turkey might keep the share of marks that teacher or school-based assessments contribute to the overall examination result relatively small. At the same time, guidance for the teacher or school-based assessment component and moderation could be used to encourage reliability. This will need to be complemented by system‑wide efforts to help teachers confidently assess student work according to national standards (see Chapter 3 and Recommendation 4.2.3). Over time, as teachers’ assessment capacity increases, the share of marks that the teacher or school-based assessment component contribute to the final result might increase.
Recommendation 4.2.2. Enhance the validity of the university placement examination
The high stakes associated with the university placement examination mean that the focus for many students, teachers and parents throughout upper secondary is on achieving high marks in it. What is assessed in the examination, and how it is assessed, therefore has a strong influence on teaching and learning in upper secondary school. It was reported to the OECD review team that the university placement examination can negatively affect education in high schools because it does not fully measure the competencies that students should be acquiring at this level. In particular, there is a view that some of the items are too abstract (i.e. not sufficiently concrete and/or related to real-world contexts) and have tended to focus more on the recall of factual knowledge and/or routine procedures than on higher-order cognitive skills. There is also a concern that the examination and its previous forms have not been as effective as they might be in identifying students’ aptitude for tertiary-level study, precisely because they do not focus sufficiently on higher-order competencies (Sıdkı Ağazade et al., 2014[31]). Finally, the organisation of the examination could be adapted to place less pressure on students when they are taking the examination.
Make fuller use of technology to enable a wider range of item types
Some steps have already been taken to assess a broader range of skills and knowledge in the Grade 12 examinations. The new TYT, like the previous YGS, includes some items clearly designed to test problem-solving skills in authentic contexts. The sample of items studied in 2018 suggested that these types of question were well developed in mathematics, but less apparent in other subject areas like Turkish language, where the focus was more on assessing discrete knowledge and skills like grammar, spelling and punctuation. And while the previous version of the second-stage examination, the LYS, had included a small number of constructed-response items, there seems to have been a step back with the 2018 version, the AYT, which was based exclusively on multiple-choice questions. This leaves considerable scope to improve the validity of the examination. This review’s recommendations for the Grade 8 examination – notably the proposal to introduce more items set in real-world situations and more constructed-response items (see Recommendation 4.1.3) – should also be considered here. The OECD review team was informed that the examinations for 2019 already reflect a new approach to item development, with students presented with more authentic tasks and more questions that require the use of higher-order thinking skills.
The introduction of a wider range of question items will require the use of more sophisticated technologies for marking examinations. The Placement Centre (ÖSYM) currently employs optical mark reading (OMR) technology, which severely restricts the type (and complexity) of answers that students can give. Introducing optical character recognition (OCR) technology for scoring student responses would allow for the use of a greater variety of item types, all still capable of automatic scoring. There are many international examples where OCR is used in scoring high stake examinations in contexts where the security and transparency of the examinations are of paramount importance (see Box 4.5).
Give students more time and reconsider how the correction formula is used to reduce the pressure on students
The high demand for university places means that applicants will always be under pressure. However, some elements of the current system exacerbate the situation and could be rectified in the short to medium term. First, the average time allowed per task is far shorter than in comparable examinations. While the time per question has been increased as part of recent changes (for example, from 1 minute per item in the YGS to 1.125 minutes in the TYT), it remains a time-stressed examination, as is the new AYT. In contrast, the previous TEOG examination allowed 2 minutes per item. As a starting point, ÖSYM might consider allowing 2 minutes per item plus 15 minutes. This should then be checked through a field trial, i.e. by administering the test under trial conditions and monitoring how many students finish within the time allowed.
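The suggested timing rule is simple arithmetic, sketched below for a range of paper sizes. The item counts shown are hypothetical, chosen only to illustrate the calculation, not the actual TYT or AYT specifications.

```python
def proposed_duration_minutes(n_items, minutes_per_item=2, buffer_minutes=15):
    """Total examination time under the suggested '2 minutes per item plus
    15 minutes' starting rule. Defaults reflect the recommendation in the
    text; a field trial would then confirm or adjust them."""
    return n_items * minutes_per_item + buffer_minutes

# Hypothetical paper sizes, for illustration only.
for n in (40, 80, 120):
    print(n, "items ->", proposed_duration_minutes(n), "minutes")
```

A field trial would then report the share of candidates completing within this allowance, and the parameters could be adjusted until that share is acceptably high.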
Second, the items used in the university placement examinations are, by design, complex and relatively difficult. However, students are placed under increased psychological pressure by the knowledge that incorrect answers are penalised. Research shows that applying a “correction for guessing” has little impact on the reliability of a test, since corrected and uncorrected scores are highly correlated (r>0.95 according to Ebel (1972[32])). Taking into account that the instruction “do not guess” misdirects candidates (statistically speaking, they would be better off guessing on unknown items even when there is a guessing correction), Mehrens and Lehmann (1986[33]) conclude that “correction for guessing should not be used”. ÖSYM should investigate the impact of applying its guessing-correction formula, with a view to using unadjusted scores for the university placement examination. In addition, highly speeded tests, i.e. tests where many items have to be answered in a short time, increase the likelihood that candidates will guess, meaning that increasing the time allowance as recommended above will ameliorate, but not solve, the problem.
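The classical correction-for-guessing discussed by Ebel and by Mehrens and Lehmann subtracts a fraction of wrong answers from the number right: R − W/(k−1), where k is the number of answer options and omitted items are not counted as wrong. The sketch below implements this textbook formula; whether ÖSYM applies exactly this form is not specified here, and the example numbers are illustrative.

```python
def corrected_score(n_right, n_wrong, n_choices):
    """Classical correction-for-guessing: R - W/(k-1).

    With 5-option items (k=5), every four wrong answers cancel one right
    answer. Omitted items are ignored, which is why the instruction
    'do not guess' misdirects candidates: on an item where a candidate can
    eliminate even one option, guessing has positive expected value.
    """
    return n_right - n_wrong / (n_choices - 1)

# A candidate with 30 right and 10 wrong answers on 5-option items:
print(corrected_score(30, 10, 5))  # 27.5
```

Because the correction is a fixed linear function of rights and wrongs, corrected and uncorrected scores rank candidates almost identically, which is the statistical basis for the high correlation (r>0.95) cited above.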
Box 4.5. Using optical character recognition (OCR) technologies in high-stakes examinations
OCR technologies are able to convert images of handwritten or printed text into machine-encoded text. This means that they enable the use of a variety of question formats including standard multiple-choice, complex multiple-choice, multiple-matching, clustered true/false, ordering, and short-answer constructed response (words or numbers). A number of countries, including the Czech Republic, Georgia, the Russian Federation and Ukraine, use OCR technologies to score high-stakes examinations at the interface between secondary school and universities. An example of a question from Ukraine is provided below. Other international examples are provided in Annex 4.A.
Example: Mathematics question in multiple-matching format from the Ukrainian Admission Examination (2015). Translation of the question and the completed answer grid is included below:
Match the shaded area of each geometrical shape (1-4) with the correct area (A- Д):
1. Circle of radius 4cm
2. Semi-circle of radius 6cm
3. Sector having radius of 12cm subtending an angle of 30°
4. A ring with radii 4cm and 6cm.
Reduce the grades that contribute to the final placement score
The placement score for tertiary study in Turkey incorporates classroom assessment marks from Grades 9 to 12. Including marks from assessments other than the examination provides the opportunity to assess a broader range of competencies (OECD, 2013[30]). However, in Turkey, there are a number of challenges with how the classroom assessment marks are used which limit these pedagogical benefits, in addition to concerns about their reliability.
First, students and teachers reported to the OECD review team that classroom assessments frequently replicate the content and types of question items that the examination uses, i.e. tests based on multiple-choice items. This means that the classroom assessments are narrowing rather than broadening the skills, knowledge and competencies that students are encouraged to develop and demonstrate during high school.
Second, using the classroom assessment marks from the beginning of high school to calculate the university placement score may also create undue pressure on students and limit space for learning. Teachers and students are less likely to engage in the kinds of formative interaction where students feel comfortable to reveal what they do not know because from the first semester, marks matter.
Third, assessment results from the early grades of high school may not be an accurate reflection of ability or effort. Students who have just started high school are still adjusting intellectually, emotionally and socially to a new school and phase of their education. Student performance frequently dips after these kinds of school transitions.
For all of these reasons, the ministry should adjust the system to take into account the school marks from Grades 11 and 12 only in the final placement score. This would reduce the pressure on students and give them more time to adjust to their new school. It would also create more space for teachers to use a broader range of assessments in the early grades and give more time to formative feedback. The reliability of the overall placement score would not be significantly damaged by this change, and indeed the validity of the assessment could be raised by increasing the weighting of the components having greater predictive power. Taking the steps outlined below to improve the quality of classroom assessments in Grades 11 and 12 (Recommendation 4.2.3) would further enhance their pedagogical and psychometric value.
Consider how to draw on wider sources of evidence for university placement in the future
Over time, Turkey might consider using other sources of evidence to determine entry to tertiary education. Across 32 OECD member countries, only 2 other countries (Greece and Hungary) rely, like Turkey, on the results of a central examination and average grades from classroom assessments to determine entry to tertiary institutions (OECD, 2017[35]). Other sources of evidence frequently used alongside central examinations to determine admissions across OECD countries include interviews, applicants’ work experience, volunteer work, recommendations and a written statement from the applicant. Turkey’s centralised placement system has the strength of being transparent and largely objective, which is essential in a context where competition is so high. However, as access expands and the tertiary sector becomes more diverse, it will be necessary to reflect on the value that other sources of evidence might bring in terms of matching student aptitudes to course requirements and enabling alternative pathways into tertiary education.
Recommendation 4.2.3. Improve the reliability and validity of school-based assessments
Including school marks in composite examination scores provides scope for a range of assessment types, administered on multiple occasions and in different contexts, to inform a student’s final score. In Turkey, the university placement examination already includes school marks in calculating the final score, and this report has recommended that Turkey consider basing a proportion of the final mark in the new school-leaving examination on teacher-based assessments (see Recommendation 4.2.1). However, the classroom assessment marks that have contributed to the university placement examination in the past have simply duplicated the assessment tasks of the central examination. Teachers’ marks are also reported to vary significantly across different schools.
Provide teachers and schools with instructions for the school-based components of examinations
At present, central regulations for classroom assessment in Turkey provide broad guidelines, including the types of assessments to be used, e.g. examinations or projects and some expectations for their frequency (see Chapter 3). This leaves teachers with significant freedom in terms of the assessments used. However, in the years approaching centralised examinations, teachers tend to rely on multiple-choice questions that are easy to use and mark, and help students practice for their upcoming examination.
To encourage teachers to use a broader range of assessments like investigations, group work or extended essays, the ministry might provide more detailed instructions to teachers and schools on how and what should be assessed as part of the school-based assessments that contribute to central examination results. This is important to improve the validity of the examinations by ensuring that the skills assessed are a more accurate reflection of those set out in the curriculum. It can also help to reduce the negative backwash of the examinations and enhance their positive impact on student learning.
For example, the school-leaving examination and/or university placement examination might specify that, for the school-based component in Turkish, the mark should be derived from two assessed written tasks per semester. Teachers could be given freedom to choose suitable topics for investigation, but the assessment criteria and the number of marks available for each would be prescribed. Instructions would be provided on the nature of those tasks, e.g. format, length and genre, to ensure that they do not use multiple-choice items and that they assess a broader range of skills than can be assessed during the examination. To ensure more reliability in the marks, teachers would be given criteria for scoring their students’ work. In developing this kind of guidance, Turkey can draw on the types of resources provided for teacher-assessed work or “coursework” for examinations in other countries (see Box 4.6).
Box 4.6. Coursework for the Cambridge IGCSE syllabus for English as a First Language
The International General Certificate of Secondary Education (IGCSE), offered by Cambridge Assessment International Education, is an international qualification for 14‑16 year-olds that involves written and oral coursework as well as practical assessments. Coursework is developed by students and marked by their classroom teachers, with the grades contributing to their final mark.
The examination syllabus requires candidates to develop a coursework portfolio with three assignments:
Assignment 1: informative, analytical and/or argumentative writing. For example, students may be asked to write a short diary providing information on activities they undertook over a weekend, i.e. writing to inform.
Assignment 2: imaginative, descriptive and/or narrative writing. For example, students can write a detailed description of people who go to a local shop, i.e. writing to describe.
Assignment 3: a response to a text or texts chosen by the school. The text(s) should contain facts, opinions and arguments. Candidates respond to the text(s) by selecting, analysing and evaluating points from the material. They may write in any appropriate form they wish. For example, using letters published in a local newspaper to analyse and evaluate the views and information presented on a particular topic, students can write an article for the newspaper, i.e. writing to analyse.
The assignments are graded based on clear guidelines provided by the IGCSE that explain standards of achievement to the teachers.
Source: UCLES (2018[36]), Cambridge IGCSE Subjects, http://www.cambridgeinternational.org/programmes-and-qualifications/cambridge-secondary-2/cambridge-igcse/subjects/# (accessed on 20 February 2018).
Ensure a sufficiently robust system for standardising, checking and moderating the marks awarded by teachers
Incorporating school-based assessments into any examination raises the problem of reliability and, in particular, of inter-school comparability. This challenge is particularly acute for the high-stakes university entrance examination, the AYT. The teacher-assessed marks contribute between 30 and 60 marks to the overall university placement score, which, in the context of high demand for places and the determinant role of the placement score, means that every point counts.
There are three main methods of moderation to support the reliability of the marks from school-based assessments:
Consensus moderation: involves bringing groups of teachers from different schools together in order to train them in the application of the assessment criteria, and bring their judgements into line through the review of common examples of student work and discussion.
External moderation: involves the review of a school’s student work by the central examining authority, such as the General Directorate of Measurement, Evaluation and Examinations or the ÖSYM. Schools are usually required to submit a defined sample of work for reassessment. Using evidence from this sample, the central authority may adjust the marks submitted in order to bring the school into line with others.
Statistical moderation: involves the use of statistical methods to compare (correlate) school-based results and examination scores to identify schools submitting marks that lie outside expected bounds. The process can then apply a “correction factor” to the marks of a school where the examination scores differ significantly from the school-based marks. While this is an objective and efficient method, there are three major challenges to the use of statistical moderation. First, it fails to take into account the fundamental differences between the skills assessed in schools and those assessed by external tests. Second, such statistical methods are rarely appropriate in the case of small schools where measurement errors are large. Third, whilst the application of a statistical correction may move the average school mark in the right direction it may, at the same time, disadvantage individual students by applying an unwarranted penalty to their scores.
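The “correction factor” approach can be illustrated with a minimal sketch of linear statistical moderation, in which a school’s teacher-assessed marks are rescaled so that their mean and spread match the same cohort’s examination scores. This is an illustrative simplification only; operational systems use more sophisticated equating models, and the function name and interface here are hypothetical:

```python
from statistics import mean, stdev

def moderate_school_marks(school_marks, exam_scores):
    """Linearly moderate teacher-assessed marks against external exam scores.

    Illustrative only: shifts and scales a school's marks so that their
    mean and spread match the same cohort's examination results, while
    preserving the rank order the teachers assigned.
    """
    m_mean, e_mean = mean(school_marks), mean(exam_scores)
    # Guard against zero spread in very small schools, where statistical
    # moderation is unreliable anyway (the second challenge noted above).
    m_sd = stdev(school_marks) if len(school_marks) > 1 else 0.0
    e_sd = stdev(exam_scores) if len(exam_scores) > 1 else 0.0
    scale = (e_sd / m_sd) if m_sd else 1.0
    return [e_mean + (m - m_mean) * scale for m in school_marks]

# A school whose teachers mark generously relative to the exam:
marks = [85, 90, 95]   # teacher-assessed marks
exams = [60, 70, 80]   # external examination scores for the same cohort
print(moderate_school_marks(marks, exams))  # → [60.0, 70.0, 80.0]
```

Note how the adjustment preserves the rank order teachers assigned within the school but, as discussed above, an individual student can still be over- or under-corrected when the quality of their school-based work genuinely differs from their examination performance.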
Turkey’s new provincial assessment centres, in collaboration with the ministry and the ÖSYM, would be best placed to introduce moderation as they combine technical expertise with some proximity to schools. In the short term, they might use external and/or statistical methods to improve the reliability of the marking practices of schools within their provinces. Where reviews revealed concerns about the reliability of schools’ assessment marks, the centres would be expected to work with the schools to develop training and support to address this. In the future, as teachers’ assessment capacity improves – through improved teacher understanding of the curriculum’s learning goals, initial and continuous preparation, teacher appraisal and school evaluation – school-level groups for moderation will take on a greater role.
Policy issue 4.3. Developing and making available better-quality data on national learning outcomes
In Turkey, creating a fair high school placement system that does not distort teaching and learning will ultimately mean reducing the stakes associated with placement by ensuring that all students can attend a good school. A crucial step in improving school quality will be making available more and better-quality information on the learning outcomes of Turkish students. This is important for effective policies at the system level, and for improving practices in classrooms and schools.
Through participation in TIMSS and PISA, policymakers in Turkey can see how student outcomes compare with those of students in other countries and monitor progress over time with respect to standards of knowledge and skills regarded as important internationally. However, these assessments do not provide information on student achievement against national learning objectives. Nor are they designed to provide information on individual students and institutions at a frequency that would support ongoing improvement efforts at the level of classrooms and schools. International assessments are only conducted every three to four years across a nationally representative sample of students. Countries need more granular national data on learning outcomes to inform policy and practice and monitor progress. National data are also important for interpreting the results of international assessments, especially when they indicate a significant change in a country’s learning outcomes that cannot be explained by international data alone (as was the case with Turkey’s 2015 PISA results, see Chapter 1).
However, in Turkey, policymakers and schools have very little national data that can be used for these purposes. In an understandable desire to avoid exacerbating the highly competitive atmosphere that surrounds examinations, the ministry restricts the examination data made available to schools. School principals can see the results of their own students but not those of other schools. This means that they cannot evaluate their own performance with precision, nor can they make meaningful comparisons. As a consequence, most schools focus on their examination scores as an indicator of quality. Since students’ examination performance is influenced by a range of factors beyond a school’s control, this is unlikely to be an accurate indicator of quality (OECD, 2013[30]). Providing teachers and schools with more comprehensive and contextualised feedback on examinations would help them to better understand how they are doing and what they can do to improve.
Providing schools with more data on examination results will still leave major gaps in the availability of reliable information to monitor learning. Turkey currently uses classroom assessment marks from Grade 5 upwards for school evaluation and system monitoring purposes, but these marks are highly variable in all countries, and especially in Turkey, where teachers report difficulties in determining students’ levels of learning in line with national expectations. This means that it is not possible to reliably monitor learning at any point in primary or lower secondary, or during upper secondary, or to identify which groups of students or regions perform significantly below the national average. Most OECD countries and an increasing share of emerging economies use national large-scale assessments for these purposes. In Turkey, a system for this is in the early stages of development with ABİDE, and the new initiatives introduced in association with Turkey’s Education Vision for 2023.
Recommendation 4.3.1. Provide schools with meaningful examination data to improve teaching and learning
For schools to be able to critically evaluate how well they are supporting their students to learn, they need some comparative data on examination results across other schools nationally. In Turkey, schools are not provided with sufficient examination data to allow them to make direct comparisons with other schools, making it difficult for principals to evaluate their school’s overall relative performance or that of particular groups of students. Nor is adequate information provided about students’ responses to different examination items, limiting the insights that teachers, schools and the education system, in general, can draw to support improvements in practice.
Provide schools with more comprehensive examination data
The university placement examinations yield a huge amount of valuable data which, with more secondary analysis, could help schools understand the outcomes of their students much better. Examination data can reveal important information about the equity of learning outcomes at a school by showing how different groups of students (for example, by gender, socio-economic group or mother tongue) are performing. Such data also enable a school to understand how its results compare regionally and nationally, and with other similar schools. The ministry and the Placement Centre (ÖSYM) should work together to develop an enhanced system for reporting examination data in forms which would answer these and other questions without allowing the undesirable construction of school “league tables”. This reporting would follow the same principles that are outlined below for ABİDE (see Recommendation 4.3.2).
Provide item-level analysis to improve teaching and learning
Teachers and schools also need information that enables them to identify how students responded to specific examination questions and/or groups of questions. For example, they need to know which items proved particularly difficult and how students, on average, responded to such items. This allows teachers to adjust their approach to teaching problematic topics in order to improve future performance.
Under the previous TEOG system, the General Directorate of Measurement, Evaluation and Examinations had established the foundation for this by publishing past examination papers and making them freely available online, along with the correct answers for each item. In addition, basic item statistics (level of difficulty and discrimination indices) were published in a summary analytical report. These statistics enabled teachers to identify which items students found easier or more difficult. However, reporting the share of students who responded correctly to an item does not provide information about the kinds of errors that students made overall (e.g. which incorrect options they tended to choose). It would also be helpful for teachers to understand the profiles of the students that answered a question incorrectly. For example, did all students find a specific question difficult, or only those of, say, below average ability?
The ministry and the ÖSYM should make analytical reports available for all their examinations including test score distributions and student response statistics for all items. This kind of item-level analysis, especially for the new high school entrance examination, is also critical to ensure that questions are accessible to students from all backgrounds. This will build on the ministry’s existing efforts that bring together teachers to review the Grade 8 examination questions for potential bias (e.g. gender) and accessibility.
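As a sketch of what such item-level analysis involves, the classical statistics mentioned above ‒ a difficulty index, a discrimination index and a breakdown of which distractors students chose ‒ can be computed from raw response data along the following lines. The function and data layout are illustrative assumptions, not the ministry’s or the ÖSYM’s actual processing:

```python
def item_statistics(responses, key):
    """Classical statistics for one multiple-choice item.

    responses: list of (chosen_option, total_test_score) tuples
    key: the correct option
    Returns the difficulty (proportion correct), a discrimination index
    (point-biserial correlation of item score with total score) and the
    distractor counts showing which wrong options students tend to choose.
    """
    n = len(responses)
    scores = [1.0 if opt == key else 0.0 for opt, _ in responses]
    totals = [t for _, t in responses]
    difficulty = sum(scores) / n

    # Point-biserial = Pearson correlation of the 0/1 item score
    # with the student's total test score.
    mean_s, mean_t = sum(scores) / n, sum(totals) / n
    cov = sum((s - mean_s) * (t - mean_t) for s, t in zip(scores, totals)) / n
    var_s = sum((s - mean_s) ** 2 for s in scores) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    discrimination = cov / (var_s * var_t) ** 0.5 if var_s and var_t else 0.0

    distractors = {}
    for opt, _ in responses:
        if opt != key:
            distractors[opt] = distractors.get(opt, 0) + 1
    return difficulty, discrimination, distractors

# Five students: the two strongest answered "A" (correct);
# weaker students were drawn to distractor "B".
responses = [("A", 90), ("A", 80), ("B", 55), ("B", 50), ("C", 40)]
difficulty, discrimination, distractors = item_statistics(responses, key="A")
print(difficulty)   # 0.4: a relatively hard item
print(distractors)  # {'B': 2, 'C': 1}: "B" is the attractive wrong answer
```

A positive discrimination index (here close to 1) indicates that stronger students tended to answer correctly; a value near zero or negative would flag a problematic item.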
Recommendation 4.3.2. Implement ABİDE as a fully developed national assessment in primary and lower secondary
Across the OECD, the vast majority of countries (30) have national assessments to provide reliable data on student learning outcomes that are comparable across different groups of students and over time (see Figure 4.1). When accompanied by background questionnaires, such assessments also provide insights into the factors that are influencing learning nationally and across specific groups. In Turkey, ABİDE has the potential to address a major gap in national data and expand perceptions of student achievement beyond success in national examinations towards a broader vision of learning that is more in tune with Turkey’s education and development goals. The new initiatives launched in association with the 2023 Education Vision promise to further enhance this approach, in particular through their emphasis on formative feedback to students and timely remediation. For this to happen, the purpose of ABİDE first needs to be clarified and, based on this, decisions taken with respect to the feasibility and desirability of different aspects of the assessment’s design and implementation. It will also be important to clarify the purpose of ABİDE in relation to other, more recent assessment initiatives, in order to ensure complementarity and manage the assessment load.
Create a steering committee to guide the development of ABİDE
The first step to converting ABİDE from a pilot project to a sustainable system of national assessment should be the establishment of a high-level representative steering committee (Greaney and Kellaghan, 2008[38]). The steering committee would provide guidance to the General Directorate of Measurement, Evaluation and Examinations as it develops ABİDE. This will be important to ensure that ABİDE is perceived to be a legitimate instrument for assessing learning outcomes, in a context where the main focus of school and system-level performance are the examination results in Grades 8 and 12. The establishment of a steering committee would also help to raise the status of the assessment nationally and promote the use of its results.
The steering committee should include representatives of key education actors in Turkey from the ministry, provincial education directorates, schools and teachers. It should also consider how the needs of students with special needs and those whose mother tongue is not Turkish will be represented in the group. The steering committee might also include some representatives from educational research groups and academics. International experts with experience of developing national assessments in other countries could be engaged on an ad hoc basis to provide advice.
Define the purpose of ABİDE
At present, ABİDE seems to have been developed for the primary purpose of providing reliable and comparative data to measure learning outcomes against the national curriculum. This reflects the main function of national assessments internationally and should be a primary function of ABİDE as part of a national evaluation framework focused on improving student learning outcomes.
Internationally, national assessments also support two other broad purposes. One is to provide information to teachers, schools and students that can be used to enhance learning. While this may be regarded as an overarching objective of all national assessment systems, the extent to which it is a central function varies considerably across countries. Having a strong formative goal influences important design questions, such as whether the assessment will measure the attainment of a representative sample of students or should provide data on every child in a given grade. Where improvement at school and classroom level is a primary goal, individual student and school data are important. Such data can be particularly valuable in contexts where there are concerns that students are not meeting minimum learning standards, where teacher assessments do not provide reliable indicators of student learning, and where teachers and schools need more external support to understand and address performance gaps. All these concerns are evident in Turkey, making it important that ABİDE is developed in a way that supports these formative functions.
Another, though more contentious, use of national assessments is as part of school and, in some cases, teacher accountability frameworks. Using national assessments for accountability purposes can undermine their monitoring and improvement functions, in particular if there are high stakes attached, such as using data to publish a school ranking or penalise (or reward) staff. However, when assessment data is contextualised and used alongside other information to gain a comparative perspective on school outcomes, it can provide valuable objective input to a school evaluation and accountability system. This report provides suggestions as to how this might be done in Turkey in a way that avoids high stakes and distortions to the other purposes (see Chapter 5). These are all questions and trade-offs that the steering committee will need to review when providing advice on the purpose of ABİDE and its design features. Turkey will need to consider whether ABİDE should also have these functions and, if so, ensure that this is reflected in the assessment’s design. Decisions on ABİDE will likewise affect, and need to take into account, plans for the new national assessment initiatives launched after the analysis for this review was completed: namely, the Student Learning Achievement monitoring assessment, the Turkish Language Skills Study and the Common Examinations initiative.
Finally, one specific point will be to clarify the extent to which ABİDE might also be used to trial new types of question items for use in the new high school entrance examination (the OECD was told one of ABİDE’s intended functions was to trial new items for the previous TEOG examination). While ABİDE should be developed to include more complex items than those currently used in examinations – this is important if it is to serve as a means to monitor and support the implementation of Turkey’s curriculum – the need to ensure sufficient precision of measurement and high standards of reliability mean that it should not be used as a testing ground for examination reform.
Continue with plans to implement ABİDE in Grades 4 and 8 and make at least one a full cohort assessment
The purposes that Turkey decides for ABİDE will shape its design. For system monitoring, Turkey’s current plans to focus ABİDE on Grades 4 (end of primary school) and 8 (end of lower secondary) should be continued. With the ending of the TEOG, Turkey will no longer have any standardised assessment of student learning in lower secondary schools, and there is no reliable national data on learning in primary schools. Turkey needs quality information on student attainment of curriculum standards at these two levels of basic education.
At present, Turkey seems to be planning to use sample-based assessments in both grades, with samples designed to be representative at the provincial level. If ABİDE is to fulfil a strong formative function and enhance school accountability, then Turkey should consider extending these assessments to the full cohort of students. Among OECD countries and partner economies using national assessments, almost half (14 countries) conduct full cohort assessments at both primary and lower secondary levels (see Figure 4.1). OECD countries use full cohort assessments to provide information about the school environment and learning levels across schools, as part of efforts to raise educational quality overall and as part of accountability frameworks (see Box 4.7).
Box 4.7. Using full cohort national assessments to improve teaching and learning
The National Assessment of Academic Ability in Japan
In 2007, the Japanese government decided to extend its periodic sample assessment to an annual full cohort assessment. The National Assessment of Academic Ability now assesses all Grade 6 and 9 students in mathematics and Japanese language every year and in science every three years. The rationale for the change was to provide information on learning outcomes and school conditions nationally to support equity and equality and use this information to help improve teaching across the country. The assessment is also used to identify any challenges around the implementation of national education policies. The results identify specific areas in subjects where students need more help and provide comparative data on school performance across regions.
The National Assessment Program in Literacy and Numeracy (NAPLAN) in Australia
Since 2008, Australia has administered NAPLAN annually to all students in Grades 3, 5, 7 and 9. NAPLAN assesses literacy and numeracy, and reports student performance in reading, writing, spelling, grammar, punctuation and numeracy. It provides schools with information to monitor student progress and identify students in need of additional support. The assessment is also used to monitor performance at state and national levels, to inform policymaking at both levels.
Student performance is reported on a 1-10 national achievement scale, with national minimum standards defined each year. Following the assessment, schools are provided with a detailed report on individual students’ results, which is also shared with parents. The report shows each child’s result in comparison with other children in Australia. The results are also published at the school level to serve as a monitoring tool and inform improvement practices.
Sources: MEXT (2016[39]), OECD-Japan Education Policy Review: Country Background Report; OECD (2013[30]), Synergies for Better Learning: An International Perspective on Evaluation and Assessment, http://dx.doi.org/10.1787/9789264190658-en; Santiago, P. et al. (2011[40]), OECD Reviews of Evaluation and Assessment in Education: Australia 2011, http://dx.doi.org/10.1787/9789264116672-en; NAPLAN (2017[41]), National Assessment Program, Australia, www.nap.edu.au (accessed on 20 November 2017).
However, full cohort assessments are costly and, in the short term, Turkey is likely to need to prioritise where it invests its resources. Across the OECD, full cohort assessments are most common at the primary level. This focus on the early grades is informed by growing evidence that mastery of foundational competencies (reading, writing and numeracy) in the first years of school is critical for learning outcomes later on, and that interventions at this age can be more effective and less costly than when students are older (French, 2013[42]). In Turkey, the fact that only a fifth of 3-4 year-olds attend early childhood education programmes signals that a strong focus on developing foundational learning skills should be a policy priority. A national assessment in primary school could also support recent reforms in Turkey to encourage more formative assessment (and less summative marking) in the early grades, by building the validity of teachers’ feedback and confidence in their judgements. Turkey has already taken steps in this direction with the recently introduced Student Learning Achievement Monitoring assessment and Turkish Language Skills Study.
While Turkey might first extend ABİDE to all students in primary schools, this would still leave a notable gap in data at the lower secondary level. Across the OECD, all but five countries have either a full cohort assessment or a compulsory central examination in lower secondary (OECD, 2015[23]). In Turkey, a significant share of students falls behind during lower secondary. Results from the international assessment TIMSS show that by the end of Grade 8, a third of Turkish students do not reach basic proficiency in mathematics and science, compared with only a fifth of students in Grade 4 (Mullis et al., 2016[43]).
Turkey might balance the costs of full cohort assessments against the education system’s information needs by alternating census-based and sample-based approaches across years and/or grades. An example is Mexico’s new National Plan to Evaluate Learning (Plan Nacional de Evaluación de los Aprendizajes, PLANEA), which alternates full cohort assessments with school and national sample assessments in different years and grades to provide useful, timely and reliable data for national and school needs (see Table 4.3). Any sample assessment will need to be sufficiently large to yield data that is representative at the provincial level, so that differences in learning outcomes and the learning environment across provinces can be monitored.
Table 4.3. An example of a national evaluation framework: Mexico’s PLANEA
| | 2015 | 2016 | 2017 | 2018 | 2019 |
|---|---|---|---|---|---|
| 3rd year of pre‑school | SEN | | | | |
| 4th year of primary | DC | DC | DC | DC | DC |
| 6th year of primary | SEN, CE | CE | CE | CE | SEN, CE |
| 3rd year of lower secondary | SEN, CE | CE | CE | CE | SEN, CE |
| Last year of upper secondary | CE | CE | SEN, CE | CE | CE |
Note: SEN refers to assessment of the national education system, CE to assessment of schools and DC to census-based formative assessment.
Source: INEE (2015[37]), ¿Qué es PLANEA? [What is PLANEA?], www.inee.edu.mx/images/stories/2015/planea/fasciulosnov/Planea_1.pdf.
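The requirement that a sample be representative at the provincial level has concrete implications for sample size. As a rough illustration, the minimum sample per province for estimating a proportion can be derived from the standard formula with a finite-population correction. This is a simplified sketch ‒ real assessment designs must also inflate the sample to account for the clustering of students within schools ‒ and the function name and default precision targets are assumptions:

```python
import math

def province_sample_size(n_students, margin=0.03, z=1.96, p=0.5):
    """Required sample size to estimate a proportion for one province
    within +/- margin at the confidence level implied by z, applying a
    finite-population correction. Illustrative simple random sampling
    only: clustering within schools would increase the requirement.
    """
    n0 = z ** 2 * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / n_students)      # finite-population correction
    return math.ceil(n)

print(province_sample_size(50_000))  # → 1045 (large province)
print(province_sample_size(2_000))   # → 697 (small province)
```

Note how a small province must test a far larger share of its students to reach the same precision as a large one, which is part of why census-based assessment can be attractive where many provinces are small.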
There are steps that Turkey can take to prevent any full cohort assessment from becoming associated with high stakes for students, teachers or schools. For example, Turkey might only publish aggregated national results and provide individual student results only to students and their parents. More detailed breakdowns of individual school performance and comparisons across different schools would be restricted to schools and policymakers. Importantly, as Turkey is considering a new performance evaluation system for teachers, national assessment data should not be used as the basis for decisions on career progression. This is because this practice can narrow the curriculum that is taught, by encouraging teaching to the test, and is unfair to teachers, because students’ results are shaped by a range of factors beyond their control, including prior learning (see Chapter 5). These considerations apply equally to the use of examination data (see Recommendation 4.3.1).
Address other key questions for the implementation of ABİDE
In addition to the above, the directorate and ABİDE’s steering committee would need to reach an agreement on the following questions, as part of the framework for the implementation of ABİDE:
Determine ABİDE’s frequency: to monitor learning for improvement and accountability purposes, full cohort assessments will need to be run at least every two years. This will ensure that the data provide an accurate, up-to-date perspective on teaching and learning in the country. If Turkey decides to alternate between full cohort and sample assessments, as in Mexico, sample assessments for the main purpose of monitoring national trends could be administered every three or four years. Efforts should be made to ensure that the cycles of sample assessments complement, and do not duplicate, international assessments.
Decide which subject areas will be assessed: in Grade 4, ABİDE might focus on literacy and numeracy, since competency in these core areas provides the building blocks for study across a wider range of subject areas in the later years of school. Among OECD countries with national assessments in primary, a third (10) assess just mathematics and reading and writing in the national language (OECD, 2015[23]). Limiting the assessment to two subjects will also reduce the costs of administering the assessment to the full cohort on an annual or biennial basis. In Grade 8, mathematics and Turkish language might be assessed as the core subjects each time the assessment is administered. In order to reflect the breadth of the secondary curriculum, as well as creating the scope to focus on subjects that are considered to be national priorities, additional subjects like science, social sciences and a foreign language might be assessed on a cyclical basis in different years. Among OECD countries, assessments at this grade most frequently assess sciences, a foreign language and social sciences, in addition to mathematics and reading and writing (OECD, 2015[23]).
Background questionnaires: the pilot of ABİDE in Grade 8 already included background questionnaires for students, teachers and school leaders. It should continue to accompany the full cohort assessment with some background questionnaires to collect information on the factors that are influencing learning and in particular, the factors that might be causing certain groups of students to perform below the national average. However, to reduce costs as well as any disruption to teaching and learning, only a sample of students and their teachers and school leaders taking the assessment might be asked to respond to the questionnaires for each cycle of the assessment.
Use the national assessment to improve teaching and assessment literacy
If ABİDE ‒ and other newly introduced national assessment initiatives ‒ are to serve an improvement function, Turkey will need to support schools and teachers to use the results. The ministry should consider developing a report following each national assessment for teachers in specific grades and subjects. Such a report might highlight areas where students commonly experience difficulties. Many countries provide this kind of report following their national assessments (see Box 4.8). Suggestions for classroom practice might be made to help teachers address these common difficulties. The report might also identify particular groups of students who tended to perform less well than their peers on average, providing teachers and schools with suggestions on how such students might be better supported in the future.
Box 4.8. Reporting student responses in the United States’ National Assessment of Educational Progress
Since 1969, the National Assessment of Educational Progress (NAEP) has been administered as the largest ongoing national assessment of student learning in the United States. It assesses what students know and what they are able to do in a variety of subjects, including mathematics, science, reading and writing, in Grades 4, 8 and 12. Student performance is reported in two ways: as average scores achieved on each question in a particular subject, and as the percentage of students attaining achievement levels set by NAEP. In this way, although school-level and individual student-level results are not provided, the results report on what students know and what they are able to do in a specific subject at a particular grade level.
NAEP also publishes reports for teachers that identify the errors students most commonly make. The reports provide a question-specific breakdown of student performance as can be seen in the example of a Grade 8 science assessment report below:
Report on student assessment in science, Grade 8, 2011
The ministry and the assessment centres in the provincial directorates might work together to support teachers to draw on the results as a pedagogical resource in this way. This might include dedicated training to help teachers and schools interpret their results. In Victoria, Australia, teachers are provided with in-service courses to develop school leaders’ and teachers’ skills in interpreting the results from national assessments and examinations (Santiago et al., 2011[40]). The findings of the national assessment can also be used at the national level to develop better-targeted policies for educational improvement, and at a school level to inform school evaluations and improvement plans (see Chapter 5).
Conclusion
Turkey now faces the challenge of implementing changes to its national examinations and national assessment so that they value and encourage learning across the essential competencies needed for life and work, rather than success in high stakes examinations alone. While this is associated with many challenges, the country’s efforts over the past decade to improve the examinations and ensure their integrity provide many strengths to build upon. These include institutions like the General Directorate of Measurement, Evaluation and Examinations and the Placement Centre (ÖSYM) with proven track records in delivering secure examinations, teachers willing and able to follow formal regulations in the conduct of school-based assessments and a public which, in general, demonstrates trust in those who administer the examinations that shape the life chances of young citizens.
Table 4.4. Policy recommendations
| Policy issues | Recommendations | Actions |
| --- | --- | --- |
| 4.1. Enhancing the school placement and selection process at the end of Grade 8 | 4.1.1. Consider the system for high school placement to manage demand | ● Ensure that students and parents understand how the new placement system will operate ● Develop transparent criteria for oversubscribed schools |
| | 4.1.2. Provide more information to guide student choice, while improving flexibility between pathways | ● Develop resources, like the new e-portfolios and career guidance, to inform students’ high school preferences ● Create greater flexibility across upper secondary programmes |
| | 4.1.3. Reduce any negative distortions created by the high school entrance examination | ● Consider measures to ensure that students from disadvantaged backgrounds have a fair chance of accessing selective schools ● Design the high school entrance examination to reduce negative backwash for teaching and learning ● Improve the discriminatory power of the examination |
| 4.2. Ensuring that examinations at the end of compulsory education effectively serve the functions of certification and selection | 4.2.1. Develop a national examination that helps certify achievement at the end of compulsory education | ● Determine the main purpose of the school-leaving examination ● Determine which subjects will be assessed ● Determine how the examination will be marked ● Consider the types of questions that will be used ● Consider including a small share of teacher-assessed work |
| | 4.2.2. Enhance the validity of the university placement examination | ● Make fuller use of technology to enable a wider range of item types ● Give students more time and reconsider penalising incorrect answers to reduce the pressure on students ● Reduce the grades that contribute to the final placement score ● Consider how to draw on wider sources of evidence for university placement in the future |
| | 4.2.3. Improve the reliability and validity of school-based assessments | ● Provide teachers and schools with instructions for the school-based components of examinations ● Ensure a sufficiently robust system for standardising, checking and moderating the marks awarded by teachers |
| 4.3. Developing and making available better-quality data on national learning outcomes | 4.3.1. Provide schools with meaningful examination data to improve teaching and learning | ● Provide schools with more comprehensive examination data ● Provide item-level analysis to improve teaching and learning |
| | 4.3.2. Implement ABİDE as a fully developed national assessment in primary and lower secondary | ● Create a steering committee to guide the development of ABİDE ● Define the purpose of ABİDE ● Continue with plans to implement ABİDE in Grades 4 and 8 and make at least one a full cohort assessment ● Address other key questions for the implementation of ABİDE ● Use the national assessment to improve teaching and assessment literacy |
References
[17] Akkerman, Y. et al. (2011), Overcoming School Failure, Policies That Work, Background Report for the Netherlands, http://www.oecd.org/education/school/49528317.pdf.
[22] Barrow, L., L. Sartain and M. de la Torre (2016), The Role of Selective High Schools in Equalizing Educational Outcomes: Heterogeneous Effects by Neighborhood Socioeconomic Status, University of Chicago Consortium on School Research.
[25] Bethell, G. and S. Gabršček (1996), Matura Examinations in Slovenia: Case Study of the Introduction of an External Examinations System for Schools, National Examinations Center, Ljubljana, http://www.cpz-int.si/Assets/pdf/Matura.pdf.
[45] Centre of Educational Assessment (CERMAT) (n.d.), Matura Examination of the Czech Republic (2017), Czech Republic, https://www.cermat.cz/.
[9] Clark, J. (2014), Closing the Achievement Gap from an International Perspective: Transforming STEM for Effective Education, Springer Dordrecht Heidelberg, London, New York.
[27] Department of Education and Skills, Ireland (2018), The Education Systems, Ireland, https://www.education.ie/en/The-Education-System/ (accessed on 1 March 2018).
[29] Dufaux, S. (2012), “Assessment for Qualification and Certification in Upper Secondary Education: A Review of Country Practices and Research Evidence”, OECD Education Working Papers, No. 83, OECD Publishing, Paris, http://dx.doi.org/10.1787/5k92zp1cshvb-en.
[32] Ebel, R. (1972), Essentials of Educational Measurement, Prentice-Hall, Englewood Cliffs, New Jersey.
[12] Eurydice (2018), The Information Database on Education Systems in Europe, https://webgate.ec.europa.eu/fpfis/mwikis/eurydice/index.php/Countries (accessed on 10 January 2018).
[47] Federal Service for Supervision in the Sphere of Science and Education (Rosobrnadzor) (n.d.), Unified State Examination (EGE) 2017 (Единый государственный экзамен, ЕГЭ), http://fipi.ru/ege-i-gve-11/demoversii-specifikacii-kodifikatory.
[21] FRB of Chicago (2016), Selective Enrollment High Schools in Chicago: Admission and Impacts, https://consortium.uchicago.edu (accessed on 10 February 2018).
[42] French, G. (2013), “Early literacy and numeracy matters”, Journal of Early Childhood Studies, OMEP, Vol. 7, http://arrow.dit.ie/aaschsslarts.
[38] Greaney, V. and T. Kellaghan (2008), Assessing National Achievement Levels in Education, National Assessments of Educational Achievement, No. 1, World Bank, Washington, DC.
[20] Ho, C. (2017), “Angry Anglos and aspirational Asians: Everyday multiculturalism in the selective school system in Sydney”, Discourse: Studies in the Cultural Politics of Education, pp. 1-16, http://dx.doi.org/10.1080/01596306.2017.1396961.
[28] Hodgen, J. et al. (2010), Is the UK an Outlier? An International Comparison of Upper Secondary Mathematics Education, Nuffield Foundation, London, http://www.nuffieldfoundation.org (accessed on 15 January 2018).
[37] INEE (2015), ¿Qué es PLANEA? [What is PLANEA?], Instituto Nacional para la Evaluación de la Educación, Mexico City, http://www.inee.edu.mx/images/stories/2015/planea/fasciulosnov/Planea_1.pdf (accessed on 10 December 2017).
[14] Kan, A. (2017), Teacher Capacity Building School Classroom Assessment: Workshop Report, Ministry of National Education, Ankara.
[24] Kis, V. and E. Park (2012), A Skills beyond School Review of Korea, OECD Reviews of Vocational Education and Training, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264179806-en.
[33] Mehrens, W. and I. Lehmann (1986), Measurement and Evaluation in Education and Psychology, Holt-Saunders, Tokyo.
[39] MEXT (2016), OECD-Japan Education Policy Review: Country Background Report.
[11] Ministère de l’Éducation Nationale (2018), Les Niveaux et les Etablissements d’Enseignement [Levels and Educational Institutions], http://www.education.gouv.fr/pid24/les-niveaux-et-les-etablissements-d-enseignement.html (accessed on 15 December 2017).
[10] Ministry of Education and Research (2010), Country Background Report for Sweden, https://www.oecd.org/sweden/45957739.pdf.
[8] MoNE (2016), Millî Eğitim İstatistikleri Örgün Eğitim [National Education Statistics: Formal Education], Ministry of National Education, Ankara, https://sgb.meb.gov.tr/meb_iys_dosyalar/2016_03/30044345_meb_istatistikleri_orgun_egitim_2015_2016.pdf.
[2] MoNE (2014), Higher Education Statistics 2013-2014: Number of Students at Higher Education by Sex and Level of Education, Ministry of National Education, Ankara, http://www.turkstat.gov.tr/PreTablo.do?alt_id=1018 (accessed on 5 March 2018).
[43] Mullis, I. et al. (2016), TIMSS 2015: International Results in Mathematics, International Study Center, Lynch School of Education, Boston College, http://timssandpirls.bc.edu/timss2015/international-results/timss-2015/mathematics/student-achievement/ (accessed on 5 March 2018).
[7] MYK (2013), Turkish Qualification Framework, Vocational Qualifications Authority, Ankara.
[41] NAPLAN (2017), National Assessment Program, Australia, https://www.nap.edu.au/ (accessed on 20 November 2017).
[44] NCES (2018), National Center for Education Statistics, https://nces.ed.gov (accessed on 5 January 2018).
[35] OECD (2017), Education at a Glance 2017: OECD Indicators, OECD Publishing, Paris, http://dx.doi.org/10.1787/eag-2017-en.
[26] OECD (2017), Education in Lithuania, Reviews of National Policies for Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264281486-en.
[4] OECD (2016), Education at a Glance 2016: OECD Indicators, OECD Publishing, Paris, http://dx.doi.org/10.1787/eag-2016-en.
[5] OECD (2016), PISA 2015 Results (Volume II): Policies and Practices for Successful Schools, PISA, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264267510-en.
[23] OECD (2015), Education at a Glance 2015: OECD Indicators, OECD Publishing, Paris, http://dx.doi.org/10.1787/eag-2015-en.
[6] OECD (2013), Education Policy Outlook: Turkey, OECD Publishing, Paris, http://www.oecd.org/education/EDUCATION%20POLICY%20OUTLOOK%20TURKEY_EN.pdf.
[30] OECD (2013), Synergies for Better Learning: An International Perspective on Evaluation and Assessment, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264190658-en.
[18] OECD (2012), Equity and Quality in Education: Supporting Disadvantaged Students and Schools, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264130852-en.
[19] OECD (2011), Lessons from PISA for the United States, Strong Performers and Successful Reformers in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264096660-en.
[15] OECD (2011), Reviews of National Policies for Education: Improving Lower Secondary Schools in Norway 2011, Reviews of National Policies for Education, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264114579-en.
[16] OECD (2010), Learning for Jobs, OECD Reviews of Vocational Education and Training, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264087460-en.
[1] OECD (2007), Reviews of National Policies for Education: Basic Education in Turkey 2007, Reviews of National Policies for Education, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264030206-en.
[40] Santiago, P. et al. (2011), OECD Reviews of Evaluation and Assessment in Education: Australia 2011, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264116672-en.
[31] Sıdkı Ağazade, A. et al. (2014), “Turkish university entrance test and academic achievement in undergraduate programs: A criterion-related validity study”, Procedia - Social and Behavioral Sciences, Vol. 116, pp. 4582-4590, http://dx.doi.org/10.1016/j.sbspro.2014.01.990.
[13] UCL Institute of Education (2015), Research into the Impact of Selective Schooling and School Composition, Secondary School Size, and Academies and Free Schools for the Guernsey Education Department Research, Report from the London Centre for Leadership in Learning, UCL Institute of Education, London, https://www.gov.gg/CHttpHandler.ashx?id=97557&p=0.
[36] UCLES (2018), Cambridge IGCSE Subjects, Cambridge Assessment International Education, http://www.cambridgeinternational.org/programmes-and-qualifications/cambridge-secondary-2/cambridge-igcse/subjects/# (accessed on 20 February 2018).
[34] Ukraine Centre for the Evaluation of Educational Quality (2015), Ukrainian Matura Examination.
[46] Ukrainian Center for Educational Quality Assurance (UCEQA) (n.d.), Admission Examination (2017).
[3] World Bank (2007), Turkey – Higher Education Policy Study, Volume I: Strategic Directions for Higher Education in Turkey, World Bank, Washington, DC, http://siteresources.worldbank.org/EXTECAREGTOPEDUCATION/Resources/444607-1192636551820/Turkey_Higher_Education_Paper_062907.pdf.
Annex 4.A. International examples of using optical character recognition (OCR) technologies in high stakes examinations
This annex presents international examples of using OCR technologies in high stakes examinations from the Czech Republic, the Russian Federation and Ukraine.
Example 1: Mother Tongue item in the clustered true/false format from the Matura examination of the Czech Republic (2017). A partial translation of the task is: “Are the words in these pairs synonyms (A) or not (N)?”.
Example 2: History item testing chronological ordering (“Order these four newspapers by the date of their printing.”) from the Ukrainian Admission examination (2017). Here test takers mark their answers in the grid provided (upper right).
Example 3: Physics item in the constructed response format from the Ukrainian Admission examination (2017): “Calculate the energy of the electric field in mJ.” Here test takers write a numerical answer in the boxes provided, and the handwritten numerals are recognised by the OCR software.
Example 4: Linked History items from the Unified State Examination (EGE) of the Russian Federation (2017).
The use of OCR gives test designers and item writers great flexibility and allows them to construct non-standard questions in interesting ‘mixed formats’. For example, this History question combines two linked, selected response items. Task 18 says: “Look at the postage stamp and choose two correct statements about this stamp.” Task 19 says: “Which two of the coins shown commemorate jubilees of events which happened during the life of a person on the stamp?” Students are required to choose two from five options in the first item and two from four in the second. They write their answers as numbers in the boxes below each item, which are then read by the OCR software.
Notes
← 1. The analysis in this chapter is based on information provided to the OECD review team in early 2018. While subsequent changes are referenced, the OECD was not in a position to analyse the reformed examinations as introduced at the end of the 2017/2018 school year, nor later changes made to the item development approach.
← 2. The Student Learning Achievement Monitoring assessment was introduced under the Ministry of National Education’s 2023 Education Vision. It is intended to provide schools with diagnostic information on students’ strengths and weaknesses in Turkish, mathematics and science. As of mid-2019, some 300 000 students in Grades 4, 7 and 10 have participated in the assessment. The Turkish Language Skills Study assesses the competencies of students in four areas: listening, reading, writing and speaking. It has so far been conducted in 15 provinces prior to the nationwide placement exams, providing students with feedback on their Turkish language proficiency and suggestions on areas where they need to improve. The Common Examinations initiative refers to newly introduced joint examinations conducted at the provincial level. The purpose is to provide large-scale, comparable data on student performance as well as information for students themselves to better understand their proficiency gaps. The Ministry of National Education expects that the results obtained from these initiatives will be examined at the school level and used to inform the design of weekend courses to help students address areas of weakness. These initiatives were introduced after the analysis for this review was completed and are therefore not addressed in this report.
← 3. Following the completion of the analysis for this review, the OECD was informed that a new item development approach had been adopted for the high school and university placement exams. This approach reinforces previous steps taken to situate questions in real-life contexts and assess higher order thinking skills. The Ministry of National Education is publishing each month sample questions to help students familiarise themselves with this new style of assessment.