Chapter 2. Improving learning outcomes through student assessment

Abstract

This chapter looks at how the assessment system in Albania measures and shapes student learning. A key challenge is bringing practices into line with an ambitious new competency‑based curriculum. While policy promotes modern, formative approaches, many teachers lack the skills and support to apply such methods in the classroom. Summative testing continues to dominate, and teachers need more help in providing feedback and using assessment data to help students advance. The national assessment and examination system could also be strengthened in order to provide more reliable information on the extent to which students are meeting national learning expectations. This chapter suggests ways to do this, including by making fuller use of digital technologies.

Introduction
The primary purpose of student assessment is to determine what students know and are capable of doing. Student assessment results can help students advance in their learning and support them in making informed decisions on the next step in their education. In Albania, international assessment results indicate that by age 15 students have fallen well behind their peers in most OECD and EU countries, and the majority have not mastered foundational competencies by the end of compulsory schooling (OECD, 2019[1]). In addition, Albanian students are relatively weak in performing complex tasks that require higher-order cognitive skills. The 2014 curriculum reform sought to improve learning outcomes by shifting toward a competency‑based approach to pedagogy, seeking also to make learning more relevant to young people. However, teachers need more support to detect and address learning gaps as they emerge and to assess complex competencies needed for life beyond school.
This chapter discusses how Albania can strengthen its student assessment system, including its national assessment and examination system, to improve teacher practice and student learning outcomes. The chapter recommends the revision and clarification of national assessment policies, particularly in the areas of formative assessment and portfolio assessment. It also recommends providing teachers with concrete guidance and resources, as well as high‑quality preparation and training, in specific areas of teacher assessment practice such as diagnostic assessment, providing feedback and making use of external benchmarks (e.g. national assessment results). Finally, a review of the design, administration and scoring of the national examinations is needed in order to improve the examination system’s ability to provide reliable information about student learning and to build capacity across the teacher workforce.
Key features of an effective student assessment system
Student assessment refers to the processes and instruments used to evaluate student learning. These include assessment by teachers as part of school‑based, classroom activities, such as daily observations and periodic quizzes, and through standardised examinations and assessments designed and graded outside schools.
Overall objectives and policy framework
At the centre of an effective policy framework for student assessment is the expectation that assessment supports student learning (OECD, 2013[2]). This expectation requires clear and widely understood national learning objectives. Assessment regulations must orient teachers, schools and assessment developers on how to use assessment to support learning goals.
To these ends, effective assessment policy frameworks encourage a balanced use of summative and formative assessments, as well as a variety of assessment types (e.g. teacher observations, written classroom tests and standardised instruments). These measures help to monitor a range of student competencies and provide students with an appropriate balance of support, feedback and recognition to encourage them to improve their learning. Finally, effective assessment frameworks also include assurance mechanisms to regulate the quality of assessment instruments, in particular central, standardised assessments.
The curriculum and learning standards communicate what students are expected to know and be able to do
Common expected learning outcomes against which students are assessed are important to determine their level of learning and how improvements can be made (OECD, 2013[2]). Expectations for student learning can be documented and explained in several ways. Many countries define them as part of national learning standards. Others integrate them into their national curriculum frameworks (OECD, 2013[2]).
While most reference standards are organised according to student grade level, some countries are beginning to organise them according to competency levels (e.g. beginner and advanced), each of which can span several grades (Ministry of Education of New Zealand, 2007[3]). This configuration allows for more individualised student instruction but requires more training for teachers to properly understand and use the standards when assessing students.
Types and purposes of assessment
Assessments can generally be categorised into classroom assessments, national examinations and national assessments. Assessment has traditionally held a summative purpose, aiming to explain and document learning that has occurred. Many countries are now also emphasising the importance of formative assessment, which aims to understand learning as it occurs in order to inform and improve subsequent instruction and learning (see Box 2.1) (OECD, 2013[2]). Formative assessment is now recognised to be a key part of the teaching and learning process and has been shown to have one of the most significant positive impacts on student achievement among all educational policy interventions (Black and Wiliam, 1998[4]).
Box 2.1. Purposes of assessment
Summative assessment – assessment of learning summarises learning that has taken place in order to record, mark or certify achievements.
Formative assessment – assessment for learning identifies aspects of learning as they are still developing in order to shape instruction and improve subsequent learning. Formative assessment frequently takes place in the absence of marking. For example, a teacher might ask students questions at the end of the lesson to collect information on how far students have understood the content and use the information to plan future teaching.
Source: OECD (2013[2]), Synergies for Better Learning: An International Perspective on Evaluation and Assessment, https://dx.doi.org/10.1787/9789264190658-en.
Classroom assessment
Among the different types of assessment, classroom assessment has the greatest impact on student learning (Absolum et al., 2009[5]). It supports learning by: regularly monitoring learning and progress; providing teachers with information to understand student learning needs and guide instruction; and helping students understand the next steps in their learning through the feedback their teachers provide.
Classroom assessments are administered by teachers in classrooms and can have both summative and formative purposes. They can be delivered in various formats, including closed multiple‑choice questions, semi‑constructed short‑answer questions and open‑ended responses such as essays or projects. Different assessment formats are needed for assessing different skills and subjects. In general, however, assessing complex competencies and higher‑order skills requires the use of more open‑ended assessment tasks.
In recent decades, as most OECD countries have adopted more competency‑based curricula, there has been a growing interest in performance‑based assessments such as experiments or projects. These types of assessments require students to mobilise a wider range of skills and knowledge, and demonstrate more complex competencies such as critical thinking and problem solving (OECD, 2013[2]). Encouraging and developing effective, reliable, performance‑based assessment can be challenging. OECD countries that have tried to promote this kind of assessment have found that teachers have required far more support than initially envisaged.
Effective classroom assessment requires the development of teachers’ assessment literacy
Assessment is now seen as an essential pedagogical skill. In order to use classroom assessment effectively, teachers need to understand how national learning expectations can be assessed – as well as the students’ trajectory in reaching them – through a variety of assessments. Teachers need to know what makes for a quality assessment – validity, reliability, fairness – and how to judge if an assessment meets these standards (see Box 2.2). Feedback is important for students’ future achievement and teachers need to be skilled in providing constructive and precise feedback.
Box 2.2. Key assessment terms
Validity – focuses on how appropriate an assessment is in relation to its objectives. A valid assessment measures what students are expected to know and learn as set out in the national curriculum.
Reliability – focuses on how consistently an assessment measures student learning. A reliable assessment produces similar results regardless of the context in which it is conducted, for example across different classrooms or schools. Reliable assessments provide comparable results.
Source: OECD (2013[2]), Synergies for Better Learning: An International Perspective on Evaluation and Assessment, https://dx.doi.org/10.1787/9789264190658-en.
Many OECD countries are investing increasingly in the development of teachers’ assessment literacy, starting with initial teacher education. In the past, teachers’ initial preparation in assessment has been primarily theoretical; countries are now trying to make it more practical, emphasising opportunities for hands‑on learning, where teachers can develop and use different assessments for example. Countries encourage initial teacher education providers to make this shift by incorporating standards on assessment in programme accreditation requirements and in the expectations of new teachers listed in national teacher standards.
It is essential that teachers’ initial preparation on assessment is strengthened through ongoing, in‑school development. Changing the culture of assessment in schools – especially introducing more formative approaches and performance‑based assessments, and using summative assessments more effectively – requires significant and sustained support for teachers. Continuous professional development, such as training on assessment and more collaborative opportunities in which teachers can share effective assessment approaches, provides vital encouragement. Pedagogical school leaders also play an essential role in establishing a collaborative culture of professional enquiry and learning on the subject of assessment.
Finally, countries need to invest significantly in practical resources to ensure that learning expectations defined in national documents become a central assessment reference for teachers and students in the classroom. These resources include rubrics that set out assessment criteria, assessment examples aligned to national standards and marked examples of student work. Increasingly, countries make these resources available online through interactive platforms that enable teachers to engage in developing standards, which facilitates a greater feeling of ownership of the resources and makes it more likely that they will be used.
National examinations
National examinations are standardised assessments developed at the national or state level with formal consequences for students. The vast majority of OECD countries (31) now have exit examinations at the end of upper secondary education to certify student achievement and/or for selection into tertiary education, reflecting rising expectations in terms of student attainment and the importance of transparent systems for determining access to limited further education opportunities (see Figure 2.2). National examinations are becoming less common at other transition points as countries seek to remove barriers to progression and reduce early tracking. Among those OECD countries (approximately half) which continue to use national examinations to inform programme and/or school choice for entrants to upper secondary education, few rely solely or even primarily on the results of examinations to determine a student’s next steps.
While classroom assessment is the most important assessment for learning, evidence shows that the pace of learning slows down without external benchmarks such as examinations. National examinations signal student achievement and in many countries carry high stakes for students’ future education and career options, which can help to motivate students to apply themselves (Bishop, 1999[6]). They are also more reliable than classroom assessment and less susceptible to bias and other subjective pressures, making them a more objective and arguably fairer basis for taking decisions when opportunities are limited, such as access to university or high‑demand schools.
However, there are limitations related to using examinations. For instance, they can only provide a limited snapshot of student learning based on performance in one‑off, time‑pressured exercises. To address this concern, most OECD countries complement examination data with classroom assessment information, teachers’ views, student personal statements, interviews and extracurricular activities to determine educational pathways into upper secondary and tertiary education.
Another concern is that the high stakes of examinations can distort teaching and learning. If examinations are not aligned with the curriculum, teachers might feel compelled to dedicate excessive classroom time to examination preparation instead of following the curriculum. Similarly, students can spend significant time outside the classroom preparing for examinations through private tutoring. To avoid this situation, items on examinations must be a valid assessment of the curriculum’s learning expectations and encourage high‑quality learning across a range of competencies.
Most OECD countries are taking measures to address the negative impact that examination pressure can have on student well‑being, attitudes and approaches to learning. For example, Korea has introduced a test‑free semester system in lower secondary education with activities such as career development and physical education to develop students’ life skills and reduce stress (OECD, 2016[7]).
National assessments
National assessments provide reliable information on student learning with no consequences for student progression. Across the OECD, the vast majority of countries (30) have national assessments to provide reliable data on student learning outcomes that are comparable across different groups of students and over time. The main purpose of national assessments is system monitoring and, for this reason, they provide essential information for system evaluation (see Chapter 5).
Countries might also use national assessments for more explicit improvement purposes, such as to ensure that students are meeting national achievement standards and identify learning gaps needing further support. In these cases, providing detailed feedback to teachers and schools on common problems and effective responses is critical.
Many OECD countries also use national assessments for school accountability purposes, though there is considerable variation in how much weight is given to the data. This is because student learning is influenced by a wide range of factors beyond a school’s or teacher’s control – such as students’ prior learning, motivation, ability and family background (OECD, 2013[2]).
National assessment agencies
Developing high‑quality national examinations and assessments requires a range of assessment expertise in fields such as psychometrics and statistics. Many OECD countries have created government agencies for examinations and assessments where this expertise is concentrated. Creating a separate organisation with stable funding and adequate resources also helps to ensure independence and integrity, which is especially important for high-stakes national examinations.
Student assessment in Albania
Since the start of the competency-based education reform in 2014, Albania has sought to make significant changes to the culture and system of assessment. This includes promoting formative assessment, diversifying assessment modes to include elements such as portfolios and, in 2019, reviewing national examinations to assess more complex, higher-order competencies. In 2020, Albania will also begin the roll‑out of a curriculum‑based national assessment.
Nevertheless, evidence suggests that there is a divergence between the intent of Albania’s assessment and curriculum frameworks and their implementation. Classroom assessment is mostly used for summative rather than formative purposes, and while teachers have adopted new requirements, such as continuous assessment, they have received little support in making full use of these practices, which constrains their ability to identify and address gaps in learning. The national assessment and examination systems provide some information to teachers on student achievement and progress. However, they cover only a narrow range of the competencies found in the curriculum framework, and the results of the national assessment (VANAF) and the National Basic Education Examination are not nationally comparable. Albania will need to review its national assessment and examination systems to improve their ability to provide information on student achievement vis-à-vis the new curriculum, as well as to strengthen teacher practices in schools. To further support teachers to make better use of assessment to improve learning, Albania will also need to ensure they have access to training, guidelines and tools on classroom assessment practice.
Table 2.1. Student assessment in Albania
| Reference standards | Types of assessment | Body responsible | Process | Guideline documents | Frequency | Primary use |
|---|---|---|---|---|---|---|
| National Curriculum Framework | National assessment | Educational Services Centre (ESC) | Assessment of Primary Education Pupils’ Achievements (VANAF): Grade 5 in Albanian language, mathematics and science | Law no. 69/2012, as amended, and additional regulations | Every year | Monitor student progress at the system level; inform students, parents and schools of students’ achievements |
| | National examinations | ESC | National Basic Education Examination: Grade 9; State Matura Examination: Grade 12 | Law no. 69/2012, as amended, and additional regulations | Once each | Certification of completion of basic education; certification of completion of upper secondary school |
| | Classroom assessment | Teachers | Periodic assessment: 1) continuous assessment; 2) portfolio (beginning in Grade 4) or classroom assignments, tasks and outputs (Grades 1-3); 3) test or summative assignment. Final assessment: description along 5 levels of achievement (Grades 1-3); numerical mark on a 4-10 scale and description of strengths (Grades 4-12) | Education Development Institute, 2015: Student assessment framework; Instruction on the assessment of basic education pupils; Instruction on assessment of pre-university and higher education students. Education Development Institute, 2016: Achievement levels for all educational cycles | Periodic assessment: three times per year; final assessment: once per year | Monitor student progress; evaluate student achievement |
| International PISA standards | International assessment | OECD | OECD Programme for International Student Assessment (PISA): mathematics, science and reading | | Every three years | ESC develops and disseminates national reports to inform policy and help educators make sense of and use data more effectively |
Source: MoESY (2018[10]), OECD Review of Evaluation and Assessment: Country Background Report for Albania, Ministry of Education, Sports and Youth, Tirana.
Overall objectives and policy framework
Albania’s assessment framework is well‑aligned with the curriculum, and one notable strength is the importance given to assessment for learning. However, there are few accompanying materials to help teachers translate changing expectations for student learning and assessment into practice. Key concepts such as formative assessment and continuous assessment lack clarity and concreteness. While teachers are encouraged to differentiate instruction in response to learner needs, they lack guidance on how to use assessment results to inform their planning and assess students in relation to curriculum standards. Moreover, some regulations continue to run counter to the curriculum, reinforcing outdated teacher‑centred methods of summative testing.
The Albanian curriculum framework is competency‑based
Albania’s new competency‑based curriculum framework was introduced in 2014 with a pilot in 26 schools in Grades 1 and 6. As of 2019, the new curriculum is being implemented at all grade levels. Prior to the reform, the curriculum lacked a clear and coherent vision and philosophy, and there was no official curriculum framework to steer the instructional system (UNESCO, 2017[11]). With the reform, Albania embraced a particularly ambitious framework meant to depart significantly from the primarily knowledge‑based approach found in schools, as well as to align with modern European frameworks such as the European Union’s 2006 Recommendation on Key Competences for Lifelong Learning (UNESCO, 2017[11]; European Parliament; Council of the European Union, 2006[12]). The new curriculum seeks to improve learning outcomes in part by placing an emphasis on a constructivist, student‑centred approach to teaching and learning. However, a shift away from a teacher‑centred pedagogy has yet to take hold in Albanian classrooms.
The concept of competency found in the new framework is similar to that found in many OECD countries, where competencies are often conceptualised as the ability of students to mobilise and use knowledge, skills, attitudes and values to meet complex demands (OECD, 2019[13]). National learning expectations for students in Albania are organised into three to seven competencies per subject area and seven key competencies for lifelong learning (i.e. communication and expression; thinking; learning to learn; life, entrepreneurship and the environment; personal; civil; and digital). Students are expected to develop these competencies by the end of upper secondary education. However, not all of these competencies are assessed, and there is therefore limited information on whether students are meeting this expectation.
The curriculum is organised into learning stages
The curriculum framework is organised into seven curricular “stages”, each corresponding to a certain number of grades from early childhood education through Grade 12 (see Table 2.2). The organisation into stages is meant to align with periods of child development (AQAPUE, 2014[14]). It is also intended to allow teachers to flexibly plan and organise learning based on students’ individual needs and level of progress. For every competency at every stage in schooling, there are corresponding levels of achievement and learning outcomes (MoESY, 2019[15]; MoESY, 2018[16]). These are meant to provide points of reference for mastery of competencies. Indicators of what students should be able to do at each of three levels in Grades 4-12 and four levels in Grades 1-3 are provided. Learning outcomes are described in detail using action verbs for what students should be demonstrating for each competency at each stage.
Organising learning outcomes by curriculum stage, as Albania has done, is an advanced practice used in some OECD countries such as the United Kingdom (England) and New Zealand (Ministry of Education of New Zealand, n.d.[17]; Department for Education of England, 2014[18]). However, implementing this practice requires a significant investment in developing teachers’ understanding of the learning outcomes and the implications for teaching and learning at each grade level. Many teachers in Albania have not yet developed the knowledge and expertise required to develop their own grade-level learning outcomes, in part due to limited training in this area during initial teacher education programmes and ongoing professional development.
Table 2.2. Albanian curricular stages and grade levels
| Grade level(s) | Curricular stage |
|---|---|
| 12 | 6 |
| 10-11 | 5 |
| 8-9 | 4 |
| 6-7 | 3 |
| 4-5 | 2 |
| Preparatory class, 1-3 | 1 |
| Pre-school | 0 |
Source: AQAPUE (2014[14]), Kurrikulare e Arsimit Parauniversitar të Republikës së Shqipërisë [Curriculum Framework of Pre-University Education], Ministry of Education, Sports and Youth, Tirana; MoESY (2019[15]), Nivelet e Arritjes së Kompetencave të Fushave të të Nxënit [Levels of Achievement of Learning Area Competencies Elementary School], Ministry of Education, Sports and Youth, Tirana.
Albania introduced a new assessment framework in conjunction with the new curriculum framework
Albania released a new assessment framework which includes several elements intended to help improve the quality of teaching and learning and raise learning outcomes (AQAPUE, n.d.[19]). For example, the framework promotes the use of formative assessment and makes the conceptual distinction between assessment of learning and assessment for learning. The framework also introduces new assessment methods, such as projects and portfolios, to support the assessment of a broader range of competencies, going beyond traditional knowledge‑recall summative tests. The Albanian Ministry of Education, Sports and Youth (hereafter, the ministry) notes that the major change in the new curriculum has been the implementation and reinforcement of continuous assessment and assessment for learning (MoESY, 2019[20]).
The application of new assessment concepts in the classroom is lacking
While the concepts outlined in the assessment framework are sound and in line with modern assessment frameworks, they have not been incorporated into classroom assessment practices. This is in part because teachers lack guidance on how to apply different assessment concepts in the classroom. For example, while assessment for learning is distinguished from assessment of learning, teachers are not provided with explicit classroom examples of how to use student assessment data for formative purposes, nor are they provided with guidance on the difference between feedback and summative judgement. Definitions of assessment for learning provided in Albania’s assessment framework refer to monitoring the process of achievement, collecting information to improve the process and making students aware of their strengths and needs (AQAPUE, n.d.[19]). However, accompanying illustrations and explanations of what such practices look like and entail, such as a step‑by‑step guide for implementing formative assessment, are lacking. Certain formative assessment tools and resources such as guidance on providing quality oral and written feedback and diagnostic assessments are also missing.
Further resources for applying other assessment concepts in the classroom are also lacking. For example, there are no quality standards and few exemplars provided for the development of portfolio assessment tasks. In addition, the assessment framework encourages the types of tasks in continuous and end‑of‑term assessments to be more varied, focusing on skills and attitudes and going beyond knowledge. However, there are few resources such as sample assessments and marked student work available. Resources and support for making changes to classroom assessment practice are particularly important, as these innovative assessment practices can be challenging to implement.
Classroom assessment
Albania distinguishes between three modes of classroom assessment
Albania’s national frameworks and guidelines on curriculum and assessment (see Chapter 1) differentiate between three main modes of classroom assessment in Grades 4 to 12: continuous, end‑of‑term and portfolio (see Table 2.3). Continuous assessment, which is also used in Grades 1 to 3, was introduced in the 2014 reform to help teachers more closely monitor student progress, as well as to provide students with feedback on where they are in their learning. It refers to the ongoing assessment of student oral and written work, for which students are awarded a mark using numbers or symbols of the teacher’s choosing. Teachers may choose any oral or written piece of work to make this assessment (AQAPUE, 2018[21]).
The end‑of‑term assessment is considered a more formal assessment since it is conducted within 45 minutes at a time specified by the teacher. The reform allows for this type of assessment to take the form of a task, not necessarily a written test, in order to assess a wider range of competencies. However, the use of tasks represents a significant change in practice, and teachers typically rely on knowledge‑based tests when conducting this assessment.
Student portfolios, introduced in the 2014 reform, consist of a collection of tasks, often creative, practical and involving research, for which teachers award a portfolio assessment mark (AQAPUE, 2018[21]). Portfolio assessment is meant to provide students with opportunities to demonstrate mastery of a range of competencies, as distinct from only knowledge. This is very new to teachers and students in Albania, and ensuring portfolio assessment is conducted with fidelity to the intent of the reform remains a challenge.
Table 2.3. Main modes of classroom assessment in Albania
Grades 4 to 12
| Mode | Content | Frequency | Length | Recording and reporting |
|---|---|---|---|---|
| Continuous assessment | Written and oral work | At teachers’ discretion, with a summative mark every term | Varies based on the student work being assessed | Teacher logs; periodic report cards to students |
| Portfolio assessment | Selection of creative and research-oriented tasks and products | Summative mark every term | Variable | Periodic report cards to students |
| End-of-term assessment | Test or other summative activity | Every term | 45 minutes | Periodic report cards to students |
Source: AQAPUE (2018[21]), Curriculum Guidelines, Quality Assurance Agency of Pre-University Education, Ministry of Education, Sports and Youth; MoESY (2018[10]), OECD Review of Evaluation and Assessment: Country Background Report for Albania, Ministry of Education, Sports and Youth, Tirana.
Continuous assessment is not clearly distinguished from formative assessment
In Albania, continuous assessment is described as a type of formative assessment in curriculum and assessment guidance documents. However, continuous assessment can serve both summative and formative functions (Muskin, 2017[22]). This conflation between the concept and practice of continuous assessment and that of formative assessment limits the ability of teachers to use formative assessment effectively. For example, the Albanian assessment framework refers to assessment for learning as formative, while Albania’s core curriculum documents refer to assessment for learning as continuous, which may lead to teachers and other stakeholders believing that the marking associated with continuous assessment is a formative assessment practice as such (AQAPUE, n.d.[19]; AQAPUE, 2014[23]; AQAPUE, 2016[24]).
For continuous assessment to indeed be formative, teachers should be using assessment results to provide feedback to students on how to improve and to shape instruction and plan future teaching (OECD, 2013[2]). In practice, teachers and principals reported to the OECD review team that teachers dutifully record a mark using symbols for each student – a continuous assessment practice – but there is little evidence that teachers use the results formatively.
Continuous assessment and end-of-term assessment in Albania serve primarily summative purposes
As part of continuous assessment in Albania, teachers choose a particular oral or written assignment for which to award certain students a continuous assessment mark. This could be, for example, an oral response to a question posed during class or a piece of written work students produce. Teachers may provide oral or written feedback on students’ responses. End‑of‑term assessment, on the other hand, typically consists of a written test. As noted above, these tests are typically knowledge‑based and do not cover a wide range of competencies. A summative mark is awarded for each instance of continuous assessment and for each end‑of‑term assessment.
Basic requirements for compiling a student portfolio are provided, but there are no quality standards and no clear alignment to competencies
Current guidance for portfolio assessment calls for the inclusion of at least three major tasks such as presentations, projects or written assignments, each of which should be aligned to competencies. Teachers have the freedom to select, in consultation with students, which tasks will be included in the student portfolio, as well as the criteria, including the weighting of each task, that will be used to evaluate the portfolio. One of the tasks should be a longer‑term “curriculum project,” which is meant to provide students with an extended opportunity to put competencies into practice and to integrate competencies from other learning areas (AQAPUE, 2018[21]). While teachers are provided with some examples of portfolio tasks and marking schemes for selected subject areas such as mathematics, there is no set of design principles or guidance provided on how to develop a high‑quality task aligned to competencies. This is particularly important if portfolios are to go beyond a compendium of routine assignments to develop and assess a wider range of complex competencies (Conley and Darling-Hammond, 2013[25]). It is also important because implementing a high‑quality system of portfolios is difficult. When well‑implemented, portfolios can offer additional benefits such as opportunities to promote higher-order and metacognitive thinking and reflection (Darling-Hammond, 2017[26]). At present, there is little evidence that portfolio assessment in Albania is being used as a tool for students to reflect critically on their learning.
Albania uses descriptors in Grades 1 to 3 and a numerical scale in Grades 4 to 12
In Grades 1 through 3, summative marking by subject area at the end of each term is conducted using five levels of achievement. Teachers provide a descriptive mark along the five levels on the basis of student progress (see Table 2.4). Marks are accompanied by written feedback on what the student is able to do at that level. The final descriptive mark is based on a weighted formula that takes into account students’ classroom written and oral work and homework, projects and other creative activities and tasks, and test and quiz results (AQAPUE, 2018[21]). There are no marks provided for student progress on individual competencies.
At the end of each of the three terms in a school year, or roughly every three months, teachers of Grades 4 to 12 must report a numerical mark on a four to ten scale for each of the three modes of assessment, and this periodic reporting is referred to as “periodic assessment”. Teachers calculate the final mark based on a weighted formula using the nine marks accumulated over the three terms for the three modes of assessment. Students receive a final mark along a four to ten scale, with five considered a passing mark. The marking of all student work and performance is in theory aligned to levels of achievement for individual competencies at a particular stage (see Table 2.4).
Table 2.4. Levels of student achievement in Albanian schooling
| Level (Grades 1-3) | Associated descriptors (Grades 1-3) | Level (Grades 4-12) | Associated marks (Grades 4-12) |
|---|---|---|---|
| I | Unsatisfactory achievement | I | 4 |
| II | Achievement that needs improvement | II | 5, 6 |
| III | Satisfactory achievement | III | 7, 8 |
| IV | Very satisfactory achievement | IV | 9, 10* |
| V | Excellent achievement | | |
Note: *Albanian guidance documents suggest that Level IV and Level V in Grades 1-3 are conceptually aligned to marks of nine and ten respectively.
Source: MoESY (2019[15]) Nivelet e Arritjes së Kompetencave [Levels of Achievement], Ministry of Education, Sports and Youth, Tirana.
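To make the periodic marking arithmetic concrete, the sketch below combines the nine periodic marks (three terms, three modes of assessment) into a final mark on the 4-10 scale. The equal weighting across terms and the 40/20/40 split across modes are illustrative assumptions only; the actual weights are set in ministry guidance and are not reproduced here.

```python
# Illustrative sketch of the Grades 4-12 final mark calculation described above.
# The 40/20/40 weights are hypothetical; actual weights are set in ministry guidance.

def average(marks):
    """Average a list of term marks on the national 4-10 scale."""
    return sum(marks) / len(marks)

def final_mark(continuous, portfolio, end_of_term, weights=(0.4, 0.2, 0.4)):
    """Combine nine periodic marks (three terms x three modes) into one final mark."""
    w_cont, w_port, w_test = weights
    weighted = (average(continuous) * w_cont +
                average(portfolio) * w_port +
                average(end_of_term) * w_test)
    return round(weighted)  # reported on the 4-10 scale; 5 is a passing mark

# Example: one student's term marks for the three modes of assessment.
print(final_mark(continuous=[7, 8, 8], portfolio=[9, 9, 8], end_of_term=[6, 7, 7]))  # 7
```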
The system of summative marking and reporting used for periodic assessment and to generate the final report card presents several concerns. First, there is little accompanying written feedback to help students and parents understand the extent to which individual competencies have been mastered. Second, sample end‑of‑term tests show that teachers tend to assess only certain elements of the curriculum, primarily memorisation of knowledge and lower-level skills. Moreover, summative judgements made by teachers may not be reliable, as there are neither clear outcomes by grade level nor marked exemplars available that would help teachers determine the level of a student’s achievement. Finally, allocating a summative mark for continuous assessment undermines its intended formative function.
Recording requirements are burdensome
New regulations for classroom assessment stemming from the 2014 curriculum reform require more frequent assessment, and each of the three modes of classroom assessment has recording requirements for teachers. Continuous assessment is particularly burdensome: marks are recorded onto a grid with the competency assessed, a description of how the student met the competency and the date of the assessment (AQAPUE, 2018[21]). Teachers may choose how many continuous assessment marks to record per term (AQAPUE, 2018[21]). In practice, as demonstrated by samples of completed continuous assessment grids provided to the OECD review team, teachers are entering continuous assessment marks every 1-6 weeks. The records are checked regularly by principals as part of teacher appraisal, and they are made available when required by school inspections. However, this burdensome requirement serves only a compliance purpose, as records are not used to discuss how teachers use the results of continuous assessment to inform their teaching practice.
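As a rough illustration of the record-keeping involved, the sketch below models a single entry in a continuous assessment grid as described above. The field names are hypothetical and do not reproduce the official grid format.

```python
# Minimal sketch of one row in a continuous assessment grid, as described above.
# Field names are illustrative, not the official ministry format.
from dataclasses import dataclass
from datetime import date

@dataclass
class ContinuousAssessmentEntry:
    student: str
    competency: str     # the competency assessed
    evidence: str       # description of how the student met the competency
    assessed_on: date   # date of the assessment
    mark: str           # a number or symbol of the teacher's choosing

entry = ContinuousAssessmentEntry(
    student="Student A",
    competency="Mathematical reasoning",
    evidence="Explained two solution strategies for a fractions problem",
    assessed_on=date(2019, 11, 14),
    mark="+",
)
print(entry)
```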
Reporting of student achievement is inconsistent and provides limited information on student achievement of competencies
In Albania, schools define their own policies for reporting on student achievement and progress to students, parents and other stakeholders based on the ministry’s general guidelines. In Grades 1 to 3, guidelines indicate that descriptive marks and written feedback should note strengths and areas for improvement. Guidelines also stipulate that final numerical marks in Grades 4 through 12 should be accompanied by a description justifying the mark and expanding on the student’s strengths with respect to the competencies in the subject area and key competencies (AQAPUE, 2018[21]).
While templates for reporting are provided by the ministry, their use is not standardised across schools in Albania. Samples of periodic report cards for Grades 4 and higher provided to the OECD review team show that report cards typically contain marks for each type of assessment in each subject. However, not all contain marks for individual competencies or written feedback on strengths and areas for improvement.
Teachers in Albania do not receive adequate preparation on assessment
All teachers in Albania received some training on the assessment strategies found in the new curriculum and assessment frameworks, but training on assessment seems insufficient for addressing the challenges teachers face in this area. For example, teachers are not yet adept at writing competency‑based test items and interpreting standardised assessment results. At the national level, some mandatory professional development on student assessment has been organised by the Agency for Quality Assurance in Pre‑University Education (formerly the Education Development Institute, see Chapter 1) and delivered through the former Regional Education Directorates (REDs) and Education Offices (EOs) (now regional directorates and local education offices) for principals and teachers. Additionally, some RED/EOs have prioritised analysing national assessment results and comparing them to classroom assessment results. However, mandatory national and regional professional development opportunities specific to assessment remain minimal.
In Albania, teachers may choose to participate in professional development through professional learning networks. These networks include teachers of the same profile (e.g. teachers of mathematics) that meet at the regional level to discuss curriculum changes and national education priorities such as assessment (see Chapter 3). In schools, teachers receive limited support to develop their classroom assessment practices. In some cases, subject departments will discuss assessment‑related topics such as what tasks will constitute the student portfolio, but the use of subject departments to support student classroom assessment is not systematic across Albanian schools.
The quality of training on assessment varies across initial teacher education programmes
The content and quality of initial teacher education programmes vary across providers, and the ministry currently has no mechanism, such as accreditation criteria and guidelines, to ensure teachers are trained in how to use the assessment types required by the curriculum (see Chapter 3). For example, based on evidence from the OECD mission, initial teacher education programmes do not typically provide training in psychometrics, leading to difficulties among teachers in developing competency‑based test items and in understanding national assessment and examination results at a more granular level. The ministry is currently taking steps to ensure the quality of initial teacher education, including assessment modules, by creating common standards for curriculum content; however, it is unclear at this stage if the standards will be detailed enough to guide the development of quality assessment literacy modules.
National examinations
Students in Albania take two examinations: the National Basic Education Examination at the end of compulsory education (end of Grade 9) and the State Matura Examination at the end of upper secondary education (end of Grade 12). These examinations certify the completion of their respective education levels and are required for entry into the next level, upper secondary education and tertiary education respectively. The 2019 versions of both examinations were re-designed to reflect changes to the curriculum, with attempts to introduce more items focused on complex, higher‑order tasks set in real‑world contexts relevant to young people.
The National Basic Education Examination certifies completion of compulsory education but is not typically used for placement into upper secondary education
The National Basic Education Examination is mandatory at the end of lower secondary education. Students who pass the exam receive a basic education certificate, which is required for entry into upper secondary education. Students are tested in Albanian language, mathematics and foreign language. National minority students are assessed in their primary language, Albanian language, mathematics and, optionally, a foreign language. The exam contains multiple‑choice and open‑ended questions and is paper-based (see Table 2.5).
The distribution of scores on the National Basic Education Examination shows that the reporting scale adopted for the exam is, in general, working well for the Albanian language and mathematics tests. Nearly all students pass the minimum required level (i.e. score above the failing mark of 4). Scores in Albanian language display a near-normal distribution (MoESY, 2017[27]). The distribution of scores in mathematics also appears to be acceptable even though it shows a degree of positive skewness. The tests achieve relatively large score ranges, and the distribution of students across the six passing categories (5-10) appears adequate. The six passing categories used in Albania are comparable to reporting scales used in many OECD countries. In England, for example, the GCSE has a reporting scale of 1-9, where 4 is considered a “standard pass” (Ofqual, 2018[28]).
Results of the National Basic Education Examination are not typically used to place students into general upper secondary schools. Instead, placement is based on the catchment area of each school, though students may enrol in a school outside their catchment area if there are spaces available. A small number of specialised schools, such as foreign language schools, set entrance criteria that may include National Basic Education Examination results and course marks, but only when more students apply than there are spaces available. The National Basic Education Examination is not currently a barrier to entry into upper secondary education: the pass rate for all tests taken in 2017 was 99.2% (MoESY, 2017[27]).
While the examination has no impact on placement into general upper secondary education, it nevertheless exerts pressure for meeting expected learning standards upon completion of basic education. For example, students in 9th grade reported to the OECD review team that they felt nervous about passing the examination. In the months leading up to the exam, teachers focus class time on specific topics found in the examination’s orientation programme, which is typically released in December each year. Teachers also offer free review sessions before and after the instructional day. Schools analyse their results to compare annual course marks with examination marks, identify strengths and challenges, and inform annual school plans (see Chapter 4).
The National Basic Education Examination is developed centrally but administered and marked at the regional level
The administration and scoring of the National Basic Education Examination is managed at the sub-national level by local education offices (previously by REDs and EOs). The Educational Services Centre (ESC) develops the tests and sends the tests and the answer keys to the local education offices. The latter are responsible for selecting and training test evaluators, who mark tests, according to national regulations. This includes providing training, in collaboration with regional directorates and the General Directorate for Pre‑University Education, on using the answer key. Local education offices are responsible for the proper and secure conditions of testing and scoring. Students take the exam inside schools as designated by the local education office administering the exam. Regulations prohibit teachers from administering the exam to their own students. However, exam conditions such as the level of security and the behaviour of students (e.g. tardiness) tend to vary across testing centres in Albania. In order for results to be comparable, administration procedures and the quality of marking would need to be more consistent across testing centres. Administration and marking arrangements are subject to change with the new restructuring (see Chapter 1), and the implication for marking and administration of tests remains unclear at the time of drafting this report.
The quality of test evaluators and of marking for the National Basic Education Examination varies by testing centre
A commission within each local education office selects test evaluators, who are teachers, based on criteria laid out in national regulations for the National Basic Education Examination. These criteria include having a minimum of five years of teaching experience in the relevant subject, having been accurate and precise in previous exam evaluations, and having high scores in the last appraisal for promotion examination. However, these and other criteria are not sufficient to ensure teachers with a high level of competence in assessment are selected, and in some cases RED/EOs have recruited teachers who do not meet all the criteria. This is due in part to a lack of incentives: prior to 2019 teachers were not paid for their role as evaluators, and this role is not recognised in the teacher career structure.
There is no national process for quality control and auditing. For example, there is no nationally required training for teachers to become evaluators of the National Basic Education Examination, nor is there a standardised process for ensuring teachers have the required skills and knowledge to be high-quality evaluators. Indeed, internal transparency reports sent to the ministry from the ESC have noted significant variations in the quality of scoring across RED/EOs. Discrepancies, such as examination scores that are higher than course marks, have been observed in some RED/EOs. Results of the National Basic Education Examination are thus not comparable at the national level and cannot be used for system monitoring.
Table 2.5. Albania’s national examinations
| | National Basic Education Examination | State Matura Examination¹ |
|---|---|---|
| Components | Albanian language; mathematics; foreign language* | Albanian language and literature; mathematics; foreign language; one elective, by programme: general (choose 1 of 8), artistic (choose 1 of 3), vocational (choose 1 of 35) |
| Eligibility | Mandatory at the end of lower secondary education in order to receive the basic education certificate | Mandatory at the end of upper secondary education in order to receive the State Matura Diploma, which certifies completion of upper secondary education and is required to apply to universities |
| Item development | Subject groups established by the ESC in collaboration with the ministry | Subject groups established by the ESC in collaboration with the ministry and the ministry responsible for vocational education |
| Question format | Multiple-choice and open-ended questions | Multiple-choice and open-ended questions |
| Grading | Points (50), converted into marks (4-10). Students fail if they accumulate less than 20% of the points on a test; that range of points is given a mark of 4. | Points (60), converted into marks (4-10). Students fail if they accumulate less than 20% of the points on a test; that range of points is given a mark of 4. |
| Marking | Teachers, possibly with some training from the local education office; selection based on criteria set out in national regulations | Teachers certified by the ESC; selection based on criteria set out in national regulations |
| Primary purpose | Certification of completion of lower secondary education and provision of a basic education certificate | Certification of completion of upper secondary education and provision of the State Matura Diploma; better quality and fairer selection of candidates for admission to higher education institutions |
| Reporting | Results are announced by the local education office no later than 15 days after each test. Each pupil can see only his/her own score. The local education office sends the results in written and electronic form to the ESC. The ESC prepares a public report on students’ achievements, which is distributed to all regional directorates, local education offices and schools. | Results are published on the ESC and ministry websites no later than 15 days after each test. Each student can see only his/her own grade/score. The ESC sends the results in written and electronic form to all regional directorates and local education offices, which then send them to schools for announcement. The ESC prepares a public report, which it presents to all interested parties and distributes to all regional directorates, local education offices and schools. |

Note: 1. Students take a version of each non-elective test based on the programme in which they are enrolled. * National minority students are assessed in the subjects of mother tongue, Albanian language, mathematics and, optionally, a foreign language.
Source: MoESY (2018[10]), OECD Review of Evaluation and Assessment: Country Background Report for Albania, Ministry of Education, Sports and Youth, Tirana; MoESY (2018[29]), Organizimin dhe Zhvillimin e Provimeve Kombëtare të Maturës Shtetërore 2019 [On the Organization and Development of the National Examinations of 2019 State Matura], Ministry of Education, Sports and Youth, Tirana.
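The grading rule in Table 2.5 (fail below 20% of points, marks 4-10 otherwise) can be illustrated with a short sketch. Only the fail threshold is documented; the even banding of the passing marks 5-10 used below is an assumption for illustration, not the official conversion table.

```python
# Hedged illustration of the points-to-marks conversion described in Table 2.5.
# Only the fail rule (below 20% of points -> mark 4) is documented; the even
# banding of passing marks 5-10 below is an illustrative assumption.

def points_to_mark(points, max_points=60):
    fail_threshold = 0.2 * max_points  # below 20% of points, the student fails
    if points < fail_threshold:
        return 4                       # the failing mark
    # Hypothetical: divide the passing point range evenly across marks 5-10.
    band = (max_points - fail_threshold) / 6
    return min(5 + int((points - fail_threshold) // band), 10)

for p in (10, 12, 30, 60):  # sample point totals on a 60-point State Matura test
    print(p, "->", points_to_mark(p))
```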
State Matura Examination results certify completion of upper secondary education
Students must pass the State Matura Examination in order to graduate from upper secondary education. Students who pass the exam receive a State Matura Diploma, which is required for entry into university. Starting in 2019, students take three compulsory tests - Albanian language and literature, mathematics and foreign language - and one mandatory elective test, chosen from lists of electives according to programme type (i.e. general, artistic or vocational) (see Table 2.5). The content of the compulsory tests differs by programme type. Prior to 2019, students were required to take two elective tests, and students in general upper secondary programmes could choose from 21 elective tests, compared to 8 as of 2019. The pass rate across all State Matura tests in the main examination session in June 2017 was about 95% (MoESY, 2017[30]). Since about 79% of public upper secondary students attended general upper secondary programmes in 2016-2017, this review focuses on the State Matura tests taken by this majority of the upper secondary population (AQAPUE, 2017[31]).
State Matura tests are designed to discriminate between levels of performance, but they include few items with authentic contexts relevant to learners
Albania’s State Matura Examination is a paper-based test that includes a combination of multiple‑choice and open‑ended questions. In 2019, the number of points possible increased to 60 on every test, from 50 points for obligatory tests and 40 points for electives. The aim of this increase was to better discriminate between levels of performance of individual students, an important feature for the purposes of tertiary education selection.
In 2019, Albania’s national examinations were re‑designed in an effort to bring them into alignment with the new curriculum, in particular by including items that assess application of knowledge and skills in a real-world context. Research suggests that competency‑based assessments should measure whether students are able to use and adapt knowledge and skills to perform meaningful tasks in different and new situations, and that competencies tested should be clearly defined (Hipkins, 2007[32]; McClarty and Gaertner, 2015[33]). However, in Albania, test items from the 2019 State Matura Examination reveal that the exam does not yet reflect the types of applied, authentic problems demanded by the new curriculum. For example, the mathematical competencies set out in the 2018-2019 examination programme suggest test items should include authentic contexts, but most test items reviewed by the review team remain abstract and formal rather than concrete and applied (MoESY, 2019[34]; MoESY, n.d.[35]; AQAPUE, 2016[24]). Moreover, while some items on the State Matura mathematics test are set in a real‑world context, such as finding the distance between two boats as they appear from a lighthouse, students are not asked to solve practical problems they may encounter as young citizens.
The administration and scoring of the State Matura Examination is secure
The administration and scoring of the State Matura tests are tightly controlled and monitored by the ESC. Students sit the tests in 256 police-protected schools, and the names of those who have access to tests are recorded. Test marking is also tightly secured. Copies are labelled with an ID number instead of a student name and are marked independently by two evaluators in six ESC-run centres. Disagreements between the two scores are resolved by a third evaluator.
Exam evaluators are carefully selected, certified and trained to be able to both write and mark test items. Teachers are selected to be evaluators based on similar criteria as for the National Basic Education Examination, though for the State Matura evaluators must be certified as evaluators by the ESC. Teachers are certified by participating in training and taking a test.
The ESC sends the ministry a report that analyses the exam administration process and shows any variations in the quality of scoring. The report names specific schools where there are discrepancies between, for example, annual marks and State Matura results. In addition, the Minister of Education appoints a national committee that monitors and analyses any issues in the implementation of the State Matura Examination. Issues are then analysed by the school, local education offices and other institutions, and the ministry and other institutions reportedly take some action on this basis. However, there is no clear policy on consequences or requirements to take action.
State Matura results are used as criteria for entry into Albanian universities
In order to be eligible to apply to a university in Albania, a student must achieve a minimum score according to a formula that weights State Matura Examination results and upper secondary school course marks. These criteria are decided annually by the Council of Ministers; the minimum score was set at 6.5 for 2019-2020 admission, up from 6.0 in 2018-2019 (MoESY, 2019[36]), which will likely increase the pressure on students to perform well on the State Matura. Universities in Albania are permitted to set their own additional admission criteria, including their own formulas for ranking students and requirements for specific elective tests. Upper secondary students in general programmes often choose electives based on the recommendations universities publish on entry to their institution or to specific programmes; applying to an economics department, for example, could require students to take the State Matura economics test. Most universities do not set additional entrance examinations, which shows a certain level of trust in the State Matura process.
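To illustrate how such a weighted formula operates, the sketch below combines course marks and examination results into a single score. The weights shown are purely illustrative assumptions; the real formula and threshold are decided annually by the Council of Ministers.

```python
def matura_eligibility_score(course_mark_avg: float,
                             exam_marks: list[float],
                             course_weight: float = 0.2,
                             exam_weight: float = 0.8) -> float:
    """Combine upper secondary course marks and State Matura test marks
    into a single admission score.

    The 20/80 weighting is an invented example; the actual weights and
    minimum threshold are set annually by the Council of Ministers.
    """
    exam_avg = sum(exam_marks) / len(exam_marks)
    return course_weight * course_mark_avg + exam_weight * exam_avg

score = matura_eligibility_score(7.8, [6.5, 7.0, 8.2, 7.4])  # -> 7.38
eligible = score >= 6.5  # minimum score set for 2019-2020 admission
```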
The negative backwash effect of the State Matura on teaching and learning is limited
Upper secondary teachers reported during the OECD mission that in their classes they emphasise topics found in the State Matura Examination orientation programme, which is aligned to the curriculum programme for the relevant subject area. Upon its release in December each year, teachers share the examination orientation programme with students so that students can identify their strengths and weaknesses and discuss them with their teachers. Teachers also use questions from previous State Matura tests to assess their students, though they receive little training on developing test items themselves. Finally, teachers provide free extra classes outside the instructional day for certain State Matura subject tests. Compared with other countries in the region, there does not seem to be a widespread culture of teaching to the test or test tutoring in Albania.
National student assessment agencies
The ESC oversees the State Matura Examination and provides guidance on the implementation of the VANAF and the National Basic Education Examination
The ESC, established in 2015 under the responsibility of the ministry and preceded by the National Examination Agency (2010-2015), is the institution responsible for national and international assessments and examinations in Albania. The ESC is fully responsible for the design, administration, quality assurance and analysis of the State Matura Examination. However, responsibilities for the National Basic Education Examination and the VANAF are shared between the ESC and the local education offices, and to a limited extent with the new General Directorate for Pre‑University Education and regional directorates. This has consequences, as noted above, for the reliability of the results of these tests at the national level. The ESC has little involvement in other forms of assessment such as classroom assessment. The responsibility for developing resources and training on assessment lies with the Quality Assurance Agency (previously the Education Development Institute), though as noted above these have not been adequate.
While the ESC has staff with psychometric expertise and experience, it has limited human and financial resources. The ESC has 44 employees (2018), as compared to a team of over 200 people (2018) in Georgia’s National Assessment and Examinations Centre (NAEC) (Li et al., 2019[37]). This limits its capacity to take on greater responsibilities such as improving the reliability of the VANAF and the National Basic Education Examination or providing increased capacity‑building support directly to local education offices and schools. The ESC envisions improving its testing capacity through digital scoring and computer-based assessment. This will require a significant investment in infrastructure and resourcing in the ESC and in testing centres.
Policy issues
Albania’s assessment framework is relatively advanced compared with those of other Western Balkan countries, and even with frameworks in many OECD member states. Albania has introduced new approaches to classroom assessment, encouraging teachers to monitor continuously the learning of their students and use portfolios of student work as a means to assess competencies across the curriculum. The State Matura Examination is well trusted and being reformed to reinforce the curriculum’s emphasis on applied learning and higher‑order skills. These are strong foundations upon which Albania can now build as it seeks to enhance the educational value of assessments and examinations.
The first priority is to increase the support provided to teachers to create more authentic assessment tasks and use assessment results formatively to guide teaching and learning in the classroom. The curriculum implies fundamental changes in practice. At present, teachers in Albania have limited training in assessment and very few practical tools to draw upon. They also lack clear guidance on how students should progress towards end-of-cycle learning expectations, and have no reliable external data they can use to benchmark their own judgements.
A second priority is to improve the quality of national examinations. Core criteria for quality include high standards of security and reliability. It is essential that Albania bring the National Basic Education Examination into line with the State Matura Examination in these critical respects. Albania also needs to go further in improving the test design of both examinations so that they reinforce the expectations for student learning set out in the curriculum. This means more emphasis on higher‑order cognitive skills and the application of knowledge and skills to solve real-world problems.
Policy issue 2.1. Supporting teachers to make better use of assessment to improve student learning
Albania has introduced an ambitious assessment framework (2015) that calls for practices comparing favourably with those in OECD countries and more advanced than those observed in other Western Balkan countries. The framework encourages teachers to assess their students regularly and use the results to inform teaching and learning. It sets an expectation that teachers will use innovative assessment practices, such as student portfolios and projects, to assess the competencies found in the national curriculum framework, which are higher-order in nature and include transversal skills.
While teachers generally comply with regulations to engage in assessment practices such as continuous assessment and student portfolio assessment, their ability to use these practices to assess a wider range of student competencies and to use the results to improve learning remains limited. For example, teachers comply with the requirement to provide a continuous assessment mark but are not using the results formatively as envisioned by the policy on continuous assessment. Several factors have contributed to this limited shift in classroom assessment practices. At the national level, rules and regulations are sometimes unclear and require further clarification to help teachers incorporate the practices found in the new framework into their pedagogy. For example, while learning outcomes are defined by stage, very little if any guidance is provided to teachers on the learning outcomes their students should attain by the end of each grade. Some national policies such as teacher appraisal also contradict the developmental intent of the national assessment framework and continue to reinforce a predominantly summative assessment culture in classrooms. Most importantly, teachers in Albania need additional in-school support and guidance to understand the new assessment framework and implement it in their classrooms. This will require significant changes to how teachers are trained to assess students.
Recommendation 2.1.1. Revise and further clarify national assessment policies
Several gaps in assessment policies have limited understanding about the intent of these policies among teachers and hampered effective implementation. First, there are no nationally defined learning outcomes by grade level that teachers and students can work towards, which means teachers do not have a reference point against which to form a valid and reliable assessment of where students are in their learning. Second, there are no examples of marked student work or external benchmarks to signal what achievement at various levels looks like. This is particularly important as Albania is making significant demands of its teachers, including the difficult task of assessing in a reliable and valid manner complex constructs such as critical thinking. Third, the definition of formative assessment is unclear and inconsistent, which has contributed to the confusion of continuous assessment with formative assessment. Lastly, this confusion is further reinforced by the heavy recordkeeping requirement for teachers and other policies such as teacher appraisal and school evaluation, which serve to monitor compliance with recordkeeping rather than the quality of teachers’ assessment of and feedback to students. These gaps and contradictory policies need to be addressed in order to provide clarity to teachers on what is expected of them.
Define expected learning outcomes by grade level
The Quality Assurance Agency should develop learning outcomes at the national level for each grade. These should be aligned to the existing learning outcomes and competencies by stage. Nationally defined learning outcomes by grade would provide teachers with a point of reference for assessing the progress of their students in a particular grade level. It would also help teachers diagnose gaps in learning prior to the end of a curriculum stage. In Serbia, for example, the new competency‑based curriculum being rolled out since 2018 includes learning outcomes for each grade in order to support teachers in understanding how their students might reach the end of cycle learning standards (Maghnouj et al., 2020[38]).
Provide teachers with examples of student work
In addition to descriptions of learning expectations, materials such as examples of student work, including marked exemplars, can be used to demonstrate what achievement of learning outcomes at different levels looks like. Initially, the Quality Assurance Agency should work with experienced teachers to develop examples of student work linked to curriculum outcomes and levels of achievement, and make these available online. To build understanding, organised groups of teachers should then be involved in developing further examples. Teachers should be encouraged to work in subject teams and within the professional learning networks to enrich the initial base of examples, with curation conducted by experienced teachers, and to document how they come together in schools and in the professional learning networks to discuss students’ work in relation to outcomes.
Provide teachers with disaggregated VANAF results to inform teaching and learning
External benchmarks of student achievement such as results in national examinations or assessments can support teachers in making accurate judgements about student progress, as they provide a reliable point of reference for expected or adequate progress in particular grades and subjects (OECD, 2013[2]). These can be particularly helpful when the capacity of teachers is low. Results from the VANAF, once their reliability has been improved, can be used to provide such external benchmarks (see Chapter 5). To do this, the ESC should provide detailed information on the average achievement of students nationally and regionally in relation to specific outcomes so that teachers can compare their students’ performance. The ESC should also release items where students on average perform well and perform poorly so that teachers can integrate them into their own assessments. In addition, the ESC should provide teachers with student-level data reports. These should contain information by test item such as the individual student’s performance, mean student performance, the competencies assessed and an analysis of common errors.
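The kind of student-level, item-by-item report recommended here could take a very simple form. The sketch below is illustrative only; all field names are hypothetical and the actual VANAF data structures may differ.

```python
from dataclasses import dataclass

@dataclass
class ItemResult:
    """One test item in a student-level VANAF report (fields are illustrative)."""
    item_id: str
    competency: str           # competency the item assesses
    student_score: float      # this student's score on the item
    max_score: float
    national_mean: float      # mean score across all test takers
    common_errors: list[str]  # frequently observed misconceptions

def report_lines(results: list[ItemResult]) -> list[str]:
    """Render a per-item comparison against the national benchmark,
    flagging items where the student falls well below the mean."""
    lines = []
    for r in results:
        gap = r.student_score - r.national_mean
        flag = "  << well below national mean" if gap < -0.25 * r.max_score else ""
        lines.append(f"{r.item_id} [{r.competency}]: {r.student_score}/{r.max_score} "
                     f"(national mean {r.national_mean}){flag}")
    return lines
```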
Create a national policy on formative assessment
The distinct potential of formative assessment remains largely untapped among Albanian teachers. For example, while teachers in Albania have adopted the recordkeeping aspect of continuous assessment, there is little evidence that teachers use the results of continuous assessment to make changes to their instruction and lesson plans. Indeed, while the national assessment framework has set as a goal that classroom assessment be used to inform teacher practice and to improve student learning, the framework does not explain what this means practically. To make the formative assessment policy more tangible for teachers, the Quality Assurance Agency should:
Develop a simple visual that explains how teachers are expected to use assessment results in their teaching practice. This could take the form of a step‑by‑step diagram or figure that guides teachers in operationalising formative assessment. For example, the New Zealand Council for Educational Research (NZCER) provides teachers with a formative assessment cycle, among other formative assessment resources, that includes key questions and steps for using student test results to inform their practice (NZCER, n.d.[39]).
Create a formative assessment toolkit for teachers that includes examples of how to adapt lesson plans based on assessment results, feedback templates and tools to implement diagnostic assessments (see Recommendation 2.1.2). Concrete examples of formative assessment that have been successfully implemented by peer teachers, in particular, can help teachers incorporate formative assessment into their teaching (Hopfenbeck et al., 2013[40]). In Ireland, the National Council for Curriculum and Assessment has designed materials to support teachers and schools in expanding their assessment toolkit. These include classroom video footage, samples of student work with teacher commentary, reflection tools and checklists for reviewing individual teacher and whole school assessment practice (OECD, 2013[2]).
Help teachers provide regular feedback to students. High-quality, effective classroom feedback can accelerate learning and improve educational outcomes (Wiliam, 2010[41]). Research suggests that in order to be effective, feedback should provide students with advice on how to correct errors, misconceptions or gaps, rather than simply providing information on areas of strength or weakness (Farai, Mapaire and Chindanya, 2018[42]). Formative feedback helps students understand where they are in their learning and can be used as a means to build agency and metacognitive awareness. The Quality Assurance Agency should develop resources such as videos or written examples of formative feedback, as well as summaries of research on strategies and tools, in order to support teachers in providing effective feedback, particularly oral feedback. These resources should help teachers involve students as active participants in their assessment and should guide them in providing feedback that is specific, descriptive and constructive (Looney, 2011[43]). Resources should also describe techniques and strategies on how teachers can gather feedback from students and use this information to adjust their teaching (Kitchen et al., 2019[44]). In addition, the Quality Assurance Agency should provide further guidance on developing assessment criteria for success (or rubrics), as well as exemplars of good success criteria. This would build on the limited guidance that is already provided for developing portfolio assessment rubrics. Teachers can use criteria for success to provide feedback on areas of success and areas for improvement, while students can use the criteria to engage in self-assessment and peer assessment (OECD, 2005[45]). Finally, to support teachers in developing and discussing their feedback strategies, they will require access to professional development opportunities (e.g. in subject teams and via the professional learning networks), as well as access to sustained support from school leaders (see Chapter 3).
Set up a communication campaign to explain what formative assessment is to school staff, students and parents. Education stakeholders in Albania broadly see the purpose of assessment as summative, and there is a need to shift mind-sets and expectations about assessment’s formative role in promoting learning and growth. Parents and students will likely be resistant at first to concrete changes in the culture of assessment such as reducing the number of marks and giving more time to feedback and learner-led review. Teachers and school leaders, too, might question the time required for engaging in certain formative tasks such as increasing the amount of written descriptive feedback, in relation to mastery of competencies, students receive. This major change in the culture of assessment needs to be communicated to school staff, in particular teachers, so as to build ownership and to provide a “language” to communicate the change to parents and students. Strategies for communicating these changes might include meetings with the school community, promotional videos, pamphlets and other informational materials. For example, to communicate changes in the curriculum under Mexico’s Nuevo Modelo Educativo reform, the Mexican government launched a website with videos, infographics and documents explaining the changes (Secretaría de Educación Pública, 2017[46]). In Hong Kong (China), the government held seminars with reporters to discuss the reform philosophy early in the design phase of the reform. The government was also in constant contact with chief editors of major media outlets in order to further engage the public in the reform (OECD, 2011[47]).
Reduce the frequency of marking
As well as providing a summative mark at the end of each period, teachers are required to keep records of each instance of continuous assessment by recording student marks in their class diary, which contradicts the primary formative purpose of continuous assessment. Constant recorded grading reinforces a narrowly summative approach to assessment, where the mark is seen as the main aim rather than assessment being embedded as a means to improve the teaching and learning process. Teachers and school leaders also report that this requirement is burdensome, taking time away from making changes to instructional practices and meeting with students. To remove these barriers, the ministry should:
Set a maximum of six marks per year that are used to calculate the final mark, which might consist of three end-of-term assessments and three portfolio assessments. These six assessments should summarise achievement against all competencies for the specified subject area, and marks on these assessments should be accompanied by written descriptive feedback. To ensure continuous assessment is used for formative purposes, the practice of reporting a continuous assessment mark every term and factoring it into the final mark should be abolished. Teachers could provide marks against assessment criteria as part of formative assessment, but any formative assessment marks should not count towards the final mark (a simple sketch of this calculation follows this list).
Help teachers to integrate formative feedback into their daily assessment. As noted above, effective feedback can have a positive impact on educational outcomes. Quality oral feedback and peer discussion help students understand their progress and identify areas of strength and weakness. Written marking of tasks can also be used to correct errors, suggest alternative responses and help students understand where they need to focus in order to progress in their learning. Furthermore, written feedback can be used to identify what a student has done well and where there are gaps in understanding (Victoria State Government, 2018[48]). It provides an opportunity to provide feedback on the student’s work, as well as on the work process. Finally, positive evaluative feedback using symbols such as smiley faces or stickers with younger children can be used to reinforce desired behaviours such as demonstrating effort or particular attitudes, though using such behaviour management tools requires teachers to have adequate training and support (Evertson, 2013[49]; Tunstall and Gipps, 1996[50]).
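As referenced in the first point above, the six-mark model could be computed very simply. The sketch below assumes equal weighting of the six assessments, a design choice the ministry would need to decide.

```python
def final_mark(end_of_term: list[float], portfolio: list[float]) -> float:
    """Compute a final subject mark from the six recorded assessments:
    three end-of-term assessments and three portfolio assessments.

    Equal weighting is assumed purely for illustration. Formative
    (continuous) assessment marks are deliberately excluded, in line
    with the recommendation that they not count towards the final mark.
    """
    assert len(end_of_term) == 3 and len(portfolio) == 3
    marks = end_of_term + portfolio
    return round(sum(marks) / len(marks), 1)

final_mark([7, 8, 8], [9, 8, 7])  # -> 7.8
```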
Better align teacher appraisal and school evaluation with the expectations of the assessment framework
Other monitoring and accountability policies, both internal and external to schools, should be used to help reinforce the intent of the assessment framework and provide clarity on what is expected of teachers. Regular teacher appraisal includes the assessment of the teacher’s annual plan, but currently this focuses on elements such as students’ estimated and actual grade point averages rather than examples of assessment methods, qualitative feedback or how data is used to inform next steps for the teacher and the learner. Similarly, the portfolio teachers are required to maintain includes only quantitative assessment data and administrative material such as training certificates, rather than evidence of teacher practice (see Chapter 3). While Albania’s school evaluation framework dedicates one of seven fields to student assessment, and includes detailed indicators and descriptions of what a school’s practices would look like at each rating level, there is no requirement that the assessment field be covered in every full school inspection (AQAPUE, 2011[51]). This means that many schools are inspected without in‑depth evaluation and feedback on their student assessment practices (see Chapter 4). The ministry should revise policies on school evaluation and regular appraisal to help teachers improve their assessment practice:
Include examples of student assessments in teacher plans and portfolios. Rather than including exclusively administrative material, teacher portfolios should include student assessment examples, as is the practice in many OECD countries (OECD, 2013[2]). In Chile, for example, as part of the teacher performance evaluation system, teachers must compile a teacher performance portfolio that includes an example of a written assessment, the associated marking rubric, an interpretation of student results and a description of the feedback given to students to improve future learning (Santiago et al., 2013[52]).
Ensure that school principals review the quality of assessment and feedback provided to students during the teacher appraisal and discuss with teachers how to improve their practice during their appraisal feedback discussion. These elements should be included in the appraisal guidelines currently being developed by the Quality Assurance Agency (see Chapter 3). In addition, school principals will need to be trained to assess the quality of teachers’ assessment practice, and this training should be part of the instructional leadership training provided by the School of Directors (see Chapter 4). Finally, the ability to assess teachers’ assessment practice should be incorporated into the ongoing revision of the school leadership standards.
Define a set of mandatory, core indicators to be used in every full school inspection, and include within these indicators on the quality of assessment practices. Inspectors should review examples of assessment during the school visit and discuss with teachers how they develop assessment material, provide feedback to students and use assessment in a formative manner. Inspectors should also look at how the principal is supporting teachers and the school as a whole to develop their assessment practices. This includes helping to make time to engage in professional learning exercises on formative practices such as providing feedback and on newer practices such as developing portfolio tasks. In Scotland (United Kingdom), for example, the school evaluation framework includes the indicator “teaching, learning and assessment”, which is further detailed along the theme of “effective use of assessment.” The illustration of a “very good” evaluation in this area includes using a variety of assessment approaches, ensuring evidence is valid and reliable, reporting on progress using reliable evidence and engaging in moderation (Education Scotland, 2015[53]).
Recommendation 2.1.2. Provide teachers with guidelines and tools to help them improve their assessment practice
In order to help teachers engage with the curriculum reform and employ the new approaches envisaged under the assessment framework, the Quality Assurance Agency already provides some guidelines, including examples and templates such as sample portfolio tasks and lesson plan templates. However, the Quality Assurance Agency, along with the ESC, needs to give significantly more attention to developing quality assessment resources in order to improve regular classroom assessment practice in schools. This includes providing further guidance on developing high‑quality portfolio tasks and on using the portfolio as a tool for student self‑reflection. Teachers should also be provided with guidance on developing and administering diagnostic assessments, as well as on using the results effectively. Many countries have found diagnostic assessment a useful tool for directing teachers’ attention to each individual student, improving the reliability of judgements and ensuring that teaching addresses core elements of the curriculum. Finally, providing feedback is an area where teachers require additional support (as noted above). Teachers would benefit from a template for providing written feedback to students when reporting on learning progress.
Provide further guidelines and tools for teachers on portfolio assessment
Teachers are free to develop their own portfolio tasks and criteria for marking these tasks, but they have not received adequate supports or training in how to develop high‑quality tasks and criteria. At present, portfolio tasks focus on basic skills and knowledge rather than on higher-order skills that are applied in real‑world contexts relevant to young people. In addition, assessment criteria sometimes consist of lists of focus areas to be assessed, such as “presentation of work and creative ability”, rather than rubrics with clear and specific indicators for levels of achievement along each criterion.
Guidelines from the Quality Assurance Agency should describe the elements of a quality assessment task, such as ensuring that the task: is set in a real-world context meaningful to young people, allows for student choice, sets out realistic and feasible task requirements, and requires students to engage in critical thinking rather than simply knowledge‑recall (Cohen, 1995[54]; Perlman, 2003[55]). The Quality Assurance Agency should also provide guidance to teachers for developing assessment criteria in the form of a rubric, which should be aligned to the target learning outcomes and include a description of performance at multiple levels for each criterion (Brown and Mevs, 2012[56]). Tools need to be matched with training and guidance on how to use them, as developing reliable performance‑based tasks and rubrics can be particularly challenging.
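To make the notion of a rubric with level descriptors concrete, the sketch below shows one minimal way such a rubric might be represented and scored. The criteria and descriptor texts are invented examples, not Quality Assurance Agency wording.

```python
# A minimal analytic rubric: each criterion has descriptors for levels 1-4.
# Criteria and descriptor texts are hypothetical illustrations.
rubric = {
    "use of evidence": {
        1: "restates facts without support",
        2: "cites some relevant evidence",
        3: "supports claims with relevant, accurate evidence",
        4: "integrates and evaluates evidence from several sources",
    },
    "communication": {
        1: "ideas are hard to follow",
        2: "ideas are mostly clear",
        3: "presentation is clear and organised",
        4: "presentation is clear, organised and adapted to the audience",
    },
}

def score_task(levels_awarded: dict[str, int]) -> float:
    """Average the level awarded on each criterion into an overall score."""
    return sum(levels_awarded.values()) / len(levels_awarded)

score_task({"use of evidence": 3, "communication": 4})  # -> 3.5
```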
Portfolio assessment can also be used to promote student self‑reflection on their learning and to demonstrate achievement of key competencies for lifelong learning, such as learning to learn and communication and expression, that may otherwise be difficult to assess (Danielson and Abrutyn, 1997[57]; AQAPUE, 2014[14]). However, there is currently no reflection component in portfolio assessment, nor a requirement for students to make a presentation. To encourage the achievement and assessment of key competencies for lifelong learning and to foster student self‑reflection, the ministry might consider introducing a portfolio defence, possibly at key moments such as the end of particular curriculum stages (see Box 2.3).
Box 2.3. Innovation in portfolio assessment: portfolio defence
At Envision Schools in California, United States, students are required to defend their portfolio in 10th Grade and 12th Grade, whereby they present to a panel a selection of works in different subject areas and argue how they have mastered the targeted learning expectations. The panel is typically composed of the student’s advisor, another student and two additional teachers.
The process of portfolio defence starts when students gather their most relevant certified projects to compose their portfolio work and presentation. These certified projects are performance assessment tasks completed in each subject, around twice a year. Through their performance assessments, students are expected to demonstrate mastery of subject‑area standards, core academic competencies such as inquiry and creative expression, and 21st century leadership skills such as communication and critical thinking.
Self‑reflection is also an essential component of the portfolio defence. Students are required to reflect upon their learning and the learning process, which includes a description of their academic achievements, how they were able to succeed and how they overcame challenges. Students must prepare written reflections that accompany each of the certified projects composing their portfolio. These reflections are meant to demonstrate how they apply 21st century leadership skills to their work.
In addition, students in 12th Grade are required to present a college and career readiness plan using academic projects and personal reflections on topics such as life experiences and goals as evidence to support their plans. This stimulates them to think critically about their own future, their growth as lifelong learners and the impact of their personal and professional choices.
To mark the portfolio defence, the panel employs a scoring rubric. Students must achieve at least the proficient level in order to pass the portfolio defence, which is required to graduate. Evaluators assign a score of emerging, developing, proficient or advanced in the following domains:
Mastery of knowledge
Application of knowledge
Metacognition
Presentation skills
Performance after questions and comments from the panel
Portfolio defence evaluators regularly take part in professional learning to assist them with scoring calibration. These training sessions simulate real portfolio defence experiences. Teachers observe a student’s practice defence and discuss how they would grade students on each domain. The training allows for teachers to discuss areas of disagreement and to improve the accuracy and consistency of scoring.
Research suggests that the Envision Schools approach to fostering deeper learning, which includes the use of portfolios, has had positive effects on student outcomes. One study found that students from schools such as Envision Schools reported higher levels of collaboration skills, academic engagement, motivation to learn and self‑efficacy than students at traditional high schools.
Sources: Maier (2019[58]), Performance Assessment Profile: Envision Schools, Learning Policy Institute, www.learningpolicyinstitute.org/project/cpac (accessed on 25 September 2019); American Institutes for Research (2016[59]), What Is Deeper Learning, and Why Is It Important? Results From the Study of Deeper Learning: Opportunities and Outcomes, AIR, https://www.air.org/sites/default/files/Deeper-Learning-Summary-Updated-August-2016.pdf (accessed on 25 September 2019); Peterson et al. (2018[60]), Understanding Innovative Pedagogies: Key Themes to Analyse New Approaches to Teaching and Learning, OECD Education Working Papers, https://dx.doi.org/10.1787/9f843a6e-en.
Develop diagnostic assessment tools for teachers
Diagnostic assessments, a type of formative assessment, are often used in OECD countries at the beginning of a unit of study to identify a baseline of students’ prior knowledge, strengths, weaknesses and learning needs and to inform teacher planning and instruction (OECD, 2013[2]). In France, for example, students who enter primary school (cours préparatoire) are evaluated in French language and mathematics as part of a national diagnostic evaluation (Ministère de l'Éducation nationale et de la Jeunesse, n.d.[61]). In Albania, students are progressing through school without meeting basic competencies, and teachers do not have access to diagnostic resources to identify gaps in learning as they emerge.
Teachers in Albania should be mandated to conduct diagnostic assessments in all grades, and the ESC should provide teachers with sample diagnostic questions for each grade level. These should be accompanied by response grids and, in collaboration with the Quality Assurance Agency, guidelines on how to interpret results and provide feedback. For some key transition grades such as Grades 1, 5 and 9, the ESC can provide fully standardised diagnostic assessments for teachers to use in assessing the achievement level of their students. The ESC, in collaboration with the Quality Assurance Agency, will also need to provide tools for using diagnostic assessment results to inform lesson and unit plans before the start of a course or unit and to tailor instruction to meet individual student needs. These may include guides for engaging in item analysis to identify errors and misconceptions, as well as case studies and examples of how teachers can collaborate to share strategies on differentiating instruction on a particular topic or concept (Cambridge Assessment International Education, 2018[62]; National Center on Intensive Intervention, n.d.[63]; Ministry of Education of New Zealand, n.d.[64]). Teachers should also be encouraged to take into account data from the VANAF assessment in Grade 5 and from the newly introduced Grade 3 assessment this review recommends (see Chapter 5).
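As an illustration of the kind of item analysis such tools could support, the sketch below computes, for each diagnostic item, the share of correct answers and the most common wrong answer, which often points to a specific misconception. The data format is a simplifying assumption.

```python
from collections import Counter

def item_analysis(responses: dict[str, list[str]], key: dict[str, str]) -> dict:
    """For each diagnostic item, compute the facility (share of correct
    answers) and the most common wrong answer, which often points to a
    specific misconception that teachers can target.

    `responses` maps item_id to the list of students' answers and `key`
    maps item_id to the correct answer; both formats are illustrative.
    """
    summary = {}
    for item, answers in responses.items():
        correct = key[item]
        facility = sum(a == correct for a in answers) / len(answers)
        wrong = Counter(a for a in answers if a != correct)
        summary[item] = {
            "facility": round(facility, 2),
            "most_common_error": wrong.most_common(1)[0][0] if wrong else None,
        }
    return summary

# Example: item Q3 is answered correctly by half the class, and "b" is the
# dominant wrong answer, suggesting a shared misconception to address.
item_analysis({"Q3": ["a", "b", "b", "a", "b", "a"]}, {"Q3": "a"})
```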
Help teachers provide better feedback to students on their progress
The ministry should also provide schools with a revised student report card template. Research suggests that progress reporting documentation should provide information on students’ progress, strengths, areas for improvement, any sources of concern and recommendations for further learning (OECD, 2013[2]). Report card samples provided to the OECD review team indicate that, in Grade 4 and above, marks for each of the three modes of assessment are always reported for each subject area. Some report cards also include an achievement rating for each competency. Written feedback, however, is missing from many report cards. Moreover, when written feedback is provided, it often highlights strengths with respect to learning outcomes but does not provide feedback on how to improve.
The report card template should include spaces to report, for each subject: results of continuous assessments noting the competencies or learning outcomes targeted, scores on the three end‑of‑term assessments, scores on the three assessed portfolio tasks, the final mark and written feedback. The Quality Assurance Agency should develop guidelines on written feedback. For example, in the State of Victoria (Australia), the Department of Education and Training advises that, in their written comments, teachers use plain language that is easy to understand and include elements such as specific areas of strength and challenge, what students can do to continue learning and how parents can assist (Department of Education and Training, Victoria State Government, 2019[65]). These elements are included in a checklist that teachers can refer to as they develop written feedback (Department of Education and Training, Victoria State Government, n.d.[66]). In France, the bilan périodique, a periodic reporting document and constituent part of the livret scolaire unique, a progress reporting logbook containing student achievement results from primary school through lower secondary school, contains descriptive feedback and a mark or level of achievement for both transversal and subject-specific areas of learning (see Box 2.4).
Box 2.4. Student report cards in France: the “livret scolaire unique”
The livret scolaire unique is an individual student report card that contains the results of teachers’ evaluations of the student for all years of compulsory education. The intent behind the livret scolaire unique is to facilitate regular monitoring of students’ progress and to provide students and their families with information on how students are meeting learning expectations. The livret scolaire unique is digital and can be fully accessed online by families and teachers. This allows for a single report card to follow the student across school cycles and in cases where the student may change schools or teachers. The livret scolaire unique is composed of the bilan de fin de cycle (end of cycle review) and the bilan périodique (periodic review), as well as any school certificates (such as first aid training) the student has obtained.
The bilan de fin de cycle is developed at the end of each of the three school cycles in compulsory education. Teachers rate students along four levels of mastery for each of eight components of learning expectations all students are expected to meet by the end of compulsory education. These learning expectations are known as the Common Core of Knowledge, Skills and Culture. They include transversal competencies such as learning to learn and problem solving. Teachers evaluate students’ level of mastery based on an analysis of student achievement throughout the learning cycle. Teachers also provide short descriptive feedback and advice on how a student can achieve better learning outcomes in the following cycle.
The bilan périodique reports on a student’s progress in each subject taught over a specific period of time, typically every three months. After stating the main elements of the subject’s programme, teachers are expected to briefly describe the student’s main difficulties and achievements. At the primary level, teachers evaluate students against the learning objectives set for that subject using a 4-level scale: not achieved, partially achieved, achieved or exceeded. At the secondary level, the 4-level scale is replaced by a marking system (0-20). The bilan périodique also includes a section for communication between schools and families. There, teachers are expected to write about a student’s habits and behaviours, including punctuality, attendance, participation in class and compliance with the school’s regulations. Families can request a meeting with teachers should they have any questions regarding the elements of the report card.
Sources: Ministère de l'Éducation Nationale et de la Jeunesse (2017[67]), Le Livret Scolaire [School Report Card] (accessed on 6 March 2018); Académie Caen (2016[68]), LSU: Livret Scolaire Unique, Le Suivi Pédagogique de l'élève, https://www.ac-caen.fr/mediatheque/communication/actualites/2016/12/presentation_LSU_Caen.pdf (accessed on 25 September 2019).
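A revised report card template of the kind recommended above could be represented very simply in a school information system. The sketch below is a hypothetical illustration of the recommended elements, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class SubjectReport:
    """One subject entry in a revised report card (hypothetical fields
    reflecting the elements recommended above)."""
    subject: str
    continuous_assessment_outcomes: list[str]  # competencies/outcomes targeted
    end_of_term_scores: list[float]            # three per year
    portfolio_scores: list[float]              # three per year
    final_mark: float
    written_feedback: str  # strengths AND concrete advice on how to improve

entry = SubjectReport(
    subject="Mathematics",
    continuous_assessment_outcomes=["problem solving", "reasoning"],
    end_of_term_scores=[7, 8, 8],
    portfolio_scores=[9, 8, 7],
    final_mark=7.8,
    written_feedback="Strong reasoning; next, practise multi-step word problems.",
)
```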
Recommendation 2.1.3. Ensure that teachers have access to quality training on assessment and incentives to participate in such training
Teachers need additional guidance and support in developing the assessment competencies required by the new curriculum. Teaching standards in Albania are not differentiated by career level and thus do not specify what novice teachers should know and be able to do with respect to student assessment. Initial teacher education programmes do not always cover the new approaches to assessment promoted by Albania’s national assessment framework. For in-service teachers, continuous professional development is limited and takes place primarily in seminar format through the professional learning networks (see Chapter 3). Subject teams in schools meet regularly to discuss their practice, which includes assessment, but they do not have access to resources that would enable them to work together meaningfully to improve practice. Albania should include in its teaching standards the assessment literacy competencies expected from teachers at different career levels and emphasise developing assessment knowledge and skills in initial teacher education programmes. The ministry should also provide resources that promote meaningful in-school collaboration and provide supports specific to assessment such as mandatory and free training and online resources.
Provide clear guidance for preparation on assessment in initial teacher education
While student assessment is a subject included in all initial teacher education programmes in Albania, there is no quality assurance process to ensure that key areas of teacher practice in the area of assessment are covered in these programmes. It is thus unclear if all initial teacher education programmes in Albania are preparing their students to apply some of the most innovative aspects of the assessment framework, such as continuous assessment and portfolio assessment. Programmes still use a primarily knowledge-based and didactic approach to preparing teachers rather than an applied, competency-based and student-oriented approach, the latter being better suited to acquiring practical assessment competencies and learning effective ways to design assessments and use results (Duda and Xhaferri, 2013[69]). In order to improve the quality of initial teacher education on assessment, the ministry should consider:
Including key design features of quality assessment preparation as part of accreditation criteria in provider guidelines. For example, in New South Wales (Australia) the Board of Studies, Teaching and Educational Standards (BOSTES) (now the New South Wales Education Standards Authority) identified 24 key elements in the area of assessment that describe the qualities of beginning teachers and provide a framework for what should be covered in initial education programmes (BOSTES, 2016[70]). These elements include: knowing the purpose of formative and summative assessment, as well as how to use both in the classroom; applying concepts such as validity and reliability in the development of assessment tasks and activities; knowing how to improve reliability, such as through moderation; having sufficient data literacy to be able to use results from large-scale assessments to improve student learning; and understanding the importance of developing criteria for evaluating performance on assessments at different levels.
Clearly defining the assessment competencies for graduates of initial education programmes, as part of the differentiated teaching standards this review recommends (see Chapter 3). These should be aligned with the accreditation criteria in provider guidelines (see above). In Ireland, for example, guidelines on required components of initial teacher education programmes define outcomes that are based on competencies for newly qualified teachers (see Chapter 3). In Australia, competencies for novice teachers (or “Graduate” level teachers) are defined in the Australian Professional Standards for Teachers (Australian Institute for Teaching and School Leadership, 2011[71]). For example, the standards call for teachers to demonstrate their understanding of a variety of assessment approaches such as diagnostic, summative and formative assessment, as well as to understand how assessment moderation can help teachers make consistent and comparable judgements.
Providing opportunities for practical experience in designing and implementing assessments. Some of this experience can be provided within initial teacher education programmes themselves, as has been done through Australia’s Assessment and Mentoring Program (AMP) (Jenkinson and Benson, 2016[72]). In the AMP, students in their final year of pre-service teaching courses mentor second year students, assessing their lesson plans and engaging in a dialogue about their teaching. In addition, mentors work together to design a lesson plan assessment tool, they engage in moderation, and they discuss with each other their work with mentees. A university AMP coordinator also functions as a mentor for mentors, engaging in moderation with mentors, providing feedback and troubleshooting areas of difficulty.
Provide mandatory and free training on key elements of the assessment framework
Assessment literacy is a priority training area in Albania, but teachers with significant weaknesses in this area are not identified systematically and do not receive additional training to help them reach a minimum level of competency. Teachers who are facing major challenges in the area of assessment should receive free and mandatory professional development. This could include a seminar to review key concepts and practices and an in-school project, with mentorship, to implement the techniques learnt. Where sustained follow-up is needed, an external assessment expert can also be assigned as a coach.
School principals should be responsible for identifying the learning needs of teachers in the area of assessment, and for ensuring that those who do not meet a minimum level of competency receive additional training and coaching. Student assessment should be a mandatory area of regular appraisal and teachers and school principals should set annual objectives for improving student assessment and review progress in this area. To carry out this role, school principals will need substantial support on how to appraise the quality of teacher assessment practices, as well as other elements of teaching quality (see Chapter 3).
Encourage in-school teacher collaborative learning on assessment
International research suggests that the kinds of learning opportunities that are most effective at improving teaching competence are job-embedded, collaborative and sustained over time (Goe, Biggers and Croft, 2012[73]; OECD, 2014[74]; Darling-Hammond and Rothman, 2011[75]). School-embedded approaches can help teachers relate the content of training to their school and classroom context, while also supporting the development of a culture of improvement and a shared vision for learning (OECD, 2019[76]). In Albania, however, professional development primarily occurs outside of schools in seminar format through the professional learning networks. The Quality Assurance Agency should encourage school-based subject teams to play a more active role in professional collaborative learning, which can be used to develop assessment literacy by, for example, engaging in moderation exercises. The professional learning networks should also be harnessed to support the work of subject teams, for example by providing opportunities for teachers to share how they have worked with their subject teams to engage in practices discussed in the professional learning network (see Chapter 3). The Quality Assurance Agency should clearly define the topics, such as formative assessment or reliability in grading, where both subject teams and professional learning networks should focus and engage in active learning activities. Schools might be encouraged to select a teacher who can be trained by the Quality Assurance Agency to be an assessment coordinator in each subject team or, in the case of small schools, across subject teams. The assessment coordinator would organise:
Assessment moderation: Teachers in the same subject team can mark each other’s assessments and discuss differences in their marking. Assessment coordinators can also facilitate between-school moderation within the professional learning networks, which would help align the work of these two groups. Research suggests moderation can help teachers build a shared understanding of criteria for marking and expectations for learning, and it is a key strategy for improving the reliability of teacher judgements and marking within and across schools (OECD, 2013[2]). This would also help teachers to identify learning issues early on. Moderation could be conducted at first with end-of-term assessments and then extended to other assessment forms when overall assessment literacy has improved. Assessment coordinators should draw on national data such as VANAF results to provide an external perspective and strengthen reliability of marking. A minimal sketch of how marker agreement could be quantified to focus these discussions appears after this list.
Assessment practice discussions: Teachers in Albania should use school-based teams to discuss how to best implement some of the learning from outside training, in particular from the professional learning networks, in their schools and classrooms. As further discussed in Chapter 3, school-based teams should provide opportunities for teachers to engage in peer-learning activities such as peer classroom observations, coaching, looking at student work and how to mark it, co-creation of instructional material, and targeted reflective discussions around improving teacher practice (Harrison, 2005[77]; Tang et al., 2010[78]; Darling-Hammond and Rothman, 2011[75]). In Georgia, for example, the United States Agency for International Development’s (USAID) Georgia Primary Education Project (2011-2017) focused on fostering school‑based professional development by training teachers within schools to lead peer learning and implement activities such as teacher learning circles to discuss student achievement and ways to improve instructional effectiveness (Li et al., 2019[37]).
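As referenced in the moderation point above, a subject team could start from a simple measure of marker agreement. The sketch below computes the share of double-marked scripts on which two teachers award the same mark; more sophisticated statistics (e.g. Cohen’s kappa) could follow as assessment literacy grows.

```python
def exact_agreement(marks_a: list[int], marks_b: list[int]) -> float:
    """Share of scripts on which two teachers awarded the same mark.

    A simple starting point for moderation discussions; subject teams
    would then examine the scripts where the two marks diverge."""
    assert len(marks_a) == len(marks_b)
    same = sum(a == b for a, b in zip(marks_a, marks_b))
    return same / len(marks_a)

# Example: 10 scripts double-marked within a subject team.
exact_agreement([7, 8, 6, 9, 7, 8, 5, 10, 6, 7],
                [7, 7, 6, 9, 8, 8, 5, 10, 6, 7])  # -> 0.8
```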
Expand online supports for teachers
The Quality Assurance Agency should significantly develop the pre-tertiary curriculum platform into a hub for teacher training, resources and guidance (see Chapter 3). The platform would be the home for the resources this review recommends, which include formative assessment toolkits and marked exemplars of student work. In Australia, the Australian Institute for Teaching and School Leadership has created a website with a wide variety of practitioner tools and resources, including lesson plans, guidance on providing feedback and video examples of how to use different methods to assess students (Australian Institute for Teaching and School Leadership, 2017[79]). The Quality Assurance Agency should also progressively open the platform so that teachers can upload their own assessments and resources. The Quality Assurance Agency will need to carefully screen uploads in the short run in order to ensure quality. In the long run, materials could be peer reviewed by expert teachers (e.g. Level 4 or 5 on the proposed teacher career structure; see Chapter 3).
Policy issue 2.2. Ensuring the reliability and validity of the exam system
Albania is seeking to align its examinations with the new curriculum, which is entering its final year of implementation in 2019. This represents an opportunity to revise not only the content but also the design and administration process of the National Basic Education Examination and the State Matura Examination. While both exams are relatively fit for purpose and compare favourably in design with exams in other Western Balkan countries, they would benefit from continuous improvement to increase their reliability in assessing students’ learning. Reliability is important for providing comparable information on individual student achievement of learning expectations. In the case of the National Basic Education Examination, testing conditions and the quality of marking vary across the local education offices that administer and mark the test. This is due in part to a lack of adequate training and incentives for the teachers responsible for administering and marking the exam.
The design of the exams could also be improved to better assess students’ application of knowledge and skills, as well as to assess a wider range of competencies. For example, because the vast majority of students pass the National Basic Education Examination and the results are not typically used to place students into general upper secondary programmes, Albania could afford to be more innovative with the exam design to enhance the positive backwash on classroom practice. This might include the introduction of a project element and also a portfolio defence. Over time, some of these innovations could be adapted to the State Matura Examination.
Recommendation 2.2.1. Reinforce the reliability of the National Basic Education Examination and review its design
The ministry and the ESC should improve the National Basic Education Examination to ensure that it reliably measures students’ level of proficiency and assesses a wider range of the competencies found in the curriculum. This will require a revision of both the exam administration and the test design. Albania faces several challenges in the administration conducted by local education offices (previously by RED/EOs), including security breaches, test-taking conditions that are not adequately controlled and difficulties recruiting qualified teachers to serve as test administrators or markers. This can be improved in the short term through stronger oversight by the ESC and more support for evaluators, and in the longer term through computer-based administration and marking. The test design can also be improved by adjusting the number and quality of test items to increase reliability and to better assess curriculum competencies. Finally, Albania should consider involving teachers in test-item development and introducing a project‑based component in order to promote teachers’ use of these approaches in their classrooms.
Give the ESC a mandate for quality control
Currently, the ESC does not play a quality control role for the National Basic Education Examination. The agency produces a report for the ministry analysing the results and any notable discrepancies, but this report is not used to hold the local education offices accountable for the quality of the administration of the exam. To ensure better comparability at the national level and to improve the reliability of the exam, Albania should consider giving the ESC a mandate for quality control, which could include:
Sending observers to testing centres to monitor the administration of the exam against the exam administration manual. In addition to increasing the reliability of results by ensuring testing conditions are the same across local education offices, sending observers to testing centres would also help build trust in the results of the National Basic Education Examination (UNESCO Institute for Statistics, 2017[80]).
Auditing a random sample of tests in each local education office to ensure the quality of marking meets standards set forth in marking schemes and to identify local education offices where further training and support is needed.
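The auditing step could follow a simple sampling logic. The sketch below re-marks a random sample of scripts from one local education office and flags the office when the average discrepancy exceeds a threshold; the sample size and threshold shown are hypothetical parameters the ESC would need to calibrate.

```python
import random

def audit_office(scripts: list[dict], sample_size: int = 30,
                 flag_threshold: float = 1.0) -> dict:
    """Re-mark a random sample of scripts from one local education office
    and flag the office if the mean discrepancy between the original mark
    and the audit re-mark exceeds a threshold.

    Each script dict carries 'original_mark' and 're_mark' (the latter
    assigned by an ESC auditor applying the national marking scheme).
    """
    sample = random.sample(scripts, min(sample_size, len(scripts)))
    mean_gap = sum(abs(s["original_mark"] - s["re_mark"])
                   for s in sample) / len(sample)
    return {"mean_discrepancy": round(mean_gap, 2),
            "needs_support": mean_gap > flag_threshold}
```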
As Albania builds the capacity and systems needed to expand the use of on-screen marking and implement computer-based assessments in the medium term (see Recommendation 2.2.2), the ministry may choose to move to a system of centralised administration and marking of the National Basic Education Examination, as is currently the case for the State Matura Examination. The ministry will need to provide additional resources to the ESC for it to carry out any new expanded mandate.
Provide incentives and improved training to teachers to strengthen the reliability of marking and administration
In addition to more oversight, steps are also needed to attract and train more qualified examiners. Prior to 2019, there were no incentives to apply to this role. Though the practice of not remunerating teachers was due to change in 2019, the ministry will need to ensure remuneration is high enough to attract quality candidates. To further incentivise teachers, this role should be recognised in the career structure. For example, the role of evaluator could be incorporated into “Level 4” in the career structure proposed in this review (see Chapter 3). The competencies needed for this role - in assessment marking, for example - could be incorporated into the differentiated teaching standards.
Training for administrators and evaluators also needs to be improved. Individual local education offices design and deliver their own training, so its quality varies. While the ESC trains representatives from local education offices, this training has been inadequate for ensuring the reliability of results. To ensure reliability, administrators and evaluators must follow the strict procedures set forth in ministry regulations. The ESC should be responsible for ensuring that administrators and evaluators of the National Basic Education Examination receive adequate training that is designed at the national level. This could be accomplished through a train‑the‑trainer model, whereby those in local education offices who train administrators and evaluators must attend training delivered by the ESC. Additionally, as is done with the State Matura, evaluators should be required to pass a test developed by the ESC to certify their role.
Revise test items to improve reliability and alignment with the curriculum
As Albania reviews the design of the National Basic Education Examination, it should consider the following to improve the properties of the exam:
Increase the number of test items. The ESC should consider increasing the number of items within each test to improve its general psychometric properties and, at the same time, to assess a broader set of competencies. Based on the test samples reviewed by the OECD, the number of items of medium difficulty seems limited, which restricts the capacity of the tests to discriminate across the ability range. Albania could increase the number of medium difficulty items within the existing framework because each test lasts 150 minutes with a maximum possible score of 50 points. This allows 3 minutes per marking point, which is more generous than the time typically allowed at this level in other examination systems. For example, in Singapore’s Primary School Leaving Examination, the first part of the test, which includes multiple-choice and short-response questions, allows 1.3 minutes per marking point (Singapore Examinations and Assessment Board, 2019[81]). Part two, which includes short‑answer and long‑answer questions, allows 1.6 minutes per marking point.
Remove restrictions on multiple-choice items. The mathematics test could be made more reliable by including 20-25 multiple-choice items rather than the 13 currently required, especially if these items were targeted at the middle of the ability range. In the Albanian language test, removing the 13‑item restriction on multiple‑choice questions would give test setters more freedom in designing the test, particularly when such items are text‑based. For example, test setters would have greater flexibility to choose from a wider range of texts and to vary their length and the number of constructed‑response items. This would allow them to generate better test items, particularly ones that test skills in context.
Revise the quality of test items. Most free‑response items are not functioning as multi‑point items but rather dichotomously, meaning students score either all or none of the points, especially in mathematics. While this ensures discrimination at the extremes of the ability range, it weakens discrimination in the middle of the range, where most students are found. The functioning of such items should be checked through statistical analysis (see the sketch after this list) and, where necessary, their scoring should be revised. Complementary items may then be introduced to ensure better syllabus coverage and enhanced discrimination across the ability range.
Include more application of knowledge and skills. A national examination aligned to a competency‑based curriculum often includes more application and problem solving in real‑world contexts (see Recommendation 2.2.2), as is described, for example, in the 2018-2019 Albanian National Basic Education Examination Programme for mathematics. While this review was unable to analyse test samples from the 2019 examination, one would expect to see significant shifts in test items as compared to those prior to the curriculum‑aligned examination in 2019. For example, in Albanian language, students would engage with more real-world rather than “literary” sources of text. Tasks in mathematics would prompt students to use authentic data and practical contexts in addition to traditional abstract and “formal” mathematics.
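The statistical check of item functioning referred to above can be simple. The Python sketch below, a minimal illustration assuming a matrix of per-student item scores, flags multi-point items on which few students earn partial credit - the dichotomous functioning described above. The 15% mid-range floor is an illustrative threshold, not an established psychometric standard.

```python
def flag_dichotomous_items(scores, max_points, midrange_floor=0.15):
    """scores: one list of item scores per student; max_points: per-item maxima.
    Flags multi-point items where few students land between 0 and full credit."""
    flagged = []
    for j in range(len(max_points)):
        if max_points[j] < 2:
            continue  # single-point items are dichotomous by design
        item_scores = [s[j] for s in scores]
        midrange = [x for x in item_scores if 0 < x < max_points[j]]
        share = len(midrange) / len(item_scores)
        if share < midrange_floor:
            flagged.append((j, round(share, 2)))
    return flagged

# Toy data: item 0 behaves dichotomously (scores cluster at 0 or 3 points),
# while item 1 spreads students across the score range.
scores = [[0, 2], [3, 1], [0, 2], [3, 0], [0, 1], [3, 2]]
print(flag_dichotomous_items(scores, max_points=[3, 2]))  # -> [(0, 0.0)]
```

Items flagged in this way would then be inspected by item writers to see whether the marking scheme, rather than the task itself, is collapsing partial credit.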
As the ESC engages in an ongoing review of the National Basic Education Examination design, it should also expand the pool of teachers who participate in the development of test items to build system capacity, as is recommended in this review for the State Matura Examination (see Recommendation 2.2.2).
Assess a broader set of competencies
In addition to achievement in traditional subject areas, Albania’s curriculum framework sets the expectation that students will master the seven key competencies for lifelong learning (AQAPUE, 2014[14]). However, these competencies are not systematically assessed, which means students and parents, as well as teachers, administrators and policy makers, have little information on student learning and progress in these areas. Assessing these competencies through a formal examination can also have positive backwash effects, helping teachers and students take competencies more seriously and helping to shift the focus of teaching and learning in classrooms. In addition, the type of quality investment needed to develop grading criteria for a formal examination would yield benefits for classroom assessment, including portfolio assessment.
In order to evaluate students’ achievement of a wider range of competencies, a school‑based, cross-disciplinary project could be introduced as a component of the National Basic Education Examination. Such a component would build on the “curriculum project” that is already required as part of classroom assessment, and it could assess transversal skills found within the key competencies, such as communication, use of digital media and ICT, collaboration, problem solving and project management. In Ireland, for example, the Junior Cycle reform, which accompanied the curriculum reform, was deliberately designed to include more project-based work and to help shift the focus of assessment toward supporting teaching and learning throughout the whole lower secondary education cycle (MacPhail, Halbert and O’Neill, 2018[82]) (see Box 2.5).
Box 2.5. Lower secondary examination in Ireland
In 2015, Ireland introduced a new framework for the Junior Cycle of education (lower secondary level, three years in total). An assessment model called the Junior Cycle Profile of Achievement (JCPA) is included in the framework. The reform takes a dual approach to assessment, increasing the focus on classroom-based and formative assessment alongside the final external examination.
Under the JCPA model, students must take two Classroom-Based Assessments (CBAs) in each subject, one in their second year and one in their third year. These assessments might include oral presentations, written work, practical activities, artistic performances and scientific experiments. Teachers are provided with assessment criteria, called Features of Quality, and other guidelines and materials for engaging in CBAs. In addition, teachers participate in Subject Learning and Assessment Review meetings organised by subject to discuss students’ work related to the CBAs. The purpose of these meetings is to help teachers be more consistent in their judgements, provide better feedback to students and align their judgements more closely with assessment standards, including the Features of Quality.
Related to the second CBA is the written Assessment Task, which requires that students demonstrate an understanding of the knowledge and skills covered in the second CBA. This task is specified and published by the National Council for Curriculum and Assessment (NCCA). Students complete the task in class under the supervision of teachers and in accordance with guidelines provided by the NCCA. The task is marked centrally by the State Examinations Commission (SEC).
At the end of their third year, students take external examinations in most subjects. All exams are created, administered and marked centrally by the SEC. Most subjects have only one common level of difficulty, though English, Mathematics and Irish have two levels (ordinary and higher).
As education in Ireland is compulsory up to age 16, or three years of secondary education, students who receive their junior cycle certification must choose whether to continue with schooling or pursue other training opportunities. Their assessment results in junior cycle – classroom-based and external – act as key pieces of information that help them make this important decision.
Sources: Ireland Department of Education and Skills (2015[83]), Framework for Junior Cycle 2015, http://www.education.ie (accessed on 18 December 2019); Eurydice (2018[84]), Assessment in Lower Secondary Education – Ireland, https://eacea.ec.europa.eu/national-policies/eurydice/content/assessment-lower-secondary-education_en (accessed on 18 December 2019); NCCA (n.d.[85]), Junior Cycle, https://www.curriculumonline.ie/Junior-cycle/ (accessed on 19 December 2019).
Albania will need to clearly define the structure and provide detailed assessment criteria to standardise the marking of the school-based project component. This will help ensure a high enough level of reliability and trust in the examination, which is important given its role in the certification of student completion of basic education. The Quality Assurance Agency, in co-ordination with the ESC, will need to:
Develop criteria for the design and marking of the project. These criteria would include the time allocated to teachers and students for developing the project, the target competencies or learning outcomes and their weightings, the final product of the project, how the product will be assessed (e.g. by panel) and the marking scheme for the product. In Ireland, teachers are provided with assessment guidelines and assessment criteria for engaging in Classroom-Based Assessments (CBAs) in lower secondary education (see Box 2.5).
Train teachers to supervise the project-based assessment and to mark it. This training could be included in mandatory training on assessment as described in this review. It could also be included as a focus in both professional learning networks and in school-based subject teams.
Introduce moderation, including a role for an external evaluator. An external evaluator would either serve on a panel that marks the project-based assessment or mark it themselves. This external evaluator could be a teacher from another school who has been certified as an evaluator for the State Matura or, once the ESC begins certifying them, as an evaluator for the National Basic Education Examination. The ESC will need to monitor the results of this school‑based assessment closely (a minimal sketch of such monitoring follows below).
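As a minimal illustration of how such monitoring could work, the Python sketch below compares a school's project marks with an external evaluator's re-marks on the same sample of projects and flags schools whose marking drifts beyond a tolerance. The two-point tolerance and the data shown are invented for the example.

```python
from statistics import mean

def moderate_school(school_marks, external_marks, tolerance=2.0):
    """Compare a school's project marks with an external evaluator's re-marks
    on the same projects; flag the school if marks drift beyond tolerance."""
    gaps = [s - e for s, e in zip(school_marks, external_marks)]
    bias = mean(gaps)  # positive = school marks more leniently than the evaluator
    return {"mean_gap": round(bias, 2), "refer_for_remarking": abs(bias) > tolerance}

# Toy data: the school consistently awards about three points more than
# the external evaluator, so it would be referred for re-marking.
print(moderate_school(school_marks=[42, 38, 45, 40],
                      external_marks=[39, 35, 41, 37]))
```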
Recommendation 2.2.2. Review the design, administration and scoring of the State Matura to improve the exam’s quality
In 2019, the State Matura Examination was re‑designed in an effort to bring it more into line with the 2014 curriculum framework. However, test items in the new State Matura do not yet reflect the authentic contexts relevant to learners that the competency-based curriculum demands, particularly in mathematics, where traditional, formal approaches continue to dominate over more applied ones. This section suggests how item design and development might be improved and how teachers’ item‑writing skills might be strengthened. It also argues for the value of Albania moving in the future to computer‑based assessments, building on the recent introduction of digital marking. This has the potential to bring gains not only in test quality, but also in security, efficiency, reliability and, over time, cost-effectiveness.
Improve training and broaden the pool of potential test writers
While the ESC’s item writers have attempted to respond to the new specifications of the State Matura Examination, the ESC needs to provide more training and orientation in developing items that reflect the competencies required by the new exam programme. For example, sample test items for the 2019 State Matura test in mathematics use few contexts that are practical and relevant to young people. The ESC training for item writers and test developers should include how to test the application of knowledge and skills and how to develop items set in authentic, real-world contexts. In the UK GCSE mathematics exams, item writers address this issue by developing questions in contexts which are familiar to virtually all students (e.g. ordering meal combinations at a fast-food restaurant or calculating the number of text messages sent between friends) (OCR, 2018[86]; Assessment and Qualifications Alliance, 2017[87]).
As Albania reviews the training provided to test item writers and test developers, it should consider how it could use the item development process to build capacity across the teacher workforce. At present, the ESC develops items for the State Matura with small subject groups that include teachers. In 2019, five teachers participated in each subject team, and the majority of these teachers had previously served as test item writers for the State Matura. Over time, the ESC should look more deliberately at how it can extend the pool of teachers who contribute to item-writing. For example, the ESC might consider offering specific training on item-writing for interested teachers in lower and upper secondary education who demonstrate subject-area expertise (Kuan, 2011[88]). Universities can also be encouraged to offer specialised post-graduate courses in item-writing and test development. After being trained, these teachers might be invited to develop test items, possibly through an online portal, and to submit them for review and comments to more experienced item writers or a group of experts (Waddington, Nentwig and Schanze, 2008[89]). Items that meet quality standards could be included in the item pool. Teachers might then be tasked with sharing what they learnt with other teachers through the professional learning networks or school‑based subject teams. Teachers will also need incentives to participate in item-writing. Item writers should be remunerated, and training and university courses should count toward advancing to Level 4 or Level 5 on the new career structure or toward taking on a role such as assessment co-ordinator (see Chapter 3).
Eliminate the pre-determined cut-score and move to a norm-referenced or criteria-related approach to standard-setting
Currently, the pass/fail cut-score for State Matura Examination tests and National Basic Education Examination tests is set at a fixed level of 20% of the maximum possible score. This is clear and easy for all stakeholders to understand, but it neither takes into account possible variations in the difficulty of tests from one year to the next nor links results to expected levels of achievement found in the curriculum. A fixed cut‑score can also produce comparisons of results over time that are misleading: an increase in the pass rate may in fact be due to the use of an easier test rather than an increase in the absolute level of achievement.
In the medium term, as part of the ongoing review of both national examinations, Albania should move to either a norm-referenced or a criteria‑related approach to standard‑setting on both exams. The following considerations should be taken into account, alongside building political and public support for the chosen course (a simple simulation contrasting a fixed cut-score with norm-referencing follows this list):
In the norm‑referencing approach used in OECD countries such as Finland, students are classified based on a comparison among them, which means their scores have meaning only relative to the scores of other students (OECD, 2013[2]). The proportions of the testing cohort falling into each reporting category are set in advance, which means the level of difficulty of the test does not lead to advantage or disadvantage for individual students. The drawback is that the relative outcomes are always the same (i.e. the same proportions in each reporting category), and absolute improvements or declines in student learning outcomes will not affect these results. This means the exam could not be used to monitor trends in student learning. If Albania chooses norm-referencing, trends in exam results should not be included in the school report card (see Chapter 4).
In criteria-related systems, found in many OECD countries such as Slovenia, Lithuania and Latvia, test items and student responses are analysed against expected levels of achievement. This approach to standard-setting is used to make judgements about absolute levels of performance in relation to established standards or criteria (OECD, 2013[2]). Using a mixture of subjective judgement and statistical evidence, grade thresholds are reviewed and adjusted annually to compensate for differences in test difficulty and, hence, maintain absolute standards. The drawback is that this process is less transparent and more complicated to explain to the general public. Moving to a criteria-related system would be a big cultural change for Albania and, though it aligns with the intended changes in classroom assessment practice, it may not be readily accepted in a system where this is not the tradition.
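To make the contrast concrete, the Python sketch below simulates the same cohort sitting an easier and a harder version of a 50-point test. Under the current fixed 20% cut-score, the pass rate swings with test difficulty; under norm-referencing, the cut-score moves instead and the failure share stays constant. All numbers (cohort size, score distributions, the 5% failure share) are illustrative assumptions, not Albanian data.

```python
import random
import statistics

rng = random.Random(0)

def simulate_scores(mean_score, n=1000, max_score=50):
    """Simulate one cohort's scores on a 50-point test."""
    return [min(max_score, max(0, round(rng.gauss(mean_score, 8))))
            for _ in range(n)]

def pass_rate_fixed_cut(scores, max_score=50, cut=0.20):
    """Albania's current rule: the pass mark is fixed at 20% of the maximum."""
    return sum(s >= cut * max_score for s in scores) / len(scores)

def norm_referenced_cut(scores, fail_share=0.05):
    """Norm-referencing: the share failing is fixed in advance (here 5%),
    so the cut-score moves with the cohort's score distribution."""
    return statistics.quantiles(scores, n=100)[int(fail_share * 100) - 1]

easier_test = simulate_scores(mean_score=18)  # same cohort, easier test form
harder_test = simulate_scores(mean_score=14)  # same cohort, harder test form
print(pass_rate_fixed_cut(easier_test), pass_rate_fixed_cut(harder_test))  # pass rate swings
print(norm_referenced_cut(easier_test), norm_referenced_cut(harder_test))  # cut-score moves instead
```

A criteria-related system would sit between these two poles: thresholds would move year to year, but by amounts judged to hold the achievement standard constant rather than the failure share.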
Prepare for the expanded implementation of digital marking in the short term and computer‑based assessment in the medium term
In 2019, Albania shifted from a paper-based to a digital system of marking, often referred to as on-screen marking (OSM) or e‑marking, for the multiple‑choice sections of the State Matura Examination. OSM offers several benefits over pen‑and‑paper marking, particularly because, once tests have been scanned and saved, they no longer need to move physically. This improves efficiency and security, as the single copy of a paper‑based test will typically change hands and locations more often than a secure digital copy (Coniam, 2009[90]). In terms of reliability, research suggests that paper‑based and on-screen marking offer comparable levels of reliability (Johnson, Nádas and Bell, 2009[91]; Johnson, Hopkin and Shiell, 2012[92]; Coniam, 2009[90]).
Albania’s shift toward OSM will also serve as a stepping stone toward computer‑based assessment, or the presentation of traditional tests and innovative item-types on-screen. This would bring several benefits, including: immediate marking; a complete database of item‑level and student‑level data; increased reliability of scoring for multiple‑choice questions; greater security; and, potentially, lower costs.
As Albania reviews its transition to OSM and looks toward implementation of computer‑based assessments, it should consider the following:
Develop a comprehensive plan for expanding OSM implementation and moving toward computer-based assessments. The plan should include the rationale for moving towards OSM and computer‑based assessments and the desired purposes and characteristics of future applications within Albania’s system of assessment (e.g. application to the National Basic Education Examination and the VANAF (see Chapter 5) and to open-ended items on the State Matura). The plan should also describe the technical specifications to be developed, in particular how students will encounter computer‑based tests and how student responses will be processed. For example, students could enter written responses on paper-based tests, which would be digitised for processing, or they could enter responses directly on-screen. Another consideration would be how tests are marked, automatically or by humans on-screen, and where they are marked, in regional centres or “at home”. Albania should draw on lessons learnt from the 2019 implementation of OSM for multiple-choice questions on the State Matura to inform decisions made in developing the plan and its underlying rationale. This is important as the plan will determine the infrastructure and software, as well as associated costs, required to refine and expand OSM and deliver computer-based assessments.
Invest in the modernisation of IT systems and infrastructure. The ministry will need to invest in the physical and logistical infrastructure needed to expand OSM. This includes providing a secure network and fit‑for‑purpose computing devices and meeting the needs of transforming a physical test into a secure digital file that can be marked on-screen. In Hong Kong (China), for example, tests are delivered to a scanning centre and scanned into digital files (see Box 2.6). If Albania chooses to conduct OSM in regional marking centres, additional investment would be needed to update the IT infrastructure. In Hong Kong (China), a multi‑million dollar grant for IT modernisation was allocated to the Hong Kong Examinations and Assessment Authority in 2005, almost two years prior to the implementation of OSM in Year 11 examinations in 2007 (Coniam, 2013[93]; Coniam, 2009[90]).
Ensure that item-types are adapted for digital marking. Albania should review how items are marked using OSM, namely whether multiple‑choice items are scored automatically or manually and whether markers score all open‑ended items in a single test or only certain items “cut” from full tests (see Box 2.6). The same decisions will need to be taken when planning the expansion of OSM to the National Basic Education Examination and the VANAF.
Ensure that the hardware and software allow tests to be easily navigated, manipulated and annotated. This is important because research suggests that digital annotation of text responses should resemble ink‑and‑paper annotation: annotation is linked to how markers engage with the text (e.g. active reading, deeper reflection), and keeping it familiar helps ensure that on-screen annotation does not lead to less accurate marks (Shaw, 2008[94]).
Train teachers in marking on-screen, including training specific to the hardware and software used. Drawing lessons from the software application for OSM used in the State Matura in 2019, the ESC should further develop and deliver training for markers. This training could be incorporated into the obligatory certification training for markers provided by the ESC. In Hong Kong (China), for example, teachers must complete training and demonstrate a required standard of marking before marking actual tests on-screen (Coniam, 2013[93]).
Box 2.6. On-screen marking in Hong Kong, China
In Hong Kong (China), on-screen marking (OSM) was first put into place by the Hong Kong Examinations and Assessment Authority (HKEAA) in 2007 for the marking of English and Chinese language papers of the Hong Kong Diploma of Secondary Education Examination (HKDSE). With this technology, the HKEAA intended to improve the quality, efficiency, reliability and security of its marking system. All scripts of the HKDSE administered by the HKEAA have been marked using OSM since 2012, and since 2015 this has included the written papers of all HKDSE subjects.
After candidates complete their examinations and the HKEAA collects them, answer scripts are scanned, saved and kept in a secure database for recordkeeping. Candidates’ scripts, which may include both multiple‑choice and long-answer questions, are then randomly distributed to markers using a secure intranet system accessible only to authorised users. Scripts are marked manually at assessment centres, which provide around 1 500 workstations for OSM. This means markers do not need to collect scripts or return them to the HKEAA, which improves security. Most markers hired by the HKEAA to carry out on-screen marking are qualified full‑time teachers.
How scripts are divided and distributed for marking depends on the subject area. For example, markers of science, technology, engineering and mathematics (STEM) subjects generally mark scripts by section (a set of questions with standardised responses), as most require closed‑ended responses. By contrast, markers of social science subjects are usually given specific questions to mark instead of a whole section, because scripts in these subjects combine extended responses (e.g. essay questions), structured responses and standardised responses (e.g. multiple‑choice questions).
In addition to improving security by eliminating the physical movement of scripts, on-screen marking has also improved the quality of marking. For example, OSM incorporates features such as the random distribution of special scripts with pre-assigned scores to verify whether markers are adhering to the pre-established marking standards, which helps improve marking consistency (a minimal sketch of this check follows the box). OSM has also increased the grading system’s efficiency. For example, scripts are immediately distributed to two markers at the same time for double marking. The calculation of marks is also improved, as marks are automatically assembled and checked by the computer system.
Sources: HKEAA (2018[95]), Onscreen Marking, Hong Kong Examinations and Assessment Authority, http://www.hkeaa.edu.hk/en/exam_personnel/osm/ (accessed on 19 September 2019); HKEAA (2015[96]), Onscreen Marking System, Hong Kong Examinations and Assessment Authority, http://www.hkeaa.edu.hk/DocLibrary/Media/Leaflets/leaflet_OSM_Dec2015_eng.pdf (accessed on 19 September 2019); Yang, Yan and Coniam (2017[97]), A Qualitative Study of Markers’ Perceptions on Onscreen Marking in Five Subject Areas, Educational Research and Evaluation, https://doi.org/10.1080/13803611.2018.1446836.
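The seeding mechanism described in Box 2.6 can be expressed in a few lines. The Python sketch below is a minimal illustration in the spirit of the HKEAA practice, not its actual system: the script identifiers, the one-point tolerance and the data format are all assumptions made for the example.

```python
import random

def build_marking_queue(live_scripts, check_scripts, seed=3):
    """Randomly interleave pre-scored check scripts into a marker's queue,
    so the marker cannot tell live scripts from check scripts."""
    rng = random.Random(seed)
    queue = list(live_scripts)
    for check in check_scripts:
        queue.insert(rng.randrange(len(queue) + 1), check)
    return queue

def marker_on_standard(marked_checks, tolerance=1):
    """marked_checks: (marker_mark, pre_assigned_mark) pairs for the check
    scripts; returns False when the marker drifts beyond the tolerance."""
    return all(abs(mark - reference) <= tolerance
               for mark, reference in marked_checks)

print(build_marking_queue(["s1", "s2", "s3", "s4"], ["check_a"]))
print(marker_on_standard([(12, 12), (8, 9), (15, 13)]))  # drifts by 2 -> False
```

A marker who fails the check would typically be paused, re-trained against the marking scheme and have their recent scripts re-marked before resuming.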
Develop more complex test items using computer-based assessment
As described in this review, Albania’s national assessment system could be improved to assess a wider range of competencies, particularly Albania’s key competencies for lifelong learning, which are currently not assessed by any national assessment or examination. As Albania looks toward the adoption of computer-based assessment, it should make use of this technology to develop more complex test items, as computer‑based assessment allows for item-types that can be used to measure higher‑order thinking and transversal skills such as problem solving (Tilchin and Raiyn, 2015[98]).
For example, in the United States, Smarter Balanced assessments, which are aligned to the Common Core State Standards and include questions measuring 21st century competencies, let students select and move items on-screen to construct their response, enabling more complex responses (Soland, Hamilton and Stecher, 2013[99]). Computer-based international assessments such as PISA are interactive and feature a combination of text, images and items that students can manipulate, such as clickable roadmaps that automatically calculate travel time, in order to solve real-world problems (OECD, 2018[100]). These advanced features offer assessors opportunities to measure how students apply skills and knowledge and to assess levels of achievement with respect to competencies.
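As an illustration of how such interactive items can be scored automatically, the Python sketch below scores a hypothetical clickable-map item in which students build a route and the system computes its travel time. The road network, the time target and the partial-credit rule are invented for this example and are not taken from PISA or Smarter Balanced.

```python
# Hypothetical road network: travel time in minutes per directed segment.
TRAVEL_TIME = {("home", "market"): 8, ("market", "school"): 7,
               ("home", "park"): 10, ("park", "school"): 12}

def score_route_item(route, target_minutes=20, full=2, partial=1):
    """Score a clickable-map item: full credit for a valid route within the
    target time, partial credit for any valid route, zero otherwise."""
    legs = list(zip(route, route[1:]))
    if not legs or any(leg not in TRAVEL_TIME for leg in legs):
        return 0  # broken or disconnected route earns no credit
    total = sum(TRAVEL_TIME[leg] for leg in legs)
    return full if total <= target_minutes else partial

print(score_route_item(["home", "market", "school"]))  # 15 min -> full credit (2)
print(score_route_item(["home", "park", "school"]))    # 22 min -> partial credit (1)
```

Because the scoring rule is explicit, items like this can be marked instantly and consistently, and the full response data (the route itself, not just the mark) can feed back into item analysis.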
Table of recommendations
| Policy issue | Recommendations | Actions |
| --- | --- | --- |
| 2.1. Supporting teachers to make better use of assessment to improve student learning | 2.1.1. Revise and further clarify national assessment policies | Define expected learning outcomes by grade level. Provide teachers with examples of student work. Provide teachers with disaggregated VANAF results to inform teaching and learning. Create a national policy on formative assessment. Reduce the frequency of marking. Better align teacher appraisal and school evaluation with the expectations of the assessment framework. |
| | 2.1.2. Provide teachers with guidelines and tools to help them improve their assessment practices | Provide further guidelines and tools for teachers on portfolio assessment. Develop diagnostic assessment tools for teachers. Help teachers provide better feedback to students on their progress. |
| | 2.1.3. Ensure that teachers have access to quality training on assessment and incentives to participate in such training | Provide clear guidance for preparation on assessment in initial teacher education. Provide mandatory and free training on key elements of the assessment framework. Encourage in-school teacher collaborative learning on assessment. Expand online supports for teachers. |
| 2.2. Ensuring the reliability and validity of the exam system | 2.2.1. Reinforce the reliability of the National Basic Education Examination and review its design | Give the ESC a mandate for quality control. Provide incentives and improved training to teachers to strengthen the reliability of marking and administration. Revise test items to improve reliability and alignment with the curriculum. Assess a broader set of competencies. |
| | 2.2.2. Review the design, administration and scoring of the State Matura to improve the exam’s quality | Improve training and broaden the pool of potential test writers. Eliminate the pre-determined cut-score and move to a norm-referenced or criteria-related approach to standard-setting. Prepare for expanded implementation of digital marking in the short term and computer-based assessment in the medium term. Develop more complex test items using computer-based assessment. |
References
[5] Absolum, M. et al. (2009), Directions for Assessment in New Zealand: Developing Students’ Assessment Capabilities, Ministry of Education, Wellington, http://assessment.tki.org.nz/content/download/5374/46264/version/4/file/Directions+for+Assessment+in+New+Zealand.PDF (accessed on 3 February 2020).
[68] Académie Caen (2016), LSU: Livret Scolaire Unique, Le Suivi Pédagogique de l’élève [LSU: Single School Record Book, Pedagogical Monitoring of the Student], Ministère de l’Éducation nationale et de la Jeunesse, https://www.ac-caen.fr/mediatheque/communication/actualites/2016/12/presentation_LSU_Caen.pdf (accessed on 25 September 2019).
[59] American Institutes for Research (2016), What Is Deeper Learning, and Why Is It Important? Results From the Study of Deeper Learning: Opportunities and Outcomes, AIR, https://www.air.org/sites/default/files/Deeper-Learning-Summary-Updated-August-2016.pdf (accessed on 25 September 2019).
[21] AQAPUE (2018), Curriculum Guidelines, Ministry of Education, Sports and Youth, Tirana, https://www.ascap.edu.al/category/udhezues/ (accessed on 25 September 2019).
[31] AQAPUE (2017), State Education Inspectorate Annual Report 2017, Agency for Quality Assurance in Pre-university Education, Tirana.
[24] AQAPUE (2016), Kurrikula Bërthamë për Arsimin e Mesëm të Lartë [Core Curriculum for Upper Secondary Education (Grades X, XI, XII)], Ministry of Education, Sports and Youth, Tirana.
[23] AQAPUE (2014), Kurrikula Bërthamë për Klasën Përgatitore dhe Arsimin Fillor [Core Curriculum for the Preparatory Class and Primary Education], Ministry of Education, Youth and Sports, Tirana.
[14] AQAPUE (2014), Korniza Kurrikulare e Arsimit Parauniversitar të Republikës së Shqipërisë [Curriculum Framework of Pre-University Education of the Republic of Albania], Ministry of Education, Youth and Sports, Tirana.
[51] AQAPUE (2011), Inspektimi dhe Vlerësimi i Brendshëm i Shkollës [School Inspection and Internal Assessment], Agency for Quality Assurance in Pre-University Education, Tirana.
[19] AQAPUE (n.d.), Student Assessment Framework in Pre-University Education, Ministry of Education, Sports and Youth, Tirana.
[87] Assessment and Qualifications Alliance (2017), GCSE Mathematics, AQA, https://filestore.aqa.org.uk/sample-papers-and-mark-schemes/2017/november/AQA-83001H-QP-NOV17.PDF (accessed on 25 September 2019).
[79] Australian Institute for Teaching and School Leadership (2017), Tools & Resources, https://www.aitsl.edu.au/tools-resources (accessed on 28 September 2019).
[71] Australian Institute for Teaching and School Leadership (2011), Australian Professional Standards for Teachers, Education Services Australia, https://www.aitsl.edu.au/docs/default-source/national-policy-framework/australian-professional-standards-for-teachers.pdf?sfvrsn=5800f33c_64 (accessed on 28 September 2019).
[6] Bishop, J. (1999), Are National Exit Examinations Important for Educational Efficiency?, Cornell University ILR School, https://digitalcommons.ilr.cornell.edu/cgi/viewcontent.cgi?article=1022&context=articles (accessed on 16 December 2019).
[4] Black, P. and D. Wiliam (1998), “Assessment and classroom learning”, Assessment in Education: Principles, Policy & Practice, Vol. 5/1, pp. 7-74, http://dx.doi.org/10.1080/0969595980050102.
[70] BOSTES (2016), Learning Assessment: A Report on Teaching Assessment in Initial Teacher Education in NSW, Board of Studies, Teaching and Educational Standards, Sydney, https://educationstandards.nsw.edu.au/wps/wcm/connect/c204171e-a570-4947-8107-dc934ab2f70b/learning-assessment-report.pdf?MOD=AJPERES&CVID= (accessed on 16 December 2019).
[56] Brown, C. and P. Mevs (2012), Quality Performance Assessment: Harnessing the Power of Teacher and Student Learning, Center for Collaborative Education; Nellie Mae Education Foundation, https://www.nmefoundation.org/getmedia/135171d7-0926-4664-9b61-2fa0d9dec7df/QPA-report-2-2012?ext=.pdf (accessed on 10 September 2019).
[62] Cambridge Assessment International Education (2018), Developing Your School with Cambridge: A Guide for School Leaders, University of Cambridge Local Examinations Syndicate, Cambridge, https://www.cambridgeinternational.org/Images/271302-developing-your-school-with-cambridge.pdf (accessed on 13 December 2019).
[54] Cohen, P. (1995), “Designing Performance Assessment Tasks”, ASCD, Vol. 37/6, http://www.ascd.org/publications/newsletters/education-update/aug95/vol37/num06/Designing-Performance-Assessment-Tasks.aspx (accessed on 16 December 2019).
[93] Coniam, D. (2013), “The Increasing Acceptance of Onscreen Marking – the ‘Tablet Computer’ Effect”, Journal of Educational Technology & Society, Vol. 16/3, pp. 119-129, http://www.jstor.org/stable/jeductechsoci.16.3.119 (accessed on 3 February 2020).
[90] Coniam, D. (2009), “A Comparison of Onscreen and Paper-Based Marking in the Hong Kong Public Examination System”, Educational Research and Evaluation, Vol. 15/3, pp. 243-263, http://dx.doi.org/10.1080/13803610902972940.
[25] Conley, D. and L. Darling-Hammond (2013), Creating Systems of Assessment for Deeper Learning, Stanford Center for Opportunity Policy in Education, Stanford, https://edpolicy.stanford.edu/sites/default/files/publications/creating-systems-assessment-deeper-learning_0.pdf (accessed on 16 December 2019).
[57] Danielson, C. and L. Abrutyn (1997), Introduction to Using Portfolios in the Classroom, Association for Supervision and Curriculum Development, Alexandria.
[26] Darling-Hammond, L. (2017), Developing and Measuring Higher Order Skills: Models for State Performance Assessment Systems, Learning Policy Institute, Washington, DC, https://learningpolicyinstitute.org/sites/default/files/product-files/Models_State_Performance_Assessment_Systems_REPORT.pdf (accessed on 4 April 2019).
[75] Darling-Hammond, L. and R. Rothman (eds.) (2011), Teacher and Leader Effectiveness in High-Performing Education Systems, Alliance for Excellent Education and Stanford Center for Opportunity Policy in Education, Washington, DC, https://edpolicy.stanford.edu/sites/default/files/publications/teacher-and-leader-effectiveness-high-performing-education-systems.pdf (accessed on 16 April 2019).
[18] Department for Education of England (2014), The National Curriculum in England: Framework Document, Department for Education of England, London, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/381344/Master_final_national_curriculum_28_Nov.pdf (accessed on 16 December 2019).
[65] Department of Education and Training, Victoria State Government (2019), Student Reporting Advice, https://www.education.vic.gov.au/school/teachers/teachingresources/practice/Pages/reportsparents.aspx#link98 (accessed on 13 December 2019).
[66] Department of Education and Training, Victoria State Government (n.d.), Student Reporting Advice: Student Report Writing Checklist, https://www.education.vic.gov.au/Documents/school/teachers/teachingresources/reporting/studentreportwritingchecklist.pdf (accessed on 13 December 2019).
[69] Duda, A. and E. Xhaferri (2013), Teacher Education and Training in the Western Balkans: Report on Albania, European Union, Luxembourg, http://dx.doi.org/10.2766/62639.
[53] Education Scotland (2015), How Good is Our School?, Education Scotland, Livingston, https://education.gov.scot/improvement/Documents/Frameworks_SelfEvaluation/FRWK2_NIHeditHGIOS/FRWK2_HGIOS4.pdf (accessed on 3 February 2020).
[12] European Parliament; Council of the European Union (2006), Recommendation of the European Parliament and of the Council of 18 December 2006 on Key Competences for Lifelong Learning, https://eur-lex.europa.eu/eli/reco/2006/962/oj (accessed on 16 December 2019).
[84] Eurydice (2018), Assessment in Lower Secondary Education - Ireland, https://eacea.ec.europa.eu/national-policies/eurydice/content/assessment-lower-secondary-education_en (accessed on 18 December 2019).
[49] Evertson, C. (ed.) (2013), Handbook of Classroom Management, Routledge, http://dx.doi.org/10.4324/9780203874783.
[42] Farai, C., L. Mapaire and A. Chindanya (2018), “Effectiveness of Feedback and How It Contributes to Improved Instruction and Learner Performance: A Case Study of Newly Qualified Mathematics Educators in Johannesburg West Schools in Gauteng Province”, Journal of Education, Society and Behavioural Science, Vol. 24/1, pp. 1-13, http://dx.doi.org/10.9734/jesbs/2018/36269.
[73] Goe, L., K. Biggers and A. Croft (2012), Linking Teacher Evaluation to Professional Development: Focusing on Improving Teaching and Learning, National Comprehensive Center for Teacher Quality, Washington, DC, https://eric.ed.gov/?id=ED532775 (accessed on 3 February 2020).
[77] Harrison, C. (2005), “Teachers developing assessment for learning: mapping teacher change”, Teacher Development, Vol. 9/2, pp. 255-263, http://dx.doi.org/10.1080/13664530500200251.
[32] Hipkins, R. (2007), Assessing Key Competencies: Why Would We? How Could We?, NZCER, Wellington.
[95] HKEAA (2018), Onscreen Marking, http://www.hkeaa.edu.hk/en/exam_personnel/osm/ (accessed on 19 September 2019).
[96] HKEAA (2015), Onscreen Marking System, Hong Kong Examinations and Assessment Authority, http://www.hkeaa.edu.hk/DocLibrary/Media/Leaflets/leaflet_OSM_Dec2015_eng.pdf (accessed on 19 September 2019).
[40] Hopfenbeck, T. et al. (2013), “Balancing Trust and Accountability? The Assessment for Learning Programme in Norway: A Governing Complex Education Systems Case Study”, OECD Education Working Papers, No. 97, OECD Publishing, Paris, https://dx.doi.org/10.1787/5k3txnpqlsnn-en.
[83] Ireland Department of Education and Skills (2015), Framework for Junior Cycle 2015, http://www.education.ie (accessed on 18 December 2019).
[72] Jenkinson, K. and A. Benson (2016), “Designing Higher Education Curriculum to Increase Graduate Outcomes and Work Readiness: The Assessment and Mentoring Program (AMP)”, Mentoring & Tutoring: Partnership in Learning, Vol. 24/5, pp. 456-470, http://dx.doi.org/10.1080/13611267.2016.1270900.
[92] Johnson, M., R. Hopkin and H. Shiell (2012), “Marking Extended Essays on Screen: Exploring the Link Between Marking Processes and Comprehension”, E-Learning and Digital Media, Vol. 9/1, pp. 50-68, http://dx.doi.org/10.2304/elea.2012.9.1.50.
[91] Johnson, M., R. Nádas and J. Bell (2009), “Marking essays on screen: An investigation into the reliability of marking extended subjective texts”, British Journal of Educational Technology, Vol. 41/5, pp. 814-826, http://dx.doi.org/10.1111/j.1467-8535.2009.00979.x.
[44] Kitchen, H. et al. (2019), OECD Reviews of Evaluation and Assessment in Education: Student Assessment in Turkey, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/5edc0abe-en.
[88] Kuan, L. (2011), EQUIP2 Lessons Learned in Education: Student Assessment, FHI 360, https://www.epdc.org/sites/default/files/documents/EQUIP2%20LL%20Student%20Assessment%20AAR.pdf (accessed on 19 December 2019).
[37] Li, R. et al. (2019), OECD Reviews of Evaluation and Assessment in Education: Georgia, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/94dc370e-en.
[43] Looney, J. (2011), “Integrating Formative and Summative Assessment: Progress Toward a Seamless System?”, OECD Education Working Papers, No. 58, OECD Publishing, Paris, https://dx.doi.org/10.1787/5kghx3kbl734-en.
[82] MacPhail, A., J. Halbert and H. O’Neill (2018), “The development of assessment policy in Ireland: a story of junior cycle reform”, Assessment in Education: Principles, Policy & Practice, Vol. 25/3, pp. 310-326, http://dx.doi.org/10.1080/0969594x.2018.1441125.
[38] Maghnouj, S. et al. (2020), OECD Reviews of Evaluation and Assessment in Education: Serbia, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/225350d9-en.
[58] Maier, A. (2019), Performance Assessment Profile: Envision Schools, Learning Policy Institute, http://www.learningpolicyinstitute.org/project/cpac (accessed on 25 September 2019).
[33] McClarty, K. and M. Gaertner (2015), Measuring Mastery: Best Practices for Assessment in Competency-Based Education, American Enterprise Institute, https://www.aei.org/wp-content/uploads/2015/04/Measuring-Mastery.pdf (accessed on 25 September 2019).
[67] Ministère de l’Éducation Nationale et de la Jeunesse (2017), Le Socle Commun et l’Evaluation des Acquis - Le Livret Scolaire [The Common Core and the Assessment of Attainment - The School Record Book], http://eduscol.education.fr/cid104511/le-livret-scolaire.html (accessed on 6 March 2018).
[61] Ministère de l’Éducation nationale et de la Jeunesse (n.d.), Évaluations des acquis et besoins des élèves au CP [Assessments of Pupils’ Attainment and Needs in the First Year of Primary School], https://eduscol.education.fr/cid142232/evaluations-2019-2020.html (accessed on 28 September 2019).
[3] Ministry of Education of New Zealand (2007), The New Zealand Curriculum, http://nzcurriculum.tki.org.nz/content/download/1108/11989/file/The-New-Zealand-Curriculum.pdf (accessed on 16 December 2019).
[64] Ministry of Education of New Zealand (n.d.), Diagnostic Assessment Occurs Before Learning, https://seniorsecondary.tki.org.nz/Mathematics-and-statistics/Assessment/Diagnostic-assessment (accessed on 16 December 2019).
[17] Ministry of Education of New Zealand (n.d.), Years and Curriculum Levels, New Zealand Government, http://nzcurriculum.tki.org.nz/The-New-Zealand-Curriculum (accessed on 25 September 2019).
[15] MoESY (2019), Nivelet e Arritjes së Kompetencave të Fushave të të Nxënit [Levels of Achievement of Learning Area Competencies Elementary School], Ministry of Education, Sports and Youth, Tirana.
[34] MoESY (2019), Provimi i Maturës Shtetërore 2019 Lënda: Matematikë (GJIMNAZ) Model Testi [2019 State Matura Examination, Subject: Mathematics (Gymnasium), Model Test], Ministry of Education, Sports and Youth, Tirana.
[36] MoESY (2019), Vendimi nr 295 datë 10.5.2019 “për përcaktimin e kritereve të notës mesatare në institucionet e arsimit të lartë për vitin akademik 2019-2020” [Decision No. 295, dated 10.5.2019 “On the Determination of the Average Score Criteria in the Higher Education], Ministry of Education, Youth and Sports, Tirana, http://darelbasan.edu.al/index.php/matura-shteterore/1246-vendimi-nr-295-date-10-5-2019-per-percaktimin-e-kritereve-te-notes-mesatare-ne-institucionet-e-arsimit-te-larte-per-vitin-akademik-2019-2020 (accessed on 25 September 2019).
[20] MoESY (2019), Vlerësimi i Nxënësit në Arsimin e Mesëm të Ulët dhe në Arsimin e Mesëm të Lartë [Evaluation of Students in Lower Secondary Education and Upper Secondary Education], Ministry of Education, Sports and Youth, Tirana.
[16] MoESY (2018), Nivelet e Arritjes së Kompetencave Klasa X, XI, XII [Levels of Achievement Class X, XI, XII].
[10] MoESY (2018), OECD Review of Evaluation and Assessment: Country Background Report for Albania.
[29] MoESY (2018), Organizimin dhe Zhvillimin e Provimeve Kombëtare të Maturës Shtetërore 2019 [On the Organization and Development of the National Examinations of 2019 State Matura], Ministry of Education, Sports and Youth, Tirana.
[30] MoESY (2017), Matura Shtetërore 2017 [2017 State Matura], Ministry of Education, Sports and Youth, Tirana, http://qsha.gov.al/msh.html (accessed on 18 September 2019).
[27] MoESY (2017), Provimet Kombëtare të Arsimit Bazë 2017 [National Exams of Basic Education 2017], Ministry of Education, Sports and Youth, Tirana.
[35] MoESY (n.d.), Program Orientues për Maturën Shtetërore: Lënda: Matematika Bërthamë Viti Shkollor 2018 -2019 [Orientation Programme for State Matura: Subject: Core Mathematics School Year 2018-2019], Ministry of Education, Sports and Youth, Tirana.
[22] Muskin, J. (2017), “Continuous Assessment for Improved Teaching and Learning: A Critical Review to Inform Policy and Practice”, Current and Critical Issues in Curriculum, Learning and Assessment No. 13, https://unesdoc.unesco.org/ark:/48223/pf0000255511 (accessed on 16 December 2019).
[63] National Center on Intensive Intervention (n.d.), Example Diagnostic Tools, American Institutes of Research, https://intensiveintervention.org/intensive-intervention/diagnostic-data/example-diagnostic-tools (accessed on 13 December 2019).
[85] NCCA (n.d.), Junior Cycle, https://www.curriculumonline.ie/Junior-cycle/ (accessed on 19 December 2019).
[39] NZCER (n.d.), Formative Assessment, https://www.nzcer.org.nz/formative-assessment (accessed on 28 September 2019).
[86] OCR (2018), “GCSE Mathematics”, https://www.ocr.org.uk/Images/528855-question-paper-paper-5.pdf (accessed on 25 September 2019).
[13] OECD (2019), OECD Future of Education and Skills 2030 Concept Note, OECD Publishing, http://www.oecd.org/education/2030-project/teaching-and-learning/learning/learning-compass-2030/OECD_Learning_Compass_2030_concept_note.pdf (accessed on 25 September 2019).
[1] OECD (2019), PISA 2018 Results (Volume I): What Students Know and Can Do, PISA, OECD Publishing, Paris, https://dx.doi.org/10.1787/5f07c754-en.
[76] OECD (2019), TALIS 2018 Results (Volume I): Teachers and School Leaders as Lifelong Learners, TALIS, OECD Publishing, Paris, https://dx.doi.org/10.1787/1d0bc92a-en.
[100] OECD (2018), Test Questions, https://www.oecd.org/pisa/test-2012/testquestions/question1/ (accessed on 28 September 2019).
[8] OECD (2017), Education in Lithuania, Reviews of National Policies for Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264281486-en.
[7] OECD (2016), Education Policy Outlook: Korea, OECD, Paris, http://www.oecd.org/education/policyoutlook.htm (accessed on 10 January 2019).
[9] OECD (2015), Education at a Glance 2015: OECD Indicators, OECD Publishing, Paris, http://dx.doi.org/10.1787/eag-2015-en.
[74] OECD (2014), TALIS 2013 Results: An International Perspective on Teaching and Learning, TALIS, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264196261-en.
[2] OECD (2013), Synergies for Better Learning: An International Perspective on Evaluation and Assessment, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264190658-en.
[47] OECD (2011), Lessons from PISA for the United States, Strong Performers and Successful Reformers in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264096660-en.
[45] OECD (2005), Formative Assessment: Improving Learning in Secondary Classrooms, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264007413-en.
[28] Ofqual (2018), GCSE 9 to 1 grades: a brief guide for parents, https://ofqual.blog.gov.uk/2018/03/02/gcse-9-to-1-grades-a-brief-guide-for-parents/ (accessed on 24 September 2019).
[55] Perlman, C. (2003), “Performance Assessment: Designing Appropriate Performance Tasks and Scoring Rubrics”, ERIC, p. 12, https://eric.ed.gov/?id=ED480070 (accessed on 16 December 2019).
[60] Peterson, A. et al. (2018), “Understanding Innovative Pedagogies: Key Themes to Analyse New Approaches to Teaching and Learning”, OECD Education Working Papers, No. 172, OECD Publishing, Paris, https://dx.doi.org/10.1787/9f843a6e-en.
[52] Santiago, P. et al. (2013), Teacher Evaluation in Chile 2013, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264172616-en.
[46] Secretaría de Educación Pública (2017), Nuevo Modelo Educativo [New Educational Model], https://www.gob.mx/sep/documentos/nuevo-modelo-educativo-99339 (accessed on 25 September 2019).
[94] Shaw, S. (2008), “Essay Marking on-Screen: Implications for Assessment Validity”, E-Learning and Digital Media, Vol. 5/3, pp. 256-274, http://dx.doi.org/10.2304/elea.2008.5.3.256.
[81] Singapore Examinations and Assessment Board (2019), PSLE, https://www.seab.gov.sg/home/examinations/psle (accessed on 4 March 2019).
[99] Soland, J., L. Hamilton and B. Stecher (2013), Measuring 21st Century Competencies: Guidance for Educators, RAND Corporation, https://asiasociety.org/files/gcen-measuring21cskills.pdf (accessed on 16 December 2019).
[78] Tang, S. et al. (2010), “A case study of teacher learning in an assessment for learning project in Hong Kong”, Professional Development in Education, Vol. 36/4, pp. 621-636, http://dx.doi.org/10.1080/19415250903554087.
[98] Tilchin, O. and J. Raiyn (2015), “Computer-Mediated Assessment of Higher-Order Thinking Development”, International Journal of Higher Education, Vol. 4/1, http://dx.doi.org/10.5430/ijhe.v4n1p225.
[50] Tunstall, P. and C. Gipps (1996), “Teacher Feedback to Young Children in Formative Assessment: A Typology”, British Educational Research Journal, Vol. 22/4, pp. 389-404, http://dx.doi.org/10.1080/0141192960220402.
[11] UNESCO (2017), Albania Education Policy Review: Issues and Recommendations (extended report), UNESCO, Paris, https://unesdoc.unesco.org/ark:/48223/pf0000259245 (accessed on 16 January 2020).
[80] UNESCO Institute for Statistics (2017), Quick Guide No. 3 Implementing a National Learning Assessment, UNESCO Institute for Statistics, Montreal, http://uis.unesco.org/sites/default/files/documents/quick-guide-3-implementing-national-learning-assessment.pdf (accessed on 3 February 2020).
[48] Victoria State Government (2018), Feedback and reporting, https://www.education.vic.gov.au/school/teachers/teachingresources/practice/Pages/insight-feedback.aspx (accessed on 12 December 2019).
[89] Waddington, D., P. Nentwig and S. Schanze (2008), Making it Comparable: Standards in Science Education, Waxmann Verlag GmbH.
[41] Wiliam, D. (2010), “The role of formative assessment in effective learning environments”, in The Nature of Learning: Using Research to Inspire Practice, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264086487-8-en.
[97] Yang, M., Z. Yan and D. Coniam (2017), “A Qualitative Study of Markers’ Perceptions on Onscreen Marking in Five Subject Areas”, Educational Research and Evaluation, Vol. 23/7-8, pp. 290-310, http://dx.doi.org/10.1080/13803611.2018.1446836.