This section explains the indices derived from the PISA 2022 student, school, well-being and Information and Communication Technology (ICT) familiarity questionnaires used in this volume. Several PISA measures reflect indices that summarise responses from students or school representatives (typically principals) to a series of related questions. The questions were selected from a larger pool on the basis of theoretical considerations and previous research. The PISA 2022 Assessment and Analytical Framework (OECD, 2023[1]) provides an in-depth description of this conceptual framework. Item response theory (IRT) modelling and classical test theory were used to test the theoretically expected behaviour of the indices and to validate their comparability across countries. For a detailed description of the methods, see the section “Statistical criteria for reporting on scaled indices” in this chapter, and the PISA 2022 Technical Report (OECD, forthcoming[2]).
This volume uses four types of indices: simple indices, complex composite indices, new scale indices and trend scale indices. In addition to these indices, several single items of the questionnaires are used in this volume. The volume also uses data collected on students’ performance in mathematics, reading and science. These assessments are described in the PISA 2022 Assessment and Analytical Framework (OECD, 2023[1]), the PISA 2022 Technical Report (OECD, forthcoming[2]) and in Volume I of PISA 2022 Results (OECD, forthcoming[3]).
Simple indices are constructed through the arithmetic transformation or recoding of one or more items, applied in the same way across assessments. Here, item responses are used to calculate meaningful indices, such as the recoding of the four-digit ISCO-08 occupation codes into the highest parental socio-economic index (HISEI), or the teacher-student ratio based on information from the school questionnaire.
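As a minimal sketch of what such an arithmetic transformation involves, the ratio index described above could be computed as follows (function and variable names are illustrative, not the variable names used in the PISA database):

```python
# Illustrative "simple index": a ratio computed by arithmetic
# transformation of two school-questionnaire items.
# Names (n_students, n_teachers) are hypothetical.

def teacher_student_ratio_index(n_students, n_teachers):
    """Return students per teacher; None when the ratio is undefined
    (zero or missing teacher count)."""
    if not n_teachers:
        return None
    return n_students / n_teachers

print(teacher_student_ratio_index(480, 32))  # 15.0
```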
Complex composite indices are based on a combination of two or more indices. The PISA index of economic, social and cultural status (ESCS) is a composite score derived from three indicators related to family background.
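The exact construction of ESCS is documented in the PISA 2022 Technical Report; as a hedged illustration only, a composite of this general kind can be formed by standardising each component indicator and averaging them per respondent (the equal weighting and the function names here are assumptions for the sketch, not the actual PISA procedure):

```python
import statistics

def standardise(values):
    """z-standardise a list of scores (mean 0, SD 1)."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

def composite_index(*components):
    """Illustrative composite: average equally weighted standardised
    components per respondent. A simplified stand-in for an index such
    as ESCS; the actual computation is in the Technical Report."""
    standardised = [standardise(c) for c in components]
    return [statistics.fmean(vals) for vals in zip(*standardised)]
```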
Scale indices are constructed by scaling multiple items. Unless otherwise indicated, the two-parameter logistic model (2PLM) (Birnbaum, 1968[4]) was used to scale items with only two response categories (i.e. dichotomous items), while the generalised partial credit model (GPCM) (Muraki, 1992[5]) was used to scale items with more than two response categories (i.e. polytomous items).1 Values of the index correspond to standardised weighted likelihood estimates (WLEs) (Warm, 1989[6]).
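In standard IRT notation (the symbols below are the conventional ones, not taken from the source: $\theta_j$ is the latent trait of student $j$, $a_i$ the discrimination of item $i$, and $b_i$ or $b_{iv}$ the item difficulty or threshold parameters), the two models can be written as:

```latex
% 2PLM: probability of endorsing dichotomous item i
P(X_{ij}=1 \mid \theta_j) \;=\;
  \frac{\exp\!\left[a_i(\theta_j - b_i)\right]}
       {1 + \exp\!\left[a_i(\theta_j - b_i)\right]}

% GPCM: probability of responding in category k = 0, \dots, m_i
% of polytomous item i (the empty sum for k = 0 is defined as 0)
P(X_{ij}=k \mid \theta_j) \;=\;
  \frac{\exp\!\left[\sum_{v=1}^{k} a_i(\theta_j - b_{iv})\right]}
       {\sum_{c=0}^{m_i} \exp\!\left[\sum_{v=1}^{c} a_i(\theta_j - b_{iv})\right]}
```

Note that the GPCM reduces to the 2PLM when an item has only two categories ($m_i = 1$).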
For details on how each scale index was constructed, see the PISA 2022 Technical Report (OECD, forthcoming[2]). In general, the scaling was done in two stages:
1. The item parameters were estimated based on all students from approximately equally weighted countries and economies;2 only cases with a minimum of three valid responses to the items that make up the index were included. For the trend scales, the scaling process began by fixing the item parameters of the trend items to the parameters that had been estimated for each group in the previous assessment, a procedure known as fixed parameter linking. To compute trends, a scale needed at least three trend items, but some trend scales consisted of both trend items and new items. In such cases, the item parameters for the trend items were fixed at the beginning of the scaling process, while the item parameters for the new items were estimated from the PISA 2022 data.
2. For new scale indices, the weighted likelihood estimates were then standardised so that the mean of the index for the OECD student population was zero and the standard deviation was one (countries were given approximately equal weight in the standardisation process2). For the trend scales, to ensure that scale scores from the current assessment are comparable with those from the previous assessment, the original WLEs of PISA 2022 were transformed using the same transformation constants that had been applied to the original WLEs of the assessment to which the current assessment was linked.
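The second stage above can be sketched as follows. This is an illustration under stated assumptions, not the actual PISA implementation: the weighting scheme is simplified to a single weight per case, and the trend transformation is represented by two hypothetical constants (mean_prev, sd_prev) standing in for the linking constants of the previous assessment.

```python
def standardise_wles(wles, weights):
    """New scales (illustrative): standardise WLEs so the weighted mean
    is 0 and the weighted SD is 1. In PISA, the weights would give
    OECD countries approximately equal weight."""
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, wles)) / total
    var = sum(w * (x - mean) ** 2 for w, x in zip(weights, wles)) / total
    sd = var ** 0.5
    return [(x - mean) / sd for x in wles]

def link_trend_wles(wles, mean_prev, sd_prev):
    """Trend scales (illustrative): apply the transformation constants
    taken from the previous, linked assessment instead of re-estimating
    them from the current data."""
    return [(x - mean_prev) / sd_prev for x in wles]
```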
Sequential codes were assigned to the different response categories of the questions in the order in which they appeared in the student, school, ICT or well-being questionnaire. For reversed items, these codes were inverted for the purpose of constructing indices or scales.
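The inversion of sequential codes for reversed items amounts to a simple recode, which can be sketched as (an illustrative helper, not a PISA function):

```python
def reverse_code(response, n_categories):
    """Invert sequential category codes 1..n for a reversed item.
    On a 4-point scale: 1 -> 4, 2 -> 3, 3 -> 2, 4 -> 1."""
    return n_categories + 1 - response

print([reverse_code(r, 4) for r in (1, 2, 3, 4)])  # [4, 3, 2, 1]
```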
Negative values on an index do not necessarily imply that respondents answered negatively to the underlying questions (e.g. reporting no support from teachers or no school safety risks). A negative value merely indicates that a respondent answered less positively than other respondents did, on average, across OECD countries. Likewise, a positive value indicates that a respondent answered more positively, on average, than other respondents in OECD countries did (e.g. reporting more support from teachers or more school safety risks); "positively" here means endorsing more of whatever the index measures, which is not necessarily favourable.
Some terms in the questionnaires were replaced in the national versions of the student, school, ICT or well-being questionnaire by the appropriate national equivalent (marked by angle brackets < > in the international versions of the questionnaires). For example, the term < qualification at ISCED level 5A > was adapted in the United States* to “Bachelor’s degree, post-graduate certificate program, Master’s degree program or first professional degree program”. All the context questionnaires, including information on nationally adapted terms, and the PISA international database, including all variables, are available at www.oecd.org/pisa.