The OECD would like to acknowledge and thank the Ayrton Senna Institute (São Paulo, Brazil), which supported the development phases of the psychometric work on the assessment, and 2E Estudios y Evaluaciones, the contractor that conducted data processing and scaling for the SSES 2023 Main Survey.
Social and Emotional Skills for Better Lives
Annex A. Technical background
Construction of social and emotional skill assessment scales
Social and emotional skill scales in SSES are scaled to fit approximately normal distributions with means around 500 and standard deviations around 100. In statistical terms, a one-point difference on a skill scale therefore corresponds to an effect size (Cohen’s d) of 0.01; and a 10-point difference to an effect size of 0.10.
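The correspondence between scale points and effect sizes follows directly from the definition of Cohen's d as a standardised mean difference; the sketch below illustrates the arithmetic (the function name and example values are illustrative, not part of SSES):

```python
def cohens_d(mean_a, mean_b, pooled_sd):
    """Standardised mean difference (Cohen's d) between two groups."""
    return (mean_a - mean_b) / pooled_sd

# On a scale with a standard deviation of about 100, a 10-point
# difference between two groups corresponds to an effect size of 0.10.
print(cohens_d(510, 500, 100))  # 0.1
```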
The SSES assessment, like all assessments, is susceptible to several possible sources of measurement error. Despite the extensive investments SSES makes in monitoring the translation process, standardising the administration of the assessment, selecting questions and analysing the quality of the data, complete comparability across countries and subpopulations cannot always be guaranteed. While self-report questionnaires are a preferred method for measuring psychological traits, they can be affected by respondents’ interpretation of the questionnaire items. Self-report measures are susceptible to multiple biases: social desirability bias, where students provide answers they think are more socially acceptable; reference-group bias, where students compare themselves to the people around them when answering questions, and the reference group itself can differ from one student to another and from school to school; and response-style bias, where students from different cultures show different response patterns, such as giving more extreme or more modest responses.
SSES acknowledges these potential biases and seeks to minimise their effect on the variables, and the relationships between variables, presented in this report.
Acquiescent response style
Acquiescence refers to a tendency among respondents to agree (or disagree) with both positively and negatively worded statements, irrespective of their content and direction. Such response styles may result in biased measures, and the calculation of acquiescence response sets (ARS) has been suggested as a way of modelling such response tendencies for Likert-type items (Primi et al., 2020[1]). One way to control for acquiescence is to use a balanced set of items per scale, in which positively and negatively worded items are paired within scales. One of the design features of the SSES assessment was to include both positively and negatively worded items within each item set measuring a particular construct scale; however, the items were not evenly balanced. To derive an acquiescence response set, 25 pairs of items across all scales were selected. To control for acquiescent response styles, Multiple Group Confirmatory Factor Analysis (MGCFA) models were estimated using acquiescence response sets as control variables as part of multiple indicator multiple cause (MIMIC) models, which generally showed improved model fit and higher levels of measurement invariance.
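A common way to quantify an acquiescence response set from balanced item pairs is to average raw responses across each positively/negatively worded pair *without* reverse scoring: a consistent respondent lands near the scale midpoint, while an acquiescent one lands above it. The sketch below illustrates this general idea; it is not the exact SSES computation, and the example values are invented:

```python
def acquiescence_set(item_pairs):
    """Mean raw response across balanced item pairs, without reverse scoring.

    item_pairs: list of (positive_item_score, negative_item_score) tuples,
    each scored 0-4. For a consistent respondent the two scores in a pair
    should sum to about 4, giving a mean near the midpoint (2); means well
    above 2 suggest agreement regardless of item direction.
    """
    scores = [s for pair in item_pairs for s in pair]
    return sum(scores) / len(scores)

# A consistent respondent: agreement with positive items mirrors
# disagreement with negative ones, so the mean sits at the midpoint.
print(acquiescence_set([(4, 0), (3, 1)]))  # 2.0

# An acquiescent respondent agrees with everything, including
# contradictory items, so the mean rises well above the midpoint.
print(acquiescence_set([(3, 3), (4, 3), (3, 4)]))  # ≈ 3.33
```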
Trend scales
Some social and emotional skill assessment scale items were replaced with new items between SSES 2019 and SSES 2023. To allow skill scales to be compared between sites in SSES 2019 and 2023, a set of social and emotional skill assessment scales were constructed using only items in common between the two years. These scales are referred to as ‘trend scales’ and are used for all analyses that compare sites in SSES 2019 and SSES 2023 in this report. Social and emotional skill assessment scales constructed for SSES 2023 using both trend items and new items were also produced and these are referred to as ‘main scales’.
Wherever possible, analyses use trend scales so that both SSES 2019 and SSES 2023 sites can be included. Where it is not possible to include SSES 2019 sites in analyses due to changes to the questionnaire between rounds – for example, levels of student absence and tardiness were not measured in SSES 2019 – main scales are used to compare between SSES 2023 sites.
Achievement motivation was measured in SSES 2019 as an ‘additional skill’ created from items used to evaluate other skills. In SSES 2023, achievement motivation is measured using a new set of dedicated items. For this reason, it was not possible to compute a trend scale for achievement motivation.
Cross-site comparability of social-emotional assessment scales
The SSES 2019 Technical Report (OECD, 2021[2]) and the SSES 2023 Technical Report (forthcoming) explain in detail the scaling procedures and the construct validation of all social-emotional assessment scales. This section presents a summary of the analyses carried out to validate the cross-site comparability of the social and emotional skill assessment scales used in this report. SSES 2019 and 2023 used three approaches to examine the comparability of scaled indices across sites: the internal consistency of scaled indices, factor analysis to assess construct dimensionality, and the invariance of item parameters. Based on these three approaches, all indices examined in this report meet the reporting criteria.

Internal consistency refers to the extent to which the items that make up an index are inter-related. Cronbach’s Alpha was used to check the internal consistency of each scale within sites and to compare it across sites. The coefficient of Cronbach’s Alpha ranges from 0 to 1, with higher values indicating higher internal consistency, and similar, high values across sites are an indication of reliable measurement across sites. Commonly accepted cut-off values are 0.9 for excellent, 0.8 for good, and 0.7 for acceptable internal consistency. In SSES 2023, the reliability of the social and emotional skill assessment scales was higher than 0.7 in 178 of the 225 site-scale combinations, with the following exceptions:
Achievement motivation: Delhi (0.65)
Assertiveness: Bogotá (0.66), Delhi (0.42), Kudus (0.60), Sobral (0.67)
Creativity: Delhi (0.58)
Curiosity: Delhi (0.69), Kudus (0.66)
Emotional control: Delhi (0.63)
Empathy: Bogotá (0.68), Delhi (0.53), Kudus (0.60), Sobral (0.65)
Energy: Bulgaria (0.67), Bogotá (0.68), Delhi (0.40), Kudus (0.64), Sobral (0.60), Ukraine (0.67)
Optimism: Delhi (0.53), Kudus (0.68)
Persistence: Delhi (0.60), Kudus (0.69)
Responsibility: Delhi (0.59), Sobral (0.68)
Self-control: Bulgaria (0.61), Bogotá (0.66), Delhi (0.51), Kudus (0.47), Mexico (0.69), Peru (0.69), Sobral (0.62), Ukraine (0.64)
Sociability: Delhi (0.66), Kudus (0.68)
Stress resistance: Bogotá (0.69), Delhi (0.42), Kudus (0.51), Sobral (0.65)
Tolerance: Bogotá (0.64), Delhi (0.56), Kudus (0.61), Mexico (0.69), Sobral (0.66), Ukraine (0.64)
Trust: Bulgaria (0.69), Delhi (0.50)
Exceptions for SSES 2019 are noted in the SSES 2019 Technical Report (OECD, 2021[2]).
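Cronbach's Alpha, the internal-consistency statistic used above, is computed from the item variances and the variance of the total score. A minimal sketch, assuming complete responses and at least two items:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's Alpha for a scale.

    items: one list of scores per item, all covering the same respondents
    in the same order.
    Alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores).
    """
    k = len(items)
    sum_item_var = sum(variance(item) for item in items)
    total_scores = [sum(resp) for resp in zip(*items)]
    return k / (k - 1) * (1 - sum_item_var / variance(total_scores))

# Two perfectly correlated items yield the maximum value of 1.0:
print(cronbach_alpha([[0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]))  # 1.0
```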
The analyses of the SSES data involved a series of iterative modelling and analysis steps. These steps included the application of confirmatory factor analysis (CFA) to evaluate constructs and a multiple-group confirmatory factor analysis (MGCFA) to review measurement invariance across groups (gender, age cohorts and sites). In assessing measurement equivalence for SSES trend scales, comparisons were made between cycle groups (Round 1 and Round 2). In addition, MGCFA models were estimated using acquiescence response sets as control variables as part of multiple indicator multiple cause (MIMIC) models, which generally showed improved model fit and higher levels of measurement invariance.
All items had a Likert-type format with five categories and included both positively and negatively worded statements. The five categories were ‘strongly disagree’, ‘disagree’, ‘neither agree nor disagree’, ‘agree’ and ‘strongly agree’. Each item was scored from 0 to 4 for items with positively worded statements and reverse scored for the negatively worded ones.
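The scoring rule just described can be written out directly (the category labels are from the text; the function name is illustrative):

```python
LIKERT = {
    "strongly disagree": 0,
    "disagree": 1,
    "neither agree nor disagree": 2,
    "agree": 3,
    "strongly agree": 4,
}

def score_item(response, positively_worded):
    """Score a Likert response 0-4, reverse-scoring negatively worded items."""
    raw = LIKERT[response]
    return raw if positively_worded else 4 - raw

print(score_item("strongly agree", positively_worded=True))   # 4
print(score_item("strongly agree", positively_worded=False))  # 0
```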
The SSES student surveys in Delhi (India), Helsinki (Finland), Mexico and Ukraine were conducted in Autumn 2023 and were therefore not included in the data for estimating the scaling parameters for the student direct assessment.
In testing for measurement invariance, three different models were specified and compared (i.e. configural, metric and scalar models):
Configural invariance is the least constrained model. In this model, it is assumed that the same items measure the underlying latent construct in all groups of reference (e.g. sites). If this holds, the latent construct is assumed to have the same meaning for all groups (i.e. the structure of the construct is the same). Configural invariance allows examining whether the overall factor structure stipulated by the measures fits well for all groups in the sample. However, for scales reaching only configural invariance, neither scores nor their associations can be directly compared across groups.
The metric level of invariance is achieved if the structure of the construct is the same across groups (i.e. configural invariance is achieved) and the strength of the association between the construct and items (factor loadings) is the same across groups. Metric invariance would allow for comparisons of within-group associations among variables across groups (e.g. correlations or linear regression), but not for the comparison of scale mean scores.
Scalar level invariance is achieved when metric invariance has been achieved and the intercepts/thresholds for all items across groups are equivalent. When scalar invariance is achieved, it is assumed that differences in scale means across groups are free of any cross-group bias. At this level of measurement equivalence, scale scores can be directly compared across groups.
Results of the MGCFA are presented in Table A.1. Finally, an Item Response Theory (IRT) Generalised Partial Credit Model (GPCM) was used to scale items and generate scores.
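Under a common parameterisation of the GPCM, the probability of responding in category k of an item depends on the latent skill θ, the item's discrimination and its step difficulties. A minimal sketch of the category-probability function (the parameter values in the example are illustrative, not estimated SSES parameters):

```python
import math

def gpcm_probability(theta, discrimination, steps, category):
    """P(response = category) under the Generalised Partial Credit Model.

    theta: latent trait value; discrimination: item slope;
    steps: step difficulties b_1..b_m for an item with categories 0..m;
    category: the response category whose probability is returned.
    """
    def log_numerator(k):
        # Empty sum for k = 0, so exp(0) = 1 for the lowest category.
        return sum(discrimination * (theta - b) for b in steps[:k])

    denominator = sum(math.exp(log_numerator(k)) for k in range(len(steps) + 1))
    return math.exp(log_numerator(category)) / denominator

# Probabilities over all five categories of a 0-4 item sum to one:
probs = [gpcm_probability(0.5, 1.2, [-1.0, -0.3, 0.4, 1.1], k) for k in range(5)]
print(round(sum(probs), 6))  # 1.0
```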
Table A.1. Levels of measurement invariance for social and emotional skills scales
| Scale | Age cohorts | Gender | Sites |
|---|---|---|---|
| Curiosity | Metric | Metric | Metric |
| Tolerance | Metric | Scalar | Metric |
| Creativity | Scalar | Scalar | Metric |
| Responsibility | Metric | Scalar | Metric |
| Self-control | Metric | Scalar | Metric |
| Persistence | Metric | Scalar | Metric |
| Achievement motivation | Metric | Scalar | Metric |
| Sociability | Metric | Scalar | Metric |
| Assertiveness | Scalar | Scalar | Metric |
| Energy | Metric | Metric | Metric |
| Empathy | Metric | Metric | Metric |
| Trust | Metric | Scalar | Metric |
| Stress resistance | Scalar | Metric | Metric |
| Optimism | Scalar | Scalar | Metric |
| Emotional control | Scalar | Metric | Metric |
Construction of background indices
This section explains the indices derived from the SSES 2023 background questionnaires. Several SSES measures reflect indices that summarise responses from students to a series of related questions. There are two different types of indices:
Simple indices: constructed using an arithmetic transformation or recoding of one or more items in exactly the same way across assessments. Here, item responses are used to calculate meaningful variables, such as the recoding of the four-digit International Standard Classification of Occupations (ISCO) 2008 codes into “Highest parents’ socio-economic index (HISEI)”.
Scale indices: constructed by combining multiple items intended to measure an underlying latent construct. These indices were scaled using the Generalised Partial Credit Model (GPCM) unless otherwise indicated. For example, the index of socio-economic status, based on data on parental education, parental occupation and home possessions, was derived from component scores obtained through principal component analysis.
Student-level simple indices
Student age
Student age (Age_Std) was calculated as the age in months at the time of questionnaire administration, i.e. the difference between the actual start date of the student questionnaire administration and the student’s date of birth. Generally, data from the Student Tracking Forms (STF) were given priority over information provided by students when responding to the questionnaire.
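Age in completed months between two dates can be sketched as follows (the function and variable names are ours, not SSES variable names):

```python
from datetime import date

def age_in_months(date_of_birth, administration_date):
    """Completed months between date of birth and questionnaire administration."""
    months = (administration_date.year - date_of_birth.year) * 12 \
             + administration_date.month - date_of_birth.month
    if administration_date.day < date_of_birth.day:
        months -= 1  # The current month is not yet completed.
    return months

print(age_in_months(date(2008, 3, 15), date(2023, 10, 1)))  # 186
```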
Gender
A student gender variable (Gender_Std) was computed using valid codes (i.e. not missing) from the student questionnaire variable STQM00401 (1 for girls, 2 for boys and 3 for other). When the questionnaire response was missing, STF_Gender from the Student Tracking Form was used.
Grades
SSES collected information on school grades in three subjects: reading (Sgrade_Read_Lang), mathematics (Sgrade_Math) and the arts (Sgrade_Arts). As different sites used different grading systems, all grades were transformed to a common scale from 1 to 50.
Parents’ level of education
In the student questionnaire, respondents were asked about the highest level of education of each of their parents with questions using nationally appropriate terms according to the International Standard Classification of Education scheme (ISCED) (UNESCO, 2011[3]). Respondents were asked to select from ten levels ranging from no completion of ISCED level 1 (primary education), through to completion of ISCED level 8 (Doctoral or equivalent level). An index, HISCED, was derived by taking the highest level of education of either parent from the student questionnaire. If data were available for only one parent, that level was used as the highest.
Parents’ highest occupational status
Occupational data was collected using open-ended questions in the student questionnaires (STQM011- STQM014). The responses were coded to four-digit ISCO codes and then mapped to the international socio-economic index of occupational status (ISEI) (Ganzeboom and Treiman, 2003[4]). The highest occupational status of parents (HISEI) corresponds to the higher ISEI score among parents or to the only available parent’s ISEI score. A higher ISEI score indicates higher levels of occupational status.
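Taking the higher of the two parental ISEI scores, falling back to the only available one, can be sketched as (missing scores represented as None for illustration):

```python
def highest_isei(mother_isei, father_isei):
    """HISEI: the higher of the parents' ISEI scores, or the only one available.

    Missing scores are represented as None; returns None when both are missing.
    """
    available = [v for v in (mother_isei, father_isei) if v is not None]
    return max(available) if available else None

print(highest_isei(37, 68))    # 68
print(highest_isei(None, 45))  # 45
```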
Immigrant background
Information on the country of birth of students and their parents was also collected. Included in the database are three country-specific variables related to the country of birth of the student, their mother and their father (STQM11901, STQM11902 and STQM11903). The variables indicate whether the student, mother and father were born in the country of assessment or elsewhere. The index on immigrant background (IMMBACK) is calculated from these variables and has the following categories: 1) native students (students born in the country of assessment with at least one parent also born in the country of assessment), and 2) non-native students (students born abroad and/or whose parents were born abroad). Students with missing responses for the student or for both parents were assigned missing values for this variable.
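The categorisation just described can be sketched with True/False flags for "born in the country of assessment" (the handling of a single missing parent is a simplification of ours, not specified in the text):

```python
def immigrant_background(student_native, mother_native, father_native):
    """IMMBACK sketch: 1 = native, 2 = non-native, None = missing.

    Arguments are True/False for 'born in the country of assessment',
    or None when unknown.
    """
    if student_native is None or (mother_native is None and father_native is None):
        return None  # Missing for the student, or for both parents.
    if student_native and (mother_native or father_native):
        return 1  # Born in the country, with at least one native-born parent.
    return 2

print(immigrant_background(True, True, False))  # 1
print(immigrant_background(False, True, True))  # 2
print(immigrant_background(None, True, True))   # None
```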
Life satisfaction
SSES asked (STQM13501) students: “Overall, how satisfied are you with your life as a whole these days?”. Students answered the question on a 10-point scale where 0 represents “not at all satisfied” and 10 represents “completely satisfied”. The final life satisfaction index variable (ST_LIFESAT) was transformed, with 50 the score of an average student and 10 the standard deviation.
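The transformation to a mean of 50 and a standard deviation of 10, applied to this index and to the scale indices described later, is a linear rescaling; a minimal sketch (the function name is ours, and the sample mean stands in for the score of an average student):

```python
from statistics import mean, stdev

def transform_index(scores, target_mean=50, target_sd=10):
    """Linearly rescale scores so the sample has the target mean and SD."""
    m, s = mean(scores), stdev(scores)
    return [target_mean + target_sd * (x - m) / s for x in scores]

rescaled = transform_index([2, 4, 6, 8, 10])
print(round(mean(rescaled), 6), round(stdev(rescaled), 6))  # 50.0 10.0
```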
Education expectations
SSES asked students what level of education they expect to complete (STQM13901). Response categories are based on the International Standard Classification of Education (ISCED). Response categories range from ISCED level 2 (lower secondary education), through to ISCED level 8 (Doctoral or equivalent level).
Career expectations
Students were asked “what kind of job [they] expect to have when [they] are about 30 years old” (STQM02401). This was an open-ended question and students were asked to enter a job title. Responses to this question were recoded to a four-digit ISCO-08 code based on the International Standard Classification of Occupations (ISCO). This variable was used to derive several indices related to career expectations:
Expectations of a managerial or professional career: refers to ISCO major groups 1 and 2.
Uncertain expectations: refers to students who did not cite a specific occupation they expect to have at age 30.
Health professionals: refers to ISCO sub-major groups 22 and 32 and code 2634, which include health professionals (doctors, nurses, veterinarians), health associate professionals (medical and pharmaceutical technicians, nursing and midwifery associate professionals, and veterinary technicians and assistants) and psychologists.
ICT, science and engineering professionals: refers to ISCO sub-major codes 21, 25, 31 and 35 which includes science and engineering professionals, information and communications technology professionals, science and engineering associate professionals and information and communications technicians.
Green jobs: refers to “environmentally friendly” occupations as classified by Scholl, Turban and Gal (2023[5]).
Teaching professionals: refers to ISCO sub-major group 23 which includes university and higher education teachers, vocational education teachers, secondary education teachers and primary school and early childhood teachers.
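Most of the groupings above reduce to prefix checks on the 4-digit ISCO-08 code; green jobs follow a separate classification and are omitted here. A sketch (codes as strings; the function name is ours):

```python
def career_expectation_flags(isco08_code):
    """Flag the prefix-based career-expectation groups for one ISCO-08 code."""
    major, sub_major = isco08_code[:1], isco08_code[:2]
    return {
        "managerial_or_professional": major in {"1", "2"},
        "health": sub_major in {"22", "32"} or isco08_code == "2634",
        "ict_science_engineering": sub_major in {"21", "25", "31", "35"},
        "teaching": sub_major == "23",
    }

print(career_expectation_flags("2634")["health"])    # True (psychologists)
print(career_expectation_flags("2320")["teaching"])  # True
```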
Career development activities
SSES asked students if they have done any of the following activities to find out about future study or types of work: “attended job shadowing or work-site visits”, “visited a job fair”, “spoke to a career advisor”, “completed a questionnaire to find out about [their] interests and abilities”, “researched the internet for information about careers”, “went to an organised tour in an ISCED 3-5 institution” or “researched the internet for information about ISCED 3-5 programmes”. The number of activities each student had done was calculated.
Entrepreneurial intention
SSES asked (STQM14101) students: “Do you see yourself starting your own business or company in the future?”. Students answered the question on a 10-point scale where 0 represents “not at all likely” and 10 represents “definitely”.
Student-level scale indices
Current psychological well-being
The index of current psychological well-being (ST_WELLBEING) was constructed using students’ responses about how they have been feeling over the last two weeks (“At no time”, “Some of the time”, “More than half of the time”, “Most of the time”, “All of the time”) in relation to the following statements: “I have felt cheerful and in good spirits”, “I have felt calm and relaxed”, “I have felt active and vigorous”, “I have woken up feeling fresh and rested” and “My daily life was filled with things that interest me”. Higher scale scores correspond to higher perceived levels of positive student well-being. The final current psychological well-being index variable (ST_WELLBEING) was transformed, with 50 the score of an average student and 10 the standard deviation.
Test and class anxiety
The index of test and class anxiety (ST_ANXTEST) was constructed using students’ responses about the extent to which they agree (“strongly disagree”, “disagree”, “neither agree nor disagree”, “agree”, “strongly agree”) with the following statements: “I often worry that it will be difficult for me taking a test”, “Even if I am well prepared for a test I feel very anxious”, “I get very tense when I study for a test”, “I worry that I will get poor marks in school” and “I feel anxious about failing in school”. Students received higher scores on this scale if they indicated higher levels of anxiety. The final test and class anxiety index variable (ST_ANXTEST) was transformed, with 50 the score of an average student and 10 the standard deviation.
Health behaviours
The index of health behaviours (ST_HEALTHBEH) was constructed using students’ responses about how often (“Never”, “Once a week or less”, “2-3 days a week”, “4-6 days a week”, “Every day”) they do the following: “Eat breakfast”, “Eat fruit and vegetables”, “Do at least 20 minutes of vigorous physical activity”, “Sleep 8 hours or more at night” and “Smoke cigarettes or drink alcohol”. Students received higher scores on this scale if they indicated healthier behaviours. The final health behaviours index variable (ST_HEALTHBEH) was transformed, with 50 the score of an average student and 10 the standard deviation.
Satisfaction with relationships
The index of students’ satisfaction with their relationships (ST_RELSATIF) was constructed using students’ responses about how satisfied they are with their relationships with their parents or guardians, friends, classmates and teachers. Students answered these questions – one for each relationship – on a 10-point scale where 0 represents “not at all satisfied” and 10 represents “completely satisfied”. Students received higher scores on this scale if they were more satisfied with their relationships. The final relationship satisfaction index variable (ST_RELSATIF) was transformed, with 50 the score of an average student and 10 the standard deviation.
Body image
The index of students’ satisfaction with their body image (ST_BODYIMAGE) was constructed using students’ responses about the extent to which they agree (“strongly disagree”, “disagree”, “neither agree nor disagree”, “agree”, “strongly agree”) with the following statements: “I like my look just the way it is”, “I consider myself to be attractive”, “I am concerned about my weight” and “I like my body”. Students received higher scores on this scale if they indicated higher levels of positive body image. The final body image index variable (ST_BODYIMAGE) was transformed, with 50 the score of an average student and 10 the standard deviation.
Bullying
The index of bullying (ST_BULLY) was constructed using students’ responses (STQM039) about how often (“Never or almost never”, “A few times a year”, “A few times a month”, “Once a week or more”) they experienced the following in school in the past 12 months: “Other students left me out of things on purpose”, “Other students made fun of me”, “I was threatened by other students”, “Other students took away or destroyed things that belonged to me” and “I got hit or pushed around by other students”. Students received higher scores on this scale if they indicated a higher frequency of occurrence of these situations. The final bullying index variable (ST_BULLY) was transformed, with 50 the score of an average student and 10 the standard deviation.
Absence and tardiness
The index of students’ absence and tardiness (ST_DISRUP) was constructed using students’ responses about how often in the past two weeks (“Never”, “One or two times”, “Three or four times”, “Five or more times”) they had done the following: “Arrived late for school”, “Skipped some classes” and “Skipped a whole school day”. Students received higher scores on this scale if they indicated higher levels of absence and tardiness. The final absence and tardiness index variable (ST_DISRUP) was transformed, with 50 the score of an average student and 10 the standard deviation.
Scaling related to the index of socio-economic status
A measure of parental socio-economic status (SES) was derived for each site, based on three indices: highest level of parental occupation (HISEI), highest level of parental education (PARED) and household possessions (HOMEPOS). The household possessions index (HOMEPOS) consists of student-reported possessions at home, resources available at home and the number of books at home. HOMEPOS is a summary index of all household and possession items (STQM130, STQM131, STQM133 and STQM134). Missing values for respondents with missing data on only one index variable were imputed with predicted values plus a random component, based on a regression of the other two index variables within sites. If data were missing on more than one index variable, the index was not computed for that student and a missing value was assigned. The variables, including imputed values, were then used in a principal component analysis at the site level. After the imputation process, each of the three indices was standardised to have a mean of 0 and a standard deviation of 1 across the participating sites. Lastly, the arithmetic mean of the three standardised indices was calculated to create the SES scale score for each student.
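The final steps, standardising the three components and averaging them, can be sketched as follows (a simplification that assumes imputation has already been done, and omits the site-level principal component analysis; function names and values are illustrative):

```python
from statistics import mean, stdev

def standardise(values):
    """z-standardise a component to mean 0, standard deviation 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def ses_scores(hisei, pared, homepos):
    """Arithmetic mean of the three standardised components, per student."""
    components = [standardise(v) for v in (hisei, pared, homepos)]
    return [mean(triple) for triple in zip(*components)]

scores = ses_scores([30, 50, 70], [10, 12, 16], [-1.0, 0.0, 1.0])
# Because each component is centred, the composite is centred on zero.
print(abs(round(mean(scores), 6)))  # 0.0
```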
Cross-site comparability of background scaled indices
While the SSES 2019 Technical Report (OECD, 2021[2]) and the SSES 2023 Technical Report (forthcoming) explain in detail the scaling procedures and the construct validation of all contextual questionnaire data, this section presents a summary of the analyses carried out to validate the cross-site comparability of the main scaled indices used in this report. The internal consistency of scaled indices, factor analysis to assess construct dimensionality and the invariance of item parameters are the three approaches that SSES used to examine the comparability of scaled indices across sites. Based on these three approaches, all indices examined in this report met the reporting criteria.
Internal consistency refers to the extent to which the items that make up an index are inter-related. Cronbach’s Alpha was used to check the internal consistency of each scale within the sites and to compare it amongst sites. The coefficient of Cronbach’s Alpha ranges from 0 to 1, with higher values indicating higher internal consistency.
Similar and high values across sites are an indication of reliable measurement across sites. Commonly accepted cut-off values are 0.9 for excellent, 0.8 for good, and 0.7 for acceptable internal consistency. The reliability of each of the scale indices described above (current psychological well-being, test and class anxiety, satisfaction with relationships, body image and bullying) was higher than 0.70 in each site, with the following exceptions:
Current psychological well-being: Delhi (0.61)
Body image: Delhi (0.49), Jinan (0.67), Kudus (0.42), Ukraine (0.63)
The average reliability of the health behaviours and the absence and tardiness scale indices was lower (0.58 and 0.68, respectively), and was below 0.70 in all sites except the following, where internal consistency was acceptable:
Absence and tardiness: Bulgaria (0.76), Helsinki (0.76) and Turin and Emilia-Romagna (0.75)
Exceptions for SSES 2019 are noted in the SSES 2019 Technical Report (OECD, 2021[2]).
The analyses of the background scale indices also involved a series of iterative modelling and analysis steps. Items from all scales were initially evaluated through an exploratory factor analysis (EFA). A confirmatory factor analysis (CFA) was then carried out on the scales, with only acceptable items from the EFA, to assess the constructs. Generally, maximum likelihood estimation and covariance matrices are not appropriate for analyses of categorical questionnaire items because the approach treats items as if they are continuous. Therefore, the SSES analysis relied on robust weighted least squares estimation (WLSMV) models (Muthén, du Toit and Spisic, 1997[6]; Flora and Curran, 2004[7]) to estimate the confirmatory factor analysis.
For ease of interpretation, all negatively worded items were reverse coded, so the highest value for each item represents a higher attribute.
The SSES student surveys in Delhi (India), Helsinki (Finland), Mexico and Ukraine were conducted in Autumn 2023 and were therefore not included in the data for estimating the scaling parameters for the student background questionnaire. Furthermore, a multiple-group confirmatory factor analysis (MGCFA) was used to test measurement invariance. For the student questionnaire, the MGCFA was evaluated for the following groups: gender, age cohorts and sites. In testing for measurement invariance, three different models were specified and compared (i.e. configural, metric and scalar models):
Configural invariance is the least constrained model. In this model, it is assumed that the same items measure the underlying latent construct in all groups of reference (e.g. sites). If this holds, the latent construct is assumed to have the same meaning for all groups (i.e. the structure of the construct is the same). Configural invariance allows examining whether the overall factor structure stipulated by the measures fits well for all groups in the sample. However, for scales reaching only configural invariance, neither scores nor their associations can be directly compared across groups.
The metric level of invariance is achieved if the structure of the construct is the same across groups (i.e. configural invariance is achieved) and the strength of the association between the construct and items (factor loadings) is the same across groups. Metric invariance would allow for comparisons of within-group associations among variables across groups (e.g. correlations or linear regression), but not for the comparison of scale mean scores.
Scalar level invariance is achieved when metric invariance has been achieved and the intercepts/thresholds for all items across groups are equivalent. When scalar invariance is achieved, it is assumed that differences in scale means across groups are free of any cross-group bias. At this level of measurement equivalence, scale scores can be directly compared across groups. Results of the MGCFA are presented in Table A.2. Finally, items were scaled using the Generalised Partial Credit Model (GPCM).
Note: More detailed information on measurement invariance of the scales in the background questionnaires can be found in chapter 14 of the SSES 2019 Technical Report (OECD, 2021[2]) and in the SSES 2023 Technical Report (forthcoming).
Table A.2. Levels of measurement invariance – scales in the student background questionnaire
| Scale | Age cohorts | Gender | Sites |
|---|---|---|---|
| Current psychological well-being | Metric | Scalar | Metric |
| Test and class anxiety | Scalar | Scalar | Metric |
| Health behaviours | Metric | Scalar | Metric |
| Satisfaction with relationships | Metric | Scalar | Metric |
| Body image | Metric | Metric | Metric |
| Bullying | Scalar | Scalar | Metric |
| Absence and tardiness | Metric | Scalar | Metric |
References
[7] Flora, D. and P. Curran (2004), “An Empirical Evaluation of Alternative Methods of Estimation for Confirmatory Factor Analysis With Ordinal Data.”, Psychological Methods, Vol. 9/4, pp. 466-491, https://doi.org/10.1037/1082-989x.9.4.466.
[4] Ganzeboom, H. and D. Treiman (2003), “Three Internationally Standardised Measures for Comparative Research on Occupational Status”, in Advances in Cross-National Comparison, Springer US, Boston, MA, https://doi.org/10.1007/978-1-4419-9186-7_9.
[6] Muthén, B., S. du Toit and D. Spisic (1997), Robust Inference using weighted least squares and quadratic estimating equations in latent variable modelling with categorial outcomes, http://www.statmodel.com/bmuthen/articles/Article_075.pdf.
[2] OECD (2021), SSES 2019 Technical Report, OECD Publishing, Paris, https://www.oecd.org/education/ceri/social-emotional-skills-study/sses-technical-report.pdf.
[1] Primi, R. et al. (2020), “Classical Perspectives of Controlling Acquiescence with Balanced Scales”, in Springer Proceedings in Mathematics & Statistics, Quantitative Psychology, Springer International Publishing, Cham, https://doi.org/10.1007/978-3-030-43469-4_25.
[5] Scholl, N., S. Turban and P. Gal (2023), “The green side of productivity: An international classification of green and brown occupations”, OECD Productivity Working Papers, No. 33, OECD Publishing, Paris, https://doi.org/10.1787/a363530f-en.
[3] UNESCO (2011), International Standard Classification of Education: ISCED 2011, UNESCO Institute for Statistics, Montreal, http://uis.unesco.org/en/isced-mappings.