Weighting of countries and subnational jurisdictions
For the calculation of percentages in tables and charts, responses were weighted so that each country is equally represented; that is, the total weight of each country was set equal to one. This aims to ensure that countries with responses for subnational jurisdictions are not over-represented in the calculated statistics. When multiple responses were received from the same country, each response was given an equal share of the country’s weight. For example, the same weight of 0.143 (one-seventh) was applied to each of the seven responses from Canada in Figure 4.1, which include two Canada-level responses and five responses from provinces and territories. The total weight of all Canadian responses is thus the same as the total weight of a country that submitted only one response.
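As a purely illustrative sketch of this weighting rule (not the code used to produce the report), the following snippet assigns each country a total weight of one and splits it equally across that country’s responses; the column names and response identifiers are hypothetical.

```python
import pandas as pd

# Hypothetical response list: seven Canadian responses and one country with a single response.
responses = pd.DataFrame({
    "country": ["Canada"] * 7 + ["Country X"],
    "response_id": [f"CAN_{i}" for i in range(1, 8)] + ["X_1"],
})

# Each response receives 1 / (number of responses from its country),
# so every country's weights sum to one.
responses["weight"] = 1 / responses.groupby("country")["country"].transform("size")

print(responses)
# The seven Canadian responses each receive a weight of 1/7 ≈ 0.143,
# matching the example for Figure 4.1.
```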
Weighting of ECEC curriculum frameworks
The ECEC in a Digital World policy survey collected some data on ECEC curriculum frameworks as classified by the standardised age groups described above. In these cases, weights and percentages were calculated for each age group separately so that each country’s total weight equals one in the responses for each age category.
In some countries and jurisdictions, information was provided for more than one curriculum framework. For example, Germany provided data on two curriculum frameworks for the age group “0-5/primary school entry”. The total weight for Germany was therefore split equally between these curricula so that each had a weight of 0.5 for the given age category in the results presented in Figure 4.4.
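A minimal sketch of the same rule applied per age group could look as follows; the column names and framework labels are hypothetical, and only the German example of two frameworks for the “0-5/primary school entry” age group is taken from the survey.

```python
import pandas as pd

frameworks = pd.DataFrame({
    "country":   ["Germany", "Germany", "Country X"],
    "age_group": ["0-5/primary school entry"] * 3,
    "framework": ["framework_1", "framework_2", "framework_3"],  # hypothetical labels
})

# Within each country and age group, the country weight of one is split
# equally across the reported curriculum frameworks.
frameworks["weight"] = 1 / frameworks.groupby(
    ["country", "age_group"]
)["framework"].transform("size")
# Germany's two frameworks each receive a weight of 0.5 for this age group.
```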
Weighting for questions with multiple items
Several questions in the ECEC in a Digital World policy survey included multiple items, asking respondents to choose from a selection of response categories. Figure 4.1 shows an example of the results from one such question, where question items included “Preparing young children for social and political participation in the digital age” and “Promoting young children’s agency and empowerment as users of digital technologies”. For each item, respondents could report “do not know” or assign a level of importance from “low” to “very high”.
If a country or jurisdiction did not select a response category for any item of a question (that is, all of its data for that question were missing), it was excluded from the calculation of weights. For example, Australia (Victoria) was excluded from Figure 4.1 because its responses to all items on policy challenges regarding digitalisation, young children and ECEC were missing (see Tables B.1 and B.2). However, countries and jurisdictions were included in the calculation of weights if they provided at least one non-missing answer to a relevant item. For example, Australia (South Australia) was included in the calculation of weights for Australia in Figure 4.1.
The same weight was used for each country and jurisdiction across every item relating to the same question. Thus, Australia and Australia (Tasmania) were each given a weight of 0.33 across all items, even though Australia (South Australia) had missing responses for some items.
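The exclusion and weighting rules for multi-item questions can be sketched as follows; the item names and individual answer values are hypothetical, while the pattern of missing responses mirrors the Australian example above.

```python
import numpy as np
import pandas as pd

answers = pd.DataFrame({
    "respondent": ["Australia", "Australia (South Australia)",
                   "Australia (Tasmania)", "Australia (Victoria)"],
    "country":    ["Australia"] * 4,
    "item_1":     ["high", np.nan, "very high", np.nan],  # hypothetical answers
    "item_2":     ["medium", "low", "high", np.nan],
})

item_cols = ["item_1", "item_2"]

# Respondents with all items missing are excluded from the weight calculation.
included = answers[answers[item_cols].notna().any(axis=1)].copy()

# The remaining respondents share the country weight equally, and the same
# weight is used for every item of the question.
included["weight"] = 1 / included.groupby("country")["country"].transform("size")
# Australia, Australia (South Australia) and Australia (Tasmania) each receive
# 0.33; Australia (Victoria) is excluded because all of its answers are missing.
```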
Weighting for questions with multiple response options
In some questions of the ECEC in a Digital World policy survey, countries and jurisdictions could select multiple response categories for a single question item. For example, the Canada-level response for school-based programmes reported that digital devices are provided to ECEC settings by both regional and local authorities (see Table B.10).
In these cases too, the total weight assigned to each country’s responses was set to one. Within each country, this weight was divided equally among the jurisdictions that provided at least one answer (in any response category) to at least one of the relevant question items. For example, all countries’ and jurisdictions’ responses were included in the calculation of weights for Table 5.3 because they all gave at least one non-missing answer to at least one of the four relevant question items. As above, the same weight was used for each country or jurisdiction across every item relating to the same question.
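For multiple-response questions, the selected categories can be represented as indicator variables. The sketch below uses hypothetical category columns; only the Canadian school-based example (regional and local authorities both selected) is taken from Table B.10.

```python
import pandas as pd

selections = pd.DataFrame({
    "respondent": ["Canada (school-based)", "Country X"],
    "country":    ["Canada", "Country X"],
    "regional_authorities": [True, False],  # who provides digital devices
    "local_authorities":    [True, True],   # (category columns are illustrative)
})

category_cols = ["regional_authorities", "local_authorities"]

# Jurisdictions that selected at least one category share the country weight.
answered = selections[selections[category_cols].any(axis=1)].copy()
answered["weight"] = 1 / answered.groupby("country")["country"].transform("size")

# Weighted share selecting each category; because selections can overlap,
# the shares may sum to more than 100%.
shares = {
    col: (answered["weight"] * answered[col]).sum() / answered["weight"].sum()
    for col in category_cols
}
```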
Treatment of “not applicable” and “not known”, and missing responses
Information reported by countries or jurisdictions as “not applicable” or “not known” was checked against explanatory notes provided by countries and jurisdictions and sometimes recoded to enhance the comparability of information. In cases where questionnaires presented blank items (missing responses), comments provided by countries and jurisdictions were considered for the interpretation of the data.
Weighted percentages were calculated using the weights assigned to each country or jurisdiction as described above. Generally, “not applicable” and “not known” answers were included in the calculation of weighted percentages.
However, countries or jurisdictions with missing data for a question item were excluded from the calculation of weighted percentages for that item. For example, Australia (South Australia) was excluded from the total N in the calculation of the weighted percentage of countries and jurisdictions identifying “Preparing young children for social and political participation in the digital age” as a policy challenge in Figure 4.1.
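A small sketch of how these rules affect a weighted percentage for a single item, using hypothetical respondents, weights and answers: “not known” stays in the denominator, while a missing answer removes the respondent from both numerator and denominator.

```python
import numpy as np
import pandas as pd

item = pd.DataFrame({
    "respondent": ["A", "B", "C", "D"],            # hypothetical respondents
    "weight":     [1.0, 0.5, 0.5, 1.0],
    "answer":     ["high", "not known", np.nan, "very high"],
})

# Respondents with a missing answer for this item are excluded entirely.
valid = item.dropna(subset=["answer"])

# "not known" is kept in the denominator but does not count towards the share.
share_high_or_above = (
    valid.loc[valid["answer"].isin(["high", "very high"]), "weight"].sum()
    / valid["weight"].sum()
)
```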
Significance tests
Where appropriate, tests of statistical significance were conducted to assess whether observed differences in the sampled data are likely to reflect actual differences in the population. In this report, differences are labelled as statistically significant when a difference of the observed size would occur less than 5% of the time if there were in fact no difference in the corresponding population values (statistical significance at the 95% level). In other words, the risk of reporting a difference as significant when no such difference actually exists is limited to 5%.
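The report does not spell out the specific test applied, so the following is only an illustration of the stated 5% criterion, assuming one common choice: a two-sided z-test for the difference between two independent proportions. The counts are invented.

```python
from statsmodels.stats.proportion import proportions_ztest

counts = [18, 9]    # hypothetical numbers of respondents reporting a challenge
nobs   = [30, 28]   # hypothetical group sizes

stat, p_value = proportions_ztest(counts, nobs)

# The difference is labelled statistically significant when p_value < 0.05
# (statistical significance at the 95% level).
significant = p_value < 0.05
```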
Calculation of indicators in Chapter 1
Tables 1.2 and 1.3 list countries and jurisdictions that have been identified as active on a particular policy lever and as addressing a particular policy challenge, respectively. The selection of countries and jurisdictions was informed by responses to the policy survey and case studies submitted by countries and jurisdictions, as well as by desk research by the OECD Secretariat and the qualitative analysis presented throughout the report. For the selection based on the policy survey, respondents’ answers to a set of relevant items were combined into weighted averages, and countries and jurisdictions scoring above a predefined threshold were identified.
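As a sketch of the scoring step only, weighted averages per country could be computed as below; the specific items, the numeric recoding and the threshold are assumptions made for illustration, not values taken from the report.

```python
import pandas as pd

scores = pd.DataFrame({
    "country":    ["Country A", "Country A", "Country B"],  # hypothetical data
    "weight":     [0.5, 0.5, 1.0],
    "item_score": [3, 2, 1],   # e.g. reported importance recoded to a 0-4 scale
})

THRESHOLD = 2.0  # assumed cut-off for illustration only

# Weighted average score per country, using the weights described above.
scores["weighted"] = scores["weight"] * scores["item_score"]
totals = scores.groupby("country")[["weighted", "weight"]].sum()
country_scores = totals["weighted"] / totals["weight"]

# Countries scoring above the threshold would be flagged as active on the lever.
active = country_scores[country_scores > THRESHOLD].index.tolist()
```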