How's Life in the Digital Age?
Chapter 3. Comparing well-being in the digital age across OECD countries
Abstract
This chapter combines the indicators presented in the previous chapter into two synthetic indices of digital risks and digital opportunities. These indices are found to be uncorrelated with each other, implying that greater digital opportunities are not necessarily associated with higher digital risks. Digital opportunities are found to be highly correlated with access to ICT, which suggests that providing broad access is a necessary but not a sufficient condition for creating opportunities. While digital risks are diverse in nature, the prevalence of digital security incidents is a powerful predictor of other digital risks, as countries' digital maturity and digital strategies can reduce digital risks across the board while strengthening digital security. As analysis based on available indicators is limited by the lack of harmonised data, this chapter also discusses the statistical agenda going forward.
The statistical data for Israel are supplied by and under the responsibility of the relevant Israeli authorities. The use of such data by the OECD is without prejudice to the status of the Golan Heights, East Jerusalem and Israeli settlements in the West Bank under the terms of international law.
Introduction
The previous chapter documented the opportunities and risks of the digital transformation in each dimension of people’s well-being. While taking stock of all available evidence is a necessary first step, it is useful to synthesise existing information in order to identify countries’ strengths and weaknesses and thereby inform an adequate policy response. One way to prepare for policy prioritisation and intervention is to draw international comparisons of the extent of digital opportunities and risks. Some countries are able to benefit from the opportunities brought about by the digital transformation while managing to mitigate its risks. Other countries have embraced the opportunities but also face high risks, and yet others neither enjoy the opportunities nor face the risks. The analysis starts by building logical clusters of countries, before discussing the underlying dynamics that might contribute to these different equilibria.
In practice, this chapter combines the various indicators presented in the preceding chapter to build two synthetic indicators of digital opportunities and risks, which are then used to map countries along these two axes. A first notable result is the complete absence of any cross-country correlation (i.e. 0.00) between overall opportunities and risks. This implies that embracing the opportunities of the digital transformation is not inevitably associated with being exposed to its risks. Conversely, countries that have captured few of the opportunities of the digital transformation may still be exposed to high risks.
As a second step, the chapter reviews some of the factors that prominently represent, and partly explain, overall digital opportunities and risks. Opportunities of the digital transformation are found to be highly correlated with Internet access, which suggests that providing broad access is a necessary condition for creating digital opportunities. However, providing access to digital technologies is not a sufficient condition for reaping the benefits of the digital transformation, as individuals also require the right economic, regulatory and cultural conditions to benefit from access.
While the opportunities of the digital transformation are strongly correlated with Internet access, this is not the case for risks. Risks of the digital transformation occur regardless of a country’s degree of digitalisation and seem to depend on other factors. This partially reflects the diversity of risks that the digital transformation brings about, as each risk is subject to its own range of enabling or inhibiting factors. The share of the population having experienced digital security incidents is the indicator that correlates most strongly with the overall index of digital risks. When it comes to explaining the drivers of countries’ performance, framework conditions and cultural factors both play an important role. A detailed examination of each country’s relative performance is provided in Chapter 4 through the presentation of specific country profiles.
A final, and perhaps most important, finding is that international comparisons are hampered by a lack of harmonised indicators, so that a strong effort from the statistical community is warranted in the future. The chapter highlights current issues and lays out a concrete statistical agenda going forward.
Evaluating individual country performance
Chapter 2 presented 33 indicators of opportunities and risks of the digital transformation across the 11 dimensions of well-being and the additional dimension of ICT access and use. While it is important to compare country performance in each of these dimensions, the large number of indicators makes it hard to synthesise how individual countries are performing across the board. For this reason, one of the key outputs of this report is the set of digital well-being wheels presented in Chapter 4 for individual countries. These wheels show the performance of an individual country across the 33 indicators relative to other OECD countries. The digital well-being wheel presents opportunities in dark blue and risks in yellow, with longer bars denoting either higher opportunities or higher risks. The first inner circle corresponds to the minimum outcome observed among OECD countries, while the second inner circle corresponds to the maximum outcome. The digital well-being wheel for Finland is shown in Figure 3.1 below. It shows that people in Finland reap many of the benefits of digitalisation and are relatively protected from its risks.
Comparing opportunities and risks across countries
In order to understand countries’ relative opportunities and risks related to the digital transformation, a synthetic indicator of digital opportunities is constructed by aggregating the 20 normalised indicators of opportunities across the dimensions discussed in Chapter 2 (ICT access and usage, education and skills, income, consumption and wealth, jobs, work-life balance, health, social connections, governance and civic engagement, and subjective well-being). For each indicator, countries are scored according to their comparative performance (0 when in the bottom third of all OECD countries, 0.5 when in the middle third, 1 when in the top third). Missing values are excluded, and ranks are renormalised between 0 and 1 to avoid distortions in case of data gaps. The resulting synthetic index of digital opportunities is calculated as the average score across the 20 indicators. A similar procedure is followed for the synthetic index of digital risks, which encompasses 13 risk indicators across the same dimensions (ICT access and usage, education and skills, jobs, work-life balance, health, social connections, governance and civic engagement, environmental quality and digital security).
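To make the aggregation concrete, the sketch below implements one reading of the scoring rule just described: tertile scores of 0, 0.5 and 1 averaged across indicators, with missing values skipped. The variable names (`opportunities`, `risks`) and the pandas-based layout are illustrative assumptions, not the statistical code used for this report.

```python
import pandas as pd

def tertile_score(series: pd.Series) -> pd.Series:
    """Score each country 0, 0.5 or 1 depending on whether it falls in the
    bottom, middle or top third of the OECD countries with available data."""
    ranks = series.rank(pct=True)  # percentile rank among countries with data
    return pd.cut(ranks, bins=[0, 1/3, 2/3, 1],
                  labels=[0.0, 0.5, 1.0],
                  include_lowest=True).astype(float)

def synthetic_index(indicators: pd.DataFrame) -> pd.Series:
    """Average the tertile scores across indicators; missing values are
    skipped so that data gaps do not distort the resulting index."""
    return indicators.apply(tertile_score).mean(axis=1, skipna=True)

# Hypothetical usage: `opportunities` and `risks` would be DataFrames indexed
# by country, with one column per normalised indicator from Chapter 2.
# digital_opportunities = synthetic_index(opportunities)
# digital_risks = synthetic_index(risks)
```

Averaging only over the indicators available for each country is one simple way of keeping the index between 0 and 1 despite data gaps, consistent with the renormalisation described above.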
Figure 3.2 depicts the results and maps countries in the dual space of opportunities and risks of the digital transformation. First, it is striking that there is no cross-country correlation between digital opportunities and risks (the correlation is equal to 0.00). The figure also shows that a number of countries located in the upper-right quadrant (e.g. Luxembourg, the United Kingdom and, to a lesser extent, Denmark, Sweden and the Netherlands) enjoy high opportunities while at the same time facing high risks. By contrast, countries such as Greece, Latvia and the Czech Republic benefit less from the opportunities of the digital transformation relative to other countries, but also face fewer risks.
However, there are also countries that combine low opportunities with high risks. Countries in the upper-left quadrant of Figure 3.2 (notably Chile, Italy and Hungary) have embraced few of the opportunities of the digital transformation but are exposed to high risks. Other countries, in the lower-right quadrant (e.g. Finland, Norway, Korea, Canada and Switzerland), enjoy high opportunities from the digital transformation while avoiding a number of its risks.
Figure 3.3 provides further details on countries’ performance and adds information about the number of missing indicators in each area. The highest scores in opportunities (Panel A) are generally found in countries with the highest levels of Internet penetration: the Nordic countries, Luxembourg, the Netherlands and the United Kingdom. In these countries, divides in Internet access and use among different population groups are small: many people have access to the services offered by the digital transformation and make use of them. However, there are differences in the ability of these digitally advanced countries to mitigate the risks of the digital transformation. Panel B shows, for instance, that in Sweden and Denmark high opportunities go together with high risks, while Finland faces low risks when it comes to the production of e-waste, the share of children experiencing cyberbullying, or abuses of personal information.
Figure 3.4, Panel A confirms the instrumental importance of Internet access for reaping opportunities for well-being in the digital age. There is a large and significant correlation (0.77) between the average rank in terms of overall digital opportunities and the share of households with broadband Internet access. The eight leading countries in terms of digital opportunities are also the leaders in terms of broadband Internet diffusion among households. Risks of the digital transformation are harder to characterise, as they are diverse. First, there is a low correlation (0.16) between digital risks and ICT access, suggesting that Internet diffusion does not mechanically bring about higher risks. Second, the strongest cross-country correlation (0.68) is observed between risks of the digital transformation and cyber-insecurity, measured as the percentage of people having experienced digital security incidents in the last three months. This suggests that the indicator of cyber-insecurity captures other important digital risks, possibly reflecting the overall digital maturity of each country as well as the scope and effectiveness of national digital strategies.
Government policy certainly plays a role in determining countries’ uptake of digital technologies and the mitigation of potential adverse effects. National digital strategies (NDS) have been implemented by the large majority of OECD governments, with the primary goals of strengthening e-government services, developing ICT infrastructure, promoting ICT skills and strengthening digital security (OECD, 2017). These strategies may pursue a variety of objectives: many countries consider effects on GDP growth, productivity and competitiveness, but only a few (Estonia, Lithuania, the Netherlands and Turkey) explicitly consider the strategy’s importance for advancing quality of life and well-being.
Among the priorities named in countries’ national strategies, there is substantial variation in the degree to which NDS cover the mitigation of key potential well-being risks (OECD, 2015a). Most national strategies focus on facilitating ICT access and use, supporting e-government services, and mitigating security risks. However, many opportunities and risks are not covered by a large number of countries. For example, advancing the inclusion of elderly and disadvantaged groups is a named objective in the NDS of only four countries, and developing a sound regulatory approach for digital environments appears in only three. This means that some of the key adverse effects of the digital transformation, for example the sources and consequences of extreme use or the spread of misinformation online, may not be addressed. These differences in policy and regulatory approaches, alongside other cultural, economic and political factors, may explain the different paths that countries take with respect to reaping the benefits and mitigating the risks of the digital transformation. Culture is another explanation of the observed cross-country differences in digital opportunities and risks, as it is a strong determinant of a country’s predisposition for innovation and technological change (Herbig and Dunphy, 1998).
These various factors are likely to affect not only the emergence of technological innovations within a country, but also the extent to which people in that country are open to embracing new technologies and adopting innovations. In 2015, the OECD compiled data on people’s perceptions of the benefits of science and technology from the Eurobarometer and a variety of national sources (OECD, 2015a). While the indicator is experimental due to the variety of sources used, the variation in attitudes towards technology is striking. In Estonia, the Netherlands and Luxembourg, positive attitudes are dominant, with over 80% of people agreeing that science and technology have a positive effect. In other European countries, such as the Czech Republic, Italy and Hungary, this share ranges between 60% and 70%. Yet it is unclear whether more positive attitudes result in a higher uptake of technologies or rather the other way around.
One major limitation of the present analysis is the number of missing indicators for some countries. As discussed previously, country coverage is severely limited for some indicators; in four countries, at least 15 of the 33 indicators are missing. However, Figure 3.3 shows no strong association between the number of missing indicators by country and their relative performance: for instance, both the top three and the bottom three performing countries in terms of opportunities have a complete set of indicators (Panel A). In any case, specific measurement efforts would be needed to fill these data gaps in the future.
The statistical agenda ahead
Due to the pace of the digital transformation, governments, industry and civil society alike struggle to identify the nature of the impacts of digitalisation on people’s lives (Gluckman and Allen, 2018). Currently, the understanding of many well-being impacts of the digital transformation, such as those on mental health, social connections and subjective well-being, remains limited to small-scale studies often focused on a specific country or population group. Because of the recent nature of these technologies, National Statistical Offices (NSOs) may not yet have integrated measures of the use of such technologies and its impacts into relevant data collections. This section reviews the measurement challenges discussed throughout this report and suggests priorities for the statistical agenda ahead. Existing data gaps are first recalled, covering both gaps in the set of indicators presented in this publication and indicators that were not included due to a lack of quality data. Based on this assessment, suggestions are made to improve the evidence base on the impacts of the digital transformation on people’s well-being. It is incumbent upon the research community, governments, academic institutions and civil society organisations to advance knowledge on the well-being impacts of the digital transformation.
Data gaps
Evidence on the impacts of the digital transformation in each dimension of well-being has been gathered in this report, but the relevant indicators are often available for only a subset of countries. The country coverage of the indicators used in this report is heavily unbalanced, with a few countries lacking data for a large number of indicators (Figure 3.5). The countries with the largest number of missing indicators are Australia, New Zealand, Israel and the United States. This absence of data limits the comparison of opportunities and risks presented above.
One reason for this imbalance is the lack of harmonisation comparable to that achieved at the European level, where data on access to and use of digital technologies are collected mostly through Eurostat’s model questionnaire on ICT usage by households and individuals. This survey is closely aligned with the OECD model survey on ICT access and usage by households and individuals (OECD, 2015b), and is therefore a reliable source for a number of indicators included here. In addition, other European-wide surveys, such as the European Working Conditions Survey (EWCS) and EU Statistics on Income and Living Conditions (EU-SILC), provide additional evidence on the relationship between computer use and job quality, or between Internet access and subjective well-being.
While most OECD countries have a dedicated ICT survey to measure households’ and individuals’ use of digital technologies, differences remain in the extent to which countries use harmonised survey questions. A number of indicators in this report rely on information about specific online activities, such as expressing opinions online or accessing health information online. In the countries with large data gaps, a number of ICT use questions are not included in these surveys, giving rise to missing indicators in several dimensions. Besides ICT access and usage surveys and large European-wide surveys, this publication relies heavily on data from other international survey instruments, such as the PIAAC, PISA and TALIS surveys implemented by the OECD’s Directorate for Education and Skills. Some countries have opted out of parts of these surveys, resulting in missing data. For example, France, Italy and Spain did not participate in the problem-solving in technology-rich environments assessment that is used to measure digital skills in PIAAC. Because these data feed both the digital skills and the digital skills gap indicators, both indicators are missing for these countries.
The indicators used to construct the digital well-being wheels presented in Chapter 4 have been closely examined: a detailed quality review is included in the Annex to this chapter, which lays out the main statistical issues for each indicator and suggests future improvements.
Furthermore, a number of opportunities and risks of the digital transformation that were identified as important in Table 1.1 and discussed in Chapter 2 do not feature in the digital well-being wheel due to a lack of available data. These impacts have been documented through qualitative descriptions or country-specific studies, but their measurement has not been incorporated into international survey vehicles. A list of the corresponding indicators is shown in Table 3.1.
The proposed indicators in this table fall into a number of categories. First are indicators of how people spend their time. Because extreme use of mobile devices has only recently become a concern, surveys have so far insufficiently covered the amount of time that people spend on mobile devices. Similarly, it remains unclear how digital technologies have affected people’s habits and whether other activities have been crowded out by the use of digital technologies. Second are indicators of new technologies and online activities that have not been included in survey vehicles. Examples are exposure to disinformation, use of digital health monitoring tools, and self-reported victimisation by hate speech online. Finally, a third group of indicators relates to the causal effects of the digital transformation on various well-being outcomes. This is the case for indicators of the effects of digital technology use on mental health and subjective well-being, as well as those measuring the effects of automation and computer-based jobs on labour market polarisation. These are the most challenging, because they require collecting longitudinal data in order to study effects on individuals over time. Concrete actions that data producers, notably National Statistical Offices (NSOs), can take to fill these gaps are suggested below.
Table 3.1. Types of opportunities and risks currently not covered by indicators
| Dimension | Proposed indicator | Main issue | Survey type | Feasibility |
|---|---|---|---|---|
| ICT access and use | Frequency of use of mobile devices | Include harmonised question on frequency of mobile phone use and Internet use in ICT access and use surveys | ICT surveys | High |
| Jobs and earnings | ICT-driven jobs in other sectors | Include task-based and industry (ISIC) covariates in one survey vehicle to monitor the proportion of ICT-driven jobs by sector | Labour force surveys, PIAAC | High |
| Jobs and earnings | Extent of job polarisation driven by digital skills and job automation | Longitudinal data on job tasks, computer use at work and digital skills in labour market surveys would be necessary in order to estimate these effects | Labour force surveys, PIAAC | Medium |
| Work-life balance | Time spent in transportation associated with telework | Information on Internet use in time use surveys; harmonisation across time use surveys | Time use surveys | Medium |
| Work-life balance | Time spent on childcare responsibilities associated with telework | Information on Internet use in time use surveys; harmonisation across time use surveys | Time use surveys | Medium |
| Health | Diffusion of health monitoring tools | Inclusion of appropriate survey questions in national health surveys or ICT access and use surveys; harmonisation across health surveys | Health surveys, ICT surveys | High |
| Health | Mental health effects of digital devices on adults | Include covariates of self-reported health and subjective well-being in ICT surveys; include improved covariates of ICT use in General Social Surveys with well-being outcome variables; longitudinal data is needed to assess causality | GSS, health, ICT surveys | Medium |
| Health | Crowding out of healthy behaviour | Information on Internet use in time use surveys; harmonisation across time use surveys | Time use surveys | High |
| Social connections | Reduced frequency of offline contact | Information on Internet use in time use surveys; harmonisation across time use surveys | Time use surveys | High |
| Social connections | Hate speech and online harassment | Introduction of an appropriate and standardised survey question in national victimisation surveys; or use of web-scraping and machine learning to count instances online | Victimisation surveys or innovative techniques | High/Medium |
| Civic engagement and governance | Exposure to disinformation online | Inclusion of appropriate survey questions in ICT surveys | ICT surveys | High |
| Personal security | Physical injury associated with automated technology | Introduction of an appropriate survey question in national victimisation surveys | Victimisation surveys | High |
| Environmental quality | Net carbon footprint of digital activities and technologies | Very difficult to estimate the direct effect of the various factors impacting energy use affected by digital technologies | Energy accounts | Low |
| Environmental quality | Reduced personal automobile mileage associated with digital vehicle sharing options | Very difficult to estimate the direct effect of changes in behavioural patterns, the rise of vehicle platforms and demand changes on automobile mileage | Household consumption surveys | Low |
| Housing | Diffusion of smart home technologies | Introduction of an appropriate survey question in household consumption surveys | Household consumption surveys | High |
| Subjective well-being | Causal effect of Internet use on subjective well-being | Longitudinal studies and improved covariates associated with subjective well-being and ICT access and use are necessary to improve evidence | ICT surveys, general social surveys | Medium |
Improving statistical vehicles
Suggestions on the design of statistical vehicles are made below in order to improve the coverage and comparability of multiple indicators at the same time. These suggestions concern the harmonisation of ICT surveys around the OECD model survey, the inclusion of subjective well-being questions in ICT surveys, the better use of time use surveys, and the construction of longitudinal data.
Using the OECD model survey to improve comparability
A major step to improve understanding of the impact of the digital transformation lies in the harmonisation of ICT access and use data across countries. The OECD model survey on ICT access and usage by households and individuals (OECD, 2015b) is an attempt to standardise survey questions related to ICT access and use across countries in order to align measures. This tool contains a number of questions that form the basis of indicators included in this report, particularly on specific online activities as well as on exposure to privacy and online security incidents. However, a number of further improvements would be desirable.
Currently, the partial adoption of the model survey by NSOs limits the comparison of opportunities and risks in a number of specific domains, such as health or governance and civic engagement. While some countries measure ICT access and usage by households and individuals through stand-alone surveys, others include dedicated ICT modules in existing household surveys, which limits the number of questions that can be included. In addition, two indicators of Internet access and use in this report rely on a large set of questions covering a variety of Internet uses. This matters because the variety of activities that people perform reflects the depth of their Internet usage. With the second digital divide increasingly driven by differences in skills, it is vital to monitor the uptake of a range of online activities by different groups in the population, as this may be a source of exclusion and inequality in the future.
A specific issue for the harmonisation of indicators pertains to the recall period of questions based on the model survey. For activities performed on the Internet, the model survey suggests a recall period of 3 months (with a few exceptions, notably for online consumption, due to possible seasonality differences, and for e-government, because the need to access government services may be less frequent). Some countries, however, use 12-month or unspecified recall periods, which limits comparability. The model survey also suggests reference periods for questions on the frequency of uses (of computers, mobile phones, etc.), but here too there are differences among countries. Better alignment of reference periods would improve comparability. The second revision of the OECD model survey provides a more detailed account of methodological differences in how countries measure ICT access and use by households and individuals.
Beyond harmonisation, the model survey needs to be reviewed regularly in order to keep up with the rapid pace of the digital transformation. Emerging trends, such as experiences of misinformation and new online activities, are not well reflected in the OECD model survey. In addition, the model survey has to keep up with changes in the frequency and intensity of use of digital devices. For some demographic groups, mobile phone use has become so intense that “several times a day” may no longer suffice as the most frequent response option, as more and more people are online all the time. Similarly, the highest response option for daily mobile phone use, “more than one hour”, does not allow identifying extreme users. In the same vein, at a time when 26% of US adults report being online “almost constantly”, it would be useful to have more granular response options beyond the “daily” option currently included in the Eurostat questionnaire.
Finally, in order to facilitate the monitoring of ICT use trends, regular data collections are imperative for cross-country comparisons. Currently, for some indicators included in this report, the most recent data for some countries refer to 2012 or earlier, which may be too far in the past to make relevant comparisons.
Improving existing surveys with covariates of subjective well-being
For the most part, the data in this report come from ICT surveys targeting households and individuals or from other large household survey vehicles. A key problem with these data sources is that they are not designed to assess the relationship between the digital transformation and people’s well-being. As a result, while observations can be made about trends over time and differences between groups in the uptake of certain digital activities, these surveys do not make it possible to establish a link between these activities and well-being impacts. This is especially the case for indicators of subjective well-being.
There is sufficient evidence to believe that the use of personal digital devices and specific online activities may have a strong influence on people’s mental health, feelings of achievement, and life satisfaction. Surveys of ICT use should therefore include a core set of questions on subjective well-being to better understand its relationship with exposure to these digital innovations.
Time use surveys can shed light on the effects of digital technologies
Time use surveys (TUS) may provide new insights into the effects of using digital devices. TUS are particularly useful for shedding light on the well-being effects of the digital transformation because they can track how it may change the way people work and spend their time, and whether digital activities may crowd out exercise or sleep. Unfortunately, there are as many varieties of time use surveys as there are countries that have implemented them. Table 3.2 reviews the digital variables included in ten selected national time use surveys. The most common variable across these surveys is the digital equipment of the dwelling, which is included in seven surveys. All surveys ask about the use of digital technologies, but in a non-comparable way: some ask about the daily duration of usage (two out of ten), others about the frequency of use (three out of ten), while the remaining five use a categorical “yes/no” question regarding technology use. Table 3.2 also shows that only two surveys, in France and the United States, allow assessing subjective well-being during digital activities. Such information is key to evaluating how people experience these activities.
Table 3.2. Digital variables included in selected time use surveys
| Country | Digital activity | Affect measured during some activities |
|---|---|---|
| Canada | Socialising or communicating using technology (versus in person); duration – use of technology; number of text messages sent per day | No |
| Denmark | Digital equipment in the dwelling; frequency of usage – computer; duration on Internet; Internet activities: banking, shopping, information, e-mails; teleworking; computer use for work at home; Internet use for work at home | No |
| Finland | Digital equipment in the dwelling; frequency – computer use for leisure, by activity; frequency – use of Internet, by activity; social network user | No |
| France | Digital equipment in the dwelling; frequency of usage – Internet; use of Internet, by activity | Yes |
| Germany | Media use; use of computer/smartphone; programming/repair of computer or smartphone; information obtained via computer/smartphone; communication via computer/smartphone; other activities via computer/smartphone | No |
| Italy | Digital equipment in the dwelling; teleworking; job search on Internet | No |
| Mexico | Digital equipment in the dwelling; use of mass media | No |
| Turkey | Digital equipment in the dwelling; computing activities, by type; use of Internet, by activity; training in computing | No |
| United Kingdom | Digital equipment in the dwelling; household management using the Internet, by activity; computing activities, by type | No |
| United States | Household management using the Internet, by activity; computer use, by purpose (leisure, volunteering); online shopping | Yes |
Collecting more longitudinal data to understand causal effects
The lack of longitudinal data prevents establishing causal linkages between the use of digital technologies and effects on people’s well-being. Examples abound, from estimating the effects of computer use on job quality to measuring the effects of digital devices on social connections, (teenage) mental health and subjective well-being. Currently, analysis of the relationship between the use of digital technologies and potential outcomes relies on cross-sectional data that neglect potential selection bias and endogeneity problems. The lack of robust evidence has sparked a lively academic debate in some areas, notably on the impacts of the digital transformation on mental health. Ideally, longitudinal data on ICT use, in combination with appropriate subjective well-being variables, would be the best way to understand the well-being impacts of new technologies. For cost and logistical reasons, longitudinal data collection is not common for large-scale household surveys, and certainly not for ICT use surveys. A broad research consortium involving NSOs and academics could expand the evidence base on the causal effects of the introduction of new technologies on well-being.
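To illustrate why longitudinal data matter here, the sketch below shows the kind of within-person (fixed-effects) regression that panel data would allow. The dataset and variable names (`person_id`, `wave`, `daily_ict_hours`, `life_satisfaction`) are hypothetical, and the specification is only indicative of the approach, not a prescribed model.

```python
import pandas as pd
import statsmodels.formula.api as smf

def within_person_estimate(panel: pd.DataFrame) -> float:
    """Regress life satisfaction on ICT use with person and wave fixed effects.
    Person dummies absorb stable individual traits, so the ICT coefficient is
    identified from changes over time within the same person rather than from
    differences between users and non-users (mitigating selection bias)."""
    model = smf.ols(
        "life_satisfaction ~ daily_ict_hours + C(wave) + C(person_id)",
        data=panel,
    ).fit()
    return model.params["daily_ict_hours"]
```

Cross-sectional ICT surveys cannot support this kind of specification, which is one reason why the longitudinal data discussed above would add substantial value.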
Leveraging innovative technologies to monitor new online trends
Finally, digital innovations themselves offer a response to some of the measurement challenges raised in this report, in particular for indicators of misinformation, hate speech, cyber-security violations and cyberbullying. Innovations in the field of big data analysis based on machine learning may in the future allow measuring the intensity of these phenomena in different countries. For example, Amador Diaz Lopez et al. (2017) created a model to recognise disinformation on Twitter in the context of the 2016 US general election. Google is developing algorithms to detect hate speech on its websites. More work could take place under the umbrella of the OECD Smart Data Strategy, an organisation-wide initiative aimed at expanding the evidence base using new methods of collecting, processing and analysing data. Along with National Statistical Offices, the OECD intends to explore the ways in which machine learning and other big data analysis tools can be used to monitor some of the opportunities and risks of the digital transformation, providing evidence in a variety of well-being domains.
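As a simple illustration of the kind of machine-learning approach mentioned above, the sketch below trains a basic supervised text classifier. The tiny labelled sample, the choice of scikit-learn, and the TF-IDF/logistic-regression pipeline are illustrative assumptions, not the methods used by Amador Diaz Lopez et al. (2017) or by Google.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labelled sample for illustration only; a real application would need a
# large, carefully validated training set built by researchers or NSOs.
texts = [
    "post flagged by moderators as hateful",
    "post sharing a fabricated news story",
    "ordinary post about the weather",
    "post linking to an official statistics release",
]
labels = [1, 1, 0, 0]  # 1 = problematic content, 0 = other (illustrative)

# TF-IDF features over word unigrams and bigrams, fed into a logistic regression.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(texts, labels)

# Once trained and validated, the model could score large volumes of scraped
# posts to estimate how prevalent such content is across countries over time.
scores = classifier.predict_proba(["another post to score"])[:, 1]
```

Any such estimates would, of course, depend heavily on the quality of the labelled data and on validation against survey-based measures.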
References
Amador Diaz Lopez, J., A. Oehmichen and M. Molina-Solana (2017), “Characterizing political fake news in Twitter by its meta-data”, Cornell University Library.
Gluckman, P. and K. Allen (2018), “Understanding wellbeing in the context of rapid digital and associated transformations: Implications for research, policy and measurement”, The International Network for Government Science Advice, Auckland, www.ingsa.org/wp-content/uploads/2018/10/INGSA-Digital-Wellbeing-Sept18.pdf.
Herbig, P. and S. Dunphy (1998), “Culture and innovation”, Cross Cultural Management: An International Journal, Vol. 5, No. 4, pp. 13-21, https://doi.org/10.1108/13527609810796844.
OECD (2017), OECD Digital Economy Outlook 2017, OECD Publishing, Paris, https://doi.org/10.1787/9789264276284-en.
OECD (2015a), OECD Digital Economy Outlook 2015, OECD Publishing, Paris, https://doi.org/10.1787/9789264232440-en.
OECD (2015b), “OECD model survey on ICT access and usage by households and individuals: Second revision”, Working Party on Measurement and Analysis of the Digital Economy, www.oecd.org/sti/ieconomy/ICT-Model-Survey-Access-Usage-Households-Individuals.pdf.
Annex 3.A. Quality assessment of available indicators used in this report
Annex Table 3.A.1. Detailed quality assessment of indicators
| Dimension | No. | Indicator | Quality | Harmonisation | Country coverage | Timeliness | Key measurement issue | Possible solutions | Feasibility of improvement |
|---|---|---|---|---|---|---|---|---|---|
| ICT access and use | 1 | Access to digital infrastructures | | | | | Some methodological differences; some data is outdated | | High |
| | 2 | Individuals using the Internet | | | | | Some methodological differences; some data is outdated | Improve alignment of questions in order to improve cross-country comparison | High |
| | 3 | Variety of uses of the Internet | | | | | Activities measured differ across countries; new activities (e.g. teleworking) are not reflected in ICT access and usage surveys | Improve alignment of questions in order to improve cross-country comparison; ensure question relevance by including new online activities | High |
| | 4 | Inequality of Internet uses | | | | | Same as no. 3 | Same as no. 3 | High |
| Education and skills | 5 | Digital skills | | | | | Lack of country coverage; long interval between surveys | More regular assessments would improve the monitoring of digital skills | High |
| | 6 | Digital skills gap | | | | | Same as no. 5 | Same as no. 5 | High |
| | 7 | Digital resources at school | | | | | The measure only considers the availability of digital resources, not what they are used for, nor does it consider other types of e-learning devices | An improved measure would consider the use of computer-based learning tools, rather than access to computers per se | High |
| | 8 | Teachers’ lack of ICT skills | | | | | Because the measure is based on self-defined skills needs, it is not an objective measure of teachers’ skills across countries | A standardised test of teacher skills would provide a more reliable measure | Medium |
| | 9 | Online courses | | | | | Different timeframes specified across countries; does not consider a wider range of e-learning tools such as mobile applications, YouTube videos, etc. | A wider definition of online courses, with a harmonised definition and timeframe | High |
| Income, consumption and wealth | 10 | Wage premium associated with digital skills | | | | | Lack of country coverage; long interval between surveys | | High |
| | 11 | Online consumption | | | | | Does not consider the frequency of online purchases by individuals, which is important as online consumption becomes more widespread | An improved measure may ask about the frequency of online shopping | High |
| | 12 | Selling online | | | | | Some methodological differences; some data is outdated | Improve alignment of questions in order to improve cross-country comparison | High |
| Jobs | 13 | Employment in information industries | | | | | Employment in information industries as classified in this measure does not indicate the degree of digitalisation of jobs in these industries; moreover, this indicator does not capture job creation associated with the digital transformation in other sectors; some data is outdated | An additional measure of highly digital jobs in other sectors would better reflect employment in digital jobs; in addition, regular measurement can help to assess the growth of employment in the ICT sector over time | Medium |
| | 14 | People using the Internet when looking for a job | | | | | Some methodological differences; some data is outdated | An alternative measure might consider online job search among unemployed people; align question timeframes in order to improve cross-country comparison | High |
| | 15 | Mean job automatability | | | | | Probabilities of automation are based on current technological possibilities and do not consider future innovations that may lead to further automation | It is virtually impossible to predict which jobs will survive in the future; the current measure provides a good sense of which jobs are more at risk and in which countries | Low |
| | 16 | Reduction in extended job strain associated with computer-based jobs | | | | | The measure only considers the difference in extended job strain between workers with computer-based jobs and those without, so no causality can be established | Time series data are necessary to better analyse the effects of computer-based and “digital” jobs on job quality | Medium |
| | 17 | Job stress associated with computer-based jobs | | | | | The measure only considers the difference in job stress between workers with computer-based jobs and those without, so no causality can be established | Time series data are necessary to better analyse the effects of computer-based and “digital” jobs on job stress | Medium |
| Work-life balance | 18 | Penetration of teleworking | | | | | Lack of harmonisation in survey questions across countries; some data is outdated | Align question reference timeframes in order to allow cross-country comparisons | High |
| | 19 | Increased worries about work when not working | | | | | The measure only considers the difference in worries about work between workers with computer-based jobs and those without, so no causality can be established | Time series data are necessary to better analyse the effects of computer-based and “digital” jobs on worries about work when not working | Medium |
| Health | 20 | Making medical appointments online | | | | | There are many more e-health services, notably the use of Electronic Health Records, that better represent digitalisation in patient-provider interactions | Better data on the use of Electronic Health Records among service providers | High |
| | 21 | Accessing health information online | | | | | Methodologies are not strictly comparable for certain countries (Australia, Canada, New Zealand and the United States); some data is outdated | Align question reference timeframes in order to allow cross-country comparisons | High |
| | 22 | Digital addiction among children | | | | | The current measure does not capture pathological digital addiction | Self-reported diagnoses of digital addiction may be unreliable, but better survey measures of pathological digital addiction may be included in (children’s) health surveys | Medium |
| Social connections | 23 | Using online social networks | | | | | Methodological differences exist for Australia, Israel, Japan, Korea, New Zealand and the United States, particularly in the reference period; this measure would particularly benefit from the inclusion of subjective well-being covariates | Align question timeframes in order to allow cross-country comparisons | High |
| | 24 | Children experiencing cyberbullying | | | | | Self-reports are problematic, both in school and home settings, because children may not be comfortable admitting victimisation in the presence of others | A home setting may be a safer environment for self-report measures, but the KidsOnline survey currently has limited geographic reach; it is hard to conceive of a better measure than self-reported victimisation | Low |
| Governance and civic engagement | 25 | People expressing opinions online | | | | | The measure is not sensitive to the intensity or frequency of online civic or political engagement; lack of harmonisation limits comparability across countries | Besides self-reported data, innovative techniques like web-scraping can help in measuring online civic and political engagement | High |
| | 26 | Individuals interacting with public authorities online | | | | | The current measure does not consider the quality of the e-government experience; methodological differences in certain countries (Israel, Mexico) limit comparability | Improved measures may consider citizens’ satisfaction with e-government services | High |
| | 27 | Availability of open government data | | | | | Potential challenges in comparing countries’ efforts; for more information, see Ubaldi (2013) | | |
| | 28 | Individuals excluded from e-government services due to lack of skills | | | | | Lack of geographic coverage outside of Europe | | High |
| | 29 | Individuals experiencing disinformation | | | | | No official data on self-reported disinformation exist; in addition, self-reports may be affected by the ability to recognise disinformation and by mistrust in information in general | Besides including self-reported questions in survey vehicles, innovative techniques using web-scraping and machine learning may be developed in the future to measure the prevalence of misinformation | High |
| Environmental quality | 30 | E-waste generated per capita | | | | | Countries’ efforts in measuring e-waste vary substantially; see detailed information in Baldé (2017) | | Medium |
| Security | 31 | Individuals experiencing cyber-security events | | | | | Self-reported measures may not be the best way to measure cyber-security, as they do not provide insight into the type or significance of cyber-security events; methodological differences exist across countries; some data is outdated | Innovative techniques may help track and record cyber-security incidents using machine learning and big data analysis in the future | Medium |
| | 32 | Individuals experiencing abuse of personal information | | | | | As with cyber-security events, improved measures may be developed thanks to digital innovations; methodological differences exist across countries and some data is outdated | Innovative techniques may help track and record online privacy incidents using machine learning and big data analysis in the future; better alignment of questions across countries | Medium |
| Subjective well-being | 33 | Life satisfaction gains from Internet access | | | | | Current analysis is based on cross-sectional data and only distinguishes differences in life satisfaction between people who do and do not have Internet access; lack of geographic coverage outside of Europe; Internet access does not reflect Internet use | Longitudinal data would be necessary to understand causal impacts; more detailed covariates on the intensity and frequency of Internet use are necessary to understand the impacts of use and extreme use | Medium |
Note: The four columns of quality, harmonisation, country coverage, and timeliness are marked when an indicator faces limitations in each of these areas. Quality refers to the relevance, validity and accuracy of the indicator; harmonisation refers to the degree to which the indicator is measured in a consistent way across countries; country coverage refers to whether the indicator is available for all OECD countries; and finally, timeliness concerns the availability of recent data for the indicator.