Rethinking Quality Assurance for Higher Education in Brazil

7. Assuring the quality of higher education institutions

Abstract

Like the external quality assurance systems in many other OECD and partner countries, the National System for Evaluation of Higher Education (SINAES) in Brazil evaluates both higher education institutions and individual study programmes within those institutions. Private and public institutions are subject to periodic re-accreditation, based on on-site reviews coordinated by INEP. Whereas for private institutions re-accreditation is a prerequisite for their continued operation, for legally protected public institutions the process is essentially a formality. In both cases, the period for which re-accreditation is granted varies depending on the status and institutional quality score awarded to the institution. Institutions are also subject to annual monitoring, based on the average performance of their programmes in relation to SINAES programme-level indicators and the results of CAPES evaluations for stricto sensu postgraduate programmes. This chapter examines these processes and provides recommendations for the future development of the systems in place.
7.1. Focus of this chapter
The external quality assurance system in Brazil evaluates institutions as well as programmes
Like the external quality assurance systems in many other OECD and partner countries, the National System for Evaluation of Higher Education (SINAES) in Brazil evaluates both higher education institutions and individual study programmes within those institutions. While the ongoing programme-level evaluation mechanisms discussed in Chapter 5 (ENADE, programme-level indicators, on-site reviews for renewal of recognition) attract considerable public attention and absorb a large share of the resources devoted to external quality assurance, SINAES also involves periodic institutional evaluations, which inform regulatory decisions by SERES on whether or not to re-accredit institutions.
Legally, both private and federal public institutions are subject to periodic re-accreditation (recredenciamento), based on on-site reviews coordinated by INEP. For private institutions, successful re-accreditation is a prerequisite for their continued operation (although “de-accreditation” is rare). For federal public institutions, the process is essentially no more than a formality, as they systematically score three or more on a five-point evaluation scale and, in any case, cannot have their accreditation removed.
The period for which (re-)accreditation is valid varies depending on the status and institutional quality score (CI) already awarded to the institution. Universities and university centres are only re-accredited every eight to ten years, while colleges must be re-accredited at least every five years.
In addition, institutions are subject to annual monitoring, based on the average performance of their programmes in relation to SINAES programme-level indicators and the results of CAPES evaluations for stricto sensu postgraduate programmes. The weighted averages of the Preliminary Course Score (CPC), discussed in Section 5.2, and, where applicable, the scores attributed by CAPES to new and existing postgraduate programmes, discussed in Chapter 6, are used to produce an overall score for each institution called the “General Course Index” (Índice Geral de Cursos, IGC).
Institutional evaluation, including self-evaluation, has a central place in the original legislation governing the system
The wording of the legislation establishing the SINAES in 2004 recognises the central role of institutions in structuring and providing higher education, alongside their research and engagement functions, and acknowledges the importance of institutional autonomy and profile in allowing HEIs to fulfil their missions. The first article of the law establishing SINAES states:
The aim of SINAES is to improve the quality of higher education, to expand its provision, to increase institutional efficiency and effectiveness [eficácia] and academic and social impact [efetividade], and especially to promote deepening of the social commitments and responsibilities of higher education institutions, through developing their public mission, the promotion of democratic values, respect for difference and diversity, [and] the affirmation of autonomy and institutional identity. (Article 1 of Law 10 861 of 2004 establishing SINAES (Presidência da República, 2004[1]) bold added by the OECD Secretariat)
The legislation states that evaluation of the federal higher education system will involve institutional evaluation, programme-level evaluation and assessment of the performance of students through ENADE. It places institutional evaluation first and develops objectives and evaluation criteria for institutional review in more detail than for programme-level evaluation and ENADE. It specifies ten main dimensions to be taken into account in internal and external institutional evaluation processes, including the Institutional Development Plan (PDI); institutional policies; social responsibility; management; infrastructure; student support and financial sustainability. The seventh dimension listed is “planning and evaluation, especially the processes, results and effectiveness of institutional self-assessment”.
To undertake this institutional self-evaluation, the 2004 law specifies that all HEIs must create an Internal Evaluation Commission (Comissão Própria de Avaliação, CPA). This body is tasked both with coordinating all internal evaluation processes inside the institutions and transmitting institutional and programme-level information to INEP, as input to external evaluation activities. The CPA must include representatives from all sections of the academic community (different categories of staff) and social partners (“organised civil society”), and have formal independence from other management and collegiate bodies in the institution.
Internationally, HEIs have varying levels of autonomy to take responsibility for self-evaluation and quality assurance
The distinction between a) internal evaluation processes within HEIs; b) external programme evaluation; and c) external institutional evaluation, as seen in Brazil, is found in many higher education systems in OECD and partner countries. However, the extent to which systems rely on each of these three components varies considerably.
The quality assurance systems in Ireland, England and Scotland, for example, dispense almost entirely with external programme evaluation and rely instead on internal quality assurance systems within institutions (self-evaluation), which are verified through external institutional reviews (QAA, 2018[2]; QQI, 2018[3]). Most accreditation activities in the diverse quality assurance landscape of the United States also involve institutional reviews, which verify internal quality processes (Hegji, 2017[4]). Quality assurance systems in many other European higher education systems, including the Netherlands, Sweden and Portugal, include both programme and institutional review in their external quality assurance systems, alongside internal quality assurance. In all three of the latter countries there have been initiatives to move to systems based primarily or exclusively on institutional review (NVAO, 2016[5]; UKÄ, 2018[6]; A3ES, 2018[7]). In contrast, in Mexico, where there is no comprehensive and compulsory system of external quality assurance in higher education, existing external quality assurance mechanisms focus primarily on programme-level accreditation (CIEES, 2018[8]).
Overall, while external quality assurance systems in many countries may initially have included a strong focus on programme-level review, there has been a general trend among policy-makers and international bodies working in quality assurance to recommend increased institutional responsibility for quality and to focus external evaluation efforts primarily on the institutional level. This is the philosophy reflected, for example, in the current European Standards and Guidelines for quality assurance in higher education (ESG, 2015[9]), which serve as a reference for quality assurance in the 48 countries of the European Higher Education Area.
Despite its legal basis, Brazil’s current quality assurance model gives comparatively limited responsibility to institutions for assuring their own quality
While the letter of the law governing quality assurance in higher education in Brazil accords a central role to institutional autonomy and self-evaluation, the practical implementation of the SINAES imposes a complex system of external programme-level scrutiny on a three-year cycle. For institutions that perform poorly in ENADE and on the CPC, this leads to regular programme-level inspections, using prescriptive processes that limit institutions’ room for manoeuvre. For institutions that tend to perform well in relation to ENADE and the CPC, particularly universities and university centres that are only subject to institutional review every eight to ten years, on-site evaluations by external reviewers are comparatively infrequent occurrences with limited consequences.
There are few incentives for institutions in this position to develop strong internal quality assurance systems that go beyond the minimum requirements imposed by the legislation, or to promote quality enhancement internally on a continual basis. Interviews conducted by the OECD review team in several institutions suggest that Internal Evaluation Commissions (CPAs) focus primarily on ensuring compliance with SINAES rules and delivering data to INEP, rather than developing internal quality systems tailored to institutional needs or promoting innovation and quality improvements. This contrasts with the situation in many European countries and in the United States, where institutional review and evaluation of internal quality procedures form the core of external quality assurance practices.
The remainder of this chapter explores these issues, reviewing the processes currently in place in Brazil to assess the quality of individual higher education institutions, including ongoing monitoring through the General Course Index (IGC) and periodic reaccreditation reviews.
7.2. Strengths and weaknesses of the current system
Indicator-based monitoring of institutions: the General Course Index
Relevance: rationale and objectives of the current system
As discussed in Chapter 4, new private higher education institutions, and new campuses of existing private providers in municipalities outside the location of their headquarters, are required to obtain accreditation from SERES before they can start operating. Institutions undergo an on-site review by an external review commission appointed by INEP, which attributes to the institution an Institutional Score (Conceito Institucional), or CI, on a five-point scale. Institutions that receive a score of three or above receive formal accreditation and, in the logic of SINAES, this institutional score is the official quality “grade” attributed to the institution and published on the e-MEC platform.
New public institutions are exempt from this initial accreditation process, as they are effectively accredited as part of their acts of establishment. Public institutions, like private ones, are formally required to undergo periodic renewal of their accreditation through a process involving another on-site review, which leads to a new CI and which we examine below. However, whereas the re-accreditation process could theoretically lead to the “de-accreditation” of private HEIs, public institutions are protected by their legal status. This means re-accreditation is purely an administrative formality for public institutions. Even for private institutions in Brazil, the risk of “de-accreditation” appears to be low. Although clear data on the number of cases of institutional “de-accreditation” have not been made public by SERES, the OECD review team understands that only a handful of private institutions have lost accreditation in the last decade.
All institutions with operational courses are subject to the cycle of programme-level evaluation through ENADE, on the basis of which INEP calculates the Preliminary Course Score (CPC), discussed in Chapter 5, by combining ENADE results with other programme indicators. Once three cohorts of students have graduated (over three years) and, depending on the subjects in their programme profile, potentially been subject to three cycles of ENADE, INEP calculates another composite indicator: the General Course Index. This “Index”, also on a scale of one to five, is calculated based upon:
The average CPCs of the last three-year period (in which all subjects have been subject to a round of ENADE), for the programmes that have been evaluated in the institution, weighted by the number of enrolments in each of the programmes included in the calculation;
The average of the evaluation score of the stricto sensu postgraduate programmes awarded by CAPES in the last available evaluation round, converted to a compatible scale and weighted by the number of enrolments in each of the corresponding postgraduate programmes;
The averaged (enrolment-weighted) sum of scores from undergraduate and stricto sensu postgraduate programmes (INEP, 2017[10]).
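The enrolment-weighted averaging logic described above can be sketched in Python. This is a simplified illustration only: the programme scores and enrolment figures are hypothetical, and the official methodology (INEP, 2017[10]) involves additional standardisation and scale-conversion steps not shown here.

```python
# Illustrative sketch of the IGC's enrolment-weighted averaging.
# All scores and enrolment numbers below are hypothetical examples.

def weighted_average(scores_and_enrolments):
    """Enrolment-weighted mean of (score, enrolments) pairs."""
    total_enrolment = sum(n for _, n in scores_and_enrolments)
    return sum(score * n for score, n in scores_and_enrolments) / total_enrolment

# (continuous CPC score, enrolments) for undergraduate programmes
# evaluated over the three-year ENADE cycle
undergraduate = [(3.2, 400), (4.1, 250), (2.8, 150)]

# (CAPES score converted to a compatible scale, enrolments) for
# stricto sensu postgraduate programmes
postgraduate = [(4.5, 60), (3.9, 40)]

# The IGC combines both levels, weighting each programme by its
# share of total enrolment across the institution
igc_continuous = weighted_average(undergraduate + postgraduate)
print(round(igc_continuous, 2))  # → 3.5
```

In the real system, the continuous value is then converted to the discrete one-to-five scale published on e-MEC.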
The General Course Index (IGC) provides a single, synthetic and comparative indicator of institutional performance. At the time of its creation, the IGC was conceived as a means to allow the Ministry of Education “to identify the most precarious institutions and focus its attention on them” (Schwartzman, 2013[11]). In this sense, it mirrors the function of the CPC in the renewal of programme recognition.
Effectiveness: division of responsibilities
The methodology used to calculate the IGC, as well as the calculation and presentation of results (in e-MEC), are the responsibility of INEP. Institutions bear no responsibility in this process, apart from participation in the administration of ENADE and in the CAPES assessments of postgraduate programmes, and reporting to INEP, via the CPA, the administrative data used in the calculation of the IGC.
Effectiveness: use and effects of the IGC
The IGC score is used by external bodies and the media in reporting on the quality of higher education in Brazil, including in a well-known institutional ranking published annually by the Folha de São Paulo, one of Brazil’s leading newspapers (Folha de S.Paulo, 2018[12]). Notwithstanding its original purpose, the IGC is widely perceived as a visible public signal of institutional quality that institutions themselves feature in advertising. It is also likely to bear upon the equity performance of publicly listed for-profit providers. One private university states on its website, for example:
The Universidade Positivo (UP) has been rated, for the sixth time running, the best private university in Paraná State, with a score of 4 in the Índice Geral de Cursos (IGC), which goes from 1 to 5. Revealed last Monday (27) by the Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira (INEP) [and] the Ministry of Education (MEC), the IGC is the official indicator of the quality of higher education institutions in Brazil. (UP, 2018[13])
The real signal value of the IGC as a quality indicator for consumers is, perhaps, quite limited. While IGC scores range, in principle, from one to five (after rounding), scores of one are virtually unknown, and nearly all scores cluster at values of three and four. In 2016, 93% of universities and 96% of university centres received scores of three or four (INEP, 2017[14]). Setting aside the validity or reliability of the IGC, it is clear that its discriminating power for non-college institutions is low.
Although the reputational effects of the IGC can be important, it is not an indicator that is likely to have an impact on how institutions understand and manage the quality of the education they provide. The IGC does not introduce new performance information for institutional leaders. Rather, it is a (weighted) summation of already existing programme-level information. As such, it replicates the measurement problems of the CPC.
Although the IGC may have been created, like the CPC, as a means for public authorities to identify weak institutions, it is not mentioned in the most recent secondary legislation providing implementation rules for SINAES (Presidência da República, 2017[15]; MEC, 2017[16]). Unlike the CPC, it does not currently play a direct role in the process of regulation of higher education institutions. The relevant legislation states that all institutions must undergo an on-site visit for institutional re-accreditation.
Institutional reviews: re-accreditation
Relevance: rationale and objectives
Entirely separate from the assessment-based IGC – and potentially at odds with it – are the on-site institutional reviews that are periodically undertaken for the re-accreditation of higher education institutions. The same process is also used for changes in institutional status (transformação de organização acadêmica) from college to university centre, or from university centre to university. The process of institutional re-accreditation generates a new Institutional Score (Conceito Institucional), or CI, which replaces the CI attributed during initial accreditation as the most recent quality score for the institution. The CI score is used by SERES to determine whether the institution is permitted to continue to award recognised degrees, and when the next cycle of re-accreditation must take place. For universities, the duration of re-accreditation may range between five and ten years, depending upon their CI score. For colleges and university centres, the range is from three to five years, as shown in Table 7.1.
Table 7.1. Duration of accreditation and re-accreditation
| Type of institution | Institutional score required | Duration of accreditation |
|---|---|---|
| Colleges and university centres | CI 3 | 3 years |
| Colleges and university centres | CI 4 | 4 years |
| Colleges and university centres | CI 5 | 5 years |
| Universities | CI 3 | 5 years |
| Universities | CI 4 | 8 years |
| Universities | CI 5 | 10 years |
Source: MEC (2017), Regulatory Ordinance No 1 of 3 January 2017, establishing the duration of validity of regulatory acts for accreditation and re-accreditation of HEIs (MEC, 2017[17]).
Effectiveness: division of responsibilities
Re-accreditation, like accreditation, is the joint responsibility of higher education institutions, INEP and SERES. However, as institutions undergoing re-accreditation are fully operational, the institution’s Internal Evaluation Commission (CPA) plays an active role in preparing an institutional evaluation report, based on consultations with academic staff and management. INEP is responsible for organising the on-site review process, and SERES for taking regulatory action on the basis of the evaluation information provided by the review.
Effectiveness: quality indicators used and generated
The Institutional Score (CI) that results from the review process is generated based upon the evaluation commission’s scores for up to 50 indicators set out in the relevant INEP evaluation instrument (INEP, 2017[18]). The focus of these indicators, structured into the same five axes as for accreditation, is principally the institutional development plan (30%), the institution’s infrastructure (30%), and its management policies (20%).
Table 7.2. Number and weight of indicators for institutional reaccreditation and change of institutional status
| Axis | Number of indicators | Weight |
|---|---|---|
| Planning and institutional evaluation | 5 | 10 |
| Institutional development | 7 | 30 |
| Academic policies | 12 | 10 |
| Management policies | 8 | 20 |
| Infrastructure | 18 | 30 |
| TOTAL | 50 | 100 |
Source: OECD calculations based upon INEP evaluation instruments (INEP, 2017[18]).
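The aggregation of axis-level results into an overall score can be illustrated with the weights in Table 7.2. The sketch below uses hypothetical axis scores, and the `weighted_ci` function is an illustrative simplification: in practice reviewers score each of the 50 indicators individually before aggregation.

```python
# Axis weights from the INEP evaluation instrument (Table 7.2).
AXIS_WEIGHTS = {
    "Planning and institutional evaluation": 10,
    "Institutional development": 30,
    "Academic policies": 10,
    "Management policies": 20,
    "Infrastructure": 30,
}

def weighted_ci(axis_scores):
    """Weighted mean of axis scores (1-5) using the percentage weights."""
    total_weight = sum(AXIS_WEIGHTS.values())  # weights sum to 100
    return sum(axis_scores[axis] * w for axis, w in AXIS_WEIGHTS.items()) / total_weight

# Hypothetical axis-level scores for a reviewed institution
scores = {
    "Planning and institutional evaluation": 3.0,
    "Institutional development": 4.0,
    "Academic policies": 3.5,
    "Management policies": 4.0,
    "Infrastructure": 3.0,
}
print(round(weighted_ci(scores), 2))  # → 3.55
```

The illustration makes the imbalance visible: the planning and evaluation axis, which covers internal quality capacity, moves the overall score far less than infrastructure or institutional development.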
Like the other on-site review processes (such as recognition), the re-accreditation review process is focused on input and process, rather than outputs or performance, and reviewers are responsible for scoring qualitative indicators on a five-point scale. The review team’s categorisation of the indicators in the evaluation instrument is shown in Table 7.3. Given that the process of re-accreditation necessarily focuses on institutions that are already operating, with graduating students and graduates, there is scope to include greater consideration of outputs (graduates and evidence of the learning outcomes) and outcomes (graduate destinations) in the institutional assessments at this stage.
Table 7.3. Type of indicators used in the re-accreditation process
| Type of indicator | Number of indicators |
|---|---|
| Input | 35 |
| Process | 15 |
| Output | 0 |
| Total | 50 |
Source: OECD calculations based upon INEP evaluation instrument (INEP, 2017[18])
The current evaluation instrument for institutional re-accreditation devotes comparatively little attention (in terms of the number of indicators and judgement criteria) or weight to the assessment of the internal evaluation capacity of institutions. The first “axis” of the assessment framework, which focuses on institutional planning and evaluation activities, does contain five indicators relating to internal evaluation processes and the quality of the internal evaluation report produced by the CPA. Many of the factors identified in the individual judgement criteria for these indicators would appear highly relevant for the assessment of internal quality management practices, although these factors are not developed and explained in detail. However, as these issues are embedded in a few individual indicators in an axis that contributes only 10% of the overall institutional score, internal evaluation capacity does not currently play a major role in whether an institution is re-accredited or what institutional score it receives.
Effectiveness: use and effects
Owing to the schedules for re-accreditation highlighted above, the CI score is not calculated and reported on an annual basis, but rather with a periodicity that may range from three to ten years. In light of its infrequency, and perhaps because it is not linked to student outcomes as observed in ENADE, the CI score appears to function solely as a regulatory input, and not as a public signal of institutional quality. In the course of stakeholder meetings with the OECD review team, the CI score was not identified as a measure in response to which institutions manage or adapt their performance. The ongoing monitoring of the Institutional Development Plan through internal evaluation processes (self-evaluation) was cited as an important feature of the re-accreditation process.
However, there was variation in the extent to which this was considered to have generated useful reflection within institutions or contributed to the development of a genuine quality culture. Some CPAs reported that their HEIs found the self-evaluation activities to be a compliance exercise in which few colleagues wished to participate. Other institutions found that the obligation to create development plans and undertake a structured analysis of institutional performance spurred useful quality discussions that would otherwise not have occurred.
7.3. Key recommendations concerning institutional review
In the view of the OECD Review team, the processes of institutional quality assurance that result from periodic re-accreditation and regular reporting of IGCs need fundamental improvements. Based on the analysis above, the team makes the following recommendations:
1. Reduce the period of re-accreditation for universities and university centres
Universities in some of the best-regarded higher education systems in the world must undergo external institutional reviews every four, five or six years. This is the case in the United Kingdom, the Netherlands and Sweden, for example. The current eight or ten-year accreditation periods for universities and university centres mean these institutions have few incentives to develop robust institutional quality mechanisms and problems in institutional quality management may go undetected for long periods. Instead of the current system, institutions with demonstrated internal quality capacity could be rewarded through dispensation from some or all aspects of programme-level review, subject to successful reaccreditation on a five or six-year cycle (see below).
2. Reduce the weight attached in institutional re-accreditation reviews to input and process indicators that measure basic supply conditions for higher education
There is scope to rebalance the weights attributed to the evaluation indicators used at the stage of institutional re-accreditation, away from inputs and towards processes and outputs. A first step would be to remove indicators that measure basic supply conditions for higher education, such as infrastructure and equipment and general management policies. The availability of suitable infrastructure to deliver each undergraduate programme is already verified through the programme-level recognition and renewal of recognition processes, while the most general institutional policies are unlikely to change, or need to change, considerably over time. It is therefore wasteful to devote resources to re-evaluating and re-scoring these kinds of variables through the re-accreditation review. The inclusion of these indicators also reduces the proportional weight attributed to factors that are important to verify in re-accreditation, such as educational results and institutional performance.
3. Increase the weight attributed to outputs and outcomes in periodic institutional assessment
Evidence about educational results and institutional performance is neglected in the current system of institutional re-accreditation. While processes of accreditation cannot take into account programmatic and institutional performance, re-accreditation can – but does not. Institutions should be able to graduate most students who begin their studies and they should do so in a timely way. Those who graduate should be able to find employment, preferably in fields related to their area of study – and most certainly so if their studies have a career orientation – whether accounting, civil engineering, or nursing.
SINAES was based on a view that quality assurance could proceed through coordinated and complementary processes, including institutional self-assessment, detailed on-site institutional reviews carried out by peer reviewers, and the implementation of learning assessments (ENADE) and use of related indicators. The processes of evaluation that have evolved are not complementary to one another. On-site reviews and performance indicators generate information about institutional quality that is either incommensurable or contradictory.
Quantitative programme and institutional indicators should ideally focus on the outputs and outcomes of higher education, while on-site reviews conducted by peers would helpfully focus on the inputs and processes that generate the outputs and outcomes observed in indicators. For example, indicators focused on outputs or outcomes, such as graduation rates, would be complemented by an on-site review process that examines the conditions affecting variation in these rates. These conditions include student advice and mentoring processes; how institutions identify students at risk of falling behind or dropping out; and the social, psychological and academic support services provided to students at risk.
4. Increase incentives for institutions to take a strategic view of quality
The processes of institutional quality assurance do not encourage institutions to take a truly strategic and institution-wide view of quality. The IGC generates a score that is an aggregation of programme-level results. However, it has not been demonstrated to be useful in differentiating levels of institutional performance or in providing actionable feedback to institutions. Institutional Development Plans (PDIs), in their current form, do not appear to provide an opportunity for institutions to take a comprehensive and strategic view of their institution, its profile and the quality of its educational programmes. Universities are, notionally, institutions that provide research-led teaching, and should be able to give an account of where and how undergraduate education is joined up with their research mission. The Institutional Development Plans examined by the OECD review team, however, do not show evidence of this. It would be valuable to provide incentives for institutions to develop more meaningful PDIs with a stronger focus on how quality across a range of dimensions can be maintained and enhanced. One way to do this is to make the assessment of internal quality policies and practices a much bigger part of the re-accreditation process, through greater weighting in the relevant evaluation instrument for on-site reviews.
5. Move to a system where institutions that can demonstrate strong internal quality assurance capacity and a proven record of delivering quality can accredit (authorise and recognise) their own programmes
Finally, the current processes for demonstrating institutional quality do not permit higher education institutions to show that they have the capacity to take responsibility for quality, and should therefore be authorised to act as self-accrediting organisations, permitted to create, revise and eliminate programmes on their own initiative, as happens in other higher education systems around the world. The process of re-accreditation, specifically the resulting CI score, changes the periodicity of institutional reviews, but it does not alter the level of responsibility that institutions are permitted to exercise. If accountability for institutional quality is to be joined up with institutional responsibility for the quality of programmes, re-accreditation will need to be a very different and more robust process than at present. Examples of such differentiated models, where some institutions are subject to programme-level review and others are accorded self-accrediting status on the basis of rigorous institutional review, exist in other systems of higher education and could serve as inspiration for Brazil.
References
[7] A3ES (2018), Acreditação e Auditoria - A3ES, Agência de Avaliação e Acreditação do Ensino Superior (A3ES), http://www.a3es.pt/en/node/83 (accessed on 18 November 2018).
[8] CIEES (2018), Evaluación de programas, Comités Interinstitucionales para la Evaluación de la Educación Superior, https://ciees.edu.mx/ (accessed on 18 November 2018).
[9] ESG (2015), Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG).
[12] Folha de S.Paulo (2018), Ranking Universitário Folha - 2017, Folha de S.Paulo, http://ruf.folha.uol.com.br/2017/ranking-de-universidades/ranking-por-ensino/ (accessed on 17 November 2018).
[4] Hegji, A. (2017), An Overview of Accreditation of Higher Education in the United States, Congressional Research Service, Washington, D.C., http://www.crs.gov (accessed on 17 November 2018).
[14] INEP (2017), Inep divulga Conceito Preliminar de Curso e Índice Geral de Curso de 2016 (INEP reveals the Preliminary Course Score and General Course Index for 2016), INEP 24 November 2017, http://portal.inep.gov.br/artigo/-/asset_publisher/B4AQV9zFY7Bv/content/inep-divulga-conceito-preliminar-de-curso-e-indice-geral-de-curso-de-2016/21206 (accessed on 17 November 2018).
[18] INEP (2017), Instrumento de Avaliação Institucional Externa - Presencial e a distância - Recredenciamento Transformação de Organização Acadêmica (External institutional evaluation instrument - Classroom-based and distance - Re-accreditation and change of academic status), Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira, Brasília, http://download.inep.gov.br/educacao_superior/avaliacao_institucional/instrumentos/2017/IES_recredenciamento.pdf (accessed on 11 November 2018).
[10] INEP (2017), Nota Técnica No. 39/2017/CGCQES/DAES (Methodology for calculating the Índice Geral de Cursos - IGC), Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira, Brasília, http://download.inep.gov.br/educacao_superior/enade/notas_tecnicas/2016/nota_tecnica_n39_2017_cgcqes_daes_calculo_igc.pdf (accessed on 18 November 2018).
[16] MEC (2017), Portaria Nº 23 de 21 de dezembro de 2017 - sobre o exercício das funções de regulação, supervisão e avaliação de instituições de educação superior e de cursos superiores (Ordinance 23 of 21 December 2017 regarding the exercise of the functions of regulation, supervision and evaluation of HEIs and programmes), Diário Oficial da União - Imprensa Nacional, http://www.imprensanacional.gov.br/materia/-/asset_publisher/Kujrw0TZC2Mb/content/id/1284670/do1-2017-12-22-portaria-n-23-de-21-de-dezembro-de-2017-1284666-1284666 (accessed on 18 November 2018).
[17] MEC (2017), Portaria Normativa Nº 1, de 3 de janeiro de 2017 - Estabelece os prazos de validade para atos regulatórios de credenciamento e recredenciamento das Instituições de Educação Superior (Ordinance No. 1 of 3 January 2017, establishing the duration of validity of regulatory acts for accreditation and re-accreditation of HEIs), http://www.in.gov.br/autenticidade.html (accessed on 14 November 2018).
[5] NVAO (2016), Assessment framework for the higher education accreditation system of the Netherlands, Nederlands-Vlaamse Accreditatieorganisatie (NVAO), The Hague, https://www.nvao.com/system/files/procedures/Assessment%20Framework%20for%20the%20Higher%20Education%20Accreditation%20System%20of%20the%20Netherlands%202016_0.pdf (accessed on 17 November 2018).
[15] Presidência da República (2017), Decreto Nº 9.235, de 15 de dezembro de 2017 - Dispõe sobre o exercício das funções de regulação, supervisão e avaliação das instituições de educação superior e dos cursos superiores de graduação e de pós-graduação no sistema federal de ensino. (Decree 9235 of 15 December 2017 - concerning exercise of the functions of regulation, supervision and evaluation of higher education institutions and undergraduate and postgraduate courses in the federal education system), http://www.planalto.gov.br/ccivil_03/_Ato2015-2018/2017/Decreto/D9235.htm (accessed on 10 November 2018).
[1] Presidência da República (2004), Lei no 10 861 de 14 de Abril 2004, Institui o Sistema Nacional de Avaliação da Educação Superior – SINAES e dá outras providências (Law 10 861 establishing the National System for the Evaluation of Higher Education - SINAES and other measures), http://www.planalto.gov.br/ccivil_03/_ato2004-2006/2004/lei/l10.861.htm (accessed on 10 November 2018).
[2] QAA (2018), Reviewing Higher Education, The Quality Assurance Agency for Higher Education, https://www.qaa.ac.uk/reviewing-higher-education (accessed on 18 November 2018).
[3] QQI (2018), Institutional Reviews, Quality and Qualifications Ireland, https://www.qqi.ie/Articles/Pages/Institutional-Reviews07.aspx (accessed on 18 November 2018).
[11] Schwartzman, S. (2013), “Uses and abuses of education assessment in Brazil”, PROSPECTS, Vol. 43/3, pp. 269-288, http://dx.doi.org/10.1007/s11125-013-9275-9.
[6] UKÄ (2018), Guidelines for the evaluation of first-and second-cycle programmes, Universitetskanslersämbetet (Swedish Higher Education Authority), Stockholm, http://www.uka.se (accessed on 17 November 2018).
[13] UP (2018), UP é considerada melhor universidade privada do Paraná pelo sexto ano consecutivo (UP considered the best private university in Paraná for the sixth year running), Universidade Positivo, https://www.up.edu.br/institucional/up-e-considerada-melhor-universidade-privada-do-parana-pelo-sexto-ano-consecutivo (accessed on 17 November 2018).