International competitiveness rankings, of which quite a number exist, enjoy great popularity with the media, which, notwithstanding the rankings’ rather shaky theoretical foundations, tend to use them as yardsticks for assessing the achievements or failures of governments. Against this background, the present article asks how consistent major rankings are. Do different rankings give consistent messages, both each year and over the course of time, so that one can be fairly sure that the results are trustworthy? Or is it largely in the eye of the beholder which ranking deserves more trust, because different rankings lead to largely different results?

Competitiveness rankings, in particular those with a global focus, receive a lot of attention in the media. Their results are quoted regularly and, depending on political affiliation, are used either to praise or to criticise whichever government (or Commission) is in power. Of course, commentators sometimes read messages into the rankings which reveal more about the commentator’s inability to fully grasp what the results do and do not imply than about competitiveness as such. For instance, it is not always acknowledged that – in the context of an ordinal ranking – a lower rank compared to previous years does not imply a deterioration. It does not even imply that no improvement occurred. The only message that is correctly conveyed is that another country’s performance has improved more (or deteriorated less) with respect to the criteria used to produce the ranking. Be that as it may, such slips have done nothing to undermine the popularity of rankings.

That is why it is helpful to reiterate what can and what cannot be derived from such rankings. Essentially, ordinal competitiveness rankings attempt to order countries in terms of their relative competitiveness. Thus they usually do not say anything about absolute differences in competitiveness. In fact, the 1st and the 2nd in the ranking may be further apart in terms of their competitiveness than the 100th and the 150th.1 To be sure, some rankings publish underlying scores and to the extent that these scores are in turn based on truly cardinal measures (and not just interviews, for instance), their absolute differences may be meaningful. However, they still do not imply an absolute deterioration or improvement. Concomitantly, one may also compare the global ordering from one year to another in order to glean some information about the relative changes that have occurred. But any of this would again be consistent with either improvements or deteriorations in absolute terms and nothing could of course be deduced in terms of absolute differences between countries provided there is no cardinal measure.
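The distinction between ordinal ranks and cardinal scores can be made concrete with a toy example; all scores below are invented for illustration:

```python
# Toy illustration: ordinal ranks discard cardinal distances.
# All scores are invented.
scores = {"A": 9.8, "B": 7.1, "C": 7.0, "D": 6.9}

# Ordinal ranking: 1 = highest score.
ordered = sorted(scores, key=scores.get, reverse=True)
ranks = {country: i + 1 for i, country in enumerate(ordered)}

# The score gap between ranks 1 and 2 is large, between ranks 2 and 3
# tiny, yet both pairs are exactly one rank apart.
gap_1_2 = scores[ordered[0]] - scores[ordered[1]]  # ~2.7
gap_2_3 = scores[ordered[1]] - scores[ordered[2]]  # ~0.1
```

The same ordering {A, B, C, D} would also result from scores of 9.8, 2.1, 2.0 and 1.9, which is why the ranking alone says nothing about absolute differences.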

But then, which ranking should be used in the first place, given that, for Europe at least, quite a number of competitiveness rankings exist? Are some rankings more reliable or methodologically sounder than others? Or does it not matter which ranking is consulted? These are important questions in light of the political importance of competitiveness rankings. It is not surprising, therefore, that recent years have witnessed some research on these issues and that the debate on the methodology of rankings has been an intensive (and sometimes heated) one. The key points raised in this literature will therefore be briefly reviewed. The main purpose of the present paper is, however, a simpler or a more fundamental one, depending on one’s point of view. The paper asks how consistent major rankings are. That is, do different rankings give consistent messages, both each year and over the course of time, so that one can be fairly sure that the results are trustworthy? Or is it – also in view of the methodological problems – largely in the eye of the beholder which ranking deserves more trust, since different rankings lead to largely different results?

To shed some light on these questions, the paper examines the extent to which rankings of the Member States of the European Union are correlated.2 The paper thus first investigates the similarity of rankings, examining both global rankings with respect to their implications for the EU and rankings which focus specifically on the EU. It then examines the extent to which intertemporal changes are reported consistently by these rankings, i.e. how similar the pattern of changes is from year to year. That is, in contrast to earlier studies, the paper does not (only) ask how robust the results of various rankings are for a specific year but whether different rankings indicate improvements and deteriorations consistently. The significance of this question stems, of course, from the attention year-over-year changes receive in the media, where improvements or deteriorations are usually given as much attention as the placement itself.

What Is Competitiveness?

Ranking countries in terms of competitiveness presupposes some understanding of the notion of competitiveness. However, the term “competitiveness” is as widely used as it is elusive. It has sparked lively debates in academic circles, initiated a host of reports and rankings, and continues to attract and concern policymakers across the European Union. Yet, in particular if applied to countries, regions or whole sectors of an economy, the notion also seems to escape a firm and unambiguous analytical grip.3 The practical relevance for economic policymaking has therefore repeatedly been called into question. Renowned economists such as Krugman4 and Lall5 have eloquently argued for abandoning the notion or, at least, for giving it an economically more meaningful interpretation together with sounder theoretical and empirical foundations. In their view, it is meaningful to say that the USA (or Europe) has become “less competitive” in making textiles and “more competitive” in making computers (or cars). But it is meaningless to assert that the USA (or Europe) is becoming more or less competitive as an economy.6

However, this critique has not muted the debate. On the contrary: authors such as Aiginger7 and Grilo and Koopman8 have argued for a broad conceptualisation of national competitiveness couched in terms of welfare, employment and prosperity. In essence, their conceptualisation appears to sever any obvious link to the received microeconomic understanding of competitiveness and its connotations of rivalry, market power and price differences. Concomitantly, their conceptualisation is also no longer subject to the Krugman critique and its emphasis on the need to distinguish clearly between, on the one hand, the competitiveness of a firm with bankruptcy as the baseline and the notion of comparative advantages in international trade on the other. But then the question arises as to how competitiveness so conceived can be distinguished from welfare and productivity and to what extent a much broader understanding of competitiveness can still guide policymakers.9 Is it perhaps, as Krugman10 already put it, just a poetic way of referring to “productivity”?

At the same time and despite the aforementioned ambiguities, political and legal documents continue to refer to competitiveness. Thus Art. 173 of the Treaty on the Functioning of the European Union (TFEU) stipulates that “[t]he Union and the Member States shall ensure that the conditions necessary for the competitiveness of the Union’s industry exist.” In addition, the Communication on the EU2020 Strategy puts forward as one out of three key priorities of this strategy the promotion of a more resource efficient, greener and more competitive economy. Hence there is an obvious need to somehow measure competitiveness and to make comparisons across countries.

On the supply side as it were, a host of institutions – mostly private but at times with support from academic economists – has developed rankings which purportedly measure the relative competitiveness of countries. Some of these do so from a global perspective, while others have been designed to assess progress for the Member States of the European Union against the background of the Lisbon objective to “make Europe, by 2010, the most competitive and the most dynamic knowledge-based economy in the world”.

It is probably fair to say that many of these rankings do without an in-depth discussion of the underlying notion of competitiveness. Thus they mostly adopt a rough-and-ready understanding of competitiveness without attempting to provide deeper analytical foundations. This may of course be criticised as measurement without theory, but that issue will be left aside here. For the purpose of this paper, competitiveness rankings will be taken at face value. That is to say, it will not be questioned whether and to what extent these rankings are based on a theoretically sound notion of competitiveness. Rather it will be assumed that they do so in order to allow a comparative analysis.

Main Critique of Rankings

The popularity of competitiveness rankings is in stark contrast to the criticism that has often been voiced in the academic literature. In this section, some of the key issues will be discussed without claiming, however, that each of the rankings whose results will be examined later in the paper are equally affected by the problems highlighted here.11

A key issue is clearly that rankings are based on the assumption that competitiveness can be measured in a meaningful way. However, as discussed above, the underlying notion of competitiveness is problematic, and this already points to a potential weakness of rankings which purport to measure competitiveness. After all, measuring something whose conceptualisation is difficult, if not meaningless, cannot itself lead to meaningful results. It is therefore perhaps not surprising that the empirical relationship between rankings and economic growth is very weak or even negative, depending on the chosen ranking.12

What is more, rankings assume that the determinants of competitiveness do not differ between countries. Although the actual performance of a country may vary for each determinant, the set of determinants is not only given, but the weight attached to each of the determinants is considered to be the same across countries. This may be so, yet whether, when and to what extent a variable contributes to competitiveness is anything but self-evident. Historical as well as institutional and structural differences between countries may lead to different sets of determinants and to different degrees to which each determinant contributes to competitiveness, both over time and across space. Take innovation policy, for instance. While innovation is widely acknowledged as an important growth driver, policies to foster innovation may look very different depending on whether the economy is dominated by SMEs or by large enterprises. By implication, best practice cannot easily be transferred from one country to another if the institutional setting is different.

Concomitantly, rankings generally assume a linear relationship between variables and competitiveness. But for some variables at least, this assumption is extremely difficult to defend, if not outright nonsensical – neither a tax rate of 0% nor one of 100% can be optimal, quite apart from questions of economic feasibility, and the same goes for other variables such as R&D expenditure as a proportion of GDP. All in all, these considerations suggest that rankings by construction cannot have immediate policy implications.

Some rankings try to overcome these problems – and the more general issue of choosing the right variables – by undertaking econometric analyses of the relationship between specific variables and competitiveness (however defined). But while this is helpful, it can also be delusive. Econometric analysis does not replace sound theoretical exploration, nor can econometrics overcome the problem that causality may go both ways. A case in point for this issue is fiscal policy and the mutual impact of budget deficits and growth.

In view of the large number of variables that are used for the construction of rankings, there is thus a clear risk of double-counting the impact of at least some variables on competitiveness. In fact, the independence of variables is all too often tacitly assumed rather than established. Yet in a general-equilibrium context, it is a truism that “everything depends on everything else”.

Other difficulties and problems concern the quality and reliability of data. For instance, survey data may suffer from both a home bias (national point of view) and a perception bias (impact of general attitude among peers). Ask someone about the German tax system or the German labour market and the answer will invariably be that the former is extremely complicated and the latter excessively inflexible. Yet both views may as much reflect a national perspective (because the situation in other countries is by and large unknown) as the prevailing view among peers (if everyone says that the tax system is complicated then that must be so). After all, surveyed people may not all be experts in the field they are questioned about; they are asked to give an absolute assessment of the country and not a comparison with other countries (which would require even greater expertise).

In this context, it should also be noted that rankings published in year x may be based on data from several previous years since not all statistics are available on a yearly and timely basis. This problem is compounded by frequent methodological changes which – while justified in principle – reduce comparability over time even further.

Description of Rankings

For the purpose of this study, altogether six rankings have been examined. These six rankings have been selected because they all purport to measure competitiveness, either implicitly or explicitly, rather than, for instance, the even more elusive notion of “economic freedom”.13

The first three – Doing Business (DB), World Competitiveness Scoreboard (WCS) and Global Competitiveness Report (GCR) – are global rankings. With few exceptions (see Table 1), they cover all 27 Member States of the European Union. Thus, they implicitly also rank EU Member States, and the resulting rankings can be compared with studies which rank only the EU (or most members thereof). The latter three comprise the Lisbon Scorecard (LS), the European Growth and Jobs Monitor (EGJM) and the Lisbon Review (LR). These rankings have been developed with a view to measuring relative progress on the Lisbon Strategy and its objective of making Europe the most competitive knowledge-based economic area. Their remit is thus also to measure competitiveness. Nevertheless, there are significant differences in terms of sub-indicators, weights and data sources (statistical vs. survey data, for instance) among all rankings (for an overview of the methodology, see the box below), and these differences are likely to account for substantial differences between the ranks.

Box: Qualitative Description of Rankings

Global Competitiveness Report (GCR)

The Global Competitiveness Report claims to capture the microeconomic and macroeconomic foundations of national competitiveness where competitiveness is defined as the set of institutions, policies and factors that determine the level of productivity of a country.1 The level of productivity, in turn, is supposed to determine the sustainable level of prosperity that can be earned by an economy. The concept of competitiveness thus involves static and dynamic components: although the productivity of a country determines its ability to sustain its level of income, it is also seen as one of the central determinants of the returns to investment, which is among the key factors explaining an economy’s growth potential.

The determinants of competitiveness are many and complex. The GCR groups them into 12 pillars of competitiveness: (1) institutions, (2) infrastructure, (3) macroeconomic stability, (4) health and primary education, (5) higher education and training, (6) goods market efficiency, (7) labour market efficiency, (8) financial market sophistication, (9) technological readiness, (10) market size, (11) business sophistication and (12) innovation.

The World Economic Forum draws its data from two sources: international hard data sources and the Executive Opinion Survey. Surveys capture the perception of business executives about the environment in which they operate. Most questions in the Survey follow a structure asking participants to evaluate, on a scale of 1 to 7, one particular aspect of their operating environment; 1 represents the worst possible situation and 7 represents the best.

World Competitiveness Scoreboard (WCS)

The World Competitiveness Scoreboard purports to rank and analyse the ability of nations to create and maintain an environment in which enterprises can compete, assuming that wealth creation takes place primarily at enterprise level (whether private or state-owned).2 However, enterprises are also seen to operate in a national environment which enhances or hinders their ability to compete domestically or internationally.

The methodology of the WCS divides the national environment into four main factors:

  • economic performance
  • government efficiency
  • business efficiency
  • infrastructure.

In turn, each of these factors is divided into five sub-factors which highlight every facet of the areas analysed. The 20 sub-factors comprise more than 300 criteria, although each sub-factor does not necessarily have the same number of criteria. Each sub-factor, independently of the number of criteria it contains, has the same weight in the overall consolidation of results, i.e. 5%. Criteria can be hard data, which analyse competitiveness as it can be measured (e.g. GDP) or soft data, which analyse competitiveness as it can be perceived (e.g. availability of competent managers). Hard criteria have a weight of 2/3 in the overall ranking, whereas the survey data have a weight of 1/3. In addition, some criteria are for background information only, which means that they are not used in calculating the overall competitiveness ranking (e.g. Population under 15). Finally, aggregating the results of the 20 sub-factors creates the total consolidation, which leads to the overall ranking of the WCS.
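The aggregation logic just described can be sketched as follows. All criterion scores are invented, and the 2/3 (hard) to 1/3 (survey) weighting is applied here within each sub-factor, which is one plausible reading of the published methodology:

```python
# Sketch of a WCS-style aggregation; criterion scores are invented.

def subfactor_score(hard, soft):
    """Average the criterion scores of one sub-factor, weighting
    hard data 2/3 and survey data 1/3 (background criteria excluded)."""
    hard_avg = sum(hard) / len(hard) if hard else None
    soft_avg = sum(soft) / len(soft) if soft else None
    if soft_avg is None:
        return hard_avg
    if hard_avg is None:
        return soft_avg
    return (2 / 3) * hard_avg + (1 / 3) * soft_avg

def overall_score(subfactors):
    """Each sub-factor carries the same weight (5% for 20 sub-factors),
    regardless of how many criteria it contains."""
    return sum(subfactor_score(h, s) for h, s in subfactors) / len(subfactors)

# Two sub-factors with different numbers of criteria but equal weight:
total = overall_score([([6.0, 4.0], [3.0]), ([4.0], [4.0])])
```

The key design point is that a sub-factor with 30 criteria counts no more than one with 5, so the effective weight of an individual criterion varies considerably.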

Doing Business

The indicators presented and analysed in Doing Business attempt to measure business regulation and the protection of property rights.3 The Doing Business data are collected in a standardised way using a survey. The survey uses a simple business case to ensure comparability across economies and over time – with assumptions about the legal form of the business, its size, its location and the nature of its operations. Surveys are then administered by local experts. The data from surveys are subjected to numerous tests for robustness, which lead to revisions or expansions of the information collected.

Doing Business publishes 8,967 indicators each year. The ease of doing business index then ranks economies using these indicators. For each economy, the index is calculated as the ranking on the simple average of its percentile rankings on each of the topics covered. The ranking on each topic is again the simple average of the percentile rankings on its component indicators. If an economy has no laws or regulations covering a specific area – for example, bankruptcy – it receives a “no practice” mark. Similarly, an economy receives a “no practice” or “not possible” mark if regulation exists but is never used in practice or if a competing regulation prohibits such practice. Either way, a “no practice” mark puts the economy at the bottom of the ranking on the relevant indicator.
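The percentile-average construction can be illustrated with a toy example; the economies and indicator values are hypothetical, and ties and “no practice” marks are ignored for simplicity:

```python
# Sketch of a Doing Business-style aggregation: the overall rank is based
# on the simple average of an economy's percentile rankings per topic.
# All data are invented.

def percentile_ranks(values):
    """Percentile ranking of each economy on one indicator
    (lower raw value = better; 0 = best, 100 = worst)."""
    order = sorted(values, key=values.get)
    n = len(order)
    return {econ: 100.0 * i / (n - 1) for i, econ in enumerate(order)}

# e.g. days to start a business and days to register property:
topics = [
    {"X": 5, "Y": 20, "Z": 60},
    {"X": 30, "Y": 10, "Z": 50},
]

per_topic = [percentile_ranks(t) for t in topics]
average = {e: sum(p[e] for p in per_topic) / len(per_topic) for e in topics[0]}

# Final rank: 1 = lowest average percentile.
final = {e: r + 1 for r, e in enumerate(sorted(average, key=average.get))}
```

Because only percentiles enter the average, an economy that is far worse than all others on one topic is penalised no more than one that is marginally worse, which mirrors the ordinal-versus-cardinal issue discussed earlier.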

Lisbon Scorecard

The scorecard’s “Lisbon league table” strives to assess individual EU countries’ performances relative to their Lisbon targets, comparing their standings in 2009 with 2000.4 The table is based on the EU short-list of “structural indicators”, which measures Member States’ performance in economic, social and environmental categories – such as employment rates, greenhouse gas emissions, research and development (R&D) spending and so on. The scorecard is not supposed to be a predictor of short-term economic performance. Instead, it aims to point to the capacity of Member States to flourish in a world in which high-cost countries cannot sustain their living standards unless they excel in knowledge-based industries.

Lisbon Review

The assessment of Europe’s competitiveness is based on publicly available hard data from respected institutions (such as Internet penetration rates, unemployment rates, etc.) and data from the World Economic Forum’s Executive Opinion Survey (EOS).5 The EOS is a survey of business leaders, conducted annually in over 130 countries, which provides data for a variety of qualitative issues for which hard data sources are scarce or frequently nonexistent (e.g. the quality of the educational system, the government’s prioritisation of information and communications technologies, etc.). The overall Lisbon scores for each country are calculated as the unweighted average of the individual scores in the eight dimensions. The scores and rankings of the countries covered by the review are extracted from a database covering a total of 133 countries.

European Growth and Jobs Monitor

The European Growth and Jobs Monitor is composed of six sub-indicators based on the Lisbon Agenda targets set by the European Council in 2000.6 For each sub-indicator, a benchmark is set, and the 14 countries in the survey are ranked according to their performance relative to the benchmark. Finally, the six sub-indicator scores for each country are combined, with equal weights, into one overall indicator. A score of one indicates that a country is on track to meet the Lisbon criteria by 2010, the original date for fulfilment of the targets. A score of less than one means that the country will probably miss its goals. A score of above one signals over-fulfilment.
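A minimal sketch of this benchmark logic, assuming a simple ratio-to-benchmark score (the monitor’s exact formula is not spelled out here, and the country values are invented; only the 3%-of-GDP R&D target is an actual Lisbon target):

```python
# Sketch of an EGJM-style benchmark score; the scoring rule is an
# assumption and the country values are invented.

RND_TARGET = 3.0  # R&D expenditure, % of GDP (Lisbon target)

def subindicator_score(value, benchmark):
    """1 = on track for the target, below 1 = likely to miss it,
    above 1 = over-fulfilment."""
    return value / benchmark

def overall_indicator(scores):
    """Equal-weighted combination of the six sub-indicator scores."""
    return sum(scores) / len(scores)

# A country spending 1.5% of GDP on R&D scores 0.5 on this sub-indicator:
rnd_score = subindicator_score(1.5, RND_TARGET)
```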

  • 1 http://www3.weforum.org/docs/WEF_GlobalCompetitivenessReport_2010-11.pdf.
  • 2 http://www.imd.org/research/centers/wcc/research_methodology.cfm.
  • 3 http://www.doingbusiness.org/MethodologySurveys/MethodologyNote.aspx.
  • 4 http://www.cer.org.uk/pdf/rp_967.pdf.
  • 5 http://www.weforum.org/pdf/Gcr/LisbonReview/TheLisbonReview2010.pdf.
  • 6 https://www.allianz.com/static-resources/en/press/media/documents/lisbon_council_2009_en.pdf.

For each ranking, the most recent editions have been used with the cut-off date of 31 August 2010. Since publication dates vary considerably, there may nevertheless be a time-lag of several months between rankings published in the same year. Moreover, not all rankings published in any given year refer to that year, while those which do may still be based on data from previous years. However, there is no point in speculating about the “true” reference year. For the purpose of this paper, all rankings have been taken at face value simply because that is what the public does as well. That is, the reference year provided by the authors of the ranking has been used no matter whether it is the year of publication or the year before.

Table 1 gives an overview of the results of all six rankings for the European Union for 2009 or the most recent release. What is clearly striking is the range of ranks for the same country.14 The range can be up to nine places, as in the cases of Malta and Latvia, or seven, as in the case of the Czech Republic. Overall, ranks differ by four or more places for more than half the Member States. Only the results for Denmark and Sweden do not differ between rankings. Note, however, that both countries score considerably worse in the European Growth and Jobs Monitor (whose coverage is much smaller, which is why it was not taken into account when calculating average ranks and ranges). Thus there would probably be even more variation if rankings with smaller coverage, such as the EGJM, could also be included.

Empirical Results

This section describes the main empirical findings. Figure 1 summarises them graphically; detailed results can be found in Table 2. Figure 1 depicts average values for both the correlation between ranks and the correlation between year-over-year changes, calculated on the basis of the detailed results.15 For rankings which are published only biennially, the correlations have been computed for appropriate pairings. Again, see Table 2 for detailed results.
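The reported figures are correlations between ranks and between year-over-year rank changes. A Spearman-style computation restricted to the commonly covered countries could look like the following sketch (the exact estimator used in the paper is an assumption here, and the sample rankings are invented):

```python
# Sketch: rank correlation and rank-change correlation between two
# rankings, restricted to the countries both rankings cover.

def pearson(x, y):
    """Plain Pearson correlation; applied to ranks it equals Spearman's rho."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rank_correlation(r1, r2):
    """Correlation of two rankings (country -> rank) on common countries."""
    common = sorted(set(r1) & set(r2))
    return pearson([r1[c] for c in common], [r2[c] for c in common])

def change_correlation(old1, new1, old2, new2):
    """Correlation of year-over-year rank changes in two rankings."""
    common = sorted(set(old1) & set(new1) & set(old2) & set(new2))
    d1 = [new1[c] - old1[c] for c in common]
    d2 = [new2[c] - old2[c] for c in common]
    return pearson(d1, d2)
```

Restricting each pairing to its common coverage is what makes the EGJM comparisons, with only 14 countries, less robust than the others.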

Figure 1
Overview of Results: Correlation of Rank and Change

Correlation of Ranks

The results depicted in Figure 1 can be divided into three relatively distinct groups. As was to be expected from earlier work, the WCS and GCR rankings are highly correlated (≈0.9). Thus, no matter which of the two is consulted, the message is by and large the same: countries which are deemed competitive in one ranking mostly rank very high in the other, and vice versa. WCS and GCR also show relatively high correlations with both the Lisbon Scorecard (LS) and the Lisbon Review (LR). Of course, a high degree of correlation between GCR and LR is not very surprising, since the World Economic Forum prepares both rankings, so very similar results were to be expected.

These broad results hold true also, albeit to a somewhat lesser extent, for the second group, i.e. correlations between LS, LR, WCS and GCR on the one hand and DB on the other. As far as WCS and GCR are concerned, a possible reason for the lower correlation could be that DB focuses more on hard data, whereas both WCS and GCR include survey data, which often cover similar issues. Thus, methodologically WCS and GCR appear to have more in common with each other than with other rankings. For the remaining correlations of this group, however, no such explanation is readily available.

The third group consists of correlations between the European Growth and Jobs Monitor (EGJM) and the remaining rankings. These correlations are consistently below average, for both global rankings and rankings with a European focus. This might still have been a conceivable outcome had the global rankings not shown such a high degree of consistency amongst one another and with LS and LR; as it stands, the result is surprising in so far as one would expect, prima facie, at least a relatively high degree of consistency between rankings which seek to capture the relative competitiveness of European countries.

Last but not least, it should be noted that correlation coefficients do not change significantly across years for any specific pairing, nor does the observed variation show an unambiguous upward or downward trend. The only exceptions are the pairings with EGJM, the correlation of which with the other rankings is not only much smaller but has also deteriorated over time up to the point of being negative for several pairings, and those involving DB, where the relatively high degree of correlation has also somewhat deteriorated over time, albeit to a much lesser extent.

Table 1
Summary of EU Rankings
Normalised rank1 (columns 2-7) and summary properties2 (columns 8-11)

Country | GCR 2009/2010 | WCS 2009 | DB 2009 | LS 2008 | LR 2008 | EGJM 2008 | Average Rank | Min. | Max. | Range
Austria | 8 | 7 | 9 | 4 | 5 | 10 | 5.7 | 4 | 8 | 4
Belgium | 9 | 10 | 6 | 13 | 10 | 8 | 10.7 | 9 | 13 | 4
Bulgaria | 27 | 18 | 17 | 25 | 27 | n.a. | 26.3 | 25 | 27 | 2
Cyprus | 14 | n.a. | 15 | 15 | 13 | n.a. | 14 | 13 | 15 | 2
Czech Republic | 12 | 12 | 23 | 9 | 16 | n.a. | 12.3 | 9 | 16 | 7
Denmark | 2 | 1 | 1 | 2 | 2 | 11 | 2 | 2 | 2 | 0
Estonia | 15 | 17 | 7 | 11 | 12 | n.a. | 12.7 | 11 | 15 | 4
Finland | 3 | 3 | 4 | 5 | 3 | 1 | 3.7 | 3 | 5 | 2
France | 7 | 11 | 13 | 10 | 8 | 12 | 8.3 | 7 | 10 | 3
Germany | 4 | 6 | 10 | 8 | 6 | 9 | 6 | 4 | 8 | 4
Greece | 26 | 23 | 26 | 20 | 23 | 4 | 23 | 20 | 26 | 6
Hungary | 23 | 21 | 16 | 23 | 22 | n.a. | 22.7 | 22 | 23 | 1
Ireland | 11 | 8 | 3 | 6 | 11 | 13 | 9.3 | 6 | 11 | 5
Italy | 20 | 22 | 25 | 22 | 24 | 14 | 22 | 20 | 24 | 4
Latvia | 25 | n.a. | 12 | 16 | 21 | n.a. | 20.7 | 16 | 25 | 9
Lithuania | 22 | 13 | 8 | 17 | 19 | n.a. | 19.3 | 17 | 22 | 5
Luxembourg | 10 | 5 | 21 | 12 | 7 | n.a. | 9.7 | 7 | 12 | 5
Malta | 21 | n.a. | n.a. | 27 | 18 | n.a. | 22 | 18 | 27 | 9
Netherlands | 5 | 4 | 11 | 3 | 4 | 3 | 4 | 3 | 5 | 2
Poland | 18 | 20 | 24 | 24 | 26 | 2 | 22.7 | 18 | 26 | 8
Portugal | 17 | 16 | 19 | 21 | 14 | n.a. | 17.3 | 14 | 21 | 7
Romania | 24 | 24 | 18 | 26 | 25 | n.a. | 25 | 24 | 26 | 2
Slovak Republic | 19 | 15 | 14 | 18 | 20 | n.a. | 19 | 18 | 20 | 2
Slovenia | 16 | 14 | 22 | 14 | 15 | n.a. | 15 | 14 | 16 | 2
Spain | 13 | 19 | 20 | 19 | 17 | 6 | 16.3 | 13 | 19 | 6
Sweden | 1 | 2 | 5 | 1 | 1 | 5 | 1 | 1 | 1 | 0
United Kingdom | 6 | 9 | 2 | 7 | 9 | 7 | 7.3 | 6 | 9 | 3

1 Global rankings have been recalculated for EU Member States or the available subset thereof.
2 Computations are based on complete rankings (shaded).

Correlation of Rank Changes

The above picture changes dramatically once the correlation between intertemporal changes is examined. These changes are only very weakly correlated no matter which pairing is examined, and in some cases the correlations are even negative. The highest value (0.51) is observed, not surprisingly, for GCR and LR. For the pairing WCS-EGJM, not only are the ranks as such negatively correlated for some years, but the same is also true for year-over-year changes. For the pairing DB-LS, which shows a relatively high degree of correlation for the ranks, changes are correlated negatively, whereas the pairings EGJM-LR and WCS-DB exhibit essentially zero correlation for year-over-year changes.

Going beyond averages, no clear trends can be observed for the correlation of year-over-year changes. While for some pairings, values have increased, they have decreased for others. Again the only exception to this general rule would be EGJM, where (almost) all pairings show a significant deterioration over time. Obviously, this deterioration has to be seen in the light of the decreasing correlation between EGJM and the remaining rankings.

However, as is also evident from Figure 1, highly correlated rankings usually go hand in hand with somewhat higher correlations of changes. Thus when ranks are consistent across rankings, then changes are more likely to be consistent as well, even though this relationship remains weak.

Table 2
Correlation Coefficients
Global Competitiveness Report, rank
2006/2007: WCS 0.97 | EGJM 0.52
2007/2008: WCS 0.88 | DB 0.80 | LS 0.91 | EGJM 0.20
2008/2009: WCS 0.88 | DB 0.73 | LS 0.91 | LR 0.96 | EGJM 0.06
2009/2010: WCS 0.90 | DB 0.61 | LS 0.89

Global Competitiveness Report, change
2006/2007 to 2007/2008: WCS 0.40
2006/2007 to 2008/2009: LR 0.51
2007/2008 to 2008/2009: WCS 0.42 | DB 0.23 | LS -0.09 | EGJM 0.30
2008/2009 to 2009/2010: WCS 0.11 | DB 0.07 | LS 0.34

World Competitiveness Scoreboard, rank
2006: DB 0.88 | EGJM 0.47
2007: DB 0.82 | LS 0.87 | EGJM 0.16
2008: DB 0.73 | LS 0.91 | LR 0.92 | EGJM -0.01
2009: DB 0.68 | LS 0.93

World Competitiveness Scoreboard, change
2006-2007: EGJM -0.18
2006-2008: LR 0.39
2007-2008: DB -0.04 | LS 0.07 | EGJM -0.45
2008-2009: DB 0.03 | LS 0.41

Doing Business, rank
2006: LS 0.75 | LR 0.83
2007: LS 0.75 | EGJM 0.38
2008: LS 0.72 | LR 0.72 | EGJM -0.18
2009: LS 0.66
2010: LR 0.61

Doing Business, change
2006-2007: EGJM 0.39
2006-2008: LR 0.27
2007-2008: LS -0.22 | EGJM -0.16
2008-2009: LS 0.15
2008-2010: LR 0.27

Lisbon Scorecard, rank
2006: LR 0.91 | EGJM 0.58
2007: EGJM 0.21
2008: LR 0.90 | EGJM 0.02

Lisbon Scorecard, change
2006-2007: EGJM 0.08
2006-2008: LR 0.10
2007-2008: EGJM 0.26

Lisbon Review, rank
2006: EGJM 0.57
2008: EGJM 0.11

Lisbon Review, change
2006-2008: EGJM 0.00

Conclusions

With some limitations, rankings deliver a consistent message in terms of the relative placement of countries. If a country ranks high in one ranking, then, with few exceptions, it will most likely also rank high in another. A possible explanation for this observation is that, in very broad terms, the methodology and the underlying data of most rankings appear rather similar. Nevertheless, the actual order of countries still differs significantly between rankings. This is all the more true for changes from one year to another. Such changes therefore cannot be interpreted in a reliable and meaningful manner, which clearly limits the usefulness of rankings for tracking the success of economic policies, particularly over short horizons, and makes references to changes in rankings a highly doubtful exercise. Whatever a change of rank may or may not imply in the context of a specific ranking, it has to be taken with a whole trainload of salt.

At the end of the day, the above findings also underpin the scepticism with which many scholars perceive rankings. No matter what the methodological weaknesses of individual rankings might be, if rankings do not give a consistent message then their practical (and pragmatic) usefulness is clearly in doubt.


Eckehard Rosenbaum, European Commission, Directorate General Enterprise and Industry, Brussels, Belgium.

The views expressed are purely those of the author and may not in any circumstances be regarded as stating an official position of the European Commission.

  • 1 Provided a method for measuring competitiveness cardinally exists.
  • 2 The reasons for the focus on European countries are twofold. First, the objective of the Lisbon Strategy was to make Europe the most competitive knowledge-based economic region. Second, the remit of Lisbon led to a natural demand for some measure of competitiveness which in turn led to the development of rankings with specifically European scope. Thus in addition to several global rankings, there are now also various European rankings.
  • 3 E. Siggel: International Competitiveness and Comparative Advantage: A Survey and a Proposal for Measurement, in: Journal of Industry, Competition and Trade, Vol. 6, 2006, No. 2, pp. 137-159.
  • 4 P. Krugman: Competitiveness: A Dangerous Obsession, in: Foreign Affairs, Vol. 73, 1994, No. 2, pp. 28-44; P. Krugman: Making Sense of the Competitiveness Debate, in: Oxford Review of Economic Policy, Vol. 12, 1996, No. 3, pp. 17-25.
  • 5 S. Lall: Competitive Indices and Developing Countries: An Economic Evaluation of the Global Competitiveness Report, in: World Development, Vol. 29, 2001, No. 9, pp. 1501-1525.
  • 6 Ibid., p. 1503.
  • 7 K. Aiginger: Competitiveness: From a Dangerous Obsession to a Welfare Creating Ability with Positive Externalities, in: Journal of Industry, Competition and Trade, Vol. 6, 2006, No. 2, pp. 161-177; K. Aiginger: La Compétitivité des entreprises, des régions et des pays, in: La Vie économique 3, 2008, pp. 19-22.
  • 8 I. Grilo, G.J. Koopman: Productivity and Microeconomic Reforms: Strengthening EU Competitiveness, in: Journal of Industry, Competition and Trade, Vol. 6, 2006, No. 2, pp. 67-84.
  • 9 Implicitly, this is acknowledged by I. Grilo, G.J. Koopman, ibid., who, while championing a broad concept of competitiveness, focus their analysis on the competitiveness of various industrial sectors.
  • 10 P. Krugman: Competitiveness: A Dangerous Obsession..., op. cit.
  • 11 See J. Küter: Länderrankings zur internationalen Wettbewerbsfähigkeit, in: Wirtschaftsdienst, Vol. 89, 2009, No. 10, pp. 691-699 and references therein; U. Heilemann, H. Lehmann, J. Ragnitz: Länder-Rankings – Komplexitätsreduktion oder Zahlenalchemie, in: Wirtschaftsdienst, Vol. 87, 2007, No. 7, pp. 480-488 for a more comprehensive discussion.
  • 12 U. Heilemann et al., ibid.
  • 13 This is what the Fraser Institute and the Heritage Foundation claim to do with their economic freedom indices.
  • 14 Note that only three rankings cover all 27 Member States. For reasons of comparability and consistency, average rank and range have been calculated on the basis of the comprehensive rankings.
  • 15 The Pearson product-moment correlation coefficient has been used here on the assumption that the relationship between two different rankings is linear. This assumption is reasonable insofar as two equivalent competitiveness rankings, R1 and R2, should lead to the same ranks for each country so that the rank of country C in ranking R2 is a linear function of its rank in ranking R1 with intercept 0 and slope 1. The same assumption applies mutatis mutandis to rank changes.


DOI: 10.1007/s10272-011-0368-5