This article critically discusses the potential of the new behavioural turn in consumer policy. It focuses on methodological and normative aspects that are not sufficiently discussed in the policy domain, in particular the lessons that can be learned from randomised control trials and the normative side of intervention. Some implications for consumer policy are drawn, and a new taxonomy of interventions is proposed.

Since 2012, the European Commission (EC) has been testing policy options through behavioural experiments.1 This is a clear sign of policy makers’ increasing interest in behavioural economics, especially in consumer protection policy. In the same vein, Cass Sunstein, who for the past fifteen years has been at the forefront of behavioural economics, served as the administrator of the White House Office of Information and Regulatory Affairs from 2009 to 2012. Richard Thaler, Sunstein’s Nudge co-author, became an advisor to the UK Behavioural Insights Team, which was set up in 2010 to apply behavioural sciences to public policy.2 The German Chancellery has been recruiting staff with in-depth knowledge of psychology, anthropology and behavioural economics for the Unit of Policy Planning, Fundamental Questions and Special Issues. Furthermore, some important policy papers have recently been presented to discuss the general guidelines for the implementation of this approach.3

Behavioural economics is the branch of economics that studies deviations from the standard assumptions of rational choice (i.e. bounded rationality), grounding the understanding of human behaviour in cognitive psychology instead. It essentially studies the mind as an information processing device, as opposed to alternative approaches based on the metaphor of impulse response. By examining deviations from rational choice, scholars in this domain have been able to identify a series of systematic errors in judgement (biases) and the dependence of choices upon frames. The latter phenomenon helps explain addictions, anomalous consumption and other dynamically inconsistent behaviours (modelled, for instance, through hyperbolic discounting of future events).

A main implication of behavioural economics is that it is possible to apply insights from bounded rationality theory to correct consumers’ mistakes or to induce certain types of conduct in cases in which their behaviour is inconsistent, e.g. when consumers fail to achieve certain targets (insufficient retirement saving, failure to quit smoking, etc.). According to this framework, it is possible to nudge consumers towards the (properly and normatively defined) right choice. This paternalism is coupled with a defence of freedom of choice or consumer sovereignty.4 The apparent oxymoron of libertarian paternalism disappears once we recognise the existence of framing effects in choice: it is argued that a simple modification of the choice architecture, without altering the set of options (i.e. without restraining the freedom to choose), is able to accomplish the desired result. This claim relies upon the idea that it is possible to counter-bias and de-bias consumers by exploiting the very same mental shortcuts that generate biases in judgement and framing effects.5

An important asset of the “nudge movement” is the extensive use of randomised control trials (RCTs), which are systematically employed in behavioural economics. The use of methodologies mimicking the hard sciences (essentially the construction of reliable counterfactuals to estimate the impact of an intervention) is defended as a means to provide accountability and to implicitly filter out value judgements from the design of policy options, in harmony with the evidence-based-policy mantra.

In this article, we present both a critical appraisal of the nudge arguments and a defence of this approach. In particular, our target is not the scientific results of behavioural economics but their translation into the policy domain. In discussing the potential of behaviourally informed interventions, we point out two main caveats which we think have not been stressed enough, one methodological and the other theoretical.

At the methodological level, we would like to discuss the limits of causal claims in experimental methodology applied to behaviour. As we will explain, causality is essentially constrained to the theoretical domain and cannot free the policy intervention from political debate. The technocratic idea that we can proceed by testing every intervention on a trial-and-error basis and in this way avoid conflict is scientifically unsound.

Secondly, at a more theoretical level, the claim of value-free interventions based on choice architecture is also flawed. The very logic of behavioural economics prevents a minimal criterion for intervention from being defined: such a criterion would have to rest on individuals’ exogenous preferences, which are contradicted by the context-dependent preferences at the core of behavioural economics.

Neoclassical experimentalism

The emphasis on RCTs in the policy debate is not a novelty, but rather the return of an agenda proposed from the 1950s onwards under the label of “classical experimentalism”.

The idea of policies informed by RCTs holds a strong appeal to the public and to the policy maker. The social sciences have traditionally been cursed by having to deal with ex post correlation and thus with the lack of a robust basis for causal inference. Ex post correlations cannot solve the traditional issues of omitted variables, simultaneity and measurement error, and as such are in many cases inconclusive. Technically, the question that scholars are continually dealing with is whether the correlation can be interpreted in a causal way, in the limited sense that cause precedes effect, cause covaries with effect and alternative plausible explanations can be excluded.6 Using data for which the generation process is not under control, i.e. where the assignment of independent variables and the measurement of dependent ones are not part of the design, the researcher cannot list all the alternative explanations and can therefore rule out threats to internal validity only to a limited extent.

In recent years, a large effort has been made to build simulation techniques or alternative modelling tools to predict the effect of policy interventions. However, these models are plagued by problems of indeterminacy. Even under very rigid assumptions, such as those of the traditional computable general equilibrium techniques of mainstream economics, they pose very limited restrictions on the aggregate predictions that can be made. As a result, these models are not falsifiable; they can reproduce any specific pattern observed and can be simulated to compute the effect of an intervention, but they cannot be validated.7

Thus, the development of controlled experiments has been welcomed by policy makers. By balancing threats to validity across groups and by equating expected values on pre-test outcomes, controlled experiments are valid tools for causally interpreting a correlation. By providing full replicability and a relatively simple technical apparatus (compared with computable general equilibrium techniques), they increase transparency. Nevertheless, there is a sort of immanent risk in evidence-based policy, which can be magnified by RCTs: due to the logic of the political process, policy makers may perceive the evidence as a tool to overcome critiques and speed up the approval of an intervention rather than as a way to rationalise the use of resources and increase efficiency.
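
To make the contrast concrete, consider a minimal simulation sketch (purely illustrative, with invented numbers; the variable names are ours and do not refer to any of the studies cited here): a latent trait drives both programme uptake and the outcome, so the naive ex post comparison is badly biased, whereas random assignment recovers the true effect in expectation.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 1.0

# A latent confounder (e.g. motivation) drives both self-selected uptake and the outcome.
motivation = rng.normal(size=n)
uptake = (motivation + rng.normal(size=n) > 0).astype(float)
outcome_obs = true_effect * uptake + 2.0 * motivation + rng.normal(size=n)
naive = outcome_obs[uptake == 1].mean() - outcome_obs[uptake == 0].mean()

# Random assignment breaks the link between the confounder and the treatment.
assigned = rng.integers(0, 2, size=n).astype(float)
outcome_rct = true_effect * assigned + 2.0 * motivation + rng.normal(size=n)
rct = outcome_rct[assigned == 1].mean() - outcome_rct[assigned == 0].mean()

print(f"naive comparison: {naive:.2f}")   # roughly 3.3: badly biased upwards
print(f"RCT estimate:     {rct:.2f}")     # close to the true effect of 1.0

Even in the randomised arm, of course, the estimate is local to the simulated population; nothing in the randomisation itself licenses extrapolation to other settings, which is the point developed in the remainder of this section.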

One should be conscious that there are limits to the lessons that we can learn from experiments. Decision makers should understand that causality is essentially constrained to the theoretical domain and cannot free the policy interventions from debate on their context of reference. The idea of black-box estimations of causal effects through experiments allowing conflict-free policy making is a fiction.

Methodologically, one should make a clear-cut distinction between a causal description and a causal explanation. The former is a general statement relating two items; the classical example is the relationship between flicking a light switch and turning on the light. A causal explanation is achieved when we are able to account for the fact that the relationship between the switch and the light may fail in the presence of a burned-out bulb.8 In this case, we are able to isolate the steps along the causal chain.

Another way to describe this is through the traditional distinction between the effects of causes and the causes of effects, i.e. the distinction between identifying the ex post causal impact of an intervention and identifying the structural parameters that explain the drivers of a certain change in behaviour following an intervention.9

If we aim to intervene on the environment, we need to accomplish explanation and not just description. However, this target is complicated by two main problems. On the one hand, we need both a correct categorical description of the phenomenon and a correct matching between the items measured and the categories. Clearly, this requires good theory (non-overlapping categories with high explanatory power) and good operationalisation. On the other hand, we have to face the issue of contextual causation, which often takes the form of complex adaptive systems: in the social sciences, we are never able to isolate fundamental causes, and most of the time we are really searching for the consequences of non-redundant parts of contextual causes that are sufficient but not necessary. As a result, we always have problems extrapolating to alternative settings, units, descriptively different treatments and observations.10
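
The notion of contextual causation can be made slightly more precise with a toy logical example (our illustration, echoing Mackie's well-known INUS formulation rather than anything specific in the cited literature). Suppose the outcome E obtains exactly when

E \iff (A \wedge B) \vee C,

where A is, say, a disclosure rule, B a contextual condition and C an entirely different causal route. A is then an insufficient but non-redundant part of the conjunction A \wedge B, which is itself sufficient but not necessary for E: an experiment run in a setting where B happens to hold identifies the effect of A in that context, but says little about settings in which B fails.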

It is important to stress that the idea of policy interventions as experiments to be evaluated using counterfactual techniques is not a new one. It dates back to the end of the 1950s, when a Popperian programme of “reforms as experiments” was defined in the US as a response to the Soviet planning approach. In this initial plan, classical experimentalism was rigidly founded on the use of social indicators and on a strong foundationalism, claiming privileged knowledge on the basis of strategic methodological choices.11 This programme was aimed both at maximum accountability and at the reduction of social conflicts in a “truth to power” approach.12 At the same time, it accomplished the more secular task of justifying intervention (or ensuring checks and balances) in the face of any regulation, ultimately understood as the product of rent-seeking by stakeholders.13

Historically, the rigid counterfactualism of classical experimentalism was formulated by Campbell and colleagues,14 based on the epistemological claim that experimental and quasi-experimental research designs are the only possible way to recover the causal impact of an intervention while controlling for all possible confounders and covariates. Concretely, this model remained dominant for only a decade and was never fully implemented in the practice of policy evaluation. The history of impact evaluation of policy interventions has been largely based on more pragmatic accounts (more frequently in the EU than in the US, the UK or international organisations).15 Though methodologically unsatisfactory, this approach is more appealing to the policy environment because it leaves room for mediation among different interests in the choice of indicators and increases bargaining power through the use of hard numbers as rhetorical arguments.

In the case of the European Commission, an example of this distance from the Campbell approach towards a “negotiation plus indicators” framework is the Open Method of Coordination.16 Impact assessment has likewise generally acknowledged the complexity of the political dimension of policy design and evaluation.

At the methodological level, there has been a large and inconclusive debate in the social sciences over the Campbell approach. In the decades that followed its formulation, alternative paradigms were developed. Already in the 1970s, counterfactualism was challenged by the development of utilisation-focused evaluation.17 In its simplest form, it can be considered a version of philosophical pragmatism, wherein the validity of knowledge rests on its usefulness to the political process rather than on any methodological choice justifying a claim to privileged evidence.

Another philosophically grounded challenge came from the constructivist approach to programme evaluation. In this case, the focus is on stakeholders, since truth is here envisaged as the construction of meanings by the different actors.18

Finally, in terms of concrete implementation, a more widespread approach to evaluation came with the efficientism of the 1990s.19 With its strict focus on value for money, it mixes both a hardening and a softening of constraints: it usually relies on a hybrid of hard and soft evidence aggregated quantitatively with best-of-breed tools.20 In other words, while the claim of a cost-benefit analysis is at the core of this approach (i.e. being accountable for the use of public money), it ends up being a sort of pragmatism without a disclaimer: results are aggregated without a rigid methodology for assessing causal impact, and, similarly, user-defined variables are combined to build indicators of results.

In this framework, a return to counterfactualism took place at the end of the 1990s, mainly supported by policy makers’ increasing interest in behavioural economics. We refer to this as neoclassical experimentalism to distinguish it from the original programme.

At the epistemological level, counterfactualism is grounded on a successionist notion of causality derived from Hume.21 In a nutshell, it treats causality as a relationship between experiences rather than facts. It requires temporal and spatial contiguity (of cause and effect), temporal succession (cause before effect), and constant conjunction (the effect occurs whenever the cause is observed).

From a methodological point of view, an experiment is an ideal situation in which the data-generating process of the intervention variable is subject to exogenous variation (internal validity), the participants in the study are representative of the population targeted by the intervention (external validity), and the behaviour measured in the lab is (on average) identical to the variable of interest that we want to modify in the real environment (construct validity). In practice, specific problems may emerge that distance RCTs from this ideal situation. In reality, experiments provide a very local type of evidence, while what policy makers desire is a general conclusion.

Lack of external validity is an endemic problem: participation can only be voluntary, and most of the time these studies rely on convenience samples. Construct validity is itself subject to a trade-off: control over the data-generating process requires simple tasks, so that confounding factors do not enter the design, whereas fidelity of the task to the aim of the intervention requires complexity in the behavioural variable recorded. This also makes it clear that an experiment cannot free social scientists (or the policy makers informed by them) from dealing with theory. In order to design the experiment, we need some sort of model, i.e. a list of definitions that will guide the identification of the response variables (operationalisation), a sketch of the intervening variables which we need to keep under control to avoid confounding factors (through measurement, assumptions or design), and some assumption about behaviour. Otherwise, we will never be able to predict who will change behaviour once the policy is implemented.

The experimental evidence used in the policy domain usually comes from two different sources: RCTs in labs and natural experiments (or field experiments) in which the policy is first tested on a part of the population and then implemented erga omnes.

It should first be said that these two types of evidence have very different features. In the lab, there is more control over the environment and a concrete possibility to master or manipulate confounding factors (though in the end, assumptions about some of the factors remain necessary22). However, the sample is usually self-selected or simply a convenience sample. The problem of construct validity is acute. For example, there are situations in which the exact behaviour cannot be replicated and some proxy conduct is measured (e.g. the case of simulated purchases). Alternatively, one may aim to capture complex unobservable conduct such as trust or cooperation. However, human behaviour that entails complex social interactions cannot be generated in a lab context, in which at best small group interactions can be tested. Finally, experimenter demand effects are a serious threat: in many cases, participants rely on any sort of cue to work out what might be socially desirable, and this may confound the results.23

In field experiments, construct and external validity are increased, but control over the environment is drastically reduced. Anything going on during the experiment may have strong bandwidth effects,24 altering the cognitive resources used by the participants. In the case of phased interventions, the set of confounding factors is potentially unlimited. Moreover, internal validity can be seriously threatened by attrition or selection. Finally, contamination by treatment or other forms of spillovers is possible.

In both cases, experiments rarely deal with medium-run or long-run effects because of participant dropouts or limited budgets, making a longitudinal follow-up or recall impossible.

In a nutshell, RCTs are certainly transparent, and full replicability is a valuable gain that other social science methods tend not to share. Having said that, the generalisation of a tested policy is conditional on the equivalence between the implemented and the tested policy and on assumptions of how agents in different contexts respond to the tested intervention. There is no blueprint for this stage, but surely theoretical guidance is necessary.

Value-free interventions?

Traditional policy making is heavily shaped by the perspective of the homo oeconomicus of standard economic theory.25 Homo oeconomicus can be defined as a subject equipped with a stable system of preferences and the cognitive resources to process information, avoiding systematic mistakes. Behavioural economics and nudging depart from mainstream economic theory and its implications for policy making.

The orthodox view of the economic agent is grounded on a precise mathematical formulation of rational choice theory.26 Under a certain set of assumptions, choices can be represented as the maximisation of utility, i.e. as acting according to a given preference ordering subject to constraints. Rationality also requires some consistency in the way probability evaluations are made and revised, namely not violating Bayesian rules. The preference ordering is deemed exogenous and not influenced by the specific choice set, and the logic is consequentialist, in that alternatives are ranked on the basis of outcomes. The policy implication for the demand side is represented by the information paradigm: if the above apparatus holds, more information must empower citizens.27
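
For reference, a standard textbook rendering of this apparatus runs as follows (our own summary, not the authors' notation). Under the von Neumann-Morgenstern axioms, preferences over lotteries L = (p_1, x_1; ...; p_n, x_n) admit an expected utility representation,

L \succsim L' \iff \sum_i p_i\, u(x_i) \ge \sum_i p_i'\, u(x_i),

for some utility function u over outcomes, while beliefs are revised in conformity with Bayes' rule, P(\theta \mid e) = P(e \mid \theta)\, P(\theta) / P(e). The information paradigm referred to in the text then follows: given a stable u and coherent updating, more information empowers the consumer to choose better.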

Behavioural scientists have theoretically and empirically shaken this edifice and departed from the above axiomatisation, showing that judgement relies on heuristics and that choices are reference dependent.28 Empirically, they have found that human behaviour is heavily context dependent, a function of both the person and the situation. Often there is no given ordering of preferences at all.29 Instead, the framing of the situation affects the final choice,30 and the ordering is affected by the endowment available at the time of decision,31 even to the point at which the ordering is reversed.32 Present bias pushes individuals to revise their planned choices when temptation materialises, as patently shown by smoking and alcohol use.33
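
The present bias mentioned here is often formalised with the quasi-hyperbolic (beta-delta) discounting model, a common simplification of the hyperbolic discounting referred to above (our illustration, with made-up numbers). A person evaluating a stream of payoffs at time t applies

U_t = u(c_t) + \beta \sum_{k \ge 1} \delta^k\, u(c_{t+k}), with 0 < \beta < 1 and 0 < \delta \le 1.

Take \beta = 0.6 and, for simplicity, \delta = 1. Asked today, the person prefers 110 euros in 31 days to 100 euros in 30 days (0.6 x 110 = 66 > 0.6 x 100 = 60). When day 30 arrives, the immediate 100 is no longer discounted and the same person prefers it to 0.6 x 110 = 66 tomorrow: the planned choice is reversed once the smaller reward becomes imminent, which is exactly the dynamic inconsistency that temptation exploits.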

Heuristics and dual process theories are key to understanding both the behavioural critique of standard economics and the main thrust of the nudge approach from a consequentialist perspective.34

On the normative side, the violation of the standard axioms of rational choice and the presence of framing effects and context dependence imply that the preference ordering is not invariant when the set of constraints is modified. As a result, we cannot have a minimal criterion based on the preferences themselves to evaluate two different social allocations (the so-called Pareto criterion). As an example, the contrast between dual selves (as in the case of addiction) implies an intra-personal comparison between two different systems of preferences, and choosing one of the two is ultimately a value-laden decision.35 Of course, we can still think of some sort of criterion based on pairwise coherence, where an option is preferred to an alternative if the latter is never chosen when the former is available.36 However, this criterion will be moot in precisely those situations in which what we intend to do is not what we do, which is the main object of policy discussion.
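
The pairwise coherence criterion can be stated a little more formally (our paraphrase of the welfare relation proposed by Bernheim and Rangel in the paper cited in note 36). Let a generalised choice situation be a pair (A, f), where A is the set of available options and f is the frame, and let C(A, f) denote what is chosen. Then

x P* y \iff y \notin C(A, f) for every (A, f) with x, y \in A,

i.e. x unambiguously improves on y only if y is never chosen whenever x is available, under any frame. The relation is typically incomplete: whenever x is chosen under one frame and y under another, which is precisely the framing-dependent case that motivates intervention, P* stays silent.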

Every policy intervention instead embodies some latent value judgement. Behavioural science does not deal with that, but rather with another fundamental issue: every policy intervention also rests on implicit assumptions about how the consumer will respond. Behavioural economics helps to revise these assumptions so that they better fit the evidence. As has been argued, behaviourally informed policy should be a delicate mix of normative choices to define the best options, a descriptive account of behaviour and a prescriptive identification of the gap between desired and actual outcomes.37

Figure 1
A taxonomy of behaviourally informed interventions

Implications for consumer policy

Some lessons can be drawn from the above discussion for consumer protection policy. We can try to map the complexity of the subject onto a simple two-dimensional taxonomy (see Figure 1), mirroring the discussion presented above. The first dimension relates to the high or low external and construct validity of the evidence provided by RCTs. High validity characterises all those interventions for which three conditions are met: (a) it is easy to recruit a representative sample; (b) it is easy to design a choice architecture which is understandable by the subjects and which mimics the real phenomenon being analysed; and (c) the treatments which are tested are valid proxies of the interventions that can be concretely implemented – in other words, it is reasonable to assume that a concrete implementation of the measure will not dramatically alter the perceived benefits and costs of the treatment and would thus shift behaviour in line with the experimental evidence.

The second dimension relates to the normative perspective. A certain intervention can be perceived as highly paternalistic or intrusive, or it may simply lack the support of a majority of the population. At the opposite end, a policy option can be perceived as highly technical and thus irrelevant to the majority of the population, as not very intrusive, or it may simply be supported by a majority.

In the upper left quadrant, we identify those sub-domains in which it is relatively easy to meet the three requirements for RCT construct and external validity. For example, recent evidence shows that the introduction of simple normative messages in energy bills improves the environmental behaviour of households.38 In this case, it is easy to get the intervention accepted because it is very cheap and because energy efficiency and environmental protection are usually declared to be valuable, as confirmed by survey evidence.39 At the same time, the evidence for these interventions has been provided through field experiments, since implementation is fairly straightforward.40

In the upper right quadrant, we isolate those cases where the evidence from RCTs is difficult to contest but where objections that the interventions are highly paternalistic are very likely to be raised. Examples are policies against smoking, policies directed towards (medical or other) insurance subscriptions, or policies that aim to increase the pension saving rate. The latter example is actually the first successful implementation of nudging, in connection with which the very concept of libertarian paternalism was developed.41 In this case, it is possible to design RCTs with almost perfect external validity, e.g. experiments on cigarette purchases or on the implementation of pension plans at the company level. However, regulation of health, pension and saving, and individual consumption decisions is usually criticised by pro-market adherents. The US Court of Appeals for the District of Columbia, for instance, blocked the introduction of pictorial warnings on cigarette packs, ruling that they would violate the prohibition on government-compelled speech.42

In the bottom left quadrant, we place those weakly contested interventions for which RCTs are difficult to design in an effective way. Examples in this domain are those related to information provision from a behavioural perspective. These cases are not very debated because, in the end, information provision is the standard policy implication of neoclassical economics. However, in many cases these mechanisms relate to complex purchases (e.g. household-level decisions which can be very expensive and thus cannot be adequately incentivised in the lab) for which RCTs are difficult to design.43

Finally, in the bottom right quadrant, we can identify those subjects for which a behaviourally informed intervention is less likely to be approved. This is the case, for example, for default options for organ donation. This may be contested on religious grounds by part of the population and is generally perceived as violating freedom of choice by libertarian and pro-market thinkers (who typically propose a market solution for organ donations).44 At the same time, RCTs in this case are difficult to design. In fact, the supporting evidence comes from observational data and from non-incentivised surveys with split ballots, for which the risk of social desirability bias in responses is high.45

Final remarks

Grounding theories of human behaviour on more realistic assumptions is certainly fundamental to increasing the effectiveness of interventions. Rigid counterfactuals are instrumental to this process but are not a magic bullet. Experiments offer highly localised evidence, while the intervention itself and the behavioural assumptions are very general. Moreover, experiments deal with means, not ends: they provide insights on how to achieve a target but cannot determine which targets to aim for.

Nothing in the behavioural sciences tells us that policy making will become a simpler matter or that it is generally optimal to restrain interventions or to use lean regulation instead of structural reforms.

Policy will remain a domain of contrasts among interests, but providing more robust evidence will improve transparency. Indeed, this is why the behavioural turn should be welcome.


This article is based on our experience in the framework of EAHC/2011/CP/01, Framework Contract with reopening of competition – behavioural studies. We thank G. Gaskell, M. Porta, A. Chakravarty, E. Ciriolo, G. Grimalda, P. Ortoleva, R. Van Bavel for the many discussions and exchanges on this topic. The usual disclaimer applies.

  • 1 R. van Bavel, B. Herrmann, G. Esposito, A. Proestakis: Applying Behavioural Sciences to EU Policy-making, JRC Scientific and Policy Reports, Luxembourg 2013, Publications Office of the European Union.
  • 2 D. Kahneman: Foreword, in: E. Shafir (ed.): The Behavioural Foundations of Public Policy, Princeton, NJ 2013, pp. VII-IX, Princeton University Press.
  • 3 P. Dolan, M. Hallsworth, D. Halpern, D. King, I. Vlaev: MINDSPACE: influencing behaviour through public policy, London 2010, Cabinet Office, Institute of Government; Cabinet Office Behavioural Insights Team: Applying behavioural insight to health, London 2011; J.S. Blumenthal-Barby, H. Burroughs: Seeking Better Health Care Outcomes: The Ethics of Using the “Nudge”, in: The American Journal of Bioethics, Vol. 12, No. 2, 2012, pp. 1-10; O. Oullier, S. Sauneron: Improving public health prevention with behavioural, cognitive and neuroscience, Paris 2010, Centre d’analyse stratégique.
  • 4 R.H. Thaler, C.R. Sunstein: Libertarian Paternalism, in: American Economic Review, Vol. 93, No. 2, 2003, pp. 175-179; R.H. Thaler, C.R. Sunstein: Nudge: improving decisions about health, wealth, and happiness, New Haven 2008, Yale University Press.
  • 5 D. Kahneman: Thinking fast and slow, London 2011, Penguin Books.
  • 6 W.R. Shadish, T.D. Cook, D.T. Campbell: Experimental and Quasi-Experimental Designs for Generalized Causal Inference, Boston, MA 2002, Houghton Mifflin Company.
  • 7 H. Sonnenschein: Market excess-demand functions, in: Econometrica, Vol. 40, No. 3, 1972, pp. 549-563; G. Debreu: Excess-demand functions, in: Journal of Mathematical Economics, Vol. 1, No. 1, 1974, pp. 15-21; R. Mantel: On the characterization of aggregate excess-demand, in: Journal of Economic Theory, Vol. 7, No. 3, 1974, pp. 348-353. See also the introduction in T. Bewley: General Equilibrium, Overlapping Generations Models, and Optimal Growth Theory, Cambridge, MA 2007, Harvard University Press.
  • 8 W.R. Shadish et al., op. cit.
  • 9 J.J. Heckman: Building Bridges Between Structural and Program Evaluation Approaches to Evaluating Policy, in: Journal of Economic Literature, Vol. 48, No. 2, 2010, pp. 356-398.
  • 10 L.J. Cronbach, S.R. Ambron, S.M. Dornbusch, R.D. Hess, R.C. Hornik, D.C. Phillips, D.E. Walker, S.S. Weiner: Toward reform of program evaluation, San Francisco 1980, Jossey-Bass.
  • 11 D. Campbell: Reforms as experiments, in: American Psychologist, Vol. 24, No. 4, 1969, pp. 409-429.
  • 12 The expression “truth to power” was first used in 1979 by Wildavsky, after which it has been often applied to evaluation and impact assessment as methods to ensure accountability of the way public money is spent. See A. Wildavsky: Speaking Truth to Power: The Art and Craft of Policy Analysis, Boston 1979, Little, Brown.
  • 13 W. Niskanen: Bureaucracy and Representative Government, Cheltenham 1971, Edward Elgar; E. Posner: Controlling agencies with cost-benefit analysis: a positive political theory perspective, in: University of Chicago Law Review, Vol. 68, No. 4, 2001, pp. 1137-1199.
  • 14 W. Shadish, T. Cook, L. Leviton: Foundations of Program Evaluation, Beverly Hills, CA 1991, Sage; D. Campbell, J. Stanley: Experimental and Quasi-Experimental Evaluations in Social Research, Chicago 1963, Rand McNally.
  • 15 A. Martini: How counterfactuals got lost on the way to Brussels, in: A. Fouquet, L. Méasson (eds.): L’évaluation Des Politiques Publiques en Europe, Cultures Et Futurs: Policy and Programme Evaluation in Europe, Cultures and Prospects, Paris 2009, l’Harmattan.
  • 16 The EU Open Method of Coordination was launched in 2000. It emphasises evidence-based policy and the role of measurement indicators. For each given policy domain and/or sub-domain, it involves defining a strategy for setting policy guidelines, gathering a set of indicators and periodically revising targets and progress. The Open Method of Coordination was introduced as an aspect of “new, experimental governance”, which is part of the response by the EU to regulatory shortcomings. For references and discussions, see A. Saltelli, B. D’Hombres, J. Jesinghaus, A. Manca, M. Mascherini, M. Nardo, M. Saisana: Indicators for European Union Policies. Business as Usual?, in: Social Indicators Research, Vol. 102, No. 2, 2011, pp. 197-207; and E. Szyszczak: Experimental Governance: The Open Method of Coordination, in: European Law Journal, Vol. 12, No. 4, July 2006, pp. 486-502.
  • 17 M. Patton: Utilization-focused evaluation: The new century text, Thousand Oaks, CA 1997, Sage Publications.
  • 18 R. Pawson, N. Tilley: Realistic evaluation, London 1997, Sage Publications Ltd.
  • 19 R. Visser: Trends in program evaluation literature: The emergence of pragmatism: Texas Center for Adult Literacy, Occasional Research Paper No. 5, 2003, retrieved 29 August 2011, from http://www-tcall.tamu.edu/orp/orp5.htm.
  • 20 C. Codagnone: Measuring eGovernment: Reflections from eGEP Measurement Framework Experience, in: European Review of Political Technologies, Vol. 4, 2007, pp. 89-106.
  • 21 D. Hume: A Treatise of Human Nature, London 1739, John Noon.
  • 22 C.F. Camerer: Behavioral Game Theory, Princeton 2003, Princeton University Press.
  • 23 D.J. Zizzo: Experimenter demand effects in economic experiments, in: Experimental Economics, Vol. 13, No. 1, 2010, pp. 75-98; D.J. Zizzo: Claims and confounds in economic experiments, in: Journal of Economic Behavior & Organization, Vol. 93, No. C, 2013, pp. 186-195.
  • 24 S. Mullainathan, E. Shafir: Scarcity: Why Having Too Little Means So Much, New York 2013, Times Book.
  • 25 M. Barr, S. Mullainathan, E. Shafir: Behaviorally Informed Regulation, in: E. Shafir (ed.): The Behavioural Foundations of Public Policy, Princeton, NJ 2013, pp. 441-461, here: p. 441, Princeton University Press.
  • 26 J. Von Neumann, O. Morgenstern: Theory of Games and Economic Behavior, Princeton 1944, Princeton University Press.
  • 27 H.-W. Micklitz, L.A. Reisch, K. Hagen: An Introduction to the Special Issue on “Behavioural Economics, Consumer Policy, and Consumer Law”, in: Journal of Consumer Policy, Vol. 34, No. 3, 2011, pp. 271-276.
  • 28 C.F. Camerer: Individual decision making, in: A.R.J. Kagel (ed.): The Handbook of Experimental Economics, Princeton 1995, pp. 587-704, Princeton University Press; C. Camerer, G. Loewenstein: Behavioral Economics: Past, Present, Future, in: C. Camerer, G. Loewenstein, M. Rabin (ed.): Advances in Behavioural Economics, Princeton 2003, pp. 3-52, Princeton University Press.
  • 29 P. Slovic: The construction of preferences, in: American Psychologist, Vol. 50, No. 5, 1995, pp. 364-371.
  • 30 A. Tversky, D. Kahneman: Judgment under uncertainty: Heuristics and biases, in: Science, Vol. 185, No. 4157, 1974, pp. 1124-1131.
  • 31 R. Thaler: Toward a positive theory of consumer choice, in: Journal of Economic Behavior and Organization, Vol. 1, No. 1, 1980, pp. 36-60.
  • 32 D.M. Grether, C. Plott: Economic theory of choice and the preference reversal phenomenon, in: American Economic Review, Vol. 69, No. 4, 1979, pp. 623-638.
  • 33 G. Loewenstein, D. Prelec: Anomalies in Intertemporal Choice: Evidence and Interpretation, in: Quarterly Journal of Economics, Vol. 107, Issue 2, 1992, pp. 573-597.
  • 34 K.E. Stanovich, R.F. West: Individual differences in reasoning: implications for the rationality debate? [Comparative Study], in: The Behavioral and brain sciences, Vol. 23, No. 5, 2000, pp. 645-665; discussion pp. 665-726; V. Thompson: Dual-process theories: A metacognitive perspective, in: J. Evans, K. Frankish (eds.): In two minds: dual processes and beyond, Oxford 2009, pp. 171-195, Oxford University Press.
  • 35 C. Codagnone, G.A. Veltri, F. Lupianez-Villanueva, F. Bogliacino: The challenges and opportunities of “nudging”, in: Journal of Epidemiology & Community Health, Vol. 68, No. 10, 2014, pp. 909-911.
  • 36 D.B. Bernheim, A. Rangel: Beyond Revealed Preference: Choice-Theoretic Foundations for Behavioral Welfare Economics, in: Quarterly Journal of Economics, Vol. 124, No. 1, 2009, pp. 51-104.
  • 37 B. Fischhoff, S. Eggers: Questions of competence: the duty to inform and the limits to choice, in: E. Shafir (ed.): The behavioural foundations of public policy, Princeton, NJ 2013, pp. 217-230, Princeton University Press.
  • 38 H. Allcott, S. Mullainathan: Behavior and Energy Policy, in: Science, Vol. 327, No. 5970, 2010, pp. 1204-1205.
  • 39 C. Codagnone, F. Bogliacino, G. Veltri: Testing CO2 car labelling options and consumer information, Office for Official Publications of the European Commission, Luxembourg 2013.
  • 40 D.L. Costa, M.E. Kahn: Energy Conservation “Nudges” And Environmentalist Ideology: Evidence From A Randomized Residential Electricity Field Experiment, in: Journal of the European Economic Association, European Economic Association, Vol. 11, No. 3, 2013, pp. 680-702, 06.
  • 41 R.H. Thaler, C.R. Sunstein: Libertarian ..., op. cit.
  • 42 R. Bayer, D. Johns, J. Colgrove: The FDA and Graphic Cigarette-Pack Warnings – Thwarted by the Courts, in: New England Journal of Medicine, Vol. 369, No. 3, 2013, pp. 206-208.
  • 43 C. Codagnone, F. Bogliacino, G. Veltri: Testing CO2 ..., op. cit.
  • 44 G.S. Becker, J.J. Elías: Introducing Incentives in the Market for Live and Cadaveric Organ Donations, in: Journal of Economic Perspectives, Vol. 21, No. 3, 2007, pp. 3-24.
  • 45 E.J. Johnson, D.G. Goldstein: Do Defaults Save Lives?, in: Science, Vol. 302, No. 5649, 2003, pp. 1338-1339.


DOI: 10.1007/s10272-015-0532-4