Introduction

Human behaviour online can be manipulated by platform architecture, including how online user interfaces are designed. We study the need for greater regulation of user interfaces that platforms design specifically to modify users’ behaviour in ways that most users cannot detect, and we examine the regulation of ‘choice manipulation architecture’ in the context of existing European Union law, particularly the EU’s data protection law (GDPR) and marketing law (UCPD). It has become common to use the term ‘dark patterns’ (also ‘deceptive practices’) to describe such manipulation in online environments. The term provides a framework for identifying and discussing design practices that businesses and regulators should consider problematic, but the definitions and descriptions are not sufficient in themselves to draw the delicate distinction between legitimate and lawful persuasion on the one hand and deceptive and unlawful manipulation on the other. Drawing that distinction requires an understanding of users’ free choice, agency and self-determination, as well as legal interpretation of diverse sets of regulation. The main contribution of this article is to place manipulative design, including ‘dark patterns’, within the frameworks of persuasion within marketing, persuasive technology (captology) and the laws governing privacy and marketing. We advance our understanding of online manipulation through design, in order to better inform regulation and business practices (this article is based on Trzaskowski, 2021a, and Trzaskowski, 2023).

‘Dark patterns’

The term was coined in 2010 by user experience designer Harry Brignull, and it has entered into legislation (e.g. the California Consumer Privacy Act, the Colorado Privacy Act and the Digital Services Act, discussed below), policy documents (e.g. Forbrukerrådet, 2018; OECD, 2022; BEUC, 2022; EDPB, 2022), legal research (e.g. Luguri & Strahilevitz, 2021; Leiser & Caruana, 2021; Jarovsky, 2022) and mainstream media (Nahai, 2015). Dark patterns may be defined as ‘tricks used in websites and apps that make you do things that you didn’t mean to, like buying or signing up for something’ (www.deceptive.design), or as in the California Consumer Privacy Act and the Colorado Privacy Act:

‘Dark pattern’ means a user interface designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision-making, or choice.

In both laws, an agreement obtained through the use of dark patterns does not constitute ‘consent’.

In a recent case (Federal Trade Commission v. Amazon.com, inc., 2023), the US Federal Trade Commission claims that Amazon has (1) ‘used manipulative, coercive, or deceptive user interface designs known as “dark patterns” to trick consumers into enrolling in automatically-renewing Prime subscriptions’ (‘Nonconsensual Enrollment’) and (2) ‘knowingly complicated the cancellation process for Prime subscribers who sought to end their membership’ (‘the Iliad Flow’), i.e. ‘complexity resulted from Amazon’s use of dark patterns—manipulative design elements that trick users into making decisions they would not otherwise have made’. (See also CNIL, 2022 (fining Microsoft Bing €60M for making it more difficult to reject than accept cookies) and FTC 2022a (agreement with Epic Games (Fortnite) on $275M in penalty and $245M in compensation for privacy-invasive default settings and deceptive interfaces for in-game purchases).)

In research, much focus has been on creating taxonomies of dark patterns, which may be grouped into eight categories (and 27 variants) (Luguri & Strahilevitz, 2021, p. 53 with references): nagging, social proof, obstruction, sneaking, interface interference, forced action, scarcity and urgency. Such taxonomies also allow for automated identification and assessment of such design practices (Mathur et al., 2019).
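To illustrate how such taxonomies can support automated identification, consider a minimal text-based detector, loosely inspired by the crawling-and-classification approach of Mathur et al. (2019). The pattern lexicon, category names and cues below are illustrative assumptions for the sketch, not a reproduction of their classifier.

    import re

    # Illustrative lexicon mapping taxonomy categories to textual cues; the
    # cues are assumptions for demonstration, not Mathur et al.'s actual data.
    PATTERN_CUES = {
        "urgency": [r"\bonly \d+ left\b", r"\bhurry\b", r"\bends (today|soon)\b"],
        "scarcity": [r"\blimited (stock|offer)\b", r"\balmost sold out\b"],
        "social proof": [r"\b\d+ people (are viewing|bought)\b"],
        "confirmshaming": [r"\bno, i (don't|do not) want\b"],
    }

    def flag_dark_pattern_cues(page_text: str) -> dict:
        """Return the taxonomy categories whose cues appear in the page text."""
        hits = {}
        lowered = page_text.lower()
        for category, cues in PATTERN_CUES.items():
            matched = [cue for cue in cues if re.search(cue, lowered)]
            if matched:
                hits[category] = matched
        return hits

    print(flag_dark_pattern_cues("Hurry! Only 3 left. 12 people are viewing this."))
    # -> flags 'urgency' and 'social proof'

Real-world studies crawl rendered pages and also inspect interaction flows, since several categories (e.g. obstruction and sneaking) are behavioural rather than textual.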

In the EU, several similar taxonomies have been provided, in which dark patterns have been categorised as, for instance, nagging, social proof (endorsements), obstruction, sneaking (information hiding), interface interference, forced (coerced) action, urgency (scarcity) and asymmetric choice (European Commission, 2022; FTC, 2022b). In the context of data protection, words like overloading, skipping, stirring, hindering, fickle and left in the dark (EDPB, 2023) have been used, as well as categories, including enjoy, seduce, lure, complicate and ban (CNIL, 2019).

The important element in these definitions and descriptions is the extent to which autonomy is impaired, i.e. whether you are ‘tricked’ or manipulated into doing ‘things that you didn’t mean to’.

We use the terms ‘behaviour modification’ (Bandura, 1969) and ‘influence’ interchangeably and understand them to comprise both persuasion and manipulation. We use persuasion as the legitimate (lawful) form of influencing behaviour and manipulation—which includes deception and coercion—as unlawful influencing (Rushkoff, 1999, p. 270; Wood, 2014, chapter 12). The distinction between persuasion and manipulation is normative and not always easy to establish, as discussed below. When a user interface is designed for unlawful behaviour modification, it constitutes ‘manipulation by design’.

We focus on data protection law where ‘consent’ must reflect the data subject’s genuine and informed choice and marketing law where the aim is to ensure that ‘commercial practices’ do not impair the consumer’s ability to make free and informed decisions. Manipulation by design—including by the use of ‘dark patterns’—is also important in, for instance, contract law and fraud, which are not dealt with here.

Markets and the right to self-determination

Markets and market failures

The overarching idea behind market economies is that efficient competition guided by price signals affected by supply and demand yields better economic outcomes for consumers. Neoclassical economics works with three basic assumptions:

  • people have rational preferences among outcomes that can be identified and associated with an expected value;

  • individuals maximise utility (as consumers) and firms maximise profit (as producers); and

  • people act independently on the basis of full and relevant information.

Rational choice theory rests on the assumption that aggregate social behaviour results from the behaviour of individual actors who make decisions in accordance with their preferences. This rational agent is expected to take into account all relevant information, potential costs and benefits etc. in order to act in accordance with their individual goals, values and preferences. The theory assumes that consumers’ decisions reveal their individual preferences so as to maximise their welfare (Akerlof & Shiller, 2015, p. 170).

There are instances where markets cannot in themselves ensure efficiency. ‘Market failures’ are situations in which the allocation of goods and services is not efficient, such as when sellers’ pursuit of pure self-interest leads to inefficient results. (In the following, we use the term ‘trader’, as is common practice in EU law; in the GDPR, the corresponding term is ‘data controller’.) Market failures may be corrected by means of market regulation, and the field of consumer protection law seeks to correct the market failure stemming from an asymmetry in power, including information.

The market regulation of the European single market (the internal market)—including consumer protection law, which is to ensure a high level of consumer protection—usually has the additional purpose of ensuring efficiency in the market by (a) disturbing it only to the extent necessary and (b) removing barriers to inter-state trade created by differences in law (harmonisation).

Marketing law

The Unfair Commercial Practices Directive (UCPD; Directive 2005/29/EC) applies to ‘business-to-consumer commercial practices’, which is a deliberately broad concept (CJEU cases C-59/12 and C-559/11) encompassing practices carried out before, during and after a commercial transaction (Article 3). The concept (referred to as ‘commercial practices’) is defined in Article 2(1)(d) as:

any act, omission, course of conduct or representation, commercial communication including advertising and marketing, by a trader, directly connected with the promotion, sale or supply of a product to consumers.

Article 5(1) prohibits unfair commercial practices, and Article 5(2) sets two cumulative requirements that must be fulfilled for a commercial practice to be deemed unfair. Thus, a commercial practice is unfair if:

  • it is contrary to the requirements of ‘professional diligence’, and

  • it materially distorts or is likely to materially distort the economic behaviour of the average consumer with regard to the product (hereafter referred to as ‘economic distortion’).

The layout of this general prohibition is important in order to understand the structure of the directive, even though most cases are likely to be determined by the more specific prohibitions concerning misleading practices (Articles 6 and 7), aggressive practices (Articles 8 and 9) and blacklisted practices (Article 5(5) and Annex I).

In practice, one must, in order to determine whether a commercial practice is lawful, firstly, consult the items on the blacklist; secondly, consider whether the practice is misleading and/or aggressive; and thirdly, consider whether the practice is otherwise contrary to the requirements of professional diligence.

If the practice is considered to be ‘otherwise contrary to the requirements of professional diligence’, economic distortion must also be considered. If the practice is found to be misleading and/or aggressive, it need only be determined whether that practice causes or is likely to cause the average consumer to take a transactional decision that he would not have taken otherwise (‘economic effect’). Such an analysis of the economic effect/distortion is not required for blacklisted commercial practices.
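The assessment order described above can be rendered schematically. The following sketch encodes the three steps as a simple decision function; the boolean attributes are illustrative assumptions standing in for the outcome of a legal analysis, which is contextual and cannot actually be mechanised.

    from dataclasses import dataclass

    @dataclass
    class Practice:
        blacklisted: bool                         # Annex I UCPD
        misleading: bool                          # Articles 6-7 UCPD
        aggressive: bool                          # Articles 8-9 UCPD
        economic_effect: bool                     # (likely) transactional decision
        contrary_to_professional_diligence: bool  # Article 5(2)(a) UCPD
        economic_distortion: bool                 # Article 5(2)(b) UCPD

    def is_unfair(p: Practice) -> bool:
        # Step 1: blacklisted practices are unfair in all circumstances,
        # without any analysis of economic effect/distortion.
        if p.blacklisted:
            return True
        # Step 2: misleading and/or aggressive practices require only the
        # 'economic effect' test.
        if (p.misleading or p.aggressive) and p.economic_effect:
            return True
        # Step 3: the general clause requires both limbs of Article 5(2).
        return p.contrary_to_professional_diligence and p.economic_distortion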

It is important to emphasise that the UCPD—in order to seek a generally applied, objective standard—makes no reference to the trader’s subjective intention behind a commercial practice (CJEU Case C-388/13, paras 47–48).

Data protection law

As data protection law is also important for business-to-consumer interaction, the General Data Protection Regulation (GDPR; Regulation (EU) 2016/679) must—in addition to providing democratic safeguards—be perceived as an important pillar of consumer protection law. In contrast to marketing law, the respect for privacy and the protection of personal data are fundamental rights rooted in the Charter of Fundamental Rights of the European Union (the Charter), Articles 7 and 8, respectively.

The GDPR can be distilled into six overarching principles that require that the data controller must—subject to the principle of proportionality—ensure legitimacy, transparency and security to demonstrate accountability and ensure empowerment of the data subject (Trzaskowski, 2021b). The principles overlap and may, ultimately, be reduced to a mere matter of ‘due diligence’.

In this context, we focus on consent—which constitutes an important part of empowerment—defined in Article 4(11):

‘consent’ of the data subject means any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her (emphasis added).

Interplay between the GDPR and the UCPD

Arguments from consumer protection law may play a role in data protection law, as the latter requires the processing of personal data to be ‘fair’ and ‘lawful’. The reverse is also true, as consumer protection law aims at striking a ‘fair balance’ between traders and consumers, a balance that can be affected by the trader’s processing of personal data (Calo, 2014; Borgesius et al., 2017; Svantesson, 2018; Zarsky, 2019).

The non-binding UCPD Guidance by the Commission suggests that (European Commission, 2019, section 1.2.10):

[a] violation of the GDPR or of the ePrivacy Directive will not, in itself, always mean that the practice is also in breach of the UCPD. However, such privacy and data protection violations should be considered when assessing the overall unfairness of commercial practices under the UCPD, particularly in the situation where the trader processes consumer data in violation of privacy and data protection requirements, i.e. for direct marketing purposes or any other commercial purposes like profiling, personal pricing or big data applications.

When it comes to ‘professional diligence’—i.e. the special skill and care which a trader may reasonably be expected to exercise—this is likely to include issues pertaining to privacy. For instance, the United Nations Guidelines for Consumer Protection (UN, 2015), concerning principles for good business practices, provide that ‘businesses should protect consumers’ privacy through a combination of appropriate control, security, transparency and consent mechanisms relating to the collection and use of their personal data’.

It remains fair to say that data protection law provides for a much tighter, more coherent and more robust framework to further its aims compared to the field of consumer law. The UCPD shares the principles of empowerment, proportionality and transparency with the GDPR, but the UCPD does not contain similar requirements for legitimacy, accountability and security.

Empowerment

From the dawn of European Union consumer policy, the focus was on enabling ‘consumers, as far as possible, to make better use of their resources, to have a freer choice between the various products or services offered’ (European Council 1975a; European Council 1975b, para 8). One of the main priorities was to ensure protection against ‘forms of advertising which encroach on the individual freedom of consumers’ (European Council 1975b, para 30).

Human agency and the right to self-determination are central concepts in legal theory as well as in consumer protection law, where the regulatory framework is aimed at empowering consumers to act in accordance with their preferences (e.g. European Commission 2007). Empowerment also permeates data protection law, with consent possibly being the clearest example that also elucidates the interplay with transparency.

Empowerment in data protection law does not mean that data subjects have absolute control over what data are being (or can be) processed about them, nor by whom. The processing of personal data must be ‘lawful’, which must require that the activity also be in compliance with the UCPD.

In short, empowerment must entail (1) human agency (some sort of free will, see also Kreps & Rowe, 2021), (2) a sufficient degree of transparency and (3) the absence of manipulation (Trzaskowski, 2021a). In the following, the primary focus will be on the absence of manipulation, and it may be helpful to emphasise that (a) it is widely accepted that human agency is a limited quantity—as recognised in, for instance, ‘bounded rationality’ (Kahneman 2003; Jones 1999) and ‘ego depletion’ (Baumeister & Tierney, 2011)—and (b) information does not equal transparency; i.e. to determine transparency, both the trader’s encoding and the user’s reasonable decoding of information must be determined (Trzaskowski, 2021a, chapters 6 and 7 with references).

Behaviour modification

In data-driven business models that rely on personalised advertising, including targeted advertising, the trader has an economic incentive to increase (1) the number of users, (2) their engagement (amount and nature of attention) and (3) knowledge about individual users (personal data). As we will discuss below, personal data coupled with insights from psychology and technology can be used to (a) increase the value of user experiences (through effective influence) and (b) increase the amount of attention and nature of engagement, including by means of creating addictive technology. Artificial intelligence (AI) plays an important role in optimising behaviour modification for the most profitable impact.

Rational decisions and behavioural insights

As our focus is on the regulation of markets, it is important to emphasise that economic theories underpinning markets—in addition to free choice—assume that consumers are able to make informed/efficient/rational choices. The economic theory usually applies a thin rationality (‘revealed preferences’ as introduced above) that disregards value shaping, adaptive preferences and the interest of future generations (Elster, 2016, p. 36), as well as the fact that people might prefer to do something other than spending time on maximising their economic interests (Taleb, 2010, p. 184).

Behavioural economics (see, e.g. Posner, 1997; Jolls et al., 1998) revolves around the economic consequences of the continuous stream of studies providing ever more fine-grained knowledge about human behaviour in general and human decision-making in particular, with a view to adjusting neoclassical economics for these insights found in behavioural sciences.

Market failures that rest on biased demand, generated by imperfectly rational consumers, have been labelled ‘behavioural market failure’ (Bar-Gill 2012, p. 2 et seq.); and these may lead to consumer loss. Behavioural sciences, as such, are not concerned with such losses or market failures, as they are merely focused on understanding human behaviour. It does not matter which sciences (psychology, neuroscience, sociology etc.) the behavioural understanding or models originate from.

Behaviour modification

Insights into human decision-making are used in marketing (Carnegie, 1936; Cialdini, 2021; Rushkoff, 1999; Godin, 2005a) as well as in regulation, including in the guise of nudges (Thaler & Sunstein 2021; Sunstein, 2013).

In his classic book, Influence, professor of psychology and marketing Robert B. Cialdini has identified seven ‘levers of influence’ (Cialdini, 2021), which are briefly introduced here to elucidate some classic tools in the advertiser’s toolbox.

  1. Reciprocation. This is a basic norm in human culture that requires one person to try to repay what another person has provided. The rule applies even to uninvited gifts or favours, and the feeling of indebtedness may leverage substantially larger favours in return. The rule also works in situations where a request has been declined, which makes it easier for the requester to successfully ensure compliance with a smaller favour (‘rejection-then-retreat’ tactic). Thus, it may be profitable to give something or ask for a larger favour before asking the consumer for a favour (Cialdini, 2021, pp. 71–72).

  2. Liking. We prefer to say yes to individuals we like. In addition to physical attractiveness, likeability can be boosted by compliments and similarity, i.e. to people whom we believe to be like us, or to those we already know (even peripherally). Repetition increases familiarity, and thus likeability. Even association with favourable events or people will increase likeability (Cialdini, 2021, pp. 124–135; Carnegie, 1936).

  3. Social proof. This tactic includes making products appear popular or trending. Social proof works best under uncertainty and/or when many people approve of the product. Liking can be used to further increase this effect (Cialdini, 2021, pp. 71–72; Ariely, 2008; Thaler & Sunstein, 2021).

  4. Authority. Use of an authority—or just symbols such as titles and uniforms—in marketing may work as a mental shortcut for quality, approval and recommendation (Cialdini, 2021, pp. 238–240).

  5. Scarcity. When something is less available, we lose freedom (of choice). Due to loss aversion, we assign more value to opportunities that are less available. The tactic is used by stating that availability is limited or by providing deadlines. We are more susceptible to this tactic if we have to compete with others, as in ‘two people from Denmark are also looking at this limited offer’ (Cialdini, 2021, pp. 289–290).

  6. Commitment and consistency. We are more willing to agree to requests when we have given an initial commitment. This is because we want to appear consistent with our words, beliefs, attitudes and deeds. A reminder may restore and intensify an initial commitment (Cialdini, 2021, pp. 360–362).

  7. Unity. This principle is about establishing a ‘we’-ness in tribes that leads to group solidarity, i.e. increased agreement with and influence from members of this tribe. Shared identities may be based on kinship, geography, tastes etc. This may also include being ‘friends’ on social media (Cialdini, 2021, pp. 435–436; Godin, 2008).

It may be added that successful traders create partnerships with their consumers (Godin, 2008; Turow, 2017), including by offering protection and privilege (Fletcher, 1995), to create loyalty (Wind & Hays, 2016; Godin, 2018). However, it has also been observed that loyalty rewards are increasingly being used for tracking ‘instead of being a straightforward tit for tat based on frequent visits’ (Turow, 2017, p. 22), and that such schemes entail that all consumers pay slightly higher prices, while only members receive small subsidies from merchants and all other customers (Clemons, 2019, p. 122).

Storytelling and framing effects

The greatest achievement of the human brain may be its ability to imagine things that do not exist (our capacity for ideas), which allows us to anticipate the future (Gilbert, 2006, p. 5; Harari, 2015, p. 117). It is hard to overestimate the role of narratives in this vein. Storytelling is what we tell ourselves and others, and what makes it easier to live in a complicated world (Godin, 2005a, 2005b). Creating such stories may, however, slide into the narrative fallacy, which ‘is associated with our vulnerability to over-interpretation and our predilection for compact stories over raw truths’ (Taleb, 2010, p. 63).

Frames are the words, images and interactions that reinforce personal biases (Godin, 2005a, 2005b, p. 51), and framing effects are closely related to storytelling (Schwartz & Sharpe, 2010, p. 61; Akerlof & Shiller, 2015, pp. 41–42), which is important for intersubjectivity (Harari, 2017, p. 150: ‘Sapiens rule the world because only they can weave an intersubjective web of meaning: a web of laws, forces, entities and places that exist purely in their common imagination’.) and is utilised in both political (Lakoff, 2014) and commercial marketing (Rushkoff, 1999, p. 181; Godin, 2005b; Akerlof & Shiller, 2015, pp. 41–42). As observed by Daniel Kahneman (Kahneman, 2003, p. 1459), ‘Framing effects are not a laboratory curiosity, but a ubiquitous reality’. When identical options are described in different terms, people often shift their choices: a choice described in terms of gains is often treated differently than the same choice described in terms of losses (Kahneman & Tversky, 1986; Jones, 1999). This shift demonstrates the concept and power of framing.

Manipulation

Manipulation is closely related to agency and the right to self-determination. Manipulation can be said to steer the choices of others by morally problematic means, including ‘employing emotional vulnerability or character defects’ (Wood, 2014, chapter 12), ‘in order to make us act against—or, at the very least, without—our better judgment’ (Rushkoff, 1999, p. 270).

In his analysis of manipulation, Cass Sunstein similarly finds that ‘an action does not count as manipulative merely because it is an effort to alter people’s behavior’ and that there is ‘a large difference between persuading people and manipulating them’ (Sunstein, 2016, p. 215). In defining manipulation, Sunstein suggests focusing on whether the effort in question ‘does […] sufficiently engage or appeal to [… the consumer’s] capacity for reflection and deliberation’ (emphasis added), and where ‘the word “sufficiently” leaves a degree of ambiguity and openness, and properly so’ (Sunstein, 2016, pp. 215–216).

This is similar to the fine line that must be drawn between ‘legitimate influence’ and ‘unlawful distortion’ of the average consumer’s behaviour under the UCPD (Trzaskowski, 2011). It could also be argued that to be able to objectively ascertain whether consent is given, there must be a sufficient engagement of or appeal to the data subject’s capacity for reflection and deliberation to assure a ‘genuine choice’.

Sunstein states that ‘no legal system has a general tort called “exploitation of cognitive biases”’ (Sunstein, 2016, p. 234), but both the UCPD and the GDPR may be interpreted as including a prohibition of manipulation within the above-mentioned definition, i.e. focusing on the lack of sufficiently engaging or appealing to the consumer’s capacity for reflection. The UCPD explicitly mentions coercion as an aggressive practice, and the prohibition of misleading commercial practices could be interpreted to require the absence of deception. It may also be argued that deceptive and coercive practices are not likely to result in unambiguous indications of data subjects’ wishes that signify agreement to the processing of personal data, which implies that consent is not obtained and that the processing is unlawful.

Choice architecture

As discussed above, insights into human decision-making can be used by traders to influence behaviour.

We don’t make choices in a vacuum. Our physical environment is shaped by nature and culture. Our behaviour and movement are constrained by laws of physics and public infrastructure (e.g. Jacobs, 1961) (physical architecture). In commercial contexts, much of our behaviour and many of our experiences are carefully designed and curated by businesses (see Underhill, 1999, p. 195, about the design of brick-and-mortar shops). For instance, milk is usually shelved in the very back of a grocery store, so as to guide people through the entire store.

In virtual realities, such as ‘cyberspace’, we are also subject to constraints stemming from infrastructure (digital architecture) that define our abilities and shape our experiences. Digital technology can work in tandem with our imaginative capabilities to create experiences that defy real-world physics. It is more cumbersome to redesign the physical architecture than the architecture of virtual realities, where a few clicks are sufficient to establish connections, create fora and delete persons.

Activities and experiences in virtual realities impact real reality—with which they can easily be confused because of their pervasiveness and human intersubjectivity. ‘Augmented reality’ is a hybrid in which elements of virtual reality are used to add to or mask parts of real reality. An illustrative example is the mobile game Pokémon Go, which allows the user to interact with virtual creatures, Pokémon (pocket monsters), which appear in the user’s real environment as identified by GPS signals. As catching Pokémon requires you to be at particular geographic locations, the game has been used to drive real visitors to actual McDonald’s restaurants and other sponsors (Constine, 2017).

Given our widespread usage of and trust in computers (including smartphones), virtual realities and augmented realities can be created that may be difficult to distinguish from real reality. Given the possibilities of influencing (persuading as well as manipulating), it is not difficult to imagine ‘abated reality’ in the guise of ‘real virtuality’ where the real reality decreases in force or intensity.

Digital technology—Distinct advantages

As much of society’s communication takes place online, we will also take a look at how influence is carried out online. Three important characteristics of digital technology are that activities can be automated, scaled and personalised easily. In addition, technology allows for real-time feedback.

The design of human–computer interaction plays a significant role in how consumers are influenced in the context of data-driven business models (Bond et al., 2012; Mik, 2016). In the early 2000s, BJ Fogg identified six distinct advantages that computers have compared with traditional media and human persuaders (Fogg, 2003, p. 7). Computers can:

  • be more persistent than human beings;

  • offer greater [often perceived (author’s addition)] anonymity;

  • manage huge volumes of data;

  • use many modalities to influence;

  • scale easily; and

  • go where humans cannot go or may not be welcome.

Additionally, computers have good memory, as well as the ability to evoke feelings through social cues without getting tired or requiring reciprocity (Carr, 2010, pp. 202–205). With digital technology, Cialdini’s principles of persuasion introduced above can be automated, scaled, personalised and applied in real time.

Friction

Persuasive technology (captology), introduced by BJ Fogg, can be distilled into a matter of managing ‘friction’ (Fogg, 2003; McNamee, 2019), which relies on (what Daniel Kahneman has identified as) the preference for ‘cognitive ease’ (Kahneman, 2011, p. 67). By increasing or reducing friction, the user can be nudged in a desired direction by designing a ‘path of least resistance’. The importance of friction in the design of online experiences is hard to overestimate—as, for instance, in the context of cookie consent pop-ups. Reading, thinking, clicking, scrolling, writing and paying all constitute friction.

To appreciate the power of friction, it may be helpful to perceive it as an obstacle to instant gratification, which is closely related to the bounded willpower introduced above. Consider, for instance, liking and sharing content on Facebook/Instagram, absorbing content on TikTok, searching the world wide web through Google, establishing connection on LinkedIn, swiping for dates on Tinder and shopping at Amazon.com. It is all very easy and convenient.

Both Amazon’s recommendations and Google’s sponsored links are designed to guide consumers to specific purchases by reducing the effort or ‘friction’ needed to select these over all others.
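The asymmetry can be made concrete by simply counting the interaction steps each option requires, which is essentially what was at issue in the CNIL’s Microsoft Bing decision cited above. A minimal sketch, with step counts assumed purely for illustration:

    # Toy friction metric: each interaction (click, scroll, toggle) counts as
    # one unit of friction. The paths below are assumptions for illustration
    # and do not describe any specific real-world consent banner.
    consent_paths = {
        "accept all": [
            "click 'Accept all'",
        ],
        "reject all": [
            "click 'Manage settings'",
            "scroll through the list of purposes",
            "untoggle each purpose",
            "click 'Confirm choices'",
        ],
    }

    def friction(path):
        """Friction as the number of interaction steps on a path."""
        return len(path)

    for option, path in consent_paths.items():
        print(f"{option}: {friction(path)} step(s)")
    # accept all: 1 step(s)
    # reject all: 4 step(s) -- the 'path of least resistance' is acceptance.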

Motivation, ability and prompts

Generally speaking, behaviour is affected by motivation, ability and prompts (Fogg, 2020). A nostalgic view of marketing may assume that creating motivation is key, whereas today the main focus—especially in digital marketing—is to increase ability (by removing friction: ‘it’s free!’, ‘click here!’) and to use prompts (‘act now!’, ‘click here!’). As expressed by BJ Fogg:

‘prompts are the invisible drivers of our lives’, ‘no behavior happens without a prompt’, and ‘the prompts coming from digital technology are harder to manage than those from junk mail […] Other than getting off the grid, we may never find a perfect way to stop unwanted prompts from companies with business models that depend on us to click, read, watch, rate, share, or react. This is a difficult problem that pits our human frailties against brilliant designers and powerful computer algorithms.’ (Fogg, 2020, pp. 97 and 105–106).
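Fogg’s behaviour model is commonly summarised as B = MAP: a behaviour occurs when a prompt arrives at a moment where motivation and ability together clear an activation threshold. The sketch below renders that summary as code; the multiplicative form and the threshold value are illustrative assumptions, as Fogg presents the model graphically rather than as an equation.

    def behaviour_occurs(motivation: float, ability: float, prompt: bool,
                         activation_threshold: float = 1.0) -> bool:
        """B = MAP, rendered as a simple predicate (illustrative only)."""
        return prompt and (motivation * ability) >= activation_threshold

    # Digital marketing mainly raises ability (removing friction) and fires
    # prompts, rather than building motivation:
    print(behaviour_occurs(motivation=0.4, ability=3.0, prompt=True))   # True
    print(behaviour_occurs(motivation=0.4, ability=3.0, prompt=False))  # False: no prompt, no behaviour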

Law, manipulation and choice architecture

In determining ethical issues with persuasive technology, BJ Fogg has identified three focal points, (a) the intentions of the trader, (b) the methods (practices) used to persuade and (c) the outcomes (effects) of using the technology, and he suggests as a first step ‘to take technology out of the picture’, and ‘simply ask yourself, “If a human were using this strategy to persuade me would it be ethical?”’ (Fogg, 2003, pp. 220–221; similarly Bermejo, 2019, pp. 129–130, suggests focusing on ‘companies, content, and users […] to explore some of the consequences of the advertising model prevalent on the social web’.)

Interpretation of EU law

Teleological interpretation necessitates that ‘every provision of [EU] law must be placed in its context and interpreted in the light of the provisions of [EU] law as a whole, regard being had to the objectives thereof and to its state of evolution at the date on which the provision in question is to be applied’ (CJEU, Case C-283/81, para 20). As noted by the Court of Justice of the European Union (CJEU) in the context of data protection law, the interpretation of a provision of EU law must take account of (CJEU, Case C-673/17, para 48 with references):

  • its wording;

  • the objectives it pursues;

  • its legislative context;

  • the provisions of EU law as a whole; and

  • possibly its origins.

As discussed, there is already a legal framework addressing manipulation by design of choice architecture. Both the GDPR and the UCPD are secondary laws, and as such they derive their authority from a higher level of the legal hierarchy, i.e. primary law.

Because privacy, including the protection of personal data, is protected in the Charter, it is a straightforward approach to take a fundamental rights perspective in the interpretation of the ePrivacy Directive (Directive 2002/58/EC) (privacy) and the GDPR (personal data).

Privacy is a matter of balancing legitimate interests, including other fundamental rights—such as the freedom of expression and the right to information—that are also necessary in a democracy and important for human welfare. Human dignity—which has freedom, privacy and non-discrimination at its core—may also play a significant role in the future interpretation of consumer protection (Trzaskowski, 2021a, chapter 10).

Manipulative commercial practices

Under the UCPD, a commercial practice is aggressive if—in its factual context, taking account of all its features and circumstances—by harassment, coercion (including the use of physical force) or undue influence, it is likely to (a) significantly impair the average consumer’s freedom of choice or conduct and (b) cause him to take a transactional decision that he would not have taken otherwise (Article 8 UCPD).

‘Transactional decision’ is broadly defined, and includes (potential) decisions concerning whether or not to buy (or complain about) products and on what terms. This concept also covers decisions directly related to such purchase decisions, including the consumer’s decision to enter a shop (CJEU, Case C-281/12, paras 35–36; CJEU, Case C-391/12), which indicates a relatively low threshold as to the effect or loss inflicted on the consumer (European Commission, 2021, section 2.4).

When a trader exploits ‘a position of power in relation to the consumer’ so as to apply pressure—even without physical force—undue influence exists if this ‘is likely to significantly impair the average consumer’s freedom of choice or conduct’ (Articles 8 and 2(1)(j) UCPD). Account must be taken of inter alia (a) ‘its timing, location, nature or persistence’ as well as (b) ‘the use of threatening or abusive language or behaviour’ (emphasis added) (Article 9 UCPD). From its wording, this list is not exhaustive. It follows from recital 16 UCPD that ‘the provisions on aggressive commercial practices should cover those practices which significantly impair the consumer’s freedom of choice’ (recitals 6 and 14 UCPD).

Traders are—in contrast to consumers—often well-informed about consumers’ biases and heuristics (CJEU, Case C-371/20, para 39 with references).

An illustrative example can be found in several airports where visitors, in order to get to their gates, are forced to wander through a shop immediately after security—sometimes with a somewhat concealed escape route for people with allergies. This is a behaviourally informed—and profitable—‘trick’ to make visitors spend more time and money in the store. This solution allows shoppers, who intend to buy, to get to their gates faster, but it is not unlikely that at least some visitors will take a transactional decision that they would not have taken otherwise.

One could argue that a commercial practice is aggressive, if it—for instance, by means of choice architecture—undermines the user’s capacity for reflection and deliberation, cf. the above-mentioned definition of manipulation.

Undue influence is not necessarily ‘impermissible influence’, but that is the case when conduct applies ‘a certain degree of pressure’ and, in the factual context, actively entails ‘the forced conditioning of the consumer’s will’ in a way that is likely to significantly impair the average consumer’s freedom of choice or conduct (CJEU, Case C-628/17, paras 33–34).

The CJEU has established that it does not constitute an aggressive commercial practice to ask the consumer to take his final transactional decision without having time to study, at his convenience, the documents delivered to him by a courier, provided that the consumer has been in a position to take cognisance of the standard-form contracts beforehand (CJEU, Case C-628/17, para 45).

However, the CJEU found that certain additional practices with the aim of limiting the consumer’s freedom of choice may lead to the commercial practice being regarded as aggressive. This includes conduct that ‘put[s] pressure on the consumer such that his freedom of choice is significantly impaired’ or establishes an attitude that is ‘liable to make that consumer feel uncomfortable’ such as to ‘confuse his thinking in relation to the transactional decision to be taken’ (CJEU, Case C-628/17, paras 33, 46–47).

Such pressure may, for instance, be induced by announcing that less favourable conditions are a consequence of delayed action on the part of the consumer (CJEU, Case C-628/17, para 48), a means of utilising Cialdini’s scarcity lever of influence. Information—including in the guise of storytelling and framing—may also constitute an important part of manipulative commercial practices, and such practices may thus fall under both aggressive and misleading commercial practices under the UCPD.

As discussed in the previous section, the design of human–computer interaction plays a significant role in how consumers are influenced in the context of data-driven business models. Technology can be automated to both observe and shape our behaviour at scale (Zuboff, 2019, p. 8; Yeung, 2017).

The CJEU has assumed that the average consumer is generally aware of how advertising and sales promotions work in a free market economy; that today’s consumers are ‘much more circumspect and informed’ as a result of their experiences with marketing; and also that the protection of consumers is a ‘patronising argument’, which is ‘no longer convincing’ (Trstenjak, 2010, para 104 and footnote 82 with reference). However, it is not impossible that pervasive exposure to personalised marketing, including the use of prompts and the manipulation of emotions, based on online tracking across platforms coupled with (annoying) cookie consent boxes, has resulted in additional confusion, cognitive overload and apathy rather than real education (see also Forbrukerrådet, 2018).

The UCPD Guidance by the Commission suggests that (European Commission, 2021, section 4.2.7) ‘In designing their interfaces, traders should follow the principle that unsubscribing from a service should be as easy as subscribing to the service’, and that ‘confirmshaming’ should be avoided. As examples of the latter, the Commission suggests avoiding statements like ‘we’re sorry to see you go’ and ‘here are the benefits you will lose’. The psychological effect of the first statement may be real but is unlikely to be considered unfair, and the second statement may be material information that the consumer needs to take a transactional decision.

Consent and data protection law

Consent requires a ‘freely given, specific, informed and unambiguous indication of the data subject’s wishes’, which ‘signifies agreement to the processing’ (according to the definition in Article 4(11)). Consent under the ePrivacy Directive (cookies and direct marketing) should be understood in the same manner as under the GDPR (CJEU, Case C-673/17, paras 38–43). Generally speaking, consent must constitute a genuine and informed choice. (‘Genuine choice’ can be said to comprise the requirements of ‘freely given’, ‘unambiguous indication’ and ‘signifying agreement’. Similarly, ‘informed choice’ comprises the ‘specific’ and ‘informed’ elements.)

To be genuine, consent requires a clear affirmative act, which may include ‘ticking a box’, but silence, pre-ticked boxes, and inactivity cannot constitute consent (recital 32 GDPR). The arguments against pre-ticked boxes include that it would otherwise be ‘impossible in practice to ascertain objectively’ whether consent is given (CJEU, Case C-673/17, paras 49, 52 and 55). It seems straightforward to extend this principle to situations where the choice architecture is designed in a way that does not sufficiently engage or appeal to the user’s capacity for reflection and deliberation.
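These criteria can be collected in a simple checklist structure. The sketch below is a schematic rendering of the Article 4(11) requirements and the pre-ticked-box case law discussed above; the field names are our own illustrative assumptions, and actual validity is a contextual legal question, not a boolean.

    from dataclasses import dataclass

    @dataclass
    class ConsentSignal:
        affirmative_act: bool     # e.g. actively ticking a box
        pre_ticked_default: bool  # recital 32 GDPR: cannot constitute consent
        specific_purpose: bool    # consent given per specified purpose
        informed: bool            # data subject informed before consenting
        freely_given: bool        # genuine choice, no detriment for refusing

    def objectively_ascertainable_consent(s: ConsentSignal) -> bool:
        """Schematic check mirroring Article 4(11) GDPR and Case C-673/17."""
        return (s.affirmative_act
                and not s.pre_ticked_default
                and s.specific_purpose
                and s.informed
                and s.freely_given)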

The legitimacy principle introduced above comprises the overarching fairness principle found in Article 5(1)(a), which requires that ‘personal data shall not be processed in a way that is detrimental, discriminatory, unexpected or misleading to the data subject’ (EDPB, 2023, para 9). This fairness principle is corroborated, inter alia, by the requirements that (i) ‘it shall be as easy to withdraw as to give consent’ (empowerment); (ii) communication must be ‘in a concise, transparent, intelligible and easily accessible form, using clear and plain language’ (transparency); and (iii) ‘data protection by design and by default’ must be ensured (legitimacy) (Articles 7(3), 12 and 25 GDPR, respectively).

The Digital Services Act

In the recently adopted Digital Services Act (DSA; Regulation (EU) 2022/2065), the design (and organisation) of the online interface is addressed in Article 25, which provides that

providers of online platforms shall not design, organise or operate their online interfaces in a way that deceives or manipulates the recipients of their service or in a way that otherwise materially distorts or impairs the ability of the recipients of their service to make free and informed decisions (emphasis added).

This prohibition does not provide clarity as to what is or should be considered deception and/or manipulation, aside from clarifying that the design of online interfaces could constitute such an unlawful practice. In contrast to the UCPD, the provision is not limited to practices that harm the economic interests of consumers, as it covers the users’ ability to ‘make free and informed decisions’ in general.

Besides the fact that the provision only applies to online platforms (defined in Article 3(1)(i))—i.e. excluding all other information society services—it is particularly important to notice that the prohibition does not apply to practices covered by the UCPD and/or the GDPR (Article 25(2); recital 67 provides that ‘legitimate practices, for example in advertising, that are in compliance with Union law should not in themselves be regarded as constituting dark patterns’.), which carves a very significant hole in the scope of application. According to Article 25(3), the Commission may issue guidelines on how the prohibition applies to specific practices, including:

  • giving more prominence to certain choices when asking the recipient of the service for a decision;

  • repeatedly requesting that the recipient of the service make a choice where that choice has already been made, especially by presenting pop-ups that interfere with the user experience; and

  • making the procedure for terminating a service more difficult than subscribing to it.

The term ‘dark pattern’ is explicitly mentioned in recital 67 DSA, which focuses on practices that ‘materially distort or impair, either on purpose or in effect, the ability […] to make autonomous and informed choices or decisions’ (emphasis added). In contrast to the UCPD, the trader’s intention may—according only to the wording of the recital—also play a role, but the trader’s intention is usually more difficult to determine than the architecture (method) and the typical reaction of the average user (effect) (Trzaskowski, 2016).

The making of ‘free and informed decisions’ is at the heart of the provision, as it prohibits the distortion/impairing of ‘decisions’. One particularly important use of ‘dark patterns’ is a design that increases engagement, including by exploiting human biases and heuristics (Kahneman, 2011) or manufacturing addictions (e.g. Schüll, 2012; Eyal, 2019; Alter, 2017). Examples include newsfeeds, likes, streaks and the promotion of more engaging content. It is emphasised in recital 67 that the regulated practices can be used to persuade the user to ‘engage in unwanted behaviours or into undesired decisions which have negative consequences for them’, which could include addictive behaviour. It is, however, unclear whether the behaviour manufactured by these design choices qualifies as a ‘decision’ as used in the provision.

It follows further from the recital that the trader should not use ‘exploitative design choices to direct the recipient to actions that benefit the provider of online platforms, but which may not be in the recipients’ interests, presenting choices in a non-neutral manner, such as giving more prominence to certain choices through visual, auditory, or other components, when asking the recipient of the service for a decision’ (emphasis added). It remains unclear whether this protection is intended to go beyond the user’s revealed preferences, as discussed above; i.e. whether the trader is obliged to design the choice architecture to support a ‘not negative outcome’ from the (individual) user’s perspective.

In addition, the recital recognises the use of friction (making certain choices more difficult, cumbersome or time-consuming than others) as discussed above. However, it may be difficult—often impossible—to present choices in a neutral manner as envisaged in the above quote, as there may be an inherent preference (bias) for, for instance, clicking on the lower/right-hand option. As expressed by Don Norman, ‘all artificial things are designed’ (Norman, 1988, p. 4), and the same is true for information, as everything we say is framed in some way.

Given its relationship to the UCPD and the GDPR—which are generally interpreted to have a very wide scope of application (European Commission, 2021, section 4.2.7: ‘The UCPD covers the advertising, sales and contract performance stages, including the agreement to the processing of personal data and the use of personal data for delivering personalised content, and the termination of a contractual relationship’.)—the effect of the DSA in this context depends on the interpretation of those two instruments. It cannot be ruled out that the DSA will affect the interpretation of the UCPD and the GDPR, but that does not seem to have been the intention.

It remains fair to say that recital 67 of the DSA appears overoptimistic, considering the wording of the corresponding Article 25. Generally, recitals are supposed to provide the reasons for the main provisions and as such should not contain normative provisions or political exhortations (European Union, 2015, guideline 10; see, however, CJEU, Case C-428/11, para 53, making reference to recital 18 UCPD). It could appear that the scope of the GDPR and the UCPD were carved out as a result of effective lobbying, thus deflating the political wish list represented in the recital. However, we have argued above for how these two legal instruments can be applied effectively to address ‘dark patterns’.

As a matter of anti-circumvention, Article 13(6) of the equally recent Digital Markets Act (DMA; Regulation (EU) 2022/1925; see also recital 37) prohibits making the exercise of particular rights or choices ‘unduly difficult, including by offering choices to the end-user in a non-neutral manner, or by subverting end users’ or business users’ autonomy, decision-making, or free choice via the structure, design, function or manner of operation of a user interface or a part thereof’.

Conclusion and perspective

As suggested above, the legality of design practices may be assessed by considering intention, method and effect, where intention does not play a role in determining the unfairness of a commercial practice or whether a data subject has consented to the processing of personal data. The taxonomies of dark patterns may be helpful in identifying ‘problematic’ design practices (methods), but further analysis of context and (likely) effect is necessary to determine the lawfulness of a particular practice.

We have used ‘choice architecture’ as a neutral term, recognising that choice architecture—online as well as offline—may be designed in ways that to varying degrees engage or appeal to the user’s capacity for reflection and deliberation, cf. our definition of manipulation. We have also shown how the GDPR and the UCPD may be interpreted to address such practices, and how we can better understand human decision-making.

Dark patterns in law

Using the term ‘dark pattern’ does not bring us closer to drawing the fine line between legitimate persuasion and unlawful manipulation. The two offline examples used above (the design of grocery stores and airports, respectively) can be said to use ‘behaviour modification through architecture’, i.e. ‘tricks’ that ‘make you do things that you didn’t mean to’ (walking through the entire store and entering a store, respectively), thus increasing the likelihood of buying things you did not intend to buy. Calling the practice a ‘dark pattern’ does not bring us closer to determining its lawfulness.

The ‘darkness’ of a practice may refer to the traders’ intentions or to the user’s ability to identify the practice (transparency) or its effect. As many ‘dark patterns’ are clearly visible—even though their implications (effect) for individual users may not be apparent—and intentions are not important for the legal analysis, the unlawfulness of these patterns may not be determined by their darkness.

In 2022, Brignull’s website changed URL from <darkpatterns.org> to <deceptive.design> ‘in an effort to be clearer and more inclusive’. Similarly, in version 2.0 of the guidelines on dark patterns in social media platform interfaces, the European Data Protection Board is using ‘the more inclusive and descriptive term “deceptive design pattern” instead of “dark pattern”’ (EDPB, 2023, footnote 3).

The dark pattern taxonomies illustrate patterns of ‘problematic’ practices, but the existence of ‘patterns’ in commercial conducts, including procedures for obtaining consent, is not important to determine whether a concrete commercial practice (method) is unlawful (see for illustration CJEU, Case C-388/13, para 42). The pattern recognition may, however, be of relevance for determining the typical reaction of the average user in a given case, i.e. determining the effect of a practice.

As long as the taxonomies only describe practices without providing guidance on how to make the distinction between legitimate persuasion and unlawful manipulation, it does not seem like the best idea to create in law a list of such practices. This was done with the blacklist of the UCPD, and it has been argued that it—despite the intention to provide clear prohibitions—is not sufficiently precise because of its use of vague notions, which is likely to pose important problems in practice (Stuyck et al., 2006, pp. 131–132).

Education and awareness

‘Dark patterns’ as a term seems to have had a positive effect in creating a broader (popular) understanding of how technology is used to design our experiences and behaviour, including the role of personal data and profiling that allow for the creation of individualised realities (personalisation), which may not only undermine our agency but are also likely to have negative consequences for social interactions and democracy as such. The case of ‘dark patterns’ underscores the importance of having a vocabulary for such practices with a view to spurring conversations that create useful storytelling, which may lead to behavioural change.

Compliance and enforcement

In addition to raising awareness, such taxonomies may be helpful for both law enforcers and law abiders, in the latter instance as a matter of ensuring compliance under legal uncertainty. However, we argue that, so far, taxonomies have been better at creating an understanding of the problem than at creating clarity as to what is lawful and what is unlawful—hopefully, the CJEU will provide us with further building blocks for a model of what constitutes lawful/unlawful design of choice architecture.

Further research

As historian Melvin Kranzberg pronounced in his first law of technology, ‘Technology is neither good nor bad; nor is it neutral’ (Kranzberg, 1986). It should be obvious by now that choice architecture may be designed for good and for bad, and that a design that works to the benefit for some users may be to the detriment of other users. We must also recognise that ‘dark patterns’ are not new and not unique to the digital environment, let alone online platforms. What is new online is the scale, scope and precision of data-driven predictions, which can also be used to design user experiences.

We should be careful not to use the term ‘dark pattern’ to exclude or overlook the decades of work done and inspired by the likes of Herbert Simon, Daniel Kahneman, Amos Tversky, Robert Cialdini and BJ Fogg. The challenge from a legal perspective is to understand human beings’ ability to make free and informed decisions, how that ability may be influenced by information and architecture and how design may help to preserve the user’s right to self-determination.

In designing law in this context, it is important to focus on the architecture rather than the intent of a trader (see also Helberger et al., 2022, pp. 195–196). One could also consider the extent to which legitimacy and accountability, as in the GDPR, could be introduced or emphasised when business models rely on harvesting the value of consumer irrationality, including consumer inertia. Considering the insights from behavioural sciences, including persuasive technology, principles of self-determination by design (market perspective) and human dignity by design (societal perspective) may be more challenging—which in itself is a reason for trying.