1 Introduction

Artificial intelligence (AI) refers to the replication of human cognitive capabilities by computers. It is becoming increasingly pervasive in our daily lives. In the current intelligence age and cognitive era, AI and machine learning (ML), a sub-field of AI that supports autonomous machines, are being deployed to radically change the nature of business practice and to improve it in ways that promote sustainable development. AI can automatically learn and acquire knowledge from big data and use this knowledge to help humans achieve their practical and technical goals.Footnote 1

AI is a double-edged sword. There are benefits to applying AI, such as the advantages brought by big data and the new value created for business through authenticity, augmentation and automation. At the same time, organisations and individuals will face the challenge of ‘too much data and not sure what to do with it’.Footnote 2 In a corporate setting, AI can be applied to advance the effectiveness and efficiency of corporate social responsibility (CSR) programmes.Footnote 3 Companies and their stakeholders will enjoy the advantages of AI, which brings benefits in terms of economic value and in exploring solutions that strengthen companies’ resilience in responding to sustainability threats and social challenges. Nevertheless, it is equally important to investigate the potential hazards brought by AI and the concerns raised by this powerful technology, so that its application can be aligned with human values and beliefs.

Although AI applications for sustainability are at an early stage, this trend is already starting to impact corporate sustainability, for instance through the application of AI to achieve the Sustainable Development Goals (SDGs), such as reducing CO2 emissions, or ML applications to improve horticultural products.Footnote 4 The application of AI for the social and environmental good includes formal and informal mechanisms to raise awareness of CSR practice, standardisation and implementation. The practice of AI needs to be responsibly and ethically reinforced by regulatory insight to enable its sustainable development.Footnote 5 Failure to do so could open gaps between AI’s application and accountable, ethical standards.Footnote 6 AI also generates adverse effects, such as infringing on privacy and introducing algorithmic bias. Companies should analyse the data behind their algorithms so that predictions are fair and ethical and similarly situated parties are offered similar options. We propose using the regulatory framework to promote more socially responsible AI by monitoring and mitigating the risks associated with it. Thus, AI could have a broader impact on many sectors, as already demonstrated by its effect in promoting the SDGs.Footnote 7

In a perfect world, companies and AI users want AI systems that are transparent, explainable, ethical, adequately trained with appropriate data, and free of bias. In business decisions, these responsibilities translate into questions about AI and business ethics, strategic management, stakeholder policies and CSR. Effective corporate governance is based on incorporating the principles of stakeholder communication, participation and scrutiny into decision-making, and the application of AI may consolidate these principles. The integration of AI and corporate decisions comes in two branches: AI for sustainability and the sustainability of AI. As for the first branch, companies applying AI should fully integrate ethical norms and accountable AI. The lack of a legal system regarding the application and design of AI may lead to the rise of private standardisation, as voluntary adjustment enables the steady establishment of the future legal framework.Footnote 8 As for the second branch, AI and big data will also help companies adopt ideal corporate governance principles, such as accountability, transparency and appropriate cooperation with stakeholders. Moreover, AI will help establish management systems that effectively mitigate CSR risks so as to achieve economic and social benefits based on big data.

Existing literature reveals a lack of consensus about whether and how AI will change the current practices or even the foundations of corporations, ranging from approaches that envisage a new paradigm of autonomous corporations to others maintaining that no relevant change will happen. Some authors claim that AI will reduce the need for human management and the associated costs, improving at the same time the accuracy and efficiency of corporate actions.Footnote 9 Some even predict that boards will become ‘virtual networks of people’ or will be completely replaced by AI-based solutions under the influence of digitalisation.Footnote 10 However, others remain sceptical about the capacity of technologies to alter fundamental normative issues and reduce the need for human management.Footnote 11 In their view, such predictions are excessively optimistic about AI’s capabilities and rest on a simplistic conception of the board’s functions.Footnote 12 We believe that AI can change the corporate law and CSR framework, moving towards a more sustainable model of corporate governance.

The development of AI has raised questions regarding AI users’ moral and ethical responsibilities and the contributions or hazards brought by AI. This article aims to examine AI’s role in promoting more socially responsible companies, and the associated legal challenges, by exploring the interplay between AI, CSR and the regulatory framework and by focusing on the potential benefits that AI could bring to the boardroom in terms of ethical and socially responsible AI.Footnote 13 It enriches the ongoing debate on embracing technology to drive CSR and effective corporate governance. The following interrelated research questions will be investigated: Will AI imply a high-tech boost to sustainable decisions in companies? What are the risks of AI-advanced sustainable decisions? Will a risk-based regulatory framework, which includes approaches from hard law, soft law and voluntary guidelines that enable the use of AI for the social and environmental good, be able to achieve accountable AI and promote the common good?

The article proceeds as follows. Section 2 provides a comprehensive review of the existing doctrinal explanations of CSR, corporate law and more sustainable decisions. Section 3 offers a critical analysis of AI and corporate decisions. Section 4 situates AI within the corporate environment and builds links between AI and sustainable decisions. Section 5 critically evaluates the regulation of AI for the common good, towards a harmonised and risk-based approach. Finally, there will be some concluding remarks.

2 CSR, Corporate Law and More Sustainable Decisions

2.1 CSR and Corporate Law

CSR encompasses sustainability development, corporate governance development and corporate objectives, stakeholder protection and socially responsible investments. It is a concept that covers many initiatives and is based on a commitment to maintain high standards in every aspect of a company’s dealings. The term involves the process by which companies identify and neutralise the harmful effects their corporate actions and operations may have on society.Footnote 14 The popularity of CSR has been demonstrated through the connection between ‘good behaviour towards stakeholders to whom no legal duty is owed and fulfilment of the shareholder primacy obligation required in corporate law and the role the courts have played in guiding the way’.Footnote 15 CSR is the obligation of directors to act in ways that benefit the organisation’s interests and those of society as a whole. Social, environmental and human rights issues are core elements of sustainable corporate operations.Footnote 16 CSR has been recognised and promoted through company law approaches and corporate governance mechanisms, mainly executed through information disclosure and directors’ duties, as a vehicle for incorporating social and environmental concerns into the business decision-making process.Footnote 17

CSR is an umbrella term for various terms such as sustainability. Academic, practitioner and governmental institutions have not agreed on a coherent account of its nature and scope. There is no firm consensus on a definition of CSR because the expectations and demands of various stakeholders in corporate practices are constantly adjusting to rapid changes in the business world.Footnote 18 Since the 1990s, CSR has become a broader subject that is widely discussed and researched.Footnote 19 We have investigated the definitions from different sources and contextualised a few core characteristics to support our discussion of a regulatory framework for more sustainable AI. First, the goal of CSR is to balance the interests of stakeholders and shareholders beyond a narrow focus on profit-making. Second, as for the scope of the term, CSR aims to address a wide range of challenges, primarily environmental and human rights concerns, so as to improve quality of life and community harmonisation, working towards a more sustainable society at large through the contribution and performance of corporations. Lastly, CSR has been shaped along a trajectory of becoming a commitment or obligation to maintain the legitimacy of corporate actions and address sustainability challenges.

In order to address different sustainability challenges, corporate law should make a substantial contribution so that the mandatory consideration of sustainable decisions will go beyond mere incentives. First, corporate decisions are made under mandatory legal rules embodied in external laws or regulations that protect various stakeholders, such as employment law, consumer protection law, environmental law or insolvency law. The duties to comply with these laws are inseparable from corporate law and corporate governance. As a result, directors will find ‘their decision tree considerably trimmed and their discretion decidedly diminished by mandatory legal rules enacted in the name of protecting stakeholders’.Footnote 20 Second, existing legislative approaches in company law allow for the protection of the vulnerable. In order to mitigate, ameliorate and compensate for vulnerability in the domain of corporate law, assets should be provided in the form of benefits or coping mechanisms.Footnote 21 The ‘duty to promote the success of the company’ embodied in Section 172 of the UK Companies Act 2006, whereby directors are required to consider the long-term interests of the corporation and also have regard to suppliers, employees and communities, is an example of a legally mandated coping mechanism. Third, it is often difficult to establish a direct causal link between corporate misconduct and social, environmental or human rights damage, and it is usually almost impossible to identify a single perpetrator. It is therefore necessary to rationalise the need to protect vulnerable parties with the highest dependency in a preventive as well as a compensatory manner. This preventive approach, starting from an internal influence on corporate behaviours and boards’ decisions, also focuses board members’ attention on a more active involvement in ethical initiatives before irreversible damage is done.

Looking at CSR development, CSR 1.0 focuses on reducing the negative impact on society and on philanthropic responsibility. In order to address some of the critiques of CSR as incremental and image-driven, CSR 2.0 takes a more strategic and collaborative approach based on five principles, i.e., creativity, scalability, responsiveness, glocality and circularity.Footnote 22 AI will assist companies in identifying a new paradigm or pattern of thinking for formulating CSR policies and implementing CSR, particularly in terms of the emphasis on creativity. In the sense of sustainable entrepreneurship, CSR 3.0 focuses on networked value.Footnote 23 It emphasises risk management and innovation and offers solutions to address sustainability challenges through public and private partnerships with corporations, stakeholders, NGOs and governments.Footnote 24 CSR 4.0 maps onto Globalization 4.0, adopting an intensely transformed systems approach to creating value supported by innovation and a resilient economy.Footnote 25 This development facilitates new dimensions and new weights for evaluating CSR. It reflects the trajectory of inclusiveness and stakeholder involvement to promote sustainability. At the same time, technologies such as AI, including the application of big data, ML and robotics, will play a key role in the long-term enhancement of society and the achievement of the common good. This development aligns with companies’ business strategy by optimising CSR strategy and mitigating CSR risks. Advanced AI that simulates a human brain will facilitate technological innovations that empower real-time big data collection and data-informed reporting, which are new means of stakeholder communication to help with innovative CSR.

2.2 CSR and Sustainable Decisions

CSR is a key element to promote sustainable development.Footnote 26 It helps to build trust, raise awareness and encourage social change.Footnote 27 The concept of sustainability is very broad and encompasses multiple facets.Footnote 28 It has been defined as ‘the result of the growing awareness of the global links between mounting environmental problems, socio-economic issues to do with poverty and inequality and concerns about a healthy future for humanity’.Footnote 29 This term comprises three dimensions—economic, environmental and social—that are complementary and interlinked.Footnote 30

The different dimensions of sustainability were first mentioned in the Brundtland Report in 1987. This report referred to sustainable development as meeting ‘the needs of the present without compromising the ability of future generations to meet their own needs’.Footnote 31 On this basis, the United Nations developed Agenda 2030 and a set of 17 SDGs, which integrate and balance these objectives.Footnote 32 It is worth noting that this notion is not static; it needs to be constantly reconsidered to refine its content and adapt it to new social and environmental challenges.Footnote 33 Otherwise, it would become ‘an all-encompassing concept, if not a mantra’Footnote 34 that would facilitate non-sustainable production and consumption patterns.

In 2010, the European Union formulated the Europe 2020 strategy for smart, sustainable and inclusive growth,Footnote 35 advocating for an economy based on sustainability, knowledge and innovation. The relevance of this strategy was confirmed in 2019 when the European Commission presented the European Green Deal as an opportunity to improve the economic model to attain climate neutrality by 2050.Footnote 36 The Commission approved the Sustainable Europe Investment Plan (SEIP) to achieve this goal, stressing that digital technologies are essential to creating smart, innovative and tailored solutions to tackle climate-related concerns.Footnote 37

When taking a corporate decision, directors have to strike a balance between what is good for society and what is beneficial for the company and its shareholders. An excellent example of a legislative approach promoting CSR is Section 172 of the UK Companies Act 2006. It gives legitimacyFootnote 38 to directors to consider and include the interests of non-shareholder stakeholders when they fulfil their duties. The Section effectively indicates that directors may consider and act on the legitimate interests of stakeholders other than shareholders to the extent that these interests are relevant to the company.Footnote 39 It is in line with the nature of business judgement rulesFootnote 40 and the subjective nature of directors’ fiduciary duties.Footnote 41 This approach integrates social and environmental concerns in decision making, which leads to an internalisation of externalities.Footnote 42

This approach confirms the power possessed by stakeholders, including the legitimacy of stakeholder relationships, the power to influence companies, and the urgency of stakeholders’ claims on the firm.Footnote 43 Apart from attempts in statutes, the US Supreme Court decision in Burwell v Hobby LobbyFootnote 44 claimed that ‘modern corporate law does not require for-profit corporations to pursue profit at the expense of everything else, and many do not do so’.Footnote 45 In the same vein, the Supreme Court of Canada stated that

in determining whether they are acting with a view to the best interests of the corporation it may be legitimate … for the board of directors to consider, inter alia, the interests of shareholders, employees, suppliers, creditors, consumers, governments and the environment.Footnote 46

Studies have shown that integrating sustainability aspects in the organisation is beneficial in terms of reputation,Footnote 47 productivityFootnote 48 and access to financial resources.Footnote 49 Companies’ efforts in this direction will positively impact the value of the brand and corporate financial performance. These companies will be able to obtain better resources, better employees and better opportunities.Footnote 50 Many corporations do not perceive this path as a burden but rather as a prospect to enhance their long-term interests and foster relationships with suppliers, employees and communities.

However, companies must develop coherent and coordinated CSR strategies to maximise the positive impact on the environment and society.Footnote 51 When building the case for CSR, directors should choose the objectives to be pursued by taking into account—among many other factors—the peculiarities of the sector, the demands of consumers and other social organisations, and the potential costs and benefits of the different solutions. They might also look for innovative solutions to effectively align the interests of the company and stakeholders. This requires vast amounts of up-to-date, high-quality and reliable data. At the same time, directors should have the capacity and knowledge to analyse that information and decide, among the different options, which course of action is in the company’s best interests.

The problem is that most directors do not have the necessary knowledge and skills in terms of sustainability to make a well-informed and diligent decision.Footnote 52 This ‘knowledge gap’ prevents them from posing the right questions and, consequently, from obtaining the most accurate information—in retrospective and prospective terms—to deal with the complex trade-offs and dilemmas inherent in this type of decision.Footnote 53 If directors do not have sufficient sustainability credentials, it is reasonable to expect them to rely on a person or group of persons who have the proper knowledge and expertise. Otherwise, they risk violating the duty of care and opening themselves to litigation.Footnote 54

The challenge will be to deploy suitable mechanisms to constantly gather and process massive volumes of data (trends in the market, consumers’ preferences, past experiences in promoting sustainability, etc.), identify patterns of conduct and make predictions to support the board’s strategy. In the last decade, the rapid development of digital technologies has shown that AI solutions are particularly apt for performing this task. Their presence in the boardroom is becoming more and more common. The following section will discuss the possibilities and challenges of using this technology to support companies in the decision-making process.

3 AI and Corporate Decisions

In the current intelligence age and cognitive era, AI has enormous potential to change the nature of corporate governance practices and radically improve CSR practice by suggesting options, solving complex problems in an informed manner, or taking proper actions to achieve specific corporate goals. It encompasses a large variety of subfields, including ML, which can learn and acquire experience from data and use this knowledge to help boards of directors achieve their corporate goals. With AI in corporate governance, board decisions can be based on analysis of corporate patterns and industry trends rather than on gut feelings.

In a corporate setting, AI can be applied to enhance the effectiveness and efficiency of CSR programmes. The different roles of AI in this environment open up a world of possibilities for companies and their stakeholders in terms of economic value, enhancement of companies’ long-term interests, or improvement in response to social, environmental and human rights challenges. At the same time, its use also poses several risks and challenges that need to be addressed. In particular, the success of applying AI and ML to CSR-oriented challenges relies on compliance with hard laws, soft laws, guidance and standardisation.

Before analysing these two sides of the same coin, it is necessary first to approach the concept of AI and outline the requirements for its application. Since data is the fuel of AI systems and determines their outcomes, we will claim that the correct performance of this technology is subject to the existence of a high-quality data architecture. This precondition contributes to smart and well-informed decisions and mitigates some of the hazards that might arise when using digital technologies in the boardroom.

3.1 Definition of AI

The idea of using computer-based artificial intelligence to replicate human behaviour was first proposed by Alan Turing in 1950, when he developed the so-called ‘Turing test’ to answer the following question: Can a computer communicate well enough to persuade a human that it, too, is human?Footnote 55 Shortly after, John McCarthy introduced the term ‘artificial intelligence’ in 1956 to explore how machines could think intelligently.Footnote 56 It was defined as ‘the science and engineering of making intelligent machines, especially intelligent computer programs’.Footnote 57 Since then, countless definitions based on the notion of intelligence have been proposed, but there is no clear consensus about it.Footnote 58

In general terms, the Organisation for Economic Co-operation and Development (OECD) defined artificial intelligence as ‘a general-purpose technology that has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges’.Footnote 59 More specifically, the European Commission stated that it refers to ‘systems that display intelligent behaviour by analysing their environment and taking actions—with some degree of autonomy—to achieve specific goals’.Footnote 60 It also clarified that

AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).

From a functional perspective, the OECD’s AI Experts Group (AIGO) describes AI as

a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. It uses machine and/or human-based inputs to perceive real and/or virtual environments; abstract such perceptions into models (in an automated manner e.g. with ML or manually); and use model inference to formulate options for information or action.Footnote 61

The same position has been adopted by Eliasy and Przychodzen, who refer to this technology as an ‘algorithm that is capable of learning and thinking’.Footnote 62 Understanding learning as ‘the ability to update coefficients and parameters of an algorithm to enable it to recognise the pattern between input and output data’,Footnote 63 these authors refer to ML as the most recent and extensive development in the AI field. By actively learning from data and past experiences, this technology can easily identify patterns and generate predictions, thus efficiently contributing to decision-making processes.Footnote 64 According to Turner, the great advantage of such AI is that it does not approach matters in the same way that humans do. This ability not just to think, but to think differently from us, is potentially one of the most beneficial features of AI.Footnote 65

The difficulty of providing a standard and universally accepted definition of AI is inherent in its dynamic nature. The approach to this technology is constantly evolving, and so are the different settings where it can be applied to generate value. Indeed, new technologies emerge over time while others that were initially deemed ‘intelligent’ become normalised and lose this status.Footnote 66 To some extent, it would not even be desirable to arrive at a rigid definition of AI, since this vagueness is one of the factors that have contributed to its growth and rapid advance.Footnote 67 In the corporate governance arena, AI can be understood as the use of computers to assist, support, collaborate with or even duplicate directors’ behaviour so that the company can function competently, successfully and with foresight in its business environment.Footnote 68 What is clear, though, is that AI applications need to be fed with large volumes of data (big data) to perform the stipulated functions or achieve specific goals.

3.2 Big Data as an Enabler of AI

High-quality data architecture is essential for AI. The possibility of harnessing the potential of AI—especially in the case of ML—crucially depends on the availability of high-quality big data.Footnote 69 When designing such a system, the solution to an optimisation problem is not coded in advance but is derived from data analysis. Therefore, ‘instead of deriving answers from rules and data, rules are developed from data and answers’.Footnote 70 Once the data extracted from various sources has been stored, it will be analysed using algorithms to find correlations and construct an optimal predictive model.Footnote 71
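
To illustrate this inversion, consider the following minimal Python sketch, in which the decision rule is not written by a programmer but derived by a model from example data and known answers; the feature names, figures and the rule the model learns are purely hypothetical.

```python
# A minimal sketch of 'rules developed from data and answers': instead of
# hand-coding a rule, we supply examples (data) and outcomes (answers) and
# let the model derive the rule itself. All names and values are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [CO2 intensity (tonnes per $m revenue), employee turnover (%)]
X = [[10, 5], [80, 25], [15, 8], [90, 30], [20, 6], [70, 28]]
# Answer for each row: 1 = historically flagged as a CSR risk, 0 = not
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The decision rule below is derived from the data, not coded in advance
print(export_text(model, feature_names=["co2_intensity", "turnover_pct"]))
print(model.predict([[75, 20]]))  # prediction for a new, unseen company
```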

In this context, big data can be defined as ‘high-volume, high-velocity and/or high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation’.Footnote 72

Data is crucial to building a model of corporate governance that relies on AI. In an environment where an increasing number of companies’ decisions are based on data, ensuring that the information used is adequate should be a priority.Footnote 73 Poor data quality is ‘enemy number one’ to the use of ML.Footnote 74 The reason is that this technology uses historical data to develop predictive models and new data to make future decisions. In order to train the programme properly, historical data must be correct and meet high-quality standards.Footnote 75 Companies must entrust the collection, storage and preparation of data to a team or individual with deep knowledge of the topic, as well as obtain an independent assessment of the quality of the programme to detect and correct any possible data inconsistency.
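
As a minimal illustration of such quality screening, the following Python sketch flags missing values, duplicate records and implausible entries before a dataset is used for training; the column names and validity ranges are hypothetical assumptions, not requirements drawn from any standard.

```python
# A minimal sketch of basic data-quality checks prior to model training.
# Column names and plausibility thresholds are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "supplier_id": [1, 2, 2, 4, 5],
    "co2_tonnes": [120.5, 98.0, 98.0, -7.0, None],  # one negative, one missing
    "audit_year": [2020, 2021, 2021, 1899, 2022],   # one implausible year
})

report = {
    "missing_values": df.isna().sum().to_dict(),              # completeness
    "duplicate_rows": int(df.duplicated().sum()),              # uniqueness
    "negative_emissions": int((df["co2_tonnes"] < 0).sum()),   # validity
    "implausible_years": int((df["audit_year"] < 1990).sum()),
}
print(report)  # flag inconsistencies before the data feeds the model
```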

The ethical and legitimate application of data can provide companies with valuable evidence and options that enable them to make more informed decisions and obtain a competitive ‘data advantage’ over rivals.Footnote 76 The capacity to gather a vast volume of varied and reliable data is critical to success in the market. Big data analytics is often assumed to be the preserve of large companies because of the cost of collecting and storing vast quantities of data. However, SMEs can also use this mechanism to understand their customers better and improve revenues.Footnote 77

In terms of corporate decision-making, the primary duty of the board of directors is to make strategic decisions that shape the company’s general direction.Footnote 78 The increasing availability of big data, together with the technologies to collect and process the data in real time, can revolutionise the senior management of organisations.Footnote 79 The information extracted from data can be transformed into knowledge that will help individuals make the correct move at the right time.Footnote 80 In a world where millions of data points are processed every minute, corporate decisions should respond to evidence rather than intuition.Footnote 81

Big data provides the opportunity to make better decisions, i.e., decisions that are well informed and based on trustworthy information, and to bridge possible gaps in knowledge of the market or other factors that are decisive for the success of the action. In a scenario of uncertainty, big data can be used to discard lines of action that a priori seemed feasible and/or to avoid incorrect or unnecessarily risky decisions. At the same time, as Randy Bean said, ‘the ability to make informed decisions based on the latest up-to-the-moment information is rapidly becoming the mainstream norm’.Footnote 82 Therefore, directors should get into the habit of asking themselves what the data says and questioning the origin, quality and reliability of the data they are using. By putting together the information and the possible lines of action, the company will reach the optimal solution in each case.

Companies that adopt data-driven decisions are in a solid position to enhance their visibility and reach a high level of corporate performance.Footnote 83 It is estimated that companies that use data-driven decision-making are, on average, 6% more profitable and 5% more productive than their competitors.Footnote 84 By relying on data, metrics and statistics, directors can gather all the relevant information to align strategic business decisions with their goals and objectives. The conflicts of interest that these decisions usually involve are thus kept to a minimum. To generate value from data, decision-makers should have experience in interpreting the outcomes and their implications for the company and other stakeholders, which might require bringing together multiple actors with different skills.Footnote 85

3.3 Using AI in the Boardroom

The potential of AI to enhance decision-making in the boardroom seems almost limitless. When properly fed with adequate, high-quality big data, AI can help board members unveil hidden insights and valuable knowledge, improving the efficacy and quality of decision-making processes.Footnote 86 It contributes to anticipating future needs and risks, predicting better solutions, making more efficient use of resources and increasing profits, as well as evaluating companies’ performance and ensuring continuous improvement.Footnote 87 However, AI can also take autonomous decisions, become a member of the board, or even replace board directors.Footnote 88 The range of functions that AI can effectively perform will depend on the level of maturity of these disruptive technologies.

Generally, AI can assume three roles in corporate management: assisted AI, advisory or augmented AI, and autonomous AI,Footnote 89 depending on the specific level of independence. On this basis, Hilb suggested five scenarios of artificial corporate governance, i.e., assisted, augmented, amplified, autonomous and autopoietic intelligence.Footnote 90 Given the utopian scenario of self-driving corporations and the legal problems it would involve,Footnote 91 especially in terms of liability for their decisions, we will discuss the three roles previously mentioned from the point of view of collaboration between humans and machines. The current development of ML requires human intervention to provide input information and interpret the outputs.

AI can be an assistant. At the lowest level, AI may perform simple administrative tasks with very little or no autonomy, so the decision rights belong exclusively to human beings. This role may allow directors to delegate time-consuming tasks such as analysing and monitoring information flows, enabling them to concentrate on strategic business decisions and operational management. This approach focuses on the availability, selection and analysis of data, which has become the most valuable asset for companies.Footnote 92 The supply of relevant data samples in real time will result in better knowledge, better predictions and ultimately better decisions.

AI can be an advisor. In this role, it will support decision-making on more complex issues by—for instance—asking and answering the right questions, identifying opportunities, detecting irregularities and mitigating risks. It may recommend particular courses of action by taking into account the outcome information. Still, the board will make the final decision, or at least share responsibility for it. AI can help directors navigate options, correct error-prone humans, and comment on proposed strategies at senior-level meetings. This model is already working on the board of the software company Salesforce, which relies on a robot called Einstein to improve corporate plans and decision-making in general.Footnote 93

The term ‘augmentation’ refers to a combination of AI and human intelligence. Instead of replacing human intelligence, AI improves it by providing information or advice that would otherwise be difficult to obtain.Footnote 94 In particular, AI can support corporations and boards in situations of uncertainty, complexity or equivocality. When there is a lack of information about alternative decisions and/or their consequences, AI can overcome this uncertainty with predictive analysis, for example, by generating new ideas through probability and data-driven statistical inference approaches.Footnote 95 Similarly, it can play an essential role in complex situations that require big data processing at a speed ‘beyond the cognitive capabilities of even the smartest human decision-makers’.Footnote 96 The use of AI would tackle problems such as ‘choice overload’Footnote 97 or ‘analysis paralysis’,Footnote 98 where the excess of available information, with its large number of potential outcomes and inherent risks, is overwhelming and directors cannot give a swift response.Footnote 99 Finally, in situations of equivocality, where there are different interpretations of a decision domain due to a conflict of various interests, cooperation between AI and the board members would be advisable to satisfy the needs and objectives of multiple parties.Footnote 100
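
As a simple illustration of the data-driven statistical inference mentioned above, the following Python sketch uses Monte Carlo simulation to compare two uncertain strategies by expected value and downside risk; the distributions and figures are hypothetical assumptions introduced for illustration only.

```python
# A minimal sketch of probabilistic decision support under uncertainty:
# simulate the uncertain net benefit of two strategies and summarise the
# trade-off between expected value and downside risk. Figures are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of simulated scenarios

# Simulated net benefit of two candidate strategies (illustrative units)
strategy_a = rng.normal(loc=5.0, scale=2.0, size=N)  # steady, low variance
strategy_b = rng.normal(loc=6.0, scale=6.0, size=N)  # higher mean, riskier

for name, sims in [("A", strategy_a), ("B", strategy_b)]:
    print(f"Strategy {name}: mean={sims.mean():.2f}, "
          f"P(loss)={(sims < 0).mean():.1%}")
# The board can then weigh B's higher expected value against its loss risk.
```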

The advantages brought by AI in analysing information quickly and suggesting lines of action for directors will be crucial for them to make smart decisions. In any case, directors have a duty to deploy the most appropriate AI system, ensure the accuracy and trustworthiness of the data and monitor the performance of its application. In order to perform their oversight duty, directors should be reasonably familiar with data governance, basic algorithmic logic and the roles that AI could play in the boardroom. Armour and Eidenmüller have suggested that the inclusion of AI in the boardroom involves significant changes in the skills and training of managers.Footnote 101 Relevant technical and analytic expertise will soon become essential for board members. It is becoming increasingly decisive for boards to have the expertise to engage in adequate oversight of data governance,Footnote 102 starting with ‘self-driving subsidiaries’ which are more amenable to automated management when AI makes better decisions than humans.Footnote 103 Directors need to be more responsive to changes in technology, and programmers, employees and directors will need to have the technical expertise and become more familiar with the technology in terms of developing and applying AI in the boardroom. Along the trajectory towards establishing an enforceable regulatory framework for AI, training costs seem legitimate, necessary and reasonable, and it would seem to be good practice to build capability in this area.

AI can be a decision-maker. At the highest level, AI will own decision rights as a result of human trust and delegation. It will proactively and autonomously evaluate options and make business judgements without human input by analysing information from the actual business environment and perceiving patterns in data.Footnote 104 Although algorithms can learn on their own, humans still have to decide how they are deployed and integrated into the decision-making process. Thus, it can be claimed that ML algorithms are autonomous ‘only in the sense that they can run continuously and have the potential to translate their outputs automatically into regulatory actions’.Footnote 105 How this role will develop in the future is still uncertain. It is plausible that in the medium term AI will assume more managerial tasks and make routine decisions on behalf of the company.Footnote 106 In any case, directors should have the final word in validating, or not, the machine’s decision.

Ideally, algorithms would be capable of managing corporations and even substituting for human directors in the boardroom. The first step in this direction was the creation, in 2014, of VITAL (Validating Investment Tool for Advancing Life Science), an ML programme capable of making investment recommendations. Deep Knowledge Ventures, a venture capital firm from Hong Kong, appointed VITAL as a member of the board with observer statusFootnote 107 due to its ability to ‘automate due diligence and use historical data-sets to uncover trends that are not immediately obvious to humans surveying top-line data’.Footnote 108 Although it is an impressive advance, the truth is that its role, far from being autonomous, is limited to providing advice and supporting human directors’ decisions. There is still much that is not understood about human brains and how to replicate their internal connections. Until then, the concept of general human-level intelligence (also known as general AI) remains in the realm of science fiction.Footnote 109 Moreover, it is worth noting that current company law legislation may need to be amended in order to enable the autonomy of AI in the boardroom. For example, Section 155(1) of the UK Companies Act 2006 provides that a company must have at least one director who is a natural person, which prevents AI from being appointed as the only board member.

3.4 Challenges of Using AI in the Boardroom: Risks, Uncertainty and Lack of Regulation

As with any new technology, the deployment of AI entails both opportunities and risks.Footnote 110 In the previous section, we discussed how AI systems might assist directors and enhance the decision-making process. We will now focus on the risks and negative consequences that their use can entail for the company and even for society. As the speed of business transformation and data-driven decisions accelerates, this new reality fuels anxieties and ethical concerns. Boards and executives struggle to understand how these technologies will impact their companies and what the collateral effects on third parties will be. The uncertainty and complexity associated with AI, and the lack of skills and expertise in the field, have prevented many companies from embracing digital technologies. Bearing in mind that the use of these technologies is in companies’ best interest, the board might be falling short of its duty of care if, considering a case’s specific circumstances, it would be reasonable to implement them to improve directors’ actions.

It is essential to identify the principal risks that AI could bring in order to attain the right balance and avoid disproportionate harm, i.e., harm that cannot be compensated for by the beneficial outcomes. The board of directors should strive for that balance when deciding whether to adopt a new technology and how it will be used in the boardroom. These risks will vary depending on the specific technology, its level of maturity and the company’s features. Despite this dynamic nature, it is possible to contextualise a few common hazards that may jeopardise the effective deployment of new technologies in this area.

First, one of the most frequently recognised hazards is data bias. Although data-driven decisions are expected to be objective and to overcome human subjectivity, the reality shows that this assumption is a myth.Footnote 111 On the contrary, data reflects existing social and cultural biases and can even perpetuate them, leading to discriminatory or unethical decisions. The reason is that data-driven technologies, such as AI, are inherently past-oriented and can reproduce and reinforce the patterns of inequality and discrimination that exist in societies.Footnote 112 If data samples are not sufficiently representative of the different populations and social groups, the system will be flawed from the outset and the results will necessarily be biased. On the other hand, data can also mirror the preconceptions and biases of its designers, who might want to favour their clients’ interests or steer the decision in a particular direction. Accordingly, bias will appear when the decision-maker takes irrelevant considerations into account or fails to consider relevant ones.Footnote 113 As ML systems become more powerful, capable of incorporating new algorithms and modifying the features of the programme autonomously, new biases could also be created. Practice has confirmed that AI applications can develop prejudices against women,Footnote 114 black peopleFootnote 115 and minority communities.Footnote 116 A good example of this dangerous trend is Microsoft’s AI bot ‘Tay’, taken offline in 2016 after developing racist behaviour by learning from Twitter users’ statements.Footnote 117

The primary step to correct this deficiency is to develop standards to detect prejudices and eradicate them. This needs to start at the data collection stage of AI’s operation, ensuring that the data provides a reliable and representative picture of the relevant environment. Currently, there is no regulation governing the data used to train algorithms, and its owners can easily manipulate it. It would be advisable to create shared (and regulated) databases that are not owned by a single entity but can be used by all. Furthermore, it could be necessary to adopt positive actions to counteract existing biases so that designers take into account the interests of minorities and other disadvantaged groups. Proper monitoring of the AI system’s activity should be carried out to avoid the potential creation of new biases during the learning process, on the one hand, and to assess the objectivity of the outcomes, on the other.
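
One simple form of such outcome monitoring is sketched below in Python: the favourable-outcome rates of two groups are compared and flagged under the ‘four-fifths’ disparate-impact heuristic used in some fairness audits. The data, group labels and the 0.8 threshold are illustrative assumptions rather than any mandated standard.

```python
# A minimal sketch of outcome monitoring for bias: compare a model's
# favourable-outcome rates across groups and flag a large gap using the
# 'four-fifths' disparate-impact heuristic. All data are hypothetical.
import pandas as pd

outcomes = pd.DataFrame({
    "group":    ["A"] * 10 + ["B"] * 10,
    "selected": [1, 1, 1, 1, 1, 1, 1, 1, 0, 0,   # group A: 80% favourable
                 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],  # group B: 40% favourable
})

rates = outcomes.groupby("group")["selected"].mean()
ratio = rates.min() / rates.max()   # disparate-impact ratio
print(rates.to_dict(), round(ratio, 2))
if ratio < 0.8:                      # four-fifths rule of thumb
    print("Warning: potential bias - review training data and features")
```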

Second, a significant deficiency that compromises the correct performance of AI systems is the lack of transparency in the decision-making process. Smart algorithms can analyse variables and relationships extracted from big data, but the way this occurs is not always clear to users or even to the programme’s developers. The difficulty of explaining why a specific decision or solution has been adopted may conflict with the duty to act on an informed basis and to motivate board decisions,Footnote 118 resulting in legal disputes inside the company or with third parties. In addition, the opacity of these systems (so-called ‘black-box’ systems) makes the detection of bias and errors extremely difficult.

When articulating transparent AI, it is vital to consider two dimensions, i.e., the transparency of both the outcome and the process. The former scenario involves ‘the ability to know how and why a model performed the way it did in a specific context and therefore to understand the rationale behind its decision or behaviour’.Footnote 119 This means that the board should be able to communicate the outcome understandably so that diverse stakeholders can understand its content and implications. In the latter case, the board should also justify the design and implementation of a specific process that has led to a particular decision, demonstrating that it is safe, non-discriminatory and trustworthy. In this respect, it would be advisable to follow a predefined catalogue of good practices and implement auditable measures, ensuring that its activities are constantly monitored.

Thus, the explicability of AI is the ability to make explicit the meaning of the algorithmic model’s result.Footnote 120 Given that these applications are not perfect, directors should put special effort into understanding their decisions in order to assess how to incorporate them into the board’s judgement. In this regard, Robbins has claimed that

getting algorithms to provide us with explanations about how a particular decision was made allows us to keep ‘meaningful human control’ over the decision. That is, knowing why a particular decision was reached by an algorithm allows us to accept, disregard, challenge, or overrule that decision.Footnote 121

Likewise, Floridi et al. favour developing a framework that allows individuals to obtain a factual, direct and clear explanation of the decision-making process, especially in the event of unwanted consequences.Footnote 122 This process, however, requires deep knowledge of computer science, since it focuses on making sense of the documentation and on reviewing and validating the details of the underlying logic.Footnote 123
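
By way of illustration, one widely used, model-agnostic explicability technique is permutation importance, which estimates how much a model relies on each input by shuffling that input and measuring the drop in accuracy. The Python sketch below uses hypothetical feature names and a synthetic outcome; it is one possible technique, not the framework the cited authors propose.

```python
# A minimal sketch of permutation importance: shuffle each input feature and
# measure how much model accuracy degrades, as a rough account of which
# factors drove the outcome. Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # e.g. emissions, turnover, spend
y = (X[:, 0] > 0).astype(int)                 # outcome depends only on feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["emissions", "turnover", "spend"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")  # feature 0 should dominate the scores
```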

Third, monopolisation of data and expertise is another hazard for AI’s application. The implementation of AI solutions usually demands significant financial investments that are not affordable for every company, as it involves creating an adequate data infrastructure and the acquisition, or even development, of specific technology. The imbalances in market power concerning the access and use of data have been identified by the European Commission as one of the main issues that prevent the European Union from realising its potential in the data economy. As the Commission highlights in its Communication ‘A European Strategy for Data’, ‘a case in point comes from large online platforms, where a small number of players may accumulate large amounts of data, gathering important insights and competitive advantages from the richness and variety of the data they hold’.Footnote 124 This market power might allow large players to ‘set the rules on the platform and unilaterally impose conditions for access and use of data or, indeed, allow leveraging of such “power advantage” when developing new services and expanding towards new markets’.Footnote 125

On the other hand, the relevant knowledge extracted from data leads to competition to create powerful and innovative AI solutions. The problem is that the vast demand for AI-related jobs clashes with a shortage of highly qualified professionals. According to McKinsey, big tech companies—such as Alibaba, Amazon and Google—are securing qualified talent to develop their AI strategies by buying start-ups and hiring many of the available experts in the market.Footnote 126 However, this important advantage also involves a high level of corporate responsibility and a duty to ensure that the resulting AI technologies are correctly deployed. Even though AI is expected to become more accessible to SMEs as its maturity level increases, if the situation remains as it is today, these companies will be relegated to a second tier due to the limitations on accessing data and expertise.

Considering the uncertainties brought by AI, there is a strong case for sustainable and regulated AI. The ethical and legal risks inherent in the use of AI, and the lack of clear responses to address them, create uncertainty and even some fear about using this technology at the heart of the company. Given that the implementation of AI-based solutions in the boardroom depends on an atmosphere of trust and certainty, it is necessary to lay the ground so that businesses feel comfortable enough to invest in and develop them. In this regard, we should see AI as a means rather than an end in itself;Footnote 127 it is a promising instrument to increase human well-being, bringing progress and innovation, and achieve sustainable goals that benefit society as a whole.Footnote 128

Generally, there is agreement amongst stakeholders on the need to build trust and design a regulatory framework grounded on sustainable development and respect for human rights.Footnote 129 It would contribute to setting the ‘game rules’ and ensuring that AI systems are lawful, ethical and robust, i.e., comply with the existing law and regulations, meet a set of ethical standards, and are capable of avoiding unintended consequences.Footnote 130 In other words, companies and other stakeholders could rely upon some basic safeguards when using or being affected by these systems.Footnote 131 The problem, however, is that an uncoordinated approach may result in conflicting obligations and over-regulation.

The creation of a regulatory framework should ensure an adequate level of harmonisation, technological neutrality and proportionality. Furthermore, it would be preferable to opt for ‘de minimis’ and risk-based regulation that provides flexibility and accommodates all the existing (and even future) technologies. The need for harmonisation is particularly relevant since many companies—notably tech companies—operate at an international level and usually have to comply with the rules of various legal systems. A fragmentary approach is likely to have a chilling effect on the different actors and discourage companies from digitalising the decision-making process, given the divergent paths to liability and other legal requirements.

In the following pages, we will develop our arguments in favour of a sustainable-oriented and regulated environment for AI systems. Bringing together the importance of preventing and neutralising the adverse effects of corporate actions on society through CSR programmes, on the one hand, and the urgency of setting standard rules for AI, on the other, we will reach the starting point towards a trustworthy and consensual AI environment.

4 AI’s Contribution Towards More Sustainable Decisions

The groundbreaking nature of AI tends to be viewed with optimism. From an innovation-driven perspective, implementing AI-based solutions has the potential for a win-win across business and society. However, in addition to the challenge of achieving full transparency and accountability of AI decisions, designers and users should consider the transformative and long-term effects that these technologies may have on individuals and society.Footnote 132 Given the urgency of orienting corporate action towards sustainable development, the three pillars of this concept (environment, society and economy) must be placed at the core of AI ethics. There is a consensus that effectively embracing AI and other advanced technologies will require cooperation from multiple stakeholders, especially directors and the public sector.Footnote 133 Indeed, the European Commission has suggested that AI engineers should be accountable for the social, environmental and human health impacts of AI decisions.Footnote 134 In this section, we will tackle how to make AI decisions work in a manner that is ethical and sustainable, promoting the interests of companies and other social actors while mitigating the associated costs.

The implications that stem from the relation between AI and sustainability have not been fully considered yet. However, there is no doubt that the huge potential of AI should be used to promote such an important goal for the well-being of society. Evidence has shown that AI can act as an enabler on 134 targets (79%) across all SDGs.Footnote 135 The concept of sustainable AI ‘deals not exclusively with the implementation or use of AI but should address the entire life cycle of AI, the sustainability of the: design, training, development, validation, re-tuning, implementation and use of AI’.Footnote 136

When implementing a specific AI system, the board of directors should consider the technology’s impact on the environment. The training and development of algorithms generate a substantial amount of greenhouse gas emissions. Strubell and colleagues have estimated that training one natural language processing model can result in more than 600,000 lb of CO2 emissions.Footnote 137 Considering that this process can last for months or even years, such an environmental footprint should be justified by the advantages of the intended application (for example, because its performance can generate significant developments or promote environmental actions that counteract the CO2 emissions). It would not be reasonable to invest effort and resources in using AI to design environmentally friendly policies within and outside the company without, at the same time, addressing the effects that developing a particular system might have on the planet. Accordingly, companies should allocate resources and budget allowances to ensure the sustainability of the data sources, power supplies and infrastructure used to train or tune algorithms.
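
A board need not be expert in ML to request a first-order estimate of this footprint. The following back-of-envelope Python sketch multiplies hardware power, training time, datacentre overhead and grid carbon intensity; every figure is an illustrative assumption, not data from the study cited above.

```python
# A back-of-envelope sketch of a training-run carbon estimate:
# energy = hardware power x time x datacentre overhead, converted via grid
# carbon intensity. All figures below are illustrative assumptions.
gpus = 8                      # number of accelerators
gpu_kw = 0.3                  # assumed average draw per GPU (kW)
hours = 24 * 30               # one month of training
pue = 1.5                     # datacentre power usage effectiveness
grid_kgco2_per_kwh = 0.4      # assumed grid carbon intensity (kg CO2e/kWh)

energy_kwh = gpus * gpu_kw * hours * pue
emissions_kg = energy_kwh * grid_kgco2_per_kwh
print(f"{energy_kwh:.0f} kWh -> ~{emissions_kg:.0f} kg CO2e")
# 8 * 0.3 * 720 * 1.5 = 2592 kWh -> ~1037 kg CO2e for this modest run
```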

Once the company has a sustainable AI infrastructure, it is time to explore how AI can promote a broader range of environmental and socio-economic goals. This is achievable since basic algorithms can be programmed to drive AI towards more ethical and sustainable corporate actions. Data science for social good will enable AI methods to tackle unsolved societal challenges in a measurable manner.Footnote 138 ‘Data Science for Social Good’ embraces ‘attempts to solve complex social problems through the use of increasingly available, increasingly combinable, and increasingly computable digital data’.Footnote 139 Supported by big data and complemented by directors’ supervision, AI will be able to contribute data-driven decisions to directors’ business judgement and strategic management policies to help promote sustainability.

The collaboration between AI and board members to promote more sustainable companies can be sought through the following channels. First, AI will encourage transparency, which is regarded as a core value and critical approach to enhance sustainability.Footnote 140 AI can also measure disclosure against standards that may be legally required nationally and internationally to ensure compliance with regulations or voluntary standards.Footnote 141
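
A rudimentary version of such disclosure screening can be sketched in Python as follows, checking a report’s text against a checklist of required topics; the checklist items and keywords are hypothetical placeholders rather than any actual legal or voluntary standard.

```python
# A minimal sketch of automated disclosure screening: check a report's text
# against a checklist of required topics. Items and keywords are hypothetical.
REQUIRED_TOPICS = {
    "greenhouse gas emissions": ["ghg", "greenhouse gas", "co2", "emissions"],
    "board diversity": ["board diversity", "gender balance"],
    "supply chain due diligence": ["supply chain", "due diligence"],
}

def screen_disclosure(report_text: str) -> dict:
    """Return, for each required topic, whether any of its keywords appear."""
    text = report_text.lower()
    return {topic: any(kw in text for kw in kws)
            for topic, kws in REQUIRED_TOPICS.items()}

sample = "Our CO2 emissions fell 12% and supply chain due diligence expanded."
print(screen_disclosure(sample))
# {'greenhouse gas emissions': True, 'board diversity': False,
#  'supply chain due diligence': True}
```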

Second, AI will be able to recommend sustainable policies and make sustainable decisions. Complementing human directors’ capacities, AI can assess a company’s ability to generate positive outcomes by organising ethical goals within a smart system. To this effect, the development of sustainability screens or indexes could reduce the burden of understanding data analytics by facilitating the generation of synthetic data visualisations.Footnote 142 Algorithms could be trained to predict the effectiveness of sustainability-oriented corporate policies with an impressive level of precision, helping in the formulation and optimisation of the CSR programme to achieve distributive justice. AI can also make recommendations on integrating the CSR strategy and policy with the overall business strategy.

This role can also contribute to automating the planning of the ESG investment strategy and some complementary tasks, such as identifying the stakeholder network, assessing variables and taking measurements once the strategy has been clarified. The algorithms in this automated process will identify the assets that meet the thresholds and continuously compute portfolio returns to present the company and its directors with the best possible portfolio options. The company could then manage both the risk of an asset from a financial performance perspective and the risk from an ESG perspective.Footnote 143
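
As a minimal illustration of this screening step, the following Python sketch excludes assets below a board-defined ESG threshold and ranks the remainder on a simple blend of expected return and ESG score; the tickers, scores, threshold and 50/50 weighting are all hypothetical assumptions.

```python
# A minimal sketch of automated ESG screening: exclude assets below a
# board-set ESG threshold, then rank the rest on a blend of expected return
# and ESG score. Tickers, scores and weights are hypothetical.
import pandas as pd

assets = pd.DataFrame({
    "ticker":          ["AAA", "BBB", "CCC", "DDD"],
    "esg_score":       [82, 45, 71, 90],      # 0-100, higher is better
    "expected_return": [0.06, 0.11, 0.08, 0.05],
})

MIN_ESG = 60                                   # board-set ESG threshold
screened = assets[assets["esg_score"] >= MIN_ESG].copy()

# Rank by a simple 50/50 blend of financial and ESG performance
screened["blended"] = (
    0.5 * screened["expected_return"] / assets["expected_return"].max()
    + 0.5 * screened["esg_score"] / 100
)
print(screened.sort_values("blended", ascending=False))
```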

Third, AI can also play a preventive role by providing a barrier to corporate damage to society and the environment. Preventive measures could be achieved through smart technology that identifies discrimination, fraud, or conflicts of interest, with AI playing a vital role in providing more predictive and preventive measures to mitigate social and environmental risks. The most effective mitigation approach is to establish alignment between AI applications and the decision-making process of boards in terms of their risk management strategy. Boards of directors could use preventive measures designed by AI to formulate the most suitable CSR plan and strategic policies for the long-term interests of companies.

Fourth, AI, which uses big data to run algorithms, can provide boards with opportunities to enhance their adaptive capabilities and their ability to respond rapidly to environmental changes.Footnote 144 A wave of new AI tools, designed for functions such as document processing or responding to shareholders’ or stakeholders’ queries, will enhance the efficiency of boards’ decision-making processes.Footnote 145 AI may also support or replace humans in situations where the technology is more likely to make a better and more informed decision, where decisions have to be made quickly, or where the process is complicated and requires a volume of data that humans are simply unable to process. If AI-enhanced decisions are to be recognised in a legal context, directors’ duties and their enforcement, a subset of corporate law that has to date attracted considerable attention in the literature, may be a useful angle. AI could play a role in generating preventive and deterrent interactions, such as the possibility of imposing a directors’ duty to consult AI in order to satisfy the subjective and objective tests of directors’ duties of care.

Finally, AI can be programmed to act in a way that aligns with the organisation's core values. This may be implemented in the initial phase of the AI decision-making process, namely the goal-setting phase, when the controllers of the company decide on the goals of the AI, how to balance the different interests in the company, and the features and data from which inferences may be drawn. Along with the data dependency and bias problems mentioned above, AI-enabled technology can conflict with human ethics and exacerbate inequalities in society. Given that AI systems usually reflect the needs and values of the regions in which they are designed, they might benefit developed countries while discriminating against developing countries and minorities.Footnote 146 In addition, the use of AI can involve conflicts of interest that reveal the partial nature of algorithms.Footnote 147

Only if these risks or costs are managed will it be possible for the board of directors to use AI to promote sustainability effectively. The regulatory inertia in this arena can be detrimental and may even constitute a threat to sustainability since companies can produce and commercialise technologies without adherence to international principles or other ethical standards.Footnote 148 As Truby has pointed out, ‘any irresponsible development of AI software leaves the utility of the technology exposed to the immense risk of negative consequences’, and it may entail damage for humans and sustainable development.Footnote 149 In this context, there is a need for regulation to ensure a responsible design and deployment of AI-based solutions, on the one hand, and due consideration of public values and interests, on the other. Creating a policy framework that contributes to a reasonable standard of transparency and increases trust in AI decision-making is essential.

A first step in elaborating a regulatory framework is to ensure that policy-makers have a sufficient understanding of the risks and challenges of AI. Otherwise, the oversight policy 'is likely to be ineffective at best and counterproductive at worst'.Footnote 150 Thus, the expertise and professionalisation of regulators is as important as the regulatory instrument to be implemented. In this regard, the regulation chosen should achieve the objectives outlined above without dissuading software developers from innovating and investing. Over-regulation of AI may impose intolerable bureaucratic requirements and harm innovation, while an excessively detailed framework would quickly become unsuitable for future AI developments.

5 Regulating AI for the Common Good: The Need for a Harmonised and Risk-based Approach

AI offers a significant opportunity for directors and companies to promote their CSR portfolio and manage the CSR programme in active collaboration with internal and external stakeholders, informed by big data. AI continues to 'gain in complexity and sophistication',Footnote 151 offering tremendous benefits in terms of efficiency and innovation. However, it also comes with the responsibility to monitor data collection, data quality, and how data affects social justice, addresses vulnerabilities and builds resilience. Big data will help the board of directors derive predictive CSR policies that fit the stakeholder network and stakeholder priorities of the company's business model, enabling informed and supportive predictions and improving trust with constituents. Cultivating trust with stakeholders, particularly indirect stakeholders such as extraterritorial local communities, requires regulating data quality and data governance, including the processes for gathering, collating and scrutinising data, so that the regulatory framework can assist companies more effectively.

Over the last few years, numerous national and international organisations have developed a range of ethical guidelines related to AI. More than 84 initiatives describe principles and values to be followed when developing and using AI-based solutions, coming not only from governments or inter-governmental organisations but also from the private sector, civil society and other stakeholders.Footnote 152 Several expert committees have been created to this end, such as the High-Level Expert Group on Artificial Intelligence appointed by the European Commission, the Expert Group on AI in Society of the OECD, and the Select Committee on Artificial Intelligence of the UK's House of Lords.Footnote 153 There is consensus among different actors on laying down a set of rules to discipline this new reality, even though divergences remain about how to do it and who should do it.Footnote 154

One of the pioneers in this field was the Future of Life Institute, which in 2017 developed the 23 'Asilomar AI Principles' covering productivity, ethics and security.Footnote 155 In line with the willingness to include social and environmental considerations in corporate decisions, some of these principles focus on sustainability issues such as respecting human rights, embedding social purposes, and the significance of shared benefits and prosperity in achieving the 'common good' through AI.Footnote 156 It is particularly striking that shared benefits and the achievement of the 'common good' through AI are proposed as principles for designing, programming, utilising and distributing AI. The common good is defined as 'the sum total of social conditions which allow people, either as groups or as individuals, to reach their fulfilment more fully and more easily'.Footnote 157 This concept has been used to pursue the goals of promoting more ethical companies and protecting vulnerable stakeholders. Using AI responsibly and ethically is a crucial component of the 'global commons' and a prerequisite for the 'common good' in the global business environment.

As the societal use of and dependency on AI and ML increases, it is crucial to identify what needs exist from a regulatory perspective.Footnote 158 The current scenario is characterised by fragmentary and inconsistent approaches. There is no unanimity on the principles or guidelines that should govern AI and, even when the different initiatives agree on one or more principles, there are considerable differences in how to interpret and implement them.Footnote 159 Instead of providing certainty to designers, users and even courts in disputes over AI-based decisions, the sheer number of approaches and proposals creates the opposite effect. Hence the need to reach a reasonable level of harmonisation in this area by establishing a set of minimum requirements that AI should meet, or a system of red flags identifying principles that should under no circumstances be violated. Such harmonisation would give legitimacy to the decisions made in the boardroom and facilitate an eventual review by the courts: directors would know what rules to follow in the decision-making process, and judges would have a yardstick against which to evaluate that performance. Like the US and the UK, the European Union has actively worked on a harmonised framework to regulate AI. As we explain below, the result of these efforts was a proposal for a regulation launched on 21 April 2021.

5.1 The Need for Harmonised AI Regulation

Due to the various forms that CSR-related performance can take, the regulations governing CSR also come in multiple shapes. They are drafted and enforced by regulatory bodies at different levels. At the most fundamental level, government regulations are formal and legally binding, while some recommendations have guiding effect but no legal standing. Government bodies issue public regulations at the regional, national or supra-national level,Footnote 160 based on delegated state, government or international powersFootnote 161 founded on each country's membership.Footnote 162 Meanwhile, globalisation has further increased the complexity of the legal environment by exposing corporations to international law and the laws of foreign nations.Footnote 163 Progressive advocates engaged in promoting more sustainable, more environmentally friendly and more human-rights-focused businesses will also drive corporations to embrace more socially responsible ethical codes and guidelines for conduct, the adoption of which is mainly voluntary.

The situation is similar when AI-based solutions are applied in the company to promote CSR goals and sustainable development. The problem, however, is that we cannot expect to regulate such a complex and immature technology within a fragmentary regulatory context. On the contrary, an all-encompassing and coordinated strategy is needed to strike the right balance between a stronger focus on the technological details of the various AI applications, which aims to build bridges between abstract values and technical solutions, and the increasing relevance of social and personality-related aspects.Footnote 164 As Martin Rees wisely put it, 'we need to think globally, we need to think rationally. We need to think long term; empowered by 21st-century technology but guided by values that science alone can't provide'.Footnote 165

Many arguments reinforce the need for harmonisation in this field from a legal, technical and socio-economic perspective. Taking the legal approach as a starting point, we can identify three issues that demonstrate the insufficiency of the current model and the urgency of taking action. The first problem, as mentioned above, is the uncertainty derived from the existence of numerous bodies of ethical principles with different approaches. If a company wants to voluntarily follow some guidelines to ensure that its AI-based decisions respect social values and human rights, it must first decide which principles are most appropriate and which organisation to trust. Will the principles produced by a governmental organisation provide more legitimacy to the company's operations, or will those designed by the private sector fit its interests better? Which principles have other companies in the market adopted? On the one hand, this scenario creates uncertainty among directors and might even discourage them from digitalising some boardroom tasks due to the unpredictable legal consequences. On the other hand, it provides a low standard of protection to stakeholders and society, since their privacy, fundamental rights and well-being can be endangered by AI.

The second problem, and probably one of the main reasons why there is no harmonised regulation yet, is the vast influence of the private sector. It has been suggested that the efforts of big companies, such as Google, Facebook and SAP, in developing ethical guidelines and investing in research on the subject respond to an intention to shape AI ethics in a manner that serves their interests or priorities.Footnote 166 Besides, the design of high-level guidelines could project a false image of ethical commitment to potential customers and investors, or even convince society that there is no need for binding regulation or new legislation to tackle the technological risks of AI.Footnote 167 A uniform approach would ensure that the use of AI does not violate the interests and rights of citizens and that compliance with a certain standard of ethical behaviour is real, not merely a marketing strategy.

Finally, the third problem concerns the lack of enforcement mechanisms. The non-binding character of the existing bodies of principles that aim to discipline AI design and deployment means that, in practice, deviating from the ethical guidelines has no consequences for companies beyond reputational losses in case of misconduct or abuse.Footnote 168 The good intentions written down on paper have no actual effectiveness in practice. Thus, it is crucial to implement suitable enforcement measures that compel the different actors to observe the relevant principles. Public authorities have a pivotal role to play in this regard.

Since announcing its strategy on artificial intelligence for Europe in 2018, the European Commission has taken a clear position in favour of regulating AI.Footnote 169 Good evidence of this is the appointment of the High-Level Expert Group on AI to provide advice on investments and ethical governance issues. In February 2020, the Commission published a White Paper on AI entitled 'A European approach to excellence and trust', aiming to build a coordinated European plan for a trustworthy AI environment.Footnote 170 At the same time, the European Parliament and the European Council have repeatedly demanded legislative action to ensure that both the benefits and the risks of AI are adequately addressed and to facilitate the enforcement of rules.Footnote 171

To develop this ecosystem of trust by creating a legal framework, the European Commission approved a proposal to lay down harmonised rules on AI on 21 April 2021. As stated in the explanatory memorandum, the proposal is

based on EU values and fundamental rights and aims to give people and other users the confidence to embrace AI-based solutions while encouraging businesses to develop them. AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being.Footnote 172

At the same time, it aims to establish a set of transparent, predictable and proportionate obligations to ensure legal certainty and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems.Footnote 173 This proposal is based on the concept of trustworthy AI developed by the above-mentioned High-Level Expert Group on AI. The full potential of AI will only be realised if human beings and communities have confidence in it. It is imperative to design a clear and comprehensive framework.Footnote 174

In the UK, governmental bodies such as the UK’s House of Lords Select Committee on AI and the All-Party Parliamentary Group on AI have been created to address the economic, social and ethical implications of developing and implementing artificial intelligence. In the report ‘AI in the UK: ready, willing and able?’, the former recommends that the government work with government-sponsored AI organisations in other leading AI countries and convene a global summit to establish international norms for the design, development, regulation and deployment of artificial intelligence.Footnote 175 But it also states that blanket AI-specific regulation would be inappropriate at that stage. Existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation that may be needed.Footnote 176 Given the government’s inactivity in this regard, the UK’s House of Lords Liaison Committee published, in December 2020, the report ‘AI in the UK: no room for complacency’, which claims that

the challenges posed by the development and deployment of AI cannot currently be tackled by cross-cutting regulation. The understanding by users and policy-makers needs to be developed through a better understanding of risk and how it can be assessed and mitigated.Footnote 177

There is a long way to go in building harmonised regulation on AI. However, the initiative of the European Commission—if successfully implemented—will lead this journey and might stimulate the development of new proposals. In any case, the regulatory model should be built on the basis of a risk-based assessment and contemplate different levels of regulation depending on the potential harms, in order to comply with the principle of proportionality.

5.2 Risk-based Regulatory Approach

Having built the case for a consistent legal framework for AI, we should now consider the best way to achieve it. AI encompasses various technologies with different features and potential risks, and there is no one-size-fits-all solution. Moreover, the technology evolves by leaps and bounds, and traditional regulatory instruments cannot respond immediately to this new reality. For that reason, we believe it is time for a new approach based on the higher or lower risk of the specific solution, one which outlines a set of general principles and minimum standards to be met in each situation. This 'de minimis' regulation would allow a uniform model to be applied to the different actors in the market without hindering technological development and innovation.

Accordingly, we propose a risk-based approach to regulating AI. This approach is extensively used in areas as diverse as the environment, finance, food and legal services.Footnote 178 Risk-based regulation, as a particular strategy or set of techniques used by regulators, may involve developing decision-making frameworks to prioritise regulatory activities and risk assessment.Footnote 179 It typically takes the identification of risks as the starting point, characterises the elements of those risks, such as their nature, type, level and likelihood, and creates a ranking of risks based upon these assessments.Footnote 180 We do not have space to explore this in depth in this article. However, we believe this approach will help companies develop AI in a safe and beneficial direction, and it is particularly suitable for regulators with a mission to address risks from AI-related accidents (safety) or the misuse of AI (security).Footnote 181 As claimed before, mitigating risks and achieving global AI for the common good will require international cooperation and present a unique governance opportunity for regulators. The harmonisation of law may help regulators define their approach to AI risk clearly and consistently. The advantages of risk-based regulation will accelerate companies' commitment to incorporating rigorous analysis of potential risks into corporate decisions. Ultimately, risk-based regulation 'facilitates robust governance, contributing to efficient and effective use of regulatory resources and delivering interventions in proportion to risk'.Footnote 182

The proposed approach relies on the observance of the principle of proportionality since it balances regulatory intervention against the burden it creates for companies, especially SMEs. The specific measures to regulate AI systems will be different depending on the risk of causing harmful and unwanted consequences. When the risk is non-existent or low, a flexible approach could be enough. For instance, the company might want to prepare a voluntary ethical code of conduct or follow a set of international AI principles. In a second tier, when the risk is medium to high, it would be necessary to implement business standards or guidelines with clear disclosure and compliance mechanisms. Finally, in cases of high-risk systems, comprehensive regulation might be introduced. Therefore, the higher the probability of causing harm, the more intense the regulator’s intervention.
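The tiering logic of the last two paragraphs can be expressed schematically. In the sketch below, each identified risk is scored by likelihood and severity, ranked, and mapped to a proportionate regulatory response; the scales, cut-off values and example risks are illustrative assumptions rather than prescribed figures.

```python
# Schematic sketch of risk-based tiering: score, rank, and map risks
# to a proportionate regulatory response. Scales and thresholds are
# illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: float  # 0.0 (remote) .. 1.0 (near-certain)
    severity: float    # 0.0 (negligible) .. 1.0 (severe harm)

    @property
    def score(self) -> float:
        return self.likelihood * self.severity

def regulatory_tier(score: float) -> str:
    """Map a risk score to the proportionate response described above."""
    if score < 0.2:
        return "voluntary ethical code / international AI principles"
    if score < 0.6:
        return "business standards with disclosure and compliance mechanisms"
    return "comprehensive regulation"

risks = [
    Risk("chatbot mishandles a routine customer query", 0.5, 0.1),
    Risk("hiring algorithm discriminates against a protected group", 0.4, 0.9),
    Risk("biometric system infringes fundamental rights", 0.7, 1.0),
]

# Rank from highest to lowest risk, as risk-based regulation suggests.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.name}: score {r.score:.2f} -> {regulatory_tier(r.score)}")
```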

This model has recently been endorsed by the European Commission in its proposal for a Regulation for AI. Given the adverse effects that the use of AI can entail for stakeholders, workers and other individuals, the proposed regulatory framework aims to balance the different objectives and interests of the parties involved and to avoid potential violations of fundamental rights. In order to protect privacy, personal data and other sensitive information, it is closely related to the Open Data Directive,Footnote 183 the Proposal for a Regulation on European Data GovernanceFootnote 184 and the proposed Data Act,Footnote 185 and complements the General Data Protection RegulationFootnote 186 (GDPR) as well as legislation on consumer protection, non-discrimination and environmental protection. Although it imposes some restrictions on the freedom to conduct business, this consequence is consistent with the objective of ensuring that only safe products find their way to the market, and it is justified by overriding reasons of public interest. The level of restriction should be assessed on a case-by-case basis to ensure that it does not go beyond what is necessary to prevent and mitigate serious safety risks and infringements of fundamental rights. Furthermore, to ensure consistency, avoid duplication and minimise additional burdens, the regulatory framework will be integrated into the existing sectoral safety legislation.Footnote 187

The proposal imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety. It distinguishes between AI systems that create an unacceptable risk, a high risk, and a low or minimal risk.Footnote 188 Those that create an unacceptable risk, for example by violating fundamental rights, will be prohibited. For AI systems that can result in a high risk to the health and safety or fundamental rights of natural persons, the proposal includes specific rules that must be observed as a condition for operating on the European market (such as requirements of high-quality data, documentation and traceability, transparency, human oversight, accuracy and robustness) and an ex-ante conformity assessment.Footnote 189 The AI systems that generate a high risk, or are likely to do so in the future, are listed in Annex III, based on the area in which they will be applied and their specific purpose. For other AI solutions, the proposal imposes only very limited transparency obligations, which apply if the system interacts with humans, is used to detect emotions or to determine association with social categories based on biometric data, or generates or manipulates content (so-called 'deep fakes').Footnote 190 At the same time, providers of non-high-risk AI systems are encouraged to create codes of conduct on a voluntary basis to apply the mandatory requirements for high-risk systems. As stated in the explanatory memorandum, those codes may also include commitments related to environmental sustainability, accessibility for persons with disabilities, stakeholders' participation in the design and development of AI systems, and diversity of development teams.Footnote 191
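The tiered structure of the proposal, as summarised above, can be rendered as a simple lookup; the category labels and obligations below paraphrase our summary of the proposal and are not its legal wording.

```python
# Simplified lookup reflecting the four risk categories of the
# Commission's proposal as summarised in the text; paraphrased, not
# the legal wording.
OBLIGATIONS = {
    "unacceptable": ["prohibited from the European market"],
    "high": [
        "high-quality data",
        "documentation and traceability",
        "transparency",
        "human oversight",
        "accuracy and robustness",
        "ex-ante conformity assessment",
    ],
    "limited": [
        "transparency obligations, e.g. disclosing 'deep fakes' or "
        "that the user is interacting with an AI system",
    ],
    "minimal": ["voluntary codes of conduct are encouraged"],
}

def obligations_for(category):
    """Return the obligations attached to a given risk category."""
    return OBLIGATIONS[category]

print(obligations_for("high"))
```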

However, all these regulatory efforts would be wasted in the absence of effective enforcement mechanisms. A robust monitoring and evaluation scheme is essential to ensure the successful and uniform implementation of the Regulation. The European Commission suggests establishing a 'European Artificial Intelligence Board' to coordinate and assist the competent national authorities in charge of ensuring the application and implementation of the European Regulation. This means that each Member State must designate a national supervisory authority from among its existing structures. As established in the proposal,

AI providers will be obliged to inform national competent authorities about serious incidents or malfunctioning that constitute a breach of fundamental rights obligations as soon as they become aware of them, as well as any recalls or withdrawals of AI systems from the market.

This ex-post enforcement will complement companies’ ex-ante conformity assessment through internal checks and auditing by third parties. In this regard, AI providers will have to provide meaningful information about their systems and the conformity assessments carried out on those systems. The combination of both mechanisms would facilitate early intervention and avoidance of foreseeable potential harms.

In a corporate setting, the most suitable regulatory framework built on a risk-based approach requires governments to design and deliver AI regulation throughout the policy cycle, with an emphasis on the participation and contribution of stakeholders in policy mixes. This would be helpful for AI regulation, especially considering the importance of reducing the administrative burden of the formal consultation process and of clarifying and simplifying existing regulation. The involvement of stakeholders who understand the needs and risks associated with AI applications, particularly those with knowledge of AI, big data and robotics, would help to make the regulation accessible and understandable. The involvement of both stakeholders and data scientists will also help to ensure that data is labelled appropriately. The regulation of AI therefore needs a team effort, with multi-disciplinary input delineating the features of the roles associated with the deployment of AI in the boardroom. AI has become an intrinsic part of almost every digital experience, and smart regulation paves the way for that; such an approach can be tailored to satisfy the imperatives of specific social, environmental and human rights issues.

5.3 The Ultimate Goal: AI for the Common Good

Along with the risks and challenges associated with AI, the regulatory framework must consider the needs and well-being of society and the safeguarding of the environment.Footnote 192 When using expressions such as 'common good' or 'the commons', we are referring to the 'public good' for 'social equity and livelihoods'.Footnote 193 Both designate the granting to individuals of equal and unrestricted access to communal resources. These notions validate concepts such as 'cooperation', 'collaboration' and 'coordination', which are seen as more or less synonymous terms.Footnote 194 The goal of society is not independent of its members: 'the commons' and the 'common good' should belong to all social beings, designating the good of both society and its members.Footnote 195 These notions are closely related to our arguments on AI and sustainability. In this vein, the UK Parliament has suggested that the first overarching principle for an AI Code is that AI 'should be developed for the common good and benefit of humanity'.Footnote 196

We also employ the concept of the 'common good' with an emphasis on promoting more sustainable companies through the application of AI. 'The commons' is thus employed as the goal or rationale for driving corporate sustainability forward, whereas the 'common good' is the ultimate goal of using AI. All stakeholders, including shareholders, share the 'common pool resources' of a company, and each of them should have a voice, rights or even obligations in pushing the company towards sustainability. AI-assisted and AI-enhanced decisions should benefit all constituencies that may legitimately enjoy the 'common pool resources', and corporations should apply AI to make positive contributions to society so as to achieve the 'common good'.

Following this reasoning, it can be argued that AI not only contributes to achieving social and environmental goals but can also be considered a 'common good' in itself, since its use entails significant benefits for society. While the First Industrial Revolution used water and steam for production, the Second electric power and the Third information technology, AI now leads the Fourth Industrial Revolution towards a fusion of technologies that is likely to redefine the most valuable human skills.Footnote 197 Given that AI is an essential asset for the progress of society, it should observe shared ethical ideals and convergent values. The will of all must have a central position in this process and prevail over the influence of big tech companies and the media, which greatly shape how this technology is used. As explained above, this goal requires a consistent and harmonised regulatory framework that pays special attention to human values and mitigates the potential harms of AI deployment. The logic is illustrated in Figure 1 below.

Figure 1. Sustainable decisions and regulation of AI as a common good

6 Conclusion

AI has a significant impact on most social and economic sectors, and this effect is expected to grow in the near future. In corporate governance, companies can benefit from the use of AI in different ways, obtaining important gains in efficiency and enhancing the long-term interests of the corporation while taking into account the interests of shareholders and other stakeholders. AI can contribute to the realisation of CSR goals by enabling boards of directors to analyse vast amounts of data in real time and predict the best course of action. Ensuring that corporate decisions are well informed and based on trustworthy information will optimise the decision-making process and increase the success rate of sustainability policies.

When it comes to corporate sustainability challenges, artificial intelligence has proved to be a double-edged sword. On the one hand, AI can drive significant progress on the most complicated environmental and social problems faced by humanity. On the other hand, the efficiencies and innovations generated by AI may also bring new risks, such as automated bias and conflicts with human ethics. In this article we have argued that both companies and governments should develop corporate policies and regulatory frameworks to address the sustainability challenges and risks brought by AI.

Instead of promoting sustainability, unregulated AI would be a threat to it, because it would not be possible to effectively monitor its effects on the economy, society and the environment. Given the rapidly evolving nature of this technology, we propose a proactive, harmonised and risk-based approach to the potential problems brought by AI, so that AI can be applied effectively and ethically to achieve the common good. Ensuring an adequate level of technological neutrality and proportionality in the regulation is the key to mitigating the wide range of potential risks inherent in the use of AI. Such a regulatory framework would not only create a consensus concerning the risks to avoid and how to avoid them, but also include enforcement mechanisms to ensure a trustworthy and ethical use of AI in the boardroom. Once this objective is achieved, it will be possible to refer to this technological development as a common good that constitutes an essential asset for human development.