Barriers to adopting automated organisational decision-making through the use of artificial intelligence

Dawid Booyse (Gordon Institute of Business Science, University of Pretoria, Johannesburg, South Africa)
Caren Brenda Scheepers (Gordon Institute of Business Science, University of Pretoria, Johannesburg, South Africa)

Management Research Review

ISSN: 2040-8269

Article publication date: 27 June 2023

Issue publication date: 2 January 2024


Abstract

Purpose

While artificial intelligence (AI) has shown its promise in assisting human decision-making, barriers to adopting AI for decision-making remain. This study aims to identify barriers to the adoption of AI for automated organisational decision-making. AI plays a key role not only by automating routine tasks but also by moving into the realm of automating decisions traditionally made by knowledge or skilled workers. The study, therefore, selected respondents who had experienced the adoption of AI for decision-making.

Design/methodology/approach

The study applied an interpretive paradigm and conducted exploratory research through qualitative interviews with 13 senior managers in South Africa from organisations involved in AI adoption to identify potential barriers to using AI in automated decision-making processes. A thematic analysis was conducted, and AI coding of the transcripts was compared with manual thematic coding, offering insights into computer- versus human-generated coding. A conceptual framework was created based on the findings.

Findings

Barriers to AI adoption in decision-making include human social dynamics, restrictive regulations, creative work environments, lack of trust and transparency, dynamic business environments, loss of power and control, as well as ethical considerations.

Originality/value

The study uniquely applied the adaptive structuration theory (AST) model to AI decision-making adoption, illustrated the dimensions relevant to AI implementations and made recommendations to overcome barriers to AI adoption. The AST offered a deeper understanding of the dynamic interaction between technological and social dimensions.

Citation

Booyse, D. and Scheepers, C.B. (2024), "Barriers to adopting automated organisational decision-making through the use of artificial intelligence", Management Research Review, Vol. 47 No. 1, pp. 64-85. https://doi.org/10.1108/MRR-09-2021-0701

Publisher


Emerald Publishing Limited

Copyright © 2023, Dawid Booyse and Caren Brenda Scheepers.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

While there is no commonly accepted definition of artificial intelligence (AI), Duan et al. (2019, p. 63) note that “It is normally referred to as the ability of a machine to learn from experience, adjust to new inputs and perform human-like tasks”. Organisational decision-making that requires high cognitive skills and is traditionally performed by knowledge workers can be automated (Chong et al., 2022). However, there are barriers associated with AI adoption that require more research (Moser et al., 2021), and this paper’s objective is to investigate these barriers.

AI broadly describes a field of computer science dedicated to creating software or machines that exhibit or mimic intelligence or intelligent behaviour (Leopold et al., 2016). Shneiderman’s (2020) definition includes automated/autonomous systems using technologies such as machine learning, neural nets, statistical methods, recommenders, adaptive systems, and speech, facial, image and pattern recognition. While large language models like Chat Generative Pre-trained Transformer (ChatGPT) are currently disrupting education, consumer advice and internet searches, experts are pointing out the dangers of inaccurate information in plausible, convincing answers (Pierani and Bruggeman, 2023).

Although some studies (Parry et al., 2016) have examined the benefits and concerns of using AI in an organisational context, limited research has been conducted on which barriers affect the adoption of AI in organisations specifically for decision-making.

The contribution of the current paper lies in revealing the perceptions and fears around AI adoption and how organisations could manage these perceptions and mitigate the associated risks.

Literature review

Automated decision-making properties of artificial intelligence

Kshetri (2021, p. 970) advises, “Artificial intelligence (AI) is a potentially transformative force that is likely to change the role of management and organizational practices”. In the smart machine age, machines can match or outperform humans in work activities requiring high cognitive capabilities, owing to new processing hardware, more powerful algorithms and vast amounts of data (Autor and Dorn, 2013; Manyika et al., 2017).

Davenport and Kirby (2016) argue that there is a global trend in automation towards machines capable of making autonomous decisions in more complex and less structured data environments. Studies substantiate this argument: early automation focused mainly on routine tasks and decisions performed by low- and medium-skilled workers, whereas current advances can automate tasks and decisions performed by knowledge workers with high cognitive skills, highlighting the danger of “machine for human” substitution in organisations (Autor and Dorn, 2013; Frey and Osborne, 2013; Loebbecke and Picot, 2015).

A case study by Davenport and Kirby (2016) reported an overall three-year return on investment of between 650% and 800%, while Bank of America Merrill Lynch predicted that by 2025 the impact of AI could be between $14tn and $33tn, including a $9tn reduction in employment costs (The Economist, 2016). It is, therefore, no surprise that in 2015 alone $8.5bn was spent on AI companies, four times as much as in 2010 (The Economist, 2016).

Different types of AI, such as neural networks, swarm intelligence, genetic algorithms and fuzzy logic, can be used to solve different real-world problems (Autor, 2015). This research paper concentrates mainly on the use of neural networks linked to the concept of machine learning or deep learning.

The current study focuses on neural networks due to their ability to learn and improve decision-making performance (Duan et al., 2019). This learning ability sets neural networks apart from automation using rule-based or expert-system decisions, which can easily be codified. An example of a rule-based decision would be: if this condition is met, then do this action (Davenport and Kirby, 2016).
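
To see what “codified” means in practice, the short sketch below expresses a rule-based decision of the kind Davenport and Kirby (2016) describe in ordinary code. The loan-approval rule and its thresholds are hypothetical, invented purely for illustration.

```python
# Hypothetical codified rule: the decision logic is written out explicitly
# by a human and only changes when a human rewrites the condition.
def approve_loan(credit_score: int, annual_income: float) -> bool:
    # "If this condition is met, then do this action."
    return credit_score >= 650 and annual_income >= 200_000
```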

Neural networks, inspired by biological models, simulate connected neural units that model how neurons in the brain interact (Duan et al., 2019). The simulated neurons in the network either fire or remain static depending on the weighted sum of their inputs. Learning occurs through a process of adjusting the weights until the action-computing performance is acceptable (Nilsson, 1998).
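
A learning system, by contrast, adjusts its own parameters. The minimal perceptron-style sketch below illustrates only the weighted-sum-and-adjust mechanism described above; it is not the architecture of any system discussed by the respondents, and the threshold, learning rate, epoch count and AND-style training data are arbitrary assumptions.

```python
def fire(weights, bias, inputs):
    # The simulated neuron "fires" (outputs 1) when the weighted sum
    # of its inputs plus a bias crosses the threshold of zero.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def train(samples, lr=0.1, epochs=50):
    # samples: list of (inputs, target) pairs with targets 0 or 1.
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - fire(weights, bias, inputs)
            # Perceptron rule: nudge the weights towards the correct output
            # until the action-computing performance is acceptable.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# The rule is learned from data rather than codified by hand.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data)
print([fire(weights, bias, x) for x, _ in data])  # expected: [0, 0, 0, 1]
```

Deep learning stacks many such units into layers; the same weight-adjustment idea applies, which is also part of why the resulting decision logic is hard to unpack.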

Several previous studies have shown that decision-making requiring high cognitive skills, traditionally performed by knowledge workers, can be automated in an organisational context (Manyika et al., 2017; McAfee and Brynjolfsson, 2014). This study determines the factors that impede organisations from adopting AI for automated decision-making. AI has limitations in interpreting emotional and social states to draw conclusions and make context-appropriate decisions (Manyika et al., 2017). Paschen et al. (2020) emphasise that AI systems still have a very narrow focus. Areas that currently remain more suited to human decision-making include social interaction and comparison, conflict management, responsibility, teamwork and ethical judgement (Tambe et al., 2019).

Parry et al. (2016) argue that an AI-based system could break the bond between leader and followers, thereby reducing its desirability. In addition, Newell and Marabelli (2015) note that because data is needed to make decisions, organisations would have to monitor employees much more closely to collect sufficient data. This could make employees feel they have lost autonomy, and as a result, AI could significantly impact motivational levels. Paschen et al. (2020) also emphasised the importance of change management processes when introducing new technology like AI since employees might feel threatened by the possibility of AI making their roles obsolete.

Adaptive structuration theory

Rogers’s (1995) diffusion of innovations theory, widely used to explain the adoption of technology, seeks to explain how, why and at what rate new ideas and technologies spread; its focus is, therefore, on the diffusion of such innovations (Detjen et al., 2021). This theory is limited in terms of specific aspects of AI, including automated decision-making. This characteristic of AI takes the initial function of technology, that of enabling and assisting humans, much further: into functioning independently from humans once it has been programmed to learn. We reviewed the literature on adaptive structuration theory (AST) to deepen our understanding of the barriers to AI adoption.

The AST model developed by DeSanctis and Poole (1994) comprehensively shows the antecedents, processes and outcomes of the recursive relationship between technology and social action, each iteratively shaping the other. DeSanctis and Poole (1994) emphasise that the impact of advanced information technologies depends on how well social and technological structures are jointly optimised. They used the example of videoconferencing and how this technology influences how people interact in the business environment. The AST has not been applied in the AI decision-making environment, and the current study aims to fill that gap. Cortellazzo et al. (2019) warn against a technocentric view of technology shaping human cognition and behaviour; instead, the AST points to human cognitive interpretive schemes, or the social construction of technology, which influence technology appropriation. Aligned with Cortellazzo et al. (2019), the current study argues that a technocentric view of AI is rather limited and that the AST perspective is promising since it considers the recursive relationship between technology and social action. The literature review revealed that the theoretical foundation of AI is limited, and several recent articles in high-level journals did not discuss a theory of AI (Chong et al., 2022; Duan et al., 2019), especially in relation to the management science around, and barriers to, the adoption of AI in decision-making. An interesting aspect of AST, highlighted by DeSanctis and Poole (1994), is the “spirit” of technology. Examples of dimensions that characterise the spirit of technology, through which it influences social structure, include decision process, leadership, efficiency, conflict management and atmosphere.

The decision process involves the type of decision process that is being promoted, such as consensus, rational, political or individualistic.

The leadership dimension involves the likelihood of leadership emerging when the technology is used or whether there is equal participation versus domination by some members.

Efficiency emphasises time compression, or whether interaction periods are shorter or longer than they would be if the technology were not used.

Conflict management is concerned with whether interactions are orderly or chaotic or emphasise conflict awareness or resolution.

Finally, the atmosphere dimension involves the relative formality or informality of the interaction, and whether the interaction is structured or unstructured.

As antecedents to the adoption of AI, the spirit of the technology could be determined by applying the properties of AI to the DeSanctis and Poole (1994) AST model, as follows:

The spirit of the AI technology has a high level of sophistication and thus does not allow broad participation; only highly skilled AI software engineers are able to interact with the computers to feed in the large amounts of data and teach the computer deep learning (Autor, 2015). As a result, decision-making is highly specialised and confined to computers, which in effect take over decision-making on behalf of humans.

This property of AI reflects the spirit of the technology, which leads to a lack of equal participation and domination by the decision makers who bring in the technology. Computers, in a way, then dominate, as humans cannot even understand or justify the decisions the AI technology comes up with (Newell and Marabelli, 2015).

The lack of openness about what goes on in the proverbial “black box” leads to distrust of AI (Chong et al., 2022) and an inability to resolve conflict when a computer’s decision is different from that which the human brain would produce (Pee et al., 2019).

The organisational environment within which AI operates is exclusive, and only the most talented software engineers interact with it, since most employees are not involved in, or are unable to understand, the AI technicalities and nuances (Ransbotham et al., 2018). In this regard, Shneiderman (2020) advocates for explainability, along with responsibility and fairness, as principles of a human-centred approach to AI. Given the literature review, the current study explored the following research question:

RQ1.

Which barriers affect the adoption of AI in an organisation, specifically for decision-making?

Method

The interpretivist paradigm was chosen for this research because limited research has been reported on the barriers that affect the adoption of AI in an organisation, specifically for decision-making. AI decision-making is very much an interplay between humans and machines and creates a unique social phenomenon. To understand and draw knowledge from a phenomenon, researchers must study the actors (humans and machines); thus, interpretivism was appropriate (Saunders and Lewis, 2012).

The choice to conduct a cross-sectional study was made partly because of the rapid advancement of AI and technologies like big data in recent years (Manyika et al., 2017).

Data collection and sampling

The research used qualitative data collection through semi-structured interviews. The research population comprised organisations that had adopted AI and those that had investigated AI in depth but declined to implement it for automated decision-making. The population also included start-up companies that sell AI products aimed at automation in organisations; they were included because they could also provide valuable insights into factors affecting adoption.

The researchers chose to focus on financial services and high-tech industries, which currently lead the adoption of AI. The sampling frame (a list of the entire population) was unknown to the researchers, and non-probability sampling (purposive sampling in particular) was therefore used. The first criterion used for the selection of interview candidates was that candidates came from the industries currently leading the adoption of AI.

The second sampling selection criterion was organisations that have either already adopted AI or have investigated, but declined to implement, AI. This was determined using a qualifying question in the discussion guide during the interview. Respondents could speak from their experience, which made the data more relevant to the study.

The third selection criterion was that of job title. As the research required candidates with a strategic view of the company, the ideal candidates were C-level executives, preferably the chief information officer (CIO) or chief technology officer (CTO). For smaller companies, which did not necessarily have the CIO or CTO role, the chief executive officers (CEOs) were also considered.

The fourth criterion was companies or individuals specifically focused on AI development and products. To achieve data saturation and a representative sample size, Guest et al. (2006) recommend a minimum sample size of 12 for structured or in-depth interviews. The researchers managed to complete 13 in-depth interviews.

An overview of the respondents is provided in Table 1, which gives a breakdown of position held, educational background, industry, company size and age, interview length and transcript word count. Respondents were drawn from two broad industries: financial services (including retail banking, investment banking, insurance and wealth management) and technology (including technology start-ups and telecommunications). The respondents’ positions ranged broadly from C-level executives (including CEO, CTO and CIO) to heads of department and one senior manager. A total of 613 min, or 10.2 h, was recorded during interviews, with an average interview length of 47 min. In total, 87,672 words were transcribed, with an average of 6,262 words per transcript. The researchers used a recommended transcription service.

The analysis was performed using ATLAS.ti, software specifically designed for qualitative data analysis. The researchers did two coding passes to ensure the research questions were indeed answered. Respondents’ words were taken at face value during coding, and no attempt was made to look for hidden meaning behind words or phrases.

Reliability and validity of the data

The researchers achieved code saturation as the rate of new codes that were identified declined, and by the end of the 13th interview, the researchers were satisfied that all themes and categories were covered and that conducting more interviews would add limited value to the research. Credibility of data was established by means of respondent validation, as most respondents either supported or contradicted the themes drawn from the literature.

Results

Seven major barriers to AI adoption emerged as the main themes of this study. Table 2 displays the seven themes and the code structure: first-order codes developed into second-order codes, which were then aggregated into the seven themes. Each theme is discussed below, with quotes offering evidence for the coding scheme.

The schema used for coding is demonstrated in Table 3. Codes that were defined deductively have no prefix; codes that were defined inductively are prefixed with “*”. Themes that the researchers found to have emerged as coding progressed, but that did not form part of the initial research questions, are prefixed with “explore” to draw additional insights. The codes were also checked for co-occurrence, and Table 3 gives the number of co-occurrences for each of the seven themes. The researchers also used ATLAS.ti to conduct a content analysis of the transcripts, and Figure 1 shows the resulting word cloud.
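
To clarify what a co-occurrence count means here, the sketch below counts how often two codes are applied to the same quotation. ATLAS.ti computes these counts natively; this is only an illustration of the calculation, with code names taken from Table 3 but quotation-to-code assignments invented for the example.

```python
from collections import Counter
from itertools import combinations

# Each set holds the codes applied to one quotation (invented assignments).
quotations = [
    {"*Lack of trust", "Lack of transparency"},
    {"*Lack of trust", "Lack of transparency", "Ethical considerations"},
    {"*The need for a human", "Ethical considerations"},
]

# Two codes co-occur whenever they appear on the same quotation.
cooccurrence = Counter()
for codes in quotations:
    for pair in combinations(sorted(codes), 2):
        cooccurrence[pair] += 1

for (a, b), n in cooccurrence.most_common():
    print(f"{a} | {b}: {n}")
```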

Theme 1: The need for social interactions and norms in the workplace can be a barrier to AI adoption.

From the interviews, it emerged that the need for social interactions and social dynamics, such as team motivation and leader-follower bonds, can be limiting factors in the adoption of AI:

R6: “How do you have a one on one with a machine? […] I have a relationship with my boss, I need to bounce off some ideas it is going to be really difficult sitting in front of a machine spitting out options based on emotion risk facts”.

AI algorithms would also be unable to make correct decisions given more unquantifiable elements, like human empathy and emotion, as they would not have all the parameters and data:

R8: “I think if you bring AI into the equation that’s not going to motivate them, you know why do you come to work every day or why is your leader your leader”.

Respondent 8 thus also supports the proposition, emphasising that team dynamics and the employee-leader bond could be severely impacted by having AI make team decisions:

R12: “I don’t personally see us working for digital bosses. I see us working for human beings that are able, that are strategic positioning, that is executed by digital decision-making engines”.

Respondent 12 emphasises that, in a human social environment, AI would be better suited to augment decision-making rather than replace a human boss.

Table 3 shows the links discussed above. In a human social work environment, the need for human decision-making is high, as elements like empathy and emotion are hard for AI to interpret, given current technology. Respondent 3 emphasises that people might struggle to trust and understand non-human decision makers:

R3: “[…] it comes down to this whole thing of trust, can you really trust a technology that does not have a heart, mind, soul […] to actually do things on your behalf […]”

The social setting could turn negative or counterproductive if human workers in the team do not view the AI decision maker as part of the team.

Theme 2: Regulatory and liability concerns can be a barrier to AI adoption.

The study reveals that restrictive regulatory environments could limit the adoption of AI. Governments could be forced to introduce taxes to prevent increased unemployment, and in struggling economies, there is the ethical dilemma of organisations having to choose between increased revenue from the use of AI and increased unemployment with its negative social impact on communities:

R7: “[…] so you may see an increase in unemployment in the short term. Provided the labour law and the actual government of this country can either regulate that industry, you are going to start seeing job losses, especially when things are automated”.

Respondent 7 raises the very relevant spectre of job losses and unemployment due to AI implementation:

R10: “[…] it’s going to be the approach for most governments […] what it really means is that the more AI you employ the more tax you are going to pay and that tax is going to kind of fund people who are unemployed”.

Respondent 10 suggests that one avenue the government could pursue is that of taxes. These taxes could then be redirected towards communities impacted by automation or possibly used to reskill people. Another theme to emerge was that of ethics:

R6: “[…] our corporates have a solution that can replace an entire call centre in their business, but because it is morally incorrect to now fire or make 50 0000 people redundant is not right until we have an alternative for these people”.

The two most frequently co-occurring codes were economic impact and ethical considerations. This supports the discussion above: respondents talk most about these issues in combination with regulation:

R10: “[…] if you employ tens of thousands of people and you half that over five years for example […] I think the government is going to make you pay for that”.

Respondent 10 points out that the impact on the South African market could be significant and that the government could penalise businesses that use AI.

Theme 3: Environments requiring creativity, spontaneity and intuition can be barriers to AI adoption.

Respondents believe that creative working environments could be a limiting factor in the adoption of AI. Another theme was that organisations might need to consider reskilling employees more towards creative and people-based skills rather than data processing:

R7: “[…] the good thing about humans we understand multiple debates. We can literally take three data points and extrapolate it […] we can extrapolate reasonable judgment very quickly […] it is really around you know can a machine actually be creative without having to learn how to be and in my mind I don’t think so…”

Respondent 7 also notes that humans are capable of thinking up new knowledge even if there is no previous base from which to start. A human developer is still required to write the code that teaches the machine. Respondent 12 echoes this view:

R12: “[…] human beings I think, can come up with brand new thinking […] whereas AI, which I think is very powerful, it, it can replicate. It can make the right decision based on, well based on the data that is available in the moment […]”

R13: “I do think that the people are going to suffer, the people who are not creative okay and the kind of people who do routine based and can’t do anything but routine jobs […]”

Respondent 13 suggests organisations might need to consider reskilling employees who currently deal with routine tasks as those individuals will tend to be replaced by automation technologies like AI. Table 3 shows the co-occurrence or link between creative work environments and the need for human beings.

Theme 4: The need for transparency and trust in decision-making can be a barrier to AI adoption.

It emerged that a lack of transparency could lead to a lack of trust and act as a limiting factor in the adoption of AI. As a result, the theme of strong quality control measures and AI algorithm audits to prevent discrimination emerged. Another theme was that, in some instances, organisations have a right or are even bound by legislation not to disclose the reasoning behind decisions:

R1: “[…] it is not always that easy to explain why AI makes the type of decision that it makes because it has taken into account so many different things that unpacking it is very difficult […]”

R2: “[…] My experience is that these high-end systems are not possible to represent to somebody who understands 3 + 1 dimensions. I haven’t succeeded, not even convincing myself […]”

Respondents 1 and 2 emphasise that, because of the way AI works, it can be very difficult to explain how it reached a particular decision. It was also evident from the respondents that the lack of transparency leads to a lack of trust and, in the case of decisions with negative implications, could even lead to actual resistance and hostility towards AI:

R13: “[…] AI is not politically correct, […] for example in your job you are a female and now AI must decide who must get the job and it might work out the scores is not as much as a boy’s […] but how do you now tell them they don’t get the job […]”

Respondent 13 raises the important point that, due to a lack of transparency, AI could make decisions detrimental to a company.

Respondent 11 notes that, in some scenarios, organisations are not allowed to tell customers how decisions were made because of legislation or in an attempt to protect their competitive intellectual property.

The co-occurrence in Table 3 shows that a lack of trust could result from the lack of transparency. Respondents were concerned that a lack of transparency could potentially lead to unethical behaviour. They talked about the need for corrective or audit steps in the process of AI implementation.

Theme 5: Dynamic and constantly changing business environments can be barriers to AI adoption.

The study revealed that dynamic or fast-changing business environments could limit the adoption of AI. It emerged that current-day AI is mostly designed to solve very specific problems and is trained on well-prepared data sets. In a fast-paced, dynamic environment, AI could potentially be unaware of all the relevant data and variables needed for accurate decisions. The occurrence of random events could also render AI decisions inaccurate:

R4: “At the current point in time AI is quite specific…they train the AI models on games and on Space Invader and all that kind of thing and then they feed it some real problem that has got nothing to do with it, and it’s able to solve it, like categorising cat in a You Tube video”.

Respondent 4 notes that AI is not yet capable of adapting to very dynamic environments and part of the implementation is to train AI on specific data relating to the problem. Respondent 4 also highlights that advances are being made in this field through, for example, the use of deep learning algorithms:

R1: “that is a classic case of where you still need that human intuition, I think at this point in time AI is much more operational in nature than strategic […]”

Respondent 1 believes humans still maintain an advantage in dynamic environments because of their ability to think on their feet. Factors like background, experience and intuition give humans a far bigger advantage in these kinds of environments:

R6: “[…] what happens when someone comes and completely disrupts the market, they come like Uber and they introduce a product that completely destroys the taxi industry, no machine would have predicted that. You need to almost have to re-programme that machine”

Respondent 6 notes that random events within the market can occur at any time. AI could not make correct decisions if it were unaware of or did not have access to all the relevant data that could influence the decision.

Table 3, with the co-occurrences, shows that dynamic environments link to the capability of AI to adapt and learn. However, the majority of respondents said there was still a need for humans in these kinds of fluid environments, as AI would not always be aware of all the variables that influence decisions.

Theme 6: The loss of control and power for current decision makers can be barriers to AI adoption.

The respondents believe the loss of control for current decision makers could limit the adoption of AI. It emerged that organisations need strong change management and organisational transformation practices, combined with programmes to reskill affected employees, to avoid resistance and hostility towards implementing AI:

R3: “[…] it comes down to the change management, you actually need people to be aware of it, understand it and buy into the vision. So nobody is meant to be happy with a situation where an AI or a bot or whatever is going to come in and take their job […]”

R4: “I think they would try and sabotage it unless they are getting some different role that facilitates the AI role […] but if it is like opening up a completely different industry, then why not?”

Respondents 3 and 4 note that if people could potentially lose jobs because of AI, they would try to prevent its implementation. Two themes to emerge during the interviews were the importance of change management and the reskilling of those affected.

Table 3 shows that respondents also talk about human resistance when discussing loss of control. Most people dislike being out of control and not having any say as machines make decisions that impact their lives.

The second theme was the reskilling of people affected by the change to AI. It is apparent that when people are reskilled and given alternative jobs, even if these involve augmenting and maintaining AI implementations, they will accept AI.

Theme 7: The need to be ethical and non-discriminatory can be a barrier to AI adoption.

The interviews reveal that the need to be ethical and non-discriminatory could be a limiting factor or barrier to adopting AI.

Respondent 4 raises the ethical question of what would happen if AI had to make a life-or-death decision involving human beings, highlighting the case of self-driving cars. How would the car know which choice to make, and on what values would that decision be based?

R4: “an example of like a self-driving car, and it’s going to have an accident and it is trying to figure out whether it is better to kill the old lady on the left, or the three children on the right […] that is again where humans out-perform, we have this human nature”.

The second area of ethical concern involves AI itself learning unethical behaviour, which could have severe negative effects on the company and the community in which it operates:

R3: “[…] so it comes back to this whole situation of garbage in, garbage out […] if a bot looks at human […], it’s going to pick it up and it’s going to learn from that, which is why we need to be very cognisant about the parameters in which we let these bots learn and operate”.

The final area of ethical concern is whether it is ethical to replace humans with AI when those affected have no other means of earning an income. Respondent 11 notes that AI goes beyond normal automation to also automate the work of knowledge or skilled workers, whereas previously, it focused more on replacing manual, repetitive, physical work:

R11: “[…] it might become an ethical conversation as there is some potential pitfalls in the adoption of AI […] use of technology to automate work, automating work has consequences on employment and social impact”.

Table 3 shows that the ethical discussions revolved around being unethical or discriminatory. The theme that emerged from all respondents was that, although the progress of the technology cannot be stopped, the community of AI practitioners has a social and ethical responsibility to ensure people are not, directly or indirectly, discriminated against.

Respondent 6 said that current-day algorithms are programmed to do what humans want them to do and that organisations need to control and monitor their AI implementations. It is critical for the AI community to ensure that AI does not discriminate against humans.

The researchers also used ATLAS.ti to conduct a content analysis of the transcripts. The results of this keyword extraction are shown as a word cloud in Figure 1: the words respondents used most frequently are larger and placed at the centre of the figure, whereas less frequently used words are smaller and sit on the periphery. The word cloud was then analysed by considering the frequency with which each word was used by respondents, together with a deeper analysis of the context within which it was used.

This analysis revealed that the most frequently used word over all the interviews was “think”, mentioned 1,239 times. In some instances, the word was used to indicate what the respondent was thinking, for example, “I do think it’s very important that we parameterize or calibrate whatever you want the machine to” (R1) and “I think those things need to be in place” (R3). In other instances, the respondent would indicate a general sense of what others would think, for example, “How often do you think you need to adapt or tweak the algorithm and what do you think the advantage of AI is over like normal expert systems?” (R1) or “It is going to happen whether we think it is the right thing or not” (R7).

The second most used word was “people”, with 374 mentions. The context of this word included comments, such as Respondent 12: “The highest customer satisfaction is because of the human interaction and people know that these humans are the face of the company” and Respondent 11: “Imagine if it was an AI who made that decision, people wouldn’t like that” and Respondent 8: “It is how we as people use this tool”. The word “human” was also one of the five most used words (139 mentions), and the reference to human and people could be perceived in the same light. For example, looking deeper into the context of the use of the word “human” revealed the following: Respondent 5, “If a human can do it then ultimately a computer will be able to do it” and Respondent 8, “I don’t think any human would be happy working with the machine or for a machine”.

The word “data” was used 346 times. Respondent 1 talked about big data, “We now have this thing called big data and all the data that gets captured is available […]”, and Respondent 6 used the word as a prerequisite for AI to work, “Machines can only make better decisions if they have the right volume of data or very good dimensions in the data base”. Respondent 9 mentioned, “[…] ironically with all the amount of data, we still have glitches in data […]”

The word “decision” was also mentioned 265 times, and for Respondent 4, there was a qualifier when discussing decision-making, “It depends on what the kind of decision is […] human intuition can’t be replaced in certain areas”. Respondent 6 also noted, “There are different levels of decision making”.

The word “machine” was used 248 times, and it seemed that AI and machine learning were used interchangeably. Respondent 7 noted that “Machine learning can’t replace human feeling or understanding of ethics”. Respondent 10 also referred to ethics, “Can a machine make an ethical decision?” The content analysis through the keyword extraction confirmed the patterns identified through the manual thematic analysis.
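
For readers unfamiliar with keyword extraction, the sketch below shows the kind of frequency count that underlies a word cloud. It is a simplified stand-in rather than the ATLAS.ti implementation: the stop-word list is minimal and the sample snippets are invented, whereas the actual analysis ran over the 13 full transcripts.

```python
from collections import Counter
import re

# Minimal stop-word list; a real extraction filters far more words.
STOP_WORDS = {"i", "the", "a", "an", "and", "or", "of", "to", "is", "it",
              "in", "that", "we", "you"}

def keyword_frequencies(transcripts):
    counts = Counter()
    for text in transcripts:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in STOP_WORDS)
    return counts

# Hypothetical snippets standing in for the interview transcripts.
sample = ["I think people need data to decide",
          "I think the machine needs data to decide"]
print(keyword_frequencies(sample).most_common(3))
```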

We applied AI to compare the manual human thematic coding of the 13 transcripts with computer-generated top coding using ATLAS.ti AI functions. This triangulation process offered insights: Figure 2 indicates that the most prominent codes included technology, uncertainty, ethics and automation.

The AI-generated top co-occurring codes were technology and uncertainty, as well as technology and ethics. Figure 3 illustrates these co-occurring codes.

Compared to the AI-generated coding, the human manual thematic coding offered more information about the context within which a code was applied. Nonetheless, the comparison highlighted the uncertainty associated with technology and pointed to the importance of companies offering employees more information on the impact of AI.

Discussion

As the theoretical model developed by DeSanctis and Poole (1994) shows, social dynamics, like team motivation and leader-follower bonds, can be limiting factors in the adoption of AI. Our results show that, in social work environments, AI could potentially play more of an augmentation role to human decision makers, as workers identify better with human managers. The seven themes that emerged in the current study reveal that senior management has concerns around AI adoption. These revolve around the consequences for the specific organisation, how teams are led, how decision-making happens in the proverbial black box and the impact of AI on society, such as unemployment.

Figure 4 illustrates the application of the theoretical model developed by DeSanctis and Poole (1994) to the adoption of AI. We show that the outcomes of AI can be categorised into consequences for the organisation and for society at large. Respondents commented on the potential negative impact on employment, which could be a major barrier to the adoption of AI.

The respondents mentioned that governments could be forced to introduce taxes to prevent increased unemployment and that, in struggling economies like South Africa, there is an ethical dilemma: organisations would have to choose between increased revenue from the use of AI and increased unemployment, with its negative social impact on communities. The work of Davenport and Harris (2005) is thus supported by our research. Respondents highlighted that people would try to hamper AI implementation if they thought they would lose their jobs.

Human creativity is unique, and although machines can mimic creativity, this is merely a recombination of what humans have created; human decision-making remains superior in unique scenarios requiring creativity (Schoemaker and Tetlock, 2017). Parry et al. (2016) also note that AI could prioritise quantitative over the more qualitative decision-making elements. Respondents advised that organisations might need to consider reskilling employees towards creative and people-based, rather than data-processing, skills.

The black box effect can make the rationale behind AI decisions difficult to explain, leading to deep mistrust and even hostility if people are negatively impacted. This finding is in line with the results of Pee et al. (2019) and Tambe et al. (2019). As a result, respondents noted the need for strong quality control measures and audits of AI algorithms to prevent discrimination. It emerged that, in some instances, organisations had the right, or were even bound by legislation, not to disclose the reasoning behind decisions, which aligns with the findings of Parry et al. (2016).

Respondents pointed out that AI today is mostly designed to solve very specific problems and is trained on well-prepared data sets, which corresponds with the earlier findings of Brynjolfsson and McAfee (2012). In fast-paced dynamic environments, AI could potentially be unaware of all relevant data and variables needed for accurate decisions.

Organisations need strong change management and organisational transformation practices, combined with the reskilling of affected employees to ensure successful adoption and avoid resistance and hostility towards implementing AI (Ransbotham et al., 2018).

Three main areas of ethical concerns regarding the adoption of AI emerged: the impact of AI making life-or-death decisions and on what principles these would be based; potential discrimination against certain groups if AI was trained on biased data or deliberately coded that way; and whether it would be ethically responsible to replace humans with machines if humans had no other means of supporting themselves. These concerns are supported by the findings of several scholars (Tambe et al., 2019).

While the current literature emphasises that large amounts of data, or big data, are required for AI, the respondents in the current study did not discuss this antecedent to AI adoption in particular. The huge capital investment required for Web space, AI software programming, server space and talented AI programmers was also not discussed. Respondents may have considered these aspects but chosen to focus on the ethical and human aspects, or they may have perceived these more technology-driven barriers as a given. Whatever the reason, the fact that respondents did not focus on these technology-driven barriers illustrates that the AST model, which emphasises the interaction between technology and social structure dimensions, is relevant to the current study. In Figure 4, we demonstrate the application of the AST to the findings of the current study and make recommendations based on the literature and those findings. The following section focuses on these implications for management.

Practical implications

Firstly, and most importantly, managers should be aware that AI is not a silver bullet. AI algorithms today are designed to solve very specific problems or to automate specific tasks, and they require data very specific to the problem domain to achieve a high rate of accuracy. General AI that is self-aware does not exist yet and will not exist in the near future. That being said, AI algorithms today can be trained on data to perform such tasks as well as, or better than, humans.

Managers considering adopting AI into their organisations need to make sure there is a sufficient need and willingness to adopt AI. Respondents clearly highlighted that, without such a need and sufficient top-level management support, AI implementations fail. Managers also need to be aware that AI implementations are costly, as the human capital required for a successful implementation is expensive.

Managers should be aware that a successful implementation requires a huge amount of data. AI feeds on and learns from structured data, and organisations that have a data management policy and a data-driven decision-making culture will be more successful than those that do not (Ransbotham et al., 2020).

Managers thinking of adopting AI to replace human workers need to be aware of the regulatory environment in which they operate. In a South African context, where the unemployment rate is almost 30%, introducing automation technologies could increase unemployment. One of the respondents raised this concern, commenting that their organisation had the technology to replace a whole call centre using AI, but the impact on the local community would have been devastating; as a result, the technology was shelved and never implemented.

Managers must be careful that the implementation of AI is done under supervised conditions and, if possible, follows the quality control procedures of a normal software development lifecycle. When companies are thinking of using AI, they need to be sure that the data used to train the AI is not skewed or biased, as this could have significant negative impacts on the organisation. Current-day technology cannot distinguish between right and wrong from an ethical and moral viewpoint. In this regard, Shneiderman (2020) advises designers to produce human-centred AI by integrating AI algorithms with user interface designs in ways that amplify, augment, enhance and empower people.

Organisations should augment management teams with AI bots to balance human and machine decision-making. Respondents highlighted that AI could make better, more consistent and less biased decisions than humans can; however, there was a fear of loss of control among current decision makers. It is, therefore, beneficial to combine AI decision-making with human decision-making at higher levels of the organisation.

Theoretical implications

Existing AI literature is inconclusive on which theoretical models to use in analysing the technology’s adoption. AST, therefore, shows promise as an important lens for analysing the barriers to adopting AI for decision-making in organisations. The findings of this study show that a technocentric view of AI shaping human cognition and behaviour is limited; instead, the AST of DeSanctis and Poole (1994) points to human cognitive interpretive schemes, or the social construction of technology, which influence technology appropriation. The main premise of the current study is that AI and social systems influence each other reciprocally. Aligned with the work of Cortellazzo et al. (2019), we point, therefore, to the recursive relationship between AI and the organisational setting.

Table 2, on the seven themes derived from the thematic analysis of the current study, contains an extra column to show the link between the coding structure and the dimensions that characterise the spirit of AI technology. The first theme, on the social interactions and norms needed in the workplace as a barrier to AI adoption, relates to the atmosphere dimension: impersonal, formal and structured interactions with AI characterised the technology adoption. AI requires a high level of sophistication and thus does not allow broad participation; only highly skilled AI software engineers are able to interact with the technology (Autor, 2015).

The second theme, on regulatory and liability concerns as a barrier to the adoption of AI, relates to the AST conflict management dimension. Resolving the conflict between AI and regulation, for example around automation reducing jobs, is difficult to achieve. The lack of openness about what goes on in the proverbial “black box” leads to distrust of AI and an inability to resolve conflict.

The third theme, on environments requiring creativity, spontaneity and intuition as a barrier to AI adoption, relates to the efficiency dimension of AST: time is compressed because of the amount of data that can be processed quickly, but creativity is lacking.

The fourth theme as a barrier to AI adoption is the transparency and trust needed in decision-making. The AST dimension most related to this theme is the decision-making process dimension: the decision-making process promoted through the use of AI is one dominated by AI, with automation that is one-sided in its decision-making.

The fifth theme as a barrier to AI adoption was dynamic and constantly changing business environments. This theme relates to the AST efficiency dimension, since time is compressed with AI, which makes it efficient; however, AI could show limited adaptation to dynamic environments.

The sixth theme revealed that one of the barriers to AI adoption is the loss of control and power for current decision makers. This theme relates to the AST leadership dimension as AI operates in an exclusive manner, where only the most talented software engineers interact with it due to AI technicalities and nuances.

The final theme in the current study was the need to be ethical and non-discriminatory, which could pose a barrier to AI adoption. The AST dimension most relevant to this theme was the decision-making process dimension because there is a danger of automated AI decisions being unethical and discriminatory due to the lack of transparency.

Limitations and recommendations for future research

The sample was chosen from the industries that scored highest in AI adoption, namely, financial services and high-tech (including telecommunications). The industries that scored lowest could also be researched to reveal additional barriers. The study was an exploratory, qualitative study using semi-structured interviews. The small sample size of 13 respondents could be limiting in terms of reaching a comprehensive conclusion. A quantitative study using the barriers identified in this study could be conducted with a larger sample to reach a more robust statistical conclusion.

The study focused on senior management. A more comprehensive study could be conducted into multiple layers of employees to see if there are differences in perceptions about the factors affecting AI adoption.

Conclusion

This study uniquely applied the AST model to AI adoption. We thus contributed by extending the AST model, illustrating the dimensions relevant to AI implementations and making recommendations to overcome barriers to AI adoption.

Figures

Figure 1. Word cloud of keyword extraction based on content analysis

Figure 2. AI-generated top applied codes from 929 codes with ATLAS.ti

Figure 3. AI-generated top co-occurring codes from 929 codes with ATLAS.ti

Figure 4. Conceptual model based on findings of the current study

Table 1. Sample data with respondents’ position, education and companies’ age, size and industry

| No. | Position in current company | Education background | Industry of company | Company size (employees) | Company age (years) | Interview length (min) | Word count |
|-----|-----------------------------|----------------------|---------------------|--------------------------|---------------------|------------------------|------------|
| 1 | CEO | MBA | Financial | 11,400 | 105 | 53.33 | 7,109 |
| 2 | CTO | PhD Electrical Engineering | Technology start-up | 50 | 7 | 30.07 | 3,273 |
| 3 | Head of analytics department | Finance (Hons) | Financial | 10,100 | 89 | 43.04 | 7,260 |
| 4 | CTO | MSc Biomedical/Medical Engineering | Technology start-up | 48 | 10 | 32.31 | 4,109 |
| 5 | CEO | BCom Financial Management | Technology start-up | 11 | 8 | 66.41 | 8,509 |
| 6 | Head of AI department | BSc Honours Computer Science | Financial | 81,000 | 333 | 40.47 | 4,996 |
| 7 | Head of digital online and self-service | BA FA Art | Telecoms | 7,500 | 30 | 42.13 | 6,137 |
| 8 | Senior manager | Postgraduate Marketing | Financial | 1,200 | 21 | 48.56 | 5,734 |
| 9 | Executive | PhD Artificial Intelligence | Financial | 4,200 | 185 | 37.58 | 5,366 |
| 10 | General manager | MBA | Technology start-up | 81 | 8 | 33.10 | 5,719 |
| 11 | CEO | BCom Accounting | Technology start-up | <25 | 8 | 73.13 | 7,321 |
| 12 | Chief architect | PhD Ethics | Financial | 85,000 | 158 | 66.41 | 8,554 |
| 13 | CEO | MA in Industrial Psychology | Technology start-up | <25 | 12 | 47.28 | 7,323 |
| Average | | | | 4,722 | | | 6,262 |
| Total | | | | 61,382 | | | 87,672 |

Source: Created by authors

Table 2. Thematic analysis results showing themes derived from code structure

| First- and second-order codes | Aggregated theme | AST dimension (DeSanctis and Poole, 1994) |
|---|---|---|
| The need for a human in interaction; lack of trust; human resistance; human augmentation; reskilling of employees; mitigating agency problem; leader-follower bonds; team motivation | (1) Social interactions and norms needed in the workplace | Atmosphere |
| An increase in unemployment in the short term; the more AI you use, the more tax you are going to pay to fund unemployment; morally incorrect to contribute to unemployment; automation reducing jobs; restrictive regulatory environments | (2) Regulatory and liability concerns | Conflict management |
| The human mind can extrapolate reasonable judgement and understand multiple debates; a machine cannot be creative; AI makes decisions based on existing data; employees conducting routine jobs must be reskilled; humans are creative, and machines are not; AI relies on existing data | (3) Creativity, spontaneity and intuition required in environments | Efficiency |
| Lack of trust due to the unknown; AI algorithm audits required; prevent discrimination; disclose reasoning behind decisions; high-end systems cannot be explained; protection of intellectual property; reputation damage; difficult to explain how an AI decision is made; lack of transparency | (4) Transparency and trust needed in decision-making | Decision-making process |
| AI is designed to solve very specific problems; random events can render AI decisions inaccurate; advances are being made, like deep learning algorithms; AI does not have access to all relevant data that could influence a decision; AI cannot adapt to dynamic environments; humans have an advantage in changing environments | (5) Dynamic and constantly changing business environments | Efficiency |
| Hostility against implementing AI; employees will try to sabotage it; reskilling employees affected by automation; nobody is happy with AI taking over jobs; need for strong change management and organisational transformation practices; human resistance to loss of control; not having any say when AI takes over decisions that impact their lives; not being involved or having power over decisions; reskilling required of those impacted by AI | (6) The loss of control and power for current decision makers | Leadership |
| Decisions must be non-discriminatory; AI can make life-or-death decisions, like self-driving cars; AI learning unethical behaviour; AI automates the work of knowledge and skilled workers; AI practitioners must ensure people are not discriminated against; fear of losing skilled worker jobs | (7) The need to be ethical and non-discriminatory | Decision-making process |

Source: Created by authors

Table 3. Co-occurrences for specific codes

| No. | Theme code | Co-occurring code | Co-occurrence count |
|---|---|---|---|
| 1 | Human social dynamics | *The need for a human | 41 |
| | | *Human augmentation | 18 |
| | | *Human resistance | 18 |
| | | *Lack of trust | 15 |
| | | Mitigate agency problem | 14 |
| | | *Reskilling of employees | 11 |
| | | Explore: implementation | 11 |
| | | *Better customer service | 10 |
| 2 | Restrictive regulation | *Economic impact | 27 |
| | | Ethical considerations | 11 |
| | | Explore: the fourth industrial revolution | 10 |
| | | *The need for a human | 8 |
| | | *Human resistance | 6 |
| | | Being unethical or discriminative | 6 |
| | | Lack of transparency | 6 |
| 3 | Creative work environment | *The need for a human | 19 |
| | | *Reskilling of employees | 5 |
| | | *Culture of innovation | 4 |
| | | *Human intuition | 4 |
| | | *Human resistance | 4 |
| | | Human social dynamics | 4 |
| 4 | Lack of transparency | Being unethical or discriminative | 24 |
| | | *Lack of trust | 21 |
| | | Ethical considerations | 11 |
| | | *Human resistance | 9 |
| | | Explore: implementation | 9 |
| | | *The need for a human | 8 |
| | | Restrictive regulation | 6 |
| 5 | Dynamic business environment | Adaptability through training | 22 |
| | | *The need for a human | 13 |
| | | *Decision complexity and ambiguity | 8 |
| | | Explore: implementation | 8 |
| | | *Economic impact | 6 |
| | | *Human augmentation | 6 |
| | | *Remaining competitive | 6 |
| 6 | Loss of control | *Human resistance | 20 |
| | | *Reskilling of employees | 11 |
| | | Mitigate agency problem | 11 |
| | | *Economic impact | 10 |
| | | Explore: the fourth industrial revolution | 10 |
| | | *The need for a human | 9 |
| | | Explore: implementation | 6 |
| | | *Human augmentation | 5 |
| | | *Lack of trust | 5 |
| 7 | Ethical considerations | Being unethical or discriminative | 58 |
| | | *The need for a human | 43 |
| | | *Lack of trust | 17 |
| | | *Human resistance | 16 |
| | | *Economic impact | 15 |
| | | Explore: implementation | 13 |
| | | Lack of transparency | 11 |
| | | Restrictive regulation | 11 |
| | | *Data quality | 10 |

Note: Codes that were defined deductively have no prefix; codes that were defined inductively have a “*” as a prefix

Source: Created by authors

References

Autor, D.H. (2015), “Why are there still so many jobs? The history and future of workplace automation”, Journal of Economic Perspectives, Vol. 29 No. 3, pp. 3-30.

Autor, D.H. and Dorn, D. (2013), “The growth of low-skill service jobs and the polarization of the US labor market”, American Economic Review, Vol. 103 No. 5, pp. 1553-1597.

Brynjolfsson, E. and McAfee, A. (2012), “Winning the race with ever-smarter machines”, MIT Sloan Management Review, Vol. 53 No. 2, pp. 53-60.

Chong, L., Zhang, G., Goucher-Lambert, K., Kotovsky, K. and Cagan, J. (2022), “Human confidence in artificial intelligence and in themselves: the evolution and impact of confidence on adoption of AI advice”, Computers in Human Behavior, Vol. 127, p. 107018, doi: 10.1016/j.chb.2021.107018.

Cortellazzo, L., Bruni, E. and Zampieri, R. (2019), “The role of leadership in a digitalized world: a review”, Frontiers in Psychology, Vol. 10 No. 1938, pp. 1-21, doi: 10.3389/fpsyg.2019.01938.

Davenport, T.H. and Harris, J.G. (2005), “Automated decision making comes of age”, MIT Sloan Management Review, Vol. 46 No. 4, pp. 83-89.

Davenport, T.H. and Kirby, J. (2016), “Just how smart are smart machines?”, MIT Sloan Management Review, Vol. 57 No. 3, pp. 21-25.

DeSanctis, G. and Poole, M.S. (1994), “Capturing the complexity in advanced technology use: Adaptive structuration theory”, Organization Science, Vol. 5 No. 2, pp. 121-147.

Detjen, H., Faltaous, S., Pfleging, B., Geisler, S. and Schneegass, S. (2021), “How to increase automated vehicles’ acceptance through in-vehicle interaction design: a review”, International Journal of Human–Computer Interaction, Vol. 37 No. 4, pp. 308-330.

Duan, Y., Edwards, J.S. and Dwivedi, Y.K. (2019), “Artificial intelligence for decision making in the era of big data – evolution, challenges and research agenda”, International Journal of Information Management, Vol. 48, pp. 63-71.

Frey, C.B. and Osborne, M.A. (2013), “The future of employment: how susceptible are jobs to computerisation?”, Technological Forecasting and Social Change, Vol. 114, pp. 254-280.

Guest, G., Bunce, A. and Johnson, L. (2006), “How many interviews are enough? An experiment with data saturation and variability”, Field Methods, Vol. 18 No. 1, pp. 59-82.

Kshetri, N. (2021), “Evolving uses of artificial intelligence in human resource management in emerging economies in the global South: some preliminary evidence”, Management Research Review, Vol. 44 No. 7, pp. 970-990.

Leopold, T.A., Zahidi, S. and Ratcheva, V. (2016), “Global challenge insight report: the future of jobs”, World Economic Forum, available at: www3.weforum.org/docs/WEF_Future_of_Jobs.pdf

Loebbecke, C. and Picot, A. (2015), “Reflections on societal and business model transformation arising from digitization and big data analytics: a research agenda”, The Journal of Strategic Information Systems, Vol. 24 No. 3, pp. 149-157.

McAfee, A. and Brynjolfsson, E. (2014), The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, WW Norton and Company, New York, NY.

Manyika, J., Chui, M., Miremadi, M., Bughin, J., George, K., Willmott, P. and Dewhurst, M. (2017), “A future that works: automation, employment and productivity”, McKinsey Global Institute, available at: www.mckinsey.com/mgi

Moser, C., den Hond, F. and Lindebaum, D. (2021), “Morality in the age of artificially intelligent algorithms”, Academy of Management Learning and Education, Vol. 21 No. 1, published online 7 April 2021, doi: 10.5465/amle.2020.0287.

Newell, S. and Marabelli, M. (2015), “Strategic opportunities (and challenges) of algorithmic decision making: a call for action on the long-term social effects of ‘datification’”, The Journal of Strategic Information Systems, Vol. 24 No. 1, pp. 3-14.

Nilsson, N.J. (1998), Artificial Intelligence: A New Synthesis, Morgan Kaufmann Publishers, San Francisco, CA.

Parry, K., Cohen, M. and Bhattacharya, S. (2016), “Rise of the machines: a critical consideration of automated leadership decision making in organizations”, Group and Organization Management, Vol. 41 No. 5, pp. 571-594.

Paschen, J., Wilson, M. and Ferreira, J.J. (2020), “Collaborative intelligence: how human and artificial intelligence create value along the B2B sales funnel”, Business Horizons, Vol. 63 No. 3, pp. 403-414.

Pee, L.G., Pan, S.L. and Cui, L. (2019), “Artificial intelligence in healthcare robots: a social informatics study of knowledge embodiment”, Journal of the Association for Information Science and Technology, Vol. 70 No. 4, pp. 351-369.

Pierani, M. and Bruggeman, E. (2023), “Are AI-based programmes like ChatGPT bringing useful change or unknown chaos?”, Euronews, 13 March 2023, available at: www.euronews.com/2023/03/13/are-ai-based-programmes-like-chatgpt-bringing-useful-change-or-unknown-chaos (accessed 16 March 2023)

Ransbotham, S., Gerbert, P., Reeves, M., Kiron, D. and Spira, M. (2018), “Artificial intelligence in business gets real”, MIT Sloan Management Review and Boston Consulting Group, available at: https://sloanreview.mit.edu/projects/artificial-intelligence-in-business-gets-real/

Ransbotham, S., Khodabandeh, S., Kiron, D., Candelon, F. and LaFountain, B. (2020), “Expanding AI's impact with organizational learning”, MIT Sloan Management Review and Boston Consulting Group, available at: https://sloanreview.mit.edu/projects/expanding-ais-impact-with-organizational-learning/

Rogers, E. (1995), Diffusion of Innovations: Modifications of a Model for Telecommunications, Simon and Schuster, New York, NY.

Saunders, M. and Lewis, P. (2012), Doing Research in Business and Management, Pearson, Edinburgh Gate.

Schoemaker, P.J. and Tetlock, P.E. (2017), “Building a more intelligent enterprise”, MIT Sloan Management Review, Vol. 58, pp. 28-37.

Shneiderman, B. (2020), “Human-Centered artificial intelligence: reliable, safe and trustworthy”, International Journal of Human–Computer Interaction, Vol. 36 No. 6, pp. 495-504.

Tambe, P., Cappelli, P. and Yakubovich, V. (2019), “Artificial intelligence in human resources management: challenges and a path forward”, California Management Review, Vol. 61 No. 4, pp. 15-42.

The Economist (2016), “The return of the machinery question”, The Economist, available at: www.economist.com/news/special-report/21700761-after-many-false-starts-artificial-intelligence-has-taken-will-it-cause-mass

Further reading

ATLAS.ti (2023), “AI generated top codes and AI generated co-occurring codes from 13 interview transcripts”, Version 23.1.1.0 with AI features.

Kirilenko, A.A. and Lo, A.W. (2013), “Moore’s law versus Murphy’s law: algorithmic trading and its discontents”, Journal of Economic Perspectives, Vol. 27 No. 2, pp. 51-72.

Larson, L. and DeChurch, L.A. (2020), “Leading teams in the digital age: four perspectives on technology and what they mean for leading teams”, The Leadership Quarterly, Vol. 31 No. 1, p. 101377.

Corresponding author

Caren Brenda Scheepers can be contacted at: scheepersc@gibs.co.za
