KPMG Global Study Confirms Trusting AI Remains A Major Employee Confidence Gap

Artificial intelligence (AI) is increasingly used to invent new products and services, enhance productivity, improve decision making and reduce costs, including automating administrative tasks and improving cyber security.

However, integrating AI into the everyday workplace still creates challenges, especially without a clear policy and communication process to ensure employees trust the technology's methods, approaches, and the reasons for deploying new practices and solutions.

AI poses unique challenges in employees' minds, chief among them the big question: can they trust the technology?

Considerations in strategic change management and internal/external policy alignment shape how employees view the risks and benefits of AI. It is critical that leadership communicates what is expected for AI to be trusted and puts in place clear quality and risk management practices to increase employee trust.

Shedding light on these important questions, KPMG and The University of Queensland recently released a global study of over 17,000 people in 17 countries: Australia, Brazil, Canada, China, Estonia, Finland, France, Germany, India, Israel, Japan, the Netherlands, Singapore, South Africa, South Korea, the United Kingdom, and the United States. These countries are leaders in AI activity and readiness within their global regions. Unfortunately, the report left out AI innovators like Vietnam, which is making major advancements, especially with companies like FPT Software investing heavily in AI enablement.

The survey asked respondents about trust and attitudes towards AI systems in general, as well as AI use in the context of four domains where AI is rapidly being deployed and likely to impact many people: in healthcare, public safety and security, human resources, and consumer recommender applications.

The major finding is that over 50% of respondents said they did not trust AI at work.

Hence only one in two employees is willing to trust AI at work. Employee attitudes depend on their role, the country they live in, and what the AI is used for. However, people across the globe are nearly unanimous in their expectations of what needs to be in place for AI to be trusted, which is a very positive sign, especially for aligning international standards.

Insights were provided in key areas, including: who is trusted to develop, use, and govern AI, the perceived benefits and risks of AI use, community expectations of the development, regulation, and governance of AI, and how organizations can support trust in their AI use.

The research also provided many insights on how people feel about the use of AI at work, public understanding and awareness of AI, the key drivers of trust in AI systems, and how trust and attitudes toward AI have changed over time.

The value of these research insights is that they validate the importance of establishing a clear AI strategy, operational practices, policy formulation, and international standards.

The research confirmed there was less confidence in AI in western countries than in Brazil, India, China, and South Africa. Not surprisingly, the research found that younger generations and those who are university educated or in senior management roles are more confident in the value of AI, more supportive of experimentation, and more likely to recognize the economic implications of not pursuing AI, while underscoring the importance of applying AI in responsible and ethical ways.

Some of the key findings are summarized below.

To what extent do people trust AI systems?

Three out of five people (61%) are either ambivalent or unwilling to trust AI. However, trust and acceptance depend on the AI application. For example, AI use in healthcare is more trusted than AI use for Human Resource purposes. People tend to have faith in the capability and helpfulness of AI systems, but are more sceptical of their safety, security, and fairness. Many people feel ambivalent about the use of AI, reporting optimism and excitement, coupled with fear and worry.

How do people perceive the benefits and risks of AI?

Most people (85%) believe AI will deliver a range of benefits, but only half believe the benefits of AI outweigh the risks. Three out of four people (73%) are concerned about the risks associated with AI, with cyber security rated as the top risk globally. Other risks of concern to the majority include: loss of privacy, manipulation and harmful use, job loss and deskilling (especially in India and South Africa), system failure (particularly in Japan), erosion of human rights, inaccurate outcomes and bias.

Who is trusted to develop, use, and govern AI?

People have the most confidence in their national universities, research institutions and defence organizations to develop, use and govern AI in the best interests of the public (76-82%). People have the least confidence in governments and commercial organizations, with a third reporting low or no confidence in these entities to develop, use or govern AI. This is problematic given the increasing use of AI by government and business.

What do people expect of AI management, governance, and regulation?

There is strong global endorsement for the principles of trustworthy AI: 97% of people globally view these principles and the practices that underpin them as important for trust. These principles and practices provide a blueprint for organizations on what is required to secure trust in their use of AI. Most people (71%) believe AI regulation is necessary, with a majority believing this to be the case in all countries except India. People expect some form of external, independent oversight, yet only 39% believe current governance, regulations and laws are sufficient to protect people and make AI use safe.

How do people feel about AI at work?

Most people (55%) are comfortable with the use of AI at work to augment and automate tasks and inform managerial decision-making, as long as it is not used for human resource and people management purposes. People actually prefer AI involvement to sole human decision-making, but they want humans to retain control. Except in China and India, most people believe AI will remove more jobs than it creates.

How well do people understand AI?

Most people (82%) have heard of AI, yet about half (49%) are unclear about how and when it is being used. However, most (82%) want to learn more. What’s more, 68% of people report using common AI applications, but 41% are unaware AI is a key component in those applications.

What are the key drivers of trust?

The research report highlighted that trust is central to the acceptance of AI and highlighted four pathways to strengthen public trust in AI:

1. An institutional pathway consisting of safeguards, regulations, and laws to make AI use safe, and confidence in government and commercial organizations to develop, use and govern AI.

2. A motivational pathway reflecting the perceived benefits of AI use.

3. An uncertainty reduction pathway reflecting the need to address concerns and risks associated with AI.

4. A knowledge pathway reflecting people’s understanding of AI use and efficacy in using digital technologies.

Of these drivers, the institutional pathway has the strongest influence on trust, followed by the motivational pathway. These pathways hold for all countries surveyed.

How have attitudes changed over time?

The research also examined how attitudes towards AI have changed since 2020 in Australia, the UK, the USA, Canada, and Germany. Trust in AI, as well as awareness of AI and its use in common applications, increased in each of these countries. However, there has been no change in the perceived adequacy of regulations, laws and safeguards to protect people from the risks of AI, nor in people's confidence in entities to develop, use and govern AI.

Summary:

People have more faith in the ability of AI systems to produce reliable output and provide helpful services than in the safety, security and fairness of these systems, and the extent to which they uphold privacy rights.

However, trust is contextual and depends on the AI's purpose. Most people are comfortable with the use of AI at work to augment and automate tasks and help employees, but they are less comfortable when AI is used for human resources, performance management, or monitoring purposes.

Most employees view AI use in managerial decision-making as acceptable, and actually prefer AI involvement to sole human decision-making. However, the preferred option is to have humans retain more control than the AI system, or at least the same amount.

While nearly half of the people surveyed believe AI will enhance their competence and autonomy at work, less than one in three (29%) believe AI will create more jobs than it will eliminate.

This reflects a prominent fear: 77% of people report feeling concerned about job loss, and 73% say they are concerned about losing important skills due to AI.

However, managers are more likely to believe that AI will create jobs and are less concerned about its risks than other occupations. This reflects a broader trend of managers being more comfortable, trusting and supportive of AI use at work than other employee groups.

Given managers are typically the drivers of AI adoption at work, these differing views may cause tensions in organizations implementing AI tools.

In addition, younger generations and those with a university education are also more trusting and comfortable with AI, and more likely to use it in their work. Over time this may escalate divisions in employment.

There are also important differences among countries in the findings. For example, people in western countries are among the least trusting of AI use at work, whereas those in emerging economies (China, India, Brazil, and South Africa) are more trusting and comfortable. This difference partially reflects the fact that only a minority of people in western countries believe the benefits of AI outweigh the risks, in contrast to the large majority in emerging economies.

Making AI Trustworthy is a Business Imperative for Board Directors and C-Suite Leaders

The good news is that the research findings show people are united on the principles and practices they expect to be in place in order to trust AI.

On average, 97% of people report that each of these is important for their trust in AI. People also stated that they would trust AI more when oversight mechanisms are in place, such as monitoring the AI for accuracy and reliability, AI "codes of conduct", independent AI ethical review boards, and adherence to international AI standards.

The strong endorsement for the trustworthy AI principles and practices across all countries provides a blueprint for how organizations can design, use and govern AI in a way that advances and secures trust in AI.

In conclusion, corporate purpose needs to be front and center in building AI trust, and board directors and C-suite leaders have a duty-of-care responsibility to ensure trusted AI is a core competency in an increasingly digitally smart world.

Research Sources:

Gillespie, N., Lockey, S., Curtis, C., Pool, J., & Akbari, A. (2023). Trust in Artificial Intelligence: A Global Study. The University of Queensland and KPMG Australia. doi:10.14264/00d3c94
