
Modeling Trust: AI And The Technology, Media And Telecommunications Industry

Deloitte

Late last year, the European Union introduced the Artificial Intelligence Liability Directive (AILD) to “improve the functioning of the internal market by laying down uniform rules for certain aspects of non-contractual civil liability for damage caused with the involvement of AI systems.” In other words, it aims to protect society from bad AI.

Bad AI is AI that isn’t trustworthy: AI built on biased or incomplete data that could, in turn, perpetuate harmful outcomes. And with the AI market expected to grow at a compound annual rate of 20% through 2030, reaching nearly US $1.4 trillion, the technology, media and telecommunications (TMT) industry has a critical responsibility not only to develop the most trustworthy AI but also to model the most trustworthy AI behavior for its business customers and society at large.

The real potential of AI

While AI may once have seemed like the stuff of science fiction, it has now entered the realm of reality and offers incredible potential to make businesses more competitive. According to Deloitte’s AI Dossier, there are six key ways AI can help businesses create value:

  • Cost reduction: Applying AI to automate certain tasks, reducing costs through improved efficiency and quality
  • Speed to execution: Reducing the time required to achieve operational and business results by minimizing latency
  • Reduced complexity: Improving decision-making through analytics that can spot patterns in complex data sources
  • Transformed engagement: Enabling businesses to engage with customers through AI applications such as conversational chatbots
  • Fueled innovation: Using AI to develop innovative products, markets, and business models
  • Fortified trust: Securing businesses from risks such as fraud and cyber threats

But while AI presents amazing potential for business value, it has equal potential to go wrong. By now, most people are aware that AI can present challenges in terms of bias as well as misuse. AI is driven by data and algorithms, and both can be infused with bias through the use of incomplete data or through bias introduced by the developer. The fact that AI is based on data can compound these risks, because data is often perceived as “objective” when, of course, it is not always so.
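To make the data risk concrete, here is a minimal sketch in Python (an invented illustration, not drawn from any Deloitte framework or real system) of how a model evaluated only in aggregate can hide poor performance on a group that is underrepresented in its data. The groups, counts, and accuracy figures are all hypothetical.

```python
# Hypothetical illustration: aggregate accuracy can hide poor performance
# on a group that is underrepresented in the training data.

# Toy evaluation set: 90 samples from group A, 10 from group B.
# Suppose a model trained mostly on group A is right ~96% of the
# time for A but only 50% of the time for B.
results = [("A", True)] * 86 + [("A", False)] * 4 \
        + [("B", True)] * 5 + [("B", False)] * 5

overall = sum(correct for _, correct in results) / len(results)
print(f"Overall accuracy: {overall:.0%}")  # 91% -- looks healthy

for group in ("A", "B"):
    subset = [correct for g, correct in results if g == group]
    print(f"Group {group} accuracy: {sum(subset) / len(subset):.0%}")
# Group A: 96%, Group B: 50% -- the aggregate number masks the gap.
```

An overall score above 90% looks reassuring, yet the underrepresented group fares no better than a coin flip. That is exactly the kind of gap that a meaningful bias review needs to surface.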

The EU’s AI Act seeks to address these kinds of bias issues, as well as the potential misuse of AI in applications such as facial recognition, the handling of personal data, and subliminal manipulation. But in most countries, regulation is only starting to catch up with the market when it comes to AI, and as a result the guardrails for its application are not yet firmly in place.

Setting the right example

This lack of firm guardrails can leave a vacuum when it comes to the responsible, trustworthy use of AI. And as the pioneers of these technologies, TMT companies should help model the behavior that will ensure AI is applied equitably, inclusively, and safely.

Organizations have myriad opportunities to create competitive advantage with AI. They can automate engagement and communication with customers and predict customer behavior. They can develop highly personalized products and services by applying advanced analytics to data from a variety of sources. And they can extract and monetize insights from the vast amounts of customer data generated by digital systems.

But just as companies use AI to create value, they also need to lead the way in implementing the safeguards and checks to ensure AI is used in the most trustworthy and ethical manner. To that end, TMT organizations should take the time to carefully consider the ethical application of AI within their own organizations. According to Deloitte’s Trustworthy AI framework, they can look to the following principles to help mitigate the common risks and challenges related to AI ethics and governance:

  • Fair and impartial use checks: actively identify biases within algorithms and data and implement controls to avoid unexpected outcomes (a minimal sketch of one such check follows this list)
  • Transparency and explainable AI: be prepared to make algorithms, attributes, and correlations open to inspection
  • Responsibility and accountability: clearly establish who is responsible and accountable for AI’s output, which can range from the developer and tester to the CIO and CEO
  • Security: thoroughly consider and address all kinds of risks, then communicate those risks to users
  • Monitoring for reliability: assess AI algorithms to confirm they produce the expected results for each new data set, and establish how to handle inconsistencies
  • Safeguarding privacy: respect consumer privacy by ensuring data is not used beyond its stated purpose and by allowing customers to opt in or out of sharing their data
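As an illustration of the first principle, fair and impartial use checks, the sketch below computes per-group selection rates from a model’s predictions and flags a large demographic parity gap. The group labels, toy predictions, and 0.2 threshold are all hypothetical; a real review would draw on richer fairness metrics and domain-specific criteria.

```python
from collections import defaultdict

def selection_rates(groups, predictions):
    """Positive-prediction rate for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in selection rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Toy predictions: which applicants a hypothetical model approves.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
preds  = [1,   1,   1,   0,   1,   0,   0,   0]

rates = selection_rates(groups, preds)
gap = demographic_parity_gap(rates)
print(rates)                       # {'A': 0.75, 'B': 0.25}
print(f"parity gap: {gap:.2f}")    # 0.50

# A hypothetical control: flag the model for review if the gap
# exceeds a threshold the organization has agreed on in advance.
THRESHOLD = 0.2
if gap > THRESHOLD:
    print("Flagged: selection rates differ across groups; review the model.")
```

Demographic parity is only one lens; depending on the application, metrics such as equal opportunity or calibration across groups may be the more appropriate test.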

Stepping up

The ability of the TMT industry to effectively police its own use of AI can send a positive message to the market at large—and, potentially, to regulators. By working to set an example when it comes to trustworthy AI, TMT companies can help shape upcoming regulation and encourage the ongoing innovation needed to help AI achieve its potential.

Ultimately, however, modeling trustworthy behavior is its own reward. By avoiding unintentional bias and guarding against possible abuses, TMT companies are not only doing the right thing but also leading the way to a future where AI is fully embraced for the incredible value it can bring.