
How The Responsible Use Of AI Can Supercharge Disability Inclusion At Your Company

Since the beginning of the year, artificial intelligence has been dominating the headlines with the unveiling of OpenAI’s advanced generative AI chatbot ChatGPT and the potential it holds for revolutionizing the way we all learn, work and live.

Certainly, this type of anthropomorphic technological innovation at the front end appears the most attention-grabbing and exciting, but the reality is that, at the back end, subtler and more intrusive forms of AI have been quietly creeping into multiple critical facets of human existence for some years now.

These can be seen in the form of algorithmic decision-making where computer programs are deployed as efficiency tools to make screening decisions across societal areas as wide-ranging as welfare claims, criminal profiling and corporate recruitment.

As is always the case with big data, bias against minority groups such as those of non-white ethnicity or people with disabilities remains a clear and present danger. This is because the data sets themselves have been constructed to baseline the experience of the majority as the normative standard for typical human behavior.

For AI, anything outside of this is flagged as a deviation.

Indeed, the ever-deepening threat of algorithmic discrimination led the Biden-Harris administration to issue a statement last week outlining its commitment to decisive action in meeting the risks of societal harm posed by AI head-on.

This will include liaising with the CEOs of key AI Big Tech innovators such as Alphabet, Microsoft and OpenAI, a $140 million investment in seven new National AI Research Institutes and the issuing of draft policy guidance on the use of AI systems for public comment.

Discriminatory Practices

The above builds partly on warnings issued a year ago by the Equal Employment Opportunity Commission and the Department of Justice detailing the numerous ways in which the careless use of algorithmic evaluation and screening software significantly discriminates against candidates and employees with disabilities. Employers were further reminded that burying their heads in the sand on such matters could result in serious legal ramifications in relation to breaches of the Americans with Disabilities Act.

Algorithmic tools used to assess candidate and employee performance may be utilized in several different ways. These may include gamified computer tests, automated resume scanning and scoring software, video interviewing software and platforms designed to monitor productivity by capturing data points such as the logging of keystrokes.

These tools, if not deployed thoughtfully, can lead to a workplace culture that is ripe for disability discrimination.

Take a timed computer aptitude test for example – such a program will naturally disadvantage a candidate with limited dexterity due to an upper limb impairment or indeed someone with moderate low vision. Resume scanning software that flags gaps in employment history will naturally mark down a disabled job seeker who may have previously spent long periods in hospital. Furthermore, video interviewing software that analyzes speech patterns will score a candidate with a speech impediment unfavorably regardless of the quality of the information they are conveying.
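The resume-gap example can be made concrete. Below is a minimal, purely hypothetical sketch — the function name, weights and cap are illustrative assumptions of mine, not the logic of any real screening product — showing how a blanket penalty for employment gaps mechanically marks down a candidate who spent time in hospital, regardless of actual ability:

```python
# Hypothetical resume-scoring sketch: every name and weight below is an
# illustrative assumption, not the logic of any real vendor's software.

def naive_resume_score(years_experience: float, gap_months: int) -> float:
    """Score a resume, applying a blanket penalty for employment gaps."""
    score = min(years_experience, 10) * 10  # experience capped at 10 years
    score -= gap_months * 2                 # the problematic blanket gap penalty
    return max(score, 0.0)

# Two candidates with identical experience; one spent 18 months in hospital.
continuous_career = naive_resume_score(8, gap_months=0)   # scores 80
hospital_stay = naive_resume_score(8, gap_months=18)      # scores 44
```

The disabled candidate loses nearly half the score for a factor entirely unrelated to competence — which is exactly the pattern the EEOC warned about.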

AI tools are not just an obstacle for candidates with disabilities wanting to get their foot in the door. They can negatively impact existing employees too, causing them to miss out on promotions or pay rises.

For example, a keystroke monitoring tool will not take into account the fact that a blind employee may complete a lot of their work tasks using speech-to-text software. This particular example goes to the heart of why AI is flawed in assessing individuals with disabilities in the workforce. Many such people use one or more workplace accommodations to enjoy a more productive and equitable experience. However, as AI is built from modeling typical workplace scenarios or using success criteria achieved by non-disabled employees – those with non-standard working styles are all too easily left out in the cold.
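As a toy illustration of that flaw — the class and the numbers below are invented for the example, not drawn from any real monitoring product — consider a metric that counts keystrokes rather than finished output:

```python
from dataclasses import dataclass

# Hypothetical illustration of why keystroke counting is a flawed proxy
# for productivity; the figures are invented for the example.

@dataclass
class WorkSession:
    words_produced: int  # actual output in the finished document
    keystrokes: int      # all the monitoring tool can observe

def keystroke_productivity(session: WorkSession) -> int:
    # The metric ignores output entirely and rewards only typing volume.
    return session.keystrokes

typist = WorkSession(words_produced=1200, keystrokes=7000)
dictation_user = WorkSession(words_produced=1200, keystrokes=350)  # speech-to-text
```

Both employees produced the same 1,200-word document, yet the metric scores the speech-to-text user twenty times lower — the accommodation itself becomes the penalty.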

Keeping Ahead Of The Game

Nonetheless, the picture is not entirely bleak when it comes to workplace disability inclusion and its relationship with AI.

There are a number of steps organizations can undertake to counter some of the issues detailed above.

These may include ensuring that HR departments possess a robust knowledge of reasonable accommodations which, in the case of candidate assessments and employee productivity tests, may simply consist of an alternative means of measuring the same outcome.

These provisions may be as simple as allowing extra time for tests to be taken, providing assistive technology or giving an employee special dispensation to work in a quiet setting.

Another vital step that employers can undertake is asking AI software vendors what accessibility features are included within their products and whether these have been tested directly alongside people with disabilities. The more organizations prioritize this, the more they help establish accessibility provisions as a procurement requirement and differentiator.

Now and in the future, AI will undoubtedly bring speed and efficiency to the workplace by taking over some tasks related to both day-to-day admin and recruitment that have traditionally been completed by humans.

However, despite its myriad pitfalls and its tendency toward minority-group exclusion, AI needn’t be the enemy of corporate diversity and inclusion initiatives. Instead, the decision to use AI tools can bring about greater awareness and intentionality around DE&I, provided organizations ensure they understand both the limitations of AI and the available mitigations.

Finally, one vital missing piece lies in acknowledging that all such efforts to understand the relationship between AI and workplace disability inclusion will fall by the wayside if, as is often the case today, candidates and employees fail to disclose their disabilities for fear of encountering discriminatory attitudes from recruiters and co-workers.

Whilst employers may never be able to 100% assuage such concerns, they can go a long way towards this by simply communicating openly and transparently throughout recruitment and retention processes. Rather than candidates sweating over whether to ask for accommodations – employers should take the initiative and make it clear at the outset that accommodations are readily available and most welcome. Where specific software tools are being used, employers should be transparent in explaining what types of aptitudes are being measured and listen to any concerns or feedback expressed by the candidates.

Whilst the hype around AI right now foreshadows a journey towards the fabled “singularity” where super-intelligent machines take over the world, the reality is that, in 2023, AI remains firmly in its infancy. Right now, whether it becomes a threat or an opportunity for vulnerable groups depends entirely on the living, breathing human beings who decide how and when to use it.
