Apple Announces ‘Preview’ Of New Accessibility Features Touching Various Developmental Domains

Ahead of this year’s Global Accessibility Awareness Day celebration this week, Apple on Tuesday announced a slew of new accessibility features it says will come to its myriad platforms “later this year.” In a press release published to its Newsroom site, the Cupertino-based tech titan said the new software “[draws] on advances in hardware and software, [includes] on-device machine learning to ensure user privacy, and [expands] on Apple’s long-standing commitment to making products for everyone.”

“Apple works in deep collaboration with community groups representing a broad spectrum of users with disabilities to develop accessibility features that make a real impact on people’s lives,” the company wrote. “Coming later this year, users with cognitive disabilities can use iPhone and iPad with greater ease and independence with Assistive Access; nonspeaking individuals can type to speak during calls and conversations with Live Speech; and those at risk of losing their ability to speak can use Personal Voice to create a synthesized voice that sounds like them for connecting with family and friends. For users who are blind or have low vision, Detection Mode in Magnifier offers Point and Speak which identifies text users point toward and reads it out loud to help them interact with physical objects such as household appliances.”

In a statement, Apple chief executive Tim Cook reiterated the company’s commitment to equality and inclusiveness vis-à-vis accessibility, saying in part, “we’re excited to share incredible new features that build on our long history of making technology accessible, so that everyone has the opportunity to create, communicate, and do what they love.”

Among the highlights of the forthcoming software are significant enhancements to Apple’s suite of cognitive, speech, and vision assistive technologies. For cognition, the company has built Assistive Access. A close cousin of the longstanding Guided Access feature, Assistive Access essentially provides users with a bare-bones Home Screen on iOS and iPadOS. The idea is that, for many people with cognitive disabilities, the default user interface may be unnecessarily complex and overwhelming to operate. Thus, Apple has created a more basic environment in which a person has access to only certain apps; in all likelihood, only the apps they care about most. In a clever bit of synergy, Apple has mashed Phone and FaceTime together into a single app called Calls. In addition, the Messages app has a special emoji keyboard for people who communicate best using visual markers. Apple says Assistive Access is customizable by the user and supports both first-party and third-party apps. Users can choose between a grid-based layout with icons plus text and a row-based, text-focused layout.
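
For readers curious what that looks like in practice, here is a toy SwiftUI sketch of a grid-based, icons-plus-text layout. To be clear, Assistive Access is a system-level feature configured in Settings, not something developers assemble from app code; the snippet, with its placeholder app list, merely illustrates the layout concept the company describes.

    import SwiftUI

    // Illustrative only: Assistive Access is a system feature, not app code.
    struct SimplifiedHomeScreen: View {
        // Placeholder app names; Calls is the merged Phone/FaceTime app Apple describes.
        let apps = ["Calls", "Messages", "Camera", "Photos", "Music"]

        var body: some View {
            LazyVGrid(columns: [GridItem(.adaptive(minimum: 150))], spacing: 24) {
                ForEach(apps, id: \.self) { name in
                    VStack(spacing: 8) {
                        Image(systemName: "app.fill")   // oversized stand-in icon
                            .resizable()
                            .frame(width: 96, height: 96)
                        Text(name)
                            .font(.largeTitle)          // large, legible label
                    }
                }
            }
            .padding()
        }
    }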

For speech, Apple is introducing two new features, Live Speech and Personal Voice. The former is highly akin to the Type to Siri feature added in iOS 11, insofar as people can type what they’d like to say and the system will read it aloud. Users can also save commonly used phrases to select from when conversing with others. As for the latter, Personal Voice was built with the recognition that people with certain conditions, such as ALS, are at risk of losing their ability to speak sooner rather than later. The feature has users record their voice by reading a randomized set of text prompts, totaling 15 minutes of audio, and creating the synthesized voice is made possible by the on-device neural network technologies baked into Apple’s homegrown systems-on-a-chip.
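
Under the hood, the type-to-speak idea is classic text-to-speech. As a rough illustration only (this is not Apple’s implementation), the core flow can be sketched with the public AVSpeechSynthesizer API in AVFoundation; the savedPhrases list below is a hypothetical stand-in for Live Speech’s saved phrases.

    import AVFoundation

    // A minimal type-to-speak sketch: the user types, the device talks.
    final class TypeToSpeak {
        private let synthesizer = AVSpeechSynthesizer()

        // Hypothetical stand-in for Live Speech’s saved phrases.
        var savedPhrases = ["On my way.", "Yes, please.", "Thank you!"]

        // Speak whatever the user has typed.
        func speak(_ text: String) {
            let utterance = AVSpeechUtterance(string: text)
            utterance.rate = AVSpeechUtteranceDefaultSpeechRate
            synthesizer.speak(utterance)
        }
    }

    let speaker = TypeToSpeak()
    speaker.speak(speaker.savedPhrases[0])   // reads “On my way.” aloud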

In terms of visual accessibility, there’s a new Point and Speak option in the ever-burgeoning Detection Mode section of the Magnifier app. With Point and Speak, people can literally point to an object with text on it (buttons on a microwave or a recipe in a cookbook, for instance) and the system will identify the text and read it aloud. It’s effectively a miniaturized version of the popular Apple Design Award-winning Be My Eyes service, but glammed up in ways only Apple could do. Apple says Point and Speak requires a LiDAR-equipped iPhone Pro or iPad Pro in order to function.
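
Conceptually, the pipeline is recognize-then-read. A minimal sketch of that idea using the public Vision and AVFoundation frameworks might look like the following; note that the actual feature also leans on the LiDAR scanner’s depth data to determine where a user’s finger is pointing, which this sketch omits entirely.

    import Vision
    import AVFoundation

    // Kept outside the function so the synthesizer outlives the async callback.
    let synthesizer = AVSpeechSynthesizer()

    func readAloudText(in image: CGImage) {
        // Ask Vision to find and recognize text in the captured frame.
        let request = VNRecognizeTextRequest { request, _ in
            guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
            // Take the top candidate string for each detected region of text.
            let lines = observations.compactMap { $0.topCandidates(1).first?.string }
            guard !lines.isEmpty else { return }
            synthesizer.speak(AVSpeechUtterance(string: lines.joined(separator: ". ")))
        }
        request.recognitionLevel = .accurate
        try? VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    }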

According to Apple, additional features coming later in the year include the ability to pair Made for iPhone hearing devices with Macs, phonetic suggestions and a new guide for Voice Control, the ability to pause moving images such as GIFs, and much more.

Lastly, Apple announced a number of activities this week that span its various properties, whether virtual or physical. SignTime, the on-demand service that pairs Apple Store and Apple Support customers with sign language interpreters, is expanding to Germany, Italy, Spain, and South Korea. Elsewhere, there are accessibility-oriented sessions at select Apple Store outposts during which customers can learn about Apple’s accessibility work, a new “Remember This” shortcut, and a smorgasbord of disability-centric material spanning the App Store, Apple Books, Apple TV app, Apple Fitness+, and Apple Podcasts.
