What’s Changed in Accessibility on iOS

Learn how accessibility has changed for the better in iOS 14

Apple has long built accessibility features into its software, and for good reason: by supporting accessibility in your app, you allow it to reach a wider audience.

Sticking with that commitment, Apple introduced several new features at WWDC20 that help developers make their apps easier to use and more enjoyable for users with disabilities.

By making apps more accessible, developers eliminate the need for users to purchase clunky, expensive devices in order to use their apps — everything they need to interact with the app is built right into the device.

In this article, you’ll learn about some of the biggest and best upgrades to accessibility announced at this year’s WWDC.

Back Tap

The first feature on the list is an interesting one that wasn’t mentioned on stage. As the name suggests, Back Tap lets users link a double or triple tap on the back of their iPhone to certain tasks. For example, you could double-tap the back of your iPhone to open the Weather app.

And since Back Tap can be linked to Shortcuts, it opens up a whole range of possibilities for home automation and more. For example, you could triple-tap the back of your phone to quickly take notes, or use it to unlock your door as you arrive home. Easy, right?

Headphone Accommodations

For users with hearing impairments, Apple has added the ability to adjust sound frequencies on supported headphones. Users can now set their own preferences for which frequencies they want to hear more of and which they want to hear less of.

The feature also comes with preset profiles for specific situations, in case the user doesn’t want to configure the sound frequencies manually.

The feature also works with Transparency mode on AirPods Pro, letting users adjust how much of their surroundings they hear. If they want to amplify soft voices or hear the environment in more detail, they now have that control.

In the same vein, a new feature called Sound Recognition can pick up important sounds in the environment, such as sirens, fire alarms, or car horns, and alert the user to them. Machine learning models built into the operating system detect these sounds, and the user is then notified in whatever way they choose.
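The same kind of on-device sound classification that powers Sound Recognition is available to developers through the SoundAnalysis framework. The sketch below is a minimal, hypothetical example: `SirenClassifier` is a placeholder for a Core ML sound-classification model you would supply yourself, and the confidence threshold is arbitrary.

```swift
import AVFoundation
import SoundAnalysis

// Stream microphone audio into a Core ML sound classifier and react
// when it hears something important. `SirenClassifier` is a placeholder
// for your own trained Core ML model.
final class SoundAlerter: NSObject, SNResultsObserving {
    private let engine = AVAudioEngine()
    private var analyzer: SNAudioStreamAnalyzer?

    func start() throws {
        let format = engine.inputNode.outputFormat(forBus: 0)
        let analyzer = SNAudioStreamAnalyzer(format: format)

        // Wrap the Core ML model in a classification request.
        let request = try SNClassifySoundRequest(mlModel: SirenClassifier().model)
        try analyzer.add(request, withObserver: self)
        self.analyzer = analyzer

        // Feed microphone buffers into the analyzer as they arrive.
        engine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
            analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
        }
        try engine.start()
    }

    // Called whenever the classifier produces a result for a buffer.
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first,
              top.confidence > 0.8 else { return }
        print("Heard \(top.identifier); alert the user")
    }
}
```

In a real app you would pair this with a visible or haptic alert rather than a `print`, since the whole point is reaching users who may not hear the sound itself.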

FaceTime Sign Language and RTT

Group FaceTime calls have become more important than ever during the global pandemic, and Apple has added a small but important accessibility feature to them. Now, if a member of a group FaceTime call is using sign language to communicate, their video will be automatically pinned.

Using computer vision to detect this is a boon for people with hearing loss, since following sign language while a speaker’s tile shrinks and moves around the screen can be frustrating.

In addition, Apple has made further improvements to its Real-Time Text (RTT) feature, which is used for text-based communication during phone calls. Previously, RTT took over the full screen, making it difficult for users to multitask during calls; it no longer does.

VoiceOver

When we think of accessibility on iOS, VoiceOver is often the first thing that comes to mind. This year, VoiceOver received several significant updates, making it even more useful than before. If you aren’t familiar with it, VoiceOver is Apple’s screen reader, available on all of its platforms, including iOS, macOS, tvOS, and watchOS.

VoiceOver Recognition

In the past, VoiceOver worked well in third-party apps only if their developers had explicitly adopted Apple’s accessibility APIs.

This year, Apple is tapping into its machine learning technology to semantically detect where and how VoiceOver should work in unsupported apps. This gives virtually every app native VoiceOver support and makes them more accessible to people with visual impairments.
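For context, the manual adoption that VoiceOver Recognition now supplements is done through UIKit’s accessibility properties. A minimal sketch, with a hypothetical view controller and button:

```swift
import UIKit

// Manual VoiceOver adoption: describe a control so the screen reader
// announces it meaningfully. All names here are hypothetical.
final class PlayerViewController: UIViewController {
    private let playButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()

        // Expose the control to VoiceOver and describe it, so it is
        // announced as "Play, button" rather than being skipped.
        playButton.isAccessibilityElement = true
        playButton.accessibilityLabel = "Play"
        playButton.accessibilityHint = "Plays the current track"
        playButton.accessibilityTraits = .button

        view.addSubview(playButton)
    }
}
```

VoiceOver Recognition infers this kind of information automatically when an app doesn’t provide it, but explicitly supplied labels remain more precise than anything inferred by a model.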

Image Descriptions

To make VoiceOver even more useful, Apple has paired its computer vision framework with machine learning to detect the contents of an image.

Instead of simply stating that an image is present, VoiceOver can now provide detailed descriptions of what’s pictured, giving VoiceOver users far more useful information. It can also detect text in an image through optical character recognition, another great way that machine learning is being used in the iOS 14 update!
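Developers can tap into the same on-device text recognition through the Vision framework. This is a sketch of the general technique, not VoiceOver’s internal implementation; the function name is made up for illustration.

```swift
import UIKit
import Vision

// Recognize the text in an image with Vision's OCR request (iOS 13+),
// returning one string per detected text region.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { return completion([]) }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Keep the top candidate string from each detected region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The recognized strings could then be fed into an element’s `accessibilityLabel` so VoiceOver users hear the text embedded in an image.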

Clearly, WWDC20 brought plenty of great accessibility updates, with even more than are listed here. By shipping a large number of small features, Apple has made its devices more accessible than ever, and it has supercharged many of its flagship accessibility solutions by coupling them with machine learning.

Be sure to smash that “clap” button as many times as you can, share this tutorial on social media, and follow me on Twitter.



