[Image: Apple iPhone 14. Nic Coury | Bloomberg | Getty Images]

Ahead of its June WWDC event, Apple on Tuesday previewed a suite of accessibility features that will be coming “later this year” in its next big iPhone update.

The new “Personal Voice” feature, expected as part of iOS 17, will let iPhones and iPads generate digital reproductions of a user’s voice for use in in-person conversations and on phone, FaceTime and audio calls.

Apple said Personal Voice will create a synthesized voice that sounds like a user and can be used to connect with family and friends. The feature is aimed at users who have conditions that can impact their speaking ability over time.

[Image: Apple Personal Voice. Source: Apple]

Users can create their Personal Voice by recording 15 minutes of audio on their device. Apple said the feature will use on-device machine learning to maximize privacy.
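
Apple has not published the developer interface behind Personal Voice, so the following Swift sketch is speculative: it assumes iOS 17 exposes personal voices through the existing AVSpeechSynthesizer machinery, and the personal-voice authorization call and voice trait below are assumed names, not confirmed API.

```swift
import AVFAudio

// Speculative sketch: speaking with a user-trained Personal Voice.
// requestPersonalVoiceAuthorization and the .isPersonalVoice trait are
// assumptions; Apple has not published the interface yet.
final class PersonalVoiceSpeaker {
    // Keep a strong reference so speech isn't cut off mid-utterance.
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        // Personal Voice data stays on-device, so apps must ask permission.
        AVSpeechSynthesizer.requestPersonalVoiceAuthorization { [weak self] status in
            guard status == .authorized else { return }
            // Find a voice the user trained from their 15 minutes of audio.
            let voice = AVSpeechSynthesisVoice.speechVoices()
                .first { $0.voiceTraits.contains(.isPersonalVoice) }
            let utterance = AVSpeechUtterance(string: text)
            utterance.voice = voice
            self?.synthesizer.speak(utterance)
        }
    }
}
```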

It’s part of a larger suite of accessibility improvements, including a new Assistive Access feature that helps users with cognitive disabilities, and their caregivers, more easily take advantage of iOS devices.

Apple also announced another machine-learning-backed feature: a new Point and Speak capability for Detection Mode in its existing Magnifier app. The functionality combines camera input, LiDAR input and on-device machine learning to read aloud text that users point their camera at.
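
Apple hasn’t detailed how Detection Mode is built, but the point-and-speak idea can be approximated with frameworks it already ships. The sketch below uses Vision text recognition plus speech synthesis on a single camera frame (assumed here to arrive as a CGImage); the LiDAR depth component Apple describes is omitted.

```swift
import Vision
import AVFAudio
import CoreGraphics

// Rough approximation of point-and-speak, not Apple's implementation:
// recognize text in a camera frame, then read each result aloud.
let synthesizer = AVSpeechSynthesizer()

func announceText(in frame: CGImage) {
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        for observation in observations {
            // Speak the highest-confidence reading of each text region.
            if let candidate = observation.topCandidates(1).first {
                synthesizer.speak(AVSpeechUtterance(string: candidate.string))
            }
        }
    }
    request.recognitionLevel = .accurate
    try? VNImageRequestHandler(cgImage: frame).perform([request])
}
```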

Apple typically launches software at WWDC in beta, meaning the features are first available to developers and to members of the public who want to opt in. Those features typically remain in beta throughout the summer and launch to the public in the fall, when new iPhones hit the market.

Apple’s 2023 WWDC conference begins June 5. The company is expected to unveil its first virtual reality headset, among other software and hardware announcements.