Over the past few weeks I have slowly been setting up my personal voice on my iPad. As an autistic person who relies on typing to say what is truly on my mind, seeing this new feature was thrilling.
Many of my dear friends at Reach Every Voice have little to no understandable speech. My speech rarely stops. But it almost never says something that I'm thinking. The constant flow of words from Thomas the Train is like a leaky faucet I can't fix. The best thing to do is try to tune it out.
I seem like someone at the exact intersection of having the ability to speak and needing assistive technology to communicate.
Personal voice should be a game changer for me. It’s not.
The idea behind this accessibility feature is better than the accessibility of its design.
Actually getting my mouth to form the words I see is tricky. I see all the words, and my best kind of speech is gymnastic tongue effort. Too often I started to read and my words came out slower than the set-up function thought I should be reading. It cut me off and moved on to the next sentence.
Having some longer wait time would have allowed me to be more successful.
It is also very difficult for my brain to process longer chunks of text. The shorter sentences of eight words or fewer were manageable. The longer two-line sentences were not.
If there were a short-sentence option, people like me could do this more easily.
Horrible yearning to hear my voice say what I am really thinking, so help me get this feedback to the Apple Accessibility Team.
Editor's note: Nick describes his communication above, but asked me to provide a note explaining further. Nick uses his natural speech in ways that sound purposeful and intentional but are actually just scripts from movies and TV shows. He tells us through typing, his preferred method of communication, that we should ignore these words and listen to his fingers.
There is a growing interest in Gestalt Language Processing, which pays more attention to the scripts individuals are using and uses them to build speech in more discrete chunks that can then be manipulated into novel communications. On the surface, Nick looks like a person who might be a Gestalt Language Processor, but we have been toying with the idea of refining that label to a Gestalt SPEECH Processor. Nick's internal language is intact. He is able to spell and type what he wants to say although he cannot get his mouth to say it when he wants to say it. This video shows Nick repeating scripts while at the same time typing, "Good to do right now so folks see what it looks like."
Similarly, Nick struggles to read things out loud using his natural speech. You can see an example of what this looks like in the video below where he is working through reading the 150 sentences required to set up his personal voice.
Nick's feedback on the accessibility of this accessibility feature is truly priceless. We also want to acknowledge the things that are already built in that made the process somewhat easier but can still be improved on.
There is the ability to listen to a sentence before you record it yourself. This was sometimes helpful and sometimes not helpful for Nick. Sometimes hearing a word contained in the sentence would trigger a script with the same word and we would have to wait it out before he could attempt to read the written sentence.
You can also choose whether to allow continuous recording or to be in control of when to begin recording each phrase. We quickly learned the second option was far better for us since Nick's scripts sometimes interfered with his reading the words on the iPad.
The text is bold and presented in a large font. This was helpful; however, as Nick mentioned above, some sentences were long, extended over multiple lines, and were difficult for his brain to process in order to read them out loud. The lack of adequate spacing between lines contributed to this struggle.
Undoubtedly, the engineers at Apple created a version of this amazing feature that allows for the optimal capture of a user's pronunciation, inflection, cadence, and all the other things that make our voices sound like us. As someone without a speech disability, I found it easy to use. However, this feature was touted as a game-changer for folks who may one day lose their speech. If you know you are at risk of losing your speech, chances are you - like Nick - may not be able to read and recite sentences in these optimal conditions.
It would be an incredible accessibility feature within this accessibility feature to be able to choose between two set-ups: the current version - the optimal version for the best-quality voice - and a different version - one that provides longer wait times, shorter chunks of text, or possibly even single words. That version might produce a voice that is slightly less optimal, but it would still be better than what a user like Nick could do with the original.
One final repeat of Nick's call to action here - let's help get this info to the developers at Apple. Please take a minute to like, comment, and share this writing. The Apple accessibility email is email@example.com
~ Lisa Mihalich Quinn
Nick is just an autistic guy trying to get his big voice out.
Want to read more of Nick's writing? You can find his stuff here on the REV blog and also at NeuroClastic.
Want to leave Nick a tip to support his work? You can donate via his Ko-Fi page.