
How speech to text, password managers and other tech helped me work with a broken collarbone

Commentary: Accessibility tech is steadily improving.

Stephen Shankland, Former Principal Writer

Doctors repaired my collarbone with the stainless steel plate shown in this X-ray image.

Stephen Shankland/CNET

I really wish I hadn't broken my collarbone. But one silver lining from the experience has been learning how well technology lets me live my life with one arm immobilized in a sling.

Far and away, the best feature has been speech-to-text tools that let me type without a keyboard. Honorable mentions go to swipe keyboards on phones, biometric authentication and password managers.

After two weeks with my arm in a sling, and at least four more to go, I have a much better appreciation for what accessibility technology offers people who have longer-term disabilities. My frustration at being unable to lift a box or tighten my belt contrasts with the liberation I feel seeing my words almost magically appear on a screen as I speak.

My ignominious bike crash

I crashed, my ego is sad to report, while mountain biking on a very easy trail. Fractured collarbones are a classic injury for cyclists who extend one arm while falling.

The break significantly limits my ability to use my right arm and hand. Even after surgery, which dramatically eased my pain and improved my range of movement, I'm still mostly one-armed. It took two weeks before I could type on a laptop with both hands. During that period, I quickly grew to appreciate dictation technology, also called voice typing and speech recognition.

It's a great example of artificial intelligence, technology based in part on how human brains work. AI-based dictation is built into my Android phone, my iPhone, my MacBook, my Chromebook, some of my web browsers and my Windows laptop. For years, I've used dictation technology for quick text messages or email responses while on the go. With my busted clavicle, I've now embraced it for every task that involves typing. 

One surprise for me was discovering that speech-to-text technology narrows the productivity gap between smartphones and laptops. For touch typists, tiny phone screens are no match for a laptop keyboard. When you're talking to a microphone, however, phones often can match PCs. Google says its research shows voice typing is three times faster than tapping words out on a phone keyboard.

Speech-to-text conversion is everywhere

I prefer Google's dictation technology to Apple's. I find it more reliable with word comprehension, spelling and capitalization. It also works for longer stretches: with my MacBook, I have to restart dictation every couple of sentences, which can derail my train of thought. Apple advises limiting your speech to 40-second blocks.

Google's Chrome OS has built-in voice dictation ability, but it listens for only a short time and therefore is best for quick bursts of text. I haven't been able to thoroughly try Microsoft's built-in speech-to-text technology.

With laptops, I quickly gravitated toward Google Docs' built-in transcription, which worked best for me in accuracy, handling paragraphs of text and integrating with keyboard operations. It goes beyond ordinary smartphone dictation with voice commands to format text, move the cursor, select words, delete characters and perform other actions. Unfortunately, it works only in Google Chrome.

Dictation only scratches the surface of today's accessibility technology. I didn't explore Voice Access on Android or Voice Control on iPhones, which let you control devices entirely with voice commands. With one hand working fine, I also didn't need features like Apple's AssistiveTouch, which helps people use touchscreens.

Google Docs has built-in voice typing, though only in Google's Chrome browser.

Screenshot by Stephen Shankland/CNET

Voice recognition problems

Speech recognition still has a long way to go in handling punctuation. Adding commas, periods, question marks and colons works well enough, but for quotation marks, iOS fumbles sometimes and Google is even worse. Neither is as convenient as a keyboard.

Capitalization is also a persistent problem. I understand it's tricky, but why did Google decide to begin the first word after a question mark with a lowercase letter, when it could have predicted I was beginning a new sentence?

There are also lots of typos and transcription problems, like Google misspelling "nitpick" as "knit pick."

Fixing all these errors is a pain, especially on smartphones, where positioning the cursor is a fiddly process. With short messages, I often leave the problems unfixed. Most of us by now understand phones are prone to typos and autocorrect errors. Voice transcription just adds a few more blemishes.

For its part, Google says it's improving its dictation technology through bug reports, user research, conferences, and other feedback mechanisms. It expects improvements to dictation on Android and Google Docs later this year, though it wouldn't share details.

The worst problem is one I can't blame on technology: it's how my own brain works. I simply have a hard time composing text by dictation. Something about its linear structure doesn't agree with my writing technique, which involves jumping from one thought to another and then rearranging when I edit.

Biometric authentication and swipe keyboards

Other features also helped me out.

Biometric authentication, both fingerprint and face recognition, is useful as an alternative to typing in passwords.

I already recommend password managers, but I grew to appreciate them more with one arm out of commission. I wish 1Password would restore Touch ID support for its 1Password X browser plugin, though. 1Password maker AgileBits is working on supporting Touch ID, but in the meantime I have to type my password more than I'd prefer.

I've also found swipe-style phone keyboards useful. It took me a couple of years to embrace these, starting with my Android phone and later with Apple's built-in iPhone keyboard and Google's Gboard app. Now I find them particularly useful for one-handed typing on my phone screen.

'OK, Google' and 'Hey, Siri'

Other technology I see in a new light includes smart speakers and digital assistants. I use "OK, Google" more often to play music, set calendar events, send text messages and start map navigation. I use "Hey, Siri" to dial phone numbers. Those phone features intended for hands-free use while driving turn out to be useful for hands-free use while waiting for your bones to knit back together.

I'm certainly a lightweight user of all the accessibility features available today, especially in phones. Screen readers, high-contrast displays and advanced voice control that goes beyond converting speech to text are all standard in smartphones. 

Accessibility tech is improving rapidly. I got a taste of the future while driving a Tesla, which handled steering for me on the freeway and could take on more of the work over time. For the blind or vision-impaired, Facebook uses AI to interpret photos, Instagram captions videos, and lidar sensors in Apple iPhones can warn when other people are getting close. Speech recognition is now standard in browsers. Android phones can notify those who are deaf or hard of hearing about barking dogs, running faucets or beeping appliances.
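
That browser support is worth a quick aside for the technically curious. Modern browsers expose speech recognition to web pages through the Web Speech API, and a rough sketch of how a page might use it looks something like the snippet below. Treat it as illustrative rather than definitive: browser support varies, Chrome exposes the interface under a webkit prefix, and the actual transcription typically happens on a remote server.

```ts
// Minimal sketch of browser dictation using the Web Speech API.
// Chrome exposes the interface as webkitSpeechRecognition; support varies by browser.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

if (!SpeechRecognitionImpl) {
  console.log("Speech recognition isn't available in this browser.");
} else {
  const recognition = new SpeechRecognitionImpl();
  recognition.lang = "en-US";        // language to transcribe
  recognition.continuous = true;     // keep listening across pauses
  recognition.interimResults = true; // show partial results while you're still talking

  recognition.onresult = (event: any) => {
    // Stitch together the transcript from all results received so far.
    let transcript = "";
    for (let i = 0; i < event.results.length; i++) {
      transcript += event.results[i][0].transcript;
    }
    console.log(transcript);
  };

  recognition.start(); // the browser asks for microphone permission
}
```

The continuous and interimResults settings are what make dictation feel live, showing words on screen as you speak rather than waiting for you to finish a sentence.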

Maybe someday we'll even get a direct neural connection to our digital devices for even more help communicating, sensing our surroundings and controlling our environment. For the next four weeks, though, I'll be happy watching my spoken words appear on the screen in front of me.