Report: Apple using Nuance voice tech in data center

Apple is said to be running a third-party's voice software in its massive new data center as part of a push to get better voice recognition tools in its mobile devices.

Voice Control on the iPhone. (Credit: Apple)

The goings-on within Apple's new North Carolina data center remain largely unknown, though a new report suggests Apple is using at least part of the facilities to power an enhanced voice services platform that will be unveiled early next month.

In a report this afternoon, TechCrunch claims that Apple is running voice software, and "possibly" even hardware from communications company Nuance in its data center. The end result is said to be improved voice technologies in the next major version of Apple's iOS, which is expected to be unveiled at next month's Worldwide Developers Conference.

Burlington, Mass.-based Nuance is the maker of Dragon NaturallySpeaking speech recognition software, as well as Nuance Recognizer, a recognition tool for businesses that the company claims is the industry leader in recognition accuracy. Nuance has four voice-powered apps on Apple's platform, including one that lets users dictate speech-to-text messages and send them out as social-networking status updates.

During an interview last November, Apple co-founder Steve Wozniak mistakenly said that Apple had acquired Nuance, a claim he walked back in a follow-up interview. As Reuters noted at the time, an acquisition by Apple would likely prompt competitors using Nuance's technology to drop it and go elsewhere, potentially giving Apple a competitive edge while squashing Nuance's existing business.

Citing an anonymous source, TechCrunch adds that on the road to making the deal with Nuance, Microsoft was "pushing" Apple to use its own voice recognition technology in iOS. "That attempt was rebuffed, apparently," the outlet said. Microsoft uses its own speech recognition services, which are powered by TellMe (a company Microsoft acquired in 2007), in its Windows Phone 7 OS.

Apple has had speech recognition and voice control in iOS since the iPhone 3GS. Since then, the feature has been extended to the iPod Touch, though not the iPad. Apple's current implementation lets users search for songs and contacts and manage music playback by voice, but over the years the feature set has remained unchanged. By comparison, competitor Google has built voice-powered Web search, application launching, and transcription tools into its Android OS.

Seven months prior to the Wozniak incident, Apple had purchased virtual assistant tool Siri. A report from March (also from TechCrunch) said that Apple was currently in the middle of "deeply integrating" the voice technology from that acquisition into iOS in an attempt to make a voice platform for developers to build voice recognition tools into their games and apps.

During Apple's annual shareholders meeting earlier this year, the company said its North Carolina data center was on track to go live in the "spring" and would be used to support its iTunes and MobileMe services. The expectation remains that Apple will bolster both of those efforts to offer features like a storage locker for music and an enhanced suite of Web apps and services.
