When most people talk about voice recognition systems, the mention is usually followed by a short laugh or a resigned shrug. After all, despite years of hope, voice recognition has never really worked, especially for anyone with a non-US accent.
But enter Siri from Apple. Judging by what hard-nosed commentators across the blogosphere have said, Siri actually seems to work, and that could be huge: it could be the first voice control system that genuinely takes off.
If that is the case, it has profound implications for developers on the iOS platform. Siri accesses information through APIs, or application programming interfaces, to other services such as Wolfram Alpha and iMessage. As of today, Apple has not published those APIs, so only Apple's own applications can be used by Siri.
Think about this. Imagine you wrote an application for the iPhone but didn't have the right to put it on the home screen. How would users ever reach it? If voice becomes a major UI, it needs to support third-party developers.
Now imagine you have invested in building a to-do application such as Orchestra or NirvanaHQ and find yourself frozen out of Siri. Suddenly your product is at a huge disadvantage compared with Apple's Reminders.
Hence, for Apple to maintain an attractive platform for developers (and everything it has done so far suggests it wants one), it needs to avoid creating such disadvantages. Of course, some developers argue that Apple's platform is not "open" or pro-developer because of the App Store review process. Yet given that Apple has kept development costs down with consistent screen sizes, introduced iAds, and provided excellent developer tools, I believe Apple fully recognizes the need to keep a large number of developers on board.
So I remain convinced that Apple will open up Siri's APIs to third parties in due course, once the technology has matured, just as it introduced an SDK with the second-generation iPhone.
What do you think?