Back when I was working on iPokédex, I had a pretty lofty, outrageous end-goal for the project: to replicate the experience of the Pokédex from the TV show as closely as possible. Given that the device on the show was pure fantasy in 1998, the thought that I might own something in 2008 with even a remote chance of realising it was incredibly exciting.
That being said, there was one aspect of a functioning Pokédex that I was never quite able to implement on iOS: the ability to synthesise audible speech.
Now, iPhone and iPad have had built-in speech capabilities since as early as iOS 3.0 (well, iPhone OS 3.0 back then). At that point, it was mainly used for accessibility purposes, so people with poor eyesight could interact with their devices, but it became far more prominent when Apple introduced Siri in iOS 5.
Unfortunately, the speech synthesis API has always been a private one, meaning it’s not normally possible to use it without a small bit of hacking, and even then, any app containing it that was submitted to the App Store would be instantly rejected by the automated review process.
Nevertheless, I had a play with this private API back in 2011 to see what it could theoretically be capable of.
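For anyone curious what "a small bit of hacking" looked like in practice, a sketch along these lines was the usual approach at the time. This is my reconstruction, not the actual iPokédex code: the class name `VSSpeechSynthesizer` and the `startSpeakingString:` selector come from what the jailbreak community reported about Apple's private VoiceServices framework, so treat them as assumptions. Because the class is private, there are no public headers; everything is looked up dynamically at runtime.

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Look the private class up by name at runtime rather than
        // linking against VoiceServices directly.
        Class synthClass = NSClassFromString(@"VSSpeechSynthesizer");
        if (synthClass) {
            id synthesizer = [[synthClass alloc] init];
            // -startSpeakingString: is the selector the jailbreak
            // community documented; resolve it dynamically so the
            // compiler never sees a private symbol.
            SEL speak = NSSelectorFromString(@"startSpeakingString:");
            if ([synthesizer respondsToSelector:speak]) {
                [synthesizer performSelector:speak
                                  withObject:@"Hello, trainer."];
            }
        }
    }
    return 0;
}
```

Resolving everything via `NSClassFromString` and `NSSelectorFromString` kept private symbols out of the binary, though by 2011 Apple's automated review had started scanning for exactly these string lookups, which is part of why such apps were rejected so reliably.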