Last year, I asked whether handwriting recognition was the barrier to business adoption of the iPad. At the time, I thought that without a better data input method, the iPad would be useful mainly as an information consumption device.
Since then, I have experimented extensively with the iPad as a note-taking device using my handwriting. I no longer carry newspapers, magazines, or a notebook; the iPad has effectively replaced my reading and writing material. There has also been a series of announcements about other tablet platforms and their inking interfaces, such as the Android-based ThinkPad tablet and some early apps for the PlayBook.
The problem with tablet-based note taking is that the notes are stored as images rather than text (via handwriting recognition), so they can't be searched except by metadata and document titles typed in with the virtual keyboard. Yes, there is a handwriting recognition app for the iPad that converts handwriting to text, but it's not the same as writing in a notebook, real or virtual: there is a delay while the text is converted, and I find myself constantly watching the output to see whether it was interpreted correctly. Everything slows down and nothing feels natural. There is also some discussion that other apps can convert (good) handwriting into searchable metadata, but I haven't had time to experiment yet, and that approach requires yet another app and more process to solve the problem.
Recent discussions about the next iOS release speculate that voice input is coming, hinted at by the inclusion of a microphone icon on the virtual keyboard. Further reports describe a voice "assistant" function that would let the user make a request such as "make a reservation for 2 people at a good sushi restaurant nearby," presumably using capabilities from Apple's recent acquisition of Siri, an app that accepts voice input built on the Nuance voice processing engine.
Controlling the phone and apps on a smart device with your voice is nice, but I wonder how it would work as a more sophisticated text entry interface. Could voice recognition get to the point where we simply turn on a phone or tablet in a meeting and watch a real-time transcription of the different voices while we annotate in parallel? Would a refined handwriting recognition capability then be unnecessary?
For now, I’m satisfied taking image-based notes with a stylus. But I’m also keeping an eye on the emerging voice interfaces.