While traveling I have been doing more work on my iPad, some of which I had previously done on paper or on my laptop. I’ve been reading and reviewing conference papers, making UI design sketches, and writing longer chunks of text such as this blog. The experience has been informative, but not altogether positive.
I have been using iAnnotate to read and mark up papers, NoteTaker HD to draw and sketch, and Notes for more extended writing. Each works reasonably well for its narrow task; what’s missing is the integration. While I can add short text comments in iAnnotate, writing longer comments there is awkward because the text window is so small and the centered text formatting is hard to read. In addition, after reading and highlighting (I have given up scribbling for the most part), I need to write the review. While I could put it in a note or into the info area associated with the document, it feels more natural to write it in Notes. Of course the downside is that I cannot refer to my notes and annotations while I am writing the summary.
Another integration flaw is the difficulty of adding text notes to sketches made in NoteTaker or Penultimate. Yes, the point of those tools is to create ink, but adding notes or explanatory comments to a drawing is much easier via a keyboard than by writing with one’s finger or a fat stylus. It’s doable, but generating a lot of text that way is tedious, and reading a bunch of scribbled handwriting afterwards is worse.
Rather than isolating these various operations — annotating, sketching, writing — and forcing each application to either re-create the feature or to do without it, it would be great to have a means of switching context to a text entry widget (for example) and then incorporating the typed text directly into the drawing or annotation app. Similarly, in iAnnotate, I should be able to create blank note sheets on which I could sketch or type, and either insert them into specific documents or collect them in a separate notebook.
My experience suggests that we need to think about applications on at least two levels: the task level (as we do now) and some sort of common sub-task such as text entry. The current set of controls seems too impoverished, and the applications are too siloed. One partial exception to this is the mail app: you can invoke it from another app (e.g., Photos) with content (a photo) that is then handled by the mail app. Once the message is sent, control returns to the Photos app. One limitation, though, is that nothing is passed back from the mail app.
It will be interesting to see if the Android SDK makes it easier to integrate applications at varying levels of granularity, or if it will be possible to create and integrate more capable mini-apps. It would also be great to plug in alternate text entry widgets such as ShapeWriter.
More flexible interaction might be important if the iPad is to see any competition. On the other hand, done poorly this app mashup will make a real hash of the user experience. Seems like a great HCI/UX case study.