Rumor and inference have it that Apple will release the next-generation iPad next spring. The new device is expected to have two cameras (front and back), and may work with multiple carriers rather than just AT&T. These seem like obvious enhancements, which makes me wonder whether the press simply thought this up, or whether Apple really isn't worried about the competition.
Blog Archive: 2010
I just read an interesting post by David Karger about PIM, end-user programming, data publishing, and lots of other interesting HCI ideas. The premise is that purpose-built applications for PIM impose strict schemas on their users, making it difficult to adapt, repurpose, or integrate the data with other applications. The alternative is something like Evernote, that lumps everything into one bucket, access to which is mediated largely by search. The tradeoff, then, is between a relatively undifferentiated interface backed by search on one hand, and a large number of siloed applications with dedicated interfaces.
David describes several systems (interfaces) his students built that leverage the Haystack framework for storing arbitrary data, and suggests that it’s possible to structure these data management tasks as authoring problems rather than as programming, thereby making flexible, extensible, customized interfaces more widely accessible.
Those of you who’ve followed this blog and Jeremy Pickens’ blog will recall his many comments about Google’s un-Googly behavior. Recently, Benjamin Edelman actually tested the hypothesis about Google injecting bias into organic results. His post details several kinds of queries that don’t produce organic results. Which ones? Ones that are related to Google properties such as finance, health, and travel. While it’s clear why Google pushes its own properties, it seems that this behavior is inconsistent with the image it tries to project.
CHI rebuttals are due at the end of the week. What to do? What to write? How do you convince those reviewers (particularly Reviewer #3) that your work has merit, when all they really need is to brush up on their understanding of regression analysis? I am not promising any miracles, but I've written and read a few rebuttals over the years. Here's my take.
Panos Ipeirotis posted a great (and unintentionally funny) letter one of his students received from some clueless person claiming that the student's site warrants a DMCA take-down because it allegedly deprived another, similar site of $0.52/day in revenue by affecting its page ranking in some search results.
The letter Panos quotes is worth reading for its sheer comedic value; it is hard to imagine a better parody of a DMCA take-down notice. Unfortunately for all of us, however, the DMCA is invoked with similar justification with much larger sums of money at stake, and with much less humorous outcomes.
In the past, media capture and access suffered primarily from a lack of storage and bandwidth. Today, networked multimedia devices are ubiquitous, and the core challenge has less to do with how to transmit more information than with how to capture and communicate the right information. Our first application to explore intelligent media capture was NudgeCam, which supports guided capture to better document problems, discoveries, or other situations in the field. Today we introduce another intelligent capture application: mVideoCast. mVideoCast lets people communicate meaningful video content from mobile phones while semi-automatically removing extraneous details. Specifically, the application can detect, segment, and stream content shown on screens or boards, faces, or arbitrary, user-selected regions. This allows anyone to stream task-specific content without needing to develop hooks into external software (e.g., screen recorder software).
Check out the video demonstration below and read the paper for more details.
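To make the idea of region-based capture concrete, here is a minimal sketch in Python with NumPy of two of the ingredients described above: cropping a user-selected region from each frame, and a crude change detector that skips frames whose content has not changed (e.g., a static slide on a board). This is purely illustrative; the function names, the (x, y, width, height) region format, and the mean-difference threshold are assumptions, not mVideoCast's actual implementation.

```python
import numpy as np

def crop_region(frame, region):
    # region is (x, y, width, height) in pixel coordinates (illustrative format).
    x, y, w, h = region
    return frame[y:y + h, x:x + w]

def changed(prev_crop, crop, threshold=5.0):
    # Mean absolute pixel difference as a crude change detector.
    if prev_crop is None:
        return True  # always stream the first frame
    diff = np.abs(crop.astype(np.int16) - prev_crop.astype(np.int16))
    return diff.mean() > threshold

# Usage on synthetic grayscale frames:
frame1 = np.zeros((480, 640), dtype=np.uint8)
frame2 = frame1.copy()
frame2[100:200, 100:200] = 255          # something appears on the "board"
region = (50, 50, 300, 300)             # hypothetical user-selected region

c1 = crop_region(frame1, region)
c2 = crop_region(frame2, region)
print(changed(None, c1))                # first frame streams
print(changed(c1, c2))                  # content changed within the region
print(changed(c2, c2))                  # identical crops: nothing to send
```

A real system would of course add perspective correction for off-axis screens and face detection, but even this simple filter conveys the core idea of transmitting only the right information.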
Bill van Melle, Thea Turner, and Eleanor Rieffel contributed to this post
FXPAL’s work on the MyUnity Awareness Platform has received considerable attention from the popular press and the Internet blogosphere in recent weeks, following a nice write-up in MIT’s Technology Review. That article, despite its misleading headline, correctly relays the core motivation for the work: to improve communication among workers in an increasingly fragmented workplace. However, some writers who picked up on that article focused instead on the sensational aspects of having technology monitor people’s behaviors and activities while they are working. They incorrectly described some of the platform’s technical details, overstated what the platform does and what it is able to do with the data it collects, and failed to mention the numerous options we offer users to control their privacy. We thought we should clear up some of these misconceptions and clarify the technical details.
Google just released a YouTube remote control app that lets one seamlessly continue watching a YouTube video from an Android phone on the YouTube Leanback system (and back). Leanback provides a relaxing way to access YouTube content on a large screen, such as a Google TV or a desktop display. Leanback continually picks videos from your feed, including your video subscriptions, video rentals, and related videos. The system is designed for minimal user interaction.
It’s interesting to consider how this device, designed for a specific vertical, stacks up against its obvious competitor, the iPad.
In her recent CIKM 2010 keynote address, Sue Dumais emphasized the importance of time to understanding how to structure search over web collections. It was a provocative and inspiring talk, but one that left me with a sense of futility: how does a research group that doesn't have access to a large-scale search engine engage in this kind of research?