Blog Author: Scott Carter

DocuGram at DocEng


Teleconferencing is now a nearly ubiquitous aspect of modern work. We routinely use apps such as Google Hangouts or Skype to present work or discuss documents with remote colleagues. Unfortunately, sharing source documents is not always as seamless. For example, a meeting participant might share content via screencast that she has access to, but that the remote participant does not. Remote participants may also not have the right software to open the source document, or the content shared might be only a small section of a large document that is difficult to share.

Later this week in Vienna, we will present our work at DocEng on DocuGram, a tool we developed at FXPAL to help address these issues. DocuGram can capture and analyze shared screen content to automatically reconstitute documents. Furthermore, it can capture and integrate annotations and voice notes made as the content is shared.

The first video below describes DocuGram, and the second shows how we have integrated it into our teleconferencing tool, MixMeet. Check it out, and be sure to catch our talk on Friday, September 16th at 10:00AM.

FXPAL at Mobile HCI 2016


Early next week, Ville Mäkelä and Jennifer Marlow will present our work at Mobile HCI on tools we developed at FXPAL to support distributed workers. The paper, “Bringing mobile into meetings: Enhancing distributed meeting participation on smartwatches and mobile phones”, presents the design, development, and evaluation of two applications, MixMeetWear and MeetingMate, that aim to help users in non-standard contexts participate in meetings.

The videos below show the basic functionality of the two systems. If you are in Florence for Mobile HCI, please stop by their presentation on Thursday, September 8, in the 2:00-3:30 session (in Sala Verde) to get the full story.


MixMeet: Live searching and browsing


Knowledge work is changing fast. Recent trends in increased teleconferencing bandwidth, the ubiquitous integration of “pads and tabs” into workaday life, and new expectations of workplace flexibility have precipitated an explosion of applications designed to help people collaborate from different places, times, and situations.

Over the last several months the MixMeet team observed and interviewed members of many different work teams in small-to-medium sized businesses that rely on remote collaboration technologies. In work we will present at ACM CSCW 2016, we found that despite the widespread adoption of frameworks designed to integrate information from a medley of devices and apps (such as Slack), employees utilize a surprisingly diverse but unintegrated set of tools to collaborate and get work done. People will hold meetings in one app while relying on another to share documents, or share some content live during a meeting while using other tools to put together multimedia documents to share later. In our CSCW paper, we highlight many reasons for this increasing diversification of work practice. But one issue that stands out is that videoconferencing tools tend not to support archiving and retrieving disparate information. Furthermore, tools that do offer archiving do not provide mechanisms for highlighting and finding the most important information.

In work we will present later this fall at ACM MM 2015 and ACM DocEng 2015, we describe new MixMeet features that address some of these concerns so that users can browse and search the contents of live meetings to rapidly retrieve previously shared content. These new features take advantage of MixMeet’s live processing pipeline to determine actions users take inside live document streams. In particular, the system monitors text and cursor motion in order to detect text edits, selections, and mouse gestures. MixMeet applies these extra signals to user searches to improve the quality of retrieved results and allow users to quickly filter a large archive of recorded meeting data to find relevant information.
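As a rough illustration of how such action signals might feed into search, consider boosting query matches on words a user edited, selected, or gestured at. The data layout, function names, and weights below are hypothetical, not MixMeet's actual implementation:

```python
# Illustrative boost factors for each detected user action (not MixMeet's real values).
ACTION_WEIGHTS = {"edit": 2.0, "selection": 1.5, "gesture": 1.25}

def score_keyframe(keyframe, query):
    """Score a keyframe: count term matches, boosted by actions on matched words."""
    terms = query.lower().split()
    score = 0.0
    for word, actions in keyframe["words"]:  # (word, set-of-actions) pairs
        if word.lower() in terms:
            boost = 1.0
            for action in actions:
                boost *= ACTION_WEIGHTS.get(action, 1.0)
            score += boost
    return score

def search(archive, query):
    """Return matching keyframes sorted by descending score."""
    scored = [(score_keyframe(kf, query), kf) for kf in archive]
    return [kf for s, kf in sorted(scored, key=lambda p: -p[0]) if s > 0]
```

In this toy version, a keyframe where the queried word was being edited outranks one where it merely appeared, which captures the intuition that acted-upon content is more likely to be what the searcher remembers.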

In our ACM MM paper (and toward the end of the above video) we also describe how MixMeet supports table-top videoconferencing devices, such as Kubi. In current work, we are developing multiple tools to extend our support to other devices and meeting situations. Publications describing these new efforts are in the pipeline: stay tuned.

More evidence of the value of HMD capture


At next week’s CSCW 2015 conference, a group from University of Wisconsin-Madison will present an interesting piece of work related to the last post: “Handheld or Handsfree? Remote Collaboration via Lightweight Head-Mounted Displays and Handheld Devices”. Similar to our work, the authors compared the use of Google Glass to a tablet-based interface for two different construction tasks: one simple and one more complex. While in our case study participants created tutorials to be viewed at a later time, this test explored synchronous collaboration.

The authors found that Google Glass was helpful for the more difficult task, enabling better and more frequent communication, while for the simpler task the results were mixed. This more-or-less agrees with our findings: HMDs are helpful for capturing and communicating complicated tasks but less so for table-top tasks.

Another key difference between this work and ours is that the authors relied on Google Hangouts to stream videos. However, as the authors write, “the HMD interface of Google Hangouts used in our study did not offer [live preview feedback],” a key feature for any media capture application.

At FXPAL, we build systems when we are limited by off-the-shelf technology. So when we discovered a related capture feedback issue in early pilots we were able to quickly fix it in our tool. Of course in our case the technology was much simpler because we did not need to implement video streaming. However, since this paper was published we have developed mechanisms to stream video from Glass, or any Android device, using open WebRTC protocols. More than that, our framework can analyze incoming frames and then stream out arbitrary image data, potentially allowing us to implement many of the design implications the authors describe in the paper’s discussion section.

Head-mounted capture and access with ShowHow


Our IEEE Pervasive paper on head-mounted capture for multimedia tutorials was recently accepted and is currently in press. We are excited to share some of our findings here.

Creating multimedia tutorials requires two distinct steps: capture and editing. While editing, authors have the opportunity to devote their full attention to the task at hand. Capture is different. In the best case, capture should be completely unobtrusive so that the author can focus exclusively on the task being captured. But this can be difficult to achieve with handheld devices, especially if the task requires that the tutorial author move around an object and use both hands simultaneously (e.g., showing how to replace a bike derailleur).

For this reason, we extended our ShowHow multimedia tutorial system to support head-mounted capture. Our first approach was simple: a modified pair of glasses with a Looxcie camera and laser guide attached. While this approach interfered with the user’s vision less than other solutions, such as a full augmented reality system, it nonetheless suffered from an array of problems: it was bulky, it was difficult to control, and without display feedback of the captured area it was hard to frame videos and photos.

Our first head-mounted capture prototype

Luckily, Google Glass launched around this time. With an onboard camera, a touch panel, and display, it seemed an excellent choice for head-mounted capture.

Our video application to the Glass Explorers program

To test this, we built an app for Google Glass that requires minimal attention to the capture device and instead allows the author to focus on creating the tutorial content. In our paper, we describe a study comparing standalone capture (camera on tripod) versus head-mounted (Google Glass) capture. Details are in the paper, but in short we found that tutorial authors prefer wearable capture devices, especially when recording activities involving larger objects in non-tabletop environments.

The ShowHow Google Glass capture app

Finally, based on the success of Glass for capture we built and tested an access app as well. A detailed description of the tool, as well as another study we ran testing its efficacy for viewing multimedia tutorials, is the subject of an upcoming paper. Stay tuned.

The ShowHow Google Glass access app



At FXPAL, we build and evaluate systems that make multimedia content easier to capture, access, and manipulate. In the Interactive Media group we are currently focusing on remote work and distributed meetings in particular. On the one hand, meetings can be inefficient at best and a flat-out boring waste of time at worst. On the other hand, there are some key benefits to meetings, especially those that are more ad hoc and driven by specific, concrete goals. More and more meetings are held with remote workers via multimedia-rich interfaces (such as HipChat and Slack). These systems augment web-based communication with lightweight content sharing to reduce communication overhead while helping teams focus on immediate tasks.

We are developing a tool, MixMeet, to make lightweight, multimedia meetings more dynamic, flexible, and hopefully more effective. MixMeet is a web-based collaboration tool designed to support content interaction and extraction for use in both live, synchronous meetings as well as asynchronous group work. MixMeet is a pure web system that uses the WebRTC framework to create video connections. It supports live keyframe archiving and navigation, content-based markup, and the ability to copy-and-paste content to personal or shared notes. Each meeting participant can flexibly interact with all other clients’ shared screen or webcam content.  A backend server can be configured to archive keyframes as well as record each user’s stream.
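Live keyframe archiving, for instance, amounts to storing a new frame only when shared content has changed enough since the last archived frame. The difference metric and threshold below are illustrative stand-ins, not MixMeet's real pipeline:

```python
def frame_difference(a, b):
    """Fraction of pixels that differ between two equal-sized frames."""
    changed = sum(1 for pa, pb in zip(a, b) if pa != pb)
    return changed / len(a)

class KeyframeArchive:
    """Toy keyframe store: keeps a frame only when it is sufficiently novel."""

    def __init__(self, threshold=0.1):
        self.threshold = threshold
        self.keyframes = []  # list of (timestamp, frame) pairs

    def offer(self, timestamp, frame):
        """Archive the frame if it is the first, or differs enough from the last."""
        if not self.keyframes or \
           frame_difference(self.keyframes[-1][1], frame) >= self.threshold:
            self.keyframes.append((timestamp, frame))
            return True
        return False
```

The archived (timestamp, frame) pairs are what a browsing or search interface would then index and navigate.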

Our vision for MixMeet is to make it easy to mark up and reuse content from meetings, and make collaboration over visual content a natural part of web-based conferencing. As you can see from the video below, we have made some progress toward this goal. However, we know there are many issues with remote, multimedia-rich work that we don’t yet fully understand. To that end, we are currently conducting a study of remote videoconferencing tools. If your group uses any remote collaboration tools with distributed groups please fill out our survey.

on automation and tacit knowledge


We hear a lot about how computers are replacing even white collar jobs. Unfortunately, automating these kinds of processes often leaves behind tacit knowledge that, while perhaps not strictly necessary to generate a solution, can nonetheless improve results. In particular, many professionals rely upon years of experience to guide designs in ways that are largely invisible to non-experts.

One of these areas of automation is document layout or reflow in which a system attempts to fit text and image content into a given format. Usually such systems operate using templates and adjustable constraints to fit content into new formats. For example, the automated system might adjust font size, table and image sizes, gutter size, kerning, tracking, leading, etc. in different ways to match a loosely defined output style. These approaches can certainly be useful, especially for targeting output to devices with arbitrary screen sizes and resolutions. One of the largest problems, however, is that these algorithms often ignore what might have been a considerable effort by the writers, editors, and backshop designers to create a visual layout that effectively conveys the material. Often designers want detailed control over many of the structural elements that such algorithms adjust.
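To make the idea concrete, here is a toy sketch of one such adjustable constraint: choosing the largest font size that lets greedily wrapped text fit a target box. The fixed-width character metrics are illustrative placeholders, not any real layout engine's model:

```python
def lines_needed(text, font_size, box_width, char_aspect=0.6):
    """Greedy word wrap: how many lines the text needs at this font size."""
    char_width = font_size * char_aspect
    max_chars = max(1, int(box_width // char_width))
    lines, current = 1, 0
    for word in text.split():
        # Cost of appending this word to the current line (plus a space).
        need = len(word) if current == 0 else current + 1 + len(word)
        if need <= max_chars:
            current = need
        else:
            lines += 1
            current = len(word)
    return lines

def fit_font_size(text, box_width, box_height, sizes=range(24, 5, -1),
                  line_height=1.2):
    """Return the largest candidate size whose wrapped text fits the box."""
    for size in sizes:
        if lines_needed(text, size, box_width) * size * line_height <= box_height:
            return size
    return None
```

Even this tiny sketch shows the tension the paragraph above describes: the solver happily shrinks the type to satisfy its constraints, with no notion of the visual hierarchy a designer intended.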

For this reason I was impressed with Hailpern et al.’s work at DocEng 2014 on document truncation and pagination for news articles. In these works, the authors’ systems analyze the text of an article to determine pagination and truncation breakpoints in news articles that correspond to natural boundaries in articles between high-level, summary content and more detailed content. This derives from an observation that journalists tend to write articles in “inverted pyramid” style in which the most newsworthy, summary information appears near the beginning with details toward the middle and background info toward the end. This is a critical observation in no small part because it means that popular newswriting bears little resemblance to academic writing. (Perhaps what sets this work apart from others is that the authors employed a basic tenet of human-computer interaction: the experiences of the system developer are a poor proxy for the experiences of other stakeholders.)
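As a toy illustration of the inverted-pyramid observation (and emphatically not Hailpern et al.'s actual method), one could score each paragraph by its term overlap with the headline and truncate where that overlap first drops below a threshold:

```python
def overlap(headline, paragraph):
    """Fraction of headline terms that also appear in the paragraph."""
    h = set(headline.lower().split())
    p = set(paragraph.lower().split())
    return len(h & p) / len(h) if h else 0.0

def truncation_point(headline, paragraphs, threshold=0.25):
    """Index of the first paragraph to drop; len(paragraphs) if all are kept."""
    for i, para in enumerate(paragraphs):
        if overlap(headline, para) < threshold:
            return i
    return len(paragraphs)
```

In an inverted-pyramid article, early paragraphs restate the newsworthy summary and score high, while background paragraphs near the end drift away from the headline's vocabulary, so the cut point tends to fall at a natural summary/detail boundary.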

Foundry, which Retelny et al. presented at UIST 2014, takes an altogether different approach. This system, rather than automating tasks, helps bring diverse experts together in a modular, flexible way. The system helps the user coordinate the recruitment of domain experts into a staged workflow toward the creation of a complex product, such as an app or training video. The tool also allows rapid reconfiguration. One can imagine that this system could be extended to take advantage of not only domain experts but also people with different levels of expertise — some “stages” could even be automated. This approach is somewhat similar to the basic ideas in NudgeCam, in which the system incorporated general video guidelines from video-production experts, templates designed by experts in the particular domain of interest, novice users, and automated post hoc techniques to improve the quality of recorded video.

The goal of most software is to improve a product’s quality as well as the efficiency with which it is produced. We should keep in mind that this is often best accomplished not by systems designed to replace humans but rather by those developed to best leverage people’s tacit knowledge.

Introducing cemint


At FXPAL we have long been interested in how multimedia can improve our interaction with documents, from using media to represent and help navigate documents on different display types to digitizing physical documents and linking media to documents.


In an ACM interactions piece published this month we introduce our latest work in multimedia document research. Cemint (for Component Extraction from Media for Interaction, Navigation, and Transformation) is a set of tools to support seamless intermedia synthesis and interaction. In our interactions piece we argue that authoring and reuse tools for dynamic, visual media should match the power and ease of use of their static textual media analogues. Our goal with this work is to allow people to use familiar metaphors, such as copy-and-paste, to construct and interact with multimedia documents.

Cemint applications will span a range of communication methods. Our early work focused on support for asynchronous media extraction and navigation, but we are currently building a tool using these techniques that can support live, web-based meetings. We will present this new tool at DocEng 2014 — stay tuned!

mVideoCast: Mobile, real time ROI detection and streaming


In the past, media capture and access suffered primarily from a lack of storage and bandwidth. Today networked, multimedia devices are ubiquitous, and the core challenge has less to do with how to transmit more information than with how to capture and communicate the right information. Our first application to explore intelligent media capture was NudgeCam, which supports guided capture to better document problems, discoveries, or other situations in the field. Today we introduce another intelligent capture application: mVideoCast. mVideoCast lets people communicate meaningful video content from mobile phones while semi-automatically removing extraneous details. Specifically, the application can detect, segment, and stream content shown on screens or boards, faces, or arbitrary, user-selected regions. This can allow anyone to stream task-specific content without needing to develop hooks into external software (e.g., screen recorder software).
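One step of such a pipeline, segmenting a bright rectangular region such as a projected screen from a grayscale frame, might be sketched as follows. The thresholding approach and function names are illustrative only; mVideoCast's actual detector is more sophisticated:

```python
def bright_roi(frame, threshold=200):
    """Bounding box (top, left, bottom, right) of pixels >= threshold."""
    rows = [r for r, row in enumerate(frame) if any(v >= threshold for v in row)]
    cols = [c for c in range(len(frame[0]))
            if any(row[c] >= threshold for row in frame)]
    if not rows or not cols:
        return None
    return (rows[0], cols[0], rows[-1], cols[-1])

def crop(frame, box):
    """Extract only the ROI, so extraneous surroundings are never streamed."""
    top, left, bottom, right = box
    return [row[left:right + 1] for row in frame[top:bottom + 1]]
```

Cropping to the detected region before streaming is what lets the application drop extraneous detail at capture time rather than leaving it to the viewer.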

Check out the video demonstration below and read the paper for more details.

the problem with paper


A writer for the TC blog, Erick Schonfeld, recently posted a description of an encounter he had with a Stanford student at a drug store trying to recruit users to experiment with a paper prototype. The prototype and study were being carried out as a requirement for an HCI course the student is taking. The TC writer, in short, found the whole experience ridiculous, especially with respect to all of the whiz-bang, interactive demos he is used to seeing. While, as many point out, paper prototyping is a standard technique in HCI, that does not mean that it always works or is always appropriate. In fact, in my experience with early-stage prototypes I was overall underwhelmed with paper prototyping. But I realized over time that this experience did not reflect a problem with the prototyping tool per se, but rather a lack of understanding of the user’s context.
