
CollaboPlanner @ CSCW 2018


Traveling and visiting new cities is often done in pairs or groups, and searching collaboratively for places of interest is a common activity. It frequently takes place on individual mobile phones or on large tourist-information displays in public places such as visitor centers and train stations.

Prior work suggests that the technologies available to travelers hinder effective collaborative trip planning. Each modality has pros and cons for supporting collaborative decision-making in the tourist context: mobile phones are private and familiar, but they lack screen real estate and are hard to co-reference; large public displays offer more screen space and can bring together content from multiple sources in one place, but they are fixed in position and more visible to others.

We created CollaboPlanner, a collaborative itinerary-planning application that combines mobile interaction with a public display, and evaluated it against third-party mobile apps in a simulated travel-search task to understand how the unique features of mobile phones and large displays might be leveraged together to improve the collaborative travel-planning experience.

We designed CollaboPlanner to support two scenarios: creating travel itineraries with a public display application, and creating travel itineraries with the public display and mobile applications combined.

CollaboPlanner

CollaboPlanner allows users to explore destinations, add them to an itinerary, and see their itinerary visualized on an interactive map. The hybrid version adds a dedicated mobile app that lets users select preferences independently and then send them to the large display for further discussion and decision-making.
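
As a minimal sketch of how such a phone-to-display hand-off might work (assuming the display shows a session code for pairing; the endpoint URLs, message shape, and helper names below are illustrative assumptions, not CollaboPlanner's actual protocol):

```typescript
// Hypothetical sketch of the mobile-to-display hand-off. The endpoint URLs,
// session-code pairing, and message shape are illustrative assumptions,
// not CollaboPlanner's actual protocol.

interface Preference {
  placeId: string;
  name: string;
  interest: number; // e.g. a 1-5 interest level chosen privately on the phone
}

// Mobile side: collect choices privately, then push them to the display
// identified by a session code shown on the public screen.
async function sendPreferences(sessionCode: string, prefs: Preference[]): Promise<void> {
  await fetch(`https://example.com/displays/${sessionCode}/preferences`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(prefs),
  });
}

// Display side: receive preferences from each phone and merge them into
// the shared map view for group discussion.
function listenForPreferences(
  sessionCode: string,
  onPrefs: (prefs: Preference[]) => void
): void {
  const socket = new WebSocket(`wss://example.com/displays/${sessionCode}`);
  socket.onmessage = (event) => onPrefs(JSON.parse(event.data));
}
```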

Our user tests provide initial evidence that while mobile phones are familiar, public displays offer added advantages, both as standalone tools and in combination with a mobile app, in helping travelers collaboratively search unfamiliar environments.

Come see our demo at 6:00 PM on Monday, November 5th to find out more about this system, and to find a restaurant in NYC!

Matthew Lee of FXPAL is also presenting ReflectLive @ CSCW 2018.

ReflectLive


When clinicians communicate with patients via video conferencing, they must not only exchange information but also convey a sense of sympathy, sensitivity, and attentiveness. However, video-mediated communication is often less effective than in-person communication because essential non-verbal behaviors, such as eye contact, vocal tone, and body posture, are hard to convey and perceive. Moreover, non-verbal behaviors that may be acceptable in in-person business meetings, such as looking away at notes, may be perceived as rude or inattentive in a video meeting (patients already feel disengaged when clinicians frequently look at medical records instead of at them during in-person visits).

Prior work shows that in video visits, clinicians tend to speak more, dominating the conversation, and to be less empathetic toward patients, which can lead to poorer patient satisfaction and incomplete information gathering. Further, few clinicians are trained to communicate over video, and many are unaware of how they present themselves to patients on screen.

In our paper, "I Should Listen More: Real-time Sensing and Feedback of Non-Verbal Communication in Video Telehealth", we describe the design and evaluation of ReflectLive, a system that senses and provides real-time feedback about clinicians' communication behaviors during video consultations with patients. Our user tests showed that real-time sensing and feedback have the potential to train clinicians to maintain better eye contact with patients and to be more aware of their non-verbal behaviors.


The ReflectLive video meeting system, with the visualization dashboard on the right showing real-time metrics about non-verbal behaviors. Heather (in the thumbnail) is looking to the left. A red bar flashes on the left of her window as she looks to the side, reminding her that her gaze is not centered on the other speaker. A counter shows the direction and the number of seconds she has been looking away.
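
As a rough sketch of the feedback loop described in the caption (assuming a face-tracking library that reports a normalized horizontal gaze offset for each video frame; the threshold, element IDs, and helpers below are hypothetical, not ReflectLive's actual implementation):

```typescript
// Hypothetical sketch of gaze-away feedback: flash a bar on the side the
// speaker is looking toward and count how long they have looked away.
// Gaze offset is assumed normalized: 0 is centered, -1 far left, +1 far right.

const OFF_CENTER_THRESHOLD = 0.3; // illustrative threshold
let awayStart: number | null = null;

function onFrame(gazeOffset: number, now: number): void {
  const lookingAway = Math.abs(gazeOffset) > OFF_CENTER_THRESHOLD;
  if (lookingAway) {
    if (awayStart === null) awayStart = now;
    const seconds = Math.round((now - awayStart) / 1000);
    const side = gazeOffset < 0 ? "left" : "right";
    flashEdgeBar(side);
    updateCounter(`${seconds}s looking ${side}`);
  } else {
    awayStart = null;
    clearFeedback();
  }
}

// Minimal DOM stand-ins for the dashboard widgets.
function flashEdgeBar(side: "left" | "right"): void {
  document.getElementById(`bar-${side}`)?.classList.add("flash");
}

function updateCounter(text: string): void {
  const counter = document.getElementById("away-counter");
  if (counter) counter.textContent = text;
}

function clearFeedback(): void {
  document.querySelectorAll(".flash").forEach((el) => el.classList.remove("flash"));
  updateCounter("");
}
```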

This paper is published in the Proceedings of the ACM on Human-Computer Interaction. We will present the work at CSCW 2018 in November.

FXPAL at Mobile HCI 2016


Early next week at Mobile HCI, Ville Mäkelä and Jennifer Marlow will present our work on tools we developed at FXPAL to support distributed workers. The paper, "Bringing mobile into meetings: Enhancing distributed meeting participation on smartwatches and mobile phones", presents the design, development, and evaluation of two applications, MixMeetWear and MeetingMate, that aim to help users in non-standard contexts participate in meetings.

The videos below show the basic functionality of the two systems. If you are in Florence for Mobile HCI, please stop by their presentation on Thursday, September 8, in the 2:00-3:30 session (in Sala Verde) to get the full story.

Ciao!

MixMeet: Live searching and browsing


Knowledge work is changing fast. Recent trends in increased teleconferencing bandwidth, the ubiquitous integration of “pads and tabs” into workaday life, and new expectations of workplace flexibility have precipitated an explosion of applications designed to help people collaborate from different places, times, and situations.

Over the last several months the MixMeet team observed and interviewed members of many different work teams in small-to-medium sized businesses that rely on remote collaboration technologies. In work we will present at ACM CSCW 2016, we found that despite the widespread adoption of frameworks designed to integrate information from a medley of devices and apps (such as Slack), employees utilize a surprisingly diverse but unintegrated set of tools to collaborate and get work done. People will hold meetings in one app while relying on another to share documents, or share some content live during a meeting while using other tools to put together multimedia documents to share later. In our CSCW paper, we highlight many reasons for this increasing diversification of work practice. But one issue that stands out is that videoconferencing tools tend not to support archiving and retrieving disparate information. Furthermore, tools that do offer archiving do not provide mechanisms for highlighting and finding the most important information.

In work we will present later this fall at ACM MM 2015 and ACM DocEng 2015, we describe new MixMeet features that address some of these concerns, letting users browse and search the contents of live meetings to rapidly retrieve previously shared content. These features take advantage of MixMeet's live processing pipeline to detect actions users take inside live document streams. In particular, the system monitors text and cursor motion to detect text edits, selections, and mouse gestures. MixMeet applies these extra signals to user searches to improve the quality of retrieved results and to let users quickly filter a large archive of recorded meeting data for relevant information.
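
As a rough illustration of this kind of signal weighting (the segment fields, weights, and scoring below are hypothetical assumptions, not MixMeet's actual pipeline), a search over recorded meeting segments might boost those whose text was edited, selected, or gestured at during the meeting:

```typescript
// Hypothetical sketch of signal-weighted search over recorded meeting
// segments. Field names and boost factors are illustrative assumptions.

interface Segment {
  text: string;      // text captured from the shared document stream
  edited: boolean;   // text was typed or changed while on screen
  selected: boolean; // text was highlighted with the cursor
  gestured: boolean; // the mouse pointed at or circled the region
}

function scoreSegment(segment: Segment, queryTerms: string[]): number {
  if (queryTerms.length === 0) return 0;
  const words = segment.text.toLowerCase().split(/\s+/);
  // Base relevance: fraction of query terms that appear in the segment.
  const matched = queryTerms.filter((t) => words.includes(t.toLowerCase()));
  let score = matched.length / queryTerms.length;
  if (score === 0) return 0;
  // Boost segments the users interacted with during the meeting.
  if (segment.edited) score *= 1.5;
  if (segment.selected) score *= 1.3;
  if (segment.gestured) score *= 1.2;
  return score;
}

function search(segments: Segment[], query: string): Segment[] {
  const terms = query.trim().split(/\s+/);
  return segments
    .map((segment) => ({ segment, score: scoreSegment(segment, terms) }))
    .filter((result) => result.score > 0)
    .sort((a, b) => b.score - a.score)
    .map((result) => result.segment);
}
```

Multiplicative boosts like these keep textual relevance primary while letting interaction signals break ties among segments that match the query equally well.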

In our ACM MM paper (and toward the end of the above video) we also describe how MixMeet supports table-top videoconferencing devices, such as Kubi. In current work, we are developing multiple tools to extend our support to other devices and meeting situations. Publications describing these new efforts are in the pipeline: stay tuned.

Looking for volunteers for collaborative search study


We are about to deploy an experimental system for searching through CiteSeer data. The system, Querium, is designed to support collaborative, session-based search. This means that it will keep track of your searches, help you make sense of what you’ve already seen, and help you to collaborate with your colleagues. The short video shown below (recorded on a slightly older version of the system) will give you a hint about what it’s like to use Querium.


MyUnity, explained


Bill van Melle, Thea Turner, and Eleanor Rieffel contributed to this post

FXPAL’s work on the MyUnity Awareness Platform has received considerable attention from the popular press and the Internet blogosphere in recent weeks, following a nice write-up in MIT’s Technology Review. That article, despite its misleading headline, correctly relays the core motivation for the work: to improve communication among workers in an increasingly fragmented workplace. However, some writers who picked up on that article focused instead on the sensational aspects of having technology monitor people’s behaviors and activities while they are working. They incorrectly described some of the platform’s technical details, overstated what the platform does and what it is able to do with the data it collects, and failed to mention the numerous options we offer users to control their privacy. We thought we should clear up some of these misconceptions and clarify the technical details.


The Map Trap


Are maps better than text for presenting information on mobile devices? That was the question explored by Karen Church, Joachim Neumann, Mauro Cherubini and Nuria Oliver in a paper (about to be) presented at the WWW 2010 conference. They present evidence that, in some cases, a textual display of information supports people's information needs more effectively than a map-based one.

The two interfaces were evaluated over the course of a month of use "in the wild" (but in Ireland, not in Spain). Each participant had access to both interfaces, and was shown how to use them to ask location-specific questions, which would be answered by others nearby. Availability of answers was communicated via SMS messages.


Social work


The slides for our CHI 2010 talk on workplace communication tool use are now available online. In the study, we explored people’s use of workplace communication tools, and found that new tools don’t replace previous ones, that multiple similar tools coexist, and that people’s communication patterns shift over time. Please see Thea’s earlier post for additional details on the research.

Overall, the talk was well-received, but I thought one question from the audience might warrant some additional comments. The question focused on our use of the word “workplace” in the paper (and in the title) while still discussing some aspects of communication that seemed not quite work-like.


Parallels


Aruna Balakrishnan, Tara Matthews and Tom Moran have a paper at CHI 2010 that examines how people used Lotus Activities to structure their interaction with digital artifacts and to help them collaborate. They observed 22 participants over the course of a couple of years to characterize their use of this tool.

Their findings bear interesting similarities to our CHI 2010 paper that described the use of various communication technologies in the workplace.

Exploring workplace communication


Modern work is a collaborative enterprise, and as such it depends on communication among the collaborators to reach successful outcomes. An increasing number of communication tools are based on relatively recent computer technologies, such as email, blogs, wikis, social networking, and Twitter. While there have been many studies of single communication tools in the workplace (IM, wikis, blogging, etc.), we believe we are among the first to take a broad view of the communication landscape since the introduction of these new technologies.

In our paper, to be presented at CHI 2010, we explored the communication ecology of a small business. We examined the work communication practices of our participants, including what methods people used to communicate and why, how they viewed the various methods, and how they adopted them.
