Blog Archive: 2017

ReflectLive


When clinicians communicate with patients via video conferencing, they must not only exchange information but also convey a sense of sympathy, sensitivity, and attentiveness. However, video-mediated communication is often less effective than in-person communication because essential non-verbal behaviors, such as eye contact, vocal tone, and body posture, are hard to convey and perceive. Moreover, non-verbal behaviors that may be acceptable in an in-person business meeting, such as looking away at notes, may be perceived as rude or inattentive in a video meeting (patients already feel disengaged when clinicians frequently look at medical records instead of at them during in-person visits).

Prior work shows that in video visits, clinicians tend to speak more, dominating the conversation and showing less empathy toward patients, which can lead to lower patient satisfaction and incomplete information gathering. Further, few clinicians are trained to communicate over video, and many are not fully aware of how they present themselves to patients during a video visit.

In our paper, I Should Listen More: Real-time Sensing and Feedback of Non-Verbal Communication in Video Telehealth, we describe the design and evaluation of ReflectLive, a system that senses and provides real-time feedback about clinicians’ communication behaviors during video consultations with patients. Our user tests showed that real-time sensing and feedback has the potential to train clinicians to maintain better eye contact with patients and to be more aware of their non-verbal behaviors.


The ReflectLive video meeting system, with the visualization dashboard on the right showing real-time metrics about non-verbal behaviors. Heather (in the thumbnail) is looking to the left. A red bar flashes on the left edge of her window as she looks to the side, reminding her that her gaze is not centered on the other speaker, and a counter shows how many seconds she has been looking away and in which direction.
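To make that behavior concrete, here is a minimal sketch of how an off-center gaze estimate could drive the flashing edge bar and away-time counter. This is an illustration only, not the published ReflectLive implementation: the normalized gaze offset, the 0.25 threshold, and all names here are assumptions, and the offset itself would come from whatever face-tracking library the video client uses.

```typescript
// Hypothetical sketch of ReflectLive-style gaze feedback (not the published implementation).
// Assumes a face-tracking library supplies a normalized horizontal gaze offset per frame:
// 0 means looking at the center of the remote speaker's window, negative is left, positive is right.

type GazeDirection = "centered" | "left" | "right";

const AWAY_THRESHOLD = 0.25; // assumed offset beyond which gaze counts as "away"

function classifyGaze(offset: number): GazeDirection {
  if (offset < -AWAY_THRESHOLD) return "left";
  if (offset > AWAY_THRESHOLD) return "right";
  return "centered";
}

// Tracks how long the user has been looking away and which edge bar to flash.
class GazeFeedback {
  private awayStart: number | null = null;

  update(offset: number, nowMs: number): { flashSide: GazeDirection; awaySeconds: number } {
    const direction = classifyGaze(offset);
    if (direction === "centered") {
      this.awayStart = null;
      return { flashSide: "centered", awaySeconds: 0 };
    }
    if (this.awayStart === null) this.awayStart = nowMs;
    return { flashSide: direction, awaySeconds: Math.round((nowMs - this.awayStart) / 1000) };
  }
}

// Example: drive the dashboard from a periodic gaze estimate.
const feedback = new GazeFeedback();
const { flashSide, awaySeconds } = feedback.update(-0.4, Date.now());
console.log(`flash ${flashSide} bar, looking away for ${awaySeconds}s`);
```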

This paper is published in the Proceedings of the ACM on Human-Computer Interaction. We will present the work at CSCW 2018 in November.

DocHandles @ DocEng 2017


The conversational documents group at FXPAL is helping users interact with document content through the interface that best matches their current context, without worrying about the structure of the underlying documents. With our system, users should be able to refer to figures, charts, and sections of their work documents seamlessly in a variety of collaboration interfaces to communicate better with their colleagues.


To achieve this goal, we are developing tools for understanding, repurposing, and manipulating document structure. The DocHandles work, which we will present at DocEng 2017, is a first step in this direction. With DocHandles, a user can type, for example, “@fig2” into a multimedia chat tool to see a list of recommended figures extracted from recently shared documents. In this case, the suggestions correspond to figures labeled “Figure 2” in the documents most recently discussed in the chat, shown along with each document’s filename or title and the figure’s caption. The user can then select the desired figure, which is automatically injected into the chat.
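As a rough illustration of that interaction, the sketch below shows how an “@fig2” handle might be parsed and matched against figures already extracted from recently shared documents, ranked by how recently each document was discussed. The record fields, parsing rule, and ranking are assumptions for illustration, not the actual DocHandles implementation.

```typescript
// Hypothetical sketch of DocHandles-style figure suggestions (not the actual system).
// Assumes figures have already been extracted from recently shared documents into
// simple records; the field names here are illustrative.

interface FigureRef {
  docTitle: string;  // document title or filename
  label: string;     // e.g. "figure 2"
  caption: string;
  sharedAt: number;  // when the containing document was last discussed in the chat
}

// Parse a handle like "@fig2" or "@figure2" into a figure number.
function parseFigureHandle(text: string): number | null {
  const match = text.match(/@fig(?:ure)?\s*(\d+)/i);
  return match ? parseInt(match[1], 10) : null;
}

// Suggest matching figures, most recently discussed documents first.
function suggestFigures(handle: string, figures: FigureRef[]): FigureRef[] {
  const num = parseFigureHandle(handle);
  if (num === null) return [];
  return figures
    .filter(f => f.label.toLowerCase() === `figure ${num}`)
    .sort((a, b) => b.sharedAt - a.sharedAt);
}

// Example: "@fig2" returns "Figure 2" candidates, newest document first.
const candidates = suggestFigures("@fig2", [
  { docTitle: "draft.pdf", label: "figure 2", caption: "System overview", sharedAt: 1700 },
  { docTitle: "notes.docx", label: "figure 2", caption: "User study setup", sharedAt: 1800 },
]);
console.log(candidates.map(f => `${f.docTitle}: ${f.caption}`));
```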

Please come see our presentation in Session 7 (User Interactions) at 17:45 on September 5th to find out more about this system as well as some of our future plans for conversational documents.