Blog Category: multimedia

Open sourcing DisplayCast


Open source plays an important role in a research laboratory like FXPAL. It allows our researchers to focus their energy on their own innovations and build on the efforts of the community. Open source projects thrive when many openly contribute their work for the common good. However, FXPAL has a business imperative to protect its innovations. We believe that we have found the balance between contributing back to the open source community and protecting our innovations.

Thus we are happy to announce that we have open sourced DisplayCast under the liberal New BSD license. DisplayCast is a high-performance screen-sharing system designed for intranets. It supports real-time, multi-user screen sharing across Windows 7, Mac OS X (10.6+), and iOS devices. The technical details of our screen capture and compression algorithms will be presented at the upcoming ACM Multimedia 2012 conference. The source code is hosted on GitHub in two repositories: an Objective-C-based screen capture, playback, and archive component that targets the Apple Mac OS X and iOS platforms, and a .NET/C#-based screen capture and real-time playback component that targets Windows 7.

We hope others find DisplayCast useful and that they will release their own innovations back to the open source community. FXPAL will continue to open source relevant projects in the future.

TalkMiner update


Since its debut a few months ago, TalkMiner has been busily crawling the web and indexing all sorts of talks and lectures. In the meantime, we have been engaging in some self-promotion. As the press release details, we’ve now indexed over 15,000 talks, so there is likely to be something for everyone here, whether you’re into 3D models or big data.

So the next time you think about turning to YouTube for a lecture, try TalkMiner instead. And if you have any comments, or content you’d like to have indexed, let us know.

mVideoCast: Mobile, real time ROI detection and streaming


In the past, media capture and access suffered primarily from a lack of storage and bandwidth. Today, networked multimedia devices are ubiquitous, and the core challenge has less to do with transmitting more information than with capturing and communicating the right information. Our first application to explore intelligent media capture was NudgeCam, which supports guided capture to better document problems, discoveries, or other situations in the field. Today we introduce another intelligent capture application: mVideoCast. mVideoCast lets people communicate meaningful video content from mobile phones while semi-automatically removing extraneous details. Specifically, the application can detect, segment, and stream content shown on screens or boards, faces, or arbitrary, user-selected regions. This allows anyone to stream task-specific content without needing to develop hooks into external software (e.g., screen-recorder software).
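The actual mVideoCast detection pipeline is described in the paper, not here; but the core step of isolating a region of interest before streaming can be sketched in a few lines. The frame representation and the `bounding_box`/`crop_roi` helpers below are invented for illustration and are not part of the real application:

```python
def bounding_box(mask):
    """Given a 2D boolean mask of 'detected' pixels (e.g., from a screen
    or face detector), return the tightest (top, left, bottom, right)
    box around the True cells, or None if nothing was detected."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    if not rows:
        return None  # nothing detected in this frame
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    return (rows[0], cols[0], rows[-1] + 1, cols[-1] + 1)

def crop_roi(frame, box):
    """Crop a frame (a list of pixel rows) to the box, discarding the
    extraneous surroundings before the frame is encoded and streamed."""
    top, left, bottom, right = box
    return [row[left:right] for row in frame[top:bottom]]
```

In a real pipeline the mask would come from per-frame image analysis and the box would typically be smoothed across frames to avoid jitter before cropping.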

Check out the video demonstration below and read the paper for more details.

Lean back with YouTube and Android


Google just released a YouTube remote-control app that lets one seamlessly continue watching a YouTube video from an Android phone on the YouTube Leanback system (and back). Leanback provides a relaxing way to access YouTube content on a large screen, such as a Google TV or a desktop monitor. Leanback continually picks videos from your feed, including your video subscriptions, video rentals, and related videos. The system is designed for minimal user interaction.


NudgeCam.I.Am


Somebody named ITALONSOG posted a video that describes the motivation and some of the approach to real-time video capture implemented in NudgeCam, which John Adcock wrote about earlier. The video, set to Usher’s OMG (feat. Will.I.Am), consists of stills with Spanish text and some nifty “high-tech” backgrounds, and features a mug shot of FXPALer John Doherty.

I don’t know if this means the technology is going viral, but it does seem to have some popular appeal.


Active capture at ACM MM 2010


FXPAL has a few papers appearing at the upcoming ACM Multimedia Conference in Firenze, Italy.  Among them is NudgeCam, which was recently featured in an article on MIT’s Technology Review as noted previously on this very blog.

NudgeCam is an experiment in “active capture”. Media capture (in this case, photos and videos) is enhanced by providing a template of elements to capture along with real-time interactive tips to improve the quality of each shot or clip. The template allows the author to ensure that essential story components are captured, and the real-time feedback helps ensure that those parts are of high quality. Together, they streamline the creation of a high-quality result.
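The paper gives the full mechanism; as a rough illustration only, a capture template might amount to a checklist of required shots, each with a quality check evaluated against capture metadata. Every name below (`TEMPLATE`, `review`, the metadata keys) is hypothetical, not taken from NudgeCam:

```python
# Hypothetical "active capture" template: a list of required story
# elements, each paired with a quality predicate over shot metadata.
TEMPLATE = [
    ("establishing shot", lambda meta: meta["duration_s"] >= 5),
    ("close-up of face",  lambda meta: meta["faces"] >= 1),
]

def review(captured):
    """Return the template elements that are still missing or that fail
    their quality check, so the UI can nudge the author to reshoot."""
    missing = []
    for name, ok in TEMPLATE:
        meta = captured.get(name)
        if meta is None or not ok(meta):
            missing.append(name)
    return missing
```

For example, if only a 7-second establishing shot has been captured, `review` would flag the face close-up as still outstanding; an empty capture set would flag both elements.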

The author, Scott Carter, will be presenting this work on Tuesday, October 26th in Session S1 at ACM Multimedia in Firenze, Italy.

See you there!


Nudging the world toward better pictures and video


An excellent article on FXPAL’s NudgeCam application recently appeared in MIT’s Technology Review. NudgeCam encapsulates standard video-capture heuristics, such as how to frame a face and what good brightness looks like. Using image-analysis techniques such as face recognition, it guides users as they shoot video, suggesting how to adjust the camera to improve the capture.
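To make one such heuristic concrete, here is a minimal sketch of a brightness nudge: compute the mean luminance of a frame and suggest an adjustment when it falls outside an acceptable band. The function name, thresholds, and messages are all assumptions for illustration, not NudgeCam’s actual values:

```python
def brightness_nudge(pixels, low=60, high=200):
    """Given a frame's grayscale pixel values (0-255), return a
    user-facing hint when mean luminance is outside [low, high],
    or None when no nudge is needed. Thresholds are illustrative."""
    mean = sum(pixels) / len(pixels)
    if mean < low:
        return "Scene looks too dark: add light or move toward a light source"
    if mean > high:
        return "Scene looks too bright: reduce exposure or reframe"
    return None  # brightness is acceptable, stay quiet
```

The same shape works for other heuristics (face framing, steadiness): analyze the frame, compare against a target band, and emit a short actionable tip only when the shot drifts out of range.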

For its size, FXPAL has a surprising breadth and variety of expertise. The NudgeCam work resulted from a collaboration between Scott Carter, whose expertise is in mobile and ubiquitous computing, and John Doherty, our multimedia specialist, who knows all the standard video-capture heuristics and many more. John Adcock brought image-analysis techniques to the team, and 2009 FXPAL summer intern Stacy Branham contributed her human-computer interaction expertise.

A different application, also developed at FXPAL, supports rephotography in an industrial setting. Rephotography is the art of taking a photograph from the same location and angle as a previous photograph.

Streaming media and users


Just a short note to point at two articles on Facebook that discuss issues relating to streaming media and the home. It is a continuing frustration that the vendors are not building the open environment we all want. No surprise there. But it is interesting that even when a vendor (Apple) has many of the required pieces, it does not put them together well.

First note: my posting about the Sonos system I have installed at home. I am a big fan of Sonos – we now use our iPad sitting by the TV to control it. Now, if we could just get control and interoperation across more devices.

Second note: Surendar Chandra has an interesting take on how Apple has all the pieces needed to make for a better environment but they don’t seem to do it.

pCubee: an interactive cubic display


Our friend Takashi Matsumoto (who built the Post-Bit system with us here at FXPAL) built a cubic display called Z-agon with colleagues at the Keio Media Design Laboratory. Takashi points us at this video of a very nicely realized cubic display (well, five-sided, but still). It’s called pCubee: a Perspective-Corrected Handheld Cubic Display, and it comes from the Human Communications Technology Lab at the University of British Columbia. Some of you may have seen a version of this demoed at ACM Multimedia 2009; it will also be at CHI 2010. A longer and more detailed video is here.

Choose Your Own Adventure


Historically, the Hypertext research community is an intertwingling (a Ted Nelson-logism) of three distinct strands — structural computing, interaction, and HT literature, which could be mapped, roughly, onto the engineers, the HCI folk, and the humanists. While engineering and HCI aspects were somewhat necessary for HT literature, the focus, by definition, has been on exploring the boundaries of electronic literature. In the end, I think, it’s good writing that makes hypertext literature interesting much more so than clever interaction. In fact, the electronic component is often not necessary at all: see If On a Winter’s Night a Traveler, for example.

But there is room for beauty in interaction as well. Thanks to Mark Bernstein of Eastgate, I came across a beautiful set of visualizations of the narrative structure of CYOA, a series of hypertext books for children. Through a variety of charts and graphs like the one shown here, the author of these diagrams conveys the many alternate paths through each story in the collection, and uses these visuals to compare, analyze, and appreciate the books. And don’t forget the animations, accessible through a link near the top of the page.

My retelling won’t do it justice; take a look for yourself, and think about these designs next time you’re building a slide deck.

Finally, since these stories are now available as Kindle editions, it would in principle be possible to collect the actual reading paths that readers take through the works and subject them to the same analyses. What sorts of hypotheses about reading, personality, and interaction could we test with such data?
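To make that last idea slightly more concrete: if each book is modeled as a directed graph of page-to-page choices, counting the distinct reading paths that reach a given ending is a small recursive exercise. The toy story graph below is entirely invented; a real analysis would build the graph from the books themselves:

```python
# A toy branching narrative: each node maps to its successor pages.
# Nodes with no successors are endings.
STORY = {
    "start":    ["cave", "forest"],
    "cave":     ["treasure", "trap"],
    "forest":   ["trap"],
    "treasure": [],
    "trap":     [],
}

def paths_to(ending, node="start"):
    """Count the distinct reading paths from `node` to `ending`
    by summing path counts over each choice the reader can make."""
    if node == ending:
        return 1
    return sum(paths_to(ending, nxt) for nxt in STORY.get(node, []))
```

In this toy graph two different choice sequences lead to the "trap" ending but only one reaches the "treasure", the kind of asymmetry the CYOA visualizations make visible at a glance.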