We are looking for an intern to work with us this summer in the area of social media analysis. The project will involve understanding and mining patterns within Twitter data, in both text and images. An ideal candidate is a PhD student with strong machine learning skills. Prior experience in image understanding, text data mining, social network analysis, or statistical modeling is a plus. If you are interested in this project, please send your CV to Dhiraj email@example.com or Francine firstname.lastname@example.org.
Blog Archive: 2013
Open source plays an important role in a research laboratory like FXPAL. It allows our researchers to focus their energy on their own innovations and build on the efforts of the community. Open source projects thrive when many openly contribute their work for the common good. However, FXPAL has a business imperative to protect its innovations. We believe that we have found the balance between contributing back to the open source community and protecting our innovations.
Thus we are happy to announce that we have open-sourced DisplayCast under a liberal New BSD license. DisplayCast is a high-performance screen sharing system designed for intranets. It supports real-time, multi-user screen sharing across Windows 7, Mac OS X (10.6+), and iOS devices. The technical details of our screen capture and compression algorithms will be presented at the upcoming ACM Multimedia 2012 conference. The source code is hosted on GitHub in two repositories: an Objective-C-based screen capture, playback, and archive component that targets Apple's Mac OS X and iOS platforms, and a .NET/C#-based screen capture and real-time playback component that targets Windows 7.
We hope others find DisplayCast useful and that they will release their own innovations back to the open source community. FXPAL will continue to open source relevant projects in the future.
Since its debut a few months ago, TalkMiner has been busily crawling the web and indexing all sorts of talks and lectures. In the meantime, we have been engaging in some self-promotion. As the press release details, we have now indexed over 15,000 talks, so there is likely to be something for everyone here, whether you're into 3D models or big data.
FXPAL would like to hire a summer intern who wants to work on Android internals. In particular, we are talking with a vendor about an innovative pen/touch interface sensor, and we are exploring how to effectively support pen and multi-touch interface controls on forthcoming tablets using it. While Android (Gingerbread) has some interesting touch events, this hardware provides some information that is not passed through to the Android event system. Depending on exactly what happens in Honeycomb, we are thinking about modifying the device driver and event-mapping code to expose some of this additional device information. It would then be reflected in an application we are developing that combines pen and touch inputs in novel ways.
While it would be nice to find someone with Android source code experience, we would be happy to offer an internship to an experienced OS developer (Unix?) who is interested in learning Android.
Please see the FXPAL internship page for more information about applying, and do not hesitate to ask me any questions you might have about this project. Please disregard the January application time-frame.
I am pleased to announce that we are releasing a version of the reverted indexing framework as open source software! The release includes the framework and an implementation in Lucene.
Reverted indexing is an information retrieval technique for query expansion, relevance feedback, and a variety of other operations. The details are described on our web site, in several posts on this blog, and in our CIKM 2010 paper. The source code and JAR file can be downloaded from the Reverted Indexing page; see the Javadocs for details of the API.
It’s intern time again! I am looking for someone to help me run an exploratory study of a collaborative, session-based search tool that I’ve been building over the last few months. Session-based search frames information seeking as an on-going activity, consisting of many queries on a particular topic, with searches conducted over the course of hours, days, or even longer. Collaborative search describes how people can coordinate their information-seeking activities in pursuit of a common goal.
The intern for this project will help frame a set of research questions around collaborative, session-based search, and then take the lead on an experiment to gain insight into this rich space and to help understand how to improve our search tool. The intern will also participate in writing up this work for publication at a major conference such as CHI, CSCW, JCDL, etc.
One of FXPAL’s papers at the ACM Multimedia conference this year describes FACT, an interactive paper system for fine-grained interaction with documents. The FACT system consists of a small camera-projector unit, a laptop, and ordinary paper documents. The system works as follows: a user makes pen gestures on a paper document in the view of the camera-projector unit. FACT processes these gestures to select fine-grained content and to apply various digital functions. For example, the user can choose individual words, symbols, figures, and arbitrary regions for keyword search, copy and paste, web search, and remote sharing. FACT thus enables a computer-like user experience on paper. This paper interaction can be integrated with laptop interaction for cross-media manipulations on multiple documents and views. FACT can be used in application areas such as document manipulation, map navigation, and remote collaboration.
In the past, media capture and access suffered primarily from a lack of storage and bandwidth. Today networked, multimedia devices are ubiquitous, and the core challenge has less to do with how to transmit more information than with how to capture and communicate the right information. Our first application to explore intelligent media capture was NudgeCam, which supports guided capture to better document problems, discoveries, or other situations in the field. Today we introduce another intelligent capture application: mVideoCast. mVideoCast lets people communicate meaningful video content from mobile phones while semi-automatically removing extraneous details. Specifically, the application can detect, segment, and stream content shown on screens or boards, faces, or arbitrary, user-selected regions. This can allow anyone to stream task-specific content without needing to develop hooks into external software (e.g., screen recorder software).
Check out the video demonstration below and read the paper for more details.
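The paper and video describe how mVideoCast actually detects screens and boards; purely as a toy illustration of that detection step (not FXPAL's algorithm), a naive approach is to threshold a grayscale frame for bright pixels and take their bounding box. All names and values below are hypothetical.

```python
def detect_bright_region(gray, thresh=200):
    """Return the (x0, y0, x1, y1) bounding box of pixels brighter than
    `thresh`, or None if there are none. A real system would instead fit
    a quadrilateral to the region and track it across frames."""
    xs, ys = [], []
    for y, row in enumerate(gray):
        for x, v in enumerate(row):
            if v > thresh:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return min(xs), min(ys), max(xs), max(ys)

# Synthetic 8x8 grayscale "frame" with a bright 4x3 patch.
frame = [[0] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(3, 7):
        frame[y][x] = 255

print(detect_bright_region(frame))  # (3, 2, 6, 4)
```

In practice this would run per frame on the camera feed, with the detected region cropped and streamed; the brightness threshold stands in for the more robust segmentation the paper describes.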
Bill van Melle, Thea Turner, and Eleanor Rieffel contributed to this post
FXPAL’s work on the MyUnity Awareness Platform has received considerable attention from the popular press and the Internet blogosphere in recent weeks, following a nice write-up in MIT’s Technology Review. That article, despite its misleading headline, correctly relays the core motivation for the work: to improve communication among workers in an increasingly fragmented workplace. However, some writers who picked up on that article focused instead on the sensational aspects of having technology monitor people’s behaviors and activities while they are working. They incorrectly described some of the platform’s technical details, overstated what the platform does and what it is able to do with the data it collects, and failed to mention the numerous options we offer users to control their privacy. We thought we should clear up some of these misconceptions and clarify the technical details.
Here are the slides from our talk at CIKM 2010 last week. More details on reverted indexing can be found in an earlier post and on the FXPAL site, the full paper is available here, and the previous post describes why the technique works. The contribution of the paper can be summarized as follows:
We treat query result sets as unstructured text “documents” — and index them.
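To make that one-sentence summary concrete, here is a minimal in-memory sketch of the idea (the released framework is Lucene-based; the toy corpus, names, and scoring below are illustrative assumptions, not the paper's implementation). Each basic query's result set becomes a pseudo-document whose "terms" are document IDs, and querying that reverted index with document IDs yields candidate expansion terms.

```python
from collections import defaultdict

# Toy corpus (hypothetical).
docs = {
    "d1": "open source screen sharing",
    "d2": "screen capture and compression",
    "d3": "open source search framework",
}

# Forward step: an ordinary inverted index maps terms to documents.
inverted = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        inverted[term].add(doc_id)

# Reverted step: index each one-term query's result set as a
# pseudo-document whose "terms" are the retrieved document IDs.
reverted = defaultdict(set)
for query_term, result_docs in inverted.items():
    for doc_id in result_docs:
        reverted[doc_id].add(query_term)

def expand(seed_docs, original_terms):
    """Relevance feedback: query the reverted index with document IDs
    and rank candidate expansion terms by how many seed documents each
    one retrieves (ties broken alphabetically)."""
    scores = defaultdict(int)
    for doc_id in seed_docs:
        for term in reverted[doc_id]:
            if term not in original_terms:
                scores[term] += 1
    return sorted(scores, key=lambda t: (-scores[t], t))

# Feedback on the two documents matching "open source".
print(expand({"d1", "d3"}, {"open", "source"}))
# ['framework', 'screen', 'search', 'sharing']
```

The same machinery supports query expansion and feedback uniformly: once result sets are indexed as documents, any retrieval operation over documents applies to queries as well.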