Happy to note that our overview paper on the Virtual Factory work, “The Virtual Chocolate Factory: Building a mixed-reality system for industry,” has been accepted at IEEE’s ICME 2010. The conference is in Singapore in July; I’ll be there, co-chairing a session that focuses on workplace use of virtual worlds, augmented reality, and telepresence. You can see more on the Virtual Factory work here.
Apple.com has a lovely article here on the iPhone app we built so our collaborators at TCHO could monitor and control their chocolate lab machines remotely. This work is part of our explorations in mixed reality for industrial enterprises, in particular the Virtual Factory project. Below you can see a few screenshots from the iPhone lab app.
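To give a flavor of what remote machine monitoring involves, here is a toy sketch of the idea in Python. Everything in it is invented for illustration: the machine names, the temperature field, and the thresholds are hypothetical, not taken from the TCHO app, which in reality talks to the lab machines over the network.

```python
# Illustrative only: a toy version of remote machine monitoring.
# Machine names, fields, and limits are invented, not from the TCHO app.

def check_machine(status, limits):
    """Compare one machine's reported temperature against its allowed
    range; return an alert string, or None if the reading is in range."""
    low, high = limits
    temp = status["temp_c"]
    if temp < low:
        return f"{status['name']}: temperature {temp} C below minimum {low} C"
    if temp > high:
        return f"{status['name']}: temperature {temp} C above maximum {high} C"
    return None

# In a real app these readings would arrive from the machines over the
# network; here we hard-code two sample statuses.
statuses = [
    {"name": "melanger-1", "temp_c": 45.0},
    {"name": "tempering-2", "temp_c": 29.5},
]
alerts = [a for a in (check_machine(s, (30.0, 50.0)) for s in statuses) if a]
print(alerts)  # only tempering-2 is out of range
```

A phone client would then poll for these statuses and surface the alerts, with a similar path in reverse for control commands.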
One highly inconvenient thing about working with virtual worlds or 3D content in general is: where do your 3D models come from (especially if you’re on a budget)? A talented but (inevitably) overworked 3D artist? An online catalog of variable quality and cost? Messing around yourself with tools like SketchUp or Blender? What if you want something very specific, very quickly? The MIR (Mixed and Immersive Realities) team here at FXPAL is very interested in these questions and has done some work in this area. Others are working on it too: here’s an elegant demo from Qi Pan at the University of Cambridge, showing the construction of a model with textures from a webcam image:
We’re looking forward to participating in ARdevcamp the first weekend in December. It’s being organized in part by Damon Hernandez of the Web3D Consortium, Gene Becker of Lightning Labs, and Mike Liebhold of the Institute for the Future (among others – it’s an unconference, so come help organize!) So far, there are ~60 people signed up; I’m not sure what the capacity will be, but I’d sign up soon if you’re interested. You can add your name on the interest list here.
From the wiki:
The first Augmented Reality Development Camp (AR DevCamp) will be held in the SF Bay Area December 5, 2009.
After nearly 20 years in the research labs, Augmented Reality is taking shape as one of the next major waves of Internet innovation, overlaying and infusing the physical world with digital media, information and experiences. We believe AR must be fundamentally open, interoperable, extensible, and accessible to all, so that it can create the kinds of opportunities for expressiveness, communication, business and social good that we enjoy on the web and Internet today. As one step toward this goal of an Open AR web, we are organizing AR DevCamp 1.0, a full day of technical sessions and hacking opportunities in an open format, unconference style.
AR DevCamp: a gathering of the mobile AR, 3D graphics and geospatial web tribes; an unconference:
Timing: December 5th, 2009
Location: Hacker Dojo in Mountain View, CA
Looks like there will be some simultaneous ARdevcamp events elsewhere as well – New York and Manchester events are confirmed; Sydney, Seoul, Brisbane, and New Zealand events possible but unconfirmed.
The next talk in the Bay Area Mathematical Adventures series is this Friday. Robert Bryant, the current director of MSRI, will speak on “Rolling and Tumbling—The idea of Holonomy.” It sounds like a fun talk; he’ll illustrate it with “everyday and some not-so-everyday toys.”
I’ve posted the slides from my Bay Area Mathematical Adventures talk last month, “From Photographs to Models: The Mathematics of Image-Based Modeling.” I blogged about that experience here. I had hoped to post a link to the video at the same time, but it isn’t ready yet. I never feel that a talk is fully captured by the slides alone, especially one that was designed to be interactive, so I will post a link to the video once it is up.
I’d be tempted to go to Bryant’s talk except that I’m singing that night. Two FXPAL folks, Bill van Melle and I, sing in the 40-voice Bay Choral Guild. We have concerts Friday, Saturday, and Sunday at various Bay Area locations. Come if you are in the area and would enjoy a concert of festive Baroque choral works performed by our excellent group together with an outstanding group of soloists and musicians!
FXPAL’s Pantheia system enables users to create virtual models by ‘marking up’ a physical scene with pre-printed visual markers and then taking pictures. The meanings associated with the markers come from a markup language that enables users to specify geometric, appearance, or interactive aspects of the model that are then used by the system to construct the model. Our “Marking up the World” video appeared at ACM Multimedia this week. In the video you can see how our system works, our viewer features, and a selection of the spaces and objects we have used the system to reconstruct.
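The core idea — bind each printed marker to a meaning, then let detected marker positions drive reconstruction — can be sketched in a few lines of Python. This is a loose illustration of the concept only; the marker vocabulary, roles, and data structures below are invented and are not Pantheia’s actual markup language.

```python
# Hypothetical sketch of marker-based markup: each marker ID is bound to
# a geometric, appearance, or interactive meaning, and detections of those
# markers in photographs drive construction of the model.

MARKER_MEANINGS = {
    17: {"role": "corner"},                               # vertex of a planar surface
    42: {"role": "texture", "value": "brick"},            # appearance attribute
    99: {"role": "link", "value": "http://example.com"},  # interactive attribute
}

def build_surface(detections):
    """Turn (marker_id, (x, y, z)) detections into a simple surface
    description: corner positions plus appearance/interaction attributes."""
    surface = {"corners": [], "attributes": {}}
    for marker_id, position in detections:
        meaning = MARKER_MEANINGS.get(marker_id)
        if meaning is None:
            continue  # unknown marker: ignore it
        if meaning["role"] == "corner":
            surface["corners"].append(position)
        else:
            surface["attributes"][meaning["role"]] = meaning["value"]
    return surface

# Four corner markers outlining a unit square, plus one texture marker.
detections = [
    (17, (0.0, 0.0, 0.0)),
    (17, (1.0, 0.0, 0.0)),
    (17, (1.0, 1.0, 0.0)),
    (17, (0.0, 1.0, 0.0)),
    (42, (0.5, 0.5, 0.0)),
]
surface = build_surface(detections)
print(len(surface["corners"]))  # 4 corner positions recovered
print(surface["attributes"])    # {'texture': 'brick'}
```

The real system, of course, also has to detect the markers in images and recover their 3D positions from camera geometry; this sketch only shows how marker meanings map onto model structure.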
Thanks much to Qiong Liu for presenting it, and to John Doherty for putting it together from our clips and for narrating it. The geometric reconstruction work I spoke about last week as part of the Bay Area Mathematical Adventures series was inspired by the issues we discovered while building the system. For more details on our work, see the paper we presented at CGVR ’09, “Interactive Models from Images of a Static Scene.”
We are happy to see that the summer issue of the AIEDAM journal is now published (editors: Ellen Yi-Luen Do and Mark D. Gross). It contains our article on the electronic-paper-based Post-Bits system, “Prototyping a tangible tool for design: Multimedia e-paper sticky notes.”
So, what are Post-Bits? We were looking for new ways to use e-paper, and at the same time, we were (and are) very interested in tangible tools for enhancing all kinds of work. This project started when Takashi Matsumoto interned here at FXPAL. You can see Takashi talking about Post-Bits in the video below the fold:
Last week we had two interesting visitors who each gave a talk in the area of tangible computing. (Briefly, tangible computing explores ways of interacting with computers through real-world physical objects; much more information is available online, including from the Tangible Media Group at the MIT Media Lab.) FXPAL has done a number of tangible interface projects over the years, including the Post-Bits project, the Convertible Podium, and others.
What happens when a high-tech chocolate company meets a high-tech research lab? FXPAL is investigating virtual and mixed reality systems for collaboration and control in industrial settings. We are working with TCHO Inc., a chocolate start-up in San Francisco, to build and instrument virtual representations of a real, working factory and its processes.
Introducing our collaborator: TCHO of San Francisco is a new kind of chocolate company, combining innovative methods and a sense of social mission with a commitment to creating obsessively good dark chocolate. Founded by a Space Shuttle technologist and a grizzled chocolate industry veteran, TCHO’s aim is to create a direct, transparent connection between cacao farmers and consumers, illuminating – and sometimes reinventing – the chocolate production process at every step to the benefit of everyone concerned.
For a process as complex as making great chocolate, this kind of clarity and accuracy is vital. We see this collaboration as a way to apply emerging technologies in clarifying end-to-end industrial production processes. It’s also interesting from a consumer’s perspective: a way to innovate in bringing people closer to the products they consume, through combining industrial process data with social applications like virtual worlds.
We’re experimenting with new technologies for fine-grained monitoring, mobile process control, and real/virtual collaboration, grounded in real users and real-world problems in manufacturing. Along the way, we are finding new applications for existing technologies, gaining insight into the needs of globally distributed systems, and learning how new technologies can map complex, real-world processes.
As our work progresses, we’ll add more specific updates here on the FXPAL blog.