Happy to note that our overview paper on the Virtual Factory work, “The Virtual Chocolate Factory: Building a mixed-reality system for industry,” has been accepted at IEEE’s ICME 2010. The conference is in Singapore in July; I’ll be there, co-chairing a session that focuses on workplace use of virtual reality, augmented reality, and telepresence. You can see more on the Virtual Factory work here.
Blog Archive: 2010
Apple.com has a lovely article here on the iPhone app we built so our collaborators at TCHO could monitor and control their chocolate lab machines remotely. This work is part of our explorations in mixed reality for industrial enterprises, in particular the Virtual Factory project. Below you can see a few screenshots from the iPhone lab app.
One highly inconvenient thing about working with virtual worlds or 3D content in general is: where do your 3D models come from (especially if you’re on a budget)? A talented but (inevitably) overworked 3D artist? An online catalog of variable quality and cost? Messing around yourself with tools like SketchUp or Blender? What if you want something very specific, very quickly? The MIR (Mixed and Immersive Realities) team here at FXPAL is very interested in these questions and has done some work in this area. Others are working on it too: here’s an elegant demo from Qi Pan at the University of Cambridge, showing the construction of a model with textures from a webcam image:
We’re looking forward to participating in ARdevcamp the first weekend in December. It’s being organized in part by Damon Hernandez of the Web3D Consortium, Gene Becker of Lightning Labs, and Mike Liebhold of the Institute for the Future (among others – it’s an unconference, so come help organize!) So far, there are ~60 people signed up; I’m not sure what capacity will be, but I’d sign up soon if you’re interested. You can add your name on the interest list here.
From the wiki:
The first Augmented Reality Development Camp (AR DevCamp) will be held in the SF Bay Area December 5, 2009.
After nearly 20 years in the research labs, Augmented Reality is taking shape as one of the next major waves of Internet innovation, overlaying and infusing the physical world with digital media, information and experiences. We believe AR must be fundamentally open, interoperable, extensible, and accessible to all, so that it can create the kinds of opportunities for expressiveness, communication, business and social good that we enjoy on the web and Internet today. As one step toward this goal of an Open AR web, we are organizing AR DevCamp 1.0, a full day of technical sessions and hacking opportunities in an open format, unconference style.
AR DevCamp: a gathering of the mobile AR, 3D graphics and geospatial web tribes; an unconference:
Timing: December 5th, 2009
Location: Hacker Dojo in Mountain View, CA
Looks like there will be some simultaneous ARdevcamp events elsewhere as well – New York and Manchester events are confirmed; Sydney, Seoul, Brisbane, and New Zealand events possible but unconfirmed.
The next talk in the Bay Area Mathematical Adventures series is this Friday. Robert Bryant, the current director of MSRI, will speak on “Rolling and Tumbling—The idea of Holonomy.” It sounds like a fun talk; he’ll illustrate it with “everyday and some not-so-everyday toys.”
I’ve posted the slides from my Bay Area Mathematical Adventures talk last month, “From Photographs to Models: The Mathematics of Image-Based Modeling.” I blogged about that experience here. I had hoped to post a link to the video at the same time, but it isn’t ready yet. I never feel that a talk is fully captured by the slides alone, especially one that was designed to be interactive. I will post a link to the video once it is up.
I’d be tempted to go to Bryant’s talk except that I’m singing that night. Two FXPAL folks, Bill van Melle and I, sing in the 40-voice Bay Choral Guild. We have concerts Fri, Sat, and Sun at various Bay Area locations. Come if you are in the area and would enjoy a concert of festive Baroque choral works performed by our excellent group together with an outstanding group of soloists and musicians!
A couple of weeks ago I attended the SIAM/ACM Joint Conference on Geometric and Physical Modeling and heard a lovely talk by Richard Riesenfeld. Riesenfeld and his wife Elaine Cohen were this year’s Bézier award winners for their work in computer-aided geometric design (CAGD). He spoke about his correspondence with Bézier and showed us many of the letters they sent back and forth in the early days of CAD/CAM, with their many hand-drawn diagrams and typed text with the math symbols added in by hand. I spent the time marveling at how they managed to have an effective collaboration over such an impoverished communication channel. But even with all of the wonderful 3D rendering capabilities we have today, it is still hard to communicate about 3D objects and spaces over a distance. Having a visual rendering is not sufficient; spatial reasoning requires more. Riesenfeld mentioned Bézier’s view that “touch is more discriminative than eyes.”
No, really, it’s on the TV schedule this time (a couple of weeks ago the show got pre-empted for a pledge drive): You can get a look at our Virtual Factory and some of our molecular dynamics animations on “The Science of Chocolate” which is showing tonight on Channel 9 (in the Bay Area) as part of the KQED Quest series. The story is focused on the hows and whys of chocolate making, not on our Virtual Factory project, but it’s still fun to see some of our work on the air.
All these 3D models and animations were created by FXPAL’s resident Art Guy, Tony Dunnigan, with Sagar Gattepally handling the virtual world construction; the video embedded in-world was shot by John Doherty.
The show is on tonight, June 16 at 7:30 PM on KQED, Channel 9; it will repeat at 1:30 AM Wednesday, June 17; and it should also begin streaming on the KQED web site as of tomorrow. It looks like the “Science of Chocolate” story is one of two stories in this show.
Or, when worlds collage…
In yesterday’s post I promised to discuss my favorite feature in the beta version of the 3D web browser ExitReality. Here it is: as a way to create rich 3D worlds quickly, you can stack worlds and models — and their accompanying scripts and animations — inside each other, all inside one browser window. The ExitReality 3D search provides a rich source of 3D objects and worlds; you simply drag-and-drop them from the search results into your open world-window.
The collage effect is less of a mess than you might expect, despite differing scales and environment settings. Or OK, it’s a mess, but an interesting mess.
Arguably the two most common topics on this blog are search, especially collaborative exploratory search, and virtual worlds. Now the new browser-based 3D platform ExitReality has piqued my curiosity by bringing these topics together. As part of their 3D platform, they offer a search engine optimized for finding and displaying 3D objects and worlds. You can either enter a found 3D source as a world unto itself, or (my favorite) drag-and-drop it into your current 3D space in the browser window. (If you’d like to see the 3D search via a normal 2D web page, that’s available here.)
Note that this 3D search engine searches for 3D objects, models, etc. — it is not something like the SpaceTime browser, which displays standard search results in a 3D(ish) format.
So it was that this morning I found myself standing with a wizard, a Doberman, and a rat on the outskirts of Stonehenge, contemplating several quite nice Moon lander models and a gigantic purple flower (Cattleya — one of the search results for “cat”). All this in the space of ten minutes’ carefree clicking around. Dali would’ve had a ball.
ExitReality’s tag line is “the entire Web in 3D.” The idea is that you can convert your own website to 3D via a fairly simple process — and it’ll still look the same in 2D; you’ve just added a 3D button. In general the interface is very well thought out; where it falters, that’s most likely due to its beta status (e.g., avatars can’t yet fly or change clothes, though you can change avatars).
My second favorite feature so far: when other people visit the web site you’re viewing, you see them as avatars (if they have ExitReality installed). It’s possible to use it in a standalone kiosk mode, or in secure mode behind a firewall. My first favorite feature? Check this space tomorrow for details.
Damon caught me (and several other people) for a short interview that’s now up on his website (and YouTube).