Happy to note that our overview paper on the Virtual Factory work, “The Virtual Chocolate Factory: Building a mixed-reality system for industry,” has been accepted at IEEE’s ICME 2010. The conference is in Singapore in July; I’ll be there, co-chairing a session that focuses on workplace use of virtual realities, augmented reality, and telepresence. You can see more on the Virtual Factory work here.
One highly inconvenient thing about working with virtual worlds or 3D content in general is: where do your 3D models come from (especially if you’re on a budget)? A talented but (inevitably) overworked 3D artist? An online catalog of variable quality and cost? Messing around yourself with tools like SketchUp or Blender? What if you want something very specific, very quickly? The MIR (Mixed and Immersive Realities) team here at FXPAL is very interested in these questions and has done some work in this area. Others are working on it too: here’s an elegant demo from Qi Pan at the University of Cambridge, showing the construction of a model with textures from a webcam image:
We’re looking forward to participating in ARdevcamp the first weekend in December. It’s being organized in part by Damon Hernandez of the Web3D Consortium, Gene Becker of Lightning Labs, and Mike Liebhold of the Institute for the Future (among others – it’s an unconference, so come help organize!) So far, there are ~60 people signed up; I’m not sure what capacity will be, but I’d sign up soon if you’re interested. You can add your name on the interest list here.
From the wiki:
The first Augmented Reality Development Camp (AR DevCamp) will be held in the SF Bay Area December 5, 2009.
After nearly 20 years in the research labs, Augmented Reality is taking shape as one of the next major waves of Internet innovation, overlaying and infusing the physical world with digital media, information and experiences. We believe AR must be fundamentally open, interoperable, extensible, and accessible to all, so that it can create the kinds of opportunities for expressiveness, communication, business and social good that we enjoy on the web and Internet today. As one step toward this goal of an Open AR web, we are organizing AR DevCamp 1.0, a full day of technical sessions and hacking opportunities in an open format, unconference style.
AR DevCamp: a gathering of the mobile AR, 3D graphics and geospatial web tribes; an unconference:
Timing: December 5th, 2009
Location: Hacker Dojo in Mountain View, CA
Looks like there will be some simultaneous ARdevcamp events elsewhere as well – New York and Manchester events are confirmed; Sydney, Seoul, Brisbane, and New Zealand events possible but unconfirmed.
December’s issue of Esquire features augmented reality not only on its cover but also in a couple of places inside. This is not the first instance of AR in print media, of course, but it’s nicely done. I’d love to see this sort of thing make its way into scientific publishing eventually, for 3D and animated illustrations and data visualization. Right now authors can put digital content related to their work out on the web, but it’s an altogether different subjective experience when it’s integrated into the printed object (book, journal, etc.).
Here’s a video tour of the AR in the Esquire issue:
“Print might be in trouble, but Esquire magazine won’t be going gently into that good night. The December issue of the magazine will feature augmented reality pages that will come alive when displayed in front of a webcam.
Augmented reality is a trend and phenomenon we’re starting to see more and more uses of across the web. In March, GE played with augmented reality while showing off its Smart Grid technology. Earlier this month, musician John Mayer released an augmented reality enhanced music video. The Disney.com iPhone app that was released earlier this week also utilizes some AR features.”
A couple of weeks ago I attended the SIAM/ACM Joint Conference on Geometric and Physical Modeling and heard a lovely talk by Richard Riesenfeld. Riesenfeld and his wife Elaine Cohen were this year’s Bézier award winners for their work in computer aided geometric design (CAGD). He spoke about his correspondence with Bézier and showed us many of the letters they sent back and forth in the early days of CAD/CAM, with their many hand drawn diagrams and the typed text with the math symbols added in by hand. I spent the time marveling at how they managed to have an effective collaboration over such an impoverished communication channel. But even with all of the wonderful 3D rendering capabilities we have today, it is still hard to communicate about 3D objects and spaces over a distance. Having a visual rendering is not sufficient. Spatial reasoning requires more. Riesenfeld mentioned Bézier’s view that “touch is more discriminative than eyes.”
FXPAL’s Pantheia system enables users to create virtual models by ‘marking up’ a physical scene with pre-printed visual markers and then taking pictures. The meanings associated with the markers come from a markup language that enables users to specify geometric, appearance, or interactive aspects of the model that are then used by the system to construct the model. Our “Marking up the World” video appeared at ACM Multimedia this week. In the video you can see how our system works, our viewer features, and a selection of the spaces and objects we have used the system to reconstruct.
Thanks much to Qiong Liu for presenting it, and to John Doherty for putting it together from our clips and for narrating it. The geometric reconstruction work I spoke about last week as part of the Bay Area Mathematical Adventures series was inspired by the issues we discovered while building the system. For more details on our work, see the paper we presented at CGVR ’09, “Interactive Models from Images of a Static Scene.”
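To make the markup-language idea above concrete, here is a minimal, hypothetical sketch of how printed marker IDs might map to geometric, appearance, and interactive meanings, and how detected marker positions could drive model construction. All names, IDs, and the data format here are invented for illustration; this is not Pantheia’s actual code or marker vocabulary.

```python
# Hypothetical illustration of a marker markup language: each printed
# marker ID carries a meaning, and detected marker positions are
# interpreted to build up a simple model description.

MARKER_MEANINGS = {
    1: {"role": "wall-corner"},                               # geometric
    2: {"role": "texture", "value": "brick"},                 # appearance
    3: {"role": "hyperlink", "value": "http://example.com"},  # interactive
}

def interpret(detections):
    """Turn (marker_id, x, y, z) detections into model elements.

    Geometric markers contribute positions; appearance and interactive
    markers contribute attributes attached to the model.
    """
    corners, attributes = [], []
    for marker_id, x, y, z in detections:
        meaning = MARKER_MEANINGS.get(marker_id)
        if meaning is None:
            continue  # unknown marker: ignore it
        if meaning["role"] == "wall-corner":
            corners.append((x, y, z))
        else:
            attributes.append(meaning)
    return {"corners": corners, "attributes": attributes}

# Two corner markers and one texture marker, as if detected in a photo:
model = interpret([(1, 0, 0, 0), (1, 4, 0, 0), (2, 2, 1, 0)])
```

In the real system, of course, the marker positions come from computer-vision detection in the user’s photographs rather than from hand-entered coordinates, and the reconstruction handles full 3D geometry rather than a flat list of corners.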
The “Virtual Worlds in 2020” Workshop
Palo Alto, CA
Tuesday, Oct. 13, 2009
From the program description:
This is the 3rd annual “Future of Virtual Worlds” session – the Virtual Worlds in 2020 Workshop. This year it’s an interactive workshop where you can bring ideas, input, and questions for a rare, long term view of virtual worlds, at the Virtual Worlds SIG.
In just a few weeks we enter a new decade equipped with abilities that existed only in science fiction a few years ago. Although plans for using graphical, collaborative virtual worlds predate the internet itself by many years, many advances in productivity remain unclaimed. It’s time now to take a look ahead. This workshop will produce a set of inputs showing what might be possible – along with a list of challenges to be overcome over the next decade.
No, really, it’s on the TV schedule this time (a couple of weeks ago the show got pre-empted for a pledge drive): You can get a look at our Virtual Factory and some of our molecular dynamics animations on “The Science of Chocolate” which is showing tonight on Channel 9 (in the Bay Area) as part of the KQED Quest series. The story is focused on the hows and whys of chocolate making, not on our Virtual Factory project, but it’s still fun to see some of our work on the air.
All these 3D models and animations were created by FXPAL’s resident Art Guy, Tony Dunnigan, with Sagar Gattepally handling the virtual world construction; the video embedded in-world was shot by John Doherty.
The show is on tonight, June 16 at 7:30 PM on KQED, Channel 9; it will repeat at 1:30 AM Wednesday, June 17; and it should also begin streaming on the KQED web site as of tomorrow. It looks like the “Science of Chocolate” story is one of two stories in this show.
Or, when worlds collage…
In yesterday’s post I promised to discuss my favorite feature in the beta version of the 3D web browser ExitReality. This was the discovery: as a way to create rich 3D worlds quickly, you can stack worlds and models — and their accompanying scripts and animations — inside one another, all within a single browser window. The ExitReality 3D search provides a rich source of 3D objects and worlds; you simply drag-and-drop them from the search results into your open world-window.
The collage effect is less of a mess than you might expect, despite differing scales and environment settings. Or OK, it’s a mess, but an interesting mess.
Arguably the two most common topics on this blog are search, especially collaborative exploratory search, and virtual worlds. Now the new browser-based 3D platform ExitReality has piqued my curiosity by bringing these topics together. As part of their 3D platform, they offer a search engine optimized for finding and displaying 3D objects and worlds. You can either enter a found 3D source as a world entire unto itself, or (my favorite) drag-and-drop it into your current 3D space in the browser window. (If you’d like to see the 3D search via a normal 2D web page, that’s available here.)
Note that this 3D search engine is one that searches for 3D objects, models etc., not something like the SpaceTime browser that displays standard search results in a 3D(ish) format.
So it was that this morning I found myself standing with a wizard, a Doberman, and a rat on the outskirts of Stonehenge, contemplating several quite nice Moon lander models and a gigantic purple flower (Cattleya – one of the search results for “cat”). All this in the space of ten minutes’ carefree clicking around. Dalí would’ve had a ball.
ExitReality’s tag line is “the entire Web in 3D.” The idea is you can convert your own website to 3D via a fairly simple process – and it’ll still look the same in 2D; you’ve just added a 3D button. In general the interface is very well thought out – where it falters is most likely due to its beta status (e.g. avatars can’t yet fly or change clothes, though you can change avatars).
My second favorite feature so far: when other people visit the web site you’re viewing, you see them as avatars (if they have ExitReality installed). It’s possible to use in a standalone kiosk mode, or in secure mode behind a firewall. My first favorite feature? Check this space tomorrow for details.