Blog Archive: 2009

Quantum inspired classical results

Comments (7)

In yesterday’s post, I mentioned that one of my favorite topics is classical results informed by the quantum information processing viewpoint. There are now sufficiently many such results that Drucker and de Wolf have written a survey, “Quantum Proofs for Classical Theorems.” I was surprised last month when another such example popped up in one of the biggest cryptographic results of 2009, Craig Gentry’s discovery of a fully homomorphic encryption scheme.


Quantum Computing for Technology Managers

Comments (3)

Wiley’s Handbook of Technology Management, which includes my entry on Quantum Computing, just appeared. I received my tome in the mail today. It is definitely the biggest, weightiest, and most expensive publication I’ve contributed to yet! I was only willing to write the entry if I could also post it on the arXiv. Wiley agreed, so you can find my entry on the arXiv as “An Overview of Quantum Computing for Technology Managers.”

I hope the entry conveys the excitement of the field while eliminating some of the hype. It is focused on what is known about what quantum computers can and cannot do. It does not try to explain how they do what they do. (For that, my tutorial with Wolfgang Polak remains a good starting place.) While the entry discusses well-known aspects of quantum computation, such as Shor’s algorithm, quantum key distribution, and quantum teleportation, it also discusses many lesser-known results, including more recent algorithmic results and established limitations on quantum computation. I had the pleasure of writing about some of my favorite topics in quantum computing, including purely classical results inspired by the quantum information processing point of view, the elegant cluster state model of quantum computation, and Aaronson’s suggestion that limits on computational power be considered a fundamental guiding principle for physical theories, much like the laws of thermodynamics.

Comments and questions welcome!

Oops. Offline for a day.


Sorry for being AWOP (away without posts). We upgraded the underlying OS on this server, and in the process we made the machine non-bootable. It was booting from a logical volume (which is illogical) that was no longer valid after the upgrade. And since this isn’t “mission critical,” we didn’t have a hot spare. So this is now the WordPress backup restored to a wiped-clean and re-installed machine. I think it is close to back to normal. Never upgrade a machine if you don’t remember exactly how and why it was set up the way it was (random chance or old poor decisions?). Thanks to my anonymous colleagues for fixing it.


Best of the Overlooked

Comments (1)

I will wind up this year with a post highlighting some posts that I thought were interesting but that didn’t receive as much traffic as I thought they deserved. These are not the out-of-the-ballpark home run posts (see the sidebar for those) but rather solid hits that, for some reason, didn’t get caught on camera.


Tradeoffs and opportunities

Comments (5)

My interest in photography started in the 1980s with a small disc camera. At some point, my brother, a professional photographer, looked at my pictures and commented: “You shoot in color, but you think in black-and-white. Here’s some TMAX film.”

I started with “digital” photography by paying for high-quality scans of the black-and-white TMAX negatives I shot with my Canon Elan II. The results were mostly good, but the process took a couple of weeks and cost more than I care to remember. At some point, I started using a Canon 20D digital camera with the same lenses. Now all I had to do was copy the files from the flash card to the PC and tinker with them in Photoshop. While this produced much better digital pictures, I had to give up my black-and-white photography because, no matter what I tried, I couldn’t get these images to rival the scanned negatives. The tradeoff between convenience and quality tipped toward convenience.


Reintroducing ReBoard

Comments (2)

ReBoard is a system we built at the lab to automatically capture whiteboard images and make them accessible and shareable through the web. A technical description of the system is available here. At CHI 2010, Stacy Branham will present an evaluation of ReBoard that she conducted over the summer as an intern at FXPAL1.

Until then, check out our dorky demonstration video!

And be sure to watch the other videos of the latest and greatest FXPAL technologies.

1. The paper is “Let’s go from the whiteboard: Supporting transitions in work through whiteboard capture and reuse” by Stacy Branham, Gene Golovchinsky, Scott Carter, and Jacob Biehl.

Never mind about the Turkers, what do YOU think?

Comments (4)

Let’s do an experiment. Here’s a TREC topic that specifies an information need:

Food/Drug Laws

Description: What are the laws dealing with the quality and processing of food, beverages, or drugs?

Narrative: A relevant document will contain specific information on the laws dealing with such matters as quality control in processing, the use of additives and preservatives, the avoidance of impurities and poisonous substances, spoilage prevention, nutritional enrichment, and/or the grading of meat and vegetables. Relevant information includes, but is not limited to, federal regulations targeting three major areas of label abuse: deceptive definitions, misleading health claims, and untrue serving sizes and proposed standard definitions for such terms as high fiber and low fat.

Below are links to four documents that have been identified by some systems as being relevant to the above topic. Are they?

(I apologize in advance for the primitive nature of this form and its many usability defects.)

Turk vs. TREC

Comments (9)

We’ve been dabbling in the world of Mechanical Turk, looking for ways to collect judgments of relevance for TREC documents. TREC raters’ coverage is spotty, since it is based on pooled (and sometimes sampled) documents identified by a small number of systems that participated in a particular workshop. When evaluating our research systems against TREC data from prior years, we found that many of the retrieved documents had not received any judgments (relevant or non-relevant) from TREC assessors. Thus we turned to Mechanical Turk for answers.
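The coverage gap is easy to see concretely. Here is a minimal sketch, assuming the standard TREC file layouts (qrels lines of the form `topic iter docno rel`, run lines of the form `topic Q0 docno rank score tag`; the function names are mine, not from any particular toolkit), that counts how many retrieved documents received no judgment at all:

```python
# Sketch: measure how many documents in a retrieval run have no TREC
# judgment (relevant OR non-relevant) in the qrels file.

def load_qrels(path):
    """Return the set of (topic, docno) pairs that were judged."""
    judged = set()
    with open(path) as f:
        for line in f:
            topic, _iteration, docno, _relevance = line.split()
            judged.add((topic, docno))
    return judged

def unjudged_fraction(run_path, judged):
    """Fraction of retrieved (topic, docno) pairs absent from the qrels."""
    total = missing = 0
    with open(run_path) as f:
        for line in f:
            topic, _q0, docno, *_rest = line.split()
            total += 1
            if (topic, docno) not in judged:
                missing += 1
    return missing / total if total else 0.0
```

When that fraction is large, standard pooled-judgment scoring silently treats all the missing documents as non-relevant, which is exactly the gap crowdsourced judgments can help fill.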


Ghosts of interns past

Comments (4)

In the past 15 years, FXPAL has hosted a large number of interns, many of whom have become (even more) prominent in their fields. We will soon be recruiting a new crop of interns, and I thought it would be interesting to dig around a bit and see what people are up to these days. A few are now employees at FXPAL. What about the others?


Does the CHI PC meeting matter?

Comments (1)

Jofish reports some interesting numbers regarding the role that the associate chairs play in the outcome of CHI paper reviews. He analyzes the CHI 2010 data to reach the following conclusions:

  • Of the 302 submissions accepted, 57 or so were affected by decisions made at the meeting
  • The 1AC (the primary meta-reviewer) was instrumental in getting a paper rejected 31 times, but was not able to prevent rejection 111 times, and represented reviewers’ consensus 1199 times.
  • He also provides some more ammunition for the desk-reject debate.
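To put the first two bullets in proportion (a back-of-the-envelope calculation on the numbers above; it assumes the three 1AC counts cover distinct papers, which the post doesn’t state explicitly):

```python
# Rough proportions from the CHI 2010 numbers reported above.
accepted = 302
decided_at_meeting = 57

# 1AC outcomes: drove a rejection, failed to prevent one, or
# simply reflected the reviewers' consensus.
ac_drove_reject = 31
ac_overruled = 111
ac_consensus = 1199
ac_total = ac_drove_reject + ac_overruled + ac_consensus  # 1341

meeting_share = decided_at_meeting / accepted        # ~0.19
consensus_share = ac_consensus / ac_total            # ~0.89
```

So roughly one in five accepted papers had its fate changed at the meeting, while in nearly nine cases out of ten the 1AC simply mirrored the reviewers.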

It would be great to repeat this analysis on other years to see how reliable the patterns are.

An open question is whether the 57 or so papers whose fates were determined at the PC meeting deserved the outcome they received. (Obviously, the rejected papers’ authors would argue against this process.) It’s also interesting to note that it is not possible to replace the CHI PC process with a rule based on average scores, because both the reviewers and the ACs might then try to game the system by assigning extreme scores to marginal papers.