Well, I’ve been so busy with the conference and squeezing in a bit of sight-seeing that I am only now getting around to posting the recaps of Thursday and Friday. I would say that my first QIP has been a major success for me personally as I learned an awful lot, including the very important differences between physicists and computer scientists. I have a much better appreciation for how the quantum information community as a whole operates, despite my many years working with APS GQI. Anyway, on to the talks.

Thursday morning’s session began with an awesome talk by Aram Harrow (see arxiv.org:0811.3171v3) on work he performed with Avinatan Hassidim and Seth Lloyd on developing a quantum algorithm for solving linear systems of equations. Now this is something just about anyone with an undergraduate background in engineering, math, physics, or computer science – maybe even chemistry or economics – could wrap their head around. Solving linear systems of equations – linear algebra – is one of the most widely used bits of mathematics. For instance, it’s the basis of finite element analysis, which forms the backbone of structural engineering. In any case, the idea is to solve the equation $A\vec{x} = \vec{b}$ (by the way, I highly recommend watching the video if you can, since many of his early slides were hilarious). Here we’re assuming that *A* is an $N \times N$ matrix. If *A* happens to not be Hermitian, then we work instead with the Hermitian embedding

$\tilde{A} = \begin{pmatrix} 0 & A \\ A^{\dagger} & 0 \end{pmatrix}$.

Of course there are an endless number of classical algorithms out there to solve these types of things (just think back to your linear algebra class or google it). Essentially the goal, of course, is to invert *A* in order to find $\vec{x}$. The efficiency of these algorithms can be compared via the condition number $\kappa$ of *A* – the ratio of its largest to smallest eigenvalue in magnitude. So, the quantum algorithm is roughly as follows:
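For orientation, here’s what the classical side of the problem (and the Hermitian embedding trick above) looks like in code. This is a minimal sketch with made-up matrix values, not anything from the talk itself:

```python
# Classical solve of Ax = b plus the Hermitian embedding for a
# non-Hermitian A. Matrix values are made up for illustration.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])            # a non-Hermitian example
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)             # classical direct solve
kappa = np.linalg.cond(A)             # condition number of A

# Embed A in a larger Hermitian matrix C = [[0, A], [A^dagger, 0]];
# solving C y = (b, 0)^T then recovers x in the lower block of y.
C = np.block([[np.zeros((2, 2)), A],
              [A.conj().T, np.zeros((2, 2))]])
assert np.allclose(C, C.conj().T)     # C is Hermitian by construction

y = np.linalg.solve(C, np.concatenate([b, np.zeros(2)]))
assert np.allclose(y[2:], x)          # lower block of y equals x
```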

Suppose $|b\rangle = \sum_i b_i |i\rangle$ is a unit vector, that *A* is sparse, and that the eigenvalues of *A* lie between $\kappa^{-1}$ and $1$. Then we have

$|x\rangle = \frac{A^{-1}|b\rangle}{\lVert A^{-1}|b\rangle \rVert}$,

where the normalization ensures $|x\rangle$ is a proper quantum state.

What this algorithm produces is $|x\rangle$ to within error $\epsilon$ in a much better run-time than classical algorithms. Specifically, its run time is $\tilde{O}(\kappa^2 \log(N)/\epsilon)$ compared to roughly $O(N\sqrt{\kappa})$ classically (assuming I’m reading my notes correctly).
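To make the output concrete, here’s a classical emulation of the state the algorithm aims to prepare – a toy sketch with made-up numbers, not the quantum algorithm itself. The point is that you get $|x\rangle$ as a state you can measure observables on, not as a printed list of amplitudes:

```python
# Classical emulation of the target state |x> = A^{-1}|b> / ||A^{-1}|b>||.
# Values are made up; this is not the quantum algorithm, just its output.
import numpy as np

A = np.array([[1.0, 0.2],
              [0.2, 0.5]])                 # Hermitian toy matrix
b = np.array([0.6, 0.8])                   # |b> as a unit vector
assert np.isclose(np.linalg.norm(b), 1.0)

x_unnorm = np.linalg.solve(A, b)           # A^{-1}|b>
x_state = x_unnorm / np.linalg.norm(x_unnorm)

# The algorithm gives you expectation values <x|M|x> by sampling,
# rather than a direct read-out of the amplitudes of |x>.
M = np.array([[1.0, 0.0],
              [0.0, -1.0]])                # e.g. a Pauli-Z observable
expectation = x_state @ M @ x_state
```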

The advantages of this method come from the relatively small $\kappa$-dependence and the sparseness of *A*. The run-time really can’t be improved by much and so this is pretty much the limit on the optimality of the problem (from what I gathered anyway). The algorithm is based on two key primitives that are what invert the action of *A*: Hamiltonian simulation and phase estimation. Doing so coherently and then measuring an ancilla gives either a failure (destruction – discard and repeat) or a success result. The best lower bound on the run-time with this method is nearly linear in $\kappa$.
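In plain linear algebra (circuits aside), the eigenvalue-inversion idea behind those primitives can be sketched as follows – my own toy numbers, assuming ideal phase estimation and a real symmetric *A*:

```python
# Sketch of the inversion step in linear algebra: phase estimation
# exposes the eigenvalues lambda_j of A, and a conditional rotation puts
# an amplitude C/lambda_j on the ancilla's "success" branch. Measuring
# the ancilla then either succeeds (keep) or fails (discard and retry).
import numpy as np

A = np.array([[1.0, 0.25],
              [0.25, 0.75]])              # Hermitian toy matrix
b = np.array([1.0, 1.0]) / np.sqrt(2.0)   # unit vector |b>

evals, evecs = np.linalg.eigh(A)          # what phase estimation estimates
beta = evecs.T @ b                        # |b> in the eigenbasis of A

C = np.min(np.abs(evals))                 # C <= min|lambda| keeps amps <= 1
success_amps = beta * (C / evals)         # amplitudes on the success branch
p_success = np.sum(success_amps**2)       # probability of the "keep" outcome

# Post-selecting on success leaves exactly the normalized solution state.
x_state = (evecs @ success_amps) / np.sqrt(p_success)
target = np.linalg.solve(A, b)
target /= np.linalg.norm(target)
assert np.allclose(x_state, target)
```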

Now, comparing this to finite element models, this quantum algorithm can apparently be tweaked to be a bit better than the best finite element models out there, e.g. it can solve thermal conductivity in 36 spatial dimensions. Actually, there are in fact several types of solutions for the quantum algorithm that can be used to compare to Monte Carlo methods. Now, interestingly enough, if the $\kappa$-dependence of the run-time could be improved to $\kappa^{1-\delta}$ for some $\delta > 0$, it would apparently imply that BQP = PSPACE, though I’ll readily admit I’m not entirely sure I understand why (but I trust they know what they’re talking about). In addition they also apparently prove that the matrix inversion problem itself is BQP-hard, though again, I’m not sure I understand the details.

One thing they’re looking for is more applications. Luckily, I used to be a mechanical engineer who worked on fluid mechanics. I may have some possibly wicked hard real-world problems that might be a good test for them. For instance, one interesting question Aram suggested is how costly it would be to simulate high-energy theory with low-energy states. Anyway, on to the next few talks, but congrats to Aram on an excellent talk – probably my favorite from the workshop.

Stefano Chesi went next and gave a talk about self-correcting quantum memories in contact with a bath, which is supposed to improve on the Kitaev model. The coupling to the bath was treated in the weak-coupling limit. Further details on this work can be found at arxiv.org:0908.4264. The next talk was by Robert Koenig on quantum computation with Turaev-Viro codes. It was basically cool topological stuff – lots of neat topological diagrams including one I’m calling the upside-down pants diagram. Their paper didn’t have a picture of it, but let’s just put it this way: Todd Brun, who was sitting next to me, noted that it seemed to indicate that crossing one’s legs (which I was doing at the time) is like performing a unitary transformation on a pair of pants (hey, I’m a talented guy). Anyway, there was a lot of cool category and graph theory stuff and it’s a really neat approach to coding. I can’t say I understood every last detail, but it was a cool talk nevertheless.

Mark Howard followed that with some more topology/graph theory stuff relating to stabilizer codes. Basically it was about trying to enable universal quantum computation (UQC) or, put another way, asking how close to that ideal we can get: how much of the quantum computing “power” can we obtain? His work uses a Clifford polytope with a nifty octahedron lattice (while the geek in me should have been thinking it looked like D&D dice, I’ve been playing with Zome Tools recently and kept thinking it looked like something I had built with them). One of his results was that UQC for single-qubit gates reduces to UQC for single-qubit states. Given a noisy operation that violates the relevant bound, for example, his approach can tell you which measurement to perform. At the moment he’s only tackled unital noise, but extending this to non-unital noise would be useful.

Hui Khoon Ng went next with a simple approach to approximate quantum error correction (AQEC). Prior work has mostly been on perfectly correcting the CP noise channel, asking how far a proportionality factor is from being trace-preserving (in fact she later defined “perfect” QEC). So what is AQEC exactly? It’s a smaller system that’s used to encode the information while not sacrificing much resilience against noise. You could think about AQEC as an optimization problem. Semi-definite programming methods are one example, though they give an average performance. What she was interested in was the absolute worst-case performance scenario so that she could come up with something that worked for *all* channels. What AQEC does is perturb the perfect-correction conditions by tossing in an extra term. So, for example (if I’m reading my notes correctly), the conditions take a form along the lines of

$P E_i^{\dagger} E_j P = \lambda_{ij} P + P \Delta_{ij} P$,

where *P* is the code projector, the $\lambda_{ij}$ are related to the minimum eigenvalues, and the perturbation $\Delta_{ij}$ is traceless.

In any case, she obtained a near-optimal recovery map for all channels with the optimality based on the worst-case fidelity (i.e. she used fidelity to identify her worst-case performance scenario for AQEC).
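For contrast, the *perfect* QEC conditions are easy to state concretely: a code with projector $P$ corrects errors $\{E_i\}$ exactly when $P E_i^{\dagger} E_j P = a_{ij} P$ (the Knill–Laflamme conditions); AQEC perturbs the right-hand side. Here’s a toy check of the exact conditions for the three-qubit bit-flip code against single-X errors – my own example, not from the talk:

```python
# Verify the exact Knill-Laflamme conditions P E_i^† E_j P = a_ij P for
# the three-qubit bit-flip code with errors {I, X1, X2, X3}.
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Code projector P onto span{|000>, |111>}
e0 = np.zeros(8); e0[0] = 1.0   # |000>
e7 = np.zeros(8); e7[7] = 1.0   # |111>
P = np.outer(e0, e0) + np.outer(e7, e7)

errors = [kron3(I2, I2, I2), kron3(X, I2, I2),
          kron3(I2, X, I2), kron3(I2, I2, X)]

for Ei in errors:
    for Ej in errors:
        M = P @ Ei.conj().T @ Ej @ P
        a = np.trace(M) / 2.0         # candidate proportionality factor
        assert np.allclose(M, a * P)  # exact correction: M = a_ij P
```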

Following that, Roderich Moessner gave a talk on the statistical mechanics of disordered quantum optimization. Basically – and I find this fascinating – P ≠ NP apparently implies the existence of glassy physical systems. Now *that’s* wild. Now, the main crux of the talk was focused on more graph-theoretic stuff. So, for instance, in applying graph theory to physical systems, Moessner finds that the number of non-zero energy states depends on the properties of the graph. The punchline of his talk was that their methods provide improved lower bounds on the SAT-UNSAT transition (more complexity stuff).

Julia Kempe finished up the afternoon with a talk on a quantum Lovász local lemma. A somewhat minor point I noticed was her “definition” of independent events. I’m still not entirely sure I agree (it’s the cosmologist in me talking). She “defines” two events as independent if there is a non-zero probability that each occurs at some point. There’s something vaguely unsatisfying about this from a cosmological point of view, but I am also being cautious by putting things in quotes since her paper might be more rigorous. At any rate, her motivation, similar to the previous talk, revolved around k-SAT. Unfortunately, sitting in the airport almost two days later, I can’t read the rest of my own chicken scratch on the talk. My apologies to both readers and Julia on this.

The conference dinner was held at a lakeside restaurant. While very good, those of us who were still a bit jetlagged (like the poor Japanese guys next to me who had come in to town a bit later than me) started dozing. The main dish wasn’t served until 9 and dessert wasn’t served until 11. In the midst of this, Gilles Brassard (who has shed his beard) gave a nice historical talk on the BB84 protocol. The gestation of the idea actually dated to 1970 and a guy whose name I have since forgotten (did I mention I was exhausted?). It was a great talk and included plenty of discussion about the failures in getting the idea published. Basically, Gilles said that they couldn’t convince anyone until they built an actual prototype.

Friday morning began with a great talk by Marcin Pawlowski on the principle of information causality. First, let me say that Marcin’s talk was by far one of the absolute funniest at the conference. He began with an awesome overview of just exactly what a physical principle is (this is so interesting from a foundational point of view). In relation to the idea of “reasonability” regarding communication between Alice and Bob, for instance, and relating to Tsirelson’s bound, he really tried to explain why it is the way it is. He then began talking about mutual information and I SWEAR he was inches away from saying what I’ve said in my poster – that mutual information is a statement of the second law, i.e. that the positivity of the mutual information and $\Delta S \ge 0$ (for isolated systems and where *S* is the usual thermodynamic entropy) are really the same thing. Plus he brought causality into it which is another thing I’ve been ranting about for years (not that I’m sold on causality being a necessary condition, just that I know it can be related to this). In regards to information causality, the principle is that the sender decides what information is transmitted. I think I could relate this to my CTC communication protocol too, but I’ll have to think about it some more.

Sergio Boixo and Markus Mueller followed with pretty good foundational talks, though I didn’t take too many notes at this point. Both used the CHSH inequalities (which are good stuff) and Boixo’s work assumed that quantum mechanics is at least good locally. His closing advice was “if you break something, break it big.”
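Since both talks leaned on CHSH, here’s the standard sanity check in code – my own toy calculation showing the quantum value saturating Tsirelson’s bound of $2\sqrt{2}$ (the classical bound being 2):

```python
# CHSH value for a maximally entangled state with the standard optimal
# measurement angles; it saturates Tsirelson's bound 2*sqrt(2).
import numpy as np

def obs(theta):
    # Spin observable cos(theta)*Z + sin(theta)*X, eigenvalues +-1
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

# |Phi+> = (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)

def E(a, b):
    # Correlator <psi| A(a) x B(b) |psi>; equals cos(a - b) here
    return psi @ np.kron(obs(a), obs(b)) @ psi

a0, a1 = 0.0, np.pi / 2         # Alice's two settings
b0, b1 = np.pi / 4, -np.pi / 4  # Bob's two settings
chsh = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
# chsh = 2*sqrt(2) ~ 2.828, beating the classical bound of 2
```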

I, alas, skipped the rest of the talks due to a melted brain and a stubborn insistence that I find Liechtenstein, which I did (I’ll post pictures eventually). I even found a place to stamp my passport (since there are no official border crossings between Switzerland and Liechtenstein) thanks to a friendly truck driver whose wife is from Seattle. I also found the graveyard of famous astronomer Fritz Zwicky, but I don’t think I actually found the real Fritz because there were several Fritz Zwickys who were all about the same age! Every other name in the graveyard (which was the only one in town) was “Zwicky” and the local drug store was “Zwicky’s.” So I apparently missed the actual grave, but saw plenty of his relatives! I then drove up into the mountains until the road I was on was closed due to weather higher up. But I did see some wicked cool stuff up there and will – again – have pictures soon.

So, that was QIP 2010. It was a great experience despite my problems with the Program Committee and I thoroughly enjoyed Switzerland (though I need to bone up on my German for my next visit).

For the record, the guy who started this quantum information thing in 1970 was S. Wiesner, a friend of Charles Bennett.

Stephen Wiesner. I remember now. Thanks for jogging my memory!

Hello … Decoherence is not only a source of noise, it is claimed to have potentially profound implications for issues like why we don’t see continued macro superpositions (Schroedinger’s cat, of course.) That is controversial, but there seems to be wide agreement that decoherence at least converts superpositions into an effective mixture, and “we can’t tell the difference.” If we could, it should impact QIP and not just “quantum philosophy.” At my link I describe a way to recover amplitude information, which would not be possible if the output from a beamsplitter was truly a mixture. Notation is still a bit crude but the point should be readily accessible. Anyone, please take a look.

Not ALL computer scientists are so separated from reality, there are just a lot of theoretical computer scientists at conferences like QIP, where they actually have an excuse not to do any useful experiments. It’s still fairly bad in some CS fields where there’s no excuse, though; it’s so easy to do so many innovative CS experiments on actual computers, but many don’t ever bother.

Setting up innovative physics experiments, on the other hand, can be extremely expensive and time-consuming, so it’s good that physicists are still so interested in experimentation. 🙂

Setting up innovative physics experiments can also be extremely hazardous but extremely fun! 😀 Anyway, it’s really the theoretical computer scientists who, for example, can talk abstractly about programming but couldn’t write a good program to save their life that I’m talking about.