Let me begin by saying that I’m developing a growing dislike of theoretical computer scientists (cue the flaming in three, two, one…). These people don’t really seem to be connected to reality. They discuss, with impunity, results for such outrageous things as T = 0K (any pre-med majors I have taught over the years can explain what’s wrong with this – and, no, it was not a typo) and various parameters (frequently something nebulous like *n*) taken to infinity. While there may be admittedly valid mathematical results in these cases, they have little or no connection to reality. Personally I don’t think people should be working on physics problems if they can’t distinguish the physical from the non-physical (note that I consider CTCs physical, though they are not necessarily real – there is an important distinction). At any rate, I will have another blog post about something related to this in a moment.

So the first talk was by Ben Reichardt from Waterloo on quantum query complexity using span programs. Monotone span programs are equivalent to linear secret sharing schemes in cryptography. In fact he was able to derive Grover’s search algorithm using these span programs. I’m afraid I don’t have much more to say on the issue, though. Ben has a very soothing voice and the room was sweltering, thus I had a hard time keeping my eyes open.

David Gross (not the one with the Nobel Prize – the other one, from Hannover) gave a very well-organized talk (there are so few of these) on non-commutative compressed sensing related to quantum tomography. Suppose you have some unknown d-dimensional density matrix with rank r: how many measurements do you need to reconstruct it? That was the purpose of this work – to answer questions like that. David gave a great analogue to a pipe organ, in which the number of pipes plays the role of the dimension d and the number of fingers the organist has plays the role of the rank r. One of the questions that arises from the pipe organ analogy is whether we can perform an exact Fourier transform given only partial information. Again, the answer is yes – even in the classical (i.e. pipe organ) case, using classical compressed sensing. So the idea in the quantum case is to recover the density matrix even if the rank is unknown. In the quantum case this is possible if we know O(rd log² d) randomly chosen Pauli expectation values. Technically the scheme is probabilistic, but it succeeds with a very high probability. David highlighted the difference between physicists and mathematicians by noting that the mathematical proof of the classical case was 50 pages long while the proof of the quantum case was 4.
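To appreciate what the compressed-sensing result improves on, here’s a toy numpy sketch (mine, not David’s code) of the naive full-basis approach: rebuilding a random low-rank state from *all* d² Pauli expectation values. The compressed-sensing claim is that a rank-r state needs only a small, randomly chosen subset of these; the convex-optimization recovery step that makes that work isn’t shown here.

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

def pauli_basis(n_qubits):
    """All d^2 n-qubit Pauli strings (d = 2^n), built via Kronecker products."""
    ops = []
    for combo in itertools.product(PAULIS, repeat=n_qubits):
        P = np.array([[1.0 + 0j]])
        for p in combo:
            P = np.kron(P, p)
        ops.append(P)
    return ops

def random_rank_r_state(d, r, rng):
    """A random rank-r density matrix (mixture of r random pure states)."""
    vecs = rng.normal(size=(r, d)) + 1j * rng.normal(size=(r, d))
    rho = sum(np.outer(v, v.conj()) for v in vecs)
    return rho / np.trace(rho).real

n, r = 3, 2
d = 2 ** n
rng = np.random.default_rng(0)
rho = random_rank_r_state(d, r, rng)

# Measure every Pauli expectation value and rebuild rho from them, using
#   rho = (1/d) * sum_i Tr(P_i rho) P_i.
basis = pauli_basis(n)
expectations = [np.trace(P @ rho) for P in basis]
rho_rebuilt = sum(e * P for e, P in zip(expectations, basis)) / d

print(np.max(np.abs(rho - rho_rebuilt)))  # essentially zero
```

Note that this needs all 64 expectation values for 3 qubits; the whole point of the talk was that a rank-r state is determined (with high probability) by far fewer, randomly chosen ones.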

I missed most of Norbert Schuch’s talk while finishing up my poster. Then I did manage to catch Dominic Berry’s (Caltech) on the query complexity of Hamiltonian simulation. Basically, if you want to implement arbitrary unitary matrices, you need exponentially many gates just to approximate them – hence the interest in efficiently simulating the special case of evolution under a Hamiltonian. In essence this work, done with Andrew Childs, is an extension of Childs’ quantum random walk stuff.
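For context on why Hamiltonian simulation is the right target, here’s a minimal numpy sketch (my own illustration, nothing to do with Berry’s actual algorithm) of the first-order Lie-Trotter product formula, the grandfather of all these simulation schemes: approximate e^{-i(A+B)t} by alternating small steps of A and B, and the error shrinks as you take more, smaller steps.

```python
import numpy as np

def expm_herm(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

rng = np.random.default_rng(1)

def random_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

d, t = 8, 1.0
A, B = random_herm(d), random_herm(d)
exact = expm_herm(A + B, t)

def trotter(n):
    """First-order Lie-Trotter approximation (exp(-iAt/n) exp(-iBt/n))^n."""
    step = expm_herm(A, t / n) @ expm_herm(B, t / n)
    return np.linalg.matrix_power(step, n)

# Error in spectral norm falls off roughly like 1/n.
errs = [np.linalg.norm(trotter(n) - exact, 2) for n in (4, 16, 64)]
print(errs)
```

The query-complexity results in the talk are about how much better than this naive 1/n scaling one can do.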

Maarten Van den Nest of MPQ Garching gave a talk about simulating quantum computers using probabilistic methods. It was more algorithmic stuff and I can’t say it was overly exciting to me.

I ate lunch with the D-Wave and Google guys – yes, Google is here. Google recently used D-Wave’s chip to do some interesting things and their presentation at QIP centers around using it to, essentially, make Streetview even scarier. Basically they used D-Wave’s chip to train a piece of recognition software to tell the difference between a car and a truck. I’m not kidding. They gave me a demo at lunch using matchbox cars. I might actually get involved in some of this since it uses adiabatic quantum computing, of which I am swiftly becoming a HUGE fan (his Pontiffness welcomed me to the Dark Side).
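As I understand it (and this is my guess at the flavor of the thing, not Google’s actual code), the training step gets phrased as a QUBO – a quadratic cost over binary weights that select weak classifiers – which is exactly the kind of problem an adiabatic machine minimizes. A toy version on made-up car/truck data, brute-forced classically where the chip would do the minimization:

```python
import itertools
import numpy as np

# Made-up data: features are (length, height) in metres; +1 = truck, -1 = car.
X = np.array([[4.5, 1.4], [4.2, 1.5], [6.5, 2.5],
              [7.0, 2.8], [4.8, 1.6], [6.8, 2.6]])
y = np.array([-1, -1, 1, 1, -1, 1])

# A pool of simple threshold "weak classifiers" h_j(x) in {-1, +1}.
def make_stump(feature, threshold):
    return lambda x: 1 if x[feature] > threshold else -1

stumps = [make_stump(f, th) for f in (0, 1) for th in (1.0, 2.0, 5.0, 6.0)]
H = np.array([[h(x) for h in stumps] for x in X])  # H[i, j] = h_j(x_i)

# QUBO: choose binary weights w_j to minimize
#   sum_i (sum_j w_j H[i,j] - y_i)^2 + lam * sum_j w_j,
# a least-squares fit plus a sparsity penalty.  Expanding gives a quadratic
# form in w; here we just brute-force all 2^8 bitstrings.
lam = 0.5
def cost(w):
    return np.sum((H @ w - y) ** 2) + lam * np.sum(w)

best = min(itertools.product([0, 1], repeat=len(stumps)),
           key=lambda w: cost(np.array(w)))
preds = np.sign(H @ np.array(best))
print(best, preds)
```

The optimizer picks out a single stump that separates the classes perfectly on this toy set; the real problem, of course, has far too many weights to brute-force, which is the pitch for the hardware.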

Philippe Corboz (Queensland) gave a very nicely organized talk (I’m starting to notice things like this) on simulating fermionic lattice models in 2-D using tensor network algorithms. As someone else in the audience pointed out, there’s an eerie similarity between some of these methods and categorical quantum mechanics. Anyway, quantum Monte Carlo methods tend to fail for many-body problems and this turns out to be a nice alternative since it allows you to approximate sums over exponentially-many terms with something polynomial. The ground state is the most interesting, of course, though I’ll note that it was Philippe who equated this to T = 0K (tsk, tsk). The problem with the fermionic case is that you end up with negative probabilities in some cases. Tensor networks, however, get around this problem. They also tackled the 2-D Hubbard model of high-Tc superconductors. Tree tensor networks are very good for describing states obeying an area law. The CS people in the audience had no clue about the physics stuff (I really don’t think they should be working on physics problems). The computational cost of tensor networks is still high, but they possess some symmetries that can be exploited for useful purposes.
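The reason tensor networks can get away with polynomially many parameters is that physically relevant states tend to have small Schmidt rank across any cut (that’s the area-law business). A quick numpy illustration of the contrast (my own toy example, not anything from the talk): a GHZ state has Schmidt rank 2 across every bipartition, while a random state is maximally entangled across the middle cut.

```python
import numpy as np

n = 8
d = 2 ** n

# GHZ state (|00...0> + |11...1>)/sqrt(2): low entanglement across any cut.
ghz = np.zeros(d)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

# Random state: generically near-maximal entanglement.
rng = np.random.default_rng(2)
rand = rng.normal(size=d) + 1j * rng.normal(size=d)
rand /= np.linalg.norm(rand)

def schmidt_rank(psi, cut, tol=1e-10):
    """Number of significant Schmidt coefficients across a left/right cut."""
    M = psi.reshape(2 ** cut, -1)           # group qubits left/right of cut
    s = np.linalg.svd(M, compute_uv=False)  # Schmidt coefficients
    return int(np.sum(s > tol))

print([schmidt_rank(ghz, c) for c in range(1, n)])   # 2 at every cut
print([schmidt_rank(rand, c) for c in range(1, n)])  # peaks at 16 mid-chain
```

A tensor network only needs to carry those few Schmidt coefficients per cut, which is where the exponential-to-polynomial compression comes from.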

Hari Krovi (UConn) gave a talk on adiabatic quantum optimization failing for random instances of NP-complete problems, though I heard through the grapevine he wasn’t sure he believed his own results (I suppose one could take that to mean he thought the real world might behave a bit differently than the math). He talked a lot about Anderson localization and had to apologize at one point for not being “rigorous” enough (ah, those CS people…).
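For readers who haven’t seen it, the whole game in adiabatic optimization is the minimum spectral gap of the interpolating Hamiltonian H(s) = (1−s)H_B + sH_P, since the required runtime blows up as that gap closes. A toy numpy scan on a made-up 4-qubit instance (my own sketch, not Hari’s random NP-complete ensembles):

```python
import numpy as np

# Transverse-field "driver" H_B = -sum_i X_i plus a diagonal "problem"
# Hamiltonian H_P encoding a made-up cost function.
n = 4
d = 2 ** n
X = np.array([[0, 1], [1, 0]], dtype=float)

def op_on_qubit(op, i, n):
    """op acting on qubit i, identity elsewhere."""
    M = np.array([[1.0]])
    for q in range(n):
        M = np.kron(M, op if q == i else np.eye(2))
    return M

H_B = -sum(op_on_qubit(X, i, n) for i in range(n))

rng = np.random.default_rng(3)
costs = rng.integers(0, 8, size=d).astype(float)
costs[5] = -1.0                      # a unique ground state to find
H_P = np.diag(costs)

# Scan H(s) = (1-s) H_B + s H_P and record the gap between the two
# lowest eigenvalues.
gaps = []
for s in np.linspace(0, 1, 101):
    w = np.linalg.eigvalsh((1 - s) * H_B + s * H_P)
    gaps.append(w[1] - w[0])

print(min(gaps))  # the minimum gap along the path controls the runtime
```

Hari’s result was, in essence, about how badly this minimum gap behaves (Anderson localization and all) for random instances of hard problems.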

**Correction: **I want to apologize to Hari Krovi for the aforementioned statement. The last line was actually meant to imply that, personally, *I* thought he *was* rigorous enough. At the time I was in a particularly virulent mood about the rigor of the CS people since physicists generally recognize that the real world is a messy place (not to offend CS people – I think the differing views are products of the differing disciplines themselves). Of course, it was apparently a physicist who made the comment that produced Hari’s reply, but I didn’t realize it at the time. In addition, spreading an unsubstantiated rumor is, of course, wrong. As a scientist I’m sure he believes his results. Why else would he present them?

Kristan Temme (Vienna) gave another very well-done talk on quantum metropolis sampling. Basically they were looking at simulating quantum systems on a universal quantum computer. Adiabatic quantum computation is one way to give access to the states. Anyway, the classical Metropolis algorithm (named after the lead author of the paper) appeared in 1953 and didn’t find practical use until the first useful computers arrived. So these guys tried to find a quantum version. In both cases the idea is essentially to simulate a system plus a bath. The classical version used stochastic maps that converged (over many iterations) to the Gibbs state. This improved things by 6–10 orders of magnitude. The quantum case used CP maps instead, though it did introduce a few problems along the way, notably quantum phase transitions that led to imperfections.
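Since the 1953 classical algorithm is the template here, a minimal sketch of Metropolis sampling for a toy 3-spin Ising ring (my own illustration, not from the talk): flip a random spin, accept with probability min(1, e^{−βΔE}), and the chain’s empirical distribution converges to the Gibbs state.

```python
import numpy as np

# Metropolis sampling of a 3-spin Ising ring at inverse temperature beta:
# E(s) = -J * sum_i s_i s_{i+1}, spins s_i = +/-1.
J, beta, n = 1.0, 0.7, 3
rng = np.random.default_rng(4)

def energy(s):
    return -J * sum(s[i] * s[(i + 1) % n] for i in range(n))

# Exact Gibbs distribution over all 2^3 = 8 configurations, for comparison.
configs = [tuple(1 if (k >> i) & 1 else -1 for i in range(n)) for k in range(8)]
weights = np.array([np.exp(-beta * energy(c)) for c in configs])
gibbs = weights / weights.sum()

# Metropolis chain: flip a random spin, accept with prob min(1, e^{-beta dE}).
s = list(configs[0])
counts = np.zeros(8)
for step in range(200_000):
    i = rng.integers(n)
    dE = energy([*s[:i], -s[i], *s[i + 1:]]) - energy(s)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        s[i] = -s[i]
    counts[configs.index(tuple(s))] += 1
empirical = counts / counts.sum()

print(np.abs(empirical - gibbs).max())  # small: chain has converged to Gibbs
```

The quantum version replaces these classical flip-and-accept stochastic maps with completely positive maps on a quantum register, which is where the subtleties Kristan described come in.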

Finally, Sergei Bravyi gave a talk based on the question: is there a fundamental upper bound on the amount of quantum information that can be stored reliably in a volume of two-dimensional space? The main result was that a 2-D quantum code with k encoded qubits and distance d built from n physical qubits obeys the upper bound kd² ≤ cn, while classical codes obey the weaker bound k√d ≤ cn. What’s interesting about this is that, usually, the classical cases for things like this are a square root of the quantum case. Here it’s the fourth root for some reason (they don’t know the reason yet). Oddly, somewhere in the middle they seemed to come close to invoking the Cerf-Adami inequalities that I will be giving a poster on on Thursday.
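For the record, here’s the little bit of arithmetic behind that fourth-root remark, assuming I copied the bounds down correctly (k encoded (qu)bits, distance d, n physical (qu)bits):

```latex
% The 2-D storage tradeoffs as stated in the talk:
\[
  \text{quantum:}\quad k\,d^{2} \le c\,n,
  \qquad
  \text{classical:}\quad k\,\sqrt{d} \le c\,n .
\]
% Solving each for the largest achievable distance at fixed k and n:
\[
  d_{\mathrm{quantum}} \le \Bigl(\tfrac{c\,n}{k}\Bigr)^{1/2},
  \qquad
  d_{\mathrm{classical}} \le \Bigl(\tfrac{c\,n}{k}\Bigr)^{2},
\]
% so the best quantum distance is the fourth root of the best classical one:
\[
  d_{\mathrm{quantum}} \lesssim d_{\mathrm{classical}}^{\,1/4} .
\]
```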

More tomorrow…