Summary of Scott Aaronson’s Quantum Supremacy Lecture

Arnaldo Gunzi
Arnaldo Gunzi Quantum
3 min read · Sep 19, 2021


Scott Aaronson, professor at the University of Texas at Austin, is one of the world’s leading authorities on quantum computing. The following is a summary of his recent lecture on quantum supremacy.

Quantum computing is not about testing all solutions in parallel and taking the best one.

It’s true that superpositions can be created, but eventually you have to measure the result, and a measurement reveals only one of these possibilities at a time.

The only way to get a speedup is to exploit the interference phenomenon, suppressing the amplitudes of the answers we don’t want.

It’s like a choreography in which the desired amplitudes point in the same direction, while the unwanted ones cancel out by interference.

There is a speedup only in special cases where such a choreography is possible.
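To make the interference idea concrete, here is a minimal numerical sketch (my illustration, not from the lecture): applying a Hadamard gate twice returns a qubit to |0>, because the two paths leading to |1> carry opposite amplitudes and cancel.

```python
# Minimal sketch of amplitude interference (illustrative).
import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = np.array([1.0, 0.0])  # start in |0>
state = H @ state             # superposition: amplitudes (0.707, 0.707)
state = H @ state             # the two paths to |1> carry +1/2 and -1/2
                              # and cancel; the paths to |0> reinforce

print(state)                # [1. 0.] -> back to |0> with certainty
print(np.abs(state) ** 2)   # measurement probabilities [1. 0.]
```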

What is quantum supremacy? It would be the first use of a QC for a mathematically well-defined task, performed faster than any classical computer on Earth could manage (not necessarily a useful task).

What is the point of quantum supremacy? It is like asking what the point of the Wright Brothers’ first flight was: to show that something like this is possible.

Beyond probing the laws of physics, it is also a challenge of overcoming technological barriers.

Currently, quantum simulation is the most important application of QC. This follows Richard Feynman’s original 1982 proposal, in which he argued that quantum computers would be good at simulating quantum phenomena.

In the following decades came the results of Bernstein and Vazirani, with some of the first models of quantum computation and algorithms, and of Peter Shor, who showed how a quantum computer could break cryptography.

Aaronson then discussed his own work from 2009: is there any simple way to prove quantum supremacy, even without a full quantum computer?

Proposed methodology: boson sampling. Have the QC sample from a distribution over n-bit strings, namely where a bunch of photons would land after passing through a network of beamsplitters. Note that the problem shifts from factorization, which has a single correct answer, to a sampling problem, which is probabilistic. If a classical computer could produce the same samples in poly(n) time, that would have serious implications for computational complexity theory.

For classical computers this is a hard problem.
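The hardness is not accidental: in Aaronson and Arkhipov’s analysis, each output probability of the photonic network is proportional to the squared permanent of a submatrix of its unitary, and computing permanents is #P-hard. Here is a small sketch of that classical bottleneck, using Ryser’s inclusion-exclusion formula (itself exponential in n):

```python
# Sketch of the classical bottleneck behind boson sampling (illustrative).
# Output probabilities involve matrix permanents; even Ryser's formula
# below takes time exponential in n.
from itertools import combinations
import numpy as np

def permanent(A):
    """Permanent of a square matrix via Ryser's formula."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

# Sanity check: for a 2x2 matrix [[a, b], [c, d]], perm = a*d + b*c.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(permanent(A))  # 1*4 + 2*3 = 10
```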

In 2019, Google published in Nature a claim of quantum supremacy based on a 53-qubit computer (about 9 quadrillion amplitudes, 2^53): a sampling task estimated to take 10,000 years on a classical computer ran in about 3 minutes on the quantum device. A few days later, IBM countered that the biggest supercomputer on Earth, Summit, could run the same experiment in 2.5 days.

In 2020, USTC in China made a claim of quantum supremacy based on a boson sampling experiment with 60 to 70 photons and hundreds of beamsplitters. In addition to the photonic machine, they also built a superconducting-qubit computer, and shortly before this lecture (in 2021) they published a paper reporting an experiment with 60 qubits.

An important question: how do we know the sampled distribution really came from a QC? How could a skeptic check?

One test is to use classical computers as a benchmark and compute the linear cross-entropy of the samples produced by the quantum experiment.

If the samples were generated by something random, the linear cross-entropy score is ~1. If they come from the ideal quantum distribution, it is ~2.
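Here is a toy version of this benchmark (my sketch, assuming an idealized Porter-Thomas output distribution; the real test uses probabilities computed for the actual circuit):

```python
# Minimal sketch of linear cross-entropy benchmarking (XEB).
import numpy as np

rng = np.random.default_rng(0)
n = 10          # toy number of qubits
N = 2 ** n      # number of possible bitstrings

# Porter-Thomas-like ideal probabilities: exponential, then normalized.
p = rng.exponential(scale=1.0 / N, size=N)
p /= p.sum()

def linear_xeb(samples):
    """Average of N * p(x) over the observed samples."""
    return N * p[samples].mean()

samples_ideal = rng.choice(N, size=100_000, p=p)    # ideal "quantum" sampler
samples_random = rng.integers(0, N, size=100_000)   # uniform random guessing

print(linear_xeb(samples_ideal))   # ~2
print(linear_xeb(samples_random))  # ~1
```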

After all, was quantum supremacy achieved or not?
Answer: not clear. One reason is that classical algorithms and hardware keep improving, just as the quantum experiments do. It is better to think not of a single clear-cut event, but of a constant evolution.

Is this supremacy scalable?
Answer: No, at least for now. One limiting factor is error correction. Without error correction, the cross-entropy score drops quickly as circuits grow, nullifying the gain.
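A back-of-the-envelope model of that drop (my assumption of independent per-gate errors, not a result from the lecture): if each gate fails with probability eps, the XEB signal scales roughly like the probability that no gate failed.

```python
# Rough model of XEB decay without error correction (illustrative).
eps = 0.005  # hypothetical per-gate error rate
for gates in [100, 500, 1000, 5000]:
    signal = (1 - eps) ** gates  # probability that every gate succeeded
    print(f"{gates:5d} gates -> XEB signal ~ {signal:.2e}")
```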

The takeaway so far: it has been shown that it is currently possible to control around 60 qubits well enough to achieve good results.

Are quantum supremacy experiments useful?
Not for the time being. There may be some small short-term advantage, but many applications require error correction and fault tolerance.

Most claims that we can use QC for machine learning or optimization in the short term are about 95% bullshit.

What are the next steps?
Better experiments that achieve a higher cross-entropy score. The point is not more qubits, but better accuracy.

Stronger theoretical evidence, from computational complexity, that these tasks really are hard for classical algorithms.

Search for possible applications.

Improve error correction.

Original video:
