Quanta has an article out today about the wormhole publicity stunt, which sticks to the story that by doing a simple SYK model calculation on a quantum computer instead of a classical computer, one is doing quantum gravity in the lab, producing a traversable wormhole and sending information through it. From what I’ve heard, the consensus among theorists is that the earlier Quanta article and video were nonsense, outrageously overhyping a simulation and then bizarrely identifying a simulation with reality if it’s done on a quantum computer.
The new article is just about as hype-laden, starting off with:
A holographic wormhole would scramble information in one place and reassemble it in another. The process is not unlike watching a butterfly being torn apart by a hurricane in Houston, only to see an identical butterfly pop out of a typhoon in Tokyo.
and
In January 2022, a small team of physicists watched breathlessly as data streamed out of Google’s quantum computer, Sycamore. A sharp peak indicated that their experiment had succeeded. They had mixed one unit of quantum information into what amounted to a wispy cloud of particles and watched it emerge from a linked cloud. It was like seeing an egg scramble itself in one bowl and unscramble itself in another.
In several key ways, the event closely resembled a familiar movie scenario: a spacecraft enters one black hole — apparently going to its doom — only to pop out of another black hole somewhere else entirely. Wormholes, as these theoretical pathways are called, are a quintessentially gravitational phenomenon. There were theoretical reasons to believe that the qubit had traveled through a quantum system behaving exactly like a wormhole — a so-called holographic wormhole — and that’s what the researchers concluded.
An embarrassing development provides the ostensible reason for the new article, the news that “another group suggests that’s not quite what happened”. This refers to this preprint, which argues that the way the Jafferis-Lykken-Spiropulu group dramatically simplified the calculation to make it doable on a quantum computer threw out the baby with the bathwater, so was not meaningful. The new Quanta piece has no quotes from experts about the details of what’s at issue. All one finds is the news that the preprint has been submitted to Nature and that
the Jafferis, Lykken and Spiropulu group will likely have a chance to respond.
There’s also an odd piece of identity-free and detail-free reporting that
five independent experts familiar with holography consulted for this article agreed that the new analysis seriously challenges the experiment’s gravitational interpretation.
I take all this to mean that the author couldn’t find anyone willing to say anything in defense of the Nature article. This raises an interesting question: if all experts agree the Nature article was wrong, will it be retracted? Will the retraction also be a cover story?
The update of the original story is framed by enthusiastic and detailed coverage of the work of Hrant Gharibyan on similar wormhole calculations. The theme is that while Jafferis-Lykken-Spiropulu may have hit a bump in the road, claiming to be doing “quantum gravity in the lab” by SYK model calculations on quantum computers is the way forward for fundamental theoretical physics:
The holographic future may not be here yet. But physicists in the field still believe it’s coming, and they say that they’re learning important lessons from the Sycamore experiment and the ensuing discussion.
First, they expect that showing successful gravitational teleportation won’t be as cut and dry as checking the box of perfect size winding. At the very least, future experiments will also need to prove that their models preserve the chaotic scrambling of gravity and pass other tests, as physicists will want to make sure they’re working with a real Category 5 qubit hurricane and not just a leaf blower. And getting closer to the ideal benchmark of triple-digit numbers of particles on each side will make a more convincing case that the experiment is working with billowing clouds and not questionably thin vapors.
No one expects today’s rudimentary quantum computers to be up to the challenge of the punishingly long Hamiltonians required to simulate the real deal. But now is the time to start chiseling away at them bit by bit, Gharibyan believes, in preparation for the arrival of more capable machines. He expects that some might try machine learning again, this time perhaps rewarding the algorithm when it returns chaotically scrambling, non-commuting Hamiltonians and penalizing it when it doesn’t. Of the resulting models, any that still have perfect size winding and pass other checks will become the benchmark models to drive the development of new quantum hardware.
If quantum computers grow while holographic Hamiltonians shrink, perhaps they will someday meet in the middle. Then physicists will be able to run experiments in the lab that reveal the incalculable behavior of their favorite models of quantum gravity.
“I’m optimistic about where this is going,” Gharibyan said.
I had thought that perhaps this fiasco would cause the Quanta editors to think twice, talk to skeptical experts, and re-report the original credulous story/video. Instead, it looks like their plan is to double down on the “quantum gravity in the lab” hype.
A commenter in the previous posting pointed to an interview with Lenny Susskind that just appeared at the CERN Courier, under the title Lost in the Landscape. Some things I found noteworthy:
I can tell you with 100% confidence that we don’t live in that world.
since the real world is non-supersymmetric and dS, not supersymmetric and AdS. He describes this theory as being “a very precise mathematical structure”, which one might argue with.
Something very different is “string theory”:
you might call it string-inspired theory, or think of it as expanding the boundaries of this very precise theory in ways that we don’t know how to at present. We don’t know with any precision how to expand the boundaries into non-supersymmetric string theory or de Sitter space, for example, so we make guesses. The string landscape is one such guess…
The first primary fact is that the world is not exactly supersymmetric and string theory with a capital S is. So where are we? Who knows! But it’s exciting to be in a situation where there is confusion.
Witten, who had negative thoughts about the anthropic idea, eventually gave up and accepted that it seems to be the best possibility. And I think that’s probably true for a lot of other people. But it can’t have the ultimate influence that a real theory with quantitative predictions can have. At present it’s a set of ideas that fit together and are somewhat compelling, but unfortunately nobody really knows how to use this in a technical way to be able to precisely confirm it. That hasn’t changed in 20 years. In the meantime, theoretical physicists have gone off in the important direction of quantum gravity and holography.
The argument seems to be: let’s put a constraint on parameters in cosmology so that we can put de Sitter space in the swampland. But the world looks very much like de Sitter space, so I don’t understand the argument and I suspect people are wrong here.
I had one big negative surprise, as did much of the community. This was a while ago when the idea of “technicolour” – a dynamical way to break electroweak symmetry via new gauge interactions – turned out to be wrong. Everybody I knew was absolutely convinced that technicolour was right, and it wasn’t. I was surprised and shocked.
I remember first hearing about the Technicolor idea around 1979 when Susskind and Weinberg wrote about it. It was a very attractive idea by itself, but the problem was that to match known flavor physics you needed to go to “Extended Technicolor”, which was really ugly (lots of new degrees of freedom, no predictivity). No idea when people supposedly were “absolutely convinced that technicolour was right”, maybe it was for the few months it took them to realize you needed Extended Technicolor.
One extremely interesting idea is “quantum gravity in the lab” – the idea that it is possible to construct systems, for example a large sphere of material engineered to support surface excitations that look like conformal field theory, and then to see if that system describes a bulk world with gravity. There are already signs that this is true. For example, the recent claim, involving Google, that two entangled quantum computers have been used to send information through the analogue of a wormhole shows how the methods of gravity can influence the way quantum communication is viewed. It’s a sign that quantum mechanics and gravity are not so different.
Unclear to me how this enthusiastic reference to the wormholes relates to his much less enthusiastic recent quote in New Scientist:
What is not so clear is whether the experiment is any better than garden-variety quantum teleportation and does it really capture the features of macroscopic general relativity that the authors might like to claim… only in the most fuzzy of ways (at best).
The paper explaining that this Nature cover story, besides being a publicity stunt, was also completely wrong, has so far attracted very little media attention. The first thing I’ve seen came out today at New Scientist, a publication often accused of promoting hype, but in this case so far the only ones reporting problems with the hyped result. The title of the article is Google’s quantum computer simulation of a wormhole may not have worked. It contains an explanation of the technical problems:
The first problem has to do with how the simulated wormhole reacted to the signals being sent through it….Yao and his colleagues found that for each individual test, the system continued to oscillate indefinitely, which doesn’t match the expected behaviour of a wormhole.
The second issue was related to the signals themselves. One of the signatures of a real wormhole – and therefore of a good holographic representation of a wormhole – is that the signal comes out looking the same as it went in. Yao and his team found that while this worked for some signals – those similar to the ones the researchers used to train a machine learning algorithm used to simplify the system – it didn’t work for others.
…it seems that for this particular quantum system, the size winding would disappear if the model was made larger or more detailed. Therefore, the perfect size winding observed by the original authors may just be a relic of the model’s small size and simplicity.
There is a response from Maria Spiropulu:
“The authors of the comment argue about the many-body properties of the individual decoupled quantum systems of our model,” she says. “We observed features of the coupled systems consistent with traversable wormhole teleportation.”
Remarkably, Lenny Susskind throws the authors of the stunt under the bus:
“What is not so clear is whether the experiment is any better than garden-variety quantum teleportation and does it really capture the features of macroscopic general relativity that the authors might like to claim… only in the most fuzzy of ways (at best),” he says.
I just noticed that last semester Edward Witten was teaching Physics 539 at Princeton, a graduate topics course. Since he’s now past the age of 70, at the IAS he is officially retired and an emeritus professor (the IAS is the only place I know of in the US with retirement at 70, presumably since it is a non-teaching institution). I don’t know if there are other times Witten has been teaching courses at the university since his move to the IAS in 1987.
Videos of the first few lectures are on YouTube here, problem sets on this web-page. It seems like the course started out covering issues with causality in general relativity, following these lecture notes, then later moved on to topics in quantum information theory.
Some interviews that readers of this blog may find of interest:
I had thought that the wormhole story had reached peak absurdity back in December, but last night some commenters pointed to a new development: the technical calculation used in the publicity stunt was nonsense, not giving what was claimed. The paper explaining this is Comment on “Traversable wormhole dynamics on a quantum processor”, from a group led by Norman Yao. Yao is a leading expert on this kind of thing, recently hired by Harvard as a full professor. There’s no mention in the paper about any conversations he might have had with the main theorist responsible for the publicity stunt, his Harvard colleague Daniel Jafferis.
Tonight Fermilab is still planning a big public event to promote the wormhole, no news yet on whether it’s going to get cancelled. Also, no news from Quanta magazine, which up until now has shown no sign of understanding the extent they were taken in by this. Finally, no news from Nature about whether the paper will be retracted, and whether the retraction will be a cover story with a cool computer graphic of a non-wormhole.
This posting is about the problems with the idea that you can simply formulate quantum mechanical systems by picking a configuration space, an action functional S on paths in this space, and evaluating path integrals of the form
$$\int_{\text{paths}}e^{iS[\text{path}]}$$
Necessity of imaginary time
If one tries to do this path integral for even the simplest possible case (a free particle in one space dimension), the answer (for paths from $q_0$ to $q_t$ in time $t$) is the propagator
$$U_t(q_t-q_0)=\frac{1}{2\pi}\int_{-\infty}^\infty e^{ik(q_t-q_0)}e^{-i\frac{k^2}{2m}t}dk$$
As written, this integral doesn’t exist. To make sense of it you have to interpret it as a distribution, and define it by giving $t$ a small negative imaginary part, which you then take to zero. It’s really a boundary value of something holomorphic defined in the lower half complex $t$ plane. As such, you can just define it for negative imaginary $t$, and analytically continue the result to the real $t$ boundary. If you try to discretize the usual path integral, you face the same kind of problem making sense of the integrals for each discretized time step.
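The regularization described above is easy to check numerically. Here is a minimal sketch (all parameter values are my own illustrative choices, not from the text): replace $t$ by $t-i\epsilon$ in the $k$-integral, and compare against the closed-form free propagator evaluated at the same complex time.

```python
import numpy as np

# Sketch: the free-particle propagator integral only converges once t is
# given a small negative imaginary part, t -> t - i*eps.  The damped
# integral should then match the closed form
#   U_t(dq) = sqrt(m/(2*pi*i*t)) * exp(i*m*dq**2/(2*t))
# evaluated at the complex time.  Parameters are illustrative choices.
m, t, dq, eps = 1.0, 1.0, 0.5, 0.05
tc = t - 1j * eps                       # complex time supplies the damping

k = np.linspace(-60.0, 60.0, 800001)
dk = k[1] - k[0]
integrand = np.exp(1j * k * dq - 1j * k**2 * tc / (2 * m))
numeric = integrand.sum() * dk / (2 * np.pi)

exact = np.sqrt(m / (2j * np.pi * tc)) * np.exp(1j * m * dq**2 / (2 * tc))
print(abs(numeric - exact))             # small: the two agree
```

The factor $e^{-\epsilon k^2/2m}$ hidden in the complex time is what makes the integral absolutely convergent; taking $\epsilon \to 0$ then recovers the distributional boundary value.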
The good news is that the path integral is perfectly well-defined for imaginary time, with a well-understood concept of measure widely used in probability theory.
If you try and do the same thing for Yang-Mills theory, again you’ll get something ill-defined for real time, with the added disadvantage of no way to actually calculate it. Going to imaginary time and discretizing, you get lattice gauge theory, which gives well-defined integrals for fixed lattice spacing, and is conjectured to have a well-defined limit as the lattice spacing is taken to zero.
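To see how well-behaved the imaginary-time version is, here is a toy discretization (a hypothetical harmonic-oscillator example of my own, not anything specific from the text): the one-step Euclidean kernel is a perfectly nice positive integral operator, and its largest eigenvalue recovers the ground-state energy.

```python
import numpy as np

# Imaginary-time toy model (illustrative example): discretize Euclidean
# time with step a for a harmonic oscillator.  The one-step kernel
#   K(x, x') = sqrt(m/(2*pi*a)) * exp(-S_step)
# is positive and well-defined; its largest eigenvalue is exp(-a*E0),
# so E0 should come out near the exact ground-state energy 1/2
# (units with m = omega = hbar = 1).
m, a = 1.0, 0.1
x = np.linspace(-6.0, 6.0, 601)         # spatial grid
dx = x[1] - x[0]
V = 0.5 * x**2                          # harmonic potential

diff = x[:, None] - x[None, :]
K = np.sqrt(m / (2 * np.pi * a)) * np.exp(
    -m * diff**2 / (2 * a) - a * (V[:, None] + V[None, :]) / 2
)
lam0 = np.linalg.eigvalsh(K * dx)[-1]   # largest eigenvalue = exp(-a*E0)
E0 = -np.log(lam0) / a
print(E0)                               # close to 0.5
```

Nothing oscillates here: every entry of the kernel is positive, which is exactly the property the real-time version lacks, and why Monte-Carlo methods work in imaginary time.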
Not an integral and not needed for fermions
Actual fundamental matter particles are fermions, with an action functional that is quadratic in the fermion fields. For these there’s a “path integral”, but it’s in no sense an actual integral, rather an interesting algebraic gadget. Since the action functional is quadratic, you can explicitly evaluate it and just work with the answer the algebraic gadget gives you. You can formulate this story as an analog of an actual path integral, but it’s unclear what this analogy gets you.
Phase space path integrals don’t make sense in general
Another aspect of the fermion action is that it has only one time derivative. For actions of this kind, bosonic or fermionic, the variables are not configuration space variables but phase space variables. For a linear phase space and quadratic action you can figure out what to do, but for non-linear phase spaces or non-quadratic actions, in general it is not clear how to make any sense of the path integral, even in imaginary time.
In general this is a rather complicated story (see some background in the Part I post). For an interesting recent take on the phase-space path integral, see Witten’s A New Look At The Path Integral Of Quantum Mechanics.
Two things recently made me think I should write something about path integrals: Quanta magazine has a new article out entitled How Our Reality May Be a Sum of All Possible Realities and Tony Zee has a new book out, Quantum Field Theory, as Simply as Possible (you may be affiliated with an institution that can get access here). Zee’s book is a worthy attempt to explain QFT intuitively without equations, but here I want to write about what it shares with the Quanta article (see chapter II.3): the idea that QM or QFT can best be defined and understood in terms of the integral
$$\int_{\text{paths}}e^{iS[\text{path}]}$$
where S is the action functional. This is simple and intuitively appealing. It also seems to fit well with the idea that QM is a “many-worlds” theory involving considering all possible histories. Both the Quanta article and the Zee book do clarify that this fit is illusory, since the sum is over complex amplitudes, not a probability density for paths.
This posting will be split into two parts. The first will be an explanation of the context of what I’ve learned about path integrals over the years. If you’re not interested in that, you can skip to part II, which will list and give a technical explanation of some of the problems with path integrals.
I started out my career deeply in thrall to the idea that the path integral was the correct way to formulate quantum mechanics and quantum field theory. The first quantum field theory course I took was taught by Roy Glauber, and involved baffling calculations using annihilation and creation operators. At the same time I was trying to learn about gauge theory and finding that sources like the 1975 Les Houches Summer School volume or Coleman’s 1973 Erice lectures gave a conceptually much simpler formulation of QFT using path integrals. The next year I sat in on Coleman’s version of the QFT course, which did bring in the path integral formalism, although only part-way through the course. This left me with the conclusion that path integrals were the modern, powerful way of thinking, Glauber was just hopelessly out of touch, and Coleman didn’t start with them from the beginning because he was still partially attached to the out-of-date ways of thinking of his youth.
Over the next few years, my favorite QFT book was Pierre Ramond’s Field Theory: A Modern Primer. It was (and remains) a wonderfully concise and clear treatment of modern quantum field theory, starting with the path integral from the beginning. In graduate school, my thesis research was based on computer calculations of path integrals for Yang-Mills theory, with the integrals done by Monte-Carlo methods. Spending a lot of time with such numerical computations further entrenched my conviction that the path integral formulation of QM or QFT was completely essential. This stayed with me through my days as a postdoc in physics, as well as when I started spending more time in the math community.
My first indication that there could be some trouble with path integrals came, I believe, around 1988, when I learned of Witten’s revolutionary work on Chern-Simons theory. This theory was defined as a very simple path integral, a path integral over connections with action the Chern-Simons functional. What Witten was saying was that you could get revolutionary results in three-dimensional topology, simply by calculating the path integral
$$\int_{\mathcal A} e^{iCS[A]}$$
where the integration is over the space of connections A on a principal bundle over some 3-manifold. During my graduate student days and as a postdoc I had spent a lot of time thinking about the Chern-Simons functional (see unpublished paper here). If I could find a usable lattice gauge theory version of CS[A] (I never did…), that would give a way of defining the local topological charge density in the four-dimensional Yang-Mills theory I was working with. Witten’s new quantum field theory immediately brought back to mind this problem. If you could solve it, you would have a well-defined discretized version of the theory, expressed as a finite-dimensional version of the path integral, and then all you had to do was evaluate the integral and take the continuum limit.
Of course this would actually be impractical. Even if you solved the problem of discretizing the CS functional, you’d have a high dimensional integral over phases to do, with the dimension going to infinity in the limit. Monte-Carlo methods depend on the integrand being positive, so won’t work for complex phases. It is easy though to come up with some much simpler toy-model analogs of the problem. Consider for example the following quantum mechanical path integral
$$\int_{\text {closed paths on}\ S^2} e^{i\frac{1}{2}\oint A}$$
Here $S^2$ is a sphere of radius 1, and A is locally a 1-form such that dA is the area 2-form on the sphere. You could think of A as the vector potential for a monopole field, where the monopole was inside the sphere.
If you think about this toy model, which looks like a nice simple version of a path integral, you realize that it’s very unclear how to make any sense of it. If you discretize, there’s nothing at all damping out contributions from paths for which position at time $t$ is nowhere near position at time $t+\delta t$. It turns out that since the “action” only has one time derivative, the paths are moving in phase space not configuration space. The sphere is a sort of phase space, and “phase space path integrals” have well-known pathologies. The Chern-Simons path integral is of a similar nature and should have similar problems.
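The remark above that Monte-Carlo methods fail for complex phases can be made quantitative with a toy integral (my own illustrative example, not the Chern-Simons or sphere integral): sample from a positive Gaussian weight and average the leftover phase. The true answer shrinks exponentially with the number of degrees of freedom, while the statistical noise does not.

```python
import numpy as np

# Sign-problem sketch (illustrative toy, not the Chern-Simons integral):
# estimate <exp(i*lam*sum(x_j**2))> by sampling N independent variables
# from the positive weight exp(-x**2).  The exact mean is
# (1 - i*lam)**(-N/2), with modulus (1 + lam**2)**(-N/4): exponentially
# small in N, while the noise per sample stays O(1).
rng = np.random.default_rng(0)
lam, nsamp = 1.0, 100000
for N in (1, 10, 50):
    xs = rng.normal(0.0, np.sqrt(0.5), size=(nsamp, N))  # p(x) ~ exp(-x^2)
    est = np.mean(np.exp(1j * lam * (xs**2).sum(axis=1)))
    exact = (1 - 1j * lam) ** (-N / 2)
    print(N, abs(exact), abs(est))      # signal drowns in noise as N grows
```

Already at 50 degrees of freedom the true mean is smaller than the Monte-Carlo error bar; in the continuum limit the number of degrees of freedom goes to infinity, so this route to the Chern-Simons integral is hopeless.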
I spent a lot of time thinking about this; one thing I wrote early on (1989) is available here. You get an interesting analog of the sphere toy model for any co-adjoint orbit of a Lie group G, with a path integral that should correspond to a quantum theory with state space the representation of G that the orbit philosophy associates to that orbit. One such path integral that does look like it should make sense is the path integral for a supersymmetric quantum mechanics system that gives the index of a Dirac operator. Lots of people were studying such things during the 1980s and early 90s, not so much more recently. I’d guess that a sensible Chern-Simons path integral will need some fermionic variables and something like the Dirac operator story (in the closest analog of the toy model, you’re looking at paths moving in a moduli space of flat connections).
Over the years my attention has moved on to other things, with the point of view that representation theory is central to quantum mechanics. To truly play a role as a fundamental formulation of quantum mechanics, the path integral needs to find its place in this context. There’s a lot more going on than just picking an action functional and writing down
$$\int_{\text{paths}}e^{iS[\text{path}]}$$
Since I had a little free time today, I was thinking of writing something motivated by two things I saw today, Sabine Hossenfelder’s What’s Going Wrong in Particle Physics, and this summer’s upcoming SUSY 2023 conference and pre-SUSY 2023 school. While there are a lot of ways in which I disagree with Hossenfelder’s critique, there are some ways in which it is perfectly accurate. For what possible reason is part of the physics community organizing summer schools to train students in topics like “Supersymmetry Phenomenology” or “Supersymmetry and Higgs Physics”? “Machine Learning for SUSY Model Building” encapsulates nicely what’s going wrong in one part of theoretical particle physics.
To begin my anti-SUSY rant, I looked back at the many pages I wrote 20 years ago about what was wrong with SUSY extensions of the SM in chapter 12 of Not Even Wrong. There I started out by noting that there were 37,000 or so SUSY papers at the time (SPIRES database). Wondering what the current number is, I did the same search on the current INSPIRE database, which showed 68,469 results. The necessary rant was clear: things have not gotten better, the zombie subject lives on, fed by summer schools like pre-SUSY 2023, and we’re all doomed.
But then I decided to do one last search, to check the number of articles by year (e.g. search “supersymmetry and de=2020”). The results were surprising, and I spent some time compiling numbers for the following table:
These numbers show a distinct and continuing fall-off starting after 2015, the reason for which is clear. The LHC results were in, and a bad idea (SUSY extensions of the SM) had been torpedoed by experiment, taking on water and sinking after 20 years of dominance of the subject. To get an idea of the effect of the LHC results, you can read this 2014 piece by Joe Lykken and Maria Spiropulu (discussed here), by authors always ahead of the curve. No number of summer schools on SUSY phenomenology are going to bring this field back to health.
Going back to INSPIRE, I realized that I hadn’t needed to do the work of creating the above bar graph, the system does it for you in the upper left corner. I then started trying out other search terms. “string theory” shows no signs of ill-health, and “quantum gravity”, “holography”, “wormhole”, etc. are picking up steam, drawing in those fleeing the sinking SUSY ship. With no experimentalists to help us by killing off bad ideas in these areas, some zombies are going to be with us for a very long time…
A few things that may be of interest:
A wormhole, also known as an Einstein-Rosen bridge, is a hypothetical tunnel connecting remote points in spacetime. While wormholes are allowed by Albert Einstein’s theory of relativity, wormholes have never been found in the universe. In late 2022, the journal Nature featured a paper co-written by Joe Lykken, leader of the Fermilab Quantum Institute, that describes observable phenomenon produced by a quantum processor that “are consistent with the dynamics of a transversable wormhole.” Working with a Sycamore quantum computer at Google, a team of physicists was able to transfer information from one area of the computer to another through a quantum system utilizing artificial intelligence hardware.
The “utilizing artificial intelligence hardware” seems to be an incoherent attempt to add more buzzwords to the bullshit. If you know anyone with any influence at the lab, you might want to consider contacting them and asking them to try and get this embarrassment canceled.
Some non-Japanese mathematicians questioned, “This is an insult to the Japanese mathematics world! Why don’t Japanese mathematicians say anything after being so insulted?” I also think the question is valid.
I’ve been wondering if there is a historical parallel to look to, with one possibility the situation in 1938-39 when Hitler invaded Czechoslovakia. By this point (as now) a lot of scientists had fled to the West, and the issue must have arisen of how scientists in the West should deal with their German colleagues who were staying in Germany.
If I have any serious criticism of this group at all, it is that their recent concentration on superstrings seems to me a tactical error, too much devotion of effort to a line of development that (at least to an outsider’s eye) is not that promising. However, I could well be wrong in this, and, even if I am right, they’ll soon discover they’ve drilled a dry hole and be off exploring other fields next year.
Unfortunately the last part of this was very wrong…
The New York Times today has Where is Physics Headed (and How Soon Do We Get There?). It’s an interview by Dennis Overbye of Maria Spiropulu and Michael Turner, the chairs of the NAS Committee on Elementary Particle Physics – Progress and Promise. This committee is tasked with advising the DOE and NSF so they can “make informed decisions about funding, workforce, and research directions.”
The transcript of the interview is rather bizarre, for several reasons. Spiropulu, probably the main person responsible for the recent wormhole publicity stunt, is here the voice of sober reason:
Overbye: String theory — the vaunted “theory of everything” — describes the basic particles and forces in nature as vibrating strings of energy. Is there hope on our horizon for better understanding it? This alleged stringiness only shows up at energies millions of times higher than what could be achieved by any particle accelerator ever imagined. Some scientists criticize string theory as being outside science.
Spiropulu: It’s not testable.
whereas Turner (an astronomer with no particular background in mathematics) is a big fan of string theory as mathematics:
Turner: But it is a powerful mathematical tool. And if you look at the progress of science over the past 2,500 years, from the Milesians, who began without mathematics, to the present, mathematics has been the pacing item. Geometry, algebra, Newton and calculus, and Einstein and non-Riemannian geometry.
…
We will have to wait and see what comes from string theory, but I think it will be big.
On the topic of particle physics and unification, there’s
Overbye: You’re referring to Grand Unified Theories, or GUTs, which were considered a way to achieve Einstein’s dream of a single equation that encompassed all the forces of nature. Where are we on unification?
Spiropulu: The curveball is that we don’t understand the mass of the Higgs, which is about 125 times the mass of a hydrogen atom.
When we discovered the Higgs, the first thing we expected was to find these other new supersymmetric particles, because the mass we measured was unstable without their presence, but we haven’t found them yet. (If the Higgs field collapsed, we could bubble out into a different universe — and of course that hasn’t happened yet.)
That has been a little bit crushing; for 20 years I’ve been chasing the supersymmetrical particles. So we’re like deer in the headlights: We didn’t find supersymmetry, we didn’t find dark matter as a particle.
Turner makes the case one often hears these days from string theorists: the field may have given up on unification, but it has moved on to something much less boring:
Turner: I feel like things have never been more exciting in particle physics, in terms of the opportunities to understand space and time, matter and energy, and the fundamental particles — if they are even particles. If you asked a particle physicist where the field is going, you’d get a lot of different answers.
But what’s the grand vision? What is so exciting about this field? I was so excited in 1980 about the idea of grand unification, and that now looks small compared to the possibilities ahead.
…
Turner: The unification of the forces is just part of what’s going on. But it is boring in comparison to the larger questions about space and time. Discussing what space and time are and where they came from is now within the realm of particle physics. From the perspective of cosmology, the Big Bang is the origin of space and time, at least from the point of view of Einstein’s general relativity. So the origin of the universe, space and time are all connected. And does the universe have an end? Is there a multiverse? How many spaces and times are there? Does that question even make sense?
Spiropulu: To me, by the way, unification is not boring. Just saying.
The problem with the idea that we’ve moved on to a new, far more exciting time in physics, devoted to replacing conventional space-time and exploring the multiverse, is that there’s no actual way to do experiments about any of this (other than the wormholes…). If this is the vision of the coming NAS report, a possible response from the DOE and NSF may be “That’s nice, we can now shut down all those expensive labs and experiments doing the boring stuff and focus on investigating the wormholes that Google’s quantum computer is producing for us.”
In recent years I’ve found there’s no point to trying to have an intelligible argument about “string theory”, simply because the term no longer has any well-defined meaning. At the KITP next spring, there will be a program devoted to What is String Theory?, with a website that tells us that “the precise nature of its organizational principle remains obscure.” As far as I can tell though, the problem is not one of insufficient precision, but not knowing even the general nature of such an organizational principle.
What one hears when one asks about this these days is that the field has moved on to focusing on the one part of this that is understood: the “AdS/CFT conjecture.” I’ve gotten the same answer when asking about the meaning of the “ER=EPR conjecture”, and recently the claim seems to be that the black hole information paradox is resolved, again, somehow using the “AdS/CFT conjecture.” Today I noticed this twitter thread from Jonathan Oppenheim raising questions about the “AdS/CFT conjecture” and the discussion there reminded me that I don’t understand what the people involved mean by those words. What exactly (physicist meaning of “exactly”, not mathematician meaning) is the “AdS/CFT conjecture”?
To be clear, I have tried to follow this subject since its beginnings, and at one point was pretty well aware of the exact known statements relating type IIB superstring theory on five-dim AdS space times a five-sphere with M units of flux to N=4 U(M) SYM. While this provided an impressive realization of the old dream of relating a large M QFT to a weakly coupled string theory, it bothered me that there was no meaning to the duality in the sense that no one knew how to define the strongly coupled string theory. This problem seemed to get dealt with by turning the conjecture into a definition of string theory in this background, but it was always unclear how that was supposed to work.
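For readers who want the precise statement in this best-understood case, here is a minimal sketch of the standard parameter dictionary (textbook relations only, using M for the rank as above, nothing specific to the argument being made here):

```latex
% AdS_5 x S^5 / N=4 SYM dictionary (standard conventions):
% string coupling and AdS radius L in terms of gauge theory data
g_s = \frac{g_{\mathrm{YM}}^2}{4\pi},
\qquad
\frac{L^4}{\alpha'^{\,2}} = 4\pi g_s M = g_{\mathrm{YM}}^2 M = \lambda
```

Here λ is the ’t Hooft coupling. Taking M large with λ fixed sends the string coupling g_s to zero, which is the sense in which a large M gauge theory is related to a weakly coupled string; the hard-to-define regime mentioned above is small λ, where the AdS curvature in string units is large.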
So, my question isn’t about that, but about the much more general use of the term to refer to all sorts of gravity/CFT relationships in various dimensions. There are hundreds if not thousands of theorists actively working on this these days, and my question is aimed at them: what exactly do you mean when you say “the AdS/CFT conjecture”?
This semester I’m teaching the second half of our graduate course on Lie groups and representations, and have started making plans for the course, which will begin next week. There’s a web-page here which I’ll be adding to as time goes on. The plan is to try and write up lecture notes for most of the course, some of which may be rewrites of older notes written when I taught earlier versions of this course. I’ll post these typically after the corresponding lectures. Any corrections or comments will be welcome.
This year I’m hoping to integrate ideas about “quantization” into the course more than in the past, starting off with the mathematics behind what physicists often call “canonical quantization”. This topic is worked out very explicitly and in great detail in this book, but in this course I’ll be giving a more streamlined presentation from a more advanced point of view. This subject has a somewhat different flavor than usual for math graduate courses, in that instead of proving things about classes of representations, it’s one very specific representation that is the topic.
This topic is also the simplest example of the general philosophy of trying to understand Lie group representations in terms of the geometry of a “co-adjoint orbit”, and I’ll try and say a bit about this “orbit philosophy” and “geometric quantization”.
The next topic of the course will likely be more standard: the classification of finite dimensional representations of semi-simple complex Lie algebras (or, equivalently, compact Lie groups), and their construction using Verma modules. For this topic it’s hard to justify spending a lot of time writing notes, since there already are several places this has been done very well (e.g. Kirillov’s book). After doing this algebraically, I’ll go back to the geometric and orbit point of view and explain the Borel-Weil-Bott theorem giving a geometric construction of these representations.
For the last part of the course, I hope to discuss the representations of SL(2,R) and the general classification of real semi-simple Lie algebras and groups. If I ever manage to understand what’s going on with the real Weil group and the Langlands classification of representations of real Lie groups, maybe I’ll say something about that, but that is probably too much to hope for.
Throughout the course, as well as the relation to quantization, I also hope to explain some things about relations to number theory. These would include the theory of theta functions early on, modular forms at the end.
Earlier this year I bought a copy of the recently published version of Grothendieck’s Récoltes et Semailles, and spent quite a lot of time reading it. I wrote a bit about it here, intended to write something much longer when I finished reading, but I’ve given up on that idea. At some point this past fall I stopped reading, having made it through all but 100 pages or so of the roughly 1900 total. I planned to pick it up again and finish, but haven’t managed to bring myself to do that, largely because getting to the end would mean I should write something, and the task of doing justice to this text looks far too difficult.
Récoltes et Semailles is a unique and amazing document, some of the things in it are fantastic and wonderful. Quoting myself from earlier this year
there are many beautifully written sections, capturing Grothendieck’s feeling for the beauty of the deepest ideas in mathematics. One gets to see what it looked like from the inside to a genius as he worked, often together with others, on a project that revolutionized how we think about mathematics.
A huge problem with the book is the way it was written, which provides a convincing advertisement for word processors. Grothendieck seems not to have significantly edited the manuscript: when he thought of something relevant to what he had written previously, instead of revising that, he would just type away and add more material. It’s unclear how this could ever happen, but it would be a great service to humanity to put a competent editor to work on a huge rewrite of the text.
The other problem though is even more serious. The text provides deep personal insight into Grothendieck’s thinking, which is simultaneously fascinating and discouraging. His isolation and decision to concentrate on “meditation” about himself left him semi-paranoid and without anyone to engage with and help channel his remarkable intellect. It’s frustrating to read hundreds of pages about motives which consist of some tantalizing explanations of these deep mathematical ideas, embedded in endless complaints that Deligne and others didn’t properly understand and develop these ideas (or properly credit him). One keeps thinking: instead of going on like this, why didn’t he just do what he said he had planned earlier, write out an explanation of these ideas?
As an excuse for giving up on writing more myself about this, I can instead recommend Pierre Schapira’s new article at Inference, entitled A Truncated Manuscript. Schapira provides an excellent review of the book, and also explains a major problem with it. Grothendieck devotes endless pages to complaints that Zoghman Mebkhout did not get sufficient recognition for his work on the so-called Riemann-Hilbert correspondence for perverse sheaves. Mebkhout was Schapira’s student, and he explains that a correct version of the story has the ideas involved originating with Kashiwara, who was the one who should have gotten more recognition, not Mebkhout. According to Schapira, he explained what had really happened to Grothendieck, who wrote an extra twenty pages or so correcting mistaken claims in Récoltes et Semailles, but these didn’t make it into the recently published version. If someone ever gets to the project of editing Récoltes et Semailles, a good starting point would be to simply delete all of the material that Grothendieck included on this topic.
The extra pages described are available now here, as part of an extensive website called the Grothendieck Circle, now being updated by Leila Schneps. For a wealth of material concerning Grothendieck’s writings, see this site run by Mateo Carmona. It includes a transcription of Récoltes et Semailles that provides an alternative to the recently published version.
The Schapira article is a good example of some of the excellent pieces that the people at Inference have published since they started nearly ten years ago (another example relevant to Grothendieck would be Pierre Cartier’s A Country Known Only by Name from their first issue). I’ve heard news that they have lost a major part of their funding, which was reportedly from Peter Thiel and was one source of controversy about the magazine. I wrote about this here in early 2019 (also note discussion in the comments). My position then and now is that the concerns people had about the editors and funding of Inference needed to be evaluated in the context of the result, which was an unusual publication putting out some high quality articles about math and physics that would likely not have otherwise gotten written and published. I hope they manage to find alternate sources of funding that allow them to keep putting out the publication.
The wormhole publicity stunt story just keeps going. Today an article about the Google Santa Barbara lab and quantum computer used in the publicity stunt appeared in the New Yorker. One of the main people profiled is Hartmut Neven, the lab founder and a publicity stunt co-author. He is described as follows:
Neven, originally from Germany, is a bald fifty-seven-year-old who belongs to the modern cast of hybridized executive-mystics. He talked of our quantum future with a blend of scientific precision and psychedelic glee. He wore a leather jacket, a loose-fitting linen shirt festooned with buttons, a pair of jeans with zippered pockets on the legs, and Velcro sneakers that looked like moon boots. “As my team knows, I never miss a single Burning Man,” he told me.
The article explains what has been going on at the Google lab under Neven’s direction:
in the past few years, in research papers published in the world’s leading scientific journals, he and his team have also unveiled a series of small, peculiar wonders: photons that bunch together in clumps; identical particles whose properties change depending on the order in which they are arranged; an exotic state of perpetually mutating matter known as a “time crystal.” “There’s literally a list of a dozen things like this, and each one is about as science fictiony as the next,” Neven said. He told me that a team led by the physicist Maria Spiropulu had used Google’s quantum computer to simulate a “holographic wormhole,” a conceptual shortcut through space-time—an achievement that recently made the cover of Nature.
There are some indications given that the wormholes aren’t everything you’d like a wormhole to be:
Google’s published scientific results in quantum computing have at times drawn scrutiny from other researchers. (One of the Nature paper’s authors called their wormhole the “smallest, crummiest wormhole you can imagine.” Spiropulu, who owns a dog named Qubit, concurred. “It’s really very crummy, for real,” she told me.) “With all these experiments, there’s still a huge debate as to what extent are we actually doing what we claim,” Scott Aaronson, a professor at the University of Texas at Austin who specializes in quantum computing, said. “You kind of have to squint.”
I took another look at the Nature article and realized that at the end it has a section explaining the contributions of each author (I’ll reproduce the whole thing as an appendix here). For Neven it has
Google’s VP Engineering, Quantum AI, H.N. coordinated project resources on behalf of the Google Quantum AI team.
Two physicists profiled are John Preskill and Alexei Kitaev. Academia in this field is seeing a big impact of quantum computing jobs and funding. According to the article:
Preskill and Kitaev teach Caltech’s introductory quantum-computing course together, and their classroom is overflowing with students. But, in 2021, Amazon announced that it was opening a large quantum-computing laboratory on Caltech’s campus. Preskill is now an Amazon Scholar; Kitaev remained with Google. The two physicists, who used to have adjacent offices, today work in separate buildings. They remain collegial, but I sensed that there were certain research topics on which they could no longer confer.
Someone told me that the Amazon lab where Preskill works has postdoc-type positions for theoretical physicists, salary about 250K.
Theoretical physics hype and quantum computing hype come together prominently in the article. Besides Shor’s algorithm and its implications for cryptography, here’s the rest of what quantum computers promise:
A quantum computer could open new frontiers in mathematics, revolutionizing our idea of what it means to “compute.” Its processing power could spur the development of new industrial chemicals, addressing the problems of climate change and food scarcity. And it could reconcile the elegant theories of Albert Einstein with the unruly microverse of particle physics, enabling discoveries about space and time.
How long until quantum computers unify GR and the Standard Model? We just need better, fault-tolerant, qubits, and then:
A thousand fault-tolerant qubits should be enough to run accurate simulations of molecular chemistry. Ten thousand fault-tolerant qubits could begin to unlock new findings in particle physics.
The hype here is far hypier than any of the string theory hype I’ve been covering over the years, and it looks like it’s got a lot more money and influence behind it, so will be a major force driving the field in coming years and decades.
Appendix:
The Nature contributions section is:
J.D.L. and D.J. are senior co-principal investigators of the QCCFP Consortium. J.D.L. worked on the conception of the research program, theoretical calculations, computation aspects, simulations and validations. D.J. is one of the inventors of the SYK traversable wormhole protocol. He worked on all theoretical aspects of the research and the validation of the wormhole dynamics. Graduate student D.K.K. worked on theoretical aspects and calculations of the chord diagrams. Graduate student S.I.D. worked on computation and simulation aspects. Graduate student A.Z. worked on all theory and computation aspects, the learning methods that solved the sparsification challenge, the coding of the protocol on the Sycamore and the coordination with the Google Quantum AI team. Postdoctoral scholar N.L. worked on the working group coordination aspects, meetings and workshops, and follow-up on all outstanding challenges. Google’s VP Engineering, Quantum AI, H.N. coordinated project resources on behalf of the Google Quantum AI team. M.S. is the lead principal investigator of the QCCFP Consortium Project. She conceived and proposed the on-chip traversable wormhole research program in 2018, assembled the group with the appropriate areas of expertise and worked on all aspects of the research and the manuscript together with all authors.
Most of the news I’m hearing today about the current wormhole publicity stunt is that physicists who could do something about it are instead blaming any problem on journalists and defending the stunt as some sort of progress forward.
I’ve been wondering what the future for this kind of thing looks like, and got a partial answer by looking at this presentation today by the director of Fermilab. On page 67 she explains
Future experiments with better QC and with QCs connected through quantum networks, such as those under development at Fermilab, could provide better insight through better resolution and adding non-trivial spatial separation of the two systems.
So, next generation wormhole publicity stunts will involve, beyond going from 9 qubits to more, putting two quantum computers in two places and connecting them by a quantum network. The press reports will explain that physicists not only created a wormhole on a chip, but created a wormhole connecting two different labs.
I started looking for more information about these next-generation wormhole publicity stunts, and found instead something I hadn’t been aware of, an older such stunt, described in Towards Quantum Gravity in the Lab on Quantum Processors, which got attention last spring not at Quanta, but in the much lower profile Discover Magazine, where one reads:
The team developed quantum software that could reproduce wormhole inspired teleportation on both quantum computers and then characterized the results. “We have designed and carried out “wormhole-inspired” many-body teleportation experiments on IBM and Quantinuum quantum processors and we observe a signal consistent with the predictions,” say Shapoval and co.
One reason for the lack of significant attention to this publicity stunt as opposed to the current one surely is the decision of the authors to claim not “wormhole teleportation” but “wormhole-inspired teleportation”.
The past is the past, but it looks like the field of quantum gravity research is from now on going to be dominated by these wormhole publicity stunts, using more qubits and more quantum computers. This kind of research project is nearly ideal: you can get lots of funding from conventional sources like DOE, or even better, funding from and access to equipment at large tech companies like Google and IBM. You can convince the director of your lab or institute that you’re doing research of significance comparable to the discovery and testing of general relativity 100 years ago and your work will be vindicated by cover stories in Nature and all over the rest of the media.
Back in 1996, in The End of Science, John Horgan worried that this kind of science would end up in a “speculative post-empirical mode”, and quantum gravity theorists have for years now worried about accusations of not being connected to experiment. The solution to this problem is now clear: no one will take your wormholes seriously if they’re just on paper, so the thing to do is to get them realized in an algorithm that you run on the most twenty-first century experimental hardware available, a quantum computer in a tech company lab.
If you’re sick of hearing about bogus wormholes, here are some other random topics:
Ethan Siegel has an extensive discussion of gravity-only dark matter models, under the title Is dark matter’s “nightmare scenario” true?
John Baez has a blog post on Neutrino Dark Matter, based on talking to Neil Turok about this recent paper by Boyle, Finn and Turok.
Latest news this evening from Scott Aaronson at the IAS in Princeton:
Tonight, David Nirenberg, Director of the IAS and a medieval historian, gave an after-dinner speech to our workshop, centered around how auspicious it was that the workshop was being held a mere week after the momentous announcement that a wormhole had been created on a microchip (!!)—in a feat that experts were calling the first-ever laboratory investigation of quantum gravity, and a new frontier for experimental physics itself. Nirenberg speculated that, a century from today, people might look back on the wormhole achievement as today we look back on Eddington’s 1919 eclipse observations providing the evidence for general relativity.
I confess: this was the first time I felt visceral anger, rather than mere bemusement, over this wormhole affair. Before, I had implicitly assumed: no one was actually hoodwinked by this. No one really, literally believed that this little 9-qubit simulation opened up a wormhole, or helped prove the holographic nature of the real universe, or anything like that. I was wrong.
Scott has been the one person in this field I’m aware of who has tried to do something about the out-of-control hype problem that has been going from bad to worse. I do disagree with him about one thing. He goes on to write:
I don’t blame the It from Qubit community—most of which, I can report, was grinding its teeth and turning red in the face right alongside me. I don’t even blame most of the authors of the wormhole paper, such as Daniel Jafferis, who gave a perfectly sober, reasonable, technical talk at the workshop…
I do blame all those people. Unlike Scott, they’ve been either participating in hype for years, or staying quiet and enjoying the benefits of it. Grinding their teeth and turning red in the face is not enough. They need to finally say something and take action.
The best way to understand the “physicists create wormholes in the lab” nonsense of the past few days is as a publicity stunt (I should credit Andreas Karch for the idea to describe things this way), one that went too far. If the organizers of the stunt had stuck to “physicists study quantum gravity in the lab” they likely would have gotten away with it, i.e. not gotten any significant pushback.
There have already been a lot of claims about “quantum gravity in the lab” made in recent years, and surely many more will be made in the future. It’s important to understand that these all have been and always will be nothing but publicity stunts. In all cases, what is happening in these labs is some manipulation and observation of electron and electromagnetic fields at low energies. None of this has anything to do with gravitational degrees of freedom. One cannot possibly learn anything about the gravitational field or quantum gravity this way. If there is a dual theoretical description of QED in terms of a “gravitational” theory, this dual description is about other variables that have nothing to do with space-time and gravity in this world.
I’m hoping that journalists and scientists will learn something from this fiasco and not get taken in again anytime soon. It would be very helpful if both Nature and Quanta did an internal investigation of how this happened and reported the results to the public. Who were the organizers of the stunt and how did they pull it off? Already we’re hearing from Quanta that the problem was that they trusted “leading quantum gravity researchers”, and presumably Nature would make the same argument. Who were these “leading quantum gravity researchers”? Why weren’t any of the many other physicists who could have told them this was a stunt consulted?
It’s pretty clear that one of the organizers was Joe Lykken. After I wrote about his talk at CERN a month ago, someone told me that Dennis Overbye at the NYT was looking into writing about Lykken’s claims. I found it odd that the NYT would be interested in this, now it’s clear that the behind-the-scenes publicity campaign was starting already a month ago. If you look at Lykken’s slides, there’s no mention at all of the work he had done and knew was about to appear in Nature, but the whole talk is structured around arguing that such a quantum computer calculation would be a huge achievement. I still don’t know what to make of his claims in the Quanta video that the calculation was on a par with the Higgs discovery. Does he really believe this (he’s completely delusional) or not (he’s intentionally dishonest)?
It’s extremely unusual to not distribute a result like this on the arXiv before publication, to instead keep it confidential and go to the press with embargoed information. By doing this though you control the first wave of publicity, since you pick the press people you deal with and the terms of the embargo. One thing that first mystified me about this story is why Natalie Wolchover at Quanta was quoting comments from me on a different issue in her story, but hadn’t asked me about the article and its “physicists create wormholes in a lab” claims. One possible explanation for this is that the terms of the embargo meant she could not discuss the Nature article with me. I have to admit that if I had heard from her or any other journalist that a group was about to hold a press conference and announce publication in a major journal of claims about quantum gravity in a lab, and would I respect embargo terms so they could share info with me and get a quote, I would have said no. Likely I (and others in a similar situation) would immediately have gone and written a blog entry about how a publicity stunt was about to happen.
I just heard the sad news that Igor Krichever passed away this morning at the age of 72. Igor was a great scholar, a wise man, and a wonderful human being. He will be sorely missed by his colleagues at Columbia and elsewhere. My condolences to his family, which includes another first-rate mathematician, his son-in-law Sasha Braverman. During the past year Igor had been suffering from a progressive neuro-degenerative disease. Fortunately he was still in good enough health to fully participate in and enjoy his 70th birthday conference, which took place at Columbia in early October.
In recent years Igor had been spending only one semester each year at Columbia; he spent much of the rest of his time in Moscow, where he was director of Skoltech’s Center for Advanced Studies. He came to Columbia in the mid-90s, and his hiring marked the beginning of a period of successful expansion and improvement in the math department. He was a gentle and friendly person, and it was always a pleasure to have a chance to talk to him about one topic or another. When he became chair of the department I remember thinking that it seemed unlikely that someone as scholarly and laid-back as him, with a somewhat typical Russian mathematician’s other-worldliness, could deal well with the challenges of the university bureaucracy. I was very, very wrong, as it became clear that he was extremely wise in the ways of the world and a great department chair. I guess that after growing up with Soviet bureaucracy, dealing with the Columbia version was child’s play.
Igor was a very distinguished mathematician, one of the leading figures working at the intersection of integrable systems and algebraic geometry. For more about his scientific work, there’s a biographical notice written by some of his colleagues at the time of his 60th birthday (which was also celebrated at Columbia with a conference, see here).