Curt Jaimungal’s Theories of Everything podcast has a new episode featuring a long talk with Edward Frenkel (by the way, I’ll be doing one of these next month). A few months ago I wrote about a Lex Fridman podcast with Frenkel here. While both of these are long, they’re very much worth watching.

While there’s some overlap between the two podcasts, some different topics are covered in the new one. In particular, one thing that happened to Frenkel since last spring is that he attended Strings 2023 and gave a talk there (slides here, video here). The experience opened his eyes to just how bad some of the long-standing problems with string theory have gotten, and starting around here in the podcast he has a lot to say about them.

It’s pretty clear that his reaction to what he saw going on at the conference was colored by his experience growing up in late Soviet-era Russia, where the failure of the system had become clear to everyone, but you weren’t supposed to say anything about this. He pins responsibility for this situation on senior leaders of the field, who have been unwilling to admit failure. As part of this, he acknowledges his own role in the past, in which he was often happy to get some reflected glory from string theory hype by playing up its positive influence on parts of mathematics while ignoring its failure as a theory of the real world. In any case, I urge you to watch the entire podcast, it’s well worth the time.

For a very different perspective on the responsibility of senior people for string theory’s problems, you might want to take a look at the bizarre twitter feed of stringking42069, which may or may not be some very high-quality trolling. In between replies and tweets devoted to weightlifting, weed and women, the author has some very detailed and mostly scornful commentary on the state of the field and the behavior of its leaders. His point of view is that the leaders have betrayed the true believers like himself, abandoning work on the subject in favor of irrelevancies like “it from qubit”, in the process tanking the careers of young people still trying to work on actual string theory. For a summary of the way he sees things, see here and here. Comments on specific people here and here.

This weekend here in New York if you’ve got $35 you can attend an event bringing together five of the people most responsible for the current situation. I doubt that the promised evaluation of “a mathematically elegant description that some have called a ‘theory of everything’” will accurately reflect the state of the subject, but perhaps some of the speakers will have listened to what Edward Frenkel has to say (or read stringking42069’s tweets) and realized that a new approach to the subject is needed.

This week Laurent Fargues has started a series of lectures here at Columbia on Some new geometric structures in the Langlands program. Videos are available here, but unfortunately there is a problem with the camera in that room, making the blackboard illegible (maybe we can get it fixed…). Fargues however is writing up detailed lecture notes, available here, so you can follow along with those.

Fargues is covering the story of the Fargues-Fontaine curve and the relationship between geometric Langlands on this curve and arithmetic local Langlands that he worked out with Scholze recently. On Monday Scholze gave a survey talk in Bonn entitled What Does Spec **Z** Look Like?, video available here. Scholze’s talk gave a speculative picture of how to think about the global arithmetic story, with Spec **Z** as a sort of three-dimensional space. One thing new to me was his picture of the real place as a puncture, with boundary the twistor projective line. He then went on to motivate the course he will be teaching this fall with Dustin Clausen on Analytic Stacks. Here at Columbia we have an ongoing seminar on some of the background for this, run by Juan Rodríguez-Camargo and John Morgan.

The math department at Columbia this fall will be hosting three special lecture series, each with some connection to physics:

- Sergiu Klainerman will be lecturing on the proof of nonlinear stability for slowly rotating black holes, Wednesday afternoons at 2:45.
- Nikita Nekrasov’s lectures will be on *The Count of Instantons*, Friday afternoons at 1:30.
- The Eilenberg lectures will be given by Laurent Fargues on Some new geometric structures in the Langlands program, Tuesdays at 4:10.

Some other less inspirational topics:

- The news this summer from the LHC has not been good. On July 17 a tree fell on two high-voltage power lines, causing beams to dump, magnets to quench, and damage (a helium leak) to occur in the cryogenics for an inner triplet magnet. See here for more details. Fixing this required warming up a sector of the ring, with the later cooldown a slow process. According to this status report today at the EPS-HEP2023 conference in Hamburg, there will be an ion run in October, but the proton run is now over for the year, with integrated luminosity only 31.4 inverse fb (target for the year was 75).
- The Mochizuki/IUT/abc saga continues, with Mochizuki today putting out a Brief Report on the Current Situation Surrounding Inter-universal Teichmüller Theory (IUT). The main point of the new document seems to be to accuse those who have criticized his claimed proof of abc of being in “very serious violation” of the Code of Practice of the European Mathematical Society. This is based upon a bizarre application of the language

Mathematicians should not make public claims of potential new theorems or the resolution of particular mathematical problems unless they are able to provide full details in a timely manner.

to the claim by Scholze and Stix that there is no valid proof of the crucial Corollary 3.12. It would seem to me that Mochizuki is the one in danger of being in violation of this language (he has not produced a convincing proof of this corollary), not Scholze or Stix. The burden of proof is on the person claiming a new theorem, not on experts pointing to a place where the claimed proof is unsatisfactory. Scholze in particular has provided detailed arguments here, here and here. Mochizuki has responded with a 156 page document which basically argues that Scholze doesn’t understand a simple issue of elementary logic.

Also released by Mochizuki today are copies of emails (here and here) he sent last year to Jakob Stix demanding that he publicly withdraw the Scholze-Stix manuscript explaining the problem with Mochizuki’s proof. Reading through these emails, it’s not surprising that they got no response. The mathematical content includes a long section explaining to Scholze and Stix that the argument they don’t accept is just like the standard construction of the projective line by gluing two copies of the affine line. On the topic of why he has not been able to convince experts of the proof of Corollary 3.12, Mochizuki claims that he convinced Emmanuel Lepage and that

one (very) senior, high-ranking member of the European mathematical community has asserted categorically (in a personal oral communication) that neither he nor his colleagues take such assertions (of a mathematical gap in IUT) seriously!

I suppose this might be Ivan Fesenko, but who knows.

- Since the Covid pandemic started, the World Science Festival has not been running its usual big annual event here in New York. This fall they will have an in-person event, consisting of four days of discussions moderated by Brian Greene. In particular there will be a panel Unifying Nature’s Laws: The State of String Theory evaluating the state of string theory, featuring four of the most vigorous proponents of the theory (Gross, Strominger, Vafa and Witten). I suspect their evaluation may be rather different than that of the majority of the theoretical physics community.

A few months back I saw a call for papers for a volume on “Establishing the philosophy of supersymmetry”. For a while I was thinking of writing something, since the general topic of supersymmetry is a complex and interesting one, about which there is a lot to say. Recently though it became clear to me that I should be writing up other more important things I’ve been working on. Also, taking a look back at the dozen or so pages I wrote about this 20 years or so ago for the book *Not Even Wrong*, there’s very little I would change (and I’ve written far too much since 2004 about this on the blog). What follows though are a few thoughts about what “supersymmetry” looks like now, maybe of interest to philosophers and others, maybe not…

First the good: “symmetry” is an absolutely central concept in quantum theory, in the mathematical form of Lie algebras and their representations. Most generally, “supersymmetry” means extending this to super Lie algebras and their representations, and there are wonderful examples of this structure. A central one for representation theory involves thinking of the Dirac operator as a supercharge: by extending a Lie algebra to a super Lie algebra, Casimirs have square roots, bringing in a whole new level of structure to familiar problems. In physics this is the phenomenon of Hamiltonians having square roots when you add fermionic variables, providing a “square root” of infinitesimal time translation.
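The simplest worked example of a Hamiltonian acquiring a square root is minimal supersymmetric quantum mechanics, with a supercharge built from a superpotential W(x) acting on two-component wavefunctions:

```latex
% Supercharge and its square (using [p, W(x)] = -i\hbar W'(x)):
Q = \sigma_1\, p + \sigma_2\, W(x),
\qquad
H = Q^2 = p^2 + W(x)^2 + \hbar\,\sigma_3\, W'(x).
```

Here the Pauli matrices play the role of the fermionic variables, and Q is a square root of the Hamiltonian, so a “square root” of the generator of infinitesimal time translation.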

Going from just a time dimension to more space-time dimensions, one finds supersymmetric quantum field theories with truly remarkable properties of deep mathematical significance. Examples include 2d supersymmetric sigma models and mirror symmetry, 4d N=2 super Yang-Mills and four-manifold invariants, 4d N=4 super Yang-Mills and geometric Langlands.

But then there’s the bad and the ugly: attempts to extend the Standard Model to a larger supersymmetric model. From the perspective of 2023, the story of this is one of increasingly pathological science. In 1971 Golfand and Likhtman first published an extension of the Poincaré Lie algebra to a super Lie algebra. This was pretty much ignored until the end of 1973, when Wess and Zumino rediscovered it from a different point of view and it became a hot topic among theorists. Very quickly it became clear what the problem was: the new generators one was adding took all known particle states to particle states with quantum numbers not corresponding to anything known. In other words, this supersymmetry acts trivially on known physics, telling you nothing new. It became commonplace to advertise supersymmetry as relating particles with different spin, without mentioning that no pairs of known particles were related this way. In all cases, a known particle was getting related to an unknown particle. Worse, for unbroken supersymmetry the unknown particle would have the same mass as the known one, something that is not observed, so the idea in this form is falsified. One can try to save it by looking for a dynamical mechanism for spontaneous supersymmetry breaking and using this to push superpartners up to unobservable masses, but this typically makes an already pretty ugly theory far more so.
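For reference, the extension in question is the 4d N=1 super-Poincaré algebra; in standard two-component spinor notation:

```latex
% Spinor supercharges whose anticommutator closes on translations:
\{ Q_\alpha , \bar{Q}_{\dot\beta} \} = 2\,\sigma^\mu_{\alpha\dot\beta}\, P_\mu ,
\qquad
\{ Q_\alpha , Q_\beta \} = 0 ,
\qquad
[\, P_\mu , Q_\alpha \,] = 0 .
```

Since the supercharges commute with the momenta, and hence with the mass operator, states related by them must be degenerate in mass, which is exactly why unbroken supersymmetry predicts superpartners at the masses of the known particles.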

The seriousness of this problem was clear by the mid-late 1970s, when I was a student. The one hope was that maybe some extended supergravity theory with lots of extra degrees of freedom would dynamically break supersymmetry at a high scale, leaving the Standard Model as the low energy part of the spectrum. There wasn’t any convincing way to make this work, and it became clear that one couldn’t get chiral interactions like those of the electroweak theory this way. 1984 saw the advent of a different high scale model supposed to do this (superstring theory), but that’s another story.

Looking back from our present perspective, it’s very hard to understand why anyone saw supersymmetric extensions of the SM as plausible physics models that would be vindicated by observations at colliders. For example, Gross and Witten in 1996 published an article in the Wall Street Journal explaining that “There is a high probability that supersymmetry, if it plays the role physicists suspect, will be confirmed in the next decade.” Ten years later, when the Tevatron and LEP had seen nothing, the same argument was being made for the LHC. After over a decade of conclusive negative results from the LHC, one continues to hear prominent theorists assuring us that this is still the best idea out there, and to see large conferences devoted to the topic. Long ago this became pathological science. In the call for papers, the issue is framed as:

recent debates on the prospects of low energy supersymmetry in light of its non-discovery at the LHC raise interesting epistemological questions.

From what I can see, the questions raised are not of an epistemological nature, but perhaps the philosophers will find a way to sort this out.

For much of the past week, I’ve been attending off and on (on Zoom) the Strings 2023 conference. This year it’s in a hybrid format, with 200 participants in person at the Perimeter Institute, and another 1200 or so on Zoom. These yearly conferences give a good idea of what some of the most influential string theorists think is currently important, and I’ve been writing about them for twenty years. Videos of the talks are being posted here.

As in many of these Strings conferences in recent years, there was very little discussion of strings at Strings 2023. Of the 24 standard research talks, only 4 appeared to have anything to do with strings. A new innovation this year was to schedule in addition four “challenge talks”, conceived of as talks explicitly about material outside of string theory that might interest string theorists. In particular Edward Frenkel gave a nice survey of a wide range of ideas, starting from quantum integrable systems and ending up with geometric Langlands. He motivated this with reference to what Feynman was working on very late in life and the problem of solving QCD. His slides are here, video here.

In addition there were four morning “Discussion Sessions”, which I attended most of, and at which string theory put in little to no appearance. Today’s discussion featured Nati Seiberg and Anton Kapustin and was about lattice versions of QFT, especially in their topological and geometrical aspects, a very non-stringy topic dear to my heart. Yesterday was It From Qubit, which had Geoff Penington discussing topics related to black holes. The conventional wisdom now seems to be that the information paradox is gone, solved semi-classically, so giving no insight into true quantum gravity dynamics. While this means you can’t see anything interesting at large distances from the black hole, Penington had some new ideas about something that might in principle be observable at atomic-scale distances from a super-massive black hole. Maldacena started off the session with slides promoting the way forward as quantum computer simulations involving 7000 qubits, a variant on the wormhole publicity stunt. The only time string theory made an appearance was in a suggestion by Dan Harlow that perhaps by doing quantum computer simulations theorists could solve the problem of what “string theory” really is. It’s pretty clear what the leading direction is now for continuing the long tradition in string theory of outrageous hype.

After this week, I’m even more mystified about why the conference was called “Strings 2023”. And how does one decide these days what “string theory” is and who is a “string theorist”? Oddly, two of the things that now distinguish this yearly conference from others are a pretty rigid exclusion both of real world physics (Frenkel comments on this here) and of what got people excited about string theory in the first place: superstring unification and its implications for seeing low energy SUSY at colliders. People still interested in that have split off to other conferences, especially String Phenomenology 2023 and SUSY 2023.

Those conferences have their own kinds of mysteries (why do people keep working on ideas that failed long ago?). In particular, the closing talk on the Status and Future of Supersymmetry at SUSY 2023 was all about the great prospects for SUSY at the LHC, and included a Conclusion written (no joke) by ChatGPT:

The future of supersymmetry as a research program holds both exciting challenges and potential breakthroughs. While the LHC experiments have yet to observe direct evidence of supersymmetric particles, ongoing theoretical advancements and reﬁned experimental techniques oﬀer renewed hope. The future of supersymmetry research lies in two key directions. Firstly, novel theoretical models are being explored, including new variants of supersymmetry that incorporate dark matter candidates or non-linear realizations. These approaches push the boundaries of our understanding and allow for further exploration of the particle zoo. Secondly, upcoming experiments, such as the High-Luminosity LHC and future colliders, aim to explore higher energy scales and increase the sensitivity to supersymmetric signals. With these advancements, the quest for supersymmetry will continue to shape the ﬁeld of particle physics, inspiring new theoretical insights and propelling experimental discoveries.

Things just get stranger and stranger…

Nanopoulos and co-authors have predictions from superstring theory that are “in strong agreement with NANOGrav data.” He has been at this now for almost 40 years. See for instance Experimental Predictions from the Superstring from 1985, where the superstring predicted a top mass of 55 GeV and 360 GeV squarks.

This week and next there’s an interesting summer school going on at the IAS, with topic Understanding Confinement. Videos of talks are available here or at the IAS video site.

Taking a look at some of the first talks brings back vividly my graduate student years, which were dominated by thinking about this topic. When I arrived in Princeton in 1979, the people there had been working for several years on trying to understand confinement semi-classically, in terms of instantons and other solutions to the Yang-Mills equations (e.g. merons). By 1979 it had become clear that such semi-classical calculations were not sufficient to understand confinement and people were looking for other ideas. There were quite a few around, including the idea that there was some sort of string theory dual to pure Yang-Mills theory, and I spent quite a lot of time reading up on efforts of Migdal, Polyakov and others to find a formulation of string theory that would provide the needed dual. I ended up writing my thesis on lattice gauge theory, an approach which had the great advantage that you could at least put the calculation on a computer and start trying to get a reliable result for pure Yang-Mills numerically. Some of the calculations I did were done at the IAS, with Nati Seiberg and others. The other thing I spent a lot of time thinking about was how to put spinor fields on the lattice, the beginning of my interest in the geometry of spinors.

I strongly recommend watching Witten’s talk on Some Milestones in the Study of Confinement. His career started a few years before mine, with the early part very much dominated by the problem of how to make sense of Yang-Mills theory non-perturbatively, and this has always been a motivating problem behind much of his work. In his talk he explains clearly the approaches to the problem (lattice gauge theory, 1/N, dual Meissner) that appeared very soon after the advent of QCD in 1973. He emphasizes how each of these approaches shows indications of a possible string theory dual, while frustratingly not leading to a string model that has the right properties, summarizing (41:30) the situation with:

The string theory we want is probably quite unlike any that we actually know, as of now. We don’t know how to make a string theory with the short distance behavior of asymptotic freedom.

In his talk he discusses later developments, in particular the Seiberg-Witten solution to N=2 SYM and the AdS/CFT duality between a string theory and N=4 SYM, explaining how these advances still don’t provide a viable approach to the confinement problem in pure Yang-Mills.

I’m looking forward to seeing the rest of the talks, and finding out more about some things that have happened over the years since I was most actively paying attention to what was happening with the confinement problem.

For several years now, David Ben-Zvi, Yiannis Sakellaridis and Akshay Venkatesh have been working on a project involving a relative version of Langlands duality, which among many other things provides a perspective on L-functions and periods of automorphic forms inspired by the quantum field theory point of view on geometric Langlands. For some talks about this, see quite a few by David Ben-Zvi (for example, talks here, here, here and here, slides here and here), the 2022 ICM contribution from Yiannis Sakellaridis, and Akshay Venkatesh’s lectures at the 2022 Arizona Winter School (videos here and here, slides here and here). Also helpful are notes from Ben-Zvi’s Spring 2021 graduate course (see here and here).

A paper giving details of this work has now appeared, with the daunting length of 451 pages. I’m looking forward to going through it, and learning more about the wide range of ideas involved. A recent post advertised James Arthur’s 204-page explanation of the original work of Langlands, and the ongoing progress on the original number field versions of his conjectures. It’s striking that while there are many connections, this new work shows that the “Langlands program” has expanded into a remarkable vision relating different areas of mathematics, with a strong connection to deep ideas about quantization and quantum field theory. The way in which these ideas bring together number theory and quantum field theory provides new evidence for the deep unity of fundamental ideas about mathematics and physics.

At a news conference in Tokyo today there evidently were various announcements made about IUT, the most dramatic of which was a 140 million yen (roughly one million dollars) prize for a paper showing a flaw in the claimed proof of the abc conjecture. It is generally accepted by experts in the field that the Scholze-Stix paper Why abc is still a conjecture conclusively shows that the claimed proof is flawed. For a detailed discussion with Scholze about the problems with the proof, see here. For extensive coverage of the IUT story on this blog, see here.

Between paywalls and the limitations of Google translate, I’m not sure exactly what the process is for Scholze and Stix to collect their million dollars. Perhaps they just need to publish their paper, but it seems that the decision may be up to the businessman who is contributing the funds, and it’s unclear what his process will be.

For a few sources I’ve found, see here, here and here. If others have reliable and more detailed sources they can point to (especially anything in English), please do so.

This is more of an advertisement than a blog post. This evening on the arXiv James Arthur has posted a wonderful 204-page document explaining the work of Robert Langlands, written in conjunction with the award of the Abel Prize to Langlands.

This isn’t an introduction to the subject, but if you have some idea of what the Langlands program is about, it provides a wealth of valuable explanations at a more detailed level of exactly what Langlands discovered. It ends with a discussion of the “Beyond Endoscopy” program of his later career.

Stories about the latest prediction of superstring theory here and here, based on a Tsukuba University press release about this paper. Generally ignoring this kind of nonsense these days, but the new feature of this one is that the press release sure seems to have been written by ChatGPT.

A few days ago I read a fascinating article in New York magazine: Inside the AI Factory, which explained how the very large business of hiring humans to do tasks that generate training data for AI works. One reaction I had to this was “at least this means math is safe from AI, nobody is going to pay mathematicians to generate this sort of training data.” Yesterday though I ran across this tweet from Kevin Buzzard, which advertises work that seems to be of this kind.

A company called Prolific is advertising work paying 20-25 pounds/hour doing tasks in Lean 3. This company is in exactly the business described in the NY mag article, hiring people to do tasks as part of “studies”, which often are generating AI training data.

One unusual thing about this whole industry is that if you sign up for this work you often have no idea who your employer really is, or what your work will be used for, and you sign a non-disclosure agreement to not discuss what you do. In this case, a few things can be gleaned from discussions on the Lean Zulip server:

- “it’s 40 dollars/hr now actually”
- “I think everyone signed a consent form preventing them from disclosing any problems or even their participation.”
- One problem was formalizing a proof of sin(x)=1 implies x=pi/2 (mod 2pi).
- A representative of the company writes: “Some of our biggest partners are keen to work with Lean users because of its applicability as a theorem prover. They plan to launch numerous studies that require participants to have either a working or expert knowledge of Lean 3.” By “biggest partners”, presumably this is Microsoft/Google or something, not some small publishing organization.
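For the curious, the sin(x)=1 task mentioned above can be stated as follows. This is only a sketch: it uses Lean 4 / Mathlib syntax rather than the Lean 3 the study asked for, the lemma imports are my guess, and the proof (presumably the actual paid task) is left as `sorry`:

```lean
import Mathlib.Analysis.SpecialFunctions.Trigonometric.Basic

-- Statement of the reported problem: sin x = 1 forces x = π/2 (mod 2π).
-- The formalization task would be to replace `sorry` with a proof.
example (x : ℝ) (h : Real.sin x = 1) :
    ∃ k : ℤ, x = Real.pi / 2 + 2 * Real.pi * (k : ℝ) := by
  sorry
```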

If anyone knows anything about what’s up with this incipient possible math AI factory, please let us know. On the more general issues of math and AI, I’d rather not host a discussion here, partly because I’m pretty ignorant and not very interested in spending time changing that. The situation in mathematics is complicated, but as far as fundamental physics goes, theorem-proving is irrelevant, and applying “big data”/machine-learning/AI techniques to generate more meaningless calculations in a field drowning in them is pretty obviously unhelpful.

For a much better place to read about what is happening in this area, there’s an article in today’s New York Times by Siobhan Roberts: A.I. Is Coming for Mathematics, Too. At Silicon Reckoner, Michael Harris is in the middle of a series of posts about this past month’s workshop on “AI to Assist Mathematical Reasoning.”

No blogging here the past few weeks, partly because I was away on vacation for a little while, but more because there hasn’t been anything I’ve seen worth writing about. Yesterday’s pulsar timing array and IceCube announcements unfortunately didn’t tell us anything about fundamental physics. In the past, such observational results pretty reliably led to absurd claims about evidence for string theory that I could complain about, but that phenomenon seems to be dying down. In this case, the only story that had such claims was one from Quanta Magazine, which explained that “the observations so far from NANOGrav and the other teams are consistent with what we’d expect to see from cosmic strings.”

I noticed that the people at the Institute of Art and Ideas have put together a program for Monday that includes a debate on the topic of “Fantasy, Faith and Physics.” The framing of the debate contrasts the conventional view of science with an alternative possibility: “should we accept that some beliefs, especially in the foundations of physics, are akin to religious beliefs dressed in mathematical language to give our theories meaning?” This kind of misses the point about the current problems in fundamental physics, since I doubt any of the panelists are going to defend such an alternative.

Very odd is what leads into this debate, an interview with Michio Kaku about his new book. Why promote such an atrociously bad book (see here and here) and broadcast Kaku’s absurd claims about this subject?

Maybe this debate will somehow lead to a substantive discussion of the main underlying problem, the nearly fifty-year dominance of a failed set (GUTs/SUSY/strings) of ideas about unification. A very powerful and influential part of the physics community, which will be represented in the debate by Juan Maldacena, continues to insist on the centrality of this set of ideas. To get a clear look at his arguments, see a recent IAI interview In defence of string theory and his colleague Edward Witten’s recent colloquium talk What Every Physicist Should Know About String Theory. The argument Maldacena and Witten are making is essentially the same one from the mid-eighties: string theory is the only possible consistent way to go beyond quantum field theory and get a consistent theory of quantum gravity. In my book and many other places, I’ve explained the many problems with this. Put simply, the problem is that there is no such thing as a well-defined string theory which successfully gives the SM and GR in four dimensions. The claims about consistency are either about models that don’t reproduce the real world, or about still-unrealized hopes and dreams (which Penrose characterizes as “Fantasy”).

For a very clear statement of his point of view from Witten, see the question and answer section of the recent colloquium talk, starting around 1:20, where he starts by emphasizing the rigidity of the framework of relativistic quantum field theory. He then states:

My point of view is that string theory is the only significant idea that has emerged for any modification of the standard framework that makes any sense.

This is pretty much exactly the same argument he was making nearly forty years ago. I didn’t find it convincing then, since it seemed to me there was no reason to be so sure that a deeper understanding of relativistic QFT could not possibly lead to a consistent quantum theory with low energy limit GR. Witten had a good argument in 1984 that a possibly consistent generalization of relativistic QFT was worth studying, but the problem is that decades and tens of thousands of papers later, as far as unification goes, this study has been a failure, leading the field down paths (extra dimensions, SUSY) which lead to complex theories that don’t look at all like the real world.

If you look at where things have ended up and the current research directions Maldacena and Witten are pushing, the odd thing is that they seem to have given up on unification, and for years now have been emphasizing the study of black holes in toy models with little to no connection to string theory. The most disturbing thing I heard in the Witten talk was at 1:24:16

If you had sufficient computing power, maybe with a quantum computer with a million qubits, I think you could simulate the dynamics of a quantum black hole…

Here Witten seems to be pointing to exactly the argument recently made by Juan Maldacena (see here), which has a specific claim about what you could do with a million qubits. This particular calculation would not in any way address the problems of the string theory program and is getting into Michio Kaku/wormhole publicity stunt territory.

The establishment of a new university in Japan has been announced, to be called ZEN University. One component of the new university will be the Inter Universal Geometry Center, with Fumiharu Kato as director, Ivan Fesenko as deputy director. The Center will offer an introductory course on IUTT. There’s a video here.

The website seems to be Japanese-only, here’s what I get via Google Translate:

If you pass all of our courses, you will be better equipped with IUT theory than any mathematics student in any university in the world. A student who blooms his talents that emerges from within. We plan to prepare prizes for such young people and encourage them to continue to participate in the community that seriously researches IUT theory…

Although it is difficult to understand, there are already more than 20 mathematicians in the world who understand and develop the IUT theory. I hope that you will boldly take on the challenge of researching IUT theory together with me so that you can be one of the next.

The problem with this subject though is not the number of people who understand IUTT, but the number who can explain to others in a convincing way the proof of corollary 3.12 in the third IUTT paper. From everything I have seen, that number has always been and remains zero.

This past semester I taught our graduate class on Lie groups and representations, and spent part of the course on the Heisenberg group and the oscillator representation. Since the end of the semester I’ve been trying to clean up and expand this part of my class notes. I’m posting the current version, working title From Quantum Mechanics to Number Theory via the Oscillator Representation. This is still a work-in-progress, but I’ve decided today to step away from it a little while, work on other things, and then come back later perhaps with a clearer perspective on what I’d like to do with these notes. In a few days I’m heading off for a ten-day vacation in northern California, and one thing I don’t want to be thinking about then is things like how to get formulas involving modular forms correct.

There’s nothing really new in these notes, but this is material I’ve always found both fascinating and challenging, so writing it up has clarified things for me, and I hope will be of use to others. The basic relationship between quantum mechanics and representation theory explained here is something that I’ve always felt deserves a lot more attention than it has gotten.

In the past I’ve often made claims about the deep unity of fundamental physics and mathematics. One goal of this document is to lay out precisely one aspect of what I mean when making these claims. There are other, much less well understood aspects of this unity, but the topic here is something well-understood.

One thing that struck me when thinking about this and teaching the class is that this is a central topic in representation theory, but one that often doesn’t make it into the textbooks or courses. Typically mathematicians develop theories with an eye to classifying all structures of a given kind. This case is a very unusual example where there is effectively a unique structure. The classification theorem here is that there is basically only one representation, but it is one with an unusually rich structure.
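For readers who want the precise statement behind “basically only one representation”: the uniqueness result in question is the Stone–von Neumann theorem. A sketch of the statement (in my own normalization, with ħ = 1): any irreducible unitary representation of the Heisenberg commutation relations (in exponentiated Weyl form, with fixed central character) is unitarily equivalent to the Schrödinger representation on L²(ℝⁿ), where

```latex
% Schrodinger representation on L^2(\mathbf{R}^n):
\begin{align*}
  (Q_j \psi)(x) &= x_j\, \psi(x), \\
  (P_j \psi)(x) &= -i \frac{\partial \psi}{\partial x_j}(x), \\
  [Q_j, P_k] &= i\,\delta_{jk}.
\end{align*}
```

The oscillator (metaplectic) representation then arises from the action of the symplectic group intertwining these operators, which is where the unusually rich structure comes in.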

When I get back from vacation, I plan to get back to work on the ideas about twistors and unification that I’m still very excited about, but have set to the side for quite a few months while I was teaching the class and writing these notes. More about that in the next few months…

A few unrelated items:

- I’ve been hearing from several people about their plans to travel to China this summer, and just realized that they’re all going there for the same reason: to participate in the First International Congress of Basic Science. This is something new and on a grand scale, featuring 240 or so invited speakers, the award of a new million-dollar-plus prize, together with prizes for “Best paper” over the last five years in 36 different categories. Yau is the main organizer, and the Chinese government is providing the funding. So, if it’s July 16-28 and you are wondering where your colleagues are, quite possibly the answer is Beijing.
- I’m doing my best to try not to think about the implications of recent AI developments for mathematics, but someone who is doing a lot of thinking about this is Michael Harris, who this week at his Silicon Reckoner substack discusses Google’s use of arXiv math papers to train their Minerva language model. Harris raises the interesting question of whether this use of arXiv papers violates the licenses of these papers, the standard versions of which include language like

You may not use the material for commercial purposes.

Even if Google is massively violating the arXiv licenses for commercial purposes, it’s unclear whether anything can be done about this, especially given the legal resources Google can afford. In addition, I suspect that when hearing about this a more common response than “this is terrible, I want to sue” would be “this is great, how can I get this thing to write papers for me, or even better, get Google to pay me to help make this possible.”

- Last month Symmetry magazine had an article Whatever happened to the theory of everything? featuring some quotes from me and John Ellis. Ellis explains that the particle physics community has become skeptical of supersymmetry and string theory:

Supersymmetry seemed less and less likely to be right, and superstring theory never materialized into something with testable and concrete predictions.

“The rest of the community is asking, ‘Where’s the beef?’” Ellis says. “There hasn’t been any beef yet. Maybe particle physicists have turned a bit vegetarian and have lost interest in stringy beef.”

- Possibly in response to the problem for string theory that Ellis is pointing to, Witten next week is giving a non-technical theoretical physics colloquium talk at the ICTP on What Every Physicist Should Know About String Theory. Back in 2015 he published something with the same title in Physics Today, which I wrote about here. We’ll see if there are any new arguments on this now very old topic.

We’re hearing this week from two very different parts of the string theory community that quantum supremacy (quantum computers outperforming classical computers) is the answer to the challenges the subject has faced.

New Scientist has an article Quantum computers could simulate a black hole in the next decade which tells us that “Understanding the interactions between quantum physics and gravity within a black hole is one of the thorniest problems in physics, but quantum computers could soon offer an answer.” The article is about this preprint from Juan Maldacena which discusses numerical simulations in a version of the BFSS matrix model, a 1996 proposal for a definition of M-theory that never worked out. Maldacena points to this recent Monte-Carlo calculation, which claims to get results consistent with expectations from duality with supergravity.

Maldacena’s proposal is basically for a variant of the wormhole publicity stunt: he argues that if you have a large enough quantum computer, you can do a better calculation than the recent Monte-Carlo. In principle you could look for quasi-normal modes in the data, and then you would have created not a wormhole but a black hole and be doing “quantum gravity in the laboratory”:

seeing these quasinormal modes from a quantum simulation of the quantum system under discussion, would be a convincing evidence that we have created something that behaves as a black hole in the laboratory.

This isn’t a publicity stunt like the wormhole one, because the only publicity I’ve seen is a New Scientist article, and this is just a proposal, not something that has actually been executed. Maldacena estimates that to reproduce the recent Monte-Carlo calculation you’d need 7000 or so logical qubits, which the New Scientist reporter explains would be something like one million physical qubits. So, there’s no danger Quanta magazine will be producing videos about the creation of a black hole in a Google lab any time soon.
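As a quick sanity check on these numbers (my own back-of-envelope arithmetic, not anything from the preprint or the article; the 2d² physical-qubits-per-logical-qubit figure is a generic surface-code-style estimate, not Maldacena’s):

```python
# Check the logical-to-physical qubit ratio implied by the figures above.
logical_qubits = 7_000       # Maldacena's estimate of logical qubits needed
physical_qubits = 1_000_000  # the New Scientist figure for physical qubits

overhead = physical_qubits / logical_qubits
print(f"physical qubits per logical qubit: {overhead:.0f}")  # ~143

# If each logical qubit costs roughly 2*d**2 physical qubits (a crude
# surface-code scaling), the implied code distance d is:
d = (overhead / 2) ** 0.5
print(f"implied code distance d: {d:.1f}")  # ~8.5
```

So the ~143x overhead quoted is at least in the right ballpark for error-corrected hardware, which is exactly why a million-physical-qubit machine remains far off.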

Maldacena has been chosen to give the presentation tomorrow at the SLAC P5 Town Hall about a vision for the future of fundamental theory, no idea whether creating black holes in the lab using quantum computers will be part of it.

At the other extreme of respectability and influence in the physics community, Michio Kaku has a new book out, Quantum Supremacy. I took a quick look yesterday at a copy at the bookstore. I’ll leave it to others to discuss the bulk of the book, which seems to be about how “There is not a single problem humanity faces that couldn’t be addressed by quantum computing.” The last few pages are about string theory, beginning with the usual bogus pro-string theory arguments, working up to the ending of the book: “So quantum computers may hold the key to creation itself” (i.e. they will “solve all the equations of string theory”). His argument for the relevance of quantum computers to string theory is that they will calculate paths in the landscape:

One day, it might be possible to put string theory on to a quantum computer to select out the correct path. Perhaps many of the paths found in the landscape are unstable and quickly decay, leaving only the correct solution. Perhaps our universe emerges as the only stable one.

This is justified by a bizarre paragraph about lattice gauge theory, which explains that since we can’t solve QCD analytically, here’s what theorists do:

One solves the equations for one tiny cube, uses that to solve the equations for the next neighboring cube, and repeats the same process for all that follow. In this way, eventually the computer solves for all the neighboring cubes, one after the other.

This pretty conclusively shows that the explanation for the Kaku phenomenon is simply that he has no idea what he is talking about.

I want to make up for linking to something featuring Michio Kaku yesterday by today linking to the exact opposite, an insightful explanation of the history of string theory, discussing the implications of how it was sold to the public. It’s by a wonderful young physicist I had never heard of before, Angela Collier. She has a Youtube channel, and her latest video is string theory lied to us and now science communication is hard.

Instead of going on in detail about the video and what’s great about it, I’ll just give you my strongest recommendation that you should go watch it, now. It’s as hilarious as it is brilliant, and you have to see for yourself.

According to this article, string theory is going to be tested using quantum computers, by doing a lattice QCD calculation:

The way string theory is tested involves ‘lattice quantum chromodynamics’: a calculation problem far beyond what digital computers can achieve. ‘Quantum computers,’ he writes, ‘may be the final step in finding the Theory of Everything.’

‘I’m not a computer person. I’m a theoretical physicist,’ he says. ‘But I got into quantum computers because I realised this may be the only way to quantitatively prove that string theory is correct. String theory exists in the multiverse. That is, we exist perhaps in parallel states which are bizarre, with new laws of physics, but we coexist with them. The way to prove it is with a quantum computer.’

I suppose you need to buy the book to find out more.

Yesterday afternoon there was an event at CUNY featuring a panel discussion on Chern-Simons terms. Nothing new there, although it was interesting to hear first-hand from Witten the story of how he came up with the Chern-Simons-Witten theory. One piece of news I heard from Nikita Nekrasov was that he was missing a talk that day at the Simons Center in Stony Brook by Maxim Kontsevich, who would be arguing that the Hodge and Tate conjectures were not true. The video of that talk has now appeared, see here.

I’m way behind in preparing for my class for tomorrow, so haven’t had time to watch the full video and ask experts about it. Will try and learn more tomorrow after my class, but it does seem that if Kontsevich is right that would be a dramatic development. If you are able to evaluate Kontsevich’s arguments, any comments welcome. Tomorrow I’ll also try and at least find some good references to suggest for anyone who wants to learn the background of what these conjectures say.
