
We still don't understand the measurement problem

Progress in the foundations of physics has been slow since the 1970s. The Standard Model of particle physics is very successful, but it has a fundamental weakness: it can’t make sense of gravity. Physicists have pursued a number of pet projects, like supersymmetry and string theory, claiming they held the key to overcoming the Standard Model’s shortcomings, but at this point these look like failed projects. So how can theoretical physics get unstuck? According to Sabine Hossenfelder, any improvement on the Standard Model requires first understanding the central conundrum of quantum mechanics: the measurement problem.

 

Physicists have been promising us for more than a century that a theory of everything is just around the corner, and still that goal isn’t in sight. Why haven’t we seen much progress with this?

Physicists made good progress in the foundations until the 1970s, when the Standard Model of particle physics was formulated in pretty much its current form. Some of the particles were only experimentally confirmed later, the final one being the Higgs-boson discovery in 2012, and neutrinos turned out to have masses, too, but the mathematics for all of that was in place in the 1970s. After that, not much happened. Physicists pursued some dead ends, like grand unification, supersymmetry, and string theory, but nothing has come of them.

The major reason nothing has come of them, I think, is that physicists haven’t learned from their failures. Their method of inventing new theories and then building experiments to test them has resulted in countless falsifications (or “interesting bounds”, as they put it in the published literature), but they keep doing the same thing over and over again instead of trying something new.

But not only do we have plenty of evidence that the current method does not, as a matter of fact, work, we also have no reason to think it should work: guessing some new piece of math has basically zero chance of giving you a correct theory of nature. As I explained at length in my 2018 book “Lost in Math”, historically we have made progress in theory development in physics by resolving inconsistencies, not by guessing equations that look pretty. Physicists should take a cue from history.

 

It’s no secret that the Standard Model is in some serious trouble. What do you see as its weakest point?

I wouldn’t say the Standard Model is in serious trouble. It’s doing an amazing job explaining all the data we have. In fact, for particle physicists the biggest trouble with the Standard Model is that it’s working too well.

Its weakest point is that it doesn’t include gravity. We’ve known of this problem since the 1930s, but we still haven’t resolved it. A secondary issue is the origin of neutrino masses. Those two problems, though, will not require a solution until we hit energies at least a billion times above those of the Large Hadron Collider. This is why I don’t think building a bigger collider is the right thing to do at this moment in time: it wouldn’t get us anywhere near testing the interesting parameter range anyway; it would just waste a lot of time, money, and effort.

I guess your question was alluding to some data anomalies that don’t seem to fit into the Standard Model. Anomalies always exist; they come and they go. I don’t think any of the current anomalies are particularly convincing evidence that something needs to be changed about the Standard Model.

___

I think at this point it’s fair to say that as an approach to a theory of everything, string theory failed.

___

What did you make of the recent W boson measurement? Is it just another small data anomaly among many, not enough to overthrow the Standard Model on its own?

The recently published result from the CDF collaboration on the W-boson mass is certainly interesting. The W-boson mass has been somewhat systematically off the Standard Model prediction for 20 years or so, and they have significantly decreased the error bar on that. It remains to be seen, though, whether this will be reproduced with LHC data. You see, the major reason for their claim that there’s a tension with the Standard Model is that they estimate their measurement error to be very small. If they’ve underestimated this error, then the result is entirely compatible with the Standard Model. This isn’t a problem that will be resolved any time soon; it’ll require a lot of follow-up studies to figure out what is going on.

 

Is theoretical physics trapped in a number of popular alternatives (like supersymmetry and string theory) that, while promising solutions to the problems of the Standard Model, have nowhere near the experimental support the Standard Model has?

Supersymmetry doesn’t solve any actual problem; it “solves” a number of pseudo-problems, which come down to a perceived lack of beauty of the Standard Model. Besides thinking it’s pretty, the major reason theoretical physicists like it is that it’s an enormously rich theoretical approach that employs the mathematics we all learn as undergrads. That is, I think theoretical physics got stuck on it because publishing papers on it is easy and fast, so you grow your publication list quickly. That they think it’s beautiful adds to its appeal.

String theory is a different story. For one, it’s one of the reasons supersymmetry became popular in the first place, because string theory needs supersymmetry to work properly. But more importantly, string theory is believed to resolve the inconsistency between the Standard Model and gravity. However, it’s never been shown that it actually does that. I think at this point it’s fair to say that as an approach to a theory of everything, string theory failed.

But the mathematics of string theory is so general, maybe it’ll turn out to be useful for something else. (Some physicists might argue it is already useful for something else, but I guess I have somewhat higher demands on “usefulness” than they do…)

___

Some physicists just think if a theory isn’t pretty, they don’t want to work on it. I think such people shouldn’t have become scientists in the first place, they’d be better off in the arts.

___

As you mentioned, one of your critiques of physicists supporting supersymmetry and string theory is that they are simply being seduced by beautiful mathematics. What do you say to those who argue that all our great theories have a degree of mathematical elegance and symmetry, and therefore those things are a guide to good theories?

Well, I’d like to think that the argument I’ve made is a little subtler than that. String theory and supersymmetry came out of a very valid attempt in the 1970s to continue the successes that had led to the Standard Model. They weren’t born out of beauty requirements. The issue is that when these theories ran into problems, physicists didn’t give up and move on to something else. Instead they doubled down on those failed ideas, amended them over and over again to avoid conflict with observation, and then argued they’re too beautiful to be wrong.

Economists know this behavior as loss aversion: the more effort you’ve put into making something work, the harder it becomes to give up on it, because that means you’d have to admit you wasted your time (or money, as it were). This is one of the reasons I say that it should be part of the education of every scientist to learn about social and cognitive biases. It would help physicists recognize the bias in their thinking, and it would let funding agencies put in place incentives to counteract this behavior.

If you read my book you’ll also find that some physicists just think if a theory isn’t pretty, they don’t want to work on it. I think such people shouldn’t have become scientists in the first place, they’d be better off in the arts.

To those who argue that all our theories have a great degree of mathematical beauty, I say three things.

First, there are a lot of theories that physicists considered beautiful at some point in time but that turned out to be just wrong. (For examples, read my book, or McAllister’s “Beauty and Revolution in Science”.)

Second, if you’re arguing that it worked for Dirac and Einstein, then you (a) might want to read up on the literature, because they only started talking about beauty after their great successes, and (b) are cherry-picking data – you also have to look at all those cases where arguments from beauty did not work. To make a long story short, historically there doesn’t seem to be any correlation between the perceived beauty of an explanation and its chances of success. Somewhat disturbingly, plenty of scientists I have met believe the opposite without a shred of evidence.

Third, it’s one thing to find beauty in a mathematical description that we have confirmed describes observation; it’s another thing entirely to use a very specific notion of mathematical beauty to try to predict new observations. Concretely, the problem is that those notions of mathematical beauty are based on theories we already know, so the new theories are always similar to what we have tried already. Physicists who do this are putting the cart before the horse. We discover beauty in correct descriptions of nature. We don’t get to tell nature what type of beauty it’s supposed to realize.

(Hence the somewhat cryptic ending of my book that has puzzled a lot of readers. It’s supposed to say whatever we find, if it works, we’ll come to find it beautiful.)

___

In quantum mechanics, we just postulate that a “detector” does certain things to quantum objects, but we don’t know why and we can’t even tell exactly what a “detector” is.

___

One of the things you’ve said that has surprised me is that part of the reason for our inability to overcome the problems of the Standard Model is that we still don’t understand quantum mechanics, and in particular the measurement problem. How do you think progress in this area is likely to take place? Is it a conceptual/philosophical shift that needs to happen, or do we simply have to learn more physics about how the quantum world works?

The Standard Model is a quantum field theory, which is basically a more difficult version of quantum mechanics. But it still requires the basic ingredients of quantum mechanics, notably what’s known as the “measurement postulate” (aka the “collapse of the wave-function”). Particle physicists often forget about this because it doesn’t explicitly appear in their mathematics – for a particle physicist, the experiment is done once the particle is in the detector, so there’s no need to update the wave-function. They just calculate probabilities and that’s that. But it’s still the case that the particles we produce in an LHC collision go into a detector, and yet the detector never picks up the quantum properties the particles had. We don’t know why. Somehow detectors make quantum effects disappear.

In quantum mechanics, we just postulate that a “detector” does certain things to quantum objects, but we don’t know why and we can’t even tell exactly what a “detector” is. Notice how odd this is: We think that the Standard Model describes all elementary particles, and everything is made of those particles. It should tell us what a detector does. And yet it can’t. It’s not just that we don’t know how to do it, we know that quantum mechanics can’t describe it correctly. We know from observations that the measurement process is non-linear, whereas quantum mechanics is a linear theory. Where does the non-linearity come from? (For the same reason, btw, quantum mechanics has problems reproducing classical chaos.)
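
For readers who want the point about linearity spelled out, here is a minimal textbook-level sketch (my summary for illustration, not Hossenfelder’s wording): the Schrödinger equation is linear in the wave-function, while the measurement update is not.

```latex
% A minimal sketch of the contrast described above (standard textbook
% quantum mechanics, summarized for illustration; not a quote from the interview).

% 1) Unitary (Schroedinger) evolution is linear:
\[
  i\hbar\,\partial_t \psi = \hat{H}\psi
  \qquad\Longrightarrow\qquad
  \text{if } \psi_1 \text{ and } \psi_2 \text{ are solutions, so is } a\psi_1 + b\psi_2 .
\]

% 2) The measurement postulate ("collapse of the wave-function") is not linear:
%    for an observable with orthogonal projectors P_i, the state is updated as
\[
  \psi \;\longrightarrow\; \frac{P_i\,\psi}{\| P_i\,\psi \|}
  \qquad\text{with probability}\qquad
  p_i = \| P_i\,\psi \|^{2} .
\]

% Both the normalization and the probabilistic selection of the outcome i break
% linearity -- this is the tension with the linear Schroedinger evolution that
% the answer above points to.
```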

What we need to solve this problem is a better theory that underlies quantum mechanics, one that allows us to calculate what happens in a measurement and one that explains what is, and what isn’t, a detector. (And that theory has to be non-linear.)

The thing is, though, that this distinction between detector and not-detector doesn’t become relevant at high energies; it becomes relevant at the boundary between the microscopic and the macroscopic. You don’t test this parameter range with big colliders; instead you test it in the laboratory by miniaturizing measurement devices and by creating larger quantum objects under controlled conditions. Both of these directions are already being pushed, so I am hopeful that we will find evidence for deviations from quantum mechanics in the near future.

The theory-development for physics beyond quantum mechanics is severely lacking at the moment. There’s a lot of opportunity here for young physicists to leave their mark.

___

That’s basically what’s been going on in the foundations of physics for the past 50 years, we’ve just been repeating the same mistakes because we didn’t learn from history.

___

You’ve previously talked about how the contempt that many physicists have for philosophy – and for the philosophy of science more specifically – blinds them, making them unable to see how they misuse concepts like “evidence” or “proof”. Could you give some examples of what you have in mind here?

The naturalness disaster in particle physics is a good example. Naturalness is a particular type of mathematical beauty, and it was the reason why particle physicists thought the LHC would see dark matter particles and supersymmetric particles. You may remember the headlines about this.

Somewhat distressingly, particle physicists are still claiming that there is some “evidence” speaking in favor of naturalness, ironically often in one breath with admitting that it’s led to countless wrong predictions. The “evidence” that they refer to, upon closer inspection, doesn’t actually have anything to do with naturalness. I believe that if they were more careful in stating the assumptions of their arguments, as mathematicians are, then this disaster wouldn’t have happened in the first place, and they wouldn’t now all look like idiots because all those particles they’ve been talking about never materialized at the LHC. As I keep saying, no proof is better than its assumptions, and the issue with arguments from naturalness is that they put the conclusion into the assumption without noticing it.
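
For readers unfamiliar with the argument under criticism, here is the textbook version of the naturalness reasoning about the Higgs mass, in schematic form (my summary for context, not part of the interview):

```latex
% Schematic form of the standard naturalness argument for the Higgs mass
% (textbook reasoning summarized for context; not a quote from the interview).

% Quantum corrections to the Higgs mass squared grow with the cutoff scale
% Lambda up to which the Standard Model is assumed to hold:
\[
  m_H^2 \;\sim\; m_{H,\mathrm{bare}}^2 \;+\; c\,\frac{\Lambda^2}{16\pi^2},
  \qquad c = \mathcal{O}(1).
\]

% If Lambda is taken near the Planck scale (~ 10^{19} GeV), reproducing the
% observed m_H of about 125 GeV requires the two terms to cancel to roughly
% thirty orders of magnitude. Declaring such a cancellation "unnatural" is the
% assumption; "new particles not far above the TeV scale" is the conclusion
% that, as argued above, is effectively built into that assumption.
```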

Since I am rambling already, let me mention that a lot of people who only read the title of my book “Lost in Math” came to believe I am criticizing physicists for using too much mathematics. To the very contrary, I think physicists do not take mathematics seriously enough. They should make much more of an effort to construct clean arguments; then they wouldn’t “get lost”.

It’s often said that one of the main differences between science and philosophy is that philosophers have a lot to gain by studying the history of philosophy, whereas scientists have nothing to gain by studying the history of science. Do you agree with that?

No, I don’t agree. It’s also often said that those who don’t learn from history are forced to repeat it and I think that’s correct. That’s basically what’s been going on in the foundations of physics for the past 50 years, we’ve just been repeating the same mistakes because we didn’t learn from history.

 

One of the things that Kuhn revealed by looking back at the history of science, was that scientists often end up postulating the existence of various entities or phenomena in order to cling on to the received paradigm. Do you see this happening with contemporary physics?

Sure, I guess dark matter and dark energy are examples of this. I don’t a priori think that’s a bad thing to do. But all this talk about paradigm shifts is somewhat useless for scientific practice. I think it’s more useful to ask, like Lakatos did, what research programs are progressing and which aren’t. The dark matter research program, unfortunately, isn’t progressing at all at the moment, because whatever data come up, astrophysicists can find a way to “explain” them in hindsight. So the dark matter paradigm has become unpredictive. Doesn’t mean it’s wrong, but it’s no longer progressing.

 

Where do you see the future of theoretical physics? Are we going to continue being stuck for a few more decades, or are we edging closer to something new?

I am hopeful that in the next two decades we will either find evidence for weak field quantum gravity or for deviations from quantum mechanics, or both. Both of these are required due to severe shortcomings in the current theories, and both of them should become testable in the near future if technological trends continue.

There is also some truth, of course, in the quip that scientific ideas don’t die, it’s just the people who defend them who die. I see that the students and young postdocs today want to try new things, and that gives me hope, too. Twenty years ago, the phenomenology of quantum gravity was basically a non-existent field, and quantum foundations was the realm of philosophers that, as a physicist, you were better off staying away from. Both of those things have changed dramatically, and there’s a lot of potential in that development.

 

*Questions by Alexis Papazoglou, editor for IAI News, the online magazine of the Institute of Art and Ideas, and host of the podcast The Philosopher & The News.*