Is the cat in the box alive or dead? In fact, until we look at it, both.

By the mid-1920s, Albert Einstein, Niels Bohr, Werner Heisenberg and others had constructed a cohesive new theory of physics called quantum mechanics. In this revolutionary new understanding of the interactions of fundamental particles, properties like momentum and position could no longer both be specified with full precision at the same time. Nor can we know exactly what the outcome of a measurement will be; we can only give probabilities. For instance, we can't say with certainty that an electron will be found at location L, only that the probability of finding it there is 50%.
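
To make that concrete, here is a minimal Python sketch of the standard quantum recipe for probabilities (the Born rule): a state assigns each possible outcome a complex 'amplitude', and the probability of an outcome is the squared magnitude of its amplitude. The electron-at-two-locations setup and the variable names are illustrative, not drawn from any particular experiment.

```python
import random

# Electron in an equal superposition of two locations, L and R.
# Each location gets a complex amplitude of 1/sqrt(2).
amplitudes = {"L": complex(1 / 2**0.5, 0), "R": complex(1 / 2**0.5, 0)}

# Born rule: probability = |amplitude|^2.
probabilities = {loc: abs(a) ** 2 for loc, a in amplitudes.items()}
print(probabilities)  # ~0.5 for each location

# A single measurement still yields one definite outcome, chosen at
# random with those probabilities; only the statistics are predictable.
outcome = random.choices(list(probabilities),
                         weights=list(probabilities.values()))[0]
print("measured at:", outcome)
```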

For the first decade or so of quantum mechanics, the theory's founders argued about how to interpret its peculiarities. Einstein and others, including Erwin Schrödinger (of cat-killing fame), strongly disliked the probabilistic nature of the theory and did their best to recast it in as classical a way as possible. They believed everything should be deterministic and predictable, provided we had enough information.

In 1935, Einstein, Boris Podolsky and Nathan Rosen (collectively known as EPR) published an argument that quantum mechanics had to be an incomplete theory. They derived an apparent paradox from the predictions of the framework, which they argued implied there was more to quantum mechanics yet to be found. They used a phenomenon called quantum entanglement to show that quantum mechanics seemingly allows a measurement in one place to instantaneously influence a measurement made far away, and they argued that such an instantaneous influence would violate the universal speed limit, the speed of light. This feature is referred to as 'nonlocality'.

The missing pieces of quantum mechanics, they implied, were so-called 'hidden variables'. Imagine a die being thrown onto a table. The outcome seems random to us, but in fact the way the die will fall is perfectly predictable by classical physics. The hitch is that we cannot gather enough information to predict how it will land. We would need to know the velocity and angle at which it was thrown, its height above the table, the orientation of the die, the hardness of the table (to know how much it will bounce), and so on. Because it is logistically impossible to collect all those data, the outcome appears random, but in reality these 'hidden variables' are calling the shots. Einstein believed that something similar was happening with quantum mechanics, and with the cat.
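
As a toy illustration (an analogy, not physics), the sketch below plays the hidden-variable role with a random-number seed: a single integer standing in for all the physical details of the throw. Whoever knows the seed can predict every roll exactly; whoever doesn't sees pure randomness.

```python
import random

def roll_die(hidden_variable: int) -> int:
    # The seed stands in for the full physical state of the throw:
    # velocity, angle, height, table hardness, and so on. Given the
    # same state, the outcome is perfectly deterministic.
    return random.Random(hidden_variable).randint(1, 6)

# With access to the hidden variable, the 'random' roll is reproducible.
print(roll_die(42), roll_die(42), roll_die(42))  # same face every time

# Without it, we can only tally statistics, much as with quantum
# probabilities: each face turns up about 1/6 of the time.
rolls = [roll_die(random.getrandbits(32)) for _ in range(10_000)]
print({face: rolls.count(face) / len(rolls) for face in range(1, 7)})
```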

Fast forward to the 1960s, and most physicists had forgotten the woes of those early quantum theorists and given up attempting to interpret the theory, because they had realised just how useful it was. They were prepared to accept that it simply works, and set out to create lasers, transistors and other devices, sparking a digital revolution. Quantum mechanics has paid off handsomely as an experimental theory.

Then, in 1964, John Bell, a physicist doing respectable but unrelated work at CERN, published a paper in which he showed that no theory with local hidden variables can reproduce all of the predictions of quantum mechanics. His result, Bell's Theorem, has been hailed as one of the most important scientific discoveries of the 20th century, and it reignited the debate over how to interpret quantum mechanics. What Bell had shown, in effect, was that quantum mechanics (a theory with vast experimental verification) is truly uncertain at the most fundamental level. It isn't just that we aren't sure what the outcome of a measurement will be; there genuinely is no outcome until we make the measurement.
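
Bell's argument can be made quantitative using a later refinement known as the CHSH inequality. The sketch below assumes the standard quantum prediction for the correlation between spin measurements on two entangled (singlet-state) particles, E(a, b) = -cos(a - b): any local hidden-variable theory keeps the combination S at or below 2, while quantum mechanics reaches 2*sqrt(2), about 2.83.

```python
import math

def E(a: float, b: float) -> float:
    # Quantum prediction for the correlation of spin measurements made
    # at angles a and b on two singlet-state particles.
    return -math.cos(a - b)

# Measurement angles that maximise the quantum violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# The CHSH combination: local hidden variables bound |S| <= 2.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(f"|S| = {abs(S):.3f}")               # 2.828, i.e. 2*sqrt(2)
print("local hidden-variable bound: 2")    # quantum mechanics exceeds it
```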

So, before we measure the cat, it truly is both alive and dead. Peculiar.

Alex Paviour
University of Wollongong
