
Proton Decay

1. Stability


Everything flows, and nothing lasts - Heraclitus

The universe is estimated to be around 13.8 billion years old [1], which is a long time by any reckoning, long enough for multiple generations of stars to come into being, shed their light into the darkness, and burn out in various ways. Long enough for galaxies to form and even meet and merge with others, forming further generations of galaxies. And long enough, of course, for life to emerge in at least our own corner of the cosmos, evolution lumbering forward to eventually allow us to exist and marvel at the sheer amount of time and effort it took to get us here.


Why is there anything at all, anyway? This is one of the biggest questions, perhaps the first on the list, but one that is often superseded by others of increasing practicality and decreasing abstraction. Importantly, the less abstract and ambitious the question, the more readily an answer comes, so that first question might well stay on the list for a good while to come. But a close second question, given an acceptance that stuff came into being at some point for some reason, is just why did that stuff hang around? And why did it hang around for so long? Is it going to hang around forever? If it started, could it end? If it came into being, could it disappear?


The cycles of nature are readily apparent in everyday life. The sun rises and sets, the moon follows suit, the seasons come and go, flora and fauna live and breed and pass things on to a new generation. Everything appears to be steadily based around cycles, supplemented by reliable constants such as the dutiful sun. What has become apparent, particularly over the last century, is that this is not quite true, in fact far from it. The cycles and constants that we see appear as they do only because of the narrow time range that we witness; the truth is actually more about decay and disintegration.


Any process that uses energy could only continue forever with an infinite energy source. Either that or you need a perfectly closed energy cycle that feeds back into the source whatever is taken out of it. But even though energy can change from one form to another and back again, in practice it is difficult if not impossible to make a perfectly closed cycle, usually because of entropy. Essentially, it is simply not possible to gather energy back together into an accessible source once it has been used, as using it usually involves fine-scale dispersion. So when it comes right down to it, the source is always trickling away, however slowly that might be. And why is this? Given half the chance, the energy of a system will always decrease. Put a ball on the side of a hill, and it will roll down. Leave a cup of hot water out, and it will cool down. Set a candle burning, and it will eventually burn out. Energy is a tendency to action, and so by its nature it can’t stay still if not perfectly confined. And if it were perfectly confined, locked up in various impenetrable reservoirs, then nothing would ever happen.

The last century has made science more aware of the various decays of nature, such as radioactivity, and of the various energy reservoirs associated with these decays. Different reservoirs have different characteristics - some can hold energy back for only a short time, while others are more secure and don’t let much slip at all. It has been known since Einstein’s great work on special relativity that mass is another form of energy [2, 3], and decades of subsequent research has led to the field known as particle physics. The best picture that we have today of the building blocks of the universe is known as the Standard Model (SM), in which there are seen to be three generations of basic matter particles, each generation being of a different mass range but otherwise identical to the others, and then there are a number of force particles that enable the matter particles to interact.

As mass is another form of energy, the matter particles are energy reservoirs, and a reservoir will always shed its contents if the option is available, down to the level of the lowest leak. More massive particles have more energy than less massive particles, and so, providing that the essential characteristics of the system remain unchanged, a heavier particle can and always will decay into a lighter particle. If there is no route to decay that can conserve the essential characteristics of the system, then the decay cannot proceed. Different matter particles are therefore inter-related and can transmute into one another, but only under certain conditions.

For such reasons we do not see the heavier two generations of matter in the natural universe: if produced in a fleeting high energy process, they rapidly decay to particles of the lightest generation. Particle accelerators allow us to put enough energy in one place at one time to produce these heavier particles just long enough to study, and many of them last for so short a time that they can only be inferred from their decay remnants, but even so, from this evidence we know that they can exist.

What we are left with, in what could be called the ground state of the universe, are four types of matter particle - up quarks, down quarks, electrons and neutrinos. The only stable quark combinations are in sets of three, which are called baryons, and of the possible options of up and down quarks only the up-down-down combination and the up-up-down combination make it into the matter of the everyday world, the former being the neutron and the latter being the proton. Only these options survive, for reasons of energetic stability. The up-up-up and down-down-down options are not favoured, comprised as they are of quarks all of the same electric charge, which mutually repel and increase the overall energy of the composite state. In fact the neutron is not truly stable either. Neutrons and protons together combine to form the nuclei of atoms, which are completed by the negatively charged electron clouds attracted to the positive protons, and it is only due to the energetics of binding neutrons and protons together that the neutron can persist indefinitely. On its own, the neutron will decay to the lighter proton typically after about 15 minutes [4].


The neutrino is by far the lightest type of particle, so why don’t the others decay to it? This introduces our first conservation law, namely the conservation of electric charge. The neutrino is electrically neutral, but electrons, up quarks and down quarks are not, so decays of the latter three particles to neutrinos alone are forbidden. Quarks are heavier than electrons, but they are mismatched in electric charge and also carry an additional conserved property that electrons and neutrinos lack, to be discussed later on. The known force particles are insufficient to maintain all conserved properties while allowing the up and down quarks to decay into electrons, and so they don’t. By this reckoning, the neutrino, electron and proton are all absolutely stable.


It was Ernest Rutherford who discovered the atomic nucleus in 1911 [5] and subsequently, in 1917, identified the component proton in work that also involved the first recognised nuclear transmutation [6]. The word "proton" first appeared in scientific literature in 1920 [7], and by 1929 it had been declared an absolutely stable particle by the theorist Hermann Weyl [8]. First Ernst Stückelberg in 1938, and then Eugene Wigner in 1949, suggested that the proton carried a new conserved quantity in order to explain its stability [8]. This came to be known as the "baryon number", and no process was seen in which this quantity altered, although there was no fundamental reason why this was so. In 1954 a team led by Maurice Goldhaber used a sample containing some 3×10^{28} protons to look for signs of this particle’s decay, but found none [9]. This gave the first experimental lower limit on the proton lifetime, calculated to be ~10^{21} years for free protons and ~10^{22} years for protons bound within nuclei.


All seemed fine with the proton until 1966, when Andrei Sakharov realised that the asymmetry between matter and anti-matter in the universe suggested that baryon number was not truly conserved, which opened up questions of proton stability [10]. This line of thinking was extended by Jogesh Pati and Abdus Salam in 1973 [11], and the question remains important and unsolved to this day. To see how proton decay could even be an option, we must consider on the deepest level what makes certain interactions possible and others not. At face value this whole issue appears to be somewhat trivial - there are obviously a great number of protons that have been around for a very long time, and there is no indication that this scenario is due to change. But if there’s one thing that we’ve learned about nature, it’s that appearances can be deceiving.


2. Impossible Possibilities


Everything is theoretically impossible, until it is done - Robert A. Heinlein

There are four basic interactions, or forces, that are currently known to exist on the present energy scale of the universe: the electromagnetic (EM) force, the weak force, the strong force, and gravity. Every matter particle (fermion) has certain properties, certain charges, that make it participate in the various interactions via the exchange of a force particle (boson). The EM force is related to electric charge, often simply called the charge, and the boson mediating the EM force is the photon, the particle of light, which lets charged particles interact but does not transfer charge between them. This is because the photon itself is electrically neutral; it does not carry the electric charge. The weak force is related to what could be called the weak charge, although this is often not explicitly spoken of, and this force has electrically charged bosons, labelled W+ and W-, and also an electrically neutral boson, labelled Z. Due to the W bosons the weak force can permit interactions that alter electric charge, unlike the EM force. The strong force is related to what is called colour charge, and there are eight types of boson called gluons that mediate it. The gluons themselves carry colour charge, so interactions via gluons alter colour charge, just as interactions via W bosons alter electric charge. Gravity is related to mass, and mass is therefore the gravitational charge. No mediator is known for gravity and none has been isolated in any experiment, although one is assumed to exist, on the assumption that gravity is fundamentally a force like any other and so is mediated by some particle, referred to as the graviton.


Matter particles are subdivided into "quarks" (e.g. the up quark and the down quark) and "leptons" (e.g. the electron and neutrino). Quarks carry all of the four charge types and so interact via all of the four forces, but leptons carry all except the colour charge, so they participate in all except the strong interaction. It is this difference that distinguishes the quarks from the leptons - particles are what they are by the nature of their charge types and so the interactions they obey.


All charges except mass are conserved in particle interactions, amongst the fermions and the bosons together, and we have multiple routes of particle transmutation. A single W boson can carry away the electric charge difference between the final and initial states of some fermion, with the W boson itself then decaying into fermions that present the charge back to the material world once again. It is precisely because these bosons carry electric charge that such occurrences are possible. If no boson exists with the right charge combination required by a certain fermion transition, then that transition cannot happen. The known force bosons are those already mentioned, and the interactions they allow have been well studied. Nothing seems to be amiss, on first inspection.


But we are allowed to speculate. We can speculate that there is more to the rules than is readily apparent, that there are other bosons, and so other interactions, that would allow certain processes to occur that our known set of bosons does not. That’s not necessarily to say that there are other types of charge; we could already know the full set of charges - the electric charge, weak charge, colour charge and mass charge. That might be it, or it might not. But even with this known charge set we could still have unknown interactions, given the possibility of unknown bosons with different combinations of the various charge types already in hand. But why consider such things at all? We must because we know that higher energies than those of the known interactions are possible, and so, perhaps, there could be other as-yet-unknown physical processes that occur at higher energies. Much of the possible energy scale is unexplored, and that opens up a wealth of possibilities.


It takes a certain amount of energy to make a certain type of particle, depending on its mass. Photons are massless, and so can be made over a wide range of energies. A massive particle, however, has a very specific rest mass, and so a very specific creation energy. If enough energy is concentrated at a point, then various particles of the equivalent total energy can be produced, either fermions or bosons, with production occurring in such a way as to balance all conserved quantities as necessary. Given the right amount of energy, we have a chance of producing any of the fermions alongside the EM photons, the weak force W and Z particles, and the strong force gluons.


The greater the energy a certain boson would take to be created, the smaller the chance of the associated interaction taking place between a pair of fermions. That’s not to say that a boson is actually created when its interaction takes place, as such. It is often said that energy is "borrowed" from the vacuum to enable a virtual boson during an interaction, with the borrowing time being inversely proportional to the boson’s required energy of real production, as dictated by the so-called energy-time uncertainty principle. As the virtual boson’s existence time is limited, so too is the distance it can travel, and so interactions with more massive bosons have a more limited spatial range, meaning that fermions must be closer to give them a better chance of interacting. As two fermions approach one another, the chance of interactions associated with less massive bosons is greatest at any given separation, but there is an increasing probability of interactions with more massive bosons the closer the fermions become.


Given these arguments, we can conceive of the possibility of bosons much heavier than those presently known, which therefore require fermions to be much closer to have a reasonable chance of interacting via such new mechanisms. The probability of the fermions getting to such small separations without having undergone one of the known interactions, with their lighter bosons that enable a virtual reach over a larger range, is exceedingly small, but still possible. In other words, it is possible that there are processes which are so rare that we simply don’t notice them.
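To put rough numbers on this picture, the reach of a virtual boson can be estimated as the distance light covers within the borrowing time, giving a range of roughly ħc/(mc^2). The following minimal Python sketch is purely illustrative (the 10^{15}GeV value is an assumption here, anticipating the super-heavy bosons discussed in the next section):

# Rough range of a force mediated by a virtual boson of mass m:
# the energy-time uncertainty principle allows a borrowing time ~ hbar/(m*c^2),
# so the reach is roughly R ~ c * hbar/(m*c^2) = (hbar*c)/(m*c^2).

HBAR_C_GEV_FM = 0.1973  # hbar*c in GeV*femtometres (1 fm = 1e-15 m)

def boson_range_m(mass_gev):
    """Approximate interaction range in metres for a boson of the given mass."""
    return HBAR_C_GEV_FM / mass_gev * 1e-15

print(boson_range_m(80))    # W boson (~80GeV): ~2.5e-18 m
print(boson_range_m(1e15))  # hypothetical super-heavy boson: ~2e-31 m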


Let’s take this chance, and make a proposal. Let’s say that there could be a boson that carries the charges associated with both quarks and leptons, which is so super heavy that it hardly ever provides an interaction, but when it does it acts to carry away the quarkness from a quark or the leptonness from a lepton, replacing one characteristic for the other and so either changing a quark into a lepton or a lepton into a quark. This would be a game changing particle indeed, as what may once have been taken for granted as established fact, such as the stability of the particles that make up the material universe around us, may simply not be true at all. It may well be that the proton actually does have a decay channel to lighter particles, which would require such a quark-lepton transformation. The material universe itself would then not truly be stable, but rather just burning a very long fuse.


3. Grand Ideas


If knowledge and foresight are too penetrating and deep, unify them with ease and sincerity - Xun Zi

Some of the most powerful statements to come from physics research are often as subtle as they are profound, and it takes great effort and insight to find such diamonds in the rough. When they are found though, they are truly paradigm shifting, and our understanding is fundamentally altered to never be the same again, sometimes with completely unforeseen consequences. What is often a key element to a great discovery is a connection between two things that previously seemed unrelated. Each such discovery acts to unify the laws of nature into something more crystalline, which appears to be tending towards an underlying pattern that may be even more simplified, even more symmetrical.


It is well known that electricity and magnetism come together as electromagnetism, both being aspects of the same basic force mediated by the photon. The photon itself is what we recognise as forming the spectrum of visible light, and we now know that this is just one part of a much broader electromagnetic spectrum, and so the field of optics, once a field of study in its own right, also comes under the electromagnetic category. Special relativity taught us that time and space are interrelated, which brought with it the relationship between mass and energy [2, 3]. No-one could have foreseen such a connection prior to Einstein’s insight, and our view of nature was changed again. Quantum mechanics brought with it a blurring of the concepts of wave and particle, debated for centuries past as being separate absolute truths, but giving another profound shift when brought together.


Then came work on mathematical symmetry, which started to outline just why things are as they are, why certain conservation rules simply must exist if physics is to be constant throughout time and space [12]. Symmetry has led the way as the underlying concept of unification, as the pattern behind the rules that float on top and present themselves in the world around us as physical law. In terms of the fundamental forces, the unification of electricity and magnetism as one was just the first of what came to be understood as a symmetry of nature in terms of "group theory". Electromagnetism is said to have so-called U(1) group structure in its mathematical description, the weak force was found to have SU(2) group structure, and the strong force has SU(3).


Then came the next paradigm shift. Work by Sheldon Glashow, Steven Weinberg and Abdus Salam in the 1960s led to the understanding of unified EM and weak forces, these being different aspects of the same thing from a certain perspective [13–15]. The EM photon and the weak force W and Z bosons were identified as members of the same collective family, and so, at a high enough fermion interaction energy, the chance of the fermions undergoing an EM-type interaction is the same as undergoing a weak-type interaction. The weak force, in other words, is not so weak when the interaction energy is high enough, and comes to match the EM force that otherwise dominates at lower energies. In mathematical terminology, the U(1) EM symmetry group and the SU(2) weak symmetry group are different aspects of an overarching SU(2)×U(1) symmetry group, which is said to be a "broken" symmetry at low enough energy, thus presenting different aspects of its character in different ways, i.e. the strengths of its EM-force and weak-force components become distinct at lower interaction energy scales.


But how is the SU(2)×U(1) symmetry broken? Why does it fragment at all into different aspects? As mentioned previously, the range of a virtual boson, and so its associated interaction strength and, therefore, its associated interaction probability, is determined by the energy that would be required to produce that boson as a real, non-virtual particle. Forces are seen as distinct if their interaction strengths/probabilities are distinct, from having bosons with different production energies, that is to say different masses. It was realised that the EM photon and the weak force W and Z bosons could nonetheless be described as members of the same family, just that the former particle was massless and the latter three particles were massive, thereby making them appear to be distinct and unrelated. The symmetry breaking mechanism by which the W and Z bosons gain their mass is the now famous Higgs mechanism, with its own associated boson, the Higgs boson, recently confirmed as having been produced as a real particle in experiments at the Large Hadron Collider (LHC) at the CERN lab in Geneva [16, 17]. This experimental result was the last piece of the puzzle needed to verify the electroweak unification theory, which now all fits together spectacularly.


It was not long after the proposal of unifying the EM and weak forces that attention turned to the strong force. It was the next natural step to suggest that there could, and perhaps should, be an interaction energy at which the electroweak and strong forces have the same interaction strength, and that these forces are indeed just two aspects of another level of force unification [18]. The problem is that, whereas the electroweak force acts on both quarks and leptons, the strong force acts only on quarks. If there is to be true force unification then there would have to be a boson of mixed character, carrying electric charge, weak charge and colour charge, in order to enable a unified dialogue between all fermions [19]. Furthermore, this boson would have to be super heavy in order for its effects to be so small as to be completely unseen at the energy scales currently probed by experiment. So whereas the EM and weak forces are separated by boson mass and electric charge, the electroweak and strong forces may be separated by boson mass and electric charge and colour charge. Electroweak symmetry is said to be broken by the Higgs mechanism, and the electroweak-strong symmetry is said to be broken by the parallel concept of the so-called super-Higgs mechanism [20]. At least in one stream of theoretical thinking, that is.


A theory that proposes to unify the electroweak and strong forces is referred to as a Grand Unified Theory (GUT). The first such GUT was created by Howard Georgi and Sheldon Glashow and was published in 1974 [21]. They looked at a number of different symmetry groups, specifically rank-4 local Lie groups that had a chance of encompassing the SU(3)×SU(2)×U(1) symmetry group of the Standard Model, which was itself still being established at the time, and, by a process of elimination, they concluded that the SU(5) symmetry group was the simplest option that could work. This model utilises the following representations for the fermions [22]:
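In the usual convention (sketched here for orientation, since signs and orderings vary between references), each generation fills a five-dimensional representation \bar{5} and a ten-dimensional representation 10. The \bar{5} is a column containing the three colour states of the anti-down quark together with the lepton doublet, \bar{5} ~ (d^c_1, d^c_2, d^c_3, e^-, -ν_e)_L, while the 10 is an antisymmetric 5×5 matrix holding the anti-up quark u^c in its upper-left 3×3 block, the quark doublet (u, d) in the off-diagonal blocks, and the positron e^+ in the bottom-right corner.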



and the group generators, describing the interaction bosons, are [22]:
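Schematically (again in the standard convention, sketched here rather than reproduced exactly), the 24 generators can be arranged as a 5×5 traceless Hermitian matrix of block form

( gluons (3×3)              X, Y (3×2) )
( \bar{X}, \bar{Y} (2×3)    electroweak bosons (2×2) )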



The important thing to note about the generator matrix is that the top left quadrant represents the strong interaction, the bottom right quadrant represents the electroweak interaction, and the remaining two quadrants represent the new grand unified interaction. We see that there are 12 objects for this new interaction, labelled as X and Y, which are the new form of boson and are called leptoquarks. Six of the leptoquarks in the matrix are particles and the other six are their antiparticles. Three of the former set have electric charge -1/3e and three have electric charge +4/3e, and the antiparticles are the same except that they carry equal and opposite charges [18, 23].


In the previous section we discussed the hypothetical situation of having massive bosons that enable the interchange of quarks and leptons, and we see now that this is justified as a necessity for grand unification in the Georgi-Glashow scheme. But not only that, as it turns out that most other realistic GUTs actually have this feature too. We are left with one inescapable conclusion - should such theories be true of nature, then the proton is an unstable particle, for the leptoquarks give it a number of valid decay channels [24–26]. This would all be a fine thing for the mathematical symmetry that a GUT would give us, and our experience so far suggests that nature does tend towards such greater symmetry under higher energy conditions, but this is a somewhat biased view of nature, which has no obligation to fulfil any such promises that mathematics offers on its behalf.


But there are other suggestive clues based on experimental observations, the strongest of which is that interaction strengths do indeed vary with energy in the way suggested by grand unified theories [24, 26]. Certain models predict that the EM, weak and strong coupling strengths tend towards the same value at an energy somewhere in the region of 10^{15}GeV, at which point these forces can be considered as unified. It is therefore these sorts of energies that would be required to produce real leptoquarks. Although large by the energy standards of the universe today, such values are supposed to have prevailed 10^{−43} - 10^{−35} seconds after the big bang, at which time the electroweak and strong forces would have been unified [18]. Not only that, but certain models suggest that it was the decay of the primordial leptoquarks that gave the first quarks and leptons, which then became the material universe [18].


By comparison with the ~80GeV W boson and the ~90GeV Z boson of the weak force, we can see that ~10^{15}GeV leptoquarks are a true leap away in their sheer magnitude. As discussed earlier, the probability of an interaction is proportional to its strength, which itself is inversely proportional to the energy that would be required to produce a real version of its boson. Given the disparity between the weak bosons and the leptoquarks in this respect, it can intuitively be seen just how unlikely a leptoquark mediated reaction is. Crucially, the known longevity of the proton fits right in with this picture.


There is one more piece to unification, which is another symmetry argument and one that seems to be necessary. We know that the quarks can transmute amongst themselves, as can the leptons, under currently known Standard Model physics, and that the EM and weak forces are coupled. We have proposed that a higher symmetry exists between the electroweak and the strong forces, enabling the interchange of quarks and leptons via leptoquark bosons. But another possible reflection in the mathematics that seems unfulfilled is that between the fermions and the bosons themselves.


Fermions and bosons are distinguished by their inherent angular momentum, their spin, with fermions having a half integer multiple of the fundamental spin unit and bosons having a whole integer multiple. So let's make another proposal, that the universe can produce particles like the currently known fermions and bosons but with the opposite spin properties, thereby giving every currently known fermion an equivalent boson partner and vice versa. This symmetry is called "Supersymmetry", or "Susy" for short. The new particles that supersymmetry requires would allow for prompt proton decay channels, giving the proton a lifetime of around a second, which we know is not true [24]. But we can introduce a new conservation law called R-parity that inhibits such decays, while still allowing us to retain the idea of supersymmetry as viable. But why should we even want to complicate things further with such an additional unproven symmetry? The reason is that only with Susy in the mix do the coupling strengths of the EM, weak and strong interactions actually unify properly with our current understanding of how they vary with energy [24]. So if we are to have true grand unification, it would seem that Susy may be a prerequisite condition.


Figure 1: Proton decay via the supersymmetric particle \tilde{S} if R-parity is violated [24]

The Georgi-Glashow model in its original, minimal, non-Susy form does not then allow for true grand unification on its own, although it does have a viable mathematical structure [25, 27]. Furthermore, the original model predicts that neutrinos are massless, which can even be said to be a more significant problem as this is now known to be untrue [25]. The neutrino issue is one that is apparent from data in hand, which therefore has a stronger say than any speculation about what happens to force strengths at inaccessibly high energies. But despite any shortcomings, the Georgi-Glashow model is still regarded as a triumph of mathematical reasoning and simplicity, and has been used as a guide if not the basis for other GUTs.


This first SU(5) theory has therefore been ruled out as a viable option, but Susy SU(5) is still on the cards, amongst other options that have since developed. Another piece of supporting evidence for Susy SU(5), or something like it, is that it makes a prediction for a quantity known as the Weinberg angle θ_W of sin^{2}(θ_W) ≈ 0.23, which is a good match to the value obtained through experimental means, better than that from the non-Susy SU(5) theory of sin^{2}(θ_W) ≈ 0.20 [27]. For studies that deal with extremely high energy scales, any numeric prediction that is testable given current technology is highly valuable, and essential if such ideas are to have any kind of credibility. The Weinberg angle was in fact the only numeric prediction made by the original Georgi-Glashow proposal, and is therefore one of the main benchmarks by which other theories size up to it.


Figure 2: Force coupling strengths (relative scale). Dashed lines: Without Susy; Solid lines: With Susy. [24]

Aside from Susy SU(5), another notable option is a class of models with SO(10) group structure. Although being a further abstraction, these theories provide greater flexibility for incorporating the Standard Model, they offer a natural way by which neutrinos obtain mass, and they also incorporate further means of suppressing proton decay [25, 27, 28]. The greatest proton lifetimes are therefore found within many SO(10) predictions. The simplicity of the SU(5) type of model is still seen as attractive though, and this type is focussed on herein.


Other than the possibility of natural law having a more compact underlying form, rooted in a single force, grand unification could also hold the key to unlocking a number of puzzles that are far from the GUT energy scale. Proton decay is one practical effect of GUTs, but in some sense this does not present us with anything that is fundamentally new, it simply takes current ideas to greater heights, extending the spectrum of particle decays. But there is one other prediction of GUTs that tackles a much more fundamental puzzle, a new stream in its own right, and that comes back to electromagnetism, the most established force theory.


Our present understanding of electromagnetism starts and ends with the electric charge. Static charges are point sources of electric field, and charges in motion have a relativistic effect that is seen as magnetism. But applying symmetry ideas to Maxwell’s equations, the celebrated basis of electromagnetic theory, suggests the possibility of having not only electric point sources, but also magnetic point sources, called magnetic monopoles. Far from being just another type of particle to add to the list, having magnetic monopoles as part of physical law provides a mechanism by which electrically charged particles have the charges that they do [27]. The quantisation of electric charge into the exact and unwavering divisions that are always seen is otherwise not understood, it is simply taken as axiomatic within quantum electrodynamics (QED), which is the language of EM interactions in the standard model [21, 25].


This relationship between charge quantisation and magnetic monopoles was realised by Dirac in 1931 [29], and is of immense theoretical importance. However, despite a constant search in particle physics experiments, not a single magnetic monopole has ever been found, yet there is nothing to say that they shouldn’t exist, as far as the maths is concerned. The current status might just be how it is, which would have to be accepted. But it turns out that, as well as being linked to proton decay, most GUTs are also intertwined with the notion of magnetic monopoles, and so these ideas are mutually supportive [18, 24, 27]. Not only that, but it was shown that magnetic monopoles could actually catalyse proton decay, so witnessing a proton decay could in some cases even be a dual discovery [18, 30]. If GUTs tell us that we should have magnetic monopoles, and Dirac tells us that it’s good for us to have them, then all is good - that is, of course, providing that grand unification is indeed true of reality, which requires experimental proof.


And therein lies the catch. The grand unification energy of around 10^{15}GeV is far beyond the reach of today’s particle accelerators. At the time of writing, the LHC is preparing to make the highest energy particle collisions that have been produced to date, at around 13TeV, but this is still far, far below the grand unification scale. To directly probe the grand unification scale in a controlled experiment would require, for example, a linear accelerator that spans a distance equal to the Earth-moon separation, given today’s technology [18]. And, despite the dangers of predicting future capabilities, it still might be safe to say that improving technology to the point of having a reasonably sized accelerator with GUT-scale capability would be a long time in coming. If at all.


The fine prediction of the Weinberg angle from Susy SU(5) considerations is a fine achievement, but hardly conclusive evidence of the reality of grand unification. Magnetic monopoles seem to elude detection, leaving us with the proton as the only object we have in abundance that could tell us something, if only one of them would decay when we’re looking. But the proton seems to like staying just as it is most if not all of the time.


Let’s look at the numbers. Higher boson mass means weaker interaction strength means lesser interaction probability. The leptoquark masses are of the order of the 10^{15}GeV GUT scale [18], making them far heavier than any of the known bosons, so making GUT interactions far less likely than the known interactions, as has been said. But what is that in concrete terms? It is found that the minimal non-Susy SU(5) model leads to a predicted proton lifetime of 10^{31±1} years, while non-minimal Susy SU(5) and SO(10) models predict 10^{32−35} years [22, 26, 27], with some variants being even higher. Whichever way you look at it, those are big numbers. The chance of a single proton decaying as we watch it is therefore virtually non-existent. And yet so much rides on proving this decay one way or the other, as far as we wish to understand physics, as it is one of the very few ways that we may be able to test the untestable. So do we really just have to hold our patience forever?
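Where numbers of this size come from can be seen with a rough dimensional estimate: leptoquark exchange suppresses the decay rate by four powers of the leptoquark mass, so in natural units the lifetime scales as τ ~ M_X^4 / (α^2 m_p^5), with α the unified coupling strength. A minimal Python sketch of this estimate follows; the value α ≈ 1/40 is an illustrative assumption, not a figure quoted above:

# Rough dimensional estimate of the proton lifetime from leptoquark exchange,
# tau ~ M_X^4 / (alpha^2 * m_p^5), in natural units (hbar = c = 1).

GEV_INV_TO_SEC = 6.58e-25   # 1 GeV^-1 expressed in seconds
SEC_PER_YEAR = 3.15e7

M_X = 1e15          # leptoquark mass in GeV (the GUT scale)
M_PROTON = 0.938    # proton mass in GeV
ALPHA_GUT = 1 / 40  # illustrative unified coupling strength

tau_gev_inv = M_X**4 / (ALPHA_GUT**2 * M_PROTON**5)   # lifetime in GeV^-1
tau_years = tau_gev_inv * GEV_INV_TO_SEC / SEC_PER_YEAR

print(tau_years)  # of order 1e31 years, in line with the minimal SU(5) figure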


4. The Waiting Game


If you are not too long, I will wait here for you all my life - Oscar Wilde

We could take a proton and stick it in a box on its own and watch it for a very long time. One day, perhaps, if the ideas of grand unification are true, it will disintegrate. But our research grant will probably have run out by that time, and the meaning of the box, if still lying around somewhere in the burnt out galaxy, will be long forgotten about anyway.


Decays, though, are all about probability, and statistics therefore plays a role. By definition, the lifetime of an unstable particle is the time it would take a large sample of them to fall to e^{−1} (about 37%) of the original number, the decline being exponential rather than abrupt. With N(t) being the number of undecayed particles at time t, the lifetime τ is then defined by:


N(t) = N(t = 0)e^{−t/τ}


The lifetime reflects the probability of any one particle decaying at any one time, meaning that even well within the span of the lifetime, even within a minuscule fraction of it, there is still a chance that a particle will decay. And even though an individual particle may have only a tiny chance of decaying within a time window that is small compared to the lifetime, each and every particle has the same probability too, and all particles are constantly rolling their dice to see if their time is up. The more particles there are in a sample, the more chance there is that one of them will decay at any one time.


It is this probabilistic nature of decays that offers us a means of testing such things that, at face value, may seem to require unfathomably long amounts of time. We simply need to watch many particles simultaneously, and evaluate what fraction of them undergo the transition we’re looking for over a manageable time frame. We start with a predicted lifetime from a theoretical model, which leads to a decay probability, which in turn leads to a number of particles that the model says should decay within a given time frame. We then evaluate the practicalities of running an experiment with a smaller number of particles over a longer time frame, or a larger number of particles over a shorter time frame, which achieves the same exposure effect. Such an experiment can be built and monitored, and we wait for a decay.


Regarding the proton, it is unknown in advance exactly how long we will have to wait, for two reasons: 1) we don’t know how close the predictions are to reality, and 2) we don’t know if the decay can even happen in reality, which is actually just the extreme case of the former reason. If we rule out the second of these options, then a null result for an exposure that was predicted to have a positive result tells us that the model used for the prediction is questionable. If a model predicts, say, one decay per year for a given sample size, but we see no decays within one year, then our confidence in the model begins to falter. Statistics doesn’t rule it out entirely though. But if we wait for two years, three years, more years, and still see no decays, then the model falls further and further out of favour.


Predictions for the proton lifetime range up to around 10^{38} years [24, 30], and, as mentioned, this quantity can be seen in a number of ways. Most usefully for experimental purposes, it tells us that with a sample of 10^{38} protons, one of them will probably decay within about a year. To evaluate this upper lifetime limit, then, we would need to monitor 10^{38} protons for one year at least, and preferably many more. Each and every year there should be one decay on average; some years maybe none, some years maybe two or even more - it is a game of chance. The more years we monitor, the better our understanding of the statistics involved. We find that 10^{38} is a lot of protons even for a large machine, as will be seen, but equivalent experiments can be done with half the number of protons and double the time, or a quarter the number of protons and quadruple the time etc., which makes things more manageable.
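The bookkeeping here is Poissonian, and for observation times far shorter than the lifetime the expected number of decays is simply the number of protons multiplied by the fraction of a lifetime that has elapsed. A minimal Python sketch, using the illustrative figures above:

import math

# Expected number of decays from N protons watched for time t, given lifetime tau.
# expm1 keeps precision when t/tau is tiny, where the result is ~ N * t / tau.

def expected_decays(n_protons, t_years, tau_years):
    return n_protons * -math.expm1(-t_years / tau_years)

def prob_at_least_one(mean):
    """Poisson probability of seeing one or more decays."""
    return -math.expm1(-mean)

mu = expected_decays(1e38, 1.0, 1e38)
print(mu)                     # ~1 expected decay in a year
print(prob_at_least_one(mu))  # ~0.63 chance of actually catching one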


And so it goes with proton decay experiments, of which there have been many since the ideas of grand unification started to arise in the 1970s. Larger and larger experiments have been built that observe more and more protons for longer and longer periods of time, ever pushing up the lower limit of the proton lifetime, as there has been nothing but a continuous stream of null results. As the range of predictions is so large, no definitive experiment has yet been done that can once and for all verify or rule out proton decay as a true aspect of nature. If there were a single clear cut theory with a precise prediction, allowing for a realistic experiment to be run and evaluated unambiguously, then there would be no issue. But that’s not the case.


There are numerous proton decay channels in GUTs, but two in particular stand out amongst the crowd as being the most likely, one for non-Susy GUTs and one for Susy GUTs. It should be noted that uncertainties on these kinds of prediction are easily an order of magnitude, at least, but even so the best estimates for non-minimal SU(5) models (i.e. those extended beyond the Georgi-Glashow model) are approximately [8]:
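The two channels in question are p → e^+ + π^0, the favoured mode in non-Susy GUTs, and p → K^+ + \bar{ν}, the favoured mode in Susy GUTs (the latter is examined in detail in the next section), with best-estimate lifetimes broadly within the 10^{32}-10^{35} year range quoted earlier.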



It can be calculated that if the proton does decay according to these sorts of lifetimes, then about 10^{20} protons should do so every year within a star the size of our sun [31]. Despite being a large number, this works out as only around a milligram of substance, which pales in significance to the annual mass loss via fusion within the sun, and which itself is insignificant compared to the mass of the sun as a whole. There’s no way that proton decay could be perceived in any stellar measurements we can make.


If around 10^{20} protons decay within a mass equivalent to the sun in one year, then one proton should decay in the same period, on average, within a mass 10^{20} times smaller. This brings things down to terrestrial scales, thankfully. To have a large enough sample of protons to be able to evaluate in a reasonable time frame, on the order of years rather than decades, requires the largest volume of the most dense substance that is abundant and practical to use. Although there have been solid state experiments, such as the Soudan experiment that watched over 770 tons of substance that was 85% iron, which was interspersed with gas ionisation detectors to seek proton decay products [32], the vast majority of experiments are instead liquid based. The most notable of these has been based near the Japanese city of Kamioka since initial construction ended in 1983, and is called the "Kamioka Nucleon Decay Experiment", or "Kamiokande" [33, 34].
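Turned around, this sets the scale of detector needed: to expect roughly one decay per year, the number of protons monitored must be comparable to the lifetime expressed in years. A minimal Python sketch for a water-based detector, where the assumed lifetime of 10^{34} years is an illustrative value taken from the range of predictions above:

AVOGADRO = 6.022e23
PROTONS_PER_WATER_MOLECULE = 10   # H2O: 8 protons in the oxygen nucleus + 2 hydrogen nuclei
WATER_MOLAR_MASS_G = 18.0

def water_tonnes_for_one_decay_per_year(tau_years):
    """Mass of water (in tonnes) holding roughly tau_years protons."""
    protons_needed = tau_years                        # ~1 decay/year needs ~tau protons
    molecules = protons_needed / PROTONS_PER_WATER_MOLECULE
    grams = molecules / AVOGADRO * WATER_MOLAR_MASS_G
    return grams / 1e6                                # grams -> tonnes

print(water_tonnes_for_one_decay_per_year(1e34))  # roughly 30,000 tonnes of water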


The original Kamiokande experiment operated between 1983 and 1996, a relatively simple design of a ~3,000m^3 container filled with purified water and surrounded by photomultiplier tubes (PMTs), which together form a type of Čerenkov detector [33, 34]. In such a machine, if a proton decays within one of the water molecules, the decay products would result in tell-tale Čerenkov emission that would be picked up by the surrounding PMTs. Čerenkov emission is given if a particle travels faster than the speed of light in the medium, which is the vacuum speed reduced by a factor of the refractive index of the medium [35]. Every particle type therefore has a Čerenkov momentum threshold in a given substance; for the kaon discussed below, the threshold in water is 560MeV/c [26]. The Čerenkov emission travels through the substance as a well-directed shockwave, fanning out in a cone about the instigating particle’s direction of motion, and strikes the wall of the containing vessel in a ring shape [35].
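The threshold follows from requiring the particle’s speed to exceed c/n, which in momentum terms means p > mc/√(n^2 − 1), so heavier particles need more momentum before they radiate. A minimal Python sketch for water, where n ≈ 1.33 is a typical value assumed purely for illustration and the masses are in MeV/c^2:

import math

# Cherenkov threshold: light is emitted only if the particle's speed exceeds c/n,
# i.e. beta > 1/n, which in momentum terms means p > m / sqrt(n^2 - 1).

N_WATER = 1.33
MASSES_MEV = {"electron": 0.511, "muon": 105.7, "pion+": 139.6, "kaon+": 493.7}

def threshold_momentum_mev(mass_mev, n=N_WATER):
    return mass_mev / math.sqrt(n**2 - 1)

for name, mass in MASSES_MEV.items():
    print(name, round(threshold_momentum_mev(mass), 1), "MeV/c")
# the kaon+ threshold comes out at roughly 560 MeV/c, matching the figure quoted above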


Figure 3: K^+ Čerenkov signals [32]. Left (a): First decay channel; Right (b): Second decay channel.

Furthermore, different types of instigating particle give different types of ring pattern, as electromagnetic showers are also produced to a greater or lesser extent depending on the type of particle and process. Rings that are more fuzzy are given by events with more showering involved, and are classed as "electron-like" due to often being given by electrons, while rings that are more sharp are given by events with little or no showering, and are classed as "muon-like" due to often being given by muons [26, 32]. By the timing of the signals in the various PMTs, it is currently possible to determine the starting location of each ring to ~30cm accuracy, allowing for a fully three-dimensional reconstruction of the decay product trajectories [26]. Different possible types of proton decay have different calculable ring patterns, distinguished by the number and kinematics of their decay products and the associated ring types, which provides a means of event identification.


Consider the Susy-favoured p → K^+ + \bar{ν} channel as an example. The neutrino has virtually no chance of undergoing a weak interaction within the detector, so is unseen. The K^+ itself would travel at a maximum of ~57% the speed of light [36], which actually puts it below the Čerenkov momentum threshold, but it would decay to lighter objects that travel much faster, which would be above the threshold [26]. It is these secondary decay products that can be used to infer the presence of the K^+ and, if the energy is acceptable and a number of other conditions are met, provide a candidate proton decay signal. The two most prominent K^+ decays are K^+ → μ^+ + ν (branching ratio of 63.5% [32]), and K^+ → π^+ + π^0 (branching ratio of 20.7% [32]), with the π^0 itself promptly decaying to a pair of photons that start electromagnetic showers [26].
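The kaon’s speed follows from simple two-body kinematics: with the decaying proton at rest, the K^+ and the antineutrino emerge back to back with a momentum fixed entirely by the masses. A minimal Python sketch, with masses in MeV/c^2 and the antineutrino treated as massless:

import math

# Two-body kinematics for p -> K+ + nubar with the proton at rest:
# the K+ momentum is (m_p^2 - m_K^2) / (2 m_p) for a massless neutrino.

M_PROTON = 938.3  # MeV/c^2
M_KAON = 493.7    # MeV/c^2

p_kaon = (M_PROTON**2 - M_KAON**2) / (2 * M_PROTON)
e_kaon = math.hypot(p_kaon, M_KAON)   # total energy of the K+
beta = p_kaon / e_kaon                # speed as a fraction of c

print(p_kaon)  # ~340 MeV/c, below the ~560 MeV/c water threshold
print(beta)    # ~0.57, the ~57% of light speed quoted above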


As with any delicate and complex experimental setup, things aren’t perfect with this technique. About 80% of water protons are in the oxygen nuclei and the remaining 20% are in the hydrogen nuclei, which means a much higher chance of an oxygen proton decaying rather than a hydrogen proton. This in turn means a higher chance of the decay products being absorbed in the larger oxygen nucleus, or scattered in such a way as to alter the pure decay kinematics, making the decay signal difficult or even impossible to identify [18, 26]. It is estimated that about 10% of proton decays would be affected like this [26]. And in addition to the problem of actual proton decays that could be lost in such a way, the balance is tipped further by events that actually mimic proton decay, triggered by external sources. For example, incoming solar neutrinos may interact with and cause the transmutation of a proton i