PRINCIPIA COSMOLOGICA

AUTHOR'S NOTE

What
follows is the first part of a book written in 2008 which summed up the
state of my research as of then. It is included here as part of the
historical record to illustrate the long gestation of the Malta
Template. While it is a considerable advance on the document of 1996, there are still areas which are woefully wrong. Rome wasn't built in a day. Nor was the Template.
It is interesting to note that a number of crucial findings were yet to come. In particular, I had not yet divined the true nature of energy, the true status of black holes, or why the Universe could not help but
expand at Moment Zero. However, a number of other crucial findings are
now well established: the greater size of the Universe at Moment
Zero, the early superluminal expansion, rejectivity, antimatter, and
the vergence velocity/escape velocity relationship.
Something
else worth noting is that I sent copies of these first seven
chapters to a large number of eminent cosmologists, physicists,
and astronomers and didn't get one reply. That is not in any way surprising. In hindsight it is plain to see that this is an ill-targeted document. Essentially it is a long and involved philosophical argument. Hard scientists don't do philosophy any more. Landing something like this on a scientist's desk would have had a very predictable response - "it is too long, I am too busy, it
will take far too long to read and if I do read it and think there is
something in there it is going to require a lot of work from me and
there is no way of predicting what the reward might be, thanks but no
thanks". Of course, Principia Cosmologica could have worked as a
book but finding a publisher even in 2008 was not easy - and I didn't
really have the interest to try. I suppose the book-reading public was
not the audience I was after.
PRINCIPIA COSMOLOGICA
Peter (Eddie) Winchester

for Dianne (1945 – 2006)

Principia Cosmologica - © 2008 – Peter (Eddie) Winchester

Peter (Eddie) Winchester, Sapphire, Triq il-Qortin, Mellieha MLH2504, Malta
00356 21523541
INDEX
INTRODUCTION
The
chapters that follow are about the current cosmological
knowledgebase. In them, that knowledgebase is parsed, evaluated, and
reconstructed using the techniques of "Organisation and Methods
Analysis". This introduction is in two parts: the first
describes the objectives and core procedures of Organisation and
Methods Analysis; the second explains how those core procedures have
been applied to the current cosmological knowledgebase.
Background · In praise of simplicity · Bottomup and Topdown · Why? · Mechanics · Scope · Definitions · The burden of proof
CHAPTER
ONE – THE PEBBLES OF DEMOCRITUS
So
far as we can tell, everything in the Universe is made out of
something else. Big things are made out of smaller things which are,
in turn, made out of yet smaller things. It may be that this is an
infinite cycle, that this continues without end, that inside
everything, no matter how small it might be, there is something even
smaller. However, there is a logic path which suggests that there is
something so small that there is nothing smaller out of which it
could be made. That something would be the fundamental particle, out
of numbers of which everything else is composed. This chapter
attempts to identify the fundamental particle and its properties.
Facts · How fundamental is fundamental? · The glass floor · Fotofit · Property One – Gravity · Property Two – Rejectivity · Subproperty One – Speed · Subproperty Two – Spin · Matter and Energy · Teels · The reality check
CHAPTER
TWO – MOMENT ZERO
This
chapter deals with the very beginning of the Universe, the time when
the Universe suddenly began to expand at an incredible speed. In the
Current Paradigm, the moment is known as The Big Bang but that label
holds unjustified connotations so from hereon it is called Moment
Zero.
Facts · Reversing into simplicity · How big? · The moment of change · The exercising of logic · It lives · The reality check
CHAPTER
THREE – THE PLANCK EPOCH
This
chapter deals with what happened in the Universe during the period
that stretches, in the Current Paradigm, from the moment of the Big
Bang to 10⁻⁴³ of a second after it. This period is known as the Planck Epoch.
Facts · The mechanics of expansion · Deceleration · The transfer of speed · Chaos · The four forces · Mass and density · Escape-velocity · The interlinking of measures · A greater universe? · The reality check
CHAPTER
FOUR – THE INFLATIONARY EPOCH
This
chapter deals with the Inflationary Epoch which, in the Current
Paradigm, ran from 10⁻³⁷ to 10⁻³³
of a second after the Big Bang. During that extremely brief moment,
the Universe suddenly expanded at a rate that was many times that of
the speed of light.
Facts · The Horizon Problem · Inflation · Expansion · Symbiosis · The reality check
CHAPTER
FIVE – THE DARK MATERIALS
This
chapter deals with two mysterious entities, darkenergy and
darkmatter. They are mysterious because, in the Current Paradigm,
nobody knows what they are. They have never been seen, felt or heard.
That they exist at all is guessed at because some objects in the far
distance behave in ways that can only be explained if vast quantities
of these dark materials are there too.
Facts · The structure of the Universe · Darkmatter · Darkenergy · Uniflux and Teelosphere · Teelospheric equilibrium · Multiprocesses · The greater universe · The reality check
CHAPTER
SIX – PHOTONS
This
chapter is about photons, which are the simplest of all the complex
particles and thus the easiest to create. Photons are hugely
important to us in that without them the human race could not exist.
Photons emitted by the Sun are our lifeforce. Photons, in their great
variety, are almost our only means of “seeing” the Universe about
us. It is photons which, directly and indirectly, provide the power
that runs our civilisation. However, considering how important they
are to us, we know remarkably little about them. By coming at them
“bottomup” this chapter will change that.
Facts · Theory · A new view · Bonding · Vergence · The Democratic Principle · Accretions · Protophotons · Photons described · Black holes · The reality check
CHAPTER
SEVEN – THE COSMIC BACKGROUND RADIATION
This
chapter is about the “Cosmic Background Radiation” – the CBR –
which is a bombardment of photons that comes at the Earth from every
direction at such a low energy level that it is barely detectable.
The bombardment is believed to date back to the “Recombination
Epoch” which took place 300,000 years after the Big Bang. If this
is so, it means that the CBR photons are the oldest objects in the
Universe today that we are currently capable of detecting.
Facts · Temperature · Blackbody · Isotropy · Colourshifting · Ether · Starting again · The early uniflux · Protophoton equilibration · Equilibration · The CBR blackbody curve · The CBR redshift · The CBR isotropy · The reality check
(THE
FOLLOWING CHAPTERS ARE NOT INCLUDED IN THIS EXTRACT)
CHAPTER
EIGHT – ELECTRONS
This
chapter is about electrons which are the second least complex
particle in the Universe. "Complex", though, can be a
relative term and electrons are of an order of complexity far beyond
that of the very simple photon. This is not, however, a complexity
that is recognised in the current standard model which sees electrons
as very simple indeed: as fundamental particles: indivisible and,
as long as they can avoid any destructive outside influences,
eternal. In practice, electrons have a sophisticated internal
structure which is strongly influenced by, and which in turn
influences, the Universe adjacent to them. In the present day
Universe, they are created as part of the same nucleon equilibration
processes that create modern photons. In the distant past, they were
created during the first equilibration attempts of the Universe
itself.
The rising complexity curve · Facts · Electrons in the early Universe · Quarks · Quark structure · Protoelectrons · Electrons described · Other possibilities · Charge · Antimatter · Antielectrons · Antielectrons in the early Universe · The strong force · The reality check
CHAPTER
NINE – NUCLEONS
This
chapter is about the building blocks of the matter universe. All the
nucleons in the Universe today were created in the very special
conditions prevailing a short while after Moment Zero. Since then,
none have been made and only a small number have been destroyed.
Compared with electrons, nucleons are huge and immensely complex.
They are also, in our current cosmology, misunderstood and
misclassified. Nucleons are with us either in neutron or proton form.
They will adopt one form or the other according to the circumstances
in which they find themselves. The processes involved in the decay or
undecay from one form to another are as sophisticated as one would
expect from such complex particles. They, nevertheless, abide by
well-proven laws of physics and are easy to understand.
Facts · The current picture · Protonucleons · The strong force, part two · Neutrons · An unstable particle · Protons · The matter universe · The antimatter universe · Charge, part two · Slow neutrons · Destroying a neutron · Destroying a proton · Equilibration · Photon and electron production · Creating a neutron · The reality check
CHAPTER
TEN – PROTOGALAXIES
This
chapter is about that stage in the life of the Universe when galaxies
first began to form. Galaxies are vastly more complex than the
particles out of which they are made. They are also vastly bigger.
Consequently, not only are there many more processes involved in
turning galaxies into the form we see today, those processes need a
lot more time to run their course. Many of those processes are still
not done.
Facts · The third skin · Protogalaxies · Protostar dumping · Quasars · Heartstars · The reality check
CHAPTER
ELEVEN – GALAXIES
This
chapter is about galaxies as we see them today. Looking out from
Planet Earth, we see galaxies in a bewildering variety of forms. For
centuries we have tried to make sense of those forms (the Hubble Classification, for example). However, it is impossible to understand them
without understanding their internal structure, something which has
not been seriously attempted until now. Notwithstanding their huge
disparity in sizes and forms, the internal structure of galaxies
turns out to be much the same from one to another.
Facts · Inside a heartstar · Inside a galaxy · Galactic teelospheres · Continuing development · The no-decay zone · The light-atom zone · The heavy-atom zone · The radioactive-atom zone · Irregular galaxies · Dwarf elliptical galaxies · Spiral galaxies · Large elliptical galaxies · Globular clusters · Continuing growth · The reality check
CHAPTER
TWELVE – SUPERGALAXIES
This
chapter is about what is beyond the galaxies. It is already known
that galaxies do not exist independently of each other: that they
are gravitationally bound into groups. Yet again, however, what is
not fully appreciated is that those groups have an "internal"
structure which dictates how they act and how they grow. We are
currently only a short way through the Universe's life cycle.
Consequently, the development path for galactic clusters has a long
way to run. Once the dynamics operating within galactic clusters are properly understood, it becomes quite obvious what they will become.
Facts · The ongoing process · Walls and bridges · Clusters · Superclusters · Supergalaxies · The supergalactic development path · Supergalactic teelospheres · The end result? · The reality check
CHAPTER
THIRTEEN – STARS AND PLANETS
This
chapter is about objects which are tremendously important to humans.
We live on a planet and we depend on a star for our continued
existence. Consequently, we exaggerate their importance in the grand
scheme of things. Actually, not only are stars and planets of minor consequence, they are very, very temporary.
Facts · Smoke and mirrors · Debris, detritus, and junk · Coming together again · Atmospheres · A star is born · Alternatively ..... · The gas theory · A certain future · Putting matters into perspective · The reality check
CHAPTER
FOURTEEN – ATOMS, FUSION AND FISSION
This
chapter, like the last one, deals with objects that are very
important to humans. Because we are made out of atoms, we are very
interested in the way they can be formed into extremely complex
organisms. However, such organisms are just a sideshow in the onward
progress of the Universe: a very minor sideshow, actually. On the
other hand, the way that atoms can join together and be split apart
is right there on the main stage.
Facts · Types of atoms · Molecules · Fraud · The proton-proton chain · Basic fusion · Consequences · Helium · Carbon – the stuff of life · Likelihoods · Other fusions, other halls · Fusion in stars – the specifics · The energy balance · The neutron heart · Crisis · Radioactivity · Fission · The end and the beginning · Supernovae · Neutron stars · And beyond · The reality check
CHAPTER
FIFTEEN – VISION IN THE UNIVERSE
This
chapter is about the way that we see the Universe from the vicinity
of Planet Earth. Many of the processes that have been discussed in
earlier chapters combine to obscure the true appearance of the
Universe. This only becomes plain when we understand what those
processes are.
Facts · The doors of perception · Factors: factor a – photon creation; factor b – the uniflux; factor c – gravity; factor d – the Doppler effect · Colour blinded · The rules of the game · Specific cases: case a – the cosmic background radiation; case b – nuclear photons; case c – quasar photons; case d – the centre and the edge of the Universe; case e – Andromeda photons · The reality check
CHAPTER
SIXTEEN – THE END OF THE UNIVERSE
Inevitably,
this chapter is speculative although that is not to say that it does
not have a sound factual foundation. By properly understanding how the Universe works today, the future of the Universe shows itself in plain sight. What will be, will be.
Facts · Our place in space · The hypergalaxy · Kings and Queens · Shrinkage and growth · The one and only Kingstar · Internal processes · Internal structures · A wider perspective · The end · The reality check
CHAPTER
SEVENTEEN – BEYOND THE END OF THE UNIVERSE
The
Universe is not a "special" place. It is ordinary. It works
by ordinary laws of physics which produce ordinary results. Those
same laws are producing the same ordinary results in many different
places. That makes the Universe just one cog in a very big machine.
Facts · Decay and stability · One among millions · Ever onward · The reality check
GLOSSARY
INTRODUCTION
The
chapters that follow are about the current cosmological
knowledgebase. In them, that knowledgebase is parsed, evaluated, and
reconstructed using the techniques of "Organisation and Methods
Analysis". This introduction is in two parts: the first
describes the objectives and core procedures of Organisation and
Methods Analysis; the second explains how those core procedures have been applied to the current cosmological knowledgebase.
BACKGROUND
I
am a management consultant whose specialism is “Organisation and
Methods Analysis” or "O&M". An O&M analyst is used to review any kind of establishment - a business, an industry, a government department, an air force, a golf club, a kitchen, almost anything - and see whether there are ways in which that establishment
can be improved. The improvement can take many forms. The most
obvious are greater efficiency and economy but it can also be
increased customer satisfaction, greater employee loyalty, better
health levels, better ecological performance, and so on. It all comes
down to what the client wants. It is entirely possible, of course,
for a layman to carry out a review and get good results. However, a
skilled O&M analyst will bring dimensions to a review that no
layman can. Employing an O&M analyst almost always justifies the
extra cost.
Purely
as a hobby, I have been reviewing the current cosmological
knowledgebase and subjecting it to some of the methodologies of O&M
analysis. It has been a long haul in that I began the task in 1988.
It has taken that long, partly because it is a spare time occupation
and partly because the knowledgebase covers so much ground with which
I was previously unfamiliar. As to why I have been doing it: in 1988
I became aware that the then current cosmological knowledgebase was
(and remains) defective in ways that would cause any commercial
enterprise to rapidly founder.
I am not a scientist, nor have I any wish to be one. Consequently,
there is no "new" science in these pages. All the science
here has been done by others. In a similar vein, I make no claims
that the revised knowledgebase presented here is the definitive
article. Considering the resources at my disposal, it will be a
miracle if even half of what is written here is right. However, what
I do claim is that what is written here is a lot more right than the
cosmological knowledgebase I began with.
IN
PRAISE OF SIMPLICITY
The
procedures used by O&M Analysts come larded with jargon and
technospeak but underneath all the glitz, the procedures are all
directed towards just three very basic objectives:
1 – to simplify.
2 – to simplify some more.
3 – to simplify a lot more.
Never
underestimate the value of simplicity. Simplify something and it will
work better – I guarantee that. I also guarantee that any
establishment, be it a business, a hospital, a military regiment, a
government department, any establishment that has been up and running
for more than a few days, will be more complex than it needs to be.
It is a fact of life. On Day One, an establishment can have a simple
structure, simple procedures, simple rules, and probably only as many
staff as it needs, but by the beginning of the second day complexity
will already be setting in. Over time, and left to their own devices
by a weak management, subsections of an establishment will begin to
use increases in complexity to serve their own needs rather than the
needs of the establishment. If this goes on long enough, the
subsections can lose sight of, and possibly all interest in, the
objectives of the establishment.
Imagine,
if you will, that the management of an ailing business has called in
a management consultancy to conduct a review and suggest ways that it
might be returned to profitability. In such a situation, it is the
procedures of O&M Analysis that are best for identifying what is
wrong and finding ways to put things right. The first thing the O&M
analyst must do is gather information. The analyst needs to know how
the business operates, who does the work, who are the personalities
with the greatest influence, what are the economics of the business,
who are the competitors, how the competitors tackle the same tasks,
and so on. By the time all the information is safely gathered in, the
analyst should know more about the business than anyone else, living
or dead. However, impressive though all this information gathering
may be, it is just a prelude to the real job. The real job can be
divided into four phases.
-
PHASE
ONE: on
paper, the analyst breaks down the business into its component
parts. Absolutely everything is noted: all the facts, all the
procedures, all the objectives, the attitudes, and so on.
-
PHASE
TWO: the
analyst puts each of the component parts onto one or other of two
lists. One list is headed “essentials” and the other is headed
“non-essentials”. Laymen, when looking at the results of this
phase, are always amazed at how much of any business will end up on
the non-essentials list.
-
PHASE THREE: the analyst reassembles the essential components, on paper, into the simplest structure that will still deliver the establishment’s objectives.
-
PHASE FOUR: now for a test run – on paper of course – to see whether the new-look business will run. If it does, the analyst writes out the report, writes out the invoice, and goes home. If it doesn’t – and it often doesn’t – the analyst starts all over again, very grateful that the test run was only on paper.
Much
of this work is mechanical. It is little more than working through a
tick list, and would not seem to justify the analyst’s large fee.
The size of the fee, however, reflects the possibility of something
going wrong at phase four. That is when the consultant has to
demonstrate real talent.
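For readers who prefer pseudocode to prose, the four phases reduce to a simple loop. What follows is a purely illustrative Python sketch of the procedure just described; every name in it ("classify", "reassemble", "paper_test_run", "revise") is a hypothetical stand-in for the analyst's judgement, not part of any real O&M toolkit.

```python
# Illustrative sketch only: each function stands in for analyst judgement.

def review(components, classify, reassemble, paper_test_run, revise):
    """Run the four-phase bottomup review until a workable model emerges."""
    # Phase one has already happened: the business arrives broken down
    # into its component parts ("components").
    essentials, non_essentials = classify(components)    # phase two
    while True:
        model = reassemble(essentials)                   # phase three
        if paper_test_run(model):                        # phase four
            return model   # write the report, write the invoice, go home
        # The test run failed, as it often does: start again, very
        # grateful that the failure was only on paper.
        essentials, non_essentials = revise(essentials, non_essentials)
```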
BOTTOMUP
AND TOPDOWN
Within
the profession, there is a colloquial name for the above procedure.
It is called “bottomup thinking” and what makes it “bottomup”
is that the drawing of conclusions and the making of decisions is
delayed until the last possible moment. Unsurprisingly, the opposite
of bottomup thinking is “topdown thinking”. In topdown thinking,
conclusions are drawn as quickly as possible from the information to
hand and, while no O&M analyst would ever use it during a review,
people in everyday life rarely do anything else. They can’t help
themselves. The topdown approach to decision making is natural and it
is instinctive. Our brains are hard-wired to make quick decisions and
the most successful of us are always better at making quick decisions
than the rest.
Because
topdown thinking is natural, it can take years for a management
consultant to slip into automatically thinking the other way round.
In their first days on the job, the amount of self-discipline needed
to counter the hard-wiring can be considerable. The urge to wrap up a
review because the solutions have been blindingly obvious from day
one can gnaw at you like the hunger pangs that rack you seven days
into your diet. It is only later, as you see the benefits that come
from steadfastly following the procedure, that the urge to act this
very instant fades. Like a good wine, a good management consultant
improves with age.
MECHANICS
In
the normal run of things, O&M analyses are performed on some sort
of establishment or organisation. Thus, while doing a bottomup
analysis of the operation of a library would be well within the
scope, doing such an analysis on one of the library’s books would
raise a few eyebrows. However, that is not to say it can't be done.
O&M methodologies are as easily applied to an unsatisfactory
knowledgebase as they are to an ailing car factory. It doesn't happen
very often but that is more down to a lack of clients wanting it done
than to any intrinsic difficulty.
The
way this analysis has been conducted is much as in the illustration
given earlier. First the knowledgebase was subdivided into manageable
segments. The segments followed the same order as the
currently-accepted cosmological timeline, beginning with the first
moments of the universe and working forward towards the present day.
In the pages that follow, each segment occupies one chapter. Within
each segment, the available knowledge was gathered, parsed, examined,
evaluated, and finally kept or discarded. The bits that were kept
were either facts or were fact-based. The bits that were not kept
were the unproved theories, the bright ideas, the unsubstantiated
facts, and the extrapolations that went too far. Finally, the kept
bits were reassembled in the most logical way and then related to the
real universe. True to form, not every such reassembly worked the
first time round. Sometimes it was just certain aspects that were
wrong, aspects that could be fixed with a little tweaking. Sometimes
the whole thing clearly bore no relationship to the way things really
are and it was necessary to start again from scratch. Either way
there is now, at the end of each chapter, a model that works and
which is an improvement on that which is currently believed.
Each
chapter follows this pattern exactly and can, therefore, stand alone.
However, since each is only part of a much larger whole, no chapter
can be correct if it does not mesh perfectly with every other
chapter. The final chapter, therefore, is an overview of all the
preceding chapters, treating them as a single self-contained system
which can then be compared with the real universe we see all about
us.
SCOPE
The
coverage of this analysis goes far beyond what is currently
considered to be “cosmology”. Partly, this is because the current
boundaries have more to do with history than logic and partly it is
because there is confusion over what is, and what is not, cosmology.
Read a dozen different dictionaries and you will find a dozen
different definitions of cosmology. All mention it as being a study
of the origin of the Universe and some mention the study of the
Universe's large-scale structure. However, almost without exception,
the dictionaries define cosmology as being a branch of some other
discipline but without there being any consistency in this: I have seen it labelled as a sub-discipline of, variously, astronomy, astrophysics, cosmogony, metaphysics, philosophy, and physics, among
others.
The
probable reason for this is that while cosmology is an old word, its
modern meaning(s) only came into play in the second half of the last
century. The disciplines into which the dictionaries place the
subject are all far more mature, sometimes predating it by many
centuries. Another factor may be that most of those who are today
regarded as cosmologists originally trained in, and first worked in,
one of the older disciplines – and often still tend to talk of
themselves as (say) an astronomer or a physicist first and a
cosmologist second. For simplicity’s sake and to end all confusion,
so far as this analysis is concerned, cosmology is as follows:
-
COSMOLOGY: cosmology is the study of the Universe and of everything that the Universe contains.
By
that definition, cosmology becomes the study of everything with all
the other scientific and philosophical disciplines becoming
“sub-cosmological”. This is not to say that we should abandon
pragmatism. While ichthyology or vulcanology or embryology are
“technically” sub-cosmological, there is little value in
labelling them as such. On the other hand, subjects like astronomy,
chemistry, cosmogony, and physics all sit squarely under the
cosmology banner.
This
recasting of the word has practical benefits. The study of anything
covered in this review will be called cosmology unless there is good
reason to be more specific. Where it is appropriate or necessary or
relevant to refer to an individual as a high energy particle
physicist, or a radio-astronomer, or a stellar chemist, or some such,
I will do so. If it isn’t appropriate or necessary or relevant,
they will all be lumped into the handy generic of “cosmologist”.
DEFINITIONS
While
no new science results from this review, recasting the existing
knowledgebase into a new form has pressed the need for a number of
new words and terms, or for the revision of some existing words and
terms (as the word “cosmology” was redefined above). In each
case, the new definition is clearly differentiated within the text
and is then included in the glossary to be found after the last
chapter. Good examples of this (and because this is the appropriate place for them to be defined) are two terms that will be used frequently in the coming chapters: “Current Paradigm” and “New
Cosmology”.
The
term Current Paradigm is reasonably self-explanatory. It is that mix
of facts and ideas which is believed, at the moment, to provide the
best picture of the origin and development of the Universe. The
Current Paradigm is backboned by a number of standard models: the
Big Bang Standard Model, the Particle Standard Model, the Big Bang
Nucleosynthesis Model, the Stellar Nucleosynthesis Model, and so on.
The
New Cosmology is a blanket term for the findings of this review. It
is a replacement for the Current Paradigm. It is the Current
Paradigm, taken apart and rebuilt minus all the silly stuff. It is
the Current Paradigm recast so that all its disparate parts fit
together comfortably and seamlessly.
Of
course, the lessons of history tell us that, no matter how much of an
improvement the New Cosmology might be over the Current Paradigm, it
will soon be found to be wanting. And quite right too. That is how we
progress.
THE
BURDEN OF PROOF
At
the end of an assignment, O&M analysts present a written report
to the client that lays out what has been found, what improvements
are possible, and what changes are recommended. Such reports are
written by a professional for a professional and consequently tend to
be brief, brisk and to the point – not least because, if the job
has been properly done, the client will already know exactly what the
report says.
This
“report”, however, is far from brief. The amount of study
required to compile it has been prodigious and my memory is nothing
like as retentive as it was when I was twenty years old.
Consequently, this is not just a report, it is also my aide-memoire,
my reference book, and my study log. What this means is that, while
it may be more readable than would be a conventional analyst's
report, professionals could well find it repetitive and
longwinded. I'm sorry about that but "needs must when the devil
drives".
Repetitive
and longwinded it may be but it is still an O&M analyst's report
and like any such report it has its recommendations. These, as
always, are placed at the end of the report which means that, since
this is an extract, they are not included here. However, for your
edification, the principal recommendation is this:
the
advantages that will accrue from the adoption of the New Cosmology as
the leading framework for further cosmological research are
such that it would be economically foolhardy not to set in train the
simple tests necessary to confirm its likelihood.
Note
that the recommendation is not for the adoption of the New Cosmology,
merely that it should be tested. I have every faith in the report I
have written and believe that, while it may be faulty in detail, the
overall thrust is spot on. However, human history is littered with
people who, for cultural, social, or psychological reasons, have
believed in things that were patently untrue – so some kind of independent oversight is not just desirable, it is vitally necessary.
Why
would any practicing cosmologist want to get involved? Why should any
cosmologist feel that applying the necessary oversight to the New
Cosmology would be worth the effort? It is because the New Cosmology
explains so much that the Current Paradigm cannot explain.
The following list of the New Cosmology's achievements is far from
complete but it gives a flavour of what is to come:
-
the relativity and quantum theories are reconciled (or more accurately, by-passed) in that the New Cosmology is a single model embracing both the small and large aspects of the Universe
-
the mechanisms underlying e=mc² are explained
-
any need for an inflation theory is obviated
-
the
true nature of both dark matter and dark energy is explained along
with why the expansion of the Universe was once decelerating and is
now accelerating
-
the
internal structure of all particles, and the processes that make
them behave as they do, is described in detail
-
the
nature of matter and antimatter, and the differences between them,
are explained
-
the
nature of charge is identified along with the mechanisms underlying
electricity and magnetism
-
the
mechanics of atomic fusion and atomic fission are explained
-
the
origin and rapid demise of quasars is explained
-
the
underlying structure of globular clusters, galaxies, and galactic
clusters is laid out
-
the
processes that lead to supernovae and other stellar phenomena are
explained
-
and
so on
Given
that the New Cosmology explains so much that is otherwise
unexplained, and because it promises so much for the future, perhaps
the question shouldn't be "why would any cosmologist want to get
involved" so much as "how could any cosmologist justify
standing back". After all, didn't most cosmologists enter the field because they wanted to find out how the Universe worked, and even if the answer was eventually to come with a "not invented here" label attached, wasn't it the answer that was to be the important thing?
Of
course, if the appeal to our natural curiosity is not enough, there
is always the most persuasive argument of all: the economic
argument. It is entirely possible that the New Cosmology is a
worthless aberration, the product of a mind stretched too far. If
that is so, however, the cost of proving it will barely trouble the
petty cash: a mathematician, a pencil, and a spare week should tell
whether it is worth carrying on with. On the other hand, if the New
Cosmology is only marginally right, it will throw up so much in the
way of reward that the cost/benefit analysis will be as near to one hundred percent benefit and zero percent cost as makes no difference.
CHAPTER
ONE
THE
PEBBLES OF DEMOCRITUS
So
far as we can tell, everything in the Universe is made out of
something else. Big things are made out of smaller things which are,
in turn, made out of yet smaller things. It may be that this is
infinite, that it continues without end, that inside everything, no
matter how small it might be, there is something even smaller.
However, there is a logic path which suggests that there is something
so small that there is nothing smaller out of which it could be made.
That something would be the fundamental particle, out of numbers of
which everything else is composed. This chapter attempts to identify
the fundamental particle and its properties.
FACTS
For
many centuries now, the pattern of research into fundamental
particles has been resolutely topdown, revolving around the search
for ever-smaller particles. Molecules got divided into atoms which in
turn got divided into nucleons which in turn got divided into quarks.
Alongside the quarks we found electrons, muons, tauons, photons,
neutrinos – and we postulated gravitons and strings.
It
is a characteristic of topdown research that, as facts become fewer,
the boundaries of research become wider. Constraints fall away and
imaginations are able to run free. In such conditions, science
becomes either philosophy or science fiction. As of today, the
cutting edge in the search for fundamental particles is entirely
theoretical. Partly this is because scientific cutting edges will
always be like that but it is also because, in this particular field,
attempts at verifying the theories are proving extremely difficult
and extremely expensive. So let us establish the facts. Let us write
down what we know and disregard the rest.
That
atoms exist is proved beyond any doubt. Research into them has been
going on for a long time and we have been able to chart their nature
and their behaviour with considerable accuracy. We have even been
able to take rather fuzzy photographs of them. Our knowledge of what
atoms actually are is still limited but that is no good reason for
doubting their existence.
That
atoms are composed of nucleons and electrons is also beyond
reasonable doubt. The existence of these subatomic particles has been
known for many years and we have long been able to make use of their
unique characteristics.
That
nucleons are made out of quarks has been proved to the satisfaction
of most particle physicists. There are believed to be six types of
quark with each nucleon being composed of a trio, two of one type and
one of another. All the quarks have been sighted experimentally, although the sightings of some were at the extreme edge of what is possible, and the veracity of those sightings is trusted more by some than by others. The exact nature of quarks is unknown.
Within the atoms, anything beyond quarks and electrons is entirely
theoretical. Outside the atoms, there are six particles of a type
known as leptons. One of these is the electron which can exist either
as part of an atom or as a free-flying particle. The others are the
muon, the tauon, and three different types of neutrino. The existence
of all six leptons has been proved by a mix of experiment and
observation although the quality of the proof for each of them is not
equally strong. As with quarks, we have been able to chart the
behaviour of leptons but our knowledge of what they actually are is
lacking.
The
last fundamental is the photon. That photons exist is hardly in
doubt. Indeed, the day to day existence of human beings is entirely
dependent upon the existence of photons. They vary, one from another,
by measures such as their wavelength and their frequency. This is not
taken, though, as meaning that there are different types of photons
so much as that the measures of photons can be altered by the
circumstances in which they find themselves. Like the other
fundamental particles, we have charted the behaviour of photons with
considerable accuracy although, also like them, their true nature
remains a mystery to the point where there is uncertainty as to
whether photons are actually particles at all.
And
that is it. According to the Current Paradigm, the fundamental
particles are the quarks, the leptons, and the photon. Below and
beyond these particles, nothing has ever been conclusively proved.
HOW
FUNDAMENTAL IS FUNDAMENTAL?
In
particle physics, a fundamental particle is one that has no
substructure; one that is not made out of smaller particles. In the
Current Paradigm there are thirteen fundamental particles: the ones
listed above. However, it is possible to query the fundamentality of
these particles, especially if one reverts to an earlier definition
of fundamental particle: that of Democritus. In Democritus’ view,
fundamental particles should be eternal and indivisible. They should
be as pebbles or as grains of sand, all much of a muchness, with no one particle being any more important than any other.
Democritus’
idea was that there should be a single type of particle out of which
everything else would be made. As ideas go, this one was admirable
for its simplicity although, in the spirit of objective enquiry, it
has to be said that he cheated a little. He suggested that it would
be the different shapes of his fundamental particles that would allow
them to hang together to build bigger particles. That they might have
different shapes, of course, means that his particles were not “all
much of a muchness”. Nevertheless, he was so much more on the right
lines than almost everyone else of his time that the lapse is
forgiveable.
We,
of course, have thirteen fundamental particles, which fatally damages
any claim that they might be truly-fundamental. There can only be
thirteen fundamental particles if there is a means of maintaining
each type in a stable condition. This implies an internal mechanism.
Without such a mechanism, there is nothing to stop the properties of
each particle, mass, electric charge, and behaviour, from continually
changing.
If
the fundamental particles have an internal mechanism, there must be
some means of containing it and this implies the existence of an
internal structure. It may be a crude internal structure, it may be
something so unfamiliar that we can’t even recognise it as an
internal structure, but it will be there – and the possession of an
internal structure implies that the fundamental particles are made of
something else.
The
thirteen particles also stray from Democritus’ ideal in the matter
of eternality. Not one of them is truly eternal. Quarks, for example,
are only eternal if they remain bound inside nucleons. Break up a
nucleon and its three component quarks will promptly decay into
photons. So far as we know, there is no such thing as a free quark.
Two
of the six leptons can be dismissed for much the same reason. Muons
and tauons have a specific mass, charge, and spin but they don’t
keep those measures for long. Only moments after their creation, they
decay into something else. A muon lasts for just 2.2 microseconds. A tauon lasts far less: a fraction of a picosecond.
The
case against the remaining leptons is more ambiguous. As long as they
remain in open space they are, so far as anyone knows, eternal. That
however is in open space. To be eternal, they must not collide with
another particle. If they do, they will be absorbed. Whether, after
being absorbed, they keep their form is unknown but the suspicion is
that they do not.
Finally,
there is the photon. Photons are emitted by atoms and by electrons.
Photons are also the final result of the decay of quarks, muons, or
tauons. In all other respects, photons behave like the latter four
leptons. They can be eternal in open space as long as they don’t
collide with another particle. If they do, they will be absorbed.
Again, it is not known whether they keep their form after absorption
but the suspicion is that they do not.
The
inevitable conclusion, then, is that the thirteen fundamental
particles are not truly-fundamental and that they are made out of
something else. Democritus would have wanted the something else to be
one single type of particle: "as pebbles or as grains of sand,
all much of a muchness, with no one particle being any more important
than any other".
THE
GLASS FLOOR
The
possibility that the thirteen fundamental particles are not truly
fundamental is currently exercising many minds – although the
thought patterns being employed are flawed by being resolutely
topdown. In topdown thinking, that which is known is extrapolated
into the unknown. Following this line, the predominant thought is
that the fundamental particles are somehow related and that in
different circumstances, a single entity might appear in one of
thirteen different forms. However, there is no supporting proof for
this line of thinking and, given our current research capabilities,
none is likely to be forthcoming.
Topdown
thinking, when it is unhindered by facts, can produce a condition
known to O&M analysts as “the glass floor”. Particle physics
is in that condition today. The less that can be proved, the fewer
constraints there are on the number of possible research avenues. As
time passes, the research becomes more and more fragmented, with
smaller and smaller groups exploring more and more ideas.
Extrapolations become more and more distant from known facts and
themselves provide the launching point for yet more extrapolations.
With no proof likely, all that can stop the extrapolations from going
on and on is a lack of will and an absence of money. Hence the phrase
“glass floor”.
The
currently most favoured extrapolations are the “string” theories.
In these, the fundamental particles are mathematical constructs known as
strings. These strings vibrate and it is the degree of that vibration
which dictates the form in which the particle will appear. True to
form, there seem to be almost as many varieties of string theory as
there are researchers. There are 26-dimensional bosonic string
theories, 10-dimensional superstring theories, 11-dimensional
M-theories, braneworld theories, supersymmetric gauge theories, and so
on.
Because
much of our cosmological research is on the outside edge of what is
possible, it has become customary to accept elegant mathematics as
being almost as valid as proof by experiment or observation. Given
that the chances of any of the string theories being proved
absolutely are very low, should one of them be promoted to become the
paradigm, it is likely that mathematical elegance will be the
deciding factor. However, recognising that a particular set of sums
looks pretty is not the same as knowing that the sums are true.
FOTOFIT
What
Democritus did two and a half thousand years ago wouldn’t be called
a “bottomup analysis” by modern O&M standards – not least
because he didn’t have any more facts to work on than do our modern
string theorists. Nevertheless, what he did was a textbook piece of
bottomup thinking. He started at the beginning and moved forward. He
started at the bottom and moved up. In his conception, all complexity
grew out of simple beginnings.
We
are in a better position than Democritus was. We have facts to work
on that he knew nothing of. He drew his fundamental particle out of
little more than his imagination and his sense of logic. We have the
product of over two thousand years of scientific research to help us.
But where should our bottomup hunt for the fundamental particle begin? There is no obvious candidate. If there were one, there would
be no string theories because they wouldn’t be necessary. The
answer is to try a different tack. Instead of trying to spot the
fundamental particle hiding in plain sight, we should take a leaf
from a modern police manual.
Policemen,
when trying to catch an unknown criminal, will often employ profiling
techniques. They will assemble what little is known about the
criminal and relate those few facts to what is known of the wider
world. In this way, they hope to build up a picture of the criminal
that will match up with a real person. So consider the logic of this:
every particle that we know of, without any exceptions, has a set of
properties that are uniquely its own: a mix of gravity, mass, spin,
charge, wavelength, and so on. However, these properties are not
common to all particles. Some are only found in one type of particle.
Others are found in only a few. There is only one property that is
found in all particles. Gravity. Let us have a look at gravity in
more detail.
PROPERTY
ONE - GRAVITY
Every object in the Universe attracts every other object with a force directed along the line of centres for the two objects that is proportional to the product of their masses and inversely proportional to the square of the separation between the two objects.
Isaac Newton
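In modern notation, with G standing for the gravitational constant, Newton's statement is the familiar inverse-square law:

```latex
F = G\,\frac{m_1 m_2}{r^2}
```

where m₁ and m₂ are the masses of the two objects and r is the separation between their centres.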
To
say the least, gravity is an extremely useful property. It is gravity
that keeps our feet on the ground. It is gravity that keeps planet
Earth in orbit around the Sun, the Sun in orbit around the galaxy,
and the galaxy in orbit around our galactic cluster. Gravity imposes
order on the Universe. Without it, anything that moved would do so in
a straight line and chaos would rule. Gravity is a very important
property indeed.
That
said, to human eyes, the true nature of gravity is not immediately
apparent. We think that it is the gravity of the Earth that is
stopping us from drifting upwards into space and suffering a gruesome
death from asphyxiation but it is not that simple. The Earth is made
out of billions and billions and billions of tiny particles which are
held together in the shape of a planet by their mutual gravity. It is
actually the combined gravity of all those particles that is holding
us down. In the same way, the Sun, the galaxies, and indeed the
entire Universe, are all held together by the gravity of the
extremely tiny things they are made out of.
Gravity
is a remarkably consistent force and we are able to measure its
effects with some precision. However, it is one thing to measure
gravity. It is something else entirely to know what gravity is. We
don’t know what it is. All we know is that it exists. Isaac Newton
determined the mechanics of gravity, treating it as “a force at a
distance”. He didn’t actually believe that such a “force”
could exist, regarding it as illogical, but his mechanics reflected
exactly what happened. Later, Albert Einstein revised some of
Newton’s figures in his General Relativity, treating gravity as “a
property of space itself” and this also seemed to work although
there is no more logic here than there was in Newton’s treatment.
One
day, perhaps, someone will be able to prove the matter one way or the
other; that gravity is a force at a distance or that it is a
property of space itself. Or they might prove that it is something
else entirely. Or we may never know what gravity is. Be that as it
may, we are currently aware that it is possessed by all known
particles so, logically, it should also be a property of our
unidentified truly-fundamental particle.
-
GRAVITY: every truly-fundamental particle attracts every other truly-fundamental particle with a force proportional to the product of their masses and inversely proportional to the square of the separation between them.
That definition, of course, incorporates Newton’s definition. In part,
this is because I cannot better it but it is also here to distance us
from the refinements found in the General Theory of Relativity –
and especially to distance us from those philosophical pictures of
metal balls and rubber sheets. Right now, it is simplicity we seek.
PROPERTY
TWO - REJECTIVITY
There
is another property that all particles have – although it is not a
property in the conventionally accepted sense of the word. Nor is it
something that figures in the Current Paradigm, at least, not in this
form. It is an important property though, one that can be thought of
as almost, but not exactly, the opposite of gravity. It is called
“rejectivity” because that describes exactly what it does. It is
summed up as follows:
-
REJECTIVITY: no two objects can occupy the same space at the same time.
Rejectivity is not a new idea. In one form or another, it has been cropping up
regularly for centuries. Probably the best known example of
rejectivity is the Cosmological Constant. This is the mathematical
device that Einstein inserted into his General Theory of Relativity
to stop the Universe from expanding or contracting and thus allowing
the Universe to conform to the paradigm of the day and be eternal and
infinite.
Rejectivity
is also with us in a more practical form as the “Pauli Exclusion
Principle”. This is one of the bedrock laws of quantum mechanics.
It was originally conceived to define conditions during processes
involving electrons and states that:
in a closed system, no two electrons can occupy the same state.
Since
it was first written down, uses for the Pauli Exclusion Principle
have spread wider although nothing like as widely as they are to be
used in the New Cosmology. Rejectivity, unlike the Cosmological
Constant or the Pauli Exclusion Principle, applies to absolutely
everything.
Experiments
that prove the existence of rejectivity are simple and cheap. No two
people can stand on the same spot at the same time. Two ball bearings
cannot occupy the same piece of space at the same moment. Nor can two
atoms. Nor can two of anything. No two pieces of matter can occupy
the same spot at the same time and neither, it is reasonable to
presume, can two examples of the truly-fundamental particle.
Attributing
rejectivity to the truly-fundamental particle does have consequences.
Anything that possesses rejectivity must, by default, have
dimensions. For a place in space to be occupied, that place must have
height, width, depth, and duration. For something to occupy that
space, it must either have dimensions identical to that place in
space or it must have some means of preventing anything else from
occupying that place. In the absence of any clue as to which option
might be correct, we’ll keep things simple. We’ll adopt the first
option and assume that the truly-fundamental particle fills out the
space occupied by its own dimensions.
Accepting
that the truly-fundamental particle has dimensions opens up a
philosophical problem: if a particle has dimensions, it must have
the means to maintain those dimensions. For a parallel, think of a
roadway being kept in place by its foundations. Build a road without
foundations and it will rapidly disintegrate. If the particle has the
means to maintain its dimensions, to prevent its own disintegration,
that means is likely to be some kind of internal structure – and an
internal structure will be made out of something even more
fundamental than the truly-fundamental particle.
There
is no decent solution to this problem. We do not have enough
information to allow us to solve it. Actually, we have no information
at all. What we could do is conjecture ourselves a solution,
extrapolating even further downwards in size in the hope of reaching
a final truly-fundamental particle but we should not do that. That is
topdown thinking and this analysis will not employ topdown thinking.
What we’ll do instead is leave the solving of that particular
problem until the day comes when there is enough information about to
allow a “proper” bottomup analysis. In the meantime, we have to
start somewhere and in this analysis the starting point is that, as a
consequence of its rejectivity, the truly-fundamental particle has
dimensions.
SUBPROPERTY
ONE - SPEED
Giving
the truly-fundamental particle just two properties, gravity and
rejectivity, might not at first glance seem to take us very far but
don’t be fooled. The possession of these two properties has some
very real consequences in that they supply the particle with a pair
of “subproperties”. The first of these is “speed”.
Everything
in the Universe is moving although from a human perspective it
doesn’t always look like it. If you look about you at the furniture
in your house: your easy chair, your dining table, your dishwasher,
your exotic waterbed (with heat, wave, and vibrator functions), they
will not seem to be moving much but they are. The truth is that they
are all moving at a fair old lick.
The
furniture is part of planet Earth and planet Earth is rotating. At
the equator, the Earth’s surface is moving at 1600 kilometres an
hour. Then you must take into account that the Earth is orbiting the
Sun. It is doing that at over 100,000 kilometres an hour. And the
Earth is tagging along with the Sun as it orbits around the core of
the Milky Way galaxy at something like 850,000 kilometres per hour.
Then, the galaxy itself is moving around the galactic cluster. Making
measurements at that level is proving difficult – some current
estimates put the rate as being around 1 million kilometres an hour
but I have seen some as high as 2.5 million.
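The first two of those figures are easy to sanity-check from standard textbook values. A minimal sketch (the galactic and cluster figures are too uncertain to be worth computing this way):

```python
import math

earth_radius_km = 6378          # equatorial radius of the Earth
sidereal_day_h = 23.934         # one rotation, in hours
# Surface speed at the equator: circumference divided by rotation period.
print(2 * math.pi * earth_radius_km / sidereal_day_h)   # ~1670 km/h

orbit_radius_km = 149.6e6       # mean Earth-Sun distance
year_h = 365.25 * 24            # one orbit, in hours
# Orbital speed: circumference of the orbit divided by orbital period.
print(2 * math.pi * orbit_radius_km / year_h)           # ~107,000 km/h
```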
All
of which means that your exotic waterbed is not simply moving, it is moving with a number of different velocities, along a number of different vectors, at the same time – and some of those velocities are, by our Earthbound
motorway standards, extremely impressive. It is important, though, to
see all this movement for what it really is. While it is true that
the furniture in your house is moving, those pieces of furniture are
actually assemblages of particles: of atoms in the first instance but ultimately of quarks and electrons. At the most basic of all
levels, the moving is being done by the truly-fundamental particles
that the quarks and electrons are made of. Thus, every
truly-fundamental particle in the Universe possesses a quantity of
speed.
Speed
might appear to be a property in its own right but it isn’t. Speed
stems from gravity. Speed is a side-effect. It is a consequence.
Without gravity there would be no speed. Here is an explanatory
example.
-
Suppose
that the entire Universe is empty except for two particles. Two
identical particles. Two minute, perfectly spherical, and
truly-fundamental particles which are just hanging there in all that
empty space: stationary and exactly one billion kilometres apart.
These two particles, being truly-fundamental, possess only two
properties, gravity and rejectivity.
-
Since
there is nothing else in the Universe to exert any influence on the
two particles and mask the effects of their mutual gravity, even at
this great distance they will begin to draw themselves towards each
other. At a billion kilometres apart, their effect on each other is
going to be well-nigh imperceptible so their initial movement
towards each other is going to be painfully slow. As they get
closer, however, the grip of their mutual gravity will grow stronger
and they will accelerate. Eventually, they’ll be rushing towards
each other at a tremendous rate.
-
In
due course, the two particles will crash into each other and at this
moment their rejectivity will come into play. Since neither particle
can occupy the same area of space at the same time as the other, and
since they have no means of dissipating their speed, they must
bounce off each other and retreat. Their retreat from each other is
conditioned by their mutual gravity in exactly the same way that
their advance was. Therefore their retreat is a mirror image of
their advance and will deliver them back to their exact starting
points. This yo-yo dance will then continue – in, out, in, out,
in, out, in, out, and so on. Unless something steps in to stop it,
this yo-yo dance will carry on forever.
The
point of this example is to demonstrate that speed, movement,
velocity, call it what you will, is a side-effect of gravity. Without
gravity, those two stationary particles would remain stationary in
space for all eternity.
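The yo-yo dance is simple enough to simulate in a few lines. The sketch below is an assumption-laden illustration rather than anything drawn from the text: it uses arbitrary units, treats the particles as equal point masses of small but finite radius, and models the rejective bounce as a perfectly elastic exchange of speeds.

```python
# Toy one-dimensional simulation of the two-particle yo-yo dance.
# Arbitrary units (G = 1, mass = 1); all modelling choices are assumptions.

G, m, dt = 1.0, 1.0, 0.001
x1, x2 = -5.0, 5.0         # start at rest, ten units apart
v1, v2 = 0.0, 0.0
radius = 0.1               # finite size: the scale at which rejectivity acts

for step in range(100_000):
    r = x2 - x1
    a = G * m / r**2       # mutual gravity accelerates each toward the other
    v1 += a * dt
    v2 -= a * dt
    x1 += v1 * dt
    x2 += v2 * dt
    if x2 - x1 <= 2 * radius and v1 > v2:
        v1, v2 = v2, v1    # contact while approaching: rejectivity forbids
                           # overlap, so the identical particles exchange
                           # speeds and begin their mirror-image retreat

# With no outside influence the pair oscillates indefinitely: gravity
# supplies the speed, rejectivity supplies the bounce.
```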
Here’s
an interesting thing about speed: it comes in two forms. In this
example each particle possesses, at any particular moment, a mix of
“realspeed” and “potentialspeed”. At their greatest distances
from each other, the particles have a zero quantity of realspeed and
a quantity of potentialspeed that equates to the amount of realspeed
they will have at the moment of collision. At the moment of collision
the situation is exactly reversed. They now have zero potentialspeed
and a quantity of realspeed that equates to the potentialspeed they
have at their greatest distance from each other.
In
their nineteenth century studies of the nature of “energy”,
Kelvin used the term “kinetic energy” for the energy of motion
and Rankine used the term “potential energy” for the energy of
position. For consistency, perhaps the terms kineticspeed and potentialspeed should be used here. That I don’t is an accident of history – the concept had been deduced long before I got around to studying the historical background, and I had become so used to using “realspeed” that I couldn’t be bothered to change.
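The parallel can be made explicit. In conventional mechanics the two-particle system conserves total energy, and the realspeed/potentialspeed bookkeeping maps naturally onto its two terms. The correspondence below is my gloss on the chapter's terminology, not a formula taken from it:

```latex
\underbrace{\tfrac{1}{2}m v_1^{2} + \tfrac{1}{2}m v_2^{2}}_{\text{kinetic energy (realspeed)}}
\;-\;
\underbrace{G\,\frac{m^{2}}{r}}_{\text{potential energy (potentialspeed)}}
= \text{constant}
```

At the greatest separation the kinetic term is zero and the total is carried by the potential term; at the moment of collision the situation is exactly reversed; the total itself never changes.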
-
COMPLEX
PARTICLE: A
complex particle is any particle that is an assembly of numbers of
the truly-fundamental particle, even if those truly-fundamental
particles are organised into subassemblies. By this definition,
photons are complex particles, as are quarks, atoms, stars, and
galaxies. The largest of all the complex particles is the Universe
itself.
Expanding
the two-particle example to encompass the whole Universe introduces
an intriguing thought. It is that every one of the truly-fundamental
particles in the Universe has a gravitational relationship with every
other truly-fundamental particle in the Universe. Between every
truly-fundamental particle pair there is a balance of realspeed and
potentialspeed. In practice, because the distance between most
particle pairs is enormous, there is a huge quantity of
potentialspeed and very little realspeed – and because the mutual
gravitational attraction between most pairs is thus negligible, the
chances of the balance ever changing much are small.
The
two-particle example also introduces a second intriguing thought. It
is that speed is conserved. Speed, by various means, can be
suppressed or subverted but it can never be destroyed or eradicated.
At any specific moment, the totalspeed possessed by the two yo-yoing
particles is exactly the same. Sometimes it is in the form of
potentialspeed and sometimes it is in the form of realspeed but the
totalspeed never changes.
Now
expand this thought to encompass the whole of the Universe. Since
every truly-fundamental particle in the Universe has a gravitational
relationship with every other truly-fundamental particle in the
Universe, every pair of truly-fundamental particles also has a
conserved totalspeed. Therefore, by implication, there is within the
Universe a finite quantity of totalspeed. Furthermore, if we presume
that the number of truly-fundamental particles in the Universe has
never changed, then nor has the Universe’s totalspeed. And nor will
it change unless there is a change in the number of truly-fundamental
particles.
While
the amount of totalspeed in the Universe might never change, it is
possible for the totalspeed possessed by a truly-fundamental particle
pair to alter. They can do this by transferring speed to another
pair. In the two-particle example, because there are no influences
other than their own gravity and rejectivity, the collisions are
perfectly aligned and, since the particles are perfectly spherical,
there is no transfer of speed from one to another. In the real
Universe however, where there are many particles and where each has a
gravitational influence on every other, collisions are never likely
to be perfectly aligned. Consequently there will be a transfer of
speed and thus the amount of totalspeed possessed by each particle
will change after each strike. Nor will it just be the totalspeed of
the particles that will have changed. Their vector will have changed
also. The nature of these changes and transfers is well understood and is the business of Collision Mechanics.
-
COLLISION
MECHANICS: Collision
Mechanics is underpinned by the notion that speed is conserved.
Immediately before two particles collide, each will possess a
specific quantity of speed which, added together, might come to a
notional speed quantity of 1.0. After the collision, and depending
on the circumstances, that speed quantity can be redistributed between the two particles, subject to the total quantity continuing to be
1.0. Similarly, Collision Mechanics conditions the post-collision
vectors of any pair of truly-fundamental particles. Because each
particle is identical and perfectly spherical, their post-collision
vectors are predictable by the use of Euclidean Geometry, Newton’s
Laws of Motion, etc.
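-
Here is a minimal sketch of that redistribution, assuming the standard equal-mass elastic collision rule for smooth spheres: the velocity components along the line joining the two centres are exchanged, the transverse components are kept. The conserved "speed quantity" is read as the kinetic energy of the pair, which a later section of this chapter identifies with energy; the notional 1.0 units and the 30-degree contact angle are purely illustrative.

import math

def collide(v1, v2, n):
    """Elastic collision of two identical spheres in 2D.
    v1, v2: (vx, vy) velocities; n: unit vector from centre 1 to centre 2."""
    u1 = v1[0] * n[0] + v1[1] * n[1]       # the components along the line of centres...
    u2 = v2[0] * n[0] + v2[1] * n[1]
    w1 = (v1[0] + (u2 - u1) * n[0], v1[1] + (u2 - u1) * n[1])   # ...are swapped,
    w2 = (v2[0] + (u1 - u2) * n[0], v2[1] + (u1 - u2) * n[1])   # transverse parts kept
    return w1, w2

def speed_quantity(*vels):
    """Kinetic energy of the set, standing in for the conserved 'speed'."""
    return sum(v[0] ** 2 + v[1] ** 2 for v in vels)

v1, v2 = (1.0, 0.0), (0.0, 0.0)            # one particle carries all 1.0 units of speed
n = (math.cos(math.radians(30)), math.sin(math.radians(30)))  # an off-centre, glancing strike
w1, w2 = collide(v1, v2, n)
print(w1, w2)                              # the speed is now shared between the pair
print(speed_quantity(v1, v2), speed_quantity(w1, w2))   # both print 1.0: speed is conserved
-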
Because every truly-fundamental particle in the Universe has a gravitational relationship with every other truly-fundamental particle in the Universe, any change in the speed and vector of one particle will affect its gravitational relationship with every other truly-fundamental particle. None of these changes can alter the total amount of speed possessed by the Universe, although the Universe’s mix of realspeed and potentialspeed can alter.
Will
it ever be possible to calculate exactly how much speed the Universe
possesses? Possibly, but it will be difficult. While realspeed is readily perceptible, potentialspeed is not. When we detect a
particle zipping past us, subject to our being able to gather all the
necessary information, it is relatively easy to calculate the amount
of its realspeed. However, calculating how much potentialspeed it has
would require a detailed knowledge of the particle’s history.
Similarly,
to calculate the amount of realspeed currently possessed by the
Universe is relatively easy. Calculating how much potentialspeed it
has, however, will require a lot more knowledge of its history than
we currently have.
SUBPROPERTY
TWO – SPIN
The
second subproperty is “spin”. This is the “rotation” of a
complex particle around its own axis. According to the Current
Paradigm, each of the thirteen fundamental particles spins. Almost
everything else spins too. Those that don’t are insubstantial objects called mesons, which are still assigned what is known as an “integral” spin.
What
is commonly thought of as spin is actually “intrinsic angular
momentum”. This is the spinning of a coin, or the Earth, the Sun or
the Milky Way galaxy. However, there is also “orbital angular
momentum”. This is the spin of the coin as it moves around the
Earth, the Earth as it moves around the Sun, the Sun as it moves
around the Milky Way, and so on.
Just
as with speed, spin can be a complex subproperty with a particle
spinning about a number of axes at the same time. The coin is
spinning in its own right but it is also part of the spin of the
Earth, of the Sun, and of the Milky Way. Looked at in this way,
everything is spinning, even objects that don’t at first glance
appear to be spinning, some small asteroids for example or the
toaster on the kitchen table. When considered as part of a larger
background, everything spins.
Does
the truly-fundamental particle spin? Yes and no. A truly-fundamental
particle that is part of any complex particle that is within the
Universe will inevitably have orbital spin. Intrinsic spin, though,
is a different matter. In the Current Paradigm, there are particles
which do not have intrinsic spin so we cannot say that the
truly-fundamental particle possesses it because everything else does.
Having said that, though, it doesn’t matter whether it spins. What
will happen in the coming chapters will happen anyway whether the
truly-fundamental particle does or doesn’t spin. Effectively then,
so far as the truly-fundamental particle is concerned, spin is a
“neutral” subproperty.
Like
speed, spin is a side-effect of the gravity of the truly-fundamental
particle. In a stable complex particle, the mutual gravity of its
truly-fundamental particles is sufficient to prevent their escape.
However, while they may not be able to escape, they still possess
considerable quantities of realspeed, realspeed which is conserved.
The truly-fundamental particles therefore have no choice but to
follow an orbital path. If all the particles are orbiting in the same
direction, the complex particle is spinning.
There
is a sound argument for not regarding spin as a separate subproperty
at all: for merely treating it as speed in another guise. That I
don’t do so in this analysis is because the consequences of spin in
complex particles, especially in very large ones, are such that it is
simpler to deal with it under a separate heading. That said, there
will be occasions when it is useful to be able to refer to a
particle’s forward motion and its spinning motion as a single
measure. This is because spin acts as a kind of speed depository. If
planet Earth was not spinning, for example, the speed of all the
particles it contains would drive the planet forward at a much
greater velocity. The two together are known as “spinspeed”.
Just
as realspeed can be converted to potentialspeed and back again, speed
can be converted to spin and back again. Here is a law relating
to this:
One
unit of spin can be converted by collision into one unit of speed. By
a further collision, it can be converted back into one unit of spin. Resulting
from a collision, one unit of spin or speed can be transformed into
any ratio of spin and speed, but the combined spin and speed can
never be more or less than the original unit. Hence
the equation:
1
unit of spin = 1 unit of speed.
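-
Expressed as a toy piece of bookkeeping (the 0.3/0.7 split below is purely illustrative), the law says that a collision may repartition a particle's unit between spin and speed in any ratio, but may never alter the sum:

def recollide(spin, speed, new_spin_fraction):
    """Repartition the conserved unit between spin and speed."""
    total = spin + speed                      # the conserved quantity
    return new_spin_fraction * total, (1.0 - new_spin_fraction) * total

spin, speed = 1.0, 0.0                        # one unit, held entirely as spin
spin, speed = recollide(spin, speed, 0.0)     # a collision converts it wholly to speed
spin, speed = recollide(spin, speed, 0.3)     # a further collision: any ratio is allowed...
assert abs(spin + speed - 1.0) < 1e-12        # ...but the unit itself never changes
-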
The
way that spin can act as a speed depository is tremendously important
to the structure of the Universe at all scales. As we will see in
coming chapters, one of the major processes underway in the
development of the Universe is the locking up of ever greater amounts
of speed as spin.
MATTER
AND ENERGY
The
truly-fundamental particle has been given two properties, gravity and
rejectivity, because every other known particle has them. Other than
that, we know nothing about the truly-fundamental particle bar that
it is an extremely insubstantial object. In the New Cosmology, it is
the least substantial object in the Universe, vastly less substantial
than the least substantial thing we have thus far managed to detect.
In truth, the truly-fundamental particle is so insubstantial that our
chances of ever finding a way to detect one directly, and thus
proving absolutely that the conjectures in this analysis are right,
are near to zero. Probably as close to zero as makes no difference.
Is there, then, any way that we can relate these truly-fundamental
particles to anything in the Universe that we already know? There
certainly is. All we need to do is take a look at some very basic
physics. According to the current wisdom, the content of the Universe
comes in two forms. There is matter and there is energy.
Let
us deal with matter first. Matter is the physical stuff of the
Universe. You and I are made of matter. Planet Earth is made of
matter. The Milky Way is made of matter. And so on. Instinctively,
most humans know the difference between matter and energy. Matter is
quantified by its mass. There are two types of mass. There is
“gravitational-mass”, which is the measure of an object’s
ability to attract other objects, and there is “inertial-mass”,
which is the measure of an object’s resistance to any change in its
motion or movement.
One
of the properties we have attributed to the truly-fundamental
particle is gravity. Because of this, all truly-fundamental particles
are attracted towards one another. Therefore, they have
gravitational-mass. This can then be extended to complex particles.
Because complex particles are made out of quantities of
truly-fundamental particles, they also have gravitational mass.
The
other property we attributed to the truly-fundamental particle is
rejectivity and from this stems inertial mass. If the
truly-fundamental particle had no rejectivity, its inertia would be
100%. Its resistance to any attempt to change its motion would be
100%. It would be impossible to move the particle by any means other
than gravitational attraction. You couldn’t push it. You couldn’t
hit it. You wouldn’t even be able to touch it. If you tried to push
it, whatever you were doing the pushing with would pass straight
through the particle because there would be no rejectivity to stop
it.
Because
our truly-fundamental particle has both gravitational and inertial
mass, it is clearly “matter”. Interestingly, however, it also
serves as a repository for energy in that energy is just another term
for speed – although the relationship between energy and speed is
not always as apparent as it might be. For instance, a battery is a
store of electrical energy that can be used to power a torch but
there doesn’t seem to be much speed on show there. The Sun provides
us with solar energy but again not much speed can be seen. We eat
food which gives us the means to get on with our lives but where is
the speed in that?
Actually,
the speed is there in all those examples. The key is heat.
Temperature has long been known to be connected with the speed at
which particles move. Take a look at a block of ice and then look at
a kettle of boiling water. In the block of ice, the water molecules
have been frozen into immobility. Their speed has been removed
so that they can be locked by gravity into a solid matrix.
Contrast this with the inside of the kettle where the water molecules
are furiously rushing this way and that. Some of them, indeed, are
moving so fast that they can break the bonds that tie them to their
neighbours and steam off into the atmosphere.
Water
molecules are an obvious example but the principle holds good, even
when the case is not so obvious. A piece of red-hot iron, for
instance, continues to look like a piece of solid metal even if it is
warm enough to burn the skin off your hand. If you look within it,
however, at the atoms it is made of, you will see that they are
behaving very differently to the way they did when the iron was cold.
They are agitated and the higher the temperature, the more agitated they become. If you pour enough heat/speed into the iron, the bonds will no longer be able to hold the atoms in their place and the metal will become molten. It will become a liquid in the same way that the heated ice turned to water. Heat the metal enough and it will vaporise into a stream of fast-moving iron particles.
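-
The connection between temperature and molecular speed can be put into numbers using standard kinetic theory, which is independent of the teel model: the root-mean-square speed of a molecule is sqrt(3kT/m). A sketch for water, using the standard physical constants:

import math

BOLTZMANN = 1.380649e-23                            # joules per kelvin
WATER_MOLECULE_MASS = 18.015e-3 / 6.02214076e23     # kilograms per molecule

def rms_speed(temperature_kelvin):
    """Root-mean-square molecular speed at the given temperature."""
    return math.sqrt(3 * BOLTZMANN * temperature_kelvin / WATER_MOLECULE_MASS)

print(rms_speed(273.15))   # ice point: roughly 615 metres per second
print(rms_speed(373.15))   # boiling point: roughly 720 metres per second
-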
Scientists
have put labels on many different types of energy. There is chemical
energy, gravitational energy, electrical energy, elastic energy,
kinetic energy, thermal energy, and so on. Each type of energy,
however, has its roots in the speed possessed by our
truly-fundamental particle. This is echoed in one of the principal
tenets of modern physics, the Law of
Conservation of Energy, which states
that:
energy
can never be created or destroyed, only
changed from one form to another.
This,
of course, is merely a restatement of our earlier contention that
speed is always conserved. Speed may be very apparent as realspeed,
or hidden as potentialspeed or spin. Speed can be changed from one
type of speed to another, it can be transferred from one particle to
another, but it can never be destroyed, eliminated, or eradicated.
Speed is forever.
TEELS
By
deduction, we now have a truly-fundamental particle which, although
it is hypothetical, is not out of place in the Universe as we
understand it. It has two properties. There is gravity which gives it
mass and there is rejectivity which gives it inertia. It also acts as
a repository for speed and is thus the Universe’s energy supply.
Our
brand-new particle is lacking in only one thing. It has no proper
name. I could carry on calling it the truly-fundamental particle but
that is a clumsy title that contains far too many letters for easy
typing. Ideally, it should have a name that reflects the fact that it
is the most important particle in the Universe. Unfortunately, for
nearly twenty years now, I have been calling it the “teel”. As
names go, “teel” has a weak sound but I’ve grown used to it and
am not going to start calling it something else now.
As
is often the way in such matters, I didn’t specifically choose the
name. Like Topsy, it just growed. It happened like this. Once I had
established the properties of the truly-fundamental particle, I gave
it a temporary name, something I could call it while writing things
down, all the while thinking I would give it something more
portentous later. I temporarily named it after a mathematician called
Tom Lehrer. However, the name Tom Lehrer had too many letters so I
soon shortened it to TL and from there it was only a short step to
calling it te-el and to writing it as teel. I never did get around to
finding that more portentous name so it has been “teel” ever
since.
Mr
Lehrer, by the way, has more than enough scientific credentials to
justify being honoured, albeit now in an indirect form. Those of you
unfamiliar with his work might like to check out his discovery that
the names of the chemical elements can be fitted to a tune by Sir
Arthur Sullivan. Or might admire his great generosity in allowing his
fellow mathematician, Nikolai Lobachevsky, to take all the credit for
turning the Vladivostok telephone directory into a Hollywood musical.
THE
REALITY CHECK
This
chapter has been concerned with identifying and describing the
truly-fundamental particle, out of numbers of which all the material
objects in the Universe are made: the teel. Teels are nothing more
than the embodiment of two properties, gravity and rejectivity. Out
of those two properties, two sub-properties naturally and inevitably
arise: speed and spin. The two properties and their two
sub-properties provide all the matter and all the energy of the
Universe.
In
the Current Paradigm, there are thirteen fundamental particles: six
quarks, six leptons and the photon. This chapter has argued that this
is twelve particles too many although, in comparing these with the
teel, we are not really comparing like with like. In the New
Cosmology, teels are the basic building blocks of the Universe, out
of which everything else is made. If we are to compare like with
like, the proper comparison for the teel model is with the
sub-thirteen particle models, the string theories.
There
are many different string theories with varying degrees of
compatibility with each other.
However, in at least one way, they are all the same. They are all
topdown deductions which have long-since dropped through the glass
floor. Consequently, there is no proof, observational or
experimental, in favour of any of them. Nor is there likely to be.
Not absolute proof, anyway. Because of this there is no consensus
among cosmologists that any one string theory is more right than any
other – and nor is there anything to prevent any one of them being
used as the basis for yet more topdown extrapolation. These are ideas
without a bottom line.
Inherent
in any bottomup model is the possibility of it being proved by
allowing it to develop naturally (and hopefully exclusively) into a
pre-existing condition. In the coming chapters, the teel model does
indeed develop into the Universe as we see it today. Does it do so
exclusively? Possibly and possibly not. It seems to me that it does –
but I am so close to this analysis that my judgement is almost
certainly clouded. I would like to think that my judgement isn't
clouded but human beings aren't perfect.
Directly,
there is no more proof that the teel exists than there is
for the strings. Both are nothing more than speculation. However,
there is some indirect proof in favour of the teel model in that it
provides simple explanations for things that are unexplained in the
Current Paradigm. For example, the teel model explains the underlying
difference between gravitational and inertial mass.
In
the Current Paradigm, gravitational mass and inertial mass are
regarded as conceptually different but, in practice,
indistinguishable from each other. Einstein founded his General
Theory of Relativity on the thought that someone in a falling lift
could never tell whether the movement was gravitational or inertial
in origin. Actually, gravitational and inertial mass are
indistinguishable from each other because they are measured at too
high a level for the underlying difference to be apparent. The
measurements are made at the visible matter level and consequently
gravitational and inertial mass appear to be the same thing. At the
level of teels, however, it can be seen that they stem from entirely
different causes. Gravitational mass is due to the teel’s gravity
while its inertial mass stems from its rejectivity.
Another
useful clarification arising from the teel model is the equivalence
of mass and energy. Relativity does not explain it because, again, it
considers the problem at too high a level. It uses the photon as the
“carrier” of energy without ever explaining what “energy”
actually is. It is only at the level of teels, the teels that photons
are made out of, that it becomes clear that energy is speed. At this
level, mass and energy are equivalent only in the sense that without
gravity there can be no speed.
Something
else speaks in favour of the teel model. Something that might be seen
as unscientific but which is nevertheless important. It is that the
teel model is simple. By contrast, the string theories are much less
so, to the point where they are incomprehensible to all but a small
percentage of the human population. Simplicity is the holy grail for
any O&M analyst. No O&M analyst, worthy of the title, will
choose a complex solution over a simple one without having extremely
good reasons for doing so.
And
one last plus point for the teel model is that it does not contravene
any established physical law: for example, the conservation of
mechanical energy or the first law of thermodynamics. If the same
could be said for the string theories, they might be more easily
understood by non-cosmologists.
To
sum up:
-
If the bottomup teel model is extrapolated forward to the present day, will it exclusively produce a universe like the one we see about us? Possibly. If the string theories are extrapolated forward in time, that is, in bottomup fashion, will they exclusively produce a universe like the one we see about us? No.
CHAPTER
TWO
MOMENT
ZERO
This
chapter deals with the very beginning of the Universe, the time when
the Universe suddenly began to expand at an incredible speed. In the
Current Paradigm, that moment is known as The Big Bang but that label
holds unjustified connotations so in the New Cosmology it is called
Moment Zero.
Until
the middle of the 1950s, the dominant concept was that the Universe
was eternal and infinite. That concept is now dead and buried and the
Big Bang Standard Model reigns supreme. The big bang concept grew out
of Einstein’s General Theory of Relativity. The Universe, as it is,
was extrapolated backwards in time until it became a “singularity”,
a region that was infinitely hot, infinitely dense, and in which
spacetime was infinitely curved. Guesses about how long ago this
happened have been yo-yoing back and forth since the 1930s but, as of
2007, it is generally agreed that the Universe began approximately
13.7 billion years ago.
The
New Cosmology’s ideas about the very early Universe are quite
different from those of the Current Paradigm. They both agree that
the Universe had a beginning, has a middle, and will have an ending.
They also agree that the Universe began small and grew bigger. The
difference is in the details.
FACTS
For
a pedant, this would be a very short section indeed for there are no
known facts about the Universe’s first moment. Everything is
guesswork.
To
be more exact, the fact-free period is a lot longer than just that
first moment. Going backwards in time, the last available fact is at
300,000 years after the Big Bang. This is the period known as the
Recombination Epoch during which the density of the Universe fell
sufficiently to allow photons to move without the certainty of being
absorbed by matter particles. The “fact” is that some of these
early photons still exist and are detectable by us as the Cosmic
Background Radiation.
Within
the fact-free first 300,000 years, there is a fair degree of
consensus as to what happened but inevitably the picture is drawn
with a very wide brush. Going backwards, the Universe gets
progressively denser and hotter, with particles progressively
breaking down into ever more fundamental units and the four forces
becoming a single superforce.
At 10^-43 of a second after the moment of the Big Bang, all sensible extrapolation stops dead. At 10^-43, the extrapolated size of the Universe has reduced to one “Planck length” and, extrapolated further back, would keep on getting smaller and smaller. It is
considered meaningless to deliberate on such tiny sizes. However,
that hasn’t stopped cosmologists from doing so but, since such
deliberations are being made in territory where the word “fact”
has lost all meaning, their value is debatable.
If
nothing is known of the Universe before 10^-43,
that means nothing is known of the Big Bang itself. Nobody knows for
certain why it happened or how it happened. So far as the Current
Paradigm is concerned, at 10^-43,
the moment when we first come across it, the Universe is already
expanding as fast as it can go.
Conventionally, there are four principal pieces of evidence which are considered to anchor the Big Bang in reality. However, they all date from long after the event. There is the aforementioned cosmic background radiation, there is the abundance of the primordial elements, the evolution of galaxies, and the distribution of quasars. It is as well, though, to be precise about what these four pieces of evidence really mean. If the evidence has been correctly interpreted, it means that the Universe was once smaller than it is today, perhaps very much smaller, and that is all. What it does not mean is that the Universe at 10^-43 was incredibly tiny – you will commonly find it quoted as having a diameter less than that of the nucleus of an atom, which is very small indeed. Most especially it does not mean that the Universe in its earliest moment was a singularity.
REVERSING INTO SIMPLICITY
It is a general rule in the Universe that the larger a particle is, the
is a general rule in the Universe that the larger a particle is, the
more complex its structure will be. The rising complexity always
follows a similar pattern with smaller particles becoming
sub-structures inside larger ones. Thus it is that we have quarks
made out of collections of teels, nucleons made out of collections of
quarks, atoms made out of collections of nucleons, stars made out of
collections of atoms, and galaxies made out of collections of stars.
However,
maintaining the structure of any complex particle, no matter how vast
and awe-inspiring it might be, still depends on the properties of the
teels out of which it is ultimately made. If the teels didn’t
possess their gravity, rejectivity, and speed, complex structures
couldn’t even form, let alone last. This is especially true of speed. Remove the speed from a complex particle and there is nothing to stop the mutual gravity of the teels pulling them together,
closer and closer, until their rejectivity prevents them from being
pulled in any more.
On
a lesser scale, there are examples of this happening in the Universe
today. When some stars reach the end of their lifetime, they explode,
expelling much of their matter and a much higher proportion of their
speed. What is left is a small body in which the lack of speed allows
its atoms to be crushed so closely together that all its protons have
undecayed into neutrons.
Neutron
stars are a wonderful example of what gravity can do to an object
when it doesn’t have enough speed to keep the gravity in check.
They really are extraordinarily tiny and hugely massive. Even though
they are barely a few tens of kilometres across, they can have a mass equal to that of our Sun. Imagine that: our Sun, currently almost one and a half million kilometres across, squashed down into something that would
fit into the Thames Estuary with space to spare.
It
has been suggested that in extreme cases, where an even higher
proportion of speed is lost during the explosion, even the neutrons
would be unable to hold their structure together and that they would
break up to create a star made of quarks. This would crush our Sun
into an area with barely the diameter of a respectable Ferris Wheel.
What
if we were now to take this process to its farthest extent? What
would happen if we were to think not of stars but of the entire
Universe? What if we were to remove not just some of its speed, as happens with neutron stars or quark stars, but all of it? Imagine a
Universe that is entirely matter and with not one jot of energy. What
you would end up with is as simple as it is inevitable. Every complex
particle in the Universe, every galaxy, every star, every proton,
neutron, and quark, would have its structure crushed out of it so
that the Universe would become nothing more than a collection of
teels.
While
all the speed in the Universe would be gone, all the gravity would
still be there of course with its strength not diminished by one
iota. Without the brake provided by speed, the mutual gravity of all
these billions of teels would pull them as closely together as their
rejectivity would allow: so close to each other that they could not
be pulled any closer. Such a Universe would be an incredibly dense
and geometrically perfect sphere.
HOW
BIG?
Picture
this. The Universe is a ball of teels, hanging motionless in space.
It is completely dead. There is not one trace of speed to give it the
slightest show of movement. All it has is its gravity to hold it
together and its rejectivity to give it form and size.
How
big is this Universe? The Big Bang Standard Model gives a diameter
for the Universe at 10^-43 of one Planck Length, far smaller than a proton. That size
however takes no account of rejectivity. Rejectivity gives this
Universe limits as to how small it could ever have been. How small
that could have been is well-nigh impossible to calculate without
more information than we actually have but it was certainly a lot
larger than the nucleus of an atom.
We
may not have enough information to calculate the size of the
pre-Moment Zero universe with accuracy but there is a way to work out
a "ball-park" figure. We can use the one percent rule which
is a rule of thumb, a very rough and ready rule of thumb, that says
that 99 percent of every structure in the Universe is actually empty
space. Structures may look solid but they are not. The truth is that
nothing is really “solid”. What is more, structures tend to be
vastly bigger than they are given credit for.
This
is certainly true of galaxies. There are many different types of
galaxies and they are all bigger than they appear to be. Let us take,
as an example, the sort most commonly shown to laymen: the spirals.
Photos of spiral galaxies feature more commonly in cosmology popularisations than any other type of galaxy because they are the prettiest. They look like vast catherine wheels, tens of thousands of lightyears wide, floating in space. However, the photographs lie.
What we see in those pictures are the galaxy’s stars, gas, and
dust. What we don’t see are the fingers of the galaxy's gravity,
probing far out into empty space.
The true size of a galaxy is that area over which it is the dominant
influence. To get a proper idea of what you are seeing, you have to
think of the catherine wheel, not as being the galaxy but merely as
being a disc of stars set within a galactic sphere. The sphere is
invisible to us but it is there nevertheless. What is even harder to
grasp is the sheer size of the galactic sphere. The gravity sphere of
a galaxy extends far beyond what can be seen. At the very least, it
doubles the apparent diameter of any galaxy. It turns those catherine
wheels into relatively small objects, beautiful maybe but small
nevertheless, sitting inside a truly vast area of apparent
nothingness.
If
the one percent rule is true of the galaxies, it is also true of each
of the stars within a galaxy. Our solar system serves as a fine
example. When compared to some other stars, our Sun is not very big
but it is still almost a million and a half kilometres from one side
to the other. However, there is a lot more to the Sun than the big
yellow ball and the few minuscule planets that rush around it.
Through its gravity, its magnetic field, and its solar wind, the Sun
controls an area in excess of 20 billion kilometres in diameter. The
one percent rule certainly applies here. At the very least, 99
percent of our solar system is invisible to us.
The
one percent rule doesn’t stop with stars. Stars are made up of
atoms and, as Rutherford found out, atoms obey the one percent rule
as well. All the matter in an atom is contained in either the nucleus
or in its electrons. These, together, occupy less than one percent of
the volume of an atom. You’ll often hear it said that we human beings
are 75 percent water. It is less frequently mentioned that we are 99
percent empty space.
The
nucleus of an atom is not a solid piece of matter either. It is made
out of a mix of protons and neutrons. I have never seen an estimate
of the percentage of matter to empty space within an atomic nucleus
but there’s no good reason that I can think of to suppose that it
doesn’t also obey the one percent rule.
And
so it goes on. Each proton and neutron contains three quarks and,
although we don’t know how much empty space there is between those,
there is experimental proof that there is at least some. Do these
obey the one percent rule? Given that everything else does, can we
justify supposing otherwise? Ultimately, of course, quarks are made
out of teels: of very large numbers of teels and as we will see in
the coming chapters, the one percent rule does indeed apply to the
teels that quarks are made of.
What
this demonstrates is that the Universe’s capacity for compaction is
remarkable. Think on this. The visible part of our galaxy, the Milky
Way, is roughly 100,000 lightyears in diameter. The galaxy’s
influence is felt far beyond that however so its true diameter is at
least double that, making the Milky Way into a sphere over 200,000
lightyears in diameter. This gives the Milky Way a volume of
approximately 105,000 cubic lightyears (calculated by using the
formula V = πd³/6).
If
all the stars in the Milky Way galaxy can be enclosed in just one
percent of its volume, that gives a volume of just under 1100 cubic
lightyears. If each of the stars can then be drawn together into a
ball of atoms, the volume comes down to 11 cubic lightyears. If the
atoms, in turn, are drawn together to become bare nucleons, the volume
now comes down to just over one cubic lightyear. There is still
capacity for compaction left, however. Squeezing the nucleon ball
into a quark ball, gives the Milky Way a volume of one tenth of a
cubic lightyear and squeezing that into a teel ball brings it down to
one hundredth of a cubic lightyear.
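-
The compaction chain above can be set out as a few lines of Python. The starting volume and the step-down factors are simply read off the chapter's own figures and are taken at face value:

volume = 105_000.0                                   # cubic lightyears: the full galactic sphere
steps = [
    ("stars gathered into one percent of the sphere", 100),
    ("stars drawn together into a ball of atoms",     100),
    ("atoms drawn down to bare nucleons",              10),
    ("nucleons squeezed into quarks",                  10),
    ("quarks broken down into teels",                  10),
]
for label, factor in steps:
    volume /= factor
    print(label, volume)   # 1050, 10.5, 1.05, 0.105, and 0.0105 cubic lightyears
-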
By
breaking down the matter of the Milky Way into its constituent teels,
we have reduced it to a ball with a diameter of just 0.02 of a
lightyear. In astronomical terms, this is a mere footstep. The
nearest star to our Sun, Proxima Centauri, is 4.2 lightyears away.
This means that we have condensed all the matter in the Milky Way
into a ball that has a diameter equal to less than a two hundredth
part of the distance between the Sun and Proxima Centauri. That ball would fit comfortably into the space between the Sun and its nearest neighbouring star.
In
between each of the Universe’s galaxies, there is space, space and
yet more space. There are billions of galaxies out there beyond the
Milky Way and all of them, in that fraction of a second before Moment
Zero, were crushed down into their component teels and then crushed
even further so that they became this single, incredibly densely
packed ball of teels hanging there, motionless, in space. Which
brings us back to our original question: how big was this ball of
teels?
In
1999, a survey using the Hubble Space Telescope estimated that the
part of the Universe that is visible to us contains at least 125
billion galaxies. For
the sake of simplicity, let us assume that each of these galaxies is
like the Milky Way and thus has a “compacted” size of one
hundredth of a cubic lightyear. Doing that creates a sphere with a
diameter of 436 million lightyears. That estimate though is based on
the Universe that is currently visible to us. The assumption among
cosmologists is that the whole Universe is much larger than that so
all we need to do is assume that the galactic density is the same all
the way through and scale up the 436 million lightyears to take
account of what is currently unseen.
Unfortunately
that is not easy because nobody even knows the diameter of the
visible Universe, let alone that of the visible and the invisible
together. There are all sorts of estimates that have been calculated
in all sorts of ways. Some are clearly more likely to be accurate
than others but even among the more likely ones there is no
consensus. All of them suffer by being calculated from too little
starting information.
To
make matters even worse, it is logically possible that the diameter
of the whole Universe is actually smaller than the diameter of the
visible Universe due to light circumnavigating the Universe in less
time than its age.
If
someone can supply a pair of authoritative diameters I’ll gladly
use them but, in the meantime, not knowing what they actually are
will not harm the onward progress of this analysis. For the time
being, I will presume that the current whole Universe is bigger than
the current visible Universe. This means that the initial starting
diameter of the teel ball would have been more than 436 million
lightyears. I am going to assume that it was one billion lightyears
for no better reason than that it is an easy number to write.
THE
MOMENT OF CHANGE
At
Moment Zero, all the speed that we currently have in the Universe was
suddenly introduced into the teel ball. It is important, though, to
be clear about the way in which that speed was introduced. The speed
had to be given to each of the teels individually. It could not be
given to the teel ball itself. It is a matter of collision mechanics.
If you strike a stationary pool ball with a second pool ball, speed
will be transferred from the second to the first. This speed, however, is given to the ball as a whole. The atoms it is made of will move in
concert with each other, still bound by their mutual gravity, so that
the ball remains a ball. On the other hand, if that speed is given,
not to the ball as a whole but shared equally among the atoms; and
if there is enough of it to break the gravitational bonding; and if
the direction in which the atoms consequently attempt to move is
random; the effect is very different. The atoms will fly apart. The
ball will disintegrate into a dust of departing particles.
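-
The distinction is easy to demonstrate. In the sketch below, which ignores the gravitational binding for simplicity, the same quantity of speed is handed out twice: once as a single shared velocity, which gives the ball a vector, and once in random directions, which leaves the ball as a whole with no vector at all while its parts disperse.

import math
import random

N = 100_000
common = [(1.0, 0.0)] * N                 # every atom given the same velocity
scattered = []                            # every atom given unit speed, random direction
for _ in range(N):
    theta = random.uniform(0.0, 2.0 * math.pi)
    scattered.append((math.cos(theta), math.sin(theta)))

def net_vector(vels):
    """The velocity of the ball as a whole: the average over its atoms."""
    return (sum(v[0] for v in vels) / len(vels),
            sum(v[1] for v in vels) / len(vels))

print(net_vector(common))                 # (1.0, 0.0): the ball moves off intact
print(net_vector(scattered))              # close to (0, 0): no net vector, the parts fly apart
-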
That
is what had to happen to the Universe. The speed that came at Moment
Zero could not be given to the teel ball as a whole. It had to be
given to the individual teels and there had to be vastly more than
enough speed to overcome their mutual gravity. Only in that way,
could the teel ball have sprung apart without, at the same time,
acquiring a velocity and a vector.
THE
EXERCISING OF LOGIC
If
you are following this chapter well enough, I’m pleased for you but
I am also sorry because a spanner now has to be inserted into the
cosmological works. The Moment Zero universe may sound to be a
reasonable conjecture but it didn’t happen that way. There never
was a ball of teels, incredibly dense and incredibly still, just
hanging there in space, all those billions of years ago, waiting for
all that speed to be injected into it. There couldn’t have been and
to recognise that this is so, all you have to do is apply a little
logic.
If
the Universe was previously in a state whereby it was able to hang
motionless and perfectly spherical in space with no trace of speed
anywhere, was this because it had always been like that?
Philosophically, this runs counter to the idea that everything,
without exception, has a beginning, a middle, and an end – and
practically, it raises the question: where did all the speed come
from? Speed is always conserved so, if it wasn’t contained within
the Universe, it must have been somewhere outside it. So where
exactly was it? And how was it being contained?
Consider
an alternative. What if this Universe hadn’t always been hanging
there, motionless and spherical in space? What if it had, in an
earlier time, been a speedy and energetic Universe, perhaps one that
resembled its current form. If that was the case, though, how did it
manage to get rid of its speed? In truth, I cannot think of any
mechanism whereby a body composed of vast numbers of teels, with each
teel having the properties of gravity and rejectivity, can be
dispossessed of ALL its speed. Every last jot of it.
Of
course, the fact that I can’t think of an appropriate mechanism,
doesn’t mean that one doesn’t exist. However, if one does exist
and the Universe did manage to get rid of every last jot of its
speed, we are brought back to asking where did all that conserved
speed go – and after it went, how did it manage to come back again
at Moment Zero?
Capping
all those posers is the granddaddy of them all. In the moment of the
Big Bang, what was the mechanism that delivered the speed to each
individual teel so that the stationary spherical teel ball could
suddenly disintegrate? Remember, this is a ball that contains only
teels. It is an extremely simple ball. There is no mechanism inside
it. It is not like an atom or a star or a galaxy which has internal
mechanisms that produce all sorts of different effects. The Universe
at this stage has all the complexity of a bag of ball-bearings –
and that is no complexity at all. If you want a bag of ball-bearings
to explode, you have to inject some explosive from outside. The same
is true of this teel-ball Universe – but what might that explosive
be?
IT
LIVES
What
I am trying to create is a bottomup analysis of the Universe. Such an
analysis needs a starting point and, as you now realise, we don’t
have enough information to be able to work out what the conditions at
that starting point might have been like. Consequently we are
beginning with this idea that the Universe originated as a speedless
ball of teels. This is an idea that clearly bears no real resemblance
to any understandable reality. In its earliest moments, the Universe
cannot have been like this.
Nevertheless,
for the moment, I want to stick with this particular extrapolation. I
want to stick with it because I am endowed with the benefits of
hindsight. I know what will happen. I know because I have already
worked it through many times. I know that this highly hypothetical
ball of teels will evolve into something that very much resembles the
Universe we see all about us – and while that is, in itself, a good
justification for carrying on, there is more.
As
we follow the story beyond the present day, into extrapolations about
the future of the Universe, we will begin to see processes at work
that push the raging Universe into a form of decay. Arising from that
decay, we will see other processes come to dominate on a mighty
scale, processes that might indeed have been the cause of the Big
Bang.
If
those extrapolations are correct, the Universe before Moment Zero was
not a simple ball of teels. It was a complex structure, a raging
place with highly sophisticated processes and mechanisms that were
all working to tear it apart. The speed turns out not to have been
suddenly injected into the Universe. It was there all the time.
Interestingly, this picture of the Universe before Moment Zero turns
out to be rather familiar. The Universe in the seconds before Moment
Zero was obeying the laws of physics as we know them today and was
being powered by processes that have been well-known for decades.
The
killer process, the one that finally triggered the breakup of this
earlier Universe, turns out not to have been some bizarre invention,
dragged from the fact-free mind of an over-imaginative theoretician.
It turns out to have been an old friend – or to be more exact,
something with which the human race has had an ambivalent
relationship since the early years of the twentieth century. The Big
Bang was an explosion. An ordinary explosion. The only thing
different about it was that it was on an almost inconceivably huge
scale.
All
that is for future chapters, however. For the moment, I’d like to
begin the story of the Universe with our hypothetical teel ball, one
billion lightyears in diameter, just hanging there in space,
incredibly dense and perfectly round. Then we’ll have an entirely
hypothetical big bang. At Moment Zero, in the tiniest fraction of an
instant, we’ll inject into that incomprehensible teel ball incomprehensible amounts of speed.
THE
REALITY CHECK
For
this chapter, reality checking is inevitably an inconclusive
procedure. The chapter deals with that minute fraction of time when
the Universe as we know it began. Unfortunately, we know of no facts
about that minute fraction of time so anything we might have to say
about it is always going to be conjectural. In truth, we can’t even
say for certain that the Universe began at all. There is scientific
evidence that the Universe may once have been smaller than it is
today but that is not the same as saying “it began”. More
convincing are the philosophical arguments suggesting that the
Universe is more likely to be finite than infinite but how much more
convincing the arguments are depends on how much credence can be
given to philosophical arguments – and scientists are notorious for
giving very little.
Because
of its topdown nature, the Current Paradigm, prior to 300,000 years
after the Big Bang, finds itself well below the glass floor and is therefore nothing more than extrapolative speculation. Prior to 10^-43
it acknowledges defeat and ceases to have anything specific to say.
The string theorists, et al, are making valiant attempts to push beyond the 10^-43 barrier but theirs is such an evidence-free environment that anything goes – and it does.
The
New Cosmology is more forthcoming about the first moment. It proposes
that the Universe, immediately before Moment Zero, was a ball of
teels that was completely devoid of any speed/energy. Then, at Moment
Zero, all the speed/energy that is currently in the Universe was
injected into that Universe by some unknown mechanism that managed
the neat trick of giving exactly the same amount of speed to each
individual teel.
Unfortunately,
I then go and spoil things by saying that the Universe at Moment Zero
was not actually like that at all: that the picture I have drawn is
just a simple model for use as a starting point for the more complex
model that is to come: that the real early Universe was a much more
complex object. I justify this by saying that as the Universe
naturally develops in the chapters to come, it will become an object
of such suitable complexity that it could well, at Moment Zero, have
exploded in such a way as to produce the Universe that we see today.
Probably
the strongest argument in favour of the New Cosmology, as opposed to
the Current Paradigm, is that, by being a bottomup analysis,
it does not suffer from any glass floor problems. It doesn’t
stutter to a halt because it has run out of facts. Quite the
opposite, actually, in that as it goes on it draws more and more
facts to itself. Starting with a hypothetical Moment Zero it proceeds
to build itself into particles and processes that are exactly what we
see about us.
Or
does it? Let us be properly sceptical for a moment and question why
it is that the New Cosmology, starting with a billion-lightyear wide
ball of teels, can evolve into our present-day Universe. Does it do
it because it has no choice – or could there possibly be another
reason?
Actually,
it is entirely possible that I am deluding myself. It is entirely
possible that, in my desire to produce an improved picture of the
Universe, I have subconsciously manipulated my research so that the
picture I’ve produced is the one that I want to see rather than the
one that is really there. This is entirely possible. I hope it isn’t
but it is something that happens so frequently in all walks of human
activity that the possibility cannot be denied. If it is what has
happened, I’ll not have been the first to do it and I certainly
won’t be the last.
CHAPTER
THREE
THE
PLANCK EPOCH
This
chapter deals with what happened in the Universe during the period
that stretches, in the Current Paradigm, from the moment of the Big
Bang to 10^-43
of a second after it. This period is known as the Planck Epoch.
At
this stage in the lifecycle of the Universe, the Current Paradigm and
the New Cosmology are not directly comparable. Not least, this is for
reasons of scale. In the Current Paradigm, the Universe is “smaller
than the nucleus of an atom” at the END of the Planck Epoch. In the
New Cosmology, the Universe is a billion lightyears across at the
BEGINNING. Inevitably, then, the timescales are going to be
different.
Actually,
they are so different that the New Cosmology doesn’t really have a
timespan that can be identified as a Planck Epoch equivalent at all
and that means that this chapter is really an artificial contrivance.
Having said that, there are new measures and mechanisms to be
introduced into the New Cosmology and this chapter is as good a place
as any.
FACTS
The
term “Planck Epoch” comes from Max Planck, the physicist who
first defined its boundaries. Max Planck was a leading light in the
creation and promotion of Quantum Theory, an alternative to the
classical form of physics typified by the work of Isaac Newton.
Quantum Theory comes into its own when dealing with the extremely
small.
The Planck Epoch stretches forward in time from the moment of the Big Bang to 10^-43 of a second later. In that time, the Universe expanded to a diameter of one “Planck Length”. A Planck Length is about 1.6 × 10^-35 metres and equates to about 10^-20 of the size of a proton. 10^-43 of a second is the time it would take for a photon, travelling at lightspeed, to cross a distance equal to the Planck Length.
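-
Those figures are easily checked, using the standard values for the Planck length and the speed of light, and a rough figure for the diameter of a proton:

PLANCK_LENGTH = 1.616e-35      # metres
LIGHT_SPEED = 2.998e8          # metres per second
PROTON_DIAMETER = 1.7e-15      # metres, roughly

print(PLANCK_LENGTH / LIGHT_SPEED)        # about 5.4e-44 seconds, i.e. of the order of 10^-43
print(PLANCK_LENGTH / PROTON_DIAMETER)    # about 1e-20 of a proton's size
-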
In
the Planck Epoch, the density and temperature of the Universe was
such that the four forces of nature, the gravitational force, the
electromagnetic force, the weak nuclear force, and the strong nuclear
force, could well have become one single force. Exactly what that
force was, how it would have manifested itself, and what its
properties might have been, is unknown but it is assumed that it
would have been some form of gravity.
Today
we use two theories of gravity, both of which have proved to be
satisfactory in specific circumstances. There is Newton’s theory
which is resolutely classical and there is Einstein’s General
Theory of Relativity which is geometrical. It is thought that a
proper understanding of “gravity” during the Planck Epoch may be
possible if a “quantum” theory of gravity can be deduced. Thus
far, however, such a theory has not been forthcoming, and not for the want of trying – for much of the last century, the creation of a
quantum gravity theory has been one of the prime goals for physicists
the world over.
What
this all means is that nothing is “known” about conditions during
the Planck Epoch. There are no more “facts” here than there were
in the last chapter. Not that this has stopped cosmologists having
“ideas”. Ideas are currently sprouting faster than weeds in a
rose garden. This is what happens when a topdown construction runs
out of facts. A lack of facts equates to a lack of constraints which
means that extrapolations can range ever more widely. Leave such a
situation to run for long enough and the extrapolations become a lot
more like wishful thinking than scientific endeavour.
Contrast
this with the bottomup picture that comes with the New Cosmology.
THE
MECHANICS OF EXPANSION
Immediately
before Moment Zero, the teelball that was our Universe was hanging
there in the emptiness of space: alone and dark and still. This was
a ball that contained no energy because not one of the billions and
billions and billions of teels in the ball had even the smallest
smidgeon of speed. This teelball was the deadest object there has
ever been.
At
this time, each teel in the Universe had just two properties:
gravity and rejectivity. Then, at Moment Zero, each teel was suddenly
given a sub-property: speed. Exactly how much speed each teel was
given, we don’t know but it was enough to give each one a velocity
that was many times that of light. Speed can come as realspeed or
potentialspeed but at this instant it was all realspeed.
It
is a universal truth that you cannot have velocity without a vector.
When an object moves, it has to move in a direction. When each of the
teels in the Universe received that vast amount of speed, a vector
came with the package and, since the Universe itself didn’t take on
a vector, we can only assume that the vectors were randomly
distributed, with there being no bias in favour of one direction as
opposed to any other.
Not
that the randomness of the vectors would have lasted very long. Not
only was each teel completely surrounded by other teels, they were
all as closely packed together as their rejectivity would allow. This
meant that, even though each teel was suddenly endowed with a
colossal amount of speed, they couldn’t move. No matter which way
their vector might want to take them, there was another teel blocking
the way. Here was a conflict that had to be resolved because speed is
conserved. Each of these teels was possessed of a phenomenal amount
of speed and yet it was being prevented from moving. Something had to
give.
The
deadlock was broken by yet another universal truth: a stressed
object will always move in the direction of least resistance. The
Universe was a sphere and, exactly like any other spherical object,
it had a “centre” and a “surface”. It is the surface of a
sphere that is its point of least resistance. While every other teel
in the Universe was completely and densely surrounded, those at the
surface had one side open to outer space. For these teels, there was
a way out. As long as they moved exactly outwards from the Universe’s
centre point, they were free to move at the colossal speed they had
been given.
Suddenly,
the surface teels were leaping away from the Universe, closely
followed by the teels immediately below them, closely followed by the
teels immediately below them, and so on, all the way down to the
teels at the exact centre. In an instant, the Universe had become an
expanding Universe, with all the teels vectored outwards at almost
exactly the same velocity.
Just
as suddenly, however, the teels began to slow down.
DECELERATION
The
speed the teels received at Moment Zero was realspeed but being
forced to follow a vector away from the centre of the Universe meant
that the speed immediately began converting to potentialspeed. There
is no secret to this. As the teels raced away from the centre of the
Universe, each had more than fifty percent of the other teels behind
it. Not necessarily directly behind of course. Some would have been
but most would have been at an angle and some would even have been
alongside. Nevertheless, at least 50% would have been “not ahead”.
The
significance of this lies in the way that each teel possessed an
equal amount of gravity. Thus, a teel very close to the centre of the
Universe would have had 51% of the other teels behind it and
decelerating it and 49% ahead and accelerating it. In contrast, a
teel on the surface of the Universe would have had 100% of the other
teels decelerating it and none ahead to do any accelerating.
In the balance between acceleration and deceleration, the bias for every teel was in favour of deceleration, no matter how slight. Even the teel at the exact dead centre of the Universe, which began with 49.9-recurring percent of the other teels ahead of it in whichever direction it faced, would have taken on the deceleration bias as soon as it began to move.
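-
The "not ahead" share is easy to estimate with a little Monte Carlo, assuming a uniformly filled ball: for a teel sitting at some fraction of the ball's radius and moving straight outwards, count how many of the other teels lie behind the plane of its motion.

import random

def fraction_behind(r_frac, samples=200_000):
    """A teel at (0, 0, r_frac) inside a unit ball, moving towards +z."""
    behind = 0
    for _ in range(samples):
        while True:                                       # uniform point in the unit ball
            x, y, z = (random.uniform(-1.0, 1.0) for _ in range(3))
            if x * x + y * y + z * z <= 1.0:
                break
        if z < r_frac:                                    # behind the mover, dragging it back
            behind += 1
    return behind / samples

for r in (0.0, 0.5, 1.0):
    print(r, fraction_behind(r))   # roughly 0.50, 0.84, and 1.00 respectively
-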
This
slowing down of the Universe’s expansion has continued, so far as
we can tell, to this day. That it is still slowing after 13.7 billion
years indicates just how enormous was the amount of speed injected
into the Universe at Moment Zero.
THE
TRANSFER OF SPEED
At
Moment Zero, all the teels raced directly away from the Universe’s
dead-centre at exactly the same velocity. They immediately began to
decelerate by converting realspeed into potentialspeed. However, they
didn’t all decelerate at the same rate because the nearer a teel
was to the centre of the Universe, the more teels it had ahead of it,
accelerating it, and the fewer it had behind it, decelerating it. Effectively, the deceleration rate would have been greatest at the surface and least at the centre. This was not a recipe for a harmonious expansion of the Universe. Inevitably, those behind were
going to bump into those ahead.
The
bumping followed the laws of collision mechanics. Each bump would
transfer speed from the teel behind to the one ahead. The one ahead
would accelerate and the one behind would decelerate. This, in turn,
would ensure that the one ahead bumped into the one ahead of it, and
that the one behind was bumped into. And so on. The inevitable
consequence of this was that speed was rapidly transferred outwards
from the centre of the Universe to the surface, counteracting to a
greater or lesser degree, the gravitational slowing-down.
There
is nothing mystical about this outwards transfer of speed. It can be
replicated here on Earth with ease. Place a large pile of flour on
the ground and explode a stick of dynamite in the middle of it. Once
the bang is over and done with, you’ll have a large circle of flour
on the ground. The flour grains that form the edge of the circle will
be those that received the greatest amount of speed. Those that
remain near the centre will have received the least. The speed, of
course, will have started inside the dynamite at the centre of the
pile and reverberated outwards through the flour to the surface. The
flour will only have been stopped from moving outwards forever by a
mix of the Earth’s gravity and air resistance.
CHAOS
To
recap, the expansion of the Universe passed through three phases as
quickly as would have been possible given a starting diameter of one
billion lightyears.
I
have separated the phases for clarity but in practice the phases
would have passed almost simultaneously. Teels would have been racing
outwards at the velocity originally given while, at the same time,
being decelerated by the mutual gravity and being both accelerated
and decelerated by the collision activity.
There
was also a phase four and this was likewise taking place
simultaneously, turning a relatively tidy outwards expansion into a
turbulent and chaotic affair. This phase stemmed from the supposition
that the entirely theoretical teel, in possessing rejectivity,
entirely filled out its dimensions, making it into a perfect sphere.
Any pool player will tell you that passing exactly the same vector
from one pool ball to another requires that the one ball strikes the
other at exactly the right spot. The pool player will also tell you
that this takes some doing.
Out
of all the billions and billions and billions of teel collisions
taking place during the first expansion of the Universe, the number
that were precisely spot-on would have been small. Consequently, the
colliding teels would have been springing away from each other at
increasingly wild angles. The expansion of the Universe was still
outwards and at an astonishingly rapid rate, the combination of
incredible speed and incredible density would have seen to that, but
it was by no means as fast as it would have been had the orderly
progression of phases one, two, and three continued.
THE
FOUR FORCES
In
the timeline of the Current Paradigm, all four forces were originally
part of a single superforce, the properties of which may, or may not,
have resembled gravity as we know it.
Towards
the end of the Planck Epoch, at 10⁻⁴³ seconds, the declining
temperature and density of the Universe separated gravity out from the
superforce so that there were now two forces: gravity and another which
was a mix of the Strong, Weak and Electromagnetic. At 10⁻³⁶ seconds,
the Strong Force separated out so that there were now three forces.
Finally, at 10⁻¹² seconds, the Weak Force separated out of the
Electromagnetic Force so that, in less than a second after the Big
Bang, the single gravity-like superforce had broken down into the four
forces that we recognise today.
In
the New Cosmology, there is no “superforce”. During the New
Cosmology’s equivalent of the Planck Epoch, gravity was still the
same gravity that we know today. The other three forces were not
merged with it (or, as in some interpretations, indistinguishable
from it) because at this time they did not exist. In one sense, they
never will exist, not even today, for while the gravitational force
is an inherent property of the teel, the other three forces are not
inherent properties of anything. Actually, they are not properties at
all. They are manifestations of specific processes that involve the
gathering together of impressive numbers of teels into very small
areas in very specific ways.
Exactly
how these processes work will be described in detail in the coming
chapters as their related particles make their appearance. Suffice it
to say, here, that they were nowhere to be seen during the Planck
Epoch and they will not appear for some time to come. They certainly
didn’t, any of them, appear in the first second of the Universe’s
existence.
MASS
AND DENSITY
Here
are new versions of a pair of old measures. First: mass. Every object
in the Universe, from teels upward, has a gravitational mass and an
inertial mass. As described in the last chapter, gravitational mass
stems from an object’s gravity and inertial mass stems from its
rejectivity. From hereon, for clarity, any reference to “mass”
will mean gravitational mass while inertial mass will be referred to
as rejectivity.
Here
is a typical current definition of mass:
Mass
is a measure of the amount of matter contained in or
constituting a physical body. Objects
that have mass interact
with each other through the force of gravity.
Every
object in the Universe has mass. Likewise, every object in the
Universe has a degree of density. Here is a typical current
definition:
Density
is a measure of the mass of a substance per unit volume. Most
substances (especially gases such as air) increase
in density as their pressure increases or
their temperature decreases.
There
is nothing especially wrong with either of these definitions but they
are topdown. Processes that involve mass and density are easier to
understand if we look at them bottomup. Thus, while some objects in
the Universe are very large by the standards of Planet Earth,
galaxies for example, the mass and density of every object, large
or small, is still ultimately down to the teels it is made out of and
is best understood when seen in that light. Here are revised
definitions of mass and density, along with two brand new measures
that will be useful in the chapters to come:
Since
all teels are identical, it might seem logical to suppose that
calculating the mass and density of a particle with (say) a teelmass
of one million (a “megateel particle") would simply be a
matter of multiplying the mass/density of one teel by one million.
That would be logical but it would be wrong. It would be ignoring the
teel’s rejectivity. The mass and the density of our megateel
particle could only be one million times one if all the teels were
occupying exactly the same spot and that is not possible. Our
megateel particle is a sphere of teels that are packed to the limits
of their rejectivity. Its mass, therefore, is the mass of each teel
adjusted by way of the Inverse Square Law to take account of the
distance of each teel from the surface – and its density comes down
to “into how small a bag can you get a million teel-sized marbles”.
Both the mass and the density, then, are somewhat less than one
million times one.
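The “how small a bag” question has a well-studied answer. The sketch
below is illustrative only, borrowing the standard sphere-packing
result rather than anything from this analysis: identical rigid
spheres can fill at most π/√18, roughly 74%, of the space they occupy.

import math

# How small a bag for a million teel-sized marbles? Assume teels pack
# like identical rigid spheres; the densest possible arrangement fills
# pi/sqrt(18) of space (the Kepler packing density).

TEEL_RADIUS = 1.0                       # arbitrary unit
N_TEELS = 1_000_000

packing = math.pi / math.sqrt(18)       # ~0.7405
teel_volume = (4.0 / 3.0) * math.pi * TEEL_RADIUS ** 3
bag_volume = N_TEELS * teel_volume / packing
bag_radius = (3.0 * bag_volume / (4.0 * math.pi)) ** (1.0 / 3.0)

print(f"packing fraction: {packing:.4f}")
print(f"bag radius      : {bag_radius:.1f} teel radii")

# Roughly a quarter of even the tightest-packed megateel is emptiness,
# which is why its density is somewhat less than a million times one.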
Having
said that, being packed to the limits of its rejectivity, our
megateel is still as massive and dense as a megateel particle can be.
It is impossible to make a particle denser or more massive than this
one. It is also a highly unlikely particle because, in practice, a
megateel particle would be spinning and this would inevitably lessen
both the mass and the density.
Every
object in the Universe, bar the teel itself, spins to a greater or
lesser degree. Or does it? Actually, even the act of spinning is
better understood at the level of the teels that an object is made
of. The spinning of any complex particle is the orbiting of its teels
around the particle’s axis – and the faster the teels are moving,
the wider their orbits will be.
Consequently, the faster a complex particle is spinning, the “bigger”
it is – and since bigger equates to the same number of teels in a
larger volume, the complex particle is both less dense and less
massive.
That
a megateel particle would become less dense by becoming bigger is
easy enough to understand and fits in well enough with current
dictionary definitions. That it might also become less massive,
however, is counterintuitive and does not fit comfortably with the
current concept – and is actually contrary to some definitions.
“Mass, in physics, is the quantity of matter in a body, regardless
of its volume or of any forces acting on it” which is, of course,
the Galilean concept whereby a kilo of feathers equates to a kilo of
iron, notwithstanding one is rather bigger than the other.
That
particular definition, in this new scheme of things, actually equates
to “teelmass”. “Mass” is gravitational mass, the
gravitational strength of a complex particle at its surface. By way
of an illustration, suppose that the volume of planet Jupiter, a gas
giant, was to be reduced, without any loss of matter, by reducing the
rate at which the planet was spinning. Do this and Jupiter’s mass
would increase along with its density. In other words, two otherwise
identical megateel particles will have a different density/mass if
they have a different spinrate.
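A toy calculation makes the Jupiter illustration concrete. The numbers
below are assumptions for demonstration, not measurements: hold a
Jupiter-like quantity of matter fixed, let the spin-rate set the
radius, and watch the density and the gravitational strength at the
surface rise together as the radius shrinks.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
MATTER = 1.898e27    # a Jupiter-like quantity of matter, kg (fixed)

def density(radius):
    return MATTER / ((4.0 / 3.0) * math.pi * radius ** 3)

def surface_strength(radius):
    # "mass" in the sense used above: gravitational strength at the surface
    return G * MATTER / radius ** 2

# Hypothetical radii: a faster spin means wider teel orbits and a
# bigger planet; a slower spin means a smaller one.
for label, r in (("fast spin, radius 7.0e7 m", 7.0e7),
                 ("slow spin, radius 6.0e7 m", 6.0e7)):
    print(f"{label}: density {density(r):6.0f} kg/m^3, "
          f"surface strength {surface_strength(r):5.1f} m/s^2")

Shrink the radius and both numbers climb; the quantity of matter – the
teelmass – never changes.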
Of
course, a megateel particle is just a simplification for
demonstration purposes. In practice, complex particles are a lot more
“complex” than that. Even relatively simple particles consist of
a lot more than just a bunch of teels moving around the axis and once
they get to any size, they will consist of cascades of complex
particle subassemblies: quarks inside nucleons, nucleons inside
atoms, atoms inside molecules, molecules inside planets, all the way
up to the Universe itself. All of these complex particle
subassemblies are spinning in their own right and the rate at which
they are spinning directly affects their own mass and density – and
the mass and density of the larger particle of which they are a part.
ESCAPE-VELOCITY
Escape-velocity
is a measure possessed by every material object in the Universe, from
the insubstantial teel, all the way up to the most massive galactic
cluster. Even the Universe itself has an escape-velocity. The
escape-velocity is a very important measure because the form of all
complex particles is maintained by it. The continued existence of a
complex particle depends on whether or not any of its component teels
can escape – and to escape, its teels have to be moving faster than
the complex particle’s escape velocity. Here is a typical
definition:
Escape
Velocity is the minimum speed that Object A needs
to possess in order to, without any power, escape
from the gravity field of Object B.
The
principle underlying escape-velocity is this: the gravitational
strength of Object B will decline with distance from its surface at
the rate of the Inverse Square Law. The velocity of Object A, should
it be rising vertically from the surface of Object B, will also
decline with distance from the surface of Object B at the Inverse
Square Law rate. Both are therefore interlinked and effectively
decline at the same rate.
The
escape-velocity at the surface of Object B can be calculated from its
mass which is in turn a product of its teelmass and its teeldensity.
If Object A moves away from Object B at less than the
escape-velocity, the gravity of Object B will eventually bring it to
a halt, after which it will then fall back to the surface. If it
moves faster than the escape-velocity, because the decline of the
gravitational strength and the decline of the velocity are
interlinked, the gravity of Object B will never bring Object A to a
halt and it will carry on moving away, either forever or until
something else intrudes to stop it doing so.
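For readers who want numbers, here is a minimal check using the
conventional Newtonian formula, v_esc = √(2GM/r), with the Earth
standing in for Object B. The formula is standard physics, not
something new to this analysis.

import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg  (Object B)
R_EARTH = 6.371e6    # m   (its surface, in the sea-level sense)

v_esc = math.sqrt(2.0 * G * M_EARTH / R_EARTH)
print(f"escape-velocity at the surface: {v_esc / 1000:.1f} km/s")  # ~11.2

# Launch Object A at just under and just over the escape-velocity and
# check the total energy per kilogram: negative means gravity wins.
for v0 in (0.99 * v_esc, 1.01 * v_esc):
    total = 0.5 * v0 ** 2 - G * M_EARTH / R_EARTH
    verdict = "falls back" if total < 0 else "never brought to a halt"
    print(f"launch at {v0 / 1000:5.2f} km/s -> {verdict}")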
The
principles underlying escape-velocity are simple but in the real
Universe that simplicity can be somewhat muddied. The real Universe
contains a lot more than just an Object A and an Object B. There are
billions and billions of teels in the Universe and there are billions
and billions of complex particles that are made out of assemblages of
teels. Every one of those teels and every one of those complex
particles has a gravitational relationship with everything else –
and with every gravitational relationship comes a pair of
escape-velocities.
Distance,
though, is a great leveller. The gravitational relationship of a pair
of teels, one on each side of the Universe, is as near to zero as
makes no difference. Even the gravitational relationship of vast
objects is reduced by distance to negligibility. The Milky Way galaxy
will “escape” from a galaxy on the other side of the Universe
with almost no velocity at all. Getting away from nearby Andromeda
will take a little more effort. This is not to say that the Milky Way
is not influenced by objects on the far side of the Universe, merely
that distant objects have to club together to make their influence
meaningful. A pocket torch one mile away would be well-nigh
invisible. A million pocket torches one mile away would be very
visible indeed.
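To put rough numbers on that levelling: the sketch below uses assumed,
order-of-magnitude figures for a Milky-Way-scale mass and shows the
mutual escape-velocity collapsing as the separation grows.

import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_GALAXY = 1.5e42      # a rough, assumed Milky-Way-scale mass in kg
LIGHTYEAR = 9.461e15   # metres

# Treat one galaxy as escaping the gravity of the other, test-mass style.
for label, d_ly in (("Andromeda, ~2.5 million ly away", 2.5e6),
                    ("a galaxy ~80 billion ly away", 8.0e10)):
    d = d_ly * LIGHTYEAR
    v = math.sqrt(2.0 * G * M_GALAXY / d)
    print(f"{label}: escape-velocity ~ {v / 1000:.1f} km/s")

# Prints roughly 92 km/s for Andromeda and roughly 0.5 km/s for the
# far-side galaxy - "almost no velocity at all".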
The
definition of escape-velocity, as given above, is more than adequate,
so it will stand unamended.
A
brief digression: escape-velocity is conventionally calculated as at
the surface of an object. However, the “surface” doesn’t always
mean the same thing in all places. On planet Earth, for instance, it
has always been calculated as at sea-level for the entirely practical
reason that sea-level is the nearest thing we have on Earth to a
surface “constant”.
Unfortunately,
using sea-level as a constant is not of much use when we consider
extraterrestrial objects. None (apart from, possibly, Titan) has a
readily identifiable “sea level”. Planet Jupiter, for instance,
is a gas giant which, so far as we know, doesn’t have a sea at all.
Some say it doesn’t even have a solid surface that we would
recognise as such. By convention, Jupiter’s surface for
escape-velocity purposes is its cloud tops. With the Sun and the
stars, the situation is similarly vague. The surface that we can see
is certainly neither liquid nor solid and, the closer you get to it,
the less surface-like it appears to be. Does the Sun, somewhere down
inside, have a liquid or solid surface? Possibly, perhaps probably,
but no-one knows for sure. By convention, the surface for
escape-velocity purposes is the “apparent” surface.
For
consistency, the best measure of escape-velocity would be as at the
centre of any object. All objects have a centre and consequently
direct comparisons would be easy to understand. That, though, would
sit uncomfortably with most people’s awareness of escape-velocity
which typically has to do with the firing of rockets from Cape
Canaveral or Baikonur or Kourou.
The
nature of this analysis is such that it doesn’t have much need of
new calculations, certainly not of any that involve an object’s
escape velocity. The need for consistency in calculating
escape-velocity is merely pointed out here as something that should
be addressed. For the purposes of this analysis, all escape
velocities will be taken as at what passes for a surface.
THE
INTERLINKING OF MEASURES
Having
looked at the different measures we can apply to the early Universe,
it now becomes possible to draw up a more detailed picture of the way
things were. First, though, consider the Planck Epoch itself. In the
Current Paradigm it has a specific meaning. It is the period between
the Big Bang and 10⁻⁴³ seconds. It is the period in which the backward
extrapolations of the Big Bang Standard Model cease to have any
practical connection with the Universe as we currently know it. In the
New Cosmology, the Planck
Epoch is more arbitrarily defined. It corresponds to the Current
Paradigm Planck Epoch only in that it concerns the moments
immediately after Moment Zero. The length of the Planck Epoch in the
New Cosmology is currently unknown but, in that the Universe began
with a diameter of one billion lightyears, it was almost certainly
longer than 10⁻⁴³ of a second.
As
to the measures, in order to see how they apply we'll look at three
snapshots of the Universe: the first is immediately before the
beginning of the Planck Epoch, the second is at Moment Zero when the
Planck Epoch officially began, and the last is at the end of the
Planck Epoch, the New Cosmology equivalent of 10⁻⁴³ seconds.
Snapshot one: immediately before the Planck Epoch.
- Diameter: one billion lightyears.
- Mass: unknown but the most massive it has ever been.
- Density: unknown but, being packed to the limits of rejectivity, the densest it has ever been.
- Escape-velocity: unknown but, with the Universe being the most massive and the most dense it has ever been, the escape-velocity would also have been the fastest it has ever been.
- Totalspeed: zero.
- Spinspeed: zero.
- Teelmass: unknown but as high as it has ever been.
- Teeldensity: unknown but at this time it would have been exactly the same as the density – and that was the densest it has ever been.
Snapshot two: at Moment Zero, when the Planck Epoch officially began.
- Diameter: one billion lightyears plus – the Universe was suddenly expanding at many times the speed of light. This expansion-rate was the fastest it has ever been.
- Mass: from the most massive that the Universe has ever been, the mass was now falling rapidly as the surface of the Universe moved away from the centre. The mass fall-rate at this moment was the fastest it has ever been.
- Density: this too was falling rapidly as the Universe expanded, spreading its teels over a wider area. The density fall-rate was the fastest there has ever been.
- Escape-velocity: falling rapidly as the mass and density of the Universe fell. The escape-velocity fall-rate was (surprise, surprise) the fastest there has ever been.
- Totalspeed: at Moment Zero, the totalspeed of the Universe shot up from zero to being as high as it has ever been. All of the totalspeed was in the form of realspeed.
- Spinspeed: at Moment Zero, the amount of spinspeed in the Universe shot up from being zero to being as high as it has ever been. All of that spinspeed was in the form of speed.
- Teelmass: before Moment Zero, the teelmass was as high as it has ever been. Whether this was still so at Moment Zero depended on whether any teels were given enough speed to exceed the Universe’s escape-velocity. The implications of this will be explored in the next section.
- Teeldensity: like the teelmass, before Moment Zero the teeldensity was as high as it has ever been. It was also identical to the density. At Moment Zero it began dropping rapidly and, given that the Universe at this time still contained no spinning substructures, it remained the same as the density.
Snapshot three: at the end of the Planck Epoch, the New Cosmology equivalent of 10⁻⁴³ seconds.
- Diameter: one billion lightyears plus – and still expanding at many times lightspeed. However, the expansion-rate was slowing. There were two reasons for this. Firstly, the mutual gravity of all the teels was having its effect and converting realspeed to potentialspeed. Secondly, an increasing degree of chaos was setting in as the vectors of the colliding teels changed from straight lines, directly out from the centre, to ellipses.
- Mass: the mass of the Universe continued to fall as the surface of the Universe moved ever farther away from its centre. However, as the rate of the Universe’s expansion slowed, so too did the fall-rate of its mass.
- Density: the density of the Universe continued to fall as the teels occupied an ever larger area. However, the fall-rate of that density was slowing as the expansion-rate of the Universe was slowing.
- Escape-velocity: given that the mass and density of the Universe were both falling, the escape-velocity also had to be falling. However, as the fall-rate of those two measures was slowing, so too was the fall-rate in the escape-velocity.
- Totalspeed: if we assume for the moment that the teelmass of the Universe had not changed, then the totalspeed would likewise not have changed. However, its character was changing rapidly as realspeed became potentialspeed – although the change-rate was slowing in line with the slowing of the Universe’s expansion-rate.
- Spinspeed: the amount of spinspeed was falling in line with the changing of realspeed to potentialspeed. However, the character of that spinspeed was changing. It had all previously been in the form of speed. As a result of the increasing collision activity, some of the teels would have adopted closed ellipses around the Universe, providing increasing amounts of “orbital” spin. Given that the ellipses were random, though, the Universe itself could hardly be described as spinning.
- Teelmass: the key to the constancy of the Universe’s teelmass lay in its escape-velocity. If any of the teels had enough realspeed to exceed the Universe’s escape-velocity, the teelmass would reduce. There is no direct indication as to whether this happened/is happening but the implications of the possibility are considered in detail in the next section.
- Teeldensity: the teeldensity was falling rapidly but the fall-rate was slowing in line with the Universe’s slowing rate of expansion. At this time the teeldensity and the density were still the same.
The
changing measures in these snapshots are not expressed in numbers but
that doesn’t mean they have been arbitrarily arrived at. Gather a
specific number of teels together into a teelball, give them each a
specific amount of speed, and what will happen is inevitable. What is
especially notable about the measure changes is that they are
interlinked. Any change in one will result in predictable changes in
others. This is something that we will see again and again as this
analysis progresses although it will never be so clearly apparent
again. As the Universe becomes a more complex object, the measures
will become different at local, global, stellar, galactic and
universal levels and the interlinking will become less obvious.
Nevertheless, the principle that no measure-changes ever happen in
isolation will still apply. It will just be a matter of identifying
the right links.
This
could be seen as just a broader version of “for every action there
is a reaction” but there is more to it than that. Seen on a
universal scale, there is a form of balance that overarches
everything. The measures of the Universe today are different from
what they were five billion years ago and they are different again
from what they will be in five billion years time. Nevertheless, the
balance remains. The root of that balance lies in the most
fundamental measures of all – two measures that do not change. At
Moment Zero, the Universe had a specific teelmass and a specific
totalspeed. Any other measures you might care to make are all
subsidiary to those two. The subsidiary measures can change, can
appear in many different forms, can change from one measure to
another, but the Universe's teelmass and totalspeed do not change.
Subject to the conjectures in the next section of this chapter, they
are the same as they have always been.
On
a philosophical level, this interlinking of measures is something
that many researchers have sought, presumably because it implies that
there is more order in the Universe than at first there appears to
be. Certainly, in his Principia Mathematica, Newton interlinked as
much as he could. Darwin’s theory of evolution is just interlinking
on a local level. Almost the whole of psychiatry is based on the idea
of action/reaction/action. Most significantly of all, the idea of
interlinking underlies the whole of Einstein’s General Relativity
theory. Space, time, mass, energy, and velocity – change any one of
them and you will change all the others. In the Einsteinian
interpretation there is nothing outside the Universe, the Universe is
all there is so, while the measures can change, the interlinked
balance remains.
The
interlinked balance provided by General Relativity is at the heart of
the Current Paradigm. An interlinking balance is similarly the most
important feature in the New Cosmology but there is a major
difference between the two. Because the Current Paradigm Universe is
all there is, the measures at their most fundamental cannot change.
If the Current Paradigm Universe had a value of “one” at its
beginning, it will have a value of “one” at its end. In the New
Cosmology Universe, the values at its beginning and at its end do not
have to be the same. This is because it has a measure that the
Current Paradigm does not have. It has an escape velocity. Giving the
Universe an escape-velocity means that, potentially at least, its
teelmass and its totalspeed can change and changing these measures
will change all the others. The interlinking balance remains but the
numbers at the end can be different.
A
GREATER UNIVERSE?
In
snapshot one, during that tiny fraction of a second before Moment
Zero, the Universe had a specific teelmass. Then, at Moment Zero,
each of those teels was given an enormous amount of speed. However,
we don’t know exactly how enormous that amount of speed was. Was it
enough to push some, or perhaps all, of the teels over the Universe’s
escape-velocity? Does the present-day, much expanded, Universe still
have the same teelmass? Does it still contain the same number of
teels that it started with?
In
the Current Paradigm, with its central Einsteinian idea of the
Universe being all there is, the amount of energy/matter in the
Universe is constant. Since there is nothing outside the Universe,
nothing can leave the Universe. Even in its most extreme form, that
of an “open” Universe in which the Universe continues to expand
forever, the Universe is not expanding out into something else
because there is nothing for the Universe to expand out into. In the
New Cosmology, the Universe is a much less exotic, much more mundane,
object. It exists in “space” and it contains “space”. The
space it contains is the emptiness in between its teels. The space in
which it exists is the emptiness beyond its surface.
Exactly
how big the empty space beyond the Universe's surface might be is
unknown but it will have conventional dimensions and could well be
very extensive. There is always the possibility that it is infinitely
big although the spirit of the New Cosmology suggests that is
unlikely. What we can guess is that the empty space is not as empty
as all that. For a start, if some teels have ever been able to move
faster than the Universe’s escape-velocity, the emptiness will
contain them.
Health
warning alert: by moving outside our Universe we are moving beyond
anything that is remotely likely to be verifiable for a very long
time to come. If ever. Of this region, this analysis can suggest
nothing for definite. It can provide no answers. It can only ask
questions. Questions like: if teels can escape from the Universe out
into the surrounding space, would they be the only teels out there?
Might it be possible that there are already free-flying teels in the
surrounding space that did not originate in our Universe? Is it
possible that the teels contained in our Universe actually originated
in the surrounding space?
Earlier
chapters have briefly touched on the thought that the Universe might
not be the only object in the “Greater Universe”. What if our
particular Universe is just one of many? What if the structure of the
Greater Universe is not unlike the Universe itself – an assemblage
of complex objects (galaxies in the case of the Universe, Universes
in the case of the Greater Universe)?
If
the Greater Universe is an assemblage of complex objects, does it not
become difficult to believe that they do not have a gravitational
relationship with each other? Wouldn’t they be exchanging material
in the same way that galaxies do? And wouldn’t there be a range of
different types of Universes, just as there is a range of different
types of galaxies?
There
is no current way to answer those questions but the logic underlying
them is strong. Just because we cannot see outside our Universe,
doesn’t mean that there is no outside. And if there is an outside,
isn’t it more likely than not that it is working to the same rules
as is the inside?
As
this analysis proceeds we will, from time to time, come back to this
subject. As we find out more about the workings of our Universe,
harder conclusions will suggest themselves about whether there are
any other Universes in the Greater Universe. The conclusions won’t
be definitive, they won’t be proof, but they will be interlinked
with what we know about our own Universe. By knowing what is “in
here” we will have a greater understanding about what is “out
there” – and by understanding what is “out there”, we will
better understand what is “in here”.
THE
REALITY CHECK
In
this chapter, as in the last one, there is very little to perform a
reality check against. In the Current Paradigm, the Planck Epoch is a
fact-free zone. There are ideas of course, dreams and fantasies, as
to what the Universe might have been like in its earliest moments but
there are no facts at all. The following quotation is as good a
summary of the situation as any, given that there is not really very
much to summarise:
-
The
Planck epoch is the earliest period of time in the history of the
universe, from zero to approximately 10⁻⁴³
seconds, during which the quantum effects of gravity were
significant. At this period approximately 13.7 billion years ago the
force of gravity was as strong as the other fundamental forces,
which hints on the possibility that all the forces were unified.
Unbelievably hot and dense, the state of the universe during the
Planck epoch was unstable or transitory, tending to evolve and
giving rise to the familiar manifestations of the fundamental forces
through a process known as symmetry breaking.
Cosmologists
everywhere are waiting for someone to produce a quantum theory of
gravity. The feeling is that the appearance of an acceptable theory
will answer a lot of questions and put what is currently guessed at
onto a firmer footing. However, such a theory has been a long time
coming and doesn’t appear likely to be with us in the near future,
if at all. The Planck Epoch, therefore, is a hole. An empty hole. If
the implications of the Big Bang Standard Model are pushed backwards,
through the Planck Epoch to their apparent end, the Universe will
become a singularity: infinitely dense, infinitely massive, and with
time going infinitely slowly. The glass floor in extremis.
If
the Planck Epoch in the Current Paradigm is an empty hole, in the New
Cosmology it is nothing of the sort. In the New Cosmology, things are
most certainly happening. Sensible and practical things. The ball of
teels that is the Universe is expanding at a tremendous lick. As it
does so, processes are grinding into action, and new measurements are
becoming possible. Most of these processes/measurements have already
been identified or quantified by others and so are not being seen for
the first time here.
It
is worth repeating, though, that the picture of the Universe as a
rapidly expanding ball of teels is not an accurate one. It is not
what happened. It is a convenience, a simplification of the beginning
of our Universe. However, as a means of illustrating the processes
and mechanisms that drive the Universe, of showing how they came into
being and how they operate at the most fundamental of levels, it is
entirely accurate.
Immediately
before Moment Zero, a ball of teels, with each having only the
properties of gravity and rejectivity, cannot help but have measures.
It will have a diameter and a volume. It will have a mass and a
density. It will have an escape velocity. It will have a totalspeed
and a spinspeed. If then, at Moment Zero, each of the teels is given
an amount of speed, those measures will progressively and inevitably
change. As they do so, processes and mechanisms will slide into
action. Teels will begin to collide and they will react according to
the well-established laws of collision mechanics. These collisions
will produce an increasing level of chaos within the ball of teels
and this, along with the mutual gravity of all the teels, will slow
the expansion down.
While
the Current Paradigm description of the Planck Epoch is fact-free and
fanciful, that of the New Cosmology is most certainly not. Given
these starting parameters, what is described in the New Cosmology
will happen in exactly that way. And in contrast to the Current
Paradigm description, the physics underlying the New Cosmology
description are well-understood and have been proved right here on
Earth, many times and in many ways.
CHAPTER FOUR – THE INFLATIONARY EPOCH
This
chapter deals with the Inflationary Epoch which, in the Current
Paradigm, ran from 10⁻³⁷ to 10⁻³³ of a second after the Big Bang.
During that extremely brief moment, the Universe suddenly expanded at
a rate that was many times that of the speed of light.
So
far as the progress of the New Cosmology is concerned, this chapter
is irrelevant. If you were to jump straight to the next chapter, if
you were never to see a word of what is written here, your knowledge
of the real Universe wouldn’t be missing a thing. What you would be
missing, however, is some knowledge of the way that cosmological
progress has been pushed forward in the recent past.
The
Inflationary Epoch is a solution to a problem that is unique to the
Current Paradigm. In the New Cosmology, there is no Inflationary
Epoch because it doesn’t need one. The Current Paradigm, on the
other hand, is in deep trouble without one.
FACTS
There
are no facts in this chapter. There is just a problem, which may or
may not be real, and a solution, which may or may not be correct. The
problem is the “Horizon Problem” and the favoured solution is the
“Inflation Theory”.
THE
HORIZON PROBLEM
To
understand the Horizon Problem, you have to keep in mind that the Big
Bang Standard Model is backboned by the following factors:
-
during
the growth of the Universe since the Big Bang, the velocity of matter
and energy is assumed to have always been limited by the cosmological
speed limit, the speed of light, 300,000 kilometres per second:
-
the
Universe has been expanding due to the movement of matter and energy
away from the site of the Big Bang but, at the same time, the space
in between the matter has also been expanding. The rate of that
expansion is defined by the “Hubble Constant” although the exact
value of that constant is unknown:
-
the
photons that we currently detect as the Cosmic Background Radiation
date back to the Recombination Epoch, to the time when the density
of the Universe had fallen sufficiently for photons to exist without
the certainty of being absorbed by matter particles. The
Recombination Epoch was 300,000 years after the Big Bang:
-
the
diameter of the Universe visible to us today is assumed to be
smaller than that of the whole Universe.
Estimates made in 2004 put the diameter of the visible Universe at
156 billion lightyears:
-
the
most favoured current estimate of the age of the Universe is based
on data from the Wilkinson Microwave Anisotropy Probe of 2002 and is
13.7 billion years, give or take 200 million years.
In
an ideal world, all these factors should mesh together into a
seamless whole. Unfortunately, they don’t. They don’t because the
Horizon Problem gets in the way. The Horizon Problem is described in
the following quotation:
-
At
each instant in the history of the Universe, there is a
characteristic ‘radius’ of the Universe which is set by the
distance that light could have travelled since the birth of the
Universe (recall that light travels at 300,000 kilometres per second
for all observers). Thus, if the Universe were only one second old,
then an observer cannot see things which are more than 300,000
kilometres away; there has simply not been sufficient time for this
light to propagate that far. Since no observer can see beyond this
distance, the surface at this distance is called the ‘horizon’
for the observer. As the Universe ages, the horizon expands outwards
because there is more time for light to travel. An important side
effect is that if we cannot see beyond the horizon, then neither can
we be affected by any physical effect from beyond the horizon.
Regions of space in the Universe which are separated in distance by
more than the horizon simply do not know about each other and cannot
influence each other’s physical conditions. If we calculate the
size of the horizon in the sky for the Universe at the epoch of
decoupling,
it turns out to be approximately one degree (about twice the angular
diameter of the Moon). The fact that the spectrum and intensity of
the CBR are essentially the same for patches much larger than this
size is very hard to explain since our scenario does not allow these
patches to communicate with each other and conspire to determine
their physical characteristics.
from
an article entitled “Horizon Problem” on
the US Government website, science.gov.
Unfortunately,
descriptions of the Horizon Problem vary widely in quality and
finding one that can balance accuracy with understandability is about
as easy as finding a dumb crow in a coal mine after your candle has
blown out. This particular one is not wrong but nor is it complete.
To properly understand the Horizon Problem, you need to know that it
is rooted in a philosophical concept known as the “Causality
Principle”. Effectively, no Causality Principle, no Horizon
Problem.
Causality
is the idea that for every effect there is a cause. This is not a
difficult idea to grasp, more a matter of common sense really, and an
idea against which there are no sensible exceptions. However, to be
truly exact, the roots of the Horizon Problem are not so much in the
Causality Principle itself as in an extension to it. The extension is
the idea that the same effect, observed in two different places, will
have the same cause. This is a much less well-founded concept than
the Causality Principle itself and it is possible to think of
exceptions. Nevertheless, so far as most cosmologists are concerned,
the extension is taken as wholeheartedly valid.
In
the Horizon Problem, causality centres on the way that the CBR
photons come at us from all directions at a temperature that is the
same to within 0.01%. This is an extraordinarily tiny temperature
variation when we consider that these photons have all been
travelling, by different routes and through wildly different
conditions, for over 13 billion years. It is felt that this
temperature similarity has to be down to more than mere chance: that
it is a widely spread “effect” for which there must be a single
“cause”. The presumed “cause” is that the CBR photons were
once so close to each other that their temperatures were able to
equalise.
And
there's the rub. That particular “cause and effect” cannot be
reconciled with the factors with which we began this section. If the
cause and effect is correct, then some and perhaps all of the factors
must be wrong. If the factors are correct, the cause and effect is
wrong. Here is a very basic definition of the Horizon Problem:
HORIZON
PROBLEM: The
CBR photons are extremely similar, no matter from which direction
they come. This suggests that they were once so close together that
they could equalise. The earliest measurable diameter for the
Universe is one Planck Length. Taking account of lightspeed, and
assuming an age for the Universe of 13.7 billion years, this gives
the maximum current diameter of the Universe as 27.4 billion
lightyears. However, actual measurements seem to show that the
diameter of the visible Universe alone is 156 billion lightyears.
How, then, could the CBR photons have once been so close together
that they could equalise?
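The arithmetic in that definition is simple enough to check. The
sketch below assumes only the figures already quoted: contents limited
to lightspeed, an age of 13.7 billion years, and a measured visible
diameter of 156 billion lightyears.

# A Universe that starts from a point and whose contents never exceed
# lightspeed can have grown to, at most, twice its age in lightyears
# (light racing away in opposite directions).

AGE_YEARS = 13.7e9
max_diameter_ly = 2.0 * AGE_YEARS   # lightspeed-limited diameter
observed_ly = 156e9                 # 2004 estimate, visible Universe

print(f"lightspeed-limited diameter: {max_diameter_ly / 1e9:.1f} billion ly")
print(f"estimated visible diameter : {observed_ly / 1e9:.1f} billion ly")
print(f"mismatch factor            : {observed_ly / max_diameter_ly:.1f}x")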
In
the normal run of things, solving the Horizon Problem would require
the acceptance that something was wrong with the current perception.
In the event, however, no such acceptance was necessary because a
bright young man came up with an idea which enabled both the CBR
cause/effect and the factors to co-exist.
INFLATION
The
name of the bright young man was Alan Guth and his bright young idea
was that for a very brief period, somewhere around 10⁻³⁶ of a second
after the Big Bang, the Universe expanded exponentially rather than
linearly. This exponential expansion pushed the Universe, in that tiny
fraction of a second, from being smaller than a proton to maybe
the size of a grapefruit. In one bound, Alan Guth wiped away the
Horizon Problem and earned the deeply felt gratitude of cosmologists
everywhere.
Guth
first made the idea known in 1979 and published a paper on it in
1981. He called the exponential expansion “inflation” with the
period in which it happened soon becoming known as the “inflationary
epoch”. At first, the reasoning underlying the inflation theory was
quite crude – that for a minute fraction of a second, gravity
became a repulsive force rather than an attractive one – but the
crudity didn’t matter too much because the reasoning was only there
to provide some authority for the exponential expansion.
The
reaction of the cosmological community was remarkable.
Notwithstanding that there wasn’t then, and isn’t now, a scrap of
proof that the Inflationary Epoch actually took place, the Inflation
Theory was rapidly taken aboard. Within five years of the idea being
first floated, it was well on its way to being an integral part of
the Current Paradigm.
Getting
a new scientific idea accepted is usually an extremely long process.
Max Planck’s famous dictum describes the situation succinctly (and
probably conservatively): “A new scientific truth does not triumph
by convincing its opponents and making them see the light, but rather
because its opponents eventually die and a new generation grows up
that is familiar with it.” Many of today’s paradigms, even though
they seem obviously true to us, have taken hundreds of years to be
generally accepted. How, then, did Guth’s unproved, and probably
unprovable, idea move so quickly into the mainstream?
Exactly
why, isn’t easy to identify. It is said that if you can remember
the 1960s, you weren’t there. Much the same situation probably
applies to the arrival of Guth’s new idea. If you weren’t
involved in cosmology when the idea first became common knowledge,
you’ll probably never know for certain why it was taken up so
quickly – and those who were there probably never realised how
unusual the situation was through being, as the saying goes, too
close to the trees to see the wood.
Probably,
a combination of factors came together in the unusually heady
atmosphere of the time. One may have been “war-weariness”. The
cosmological community had just emerged from a thirty-year war
between the Steady Staters and the Big Bangers. Now, suddenly, the
Horizon Problem was raising serious doubts about the very Big Bang
Theory that the war had been fought over. Guth’s bright idea
allowed a quick resolution.
Then
there was the bogey that has haunted science for as long as anyone
has been keeping records – hero worship. By the 1970s, the two men
whose ideas provided the foundation for the Big Bang edifice, Albert
Einstein and Edwin Hubble, had been raised to the Pantheon. They were
now the twin godhead around which the cosmological establishment
revolved. The easiest and most obvious way to solve the Horizon
Problem was to challenge some of their ideas but this was not
acceptable. Even today, thirty years later, it is not acceptable. Any
cosmologist expressing doubts about Einstein’s and Hubble’s ideas
will have a seriously damaged career. Guth’s bright idea managed to
leave the ideas of Einstein and Hubble untroubled.
Then
there was the mindset that had come to regard topdown thinking as a
satisfactory way to conduct cosmological business. The Big Bang
Standard Model is a very topdown model. Getting to the root of the
Horizon Problem would have necessitated a very bottomup approach
which, by the 1970s, had come to be seen as neither productive nor
attractive.
For
good or ill, Guth’s bright idea rapidly became part of the
cosmological paradigm and is today deeply entrenched in it. There are
token acknowledgments that it is still unproved and it is accepted
that parts of it don’t work out as they should but, for all
practical purposes, it is taught in our schools and universities as
fact. There is quibbling over the exact processes and mechanisms
involved, and over the timings and the distances, but students are
taught that the idea is what actually happened.
Over
the intervening years, the reasoning underlying Inflation Theory has
become a lot more sophisticated but a lot less focused. Because there
is no proof of any of it, imaginations have been able to run free and
produce a number of different explanations for that sudden burst of
growth. The leading explanation is that it was the result of a “phase
transition” not dissimilar to the phase transitions that occur when
a solid changes to a liquid, or a liquid changes to a gas. In this
case, the phase transition took place when the strong nuclear force
separated out from the previously combined four forces.
EXPANSION
The
expansion of the Universe, as envisaged in Guth’s Inflation Theory,
was not an expansion of the Universe’s matter but an expansion of
the space in between the matter. It was as though two people were
casually walking away from each other at a comfortable walking pace
but found themselves moving apart at the rate of a TGV.
The
idea that space, that nothing, that emptiness, that an absence of
everything, can get bigger or smaller is counterintuitive. Guth,
however, was not the first to come up with it. Indeed, by the time he
got round to it, the idea had been well-established as a scientific
likelihood for many decades. The idea first became prominent,
somewhat obliquely, in General Relativity in 1915, when Albert
Einstein accounted for gravity by suggesting that “space” would
curve in the presence of matter. He also suggested that, given the
right circumstances, lengths could stretch and shrink, that masses
could increase and decrease, and that time could go faster or slower
– so he was quite thoroughly putting the boot into pretty much
everything held to be “normal” at the time. He held back from
suggesting that space might expand, though, because he believed the
Universe was eternal and infinite. To ensure that this was so, he
invented the Cosmological Constant.
Now
Edwin Hubble stepped into the frame. During the 1920s, he and his
colleagues had been working out the distances to galaxies beyond the
Milky Way, something that had been beyond everyone up till then. A
side effect of these workings was that they were able to plot how
fast these other galaxies might be moving towards or away from us.
What they found was that almost all galaxies seemed to be moving away
from us, with only a proportion of nearby galaxies coming our way.
What was more, there seemed to be a relationship between distance and
velocity; the farther away a galaxy was from us, the faster it would
be going. This research resulted in Hubble, in 1929, publishing
“Hubble’s Law” which stated that:
any
two points, which
are moving away from their point of origin in straight lines and with
a speed proportional to their distance from the point of origin, will
move away from each other with a speed proportional to their distance
apart.
It
is one thing to write a law. It is another to impose it on real life.
Why should all the galaxies be moving away from us and from each
other? The logical explanation was that the Milky Way was at the
centre of the Universe but that offended “The Law of Universal
Modesty”, the centrepiece of the Eternal and Infinite Universe
Paradigm which argued that the Milky Way, the place where mankind
lives, could not be a "special" place. It had to be an
ordinary place, just like lots of other places, and it most
definitely couldn’t be the centre of the Universe, the most
"special" place of all.
The
solution was to suppose that the Universe itself was expanding at a
uniform rate. This would mean that, not only would we see every other
galaxy moving away from us but that any observers on those other
galaxies would see exactly the same thing. This is a very
mathematical solution, and not all that easy for a human imagination
to get a hold of, but it does work out. It also results in another
law:
In
a uniformly expanding Universe, every
observer seems to be at the centre of the expansion.
Of
course it did leave us with those nearby galaxies which seemed to be
moving towards us but the explanation for this was fairly
straightforward. The expansion of the Universe could be countered by
gravity if it was strong enough. Thus, gravitational hotspots like
galaxies, stars, and planets, did not expand. Nor did gravitationally
bound clusters of galaxies. The Milky Way is part of a cluster called
the Local Group. Whether our neighbours in the cluster were moving
away from us or towards us was due to our gravitational influence
upon each other.
All
this slotted in handily with Einstein’s General Relativity view of
things; with its ideas that space would curve in the presence of
matter, that masses would increase with velocity, and that time could
speed or slow down if the circumstances were right. Especially, it
fitted very nicely with Georges Lemaître’s suggestion that the
Universe grew out of a primordial egg. All the foundation pieces for
the Big Bang Standard Model were now in place.
The
key part of Hubble’s solution to all those galaxies moving away
from us was that the expansion of the Universe should be “uniform”.
This allowed the rate of that expansion to be defined by a single
number, what is known as the “Hubble Constant”. However, calculating
what the Hubble Constant might be is not easy and no one has yet come
up with a definitive number. What is more, there is evidence that
suggests that the Constant has not always been the same and that the
expansion-rate is currently increasing. The current best estimates
are that the Constant is between 70 and 75 kilometres per second per
megaparsec.
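In code, Hubble’s Law is a one-liner: recession velocity equals the
Hubble Constant multiplied by distance. The sketch below uses a
mid-range value from the estimates just quoted; the distances are
arbitrary illustrations.

H0 = 72.0   # km/s per megaparsec, mid-range of the 70-75 quoted above

for d_mpc in (1, 10, 100, 1000, 5000):
    v = H0 * d_mpc
    print(f"{d_mpc:5d} Mpc -> {v:9,.0f} km/s")

# Beyond roughly 4,200 Mpc the calculated recession exceeds 300,000
# km/s. In the Current Paradigm this is permitted because it is space
# expanding, not matter moving through space.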
What
all this has produced is a Universe that is expanding in two
different “planes” at the same time. In the one plane, it is
expanding as its density decreases; as its photons of energy and
particles of matter move outwards from the original Big Bang site.
The rate of this expansion is conditioned by the cosmological speed
limit in that matter and energy cannot move faster than the speed of
light. In the other plane, the space in between the photons and
particles is also expanding at the rate of the Hubble Constant.
The
Hubble expansion is at a linear rate. Linear is the “natural”
rate at which things accelerate, decelerate, expand, shrink, and so
on. Think of a digital clock which ticks out the minutes at sixty to
the hour. The number sequence is linear, with each new number
appearing after exactly the same time interval – 1,2,3,4,5,6,7, and
so on. Now consider how useful that clock would be if it showed the
passage of time according to an exponential sequence –
1,2,4,8,16,32, and so on. For our everyday purposes, this number
sequence is “unnatural” and would make this particular digital
clock about as useful as a rubber hypodermic.
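The two clocks can be put side by side in a couple of lines of code,
purely by way of illustration:

ticks = list(range(1, 11))
linear = ticks                                   # 1, 2, 3, ... 10
exponential = [2 ** (t - 1) for t in ticks]      # 1, 2, 4, ... 512

print("linear     :", linear)
print("exponential:", exponential)

# Ten linear ticks reach 10; ten exponential ticks reach 512.
# Inflation inserts a brief burst of the second kind into an
# otherwise linear story.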
Alan
Guth fixed the Horizon Problem by inserting into the BBSM timeline a
brief period of exponential expansion – a brief period of
“unnatural” expansion.
SYMBIOSIS
Even
though Inflation Theory remains unproved to this day, and contains
rather too many unsolved problems for comfort, there is little
serious work being done to find an alternative. Of course, there are
always people on the fringe who will automatically disagree with what
everyone else believes in. Ten minutes on the internet and you’ll
find droves of them, all pushing their own pet theory. However, in
mainstream research establishments, the primary focus is on refining
and expanding Inflation Theory, not on replacing it.
Not
that this is really surprising. Any well-funded attempt to replace it
would be a well-funded attack on the status quo and on the
cosmological establishment. The great joy of Inflation Theory has
always been that it didn’t disturb the status quo, that it didn’t
undermine any idols, that it left untouched that which everyone
already “knew”.
The
relationship between Inflation Theory and the Horizon Problem is a
peculiar one. Symbiotic, perhaps even incestuous. Given that the
former is entirely theoretical and that the latter is based on a
less than one hundred percent sound philosophical concept, each
depends on the other for survival in much the same way that the algal
and the fungal parts of a lichen depend on each other. Put simply,
without the Horizon Problem, there would be no need for Inflation
Theory. Conversely, the elimination of Inflation Theory would cause
such pain that the Horizon Problem is necessary to justify its
continued existence. One needs the other – and the other needs the
one. Hmmm.
THE
REALITY CHECK
In
the Current Paradigm, the age of the Universe is estimated by
combining astronomical observations with cosmological models. The
interpretations of the astronomical observations contain presumptions
and the cosmological models are theory. The current best guess, as at
2002, is that it is somewhere around 13.7 billion years old. This
could well be right; but it would only need one of the presumptions
to be wrong or one of the theories to be off-beam, for that 13.7
billion to be out – and perhaps a very long way out. And should the
Universe actually be sufficiently older than we currently suppose,
there would be no need for Inflation Theory anyway.
In
the New Cosmology, the age is unknown. Until better measurements are
available, 13.7 billion years old will do.
In
the Current Paradigm, the size of the Universe is again estimated by
combining observations with cosmological models. And again the
interpretations of the astronomical observations contain presumptions
and the cosmological models are theory. The estimates are further
complicated by the assumption that there is a part of the Universe
that is invisible to us and about which we can have no knowledge. The
current best guess for the visible Universe is 156 billion lightyears
across. Should the visible part of the Universe actually be
sufficiently smaller than we currently suppose it to be, there would
be no need for Inflation Theory.
In
the New Cosmology, the diameter of the visible universe is unknown.
Until better measurements are available, 156 billion will do.
In
the Current Paradigm, the speed of light in open space in the near
vicinity of the Earth is well-established at 300,000 kilometres per
second. It is assumed that this value of lightspeed applies
everywhere and always has done. It is also assumed that this value of
lightspeed constitutes a cosmological speed limit that cannot be
exceeded. Should anything have once been able to travel sufficiently
faster than 300,000 kilometres per second, there would be no need for
Inflation Theory.
In
the New Cosmology, the speed of light in open space is a constant,
always has been and always will be. It is not a speed limit, however.
Teels can and do exceed it.
In
the Current Paradigm, the size of the Universe first becomes
measurable at 10⁻⁴³ seconds, when it has a diameter equating to a
Planck Length. Should the Universe at 10⁻⁴³ seconds have been
sufficiently larger than a Planck Length, there would be no need for
Inflation Theory.
In
the New Cosmology, the diameter of the Universe at Moment Zero was a
billion lightyears. By 10⁻⁴³ seconds, given that the Universe’s teels
were moving outwards at many times lightspeed, it would have been a
lot larger.
In
the Current Paradigm, “space” can expand and contract. This idea
is counterintuitive. It is also unproved. There has not, to date,
been any observation or experiment that has proved conclusively that
“space” can expand, either linearly or exponentially. Discount
the ability of space to expand and Inflation Theory no longer works.
In
the New Cosmology, space is “nothing”. It cannot expand or
contract because there is nothing to expand or contract.
In
the Current Paradigm, Inflation Theory is underpinned by the Horizon
Problem. In turn, the Horizon Problem is underpinned by an extension
to the Causality Principle. In associating a known effect with a
specific cause, it is always a good idea to make sure that the right
specific cause has been identified. If the similarity of the CBR
photons is not due to their having once been in close proximity,
there is no Horizon Problem and no need for Inflation Theory.
In
the New Cosmology, the CBR photons come to us as they do for
mechanically sound reasons which will be explained in coming
chapters. Meanwhile, across the world, there are many thousands of
Olympic-sized swimming pools. In each of these, the amount of water
is much the same. This is because an Olympic committee insists upon
it. It is not because all the pools were once close together and thus
able to equalise their water levels.
CHAPTER FIVE – THE DARK MATERIALS
This
chapter deals with two mysterious entities, darkenergy and
darkmatter. They are mysterious because, in the Current Paradigm,
nobody knows what they are. They have never been seen, felt or heard.
That they exist at all is guessed at because some objects in the far
distance behave in ways that can only be explained if vast quantities
of these dark materials are there too.
If
the dark materials do exist, their quantity is staggering. They are
currently believed to make up 95% of the mass of the Universe and,
given that another 4% is accounted for by free hydrogen and helium,
that leaves just 1% to make up everything else: ourselves, the
planets, the stars and the galaxies. To say the least, such figures
should be forcing a revaluation of what is important in the
Universe. Thinking that the 1% is the most important sounds like a
case of the tail wagging the dog.
The
Current Paradigm view of the structure of the Universe has been
deduced in a steadfastly topdown manner. Consequently, we are having
difficulty in revising that view to incorporate the dark materials.
Viewed bottomup, however, the nature of the dark materials is clear
and the way that they behave is obvious and inevitable.
FACTS
Darkmatter
is believed to make up approximately 30% of the mass of the Universe.
It is currently invisible to human observers because it neither emits
nor reflects photons. Its presence is inferred by gravitational
anomalies in the motion and distribution of galaxies.
The
supposition that darkmatter exists stems from our interpretations of
phenomena. These interpretations do not constitute proof that there
is darkmatter out there. Nevertheless, a substantial body of
circumstantial evidence only seems explainable by assuming its
existence.
While
the nature of darkmatter is unknown, there are many suggestions as to
what it might be. It has been put down to non-luminous gas, dust,
planets, brown dwarfs, white dwarfs, burnt-out stars, black holes,
and the like. Others hold that it is made out of elementary particles
like neutrinos or of theoretical particles like axions or WIMPs.
Darkenergy
is believed to make up approximately 65% of the mass of the Universe.
Like darkmatter, the presence of darkenergy is inferred by
gravitational anomalies, in this case the Universe’s
expansion-rate. Until the 1990s it was assumed that the expansion of
the Universe was slowing due to the mutual gravitational attraction
of the matter it contained. Then evidence was found suggesting that
for the past five billion years, the expansion-rate has actually been
accelerating. To account for this, it was hypothesised that the
Universe was infused with some kind of negative energy –
darkenergy.
As
with darkmatter, the nature of darkenergy is unknown. However, it is
felt likely to be less conventional in origin than darkmatter. There
are two current frontrunners in the explanation stakes. The first is
that Einstein was wrong to remove the cosmological constant from
General Relativity: that the cosmological constant actually
represents a real property of space, a negative pressure that can
counteract and in some cases overwhelm gravity. The other frontrunner
is “quintessence” which is likewise a negative pressure but
differs from the cosmological constant in that it can vary in space
and time.
THE
STRUCTURE OF THE UNIVERSE
If
the current ideas are correct, darkmatter and darkenergy make up 95%
of the Universe, leaving just 5% to account for everything else.
Accepting this requires a retake on our traditional ideas about the
structure of the Universe. The traditional ideas have developed
Topsy-like over many centuries. Progress has always been resolutely
top-down with new ideas and new discoveries being incorporated into
the already existing picture with as little disruption as possible.
In all that time, there have only been two paradigm-shifts which
seriously changed the way the structure was seen to be.
The
first was in the eighteenth century. Until then, ideas about the
structure of the Universe had been a succession of variations on the
“bowl of night” theme. Men, looking up into the night sky, saw
the bright dots sprinkled randomly across the blackness and attempted
to impose some sort of order on them. Given that no one star seemed
nearer or farther away than any other, guesses about the structure of
the Universe tended to feature them as part of an enveloping dome.
Then,
in 1750, Thomas Wright suggested that the Earth was embedded in a
disc of stars: a galaxy. Five years later, Immanuel Kant suggested
that the Universe, rather than being a single galaxy, might actually
consist of large numbers of galaxies. These radical suggestions were
not loved and took a long time to catch on. Even into the twentieth
century, Kant’s ideas were still being disputed and it was not
until the 1920s that Edwin Hubble settled the matter by using a huge
new telescope to resolve individual stars in distant galaxies.
The
picture of the Universe that is taught today is still, essentially,
the one put together by Wright and Kant. New discoveries have hugely
expanded the scale of the picture, both upwards and downwards, but it
is still theirs. Upwards in scale, what we believe today is that the
Universe is composed of galaxies which clump together to form
galactic clusters, which in turn, clump together to form
superclusters. Going downwards, we believe that the Universe is
composed of galaxies which are made out of stars, which are made out
of atoms, which are made out of fundamental particles.
Notwithstanding
the pre-eminence of the Wright/Kant picture, there has been another
paradigm shift which has overarched their Universe to provide an even
more fundamental structure: a structure that encompasses absolutely
everything. The roots of this paradigm-shift are in Einstein’s
General Relativity although it didn’t reach its present form until
the second half of the twentieth century. In it, the Universe is the
ultimate in closed-cycle mechanisms. It is all there is because
nothing exists outside it. This is a Universe that can expand or
contract but which, at the same time, has no edge and no centre.
This
Einsteinian universe is probably beyond human imagining. Something
that has no edge and no centre, that has no outside and therefore no
inside, is a long way beyond anyone’s experience and very difficult
to visualise. Nevertheless, among cosmologists it is generally
accepted as a description of the way things really are – or, at the
very least, as the best description we have until something better
comes along.
It
can be sensibly argued that we are now at the beginning of a third
paradigm-shift. The existence of the dark materials, with their
implication that there is another 95% of the Universe that we cannot
see, feel, or hear, has to alter the picture somewhat. The dark
materials don’t interfere too much with the Current Paradigm
picture at the Wright/Kant level but there is at least one major
problem at the Einsteinian level.
DARKMATTER
The
first hint of the existence of darkmatter came in 1933 when the
Swiss astrophysicist Fritz Zwicky was studying a group of galaxies
known as the Coma Cluster. He calculated the mass of the cluster by
two different methods and compared the results. The first calculation
was done by measuring the velocities and vectors of the galaxies on
the edge of the cluster and then working out how much mass was needed
to hold those galaxies within the cluster. The second calculation was
done by counting how many galaxies there were in the cluster and
using the brightness of each one to work out how much mass they
represented.
The
two calculations didn’t match up. The first produced four hundred
times more mass than did the second, a difference so huge that it
couldn’t be explained away as a mathematical glitch. The first
calculation, especially, was thought to be sound because without that
much mass the Coma Cluster would just dissipate, with its galaxies
drifting away into space. The inevitable conclusion was that there
was more to the Coma Cluster than that which we could see.
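As an aid to seeing how large the mismatch was, here is a minimal
sketch of the two calculations in Python. It is illustrative only:
the dynamical estimate uses the standard virial-style form (mass ~
velocity² × radius / G), and the input figures are rough stand-ins,
not Zwicky’s actual numbers.

# A schematic of Zwicky's two mass estimates. Illustrative only:
# the inputs below are rough stand-ins, not Zwicky's figures.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.086e22       # one megaparsec in metres

def dynamical_mass(velocity_dispersion, radius):
    """Mass needed to hold galaxies moving this fast inside 'radius'
    (the standard virial-style estimate, M ~ v^2 * R / G)."""
    return velocity_dispersion ** 2 * radius / G

def luminous_mass(n_galaxies, mass_per_galaxy):
    """Mass inferred by counting the galaxies and converting their
    brightness into mass."""
    return n_galaxies * mass_per_galaxy

dyn = dynamical_mass(1.0e6, 3 * MPC)    # ~1000 km/s within ~3 Mpc
lum = luminous_mass(1000, 2.0e41)       # ~10^11 solar masses apiece
print(dyn / lum)                        # the dynamical mass is far larger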
It
took time for Zwicky’s findings to move into the mainstream. For a
while, they were regarded as something specific to the Coma Cluster
and thus of only parochial interest. Over the years, though, as more
and more evidence was gathered in, it came to be seen that the
outer galaxies in all clusters were moving uniformly, one to
another, and too fast to be held together by
their cluster’s apparent mass. And worse, the same seemed to apply
within the galaxies themselves. The velocities of their outer stars
seemed to be higher than they should be. Again, there was more to
each galaxy than that which we could see.
It
is now calculated that 90% of the mass of every galaxy, and of every
galactic cluster, is invisible to us. However, we are only able to
detect the presence of that mass by observing its gravitational
effects on the stars within the galaxies and the gravitational
interplay between galaxies that are sufficiently close to each other.
For
a while, this invisible material was known as “missing mass” but
it is now more commonly called “darkmatter”. The effect of
darkmatter on visible matter structures like galaxies and galactic
clusters has been well-established over the years and, since it
appears to operate according to laws of physics that we have known
and trusted for a long time, it is reasonably well-understood. The
nature of darkmatter, however, is still unknown.
Today’s
mainstream thinking is that darkmatter comes in two forms of
undetectable particles. Einstein’s Special Relativity requires
nearly massless particles to move at nearly the speed of light,
which is fine for powering clouds of hot gas but doesn’t work so
well if you want the cooler gas clouds that are necessary for the
formation of stars. Consequently, it is hypothesised that darkmatter
can come as “hot” particles (very fast and almost massless) or
“cold” particles (slower moving and weighty). The current estimates
are that darkmatter provides somewhere around 30% of the mass of the
Universe.
However, because that calculation is so crudely made, the actual
figure may be lower or higher. Nevertheless, given the present state
of our knowledge, 30% is as good a figure as any other.
DARKENERGY
The
“darkenergy” concept first appeared in 1998 when some astronomers
did a survey of Type 1A supernovae in distant galaxies. What they
found was that these supernovae were dimmer than they should have
been, leading them to conclude that the galaxies containing them were
farther away from us than had been previously supposed. This led on
to the further conclusion that the Universe’s expansion-rate,
then commonly supposed to be slowing down, was actually accelerating.
The
simplest explanation for darkenergy is that it is something we
already know about but in a new form – to see it as darkmatter
operating on a universal scale. Just as darkmatter in a galaxy
provides the gravity that allows the outer stars to move faster
without being able to escape, darkmatter in the outer reaches of the
Universe would move galaxies outward at a greater velocity than
otherwise.
That
simple explanation has the great advantage of obeying laws of physics
that have been known and accepted for centuries but, unfortunately,
it runs counter to the Einsteinian picture of the Universe. It
presents the Universe as a kind of “supergalaxy” but the
Einsteinian universe is nothing like that. Because the Einsteinian
universe doesn’t have a centre and an edge, and is expanding
because space itself is expanding, there are no outer reaches where
darkmatter can lurk and pull the galaxies outwards like a siren
attracting ships.
If
the Universe cannot have darkmatter in its outer reaches PULLING the
galaxies outwards, the only other option is to have a dark energy
within the Universe that is PUSHING the galaxies apart. This unknown
energy has to have the remarkable property of somehow counteracting
the gravitational attraction that galaxies have for each other. It
becomes, effectively, an “antigravity” energy.
While
the nature of gravity is poorly understood, the effects it has on
matter are well-known. Antigravity, on the other hand, is neither
understood nor known. The mechanisms by which it might work are at
best theoretical and mostly little more than cosmological doodling.
You might suppose, then, that the concept of darkenergy as an
antigravity force would have some trouble forcing itself into the
mainstream – but you’d be wrong about that.
In
the last chapter, there were some wry comments on how quickly
Inflation Theory was absorbed into the Current Paradigm. That
rapidity, however, was tortoise-like when compared to the speed at
which the idea of darkenergy – the antigravity force – was taken
on board. It barely seemed to merit any discussion at all. Out came
the paper detailing what the astronomers had found when surveying
Type 1A supernovae, then out came the “obvious” conclusion, and
that was that.
Part
of the explanation has to be that technology, especially computer
technology, is becoming more capable at an ever accelerating rate.
New and better telescopes, all of them now computer aided, are coming
into use all the time. Ever more powerful computers mean that
calculations that would have taken a year to complete a decade ago
can now be made in minutes. Similarly, computers mean that
communication between researchers worldwide can be as instant as
anyone wants it to be. Put simply, new ideas don’t
take so long to be dreamed up and thereafter don’t take so long to
get around.
However,
a far more important part of the explanation is that the ground had
already been prepared. The idea of an antigravity force just wasn’t
new. Until the early years of the twentieth century, the cosmological
paradigm had been that the Universe was eternal and infinite and that
was what Albert Einstein believed when he was preparing General
Relativity. When he found that General Relativity would not naturally
produce a stable and static Universe, he inserted the Cosmological
Constant into the theory to make sure that it did. The Cosmological
Constant, of course, is an antigravity force.
By
1933, the Big Bang idea was beginning to take hold and Einstein was
declaring the Cosmological Constant to have been a heresy. However,
the damage was done. Einstein may have declared the Cosmological
Constant to be dead but it wouldn’t lie down. It lived on, not as a
support for the eternal and infinite universe concept but as a
vehicle for the idea that the vacuum of space could be an energy in
its own right, a negative energy to oppose gravity’s positive one.
This felt especially comfortable as the century wore on and it became
accepted that most particles had an antiparticle. Now we had gravity
and antigravity.
Then,
in 1979, along came Inflation Theory. This posited that the
exponential expansion of the Universe was due to it being suddenly
suffused with a “negative pressure vacuum energy density”. The
negative pressure vacuum energy density was not exactly the same as
Einstein’s antigravity, not least because it only lasted for an
extremely tiny fraction of a second, but it was certainly cut from
the same cloth. Consequently, the acceptance of Inflation Theory into
the mainstream had the effect of bringing the Cosmological Constant,
and the idea that space could be a form of energy in its own right,
out into the open once more.
Thus
it was that when the concept of darkenergy was first mooted as an
explanation for the apparent acceleration in the Universe’s
expansion rate, it was not a new idea at all and it certainly wasn’t
radical. It was just an extension of ideas that were already part of
the mainstream. It may have been a bizarre idea, it may have been
counterintuitive, it may have been beyond any real human
understanding, but the cosmological community had already been
softened up and took to the idea with barely any murmur of dissent.
A
year or two later, data from the Wilkinson Microwave Anisotropy Probe
was interpreted as showing that the Universe had “critical density”
and was therefore “flat”. Since other observations, such as
galaxy surveys, the determination of cluster abundances, baryon
density calculations, etc, implied that matter in all its forms
provided only a third of the mass density required to make the
Universe “flat”, the logical supposition, therefore, was that the
remaining two thirds of the Universe’s mass was provided by this
mysterious, but accepted, darkenergy.
UNIFLUX
AND TEELOSPHERE
In
the Current Paradigm, the structure of the Universe consists of a
succession of substructures within substructures, beginning with the
very tiny fundamental particles and going all the way up to galactic
superclusters. At the level of atoms and below, these substructures
are “managed” by the strong, the weak, and the electromagnetic
forces. The larger substructures are managed by gravity and by the
antigravity of dark energy. Overlying all the substructures, is the
Einsteinian structure whereby the Universe is something in which
space can expand and shrink and bend and curve; in which the
Universe has no centre and no edge, and in which time has no fixed
span.
In
the New Cosmology, the structure of the Universe is somewhat
different. It agrees that the Universe is a succession of ever larger
substructures. However, these substructures are “managed”
differently. In the New Cosmology, the managing force at all levels
is gravity and only gravity – ultimately the gravity of the teels
out of which the Universe is made. There are no other forces and the
only antigravity is that supplied by the rejectivity of the teels.
Most
definitely, there is no Einsteinian overlay. In the New Cosmology,
space is just space. It is nothingness and nothingness cannot expand
or shrink or bend or curve. As for the Universe itself, it is a
paragon of ordinariness. It has a middle and an edge just like
everything else. It looks like it does and does what it does because
of the most basic laws of physics, logical laws that cannot help but
be what they are.
The
dark materials have their place in the New Cosmology universe –
although they have no need for names like darkmatter and darkenergy
for they are not mysterious entities. They are not even different
entities. Darkmatter and darkenergy are the same thing but in
different places. We have met them already, of course. The dark
materials are teels, the vast numbers of teels that pervade and
infuse every part of the Universe. The succession of ever larger and
ever more sophisticated substructures within the Universe are, at
base, just teel concentrations.
The
Universe may be a ball of teels but there is a lot more to it than
just that. The atmosphere of Planet Earth is mostly invisible to
human eyes but that doesn’t stop it being an extraordinarily
complex concoction. It has jetstreams and columns and throats and
inversions and levels and layers and hundreds of other forms and
behaviours. The form of the teel-Universe is not dissimilar to the
atmosphere of Planet Earth. It too has jetstreams and columns and
throats and inversions and levels and layers and hundreds of other
forms and behaviours. It is just the scale that is very different.
All
the teels in the Universe are moving and they are all regimented into
streams by gravity: by the mutual gravity of the streams themselves,
by the gravity of the Universe’s substructures, and by the gravity
of the Universe itself. No matter where in the Universe you might
find yourself, you will always be in a stream of teels that has a
vector and a velocity.
Thus
it is that while every one of the Universe’s substructures is
moving through a teel stream, at the same time it has teel streams
moving through it. As an aid to understanding how this works, here
are two new words for the cosmological vocabulary. They are “uniflux”
and “teelosphere”. Both words are contractions. Uniflux comes out
of “universal teel flux” and teelosphere is a shortened version
of “teel atmosphere”.
-
UNIFLUX:
The uniflux is the universal flux of teels: those teels whose
principal gravitational relationship is with the Universe as a whole
rather than with any substructure within it.
-
TEELOSPHERE:
A teelosphere is made of teels whose principal gravitational
relationship is with a substructure embedded within the Universe.
All substructures, from photons up to galactic superclusters, have a
teelosphere.
As
will be seen in the coming text, the word “uniflux” is enormously
convenient. On a technical level, it can be validly argued that there
is no such thing as a uniflux: that the Universe is just a
succession of ever larger teelospheres: that the teelospheres
surrounding quarks are within the teelosphere of a nucleon which is
within the teelosphere of an atom, which is within – and so on all
the way up to the biggest teelosphere of them all, that of the
Universe. However, as I say, the word really is enormously
convenient.
TEELOSPHERIC
EQUILIBRIUM
Humans
are currently accustomed to seeing the Universe as a collection of
matter objects, of atoms and stars and so on. This makes visualising
it as a succession of teelospheres somewhat difficult. Nevertheless,
that is what the Universe is really like.
This
is not to minimise the importance of those atoms and stars and so on:
they are the solid lumps at the centre of every teelosphere,
providing the gravitational focus that maintains its form. However,
in seeing only the solid lumps we are seeing only 5% of the Universe,
by current estimates, and we are certainly not seeing it as it really
is.
Teelospheres
are not independent structures. While each owes its first
gravitational allegiance to the solid lump at its centre, every
teelosphere is reacting and adjusting itself to the uniflux through
which it is moving. The teelospheres of the planets in our solar
system are constantly adjusting themselves to the teelosphere of the
Sun in the same way that the teelosphere of the Sun is constantly
adjusting itself to the teelosphere of the Milky Way. This is
“teelospheric equilibration”.
The
key act of equilibration takes place at the “surface” of a
teelosphere, at the interface between the teelosphere and the
uniflux. The teels inside the surface are gravitationally bound to
the solid lump at the centre and those outside are not. A teelosphere
and the adjacent uniflux are in equilibrium if the velocity of the
teels just inside the surface is the same as those just outside.
The
key measures in achieving teelospheric equilibrium are teelmass and
teelspeed. If a teelosphere is equilibrated to its adjacent uniflux
and absorbs (say) a thousand teels from it, this sets in train a
number of processes which result in the subsequent ejection of a
thousand teels back out into the uniflux. This returns the
teelosphere’s teelmass and teelspeed to their original values.
In
practice, the chances of any teelosphere ever being in perfect
equilibrium with its adjacent uniflux are small – and the chances
of maintaining a perfect equilibrium for any sensible length of time
are nil. As a teelosphere voyages on, the measures of the uniflux
through which it is moving, velocity, vector, and density, are
constantly changing. The changes may not necessarily be large, they
may not necessarily be sudden, but they will be there and the
teelosphere is constantly equilibrating itself to match them.
Equilibration
is only possible because every teel has exactly the same mass but can
have any amount of speed between zero and as fast as it is possible
for anything to go.
Expressed
in that way, the equilibration process looks simple. There is more to
it than that, however. What we actually have here are a number of
processes all combining to produce a single result, each of them
interlinked. Some of the processes, indeed, work against each other
to produce counteracting results. This is the first example we have
come across of a “multiprocess”.
MULTIPROCESSES
This
analysis has come upon the equilibration multiprocess bottomup.
Because of this, it is easy to identify the different processes
involved and see how they mesh with each other and combine to produce
a single result. In contrast, when approaching a multiprocess
topdown, it is often difficult and sometimes impossible to
disentangle the separate processes. More often than not, it isn’t
even apparent that it is a multiprocess and, in order to progress at
all, forces or mechanisms or particles are invented to account for
results that are otherwise inexplicable. Good current examples of
this are the colourshifting of photons (in the Current Paradigm,
attributed to a form of Doppler shifting) and the bonding of quarks
within nucleons (in the Current Paradigm, attributed to the Strong
Force).
The
multiprocess that powers teelospheric equilibration is not only the
first to appear in this analysis, it is also a fine example of the
breed. To see what actually happens, consider a teelosphere that is
moving from a region of slow-moving uniflux to a faster one. All
teelospheres are constantly absorbing and ejecting teels and it is
this which enables the equilibration to happen. At least four
processes are underway here.
-
Each
incoming teel adds one unit of teelmass to the teelosphere.
-
The
additional teels increase the gravitational strength of the
teelosphere.
-
The
increased gravity contracts the teelosphere.
-
The
contraction increases the density of the teelosphere.
-
The
increased density increases the escape-velocity of the teelosphere.
-
The
increased gravity makes it easier to capture teels from the
uniflux.
-
The
increased escape-velocity makes it harder for teels to escape.
-
The
teelmass of the teelosphere increases.
-
Each
incoming teel adds one unit of teelmass to the teelosphere.
-
The
additional teels increase the gravitational strength of the
teelosphere.
-
The
increased gravity contracts the teelosphere.
-
The
contraction converts some potentialspeed to realspeed.
-
The
increased realspeed increases the spinspeed of the teelosphere.
-
The
increased spinrate decreases the density of the teelosphere.
-
The
decreased density decreases the escape-velocity of the teelosphere.
-
The
decreased escape-velocity makes it easier for teels to escape.
-
The
teelmass of the teelosphere decreases.
-
Each
incoming teel increases the teelspeed of the teelosphere.
-
The
increased teelspeed decreases the density of the teelosphere.
-
The
decreased density decreases the escape-velocity of the teelosphere.
-
The
decreased escape-velocity makes it easier for teels to escape.
-
The
teelspeed of the teelosphere decreases.
-
Each
incoming teel increases the teelspeed of the
teelosphere.
-
The
increased teelspeed increases the volume of the teelosphere.
-
The
increased volume converts some realspeed to potentialspeed.
-
The
increased potentialspeed reduces the spinspeed of the teelosphere.
-
The
decreased spinrate increases the density of the teelosphere.
-
The
increased density increases the escape-velocity of the teelosphere.
-
The
increased escape-velocity makes it harder for teels to escape.
-
The
teelspeed of the teelosphere increases.
While
this may seem complicated, the sum consequence is simple. In this
instance, where a teelosphere is moving from a slower uniflux to a
faster one, teelmass decreases and teelspeed increases. Where a
teelosphere is moving from a faster uniflux to a slower one, the same
processes grind into action but to opposite effect, with the teelmass
increasing and the teelspeed decreasing. The multiprocess thus
maintains an equilibrium between a teelosphere and the immediately
surrounding uniflux.
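As an aid to seeing the net effect, here is a toy numerical sketch
of the multiprocess in Python. It is not a New Cosmology
calculation: the rates and the form of the ejection term are
assumptions, chosen only to reproduce the outcome described above,
in which a faster uniflux raises teelspeed and drains teelmass.

# A toy sketch of teelospheric equilibration. The constants and the
# form of the ejection term are assumptions, not derived values.

def equilibrate(teelmass, teelspeed, uniflux_speed, dt=0.01, steps=20000):
    """Relax a teelosphere toward the uniflux it is moving through.

    Absorbed teels arrive carrying the uniflux speed, pulling the
    mean teelspeed toward it. Ejection rises when the incomers are
    faster than the teelosphere's own teels (faster teels -> lower
    density -> lower escape-velocity -> easier escape), so a faster
    uniflux drains teelmass while raising teelspeed.
    """
    absorb_rate = 1.0                                         # assumed
    for _ in range(steps):
        eject_rate = absorb_rate * (uniflux_speed / teelspeed)    # assumed form
        teelspeed += dt * absorb_rate * (uniflux_speed - teelspeed) / teelmass
        teelmass += dt * (absorb_rate - eject_rate)
    return teelmass, teelspeed

# Moving from a slow uniflux into one twice as fast: teelspeed climbs
# towards 2.0 while teelmass falls, as described above.
print(equilibrate(teelmass=100.0, teelspeed=1.0, uniflux_speed=2.0))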
Seeing
the Universe as a succession of teelospheres within teelospheres is
to see it as the ultimate in self-regulating machines. Each
teelosphere is being affected by, and is at the same time affecting,
the greater teelosphere within which it moves. Each teelosphere is
equilibrated with, or is in the process of equilibrating with, its
adjacent uniflux.
These
acts of equilibration impose another pattern on the Universe. It is
that, as successive teelospheres grow larger, their teelspeed
increases along with their teelmass. The teelspeed of a small
teelosphere will not be as high as that of the teelosphere within
which it moves. And the teelspeed of the larger teelosphere will, in
turn, not be as high as that of the even larger teelosphere within
which it moves.
This
is echoed in the visible lumps at the centre of each teelosphere. The
speed at which Planet Earth moves around the Sun is slower than the
speed at which the Sun moves around the Milky Way, which is in turn
slower than the speed at which the Milky Way moves around our
galactic cluster, and so on.
If
there are patterns in the Universe, there is also purpose, albeit an
unconscious one. The purpose is to filter out speed. Teelospheres are
speed-filters. In the act of equilibration, it is the teels with the
most totalspeed which are ejected and the ones with the least which
are retained. This is repeated again and again as the scale goes up.
A small teelosphere will eject its fastest teels out into a larger
teelosphere which, in turn, will eject its fastest teels out into an
even larger teelosphere, and so on, all the way up in size to the
very Universe itself. As time passes, the tendency is always to
equilibrate at a higher teelmass and a lower teelspeed. This tendency
has major implications for the future of the Universe.
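The filtering cascade can be pictured in a few lines of Python. This
is a toy, assuming nothing more than the rule just stated: each
teelosphere retains its slowest teels and ejects its fastest into
the next teelosphere up.

import random

# Start with one teelosphere of teels with random speeds.
levels = [[random.random() for _ in range(1000)]]

# Three rounds of filtering: keep the slowest half, eject the
# fastest half outwards into the next, larger teelosphere.
for _ in range(3):
    teels = sorted(levels[-1])
    keep = len(teels) // 2
    levels[-1] = teels[:keep]       # the slowest are retained
    levels.append(teels[keep:])     # the fastest move outwards

# Mean speed rises level by level, innermost to outermost.
for i, teels in enumerate(levels):
    print(i, sum(teels) / len(teels))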
THE
GREATER UNIVERSE
The
Einsteinian Universe has no escape velocity. Since the Universe is
all there is, even an escape-velocity of zero will not allow the
escape of anything because there is nothing to escape into. In the
New Cosmology, however, the Universe is just a larger-scale version
of what is inside it, working to the same rules and laws. It
therefore has an escape-velocity.
The
New Cosmology definition of escape-velocity is “the minimum speed
that Object A needs to possess in order to, without any power, escape
from the gravity field of Object B”. Since the strength of gravity
declines with distance in accordance with the Inverse Square Law, if
Object A begins its journey at a speed faster than escape-velocity,
it will never fall below it, no matter how much it is decelerated by
the gravity of Object B, and consequently will escape.
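For reference, here is the standard Newtonian form of that
definition, with the Earth as a familiar worked example. Nothing in
it is specific to teels.

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # mass of the Earth, kg
R_EARTH = 6.371e6      # radius of the Earth, m

def escape_velocity(mass, radius):
    """Minimum unpowered speed Object A needs to escape Object B.

    Because gravity falls off with the Inverse Square Law, an object
    that starts above this speed never drops below the ever lower
    local escape-velocity, and so escapes."""
    return math.sqrt(2 * G * mass / radius)

print(escape_velocity(M_EARTH, R_EARTH))   # ~11,186 metres per second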
This
triggers a conjecture. In the early moments of the New Cosmology
universe, speed moved out from the centre to the surface so that the
outermost teels became for a while the fastest things there have ever
been. Was their velocity higher than the Universe’s
escape-velocity?
Here’s
another conjecture. Once the teelospheres had formed, they began
filtering speed and pumping it out towards the surface of the
Universe. The teels that reached the surface would have been carrying
the highest totalspeed of any in the Universe. Much of that
totalspeed would be in the form of potentialspeed but there would
still be prodigious quantities of realspeed. Would any of those
surface teels have had velocity enough to exceed the Universe’s
escape-velocity?
Is
the current teelmass and the current totalspeed of the Universe the
same as it was a fraction of a second after Moment Zero? Who knows?
At present, there are no facts that can guide us to a “yes or no”
answer. All that can be done is to apply logic to the matter and
assess the balance of advantage.
The
amount of totalspeed given to the Universe was enormous. It was
enough to fling matter so far outwards that the visible Universe
alone is now believed to have a diameter of 156 billion lightyears
and be still expanding fast. It was also enough to ensure that every
complex particle inside the Universe, every atom, star and galaxy,
could never have enough teelmass to prevent at least some teels
escaping during at least some phase of their existence. Is it safe,
then, to assume that no teels have ever escaped from the Universe?
Here
is yet another conjecture. If teels can escape from the Universe,
what might they be escaping into? The obvious answer is that they are
escaping into empty space. If that is so, how big is the empty space?
Is it infinite or are there boundaries? Of course, if teels are
escaping from the Universe into the empty space, it is no longer
empty.
A
more satisfying approach to that conjecture is to assume that what is
outside the Universe is an echo of what is inside. Every teelosphere
in the Universe is inside another teelosphere. The largest
teelosphere of them all is the Universe itself. Is there any good
reason for supposing that the Universe teelosphere is not, in turn,
inside an even larger teelosphere? Or that it is, at the very least,
just one Universe of many, each of which is equilibrating within a
greater uniflux?
To
put things into perspective, here is a history lesson. Prior to
Copernicus, the Earth was regarded as a “special” place around
which everything else in the Universe revolved. Copernicus altered
that view by seeing the Earth as just one of a number of bodies that
revolved around the Sun, by seeing the Earth as a place that most
definitely wasn’t special. From then on, the “not special”
mantra expanded outwards with the power of our telescopes. In turn,
the Earth, the Sun, the Milky Way, the Local Group, etc, all became
“not special”.
The
mantra now encompasses everything in the Universe. Because the
Einsteinian Universe has no centre and no edge, there can be no
special places within it. Even the way that almost everything else of
any size in the Universe seems to be rushing away from the Earth
doesn’t make the Earth special because it is also rushing away from
almost everything else.
There
is an irony here. While this Law of Universal Modesty, the “not
special” mantra, is imposed upon
everything inside the Universe, it doesn’t apply to the Universe
itself. The Einsteinian Universe is all there is. There is nothing
outside because there is no outside. Being the only one of anything,
and being everything at the same time, is about as special as you can
get.
In
the New Cosmology, the Universe is not a special place. It may never
be possible to know that there definitely are, or are definitely not,
other Universes out there: to know whether the Universe is, or is
not, part of an even greater teelosphere: but the New Cosmology does
not preclude those possibilities.
THE
REALITY CHECK
The
Wright/Kant internal structure of the Universe, the notion that
planets circle stars, stars circle galaxies, and so on, is not
seriously doubted today – and with every justification. We may not
yet have the technology to send craft out to cruise between the
galaxies and thus prove the structure absolutely but the amount of
confirming observational data gathered here on Earth is vast and is
growing at an accelerating rate.
Darkmatter
slots remarkably well into the Wright/Kant structure. Explaining the
aberrant behaviour of stars in the outer reaches of galaxies, and of
galaxies within clusters, becomes easy by supposing the existence of
darkmatter. However, determining the nature of darkmatter is
currently impossible given that it cannot be directly detected by any
of the detectors we have invented so far. Inevitably then, any ideas
as to the nature of darkmatter are entirely theoretical.
The
overarching Einsteinian structure of the Universe, the notion that
the Universe is all there is, has no middle and no edge, is likewise
not seriously doubted today although with a lot less justification.
There are indications that it might be so but there is no absolute
proof of it, observational or experimental. Belief in the Einsteinian
Universe, therefore, is more a matter of faith than fact, and is
mainly given credence by the success of some of Einstein’s other,
more provable work.
Current
ideas about darkenergy are not as well-developed as those about
darkmatter. The observations that fired the darkenergy concept are
less than ten years old and few in number. Any interpretation of the
meaning of those observations is constrained by the general
acceptance that there is an overarching Einsteinian structure to the
Universe. Consequently, darkenergy can only push from the inside and
cannot pull from the outside. It has to be antigravity (which we do
not know about) rather than gravity (which we do).
So
far as the Wright/Kant structure, and its associated darkmatter, is
concerned, the Current Paradigm and the New Cosmology do not
disagree. Actually, they are entirely complementary with the bottomup
New Cosmology filling in gaps that the topdown Current Paradigm has
left unfilled. The New Cosmology identifies what darkmatter is and
describes the way it behaves. It provides a consistent structure for
the Universe consisting of successions of ever-larger teelospheres
within ever-larger teelospheres with the largest being the Universe
itself.
There
is no such complementarity with the overarching Einsteinian Universe
and its associated darkenergy. In the bottomup New Cosmology, the
Universe is merely the biggest teelosphere of all, one that still
obeys the most basic laws of physics, the laws that everything else
has to obey. In this super teelosphere, time doesn’t stretch and
space doesn’t curve. All it has is empty space filled with teels
which have gravity, rejectivity, speed, and not much else. Unlike the
Current Paradigm Universe, the New Cosmology Universe is not
“special”. As for darkenergy, in the New Cosmology it isn’t
some mysterious antigravity force. It is just darkmatter in a
different place, a place that in the Einsteinian Universe, cannot
exist.
CHAPTER
SIX
PHOTONS
This
chapter is about photons, which are the simplest of all the complex
particles and thus the easiest to create. Photons are hugely
important to us in that without them the human race could not exist.
Photons emitted by the Sun are our lifeforce. Photons, in their great
variety, are almost our only means of “seeing” the Universe about
us. And it is photons which, directly and indirectly, provide the
power that runs our civilisation. However, considering how important
they are to us, we know remarkably little about them. By coming at
them “bottomup” this chapter will begin to change that.
This
is the first of two chapters dealing with photons. This one deals
with photon mechanics, with the processes underlying their creation
and maintenance. The next chapter will deal with something that is
composed entirely of photons, the “cosmic background radiation”.
However, that will not be the last we hear of them in this analysis.
Photons will crop up again and again in later chapters as part of the
equilibration and decay processes of more complex particles.
FACTS
That
photons exist has been known for centuries. However, what we know
about them is a lot less than we would like. What we do know is just
a mix of measures and statistical behavioural analyses and, while
this information allows us to accurately predict how a photon will
behave in many circumstances, it is very different from knowing what
photons actually are and how they actually work. The plain truth is
that we don’t know.
Different
types of photons can be identified by their differing wavelengths,
frequencies, energy, or momentum. For classification purposes,
photons are regarded as fundamental particles and, as such, they can
be created or destroyed by interacting with other particles but they
will not decay of their own volition.
According
to the Particle Standard Model, photons have a zero rest-mass, a zero
electric charge, a positive momentum, and a positive angular
momentum. The energy and momentum of a photon are inversely
proportional to its wavelength (and proportional to its frequency).
While the rest-mass of a photon is zero, it is generally believed
that a photon in motion does have mass – a photon’s momentum can
be transferred when it interacts with matter and a photon’s path
will alter when it moves through a gravitational field.
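The inverse-proportion relationships quoted above are the standard
ones, E = hc/λ for energy and p = h/λ for momentum. A quick
illustration:

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s

def photon_energy(wavelength):
    """Energy in joules for a wavelength in metres (E = hc/lambda)."""
    return H * C / wavelength

def photon_momentum(wavelength):
    """Momentum in kg m/s for a wavelength in metres (p = h/lambda)."""
    return H / wavelength

# Green light at 550 nm versus a 1 mm microwave photon: the microwave
# photon carries roughly 1,800 times less energy and momentum.
print(photon_energy(550e-9), photon_energy(1e-3))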
Photons
are constantly moving. In a vacuum, the velocity of every photon is a
shade under 300,000 kilometres per second – a velocity known as the
speed of light or “lightspeed”. It is commonly believed that
lightspeed is a cosmological speed limit that cannot be exceeded –
not by photons or by anything else – although this, like so much
else in the Current Paradigm, has never been unequivocally proved.
Photons
are emitted by particles such as electrons or nucleons. This can be
due to an internal event, like the particle changing to a lower
energy state. It can also be due to an external event, such as a
collision with another particle. Very high energy photons can, in
some circumstances, become electrons or antielectrons.
It is also possible for photons to split into two and for two photons
to merge into one, always subject to the conservation of energy and
momentum.
THEORY
According
to the Current Paradigm, the Universe at 10⁻⁴³
was a highly energetic soup that contained all the mass and all the
energy there is in the Universe today but squeezed into a volume
smaller than the nucleus of an atom.
incredibly hot and incredibly violent. In it, the lifetime of a
photon was extremely short. Barely would it have been emitted by one
matter particle before it would crash into another and be destroyed.
Since
the Universe was expanding rapidly, its density was rapidly
decreasing and the distance between particles was rapidly increasing.
Because of this, the lifetime of the photons was rapidly extending.
This continued until the arrival of a period called the Recombination
Epoch which was 300,000 years after the Big Bang. By this time the
speed of the baryons and the electrons had reduced to a level whereby
they could come together to form hydrogen atoms. This concentration
of matter into “hotspots” further increased the space between
particles and meant that, for the first time, a good proportion of
the Universe’s photons could move at lightspeed without the near
certainty of crashing into a particle and being destroyed. The cosmic
background radiation (CBR) that we detect today is what remains of
those first free photons.
The
CBR photons of today are not, however, as they were then. At 10⁻⁴³,
the Universe’s photons were highly energetic. Their wavelength was
very short. They were extremely destructive photons, the equivalent
of our modern-day gamma-ray and x-ray photons, or worse. By the time the
Recombination Epoch arrived, the Universe had expanded and the
wavelength of its photons had grown correspondingly longer.
As
the Universe has continued to expand over the past 13 billion years,
the wavelength of the CBR photons has grown longer and longer. At the
same time, their numbers have reduced as they have collided with
matter particles and been absorbed. This combination of reducing
density and increasing wavelength has pushed the CBR almost to the
limits of what we are capable of measuring. The CBR is still with us
– but only just.
A
NEW VIEW
Almost
everything we know about photons has been deduced topdown. Not least,
this has been due to the extreme difficulty we have in examining
them. Since photons are always travelling at lightspeed, we have been
unable to capture one and examine it closely. All we have been able
to do is to chart their behaviour and, while this has enabled us to
predict what photons will do in many circumstances, it still doesn’t
tell us very much about them.
In
the Current Paradigm, photons are “carriers” of energy emitted by
larger particles as part of their equilibration or decay processes –
or that is the situation as it applies today. Whether the same
situation applied prior to the Recombination Epoch is not clear
because the extrapolation has passed through the glass floor to a
place where there are no facts at all. It is supposed that there were
baryons and electrons prior to Recombination, at least some of the
way back to 10⁻⁴³, and that they would have been emitting photons.
Some
speculate that, going back far enough, the Universe would have been a
dense soup of photons and quarks. And going back even farther than
that, the wavelengths of the photons would have become so short that
they would have been transmuting into quarks and back again. There is
logic in this in that, today, when quarks are separated out from
nucleons, they immediately decay into photons. Also, this fits in
with the equivalence of matter and energy postulated in Special
Relativity. It is still speculation though and the truth is that no
one knows.
Compared
to the picture presented by the Current Paradigm, that presented by
the New Cosmology has the clarity of crystal. There are no more hard
facts in it than there are in the Current Paradigm picture but by
moving forward in time from a logically deduced beginning, the
Universe behaves itself in a logical fashion, obeying all the laws of
physics that apply today. We have already seen processes that can be
directly compared with present-day processes start up naturally.
What the New Cosmology has not yet got, however, is any photons or
any complex particles that might emit them.
So
where did the first photons come from? Before the New Cosmology
can answer that, we need to introduce a few more processes and measures
which are necessary for the creation of photons – and indeed are
necessary for the creation of all complex particles.
BONDING
“Bonding”
is the gravitational linking of two particles so strongly that they
cannot be separated without some outside intervention. Bonding by
gravity is a recurring feature in the Universe. Quarks are bonded
together to form nucleons,
nucleons are bonded together to form atoms, atoms are bonded together
to form stars, and so on. Not all bonds are the same, however, with
different types of bond occurring naturally in specific conditions.
There
are three types of bonding: “solidbonding”,
“liquidbonding”, and “gasbonding”. These three types can
occur in all types of structures but it is in atoms that we humans
are most aware of them. Planet Earth, for example, is a bond of
atoms. There is a solidbonded part which we stand on, walk over,
build houses on, grow roses in, etc. There is a liquidbonded part:
the oceans, the rivers, beer, windscreen wiper fluid, etc. The
gasbonded part, of course, is the air that we breathe, fly through,
and poison.
The
bonding of atoms into large structures can be dramatic and impressive
but it is also deceptive. Actually, this is really “secondary”
bonding. Secondary bonding cannot take place at all unless there is
first a bonding at the level of the teels that the atoms are made out
of. And because teels are such simple particles, it is at this level
that the mechanics of bonding are best explained.
Teels
are solidbonded when two or more of them are held so closely together
by their mutual gravity that they are permanently touching each
other. If this accretion contains few enough teels, they may “roll”
around each other but beyond a certain number, this becomes
impossible and the teels lock into a matrix and become a “solid”.
Liquidbonded
teels are also bound together by their mutual gravity. The difference
lies in the amount of realspeed that the teels have. Each will have
enough to put it above the escape-velocity of any pair of teels but
not enough to exceed the escape-velocity of an accretion of them. In
a liquidbonded accretion, the teels are constantly on the move,
constantly colliding and exchanging speed but never able to pick up
enough speed to be able to get away.
There
is no such thing as a gasbonded accretion of teels. All the teels in
such an accretion have enough realspeed, not merely to exceed the
escape-velocity of a pair of teels but to exceed the escape-velocity
of the accretion itself. What this means is that a gasbonded
accretion must inevitably evaporate away to nothing over time.
Gasbonding
does exist, however. It just can’t exist independently. Gasbonded
teels are gravitationally bound, not to other gasbonded teels but to
a solid or liquidbonded core. The realspeed of a gasbonded teel is
above its mutual escape-velocity with any other gasbonded teel but is
less than the escape-velocity of the solid or liquidbonded lump
around which it is moving.
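The three comparisons reduce to a few lines of code. This sketch
takes the two escape-velocities as inputs, since the text gives no
formula for computing them.

def bond_type(realspeed, pair_escape_v, accretion_escape_v):
    """Classify a teel's bond from its realspeed.

    solid:  below the escape-velocity of a pair of teels
    liquid: above the pair value, below the accretion's
    gas:    above the accretion's value (such a teel can only stay
            bound to a denser solidbonded or liquidbonded core)
    """
    if realspeed <= pair_escape_v:
        return "solid"
    if realspeed <= accretion_escape_v:
        return "liquid"
    return "gas"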
The
key factor in determining the nature of any accretion is spin. Spin a
solidbonded teel structure fast enough and it will begin to liquefy.
Spin it even faster and it will gasify and evaporate. The same will
happen with more complex particles although the process is not
necessarily as straightforward. For instance, spin a sphere of ice
fast enough in a vacuum and it could well shatter before it
liquefies. Do it another way, however, by raising the spinrate of the
water molecules within the ice sphere, and the sphere will indeed first
liquefy and then gasify.
A
teel accretion doesn’t have to be in one form only. Actually, it
rarely is. Mostly it is a mix of all three forms of bonding in the
same way that Planet Earth is a mix of all three. It will have a
solidbonded core, cloaked by a liquidbonded “teelocean” that is
itself cloaked by a gasbonded “teelosphere”.
-
Complex
particles are made out of less complex particles bonded together by
their mutual gravity.
-
The
strength of the bond is moderated by the spin of the complex
particle.
-
The
faster the spin, the weaker the bond.
-
There
are three types of bond: solid, liquid, and gas.
-
Complex
particles are often a mix of all three types of bond.
VERGENCE
An
essential measure in the bonding of two teels together is their
“vergence-velocity”. Vergence comes in two forms: as convergence
or divergence – and vergence-velocity is the rate at which two teels
move towards, or away from, each other.
Just
as every teel in the Universe has a gravitational relationship with
every other teel in the Universe, every teel in the Universe also has
a mutual vergence-velocity. Every pair of teels is either moving
towards or away from each other. At the same time, of course, every
teel pair also has a mutual escape-velocity. The significance of
this, so far as the creation of complex particles is concerned, lies
in whether or not the vergence-velocity exceeds the escape-velocity.
Prior
to Moment Zero, the vergence of the Universe was neutral in that its
teels were neither converging nor diverging. After Moment Zero,
however, all the teels were very definitely diverging and, in the
same way that the totalspeed of the Universe has never changed
(subject to no teels ever escaping) neither has the totalvergence.
The
use of the term totalvergence is accurate because, as with speed,
vergence can come as realvergence or potentialvergence. Also like
speed, vergence is a conserved property in that it can be transferred
from one particle to another by collision or by gravitational
attraction but it can never be destroyed or eliminated.
-
VERGENCE:
Vergence is the movement of objects toward, or away from each
other. Vergence is a subproperty of speed. Like speed, it is a
conserved property in that it can be transferred from one object to
another by collision or by gravitational attraction but it can never
be destroyed or eliminated. Vergence can come as realvergence or
potentialvergence.
Consider
a pair of moving teels. Because the teels have gravity, they are
attracted towards each other. They therefore have a mutual
escape-velocity. If they are moving apart from each other, they will
be doing so at an angle that can be anywhere between zero and 180
degrees. If the angle is zero degrees, in other words if the two
teels are moving parallel to each other, their vergence-velocity will
likewise be zero. The nearer the angle is to 180 degrees, the faster
the vergence-velocity will be – and the faster the teels are
moving, the faster the vergence-velocity will be.
If
the vergence-velocity of this teel pair is zero, it will be lower
than their mutual escape-velocity and the pair cannot escape from
each other. At any angle greater than zero, the vergence-velocity
will be greater, and the faster the teels are moving, the faster it
will be. Raise the vergence-velocity far enough and the pair’s
mutual escape-velocity will be exceeded.
Wherever
a teel pair’s vergence-velocity does not exceed their mutual
escape-velocity, they are solidbonded together.
-
Vergence
comes in two forms: divergence and convergence.
-
Vergence-velocity
is the speed at which two objects diverge or converge.
-
The
nature of the bonding of two objects is dictated by their
vergence-velocity.
-
If
the vergence-velocity of two objects is less than their mutual
escape-velocity, they are solidbonded together.
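Here is that test in code. The vector arithmetic is standard; the
mutual escape-velocity is again left as an input because the text
gives no formula for a teel pair’s value.

import math

def vergence_velocity(pos_a, vel_a, pos_b, vel_b):
    """Rate at which the separation of two objects is changing.

    Positive means diverging, negative converging; two objects moving
    in parallel (an angle of zero degrees) give exactly zero."""
    sep = [b - a for a, b in zip(pos_a, pos_b)]
    rel = [b - a for a, b in zip(vel_a, vel_b)]
    dist = math.sqrt(sum(s * s for s in sep))
    return sum(s * r for s, r in zip(sep, rel)) / dist

def solidbonded(pos_a, vel_a, pos_b, vel_b, mutual_escape_v):
    """A pair is solidbonded while its vergence-velocity stays below
    its mutual escape-velocity."""
    return vergence_velocity(pos_a, vel_a, pos_b, vel_b) < mutual_escape_v

# Two teels moving in parallel: vergence-velocity is zero, so any
# non-zero mutual escape-velocity leaves them solidbonded.
print(vergence_velocity((0, 0), (1, 0), (1, 0), (1, 0)))   # 0.0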
THE
DEMOCRATIC PRINCIPLE
The
Democratic Principle applies to life, the Universe, and everything.
It is the rule of thumb which says that the big, the strong, the
forceful, the majority, will dominate the small, the weak, the
ineffectual, the minority. It is the rule which says that a big boxer
will beat a small one, that a large army will beat a little one, that
over time a big planet will grow bigger and a small one will get
absorbed, that a bucket of icecream will have a major effect on your
digestive system and a spoonful won’t.
Unfortunately,
the Democratic Principle is that bane of physics, the general rule,
the imprecise law, the statement to which there can be exceptions due
to chance, luck, or any number of other factors. While a large army
should beat a small one, Henry V still managed to give the French a
thrashing at Agincourt. The Democratic Principle applies most of the
time – but not every time.
Having
said that, the greater the disparity between the majority and the
minority, the nearer the Democratic Principle comes to 100% accuracy.
For example, consider a jerrycan full of water and the River Nile. No
matter how many different ways you pour the water from the jerrycan
into the Nile, the chances of your ever being able to get it to go
upriver are extremely small.
The
Democratic Principle applies universally but its relevance to this
particular chapter lies in its effect upon gravitationally bonded
accretions of teels. Every particle in any accretion is moving. Even
those which are solidbonded into the core are moving. That movement
has a velocity and a vector. If those velocities and vectors are
random, then the condition of the accretion is chaotic. However,
observation tells us that it is rare for an accretion to be chaotic
for very long. Rapidly, chaos is transformed into order and the
accretion begins to spin with the particles all moving in
approximately the same direction.
Everything
in the Universe spins eventually. In our Universe, “spinning” is
normal and “not spinning” isn't. Observation tells us this.
The motion of the particles that something is made of can be random
for a while but the Democratic Principle will assert itself
eventually and the something will begin to spin.
ACCRETIONS
Prior
to Moment Zero, the Universe was composed of stationary teels that
were solidbonded to the limits of their rejectivity. The
gravitational bond between these teels was as strong as it is
possible to be – and was stronger than any bond has ever been
since. The vergence of the Universe was neutral and since nothing was
moving, the Democratic Principle could not apply.
At
Moment Zero, each individual teel was suddenly given an amount of
speed that was enough to accelerate it to much more than lightspeed.
Since the teel vectors were random, chaos ensued. The chaos was then
followed by a reordering so that each teel was now moving directly
outwards from the centre of the Universe. The reordering was
accompanied by shockwaves which transferred speed from the centre of
the Universe to the surface – so that the outer teels were moving
faster than the inner ones.
The
size of the Universe at Moment Zero was a notional one billion
lightyears in diameter. Because the Universe was spherical, this
meant that every teel, as it moved outwards, would be diverging from
its fellows although the great size of the Universe meant that the
angle of that divergence for adjacent teels was minute. It was not
zero but it was extremely close to it. This in turn meant that,
notwithstanding their tremendous forward velocity, the
divergence-velocity of any pair of teels was so low that it was
considerably less than their mutual escape-velocity.
This
presented a conflict of considerable proportions. Prior to Moment
Zero, all the teels in the Universe were solidbonded together. After
Moment Zero, and notwithstanding the enormous amount of speed that
had just been given to each teel, the teels remained solidbonded to
one another. This was because their divergence-velocity was lower
than their escape-velocity. However, speed is conserved. Once the
teels had it, they couldn’t get rid of it. Something had to give.
If
this situation had continued, the Universe would have “coughed” –
and then it would have carried on “coughing”. The teels would
have moved outwards but their mutual gravity would have slowed them
extremely quickly. Their realspeed would, almost instantaneously,
have become potentialspeed and all outward movement would have
stopped. The expansion of the Universe would have stopped. Now, the
mutual gravity would have drawn the teels back in, converting the
potentialspeed back into realspeed, driving them faster and faster
until the teels were back together again at the limit of their
rejectivity. Speed, being conserved, would now drive the teels back
out again to the limits of the solidbonding. That was how matters
would have carried on, possibly for ever. In and out. In and out.
Cough, cough, cough.
However,
the Universe didn't cough (and a very good thing too given that if it
had we wouldn't be here). For the Universe to cough, and to continue
coughing, the alignment of the outward-moving teels had to be
perfect. Every vector and every velocity had to be exactly right.
Fortunately for us, such perfection was unachievable. It was
unachievable because the initial chaotic moment was not resolved
peacefully. It was resolved by collisions. Random collisions – and
randomly colliding vast numbers of superfast particles together is
not the best way to achieve a perfect alignment.
At
the end of the chaotic moment, some teels were inevitably not moving
at exactly the same velocity as their near-neighbours and were not
perfectly aligned. In other words, within this rapidly expanding
Universe, there were flaws. There were irregular gaps.
At
short ranges, like the ranges between these teels, gravity is
extremely strong but its strength falls rapidly with distance.
Picture a teel with teels to each side of it. Picture that to one
side there is a normal gap and to the other there is a
larger-than-normal gap. This means that the mutual gravity is
stronger to one side than the other. The consequences are inevitable.
The central teel is attracted towards the normal gap and that
increases the non-normal gap, further increasing the gravity
imbalance. This is a "meltdown" condition. The bigger the
gap gets, the greater the gravitational imbalance, and the bigger the
gap. This, in that first fraction of a second after Moment Zero,
would have been happening throughout the Universe.
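The runaway character of the meltdown is easy to demonstrate. The toy model below uses arbitrary units and invented starting gaps; only the shape of the behaviour matters, not the scale.

    # Toy meltdown: a central teel with a normal gap on one side and a
    # slightly larger gap on the other, pulled by inverse-square gravity.
    # Units, gaps, and timestep are all invented for the sketch.
    d_normal, d_gap = 1.0, 1.001   # assumed initial gaps, arbitrary units
    x, v, dt, step = 0.0, 0.0, 0.001, 0
    while d_gap + x < 2.0:         # run until the larger gap has doubled
        a = 1.0 / (d_normal - x)**2 - 1.0 / (d_gap + x)**2  # net pull
        v += a * dt
        x += v * dt
        step += 1
        if step % 1000 == 0:
            print(f"step {step}: larger gap = {d_gap + x:.4f}")
    print(f"gap doubled after {step} steps")

A difference of one part in a thousand is enough: the imbalance feeds itself and the larger gap opens at an ever-increasing rate.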
The
gaps would have joined up extremely rapidly and, in a moment, the
Universe would have ceased to be a single cohesive ball. It would
have shattered into “accretions” of teels. These accretions were
the Universe's first complex particles although, as complex particles
go, they were extremely crude by the standards of what was to come,
being nothing more than simple gravity-bound accretions of teels
without any form of internal structure at all.
Crude
the accretions may have been but the act of breaking up into them
saved the Universe from a fate of eternal coughing. The teels within
the accretions were still solidbonded to each other, with their
escape-velocity being higher than their vergence-velocity. However,
each pair of accretions now had a mutual escape-velocity and
vergence-velocity of their own – in most cases, the latter was
higher than the former which meant that the accretions could escape
from each other and which, in turn, meant that the Universe could
continue to expand.
PROTOPHOTONS
Even
though the teels were now locked into accretions by their mutual
gravity, they still possessed a degree of divergence which was
amplified as the accretions raced away from the centre of the
Universe. This divergence meant that while the teels in the
accretions may initially have been solidbonded, the bonding soon
became a mix of solid, liquid and gas. Conditions were now chaotic
with teels moving in all directions and constantly colliding with
each other. Then the chaos was eased as the Democratic Principle came
into play. Order was imposed. The accretions began to spin. The
accretions became “protophotons”.
-
It
therefore has an axis, an equator, and two poles.
-
It
has an internal structure.
-
It
has a solidbonded teelcore, a liquidbonded teelocean, and a
gasbonded teelosphere.
-
A
protophoton has a teelmass, a teeldensity, a teelspeed, and an
escape velocity.
-
It
can move at any velocity.
-
It
spins.
Protophotons
are common enough particles although we are never aware of them. They
are a stage in the creation of photons, the stage between accretions
and the final product. Every photon there has ever been has gone
through a protophoton stage at one time or another. In the present
day, they are produced during the equilibration and decay of larger particles. How this comes about will
be dealt with in detail in the chapters dealing with those particles.
In the past, protophotons were produced in vast numbers during the
aftermath of Moment Zero.
Protophotons
are inherently unstable particles. Nor are they independent
particles. For any sort of prolonged existence, they depend upon the
condition of the uniflux through which they are moving. If the
condition is wrong, protophotons will rapidly equilibrate into
photons. When protophotons are produced during the equilibration or
decay of larger particles, the uniflux condition is invariably wrong.
Consequently, the protophoton stage is over and done with so quickly
that, in the Current Paradigm, no such thing as a protophoton stage
has ever been identified.
There
has only been one period in the history of the Universe when the
condition of the uniflux was such that protophotons could have a
quite lengthy lifetime and that was immediately after Moment Zero.
During that time, protophotons were able to survive, always subject
to their not being destroyed by collisions of course, for perhaps
hundreds of thousands of years before eventually equilibrating into
what we now see as the Cosmic Background Radiation. The CBR, and the
period that produced it, are dealt with in detail in the next chapter.
The
way that protophotons equilibrate into photons is a multi-stage process. A
number of simple processes and mechanisms combine to rid the
protophotons of the excess of mass, speed, and vergence and turn them
into stable and potentially eternal photons. The key to equilibration
lies in the ability of a protophoton's teels to collide with each
other.
In
any collision between two teels, speed will be transferred from one
to the other but, as long as there are no interfering external
factors, the sum totalspeed of the two teels will remain the same.
The speed transfer can be small, even imperceptible, but there will
be one. It is almost impossible for there to be no transfer at all.
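As a minimal sketch of that conservation, with invented numbers:

    # Two teels collide; an invented amount of speed passes from the
    # faster to the slower. The conserved sum is the point of the sketch.
    a, b = 7.0, 3.0          # assumed speeds of two teels, arbitrary units
    transfer = 1.5           # assumed size of the speed transfer
    a, b = a - transfer, b + transfer
    print(a, b, a + b)       # 5.5 4.5 10.0 -- sum totalspeed unchanged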
Spin
is speed confined by gravity and the fastest realspeed in a spinning
object is found at its surface. After any teel collision within a
spinning object, and the consequent transfer of speed, the particle
that now has the most realspeed will move towards the surface and
that which now has the least will move towards the centre. Think of
hot air rising and cold air falling.
Teel
collisions in the solidbonded core of a protophoton will move speed
and vergence out into the liquidbonded teelocean. In turn, collisions
in the teelocean will move speed and vergence out into the gasbonded
teelosphere. This raises the speed and vergence of the teelosphere,
lifting the velocity of the outermost teels above the protophoton's
escape-velocity.
A
protophoton is thus characterised by a continuing ejection of
overfast teels. This ejection means that the protophoton is losing teelspeed and teelmass. In its train, it is also losing vergence, increasing in density, losing velocity, and losing spinrate.
At first glance it might seem that, if matters continue in this way,
the protophoton will evaporate away to nothing – but that will not
happen. It will not happen because the protophoton is losing
teelspeed faster than it is losing teelmass. It is this imbalance
that eventually equilibrates the protophoton into a photon.
Without
the imbalance in the speed and mass loss, equilibration is
impossible. If each ejected teel takes with it one unit of teelmass
and one unit of teelspeed,
the disequilibration of the protophoton remains exactly the same. The
protophoton can eject 10%, 50%, or 99% of its teels and is still just
as disequilibrated as it was before the ejections began.
Equilibration is only possible because each ejected teel is the
speediest that the protophoton has. While it takes away one unit of
teelmass, it takes away more than one unit of teelspeed.
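The arithmetic behind the imbalance can be shown with invented totals. Here, purely for illustration, "disequilibration" is measured as the excess of total teelspeed over total teelmass:

    mass, speed = 100.0, 150.0       # assumed starting totals, arbitrary units
    print(speed - mass)              # 50.0 units of excess to begin with

    # Case 1: each ejected teel takes 1 unit of mass and 1 unit of speed.
    print((speed - 50) - (mass - 50))    # still 50.0, however many are ejected

    # Case 2: each ejected teel is the speediest available, taking (say)
    # 2 units of speed for its 1 unit of mass.
    print((speed - 100) - (mass - 50))   # 0.0 -- the excess is gone

Only the second case closes the gap, which is why the ejection of the speediest teels is what matters.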
Because
of the imbalance, the protophoton's teelmass and teelspeed fall at different rates. This allows them, eventually, to come into balance with each other. This happens when the protophoton's velocity falls
to a shade below 300,000 kilometres per second – to lightspeed.
This is equilibrium. This is when a protophoton becomes a photon.
-
A
teel accretion transforms into a protophoton.
-
It
is spinning and ejecting teels as it moves towards equilibration.
-
Due
to teels being ejected, the mass is decreasing.
-
Due
to teels being ejected, the speed is decreasing.
-
Each ejected teel represents one unit of teelmass but more than one unit of teelspeed.
-
Due
to the loss of mass, the escape-velocity is decreasing.
-
Due
to the loss of speed, the density is increasing.
-
Due
to the increasing density, the escape-velocity is increasing.
-
Due
to the loss of speed, the protophoton is contracting.
-
Due
to the increasing contraction, the spinrate is increasing.
-
Due
to the increasing spinrate, the density is decreasing.
-
Due
to the decreasing density, the escape-velocity is decreasing.
-
Overall,
the mass of the protophoton is decreasing.
-
Overall,
the spinrate and the velocity of the protophoton are decreasing.
-
When
the velocity of the protophoton decreases to lightspeed, it is
equilibrated.
-
The
mass and speed of the photon are in balance (the whole cycle is sketched below).
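The whole cycle can be caricatured in a few lines. In the sketch below the teel speeds are invented, each teel carries one unit of mass, and the protophoton's velocity is taken, purely for illustration, as its total speed divided by its total mass:

    # Toy equilibration: eject the speediest teel until the velocity
    # (total speed / total mass) falls to lightspeed. All values invented.
    c = 1.0                                       # lightspeed, normalised
    teels = [0.5 + 0.01 * i for i in range(200)]  # assumed teel speeds
    while sum(teels) / len(teels) > c and len(teels) > 1:
        teels.remove(max(teels))                  # the overfast teel escapes
    print(len(teels), sum(teels) / len(teels))    # 101 teels left, velocity ~1.0

The point of the caricature is that the protophoton does not evaporate: it sheds its fastest teels until speed and mass are in balance, and then stops, with plenty of mass still aboard.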
PHOTONS
DESCRIBED
The
structure of a photon is a simple one. There is a solidbonded core of
teels, surrounded successively by a liquidbonded teelocean and a
gasbonded teelosphere. In the teelcore, the slowest teels are those
along the photon’s axis because their velocity is the same as the
velocity of the photon. The fastest teels are those at the equator
because, while their forward velocity is that of the axis teels, they
are actually moving along a longer helical track.
Within
the teelocean and the teelosphere, teels fall automatically into a
classical movement pattern. Currents of faster teels move from the
north and south poles to the equator where they meet. The increased
pressure caused by the two meeting streams forces them upwards and in
moving up, realspeed is converted to potentialspeed. Faster teels
coming up behind force the upwelling to move sideways so that the two
streams are now moving northward and southward at a slowing pace. At
the poles, the streams sink, converting potentialspeed to realspeed.
Now, at a faster rate, they head once again for the equator.
This
is, of course, an extremely simplified picture of the movement
pattern. A more complex but more realistic picture is provided by our
own Planet Earth. The water molecules in the oceans and the air
molecules in the atmosphere are moving around the planet as it
revolves. At the same time they are moving towards the poles at a
high level and returning to the equator at a lower one. They are not
doing this, however, in a way that is readily apparent. The classic
movement pattern is obscured by hurricanes, tsunamis, El Niños,
jetstreams, etc, so that currents of water and air often seem to be
going the wrong way. Nevertheless, the classic pattern is still
there. There is no good reason for supposing that the classic pattern
is not likewise obscured in a photon’s teelocean and teelosphere.
Inbuilt
into the photon’s structure is the means by which it maintains its
velocity. The velocity of a photon in open space is always
lightspeed. It is still, however, subject to the same
velocity-changing influences that all other particles are subject to.
It can be accelerated or decelerated by the gravitational attraction
of other objects. Likewise, absorbing quantities of faster or slower
teels from the uniflux will also accelerate or decelerate a photon.
Effectively, then, the velocity of a photon does change but
immediately that happens, internal mechanisms operate to
reequilibrate it to lightspeed.
The
principal measure by which we currently identify the difference
between one photon and another is the wavelength. The wavelength
equates to the mass. The more massive a photon is, the denser it is,
the faster it spins – and the shorter is its wavelength.
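That relationship can be put into Current Paradigm numbers. In those terms a photon's energy is E = hc/λ and its mass-equivalent is E/c², so shorter wavelengths do indeed mean more mass:

    # Mass-equivalent (E/c^2 = h/(lambda*c)) of photons at three points
    # on the electromagnetic scale, in Current Paradigm terms.
    h, c = 6.626e-34, 3.0e8
    for name, lam in [("gamma, 0.1 angstrom", 1e-11),
                      ("visible, 500 nm", 5e-7),
                      ("VLF radio, 1 km", 1e3)]:
        print(name, h / (lam * c), "kg")   # ~2e-31, ~4e-36, ~2e-45 kg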
If
the velocity of a photon is pulled above lightspeed by another
gravity source, the spinrate will rise by a corresponding amount.
Consequently, some of the teelcore will liquefy at the equator, some
of the teelocean will gasify at the equator, and the speed of the
teelosphere above the equator will be raised above the
escape-velocity so that some teels can be ejected out into the
uniflux. This will reequilibrate the photon to lightspeed with the
loss of some mass and speed. We detect this as a lengthening of the
wavelength.
The
same process, but reversed, comes into play if a gravity source
forces the velocity of a photon below lightspeed. As the spinrate
decreases, some of the teelosphere will liquefy and some of the
teelocean will solidify. Consequently, the density of the photon will
increase, raising the escape-velocity and allowing teels to be
absorbed from the uniflux and retained. This will reequilibrate the
photon to lightspeed, albeit with more mass and speed than before. We
detect this as a shortening of the wavelength.
Other
versions of the same equilibration process slide into action if the
spinrate of a photon is raised or lowered while passing through
regions of high or low speed uniflux. In attuning itself to a higher
speed uniflux, a photon will equilibrate by losing mass and speed. In
attuning itself to a lower speed uniflux, it will do so by gaining
mass and speed. We detect this by an alteration in the wavelength.
If
a photon hits a larger particle, more often than not, its mass and
speed will be absorbed by that particle. The additional mass and
speed may be enough to disequilibrate the absorbing particle in which
case its own equilibration processes will begin. Where the incoming
photon is very massive, or where there are large numbers of less
massive photons, the absorbing particle can break up before it can
reequilibrate.
The
most massive photons regularly noted are the very short wavelength
gamma photons. These photons are massive enough to cause damage to
other particles, even in small numbers. This is especially noticeable
with living tissue. The least massive photons are the VLF radio
photons. These are so insubstantial that they are normally easily
coped with by the equilibration processes of the absorbing particle. In
large quantities, though, it is still possible for VLF radio photons
to overwhelm a particle’s equilibration processes.
BLACK
HOLES
In
the Current Paradigm, a black hole is a region of space in which the
gravitational field is so strong that nothing can escape from it. The
name comes from the way that even light cannot escape. Because light
cannot escape, black holes are invisible to us and our only means of
detecting them is by observing their interaction with matter that is
outside the black hole.
The
term "Black Hole" was coined by the cosmologist John
Wheeler in 1967 and caught on extremely quickly. Presumably this was
because, like the term "Big Bang", it added some drama to a
subject that was, for laymen anyway, rather yawn-worthy.
Unfortunately, and also like the term Big Bang, the emotive
connotations of the term Black Hole ultimately get in the way of
understanding. In the Big Bang Standard Model, the early universe
didn't actually begin with a bang and in the New Cosmology a Black
Hole isn't actually a hole. So far as Big Bang was concerned, I got
around this by rechristening the beginning of the Universe "Moment
Zero". For Black Holes, I'll condense the two words into one so
that from hereon they are "blackholes". It isn't a perfect
solution but it'll do for me.
Our
inability to "see" blackholes means that, even though the
concept is so firmly established that almost no one doubts their
existence, they are still theoretical. The scientific press
frequently reports the identification of new ones but those
identifications are all circumstantial – the circumstances suggest that
something has been found that fits our current perception of what a
blackhole might be. However, much as we might like it to be
otherwise, a circumstantial identification is not the same as an
absolute proof so, while the lack of any direct observational
evidence doesn't mean that blackholes don't exist, cosmologists
should be taking great care in coming to any conclusions. More care
than some of them actually take.
The
blackhole idea is very old. It dates back at least to the eighteenth
century, with John Michell describing one in a letter to Henry
Cavendish in 1783. In those days the idea was rooted in the
comparative simplicity of Newtonian mechanics. The modern version is
more complex and less comprehensible, having been developed out of
Einstein's General Theory of Relativity and improved by the use of
quantum mechanics. In the modern version, a blackhole is a volume of
space enclosed within an "event horizon". The event horizon
is a kind of surface from within which nothing can escape. Inside the
event horizon, all the mass of the blackhole is compressed by its own
gravity into something that is infinitely small, infinitely dense,
and with its spacetime infinitely curved. This "something"
is known as a singularity.
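For scale, the event horizon of the modern version sits at the Schwarzschild radius, r = 2GM/c². A one-line check for a blackhole of one solar mass:

    # Schwarzschild radius for a one-solar-mass blackhole.
    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
    r_s = 2 * G * M_sun / c**2
    print(r_s / 1000, "km")        # ~3 km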
Logically,
blackholes can have any mass you care to think of but only four mass
ranges are considered to be common. These are:
-
SUPERMASSIVE BLACKHOLES: these weigh in at millions to billions of times the mass of the Sun and are believed to sit at the centres of most galaxies, including our own Milky Way.
-
INTERMEDIATE
MASS BLACKHOLES: these weigh in at anywhere between
800 and 3000 times the mass of the Sun and have been posited as the
source of the very active x-rays that we have been detecting. For a
long time, it was unclear how blackholes in this range could form
but of late it has been theorised that they do so in the heart of
dense star clusters.
-
SOLAR
MASS BLACKHOLES: these weigh in at 1.5 to 3.0 times
the mass of the Sun. They are formed by the gravitational collapse
of stars at the end of their life cycle. To collapse into
blackholes, though, the stars have to be very massive to begin with
– in the order of 20 solar masses and upwards.
-
MICROMASSIVE
BLACKHOLES: technically, these are any blackhole
weighing in at less than the mass of the Sun although most interest
is focused on the very micromassive holes near to the Planck Mass
(the Planck mass is the mass of a blackhole whose Schwarzschild radius, multiplied by π, equals its Compton wavelength – see the sketch after this list). Blackholes
like these are believed to have been produced in large numbers
during the immediate aftermath of the Big Bang.
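The definition quoted in the last item can be checked directly: setting π times the Schwarzschild radius equal to the Compton wavelength and solving for the mass recovers the usual Planck mass.

    import math

    # pi * (2GM/c^2) = h/(Mc)  =>  M = sqrt(h*c / (2*pi*G))
    G, c, h = 6.674e-11, 2.998e8, 6.626e-34
    M = math.sqrt(h * c / (2 * math.pi * G))
    print(M, "kg")                 # ~2.2e-8 kg, i.e. ~1.22e19 GeV/c^2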
So
far as the New Cosmology is concerned, the flaw in the current
version of blackhole theory lies in the presence of singularities. It
has no trouble with the older-style Newtonian concept of blackholes
but the singularities that stem naturally from the Einsteinian
version are an extrapolation too far. This is, of course, not the
first time we have come across the singularity concept. In the
Current Paradigm, today's Universe was extrapolated back in time
until, at the moment of the Big Bang, it too became a singularity.
Nowadays,
most cosmologists have little trouble in accepting the idea of a
singularity, either in the Big Bang or in the centre of blackholes.
Possibly, this is because, through long usage, they have become
comfortable with it so that, while many of them may not believe in the concept completely, it is where the maths lead and it will do until someone (else) comes up with a better idea. There are dissenters, those who strongly disagree with the idea, although I suspect that, as a proportion of practising cosmologists, there are fewer now than there used to be.
While
the number of dissenters is small, the number of enthusiasts for the
concept is considerable. They are enthusiastic, not least
because the singularity idea allows the imagination to roam. There
is, for instance, the theory positing that for every blackhole
singularity there is, somewhere, a white hole singularity, an idea
that perhaps echoes the idea of matter and antimatter. Here, the
blackhole acts as an absorber for any matter that crosses the event
horizon and the white hole acts as a source that ejects matter from
its event horizon, raising the possibility of travelling through the
connecting "wormhole". The white hole might be in this
Universe or it might be in another one, or it might even be in another
dimension.
It
is all nonsense of course. Silliness with nothing more than sets of
fancy mathematics for a justification. In truth, the idea that there
might be a singularity at the centre of a blackhole is the same as
the idea that there might have been a singularity at the beginning of
the Universe. It is glass floor science. It is Zeno's Paradox revised
for the present day. It is a topdown extrapolation made with too
little starting evidence. Just because we don't know of a mechanism
that will stop matter being crushed to infinity, doesn't mean there
isn't one.
And,
of course, by referring back to the first chapters of this analysis,
we do know what the mechanism is. It is rejectivity. If matter is
broken down into its fundamental components as it is inside a
blackhole, it is broken down into teels and as we have already
established, teels have just two properties: gravity and
rejectivity. So, it is gravity that crushes them together and it is
rejectivity that stops them being crushed beyond a specific limit (a
limit that falls a long way short of being crushed infinitely). A
blackhole, therefore, of any size, is an accretion of teels in which
their mutual gravity has drawn them together to the limits of their
rejectivity.
Because
the approach of cosmologists towards blackholes has thus far been
resolutely topdown, and because they have lumbered themselves with
the idea that they contain singularities, blackholes appear to be
exotic creations. The very name reflects this: blackholes are
menacing things, cloaked in darkness, waiting in the depths of
unfriendly space to ensnare anything unfortunate enough to fall in.
However, remove singularities from the picture and blackholes become
quite ordinary. They are not bizarre creations. They are just very
dense lumps of matter, the inevitable consequence of having a lot of
matter in a very small place. The much simpler version of the
blackhole idea, the one first posited over two hundred years ago, was actually pretty much spot-on. Given that the Current Paradigm and the New Cosmology versions of a blackhole are somewhat different, the following definitions will be useful:
-
CURRENT PARADIGM BLACKHOLE: a volume of space enclosed within an event horizon, inside which all the mass is compressed by its own gravity into an infinitely small, infinitely dense singularity.
-
NEW COSMOLOGY BLACKHOLE: an accretion of teels in which their mutual gravity has drawn them together to the limits of their rejectivity.
It
is worth noting that the New Cosmology version makes no mention of an
event horizon. If a New Cosmology blackhole has an event horizon at
all, given that a pair of solidbonded teels could never hope to
capture a photon, it could never be quite the same thing. In the
Current Paradigm, nothing can move faster than lightspeed which
places the event horizon at a specific distance out from a blackhole
of a given mass. In the New Cosmology version, while photons cannot
travel faster than lightspeed, teels can travel at any speed.
Accelerate the teels in a blackhole to a high enough velocity and the
blackhole will evaporate. For all practical purposes, then, there is
no such thing as an event horizon over which nothing can cross.
In
the Universe today, while blackholes can be of any mass, they are
normally only found in one of the four mass ranges. Of the four, this
chapter is only interested in the very micromassive blackholes (or
microholes) that were created immediately after Moment Zero, leaving
the others to be dealt with in later chapters. Those later chapters
will show that microholes are being created all the while today,
along with photons, as part of the equilibration processes of larger
particles.
Immediately
after Moment Zero, the Universe broke up into accretions of teels. As
those accretions began to spin, they developed a structure. At the
heart of each structure was a core of solidbonded teels which, in
line with the above definition, was a blackhole. As to what
subsequently happened to those blackholes, those above a specific
mass/speed equilibrated into photons. Those below the required
mass/speed didn't and were either absorbed by other particles or they
were able to equilibrate with their surrounding uniflux and survive,
perhaps even to the present day.
Nowadays,
it is thought by some that the Universe may be infused with
prodigious numbers of small blackholes. Exactly how small, though, is
a matter for debate. It is believed that the smallest possible mass
for a blackhole is a Planck mass, which is approximately 1.22 × 10¹⁹ GeV/c², but not everyone believes that they
actually do get to be that small. As to their creation, some are left
over from the Big Bang while others are created when cosmic rays
collide with atoms. It has been suggested that blackholes are a
constituent part of atomic nuclei and at least one cosmologist is
proposing that electrons are actually blackholes that have not yet
been formally identified as such. For some cosmologists, small
blackholes are the possible cause of both darkmatter and darkenergy.
Notwithstanding
that scientists want scientific progress to be a rational process,
following logical and well-thought-out steps, much of it is just
fumbling in the dark, looking for a faint chink of light. This is
especially so when the research is being carried out topdown. What is
currently believed/thought/conjectured about small blackholes is a
superb example of what can happen. Ideas that are partly right are
flowing out but, as yet, the ability to put those part-right ideas
together to create a complete and in-focus picture is beyond anyone.
It is only when it is considered bottomup that the picture comes
together, in focus with the clarity of crystal.
Microholes
with a mass less than that of a photon were created immediately after
Moment Zero. Most of those will have been absorbed by larger
particles by collision. However, some of those early microholes will
have survived and are with us today. Those who suggest that they
might be found in darkmatter and darkenergy, within what the New
Cosmology would call teelospheres and the uniflux, are correct
although they are not the principal constituent, which is of course
solo teels, simple and unadorned.
However,
the main contention to come out of all this is that photons, or at
least the cores of photons, are blackholes. They qualify as
blackholes since they meet the above definition and, more to the
point, they behave exactly as blackholes do. It is just the scale
that is different.
THE
REALITY CHECK
Within
the Current Paradigm, there is a comprehensive “statistical”
knowledge of photons: mass, charge, wavelength, frequency, etc. This
knowledge allows us to make a great deal of use of photons, both as
interpretational tools and as a part of mechanisms.
Over
the past century or so, detecting the wavelengths and frequencies of
photons has become increasingly easy for us. We can now readily
produce photons at precise energy levels for a vast range of specific
tasks: for lighting of many different types, microwave ovens, x-ray
machines, airborne laser weapons, and so on. Predicting what a
particular photon will do in any particular circumstance has
long-since ceased to be a mystery.
There
is, however, a gaping hole in this huge bank of knowledge. It is that
no one knows what a photon is. Not properly anyway. Not in the way
that an engineer might know an engine. In the Current Paradigm, a
photon is “a quantum of energy” and energy is “a capacity to do
work” – so a photon is a quantity of capacity to do work. Beyond
that there is nothing. The questions that are still to be answered
are hugely fundamental. Questions like: how do photons “store”
this mysterious capacity to do work, do photons have a structure, are
there mechanisms and processes going on inside photons, do photons
have an inside at all, and so on?
The
shortcomings of topdown analysis are rarely any clearer than here. In
topdown analysis, what is known is extrapolated into what is unknown.
That extrapolation is then confirmed by observation and/or experiment
and becomes a fact from which yet further extrapolations can then be
made. Problems rise up when confirmation seems impossible to achieve.
That is the situation in which the Current Paradigm finds itself
today.
In
bottomup analysis, the most fundamental of all the facts in a case
are drawn together and allowed to interact. If there are enough
facts, the interaction will provoke processes. If the consequence of
those processes resembles reality, it becomes reasonable to assume
that the bottomup analysis is correct. If the consequence does not
resemble reality, the analysis is clearly wrong and will need to be
done again. In that sense, a bottomup analysis is self-proving.
In
the New Cosmology, we have taken a vast and still ball of teels,
introduced prodigious quantities of speed into it, and seen what will
happen. What has happened is that the expanding teel-ball has broken
up into accretions, which have then become protophotons, which have
then become photons. A proportion of these original photons have
survived to the present day, dimly visible to us as the Cosmic
Background Radiation.
Along
the way, much has been explained about the nature of photons: how
they form, their internal structure, their internal processes, how
they manage to move only at lightspeed, how they interact with other
particles, how they can change wavelength, and so on. All the
essential aspects of a photon have been described in this chapter and
they all conform to what we can see about us – and have required no
laws of physics that have not been confirmed many times over.
Especially,
the New Cosmology photon conforms exactly, without any exceptions or
caveats, to the “factual” photon as it is already known. Consider
the following:
-
Interaction: A photon’s momentum, mass, speed, and vergence can be transferred to any particle that absorbs it.
Since
the New Cosmology photons are an exact match for what is known of
real-life photons, and since this chapter provides additional
information that is not found in the Current Paradigm, can this
chapter be said to have self-proved itself? Very possibly – but
that self-proving is not unqualified. As has been said in previous
chapters, it is highly unlikely that the Universe immediately prior
to Moment Zero really was a completely motionless ball of teels. That
means that the ideal photon-creating conditions described in this
chapter never happened. It is not to say that something like it
didn’t happen. As you will see in the concluding chapters,
“something like it” almost certainly did happen. Nevertheless,
you should see this chapter as a necessary but temporary
simplification – not as a description of reality.
CHAPTER
SEVEN
THE
COSMIC BACKGROUND RADIATION
This
chapter is about the “Cosmic Background Radiation” – the CBR –
which is a bombardment of photons that comes at the Earth from every
direction at such a low energy level that it is barely detectable.
The bombardment is believed to date back to the “Recombination
Epoch” which took place 300,000 years after the Big Bang. If this
is so, it means that the CBR photons are the oldest objects in the
Universe today that we are currently capable of detecting.
The
CBR has an honoured place in the history of the Current Paradigm. It
was the detection of the CBR in 1965 that triggered a rapid decline
in belief in the Steady State Theory. Barely a few years later, the
Big Bang Theory had become the mainstream belief, a position it has
held ever since.
FACTS
Whether
or not the Recombination Epoch really existed is unknown. The
Recombination Epoch is an entirely theoretical construct. The Cosmic
Background Radiation used to be a theoretical construct as well,
occurring as a natural consequence of the Recombination Epoch. Now
though, with the CBR having been discovered for real, it is no longer
theoretical.
Because
the existence of the CBR was predicted before it was discovered, its
subsequent discovery is regarded by many as strong circumstantial
proof that there really was a Recombination Epoch. By extension, it
is also taken as strong circumstantial proof that the Big Bang Theory
itself is correct. However, those proofs seem less strong when it is
remembered that the CBR, or something like it, was previously
predicted to exist for reasons that had nothing whatsoever to do with
any Big Bang.
That
there should be a Cosmic Background Radiation, as a consequence of
the Recombination Epoch, was predicted by George Gamow and others in
1948. They thought it would have a temperature of 5K. However,
predictions that a Cosmic Background Radiation would be found, but
due to other causes, date back at least as far as 1896 when Charles
Guillaume predicted a form of CBR with a temperature of 5.6K. Later
predictions of greater or lesser accuracy followed from the likes of
Eddington, Finlay-Freundlich, Regener, and Shmaonov. The real zinger
came in 1941 when Andrew McKellar deduced an almost spot-on
temperature of 2.3K from his observations of the radiative excitation
of atoms.
The
reasons suggested for the existence of the CBR varied from researcher
to researcher. Some attributed it to decaying starlight and others to
“tired” light. McKellar thought it was the “rotational
temperature of interstellar space”. However, there was one
particular way in which all these earlier predictions were similar to
each other and in which they all differed from that of Gamow et al.
Gamow’s
prediction was a measure of the background temperature of the
Universe. The CBR temperature that we measure with the WMAP Probe is
assumed to be the same temperature everywhere in the Universe. All
the earlier predictions claimed that the temperature was localised.
It was a measure of the temperature near to the Earth and it was
assumed that the temperature would be different in other parts of the
Universe: near to the centre of the Milky Way, for example, or far
away from it.
The
truth is that, as of now, nobody yet knows who is right: either
Gamow or those earlier researchers. The general presumption today,
because the Big Bang is the centrepiece of the Current Paradigm, is
that it is Gamow but that is a presumption without a solid
foundation. It is assumed that the results of the WMAP Probe are
applicable to distant temperatures because that ties in with the
predictions of the Big Bang Theory. However, the truth is that the
probe only tells us the temperature of the CBR photons as they are
received by it. In other words, it only measures near-Earth
temperatures. The temperature of the CBR in other locations may, or
may not, be different and we will only be able to prove matters one
way or the other when we have devices that can measure the real
temperature of the CBR in places other than the near vicinity of the
Earth.
Within
the Current Paradigm, there is such a web of suppositions surrounding
the CBR that it tends to be forgotten how sparse the facts really
are. What we know is that it is an electromagnetic radiation with a
thermal 2.725 kelvin black body spectrum which peaks in the microwave
range at a frequency of 160.4 GHz – which corresponds to a wavelength of 1.9 mm – and that we observe it to be isotropic to
roughly one part in 100,000. And that, pretty much, is it.
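Those few facts are at least internally consistent, as a quick check with Wien's displacement law (in its frequency form, peak frequency ≈ 58.79 GHz per kelvin) confirms:

    # Consistency check on the quoted CBR figures.
    T = 2.725                            # CBR temperature, kelvin
    nu_peak = 58.79e9 * T                # peak frequency, Hz
    print(nu_peak / 1e9, "GHz")          # ~160.2 GHz, matching 160.4 GHz
    print(3.0e8 / nu_peak * 1000, "mm")  # corresponding wavelength, ~1.9 mm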
However,
having pointed out how few the facts are, it is also worth pointing
out that those facts do not exist in splendid isolation. They didn't
spring out of nothingness. It will greatly help our understanding to
take a good look at the background and circumstances that underlie
those facts.
TEMPERATURE
The
temperature of the CBR is 2.725 kelvin. Temperature is a physical
property of a system. The higher the temperature, the hotter the
system is – and the lower the temperature is, the colder the system
is. On that basis, the CBR is very cold, being only a smidgeon above
absolute zero.
There
is more to it than that, however. If we move down to molecule-level,
temperature is seen to be the result of the motion of particles. The
more motion there is, the higher the temperature is. In a block of
ice, the water molecules are frozen into immobility. In a kettle of
boiling water, the water molecules are in furious motion. However,
even in that block of ice, there are degrees of “frozenness” and
thus some degree of mobility. The frozen water molecules are locked
into a matrix but within that matrix there is space to move – and
the colder the ice, the less inclined the water molecules are to
move. Motion, especially in the case of frozen water molecules, can
be rotational or vibrational, as well as "translational".
Our
principal way of detecting and measuring temperature is by
assessing the quantity and the energy level of photons. In the
Current Paradigm, photons equate to "energy". A fire emits
photons which we feel on our skin. Our skin, being quite sensitive,
can measure the quantity and energy level of those photons (at least
to the extent of telling the brain that staying and getting warm is a
good idea – or that running away is a better option). However,
skin only detects photons within a fairly limited range of energy
levels. At most energy levels, the skin only detects the photons when
the quantity has grown sufficiently large to cause damage.
Our
eyes can detect photons at a different range of energies than the
skin but the range is similarly narrow. For this reason, we have
developed many sophisticated devices which are capable of detecting
photons no matter where on the electromagnetic scale they may be.
Photonic
energy levels are ordinarily measured as "wavelengths" or
"frequencies". The wavelength scale and the frequency scale
look different but their end results are the same so, for this
review, we deal only in wavelengths. Wavelengths can vary enormously.
The shortest wavelengths, those of the highly energetic gamma
photons, can be as short as 0.1 Å. The longest wavelengths, those of
the insubstantial and nearly undetectable VLF radio photons can be
over a kilometre.
The
photons that make up the CBR have a long wavelength. They don't all
have the same wavelength, however. Their wavelengths stretch up the
scale from the bottom where the extremely long wavelengths dwell to
about a third of the way up. What this means is that the low
temperature of the CBR, as we measure it today, is due to its photons
being a mix of "not very hot" and "thinly spread".
There
is a pattern to be seen in the wavelengths of the CBR photons. The
wavelengths are not random. Indeed, they are the very opposite of
random. When the wavelengths are plotted against the electromagnetic
scale, it can be seen that they are coming at us in a pattern that is
known as a "blackbody curve".
BLACKBODY
In
physics, a blackbody is an object that absorbs all the photons that
hit it. Not one of the photons is reflected by it and none will pass
through it. The absorbing of the photons, however, heats the object
and once it has reached a particular temperature, it will begin to
radiate photons as it rids itself of that heat. The radiated photons
are not of a single wavelength and nor are they radiated at random.
They conform to a specific pattern known as a blackbody curve.
A
blackbody curve shows the intensity with which energy is being
radiated and the height of the wavelength peak. It is named after the
form it takes when plotted on the electromagnetic scale. Photons
being radiated from a blackbody will maintain this same classical
curve shape on the scale, no matter how much energy is being
radiated. However, while the curve may remain the same, the more
energy there is being radiated, the higher up the scale the
wavelength peak will be.
Everything
is a blackbody to a greater or lesser extent, although mostly it is
lesser and often it is so much lesser that a radiation is
unrecognisable as a blackbody curve at all. Even with the better
blackbodies, the curves still tend to be deformed. The radiation that
comes closest to matching the classic blackbody curve is the Cosmic
Background Radiation which is so close as to be almost perfect. It
is, however, a radiation with an extremely low intensity and
consequently its wavelength peak is, at 1.9 mm, a long way down the
electromagnetic scale.
In
the Current Paradigm, the blackbody which emitted the CBR was the
Universe itself at the time of the Recombination Epoch. At that
moment, and for the first time, photons were able to move without the
inevitability of colliding with, and being absorbed by, matter
particles. The intensity of the radiation at that time was immense.
All the energy that is in the Universe today was then confined within
a relatively small area. Consequently, the wavelength peak was very
high.
What
has happened since that time is that, as the Universe has expanded,
the same amount of energy has been spread over a progressively larger
area. During that time, the blackbody curve of the CBR has remained
as near to perfect as makes no difference. However, with the same
number of photons spread over an ever-wider area, the photon density
has been falling so that their intensity is now very low. At the same
time, the lower density has allowed the wavelengths to expand.
Consequently, the wavelength peak of the CBR has fallen to its
current low level.
ISOTROPY
The
current CBR may be extremely faint but we are now becoming quite good
at detecting it and our latest pictures of it have quite a bit of
detail. One of the things that stands out very plainly in our latest
pictures is that the CBR, no matter where we might look, appears to
be much the same everywhere. It is "isotropic".
Or
is it? To be truly accurate, the CBR is isotropic on a large scale.
On a large scale, the CBR photons come at us from all directions in a
near-perfect blackbody curve at 2.725 kelvin. On a smaller scale,
however, there are inconsistencies in both density and temperature.
The differences are not great. We are, after all, examining something
that is well-nigh undetectable in the first place so the
inconsistencies are not proclaiming their existence very loudly –
but they are there.
In
the augmented photographs of the CBR, the inconsistencies are plain
to see. Instead of a smooth and isotropic texture, the "surface"
of the CBR looks not unlike the outside of a human brain with a
complex network of rilles picking their way between low mounds. On a
large scale, the surface of rilles and mounds looks the same
everywhere but, on a smaller scale, no two rilles and no two mounds
are exactly alike.
The
Current Paradigm has no problem in reconciling the large scale
isotropy with the smaller scale anisotropy, not least because both
conditions were predicted to be so before they were actually found.
Long before Penzias and Wilson first discovered the CBR, Gamow and
others were not only suggesting that it would exist but that it would
be isotropic on a large scale due to the circumstances of its
creation – and anisotropic on a smaller scale due to the later
formation of galaxies.
COLOURSHIFTING
Most
of what we know about the Universe has come from our being able to
see – and being able to see requires some means of detecting and
interpreting photons. In the first place, the detecting was done with
our eyes and the interpreting was done with our brains. Now, we are
augmenting our eyes with ever more sophisticated detecting machines
and augmenting our brains with ever more sophisticated computers.
Ironically, for all that new sophisticated hardware, our ability to
understand the Universe, as opposed to merely being able to see it,
still largely depends on just one fact: that photonic wavelengths
can alter by way of a process commonly known as "colourshifting".
Colourshifting
works like this: at the moment of their creation, photons have a
specific wavelength that depends entirely upon the circumstances of
their creation. Hydrogen atoms, for instance, radiate photons in a
number of specific wavelengths and all other atoms can be similarly
identified by the wavelengths of the photons they radiate. However,
these wavelengths are not permanently fixed and over time, as the
photons voyage through the Universe, they will change.
These
changes are known as "colourshifting" because in the
visible part of the electromagnetic scale, they are seen as movements
towards either the red or blue end of the spectrum. In the Current
Paradigm, there are currently thought to be four types of
colourshifting which are:
-
DOPPLER
COLOURSHIFTING: if a photon
source is moving away from an observer, the observer will see its
photons as redshifted. If the photon source is moving towards an
observer, the observer will see the photons as blueshifted.
-
RELATIVISTIC
DOPPLER COLOURSHIFTING: if
the photon source is moving at close to lightspeed, additional
doppler colourshifting will be caused by the dilation of time.
Perversely, this colourshifting is also apparent when the photon
source is moving across the observer's line of sight, which means that the
redshifting of the source's photons doesn't necessarily mean that
the source is moving away from the observer.
-
COSMOLOGICAL
COLOURSHIFTING: this is
colourshifting due to the expansion or contraction of space. Given
that the Universe is currently expanding, all objects are moving
away from each other and therefore all photons are doppler
redshifted to any observer – or should be although in practice the
intervention of the four forces means that smaller objects, or large
objects that are close-by, are not necessarily moving apart from ALL
other objects and do not therefore appear to be redshifted to all
observers. The principles underlying cosmological colourshifting
were first formulated by Edwin Hubble and his colleagues which is
why it is often known as the "Hubble Shift".
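For reference, the standard Current Paradigm formulas behind the first three types can be sketched with illustrative numbers (the velocities below are invented for the example):

    import math

    c = 3.0e5                          # lightspeed, km/s
    v = 600.0                          # assumed recession velocity, km/s
    z_doppler = v / c                  # classical Doppler shift, v << c

    beta = 0.5                         # assumed v/c for the relativistic case
    z_rel = math.sqrt((1 + beta) / (1 - beta)) - 1   # relativistic Doppler

    z_cosmo = 2.0 / 1.0 - 1            # cosmological: (scale now/scale then) - 1

    print(z_doppler, z_rel, z_cosmo)   # 0.002, ~0.732, 1.0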
Unlike
photons created within atoms, the photons of the CBR were not created
at a specific wavelength. Rather they were created in a specific
range of wavelengths, and in each wavelength in a specific density,
that corresponded exactly to a blackbody curve. It is this blackbody
curve that has been colourshifted over the past 13 billion
years.
Since
the creation of the CBR, its blackbody curve has been subject to all
four forms of colourshifting. However, the dominant one has been the
cosmological colourshifting. Over 13 billion years, the space that is
the Universe has been expanding so that all the objects within it,
including the CBR photons, have been moving apart from each other.
They have been progressively redshifted, with the wavelength peak
moving progressively down the wavelength scale.
That
colourshifting happens is a fact. What the colourshifting in photons
tells us, however, has not been established with any degree of
certainty. Within the Current Paradigm, there are interpretations
that are quite strongly held. They are, however, very much unproven
and alternative interpretations, bottomup interpretations, will be
forthcoming later in this chapter.
ETHER
Underlying
the cosmological colourshifting concept is the idea that space can
expand or contract. As has been mentioned in earlier chapters, this
is a counterintuitive idea. Our instincts tell us that matter and
space are different: that matter is something and space is nothing:
and that space, being nothing, can only have properties that are
defined by the matter that resides within it.
Counterintuitive the idea might be, but the cosmological community has a long history
of believing in it in one form or another. In its nineteenth century
form, it was believed by many that space was infused with something
called the "luminiferous ether". Nor was this an
unreasonable belief, given the paradigms of the day. Because light
often behaved as a wave, and because all other waveforms required a
medium through which to move (water for ocean waves, air for sound
waves), it was logical to suppose that waves of light should also
have such a medium – the luminiferous ether.
In
1887, two physicists called Michelson and Morley performed an
experiment using an early interferometer. Their assumption was that
their experiment would demonstrate the existence of the ether but it
didn't. Rather, it demonstrated the exact opposite. However, since
belief in the existence of the ether was strong, so too was disbelief
in the results. Over the next few years, the experiment was performed
again and again, in many different ways and with many different kinds
of apparatus. All that happened was that, as the years went by, the
same result became ever more exact and ever more reliable. There was
no luminiferous ether.
Besides
proving the non-existence of ether, the Michelson/Morley experiment
produced an oddity that stretched the imaginations of the time to
near breaking point. It was that the speed of light was always the
same, no matter what the velocity of the observer might be. This was
a bizarre finding, to say the least, and it was not until 1905 that
Einstein's Special Theory of Relativity showed how the velocity of light could remain constant without any medium for it to move through.
If
the Michelson/Morley experiment hadn't already killed off the idea of
the luminiferous ether, Special Relativity should have rendered it
more dead than a piece of roadkill on the M1. However, some ideas are
just too strongly held to be easily thrown away and, ironically, it
was Einstein himself who reintroduced it, albeit, this time in a new
set of clothes. In 1915, the General Theory of Relativity posited
that space itself took on the physical properties of the luminiferous
ether by being able to curve, stretch, shrink, and deform. Einstein
was well-aware that he had just indulged in a major volte-face and in
1920 justified himself as follows:
.....
the Special Theory of Relativity does not compel us to deny ether. We
may presume the existence of ether. Recapitulating,
we may say that, according to the General Theory of Relativity, space
is endowed with physical qualities. In
this sense, there exists an ether.
The
current status of the ether idea is pretty much as Einstein left it.
While the nineteenth century concept has been long since dismissed,
the "have your cake and eat it" General Relativity
interpretation continues in a sort of half life. It is accepted as
likely to be so but almost no work is being done on it: the
scientific equivalent of an audience voting with its feet.
Having
said that, there is one area in which research into the ether is
strongly underway. Einstein suggested that major gravitational
disturbances, such as those arising from colliding neutron stars,
might create waves in the ether. In the hope of picking up these
waves, a number of hugely sensitive and hugely expensive detectors
have been set up although, as yet, not one gravity wave has been
detected.
Ironically,
there are grains of truth littered through the ether story and all
that is needed to bring them together is the New Cosmology. Space is
indeed infused with a medium which acts very much like the
luminiferous ether. It is the uniflux. And the density and speed of
the uniflux is influenced by the presence of gravity concentrations
such as stars and galaxies so as to form teelospheres. If we were
able to see these teelospheres, gathered around their matter
concentrations, their appearance would be remarkably like the
drawings of steel balls and rubber sheets that Einstein used to show
how space would curve around matter – effectively, Einstein had the
right idea but was pitching it at too fundamental a level. Einstein
also had the right idea but pitched at too fundamental a level when
he spoke of gravity waves moving through (empty but curvable) space.
Waves won't pass through empty space but they will pass through the
uniflux. The uniflux is a bonding of teel particles in the same way
that water is a bonding of water particles and air is a bonding of
air particles. Just as waves can be induced to move through water and
air, they can be induced to move through the uniflux. Given that we
are currently unable even to detect the uniflux itself, detecting
waves in it is never going to be easy but, who knows, one day one of
those gravity wave detectors might "hear" the echoes of a
really big explosion which (hopefully) will be a long way away. It
just won't be a gravity wave, that's all.
STARTING
AGAIN
What
the Current Paradigm has to say about the Cosmic Background Radiation
is that some 300,000 years after the Big Bang, vast numbers of
photons were released which we are receiving on Earth today from all
directions. At the time of their release, the photons conformed in
density and wavelengths to a perfect blackbody curve in which the
wavelength peak was very high up the electromagnetic scale, possibly
even at the very top. Over the succeeding 13 billion years, the
Universe has expanded and consequently both the wavelength peak and
the density have fallen, redshifting the CBR to a level that is barely
detectable.
What
does the New Cosmology have to tell us about the CBR? Does it agree
with the Current Paradigm or does it not? The answer is that it does
– and it doesn't. So far as the broad picture is concerned, the two
tell us much the same story – the CBR photons date back to very
early in the life of the Universe and since that time have been
redshifted to near-invisibility. The difference is in the details. Or
to be more accurate, the difference is that the Current Paradigm
picture lacks details in that it is a broad picture and not much more
than that. In contrast, the New Cosmology picture is awash with
detail and carries with it the potential for, perhaps, eventually
filling in all the details there are.
The
reason for the lack of detail in the one and the mass of detail in
the other is the same one that has already been given in earlier
chapters. Because the Current Paradigm picture was deduced by running
time backwards from the present day, it ran out of any facts that
might anchor it to reality long before it reached the distant past.
Inevitably, such conjectures are broad brush conjectures. The New
Cosmology picture, on the other hand, because it is a bottomup
picture, already has a great deal of detail in place before it
settles to considering the origin of the Cosmic Background Radiation.
Especially, it has already established what photons really are and
how they will behave in given circumstances.
Here
follows the New Cosmology picture of the origin of the CBR and of its
current state.
THE
EARLY UNIFLUX
Nothing
within the Universe can be completely independent of its surroundings.
This is certainly true of photons. The state of any photon is, in
large part, conditioned by the uniflux through which it is moving.
For this reason it is worth taking a look at the origin and nature of
the early uniflux.
When
the Universe broke up into accretions of teels, there would have been a residue of solo teels left, milling about in between. This
residue would have been reinforced by a rubble of "mini"
and "micro" teel accretions: accretions too small to ever
have any chance of evolving into protophotons. There would have been
yet more rubble resulting from the larger accretions knocking bits off each other during collisions. In other words, a form of uniflux
existed almost from the very beginning.
However,
the uniflux proper did not begin to form until the teel accretions
began to spin themselves into protophotons. Once this began to
happen, the space between the protophotons rapidly became filled with
vast quantities of solo teels – and not just ordinary teels either.
The teels being ejected by the protophotons were the very fastest
that they had.
The
Universe, at this time, was still small. Moment Zero was no more than
a couple of seconds ago, possibly less. If the fastest teels were
moving at (say) ten times lightspeed, the Universe would still have
been less than twenty five billion kilometres in diameter – a size
which may be enormous by our Earth-scale standards but which was
minute when compared with how big the Universe is today. This meant,
of course, that all the mass and energy that we have in the Universe
today was crammed into a volume that was relatively tiny.
Consequently, the amount of space between the protophotons was
limited and this resulted in a uniflux that, as well as being
extraordinarily fast, was extraordinarily dense.
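The arithmetic behind that bound is simple enough to set out, taking the chapter's notional starting diameter and the assumed teel speed at face value:

    # Rough bound on the Universe's size two seconds after Moment Zero.
    c_km = 3.0e5                   # lightspeed, km/s
    d0 = 1.0e9                     # notional starting diameter, km
    t = 2.0                        # assumed seconds since Moment Zero
    d = d0 + 2 * (10 * c_km) * t   # both sides growing at ten times lightspeed
    print(d, "km")                 # ~1.01e9 km -- well under 25 billion km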
It
was also chaotic. The structure of a protophoton (and of a photon,
for that matter) dictates that it ejects its excess teels primarily
from above the equator. After the protophotons had formed, their
vectors became increasingly less ordered due to collisions and
gravitational disturbance. This meant that their equators could be
pointing in pretty much any direction and this was reflected in the
chaotic condition of the uniflux with teels moving every which way,
constantly colliding, constantly changing direction, and constantly
exchanging speed.
This
may have been chaos but it was not total disorder. Within the
uniflux, a pattern was forming due to the consequences of collision
mechanics. Random collisions produce random results but if you
confine the colliding objects in some way, the results are no longer
entirely random. In this case, the teels of the uniflux were confined
by their mutual gravity. Because of this, speed moved outwards
towards the surface of the Universe ball in the same way that
shockwaves move away from the site of an atom bomb explosion.
The
Universe took on, in a vestigial form, the structure that it still
has today. It became an expanding teelosphere in which the teels with
the greatest totalspeed were towards the surface and those with the
least were towards the centre. Within this expanding uniflux, there
was a core of protophotons which was also expanding although at a
slower rate.
-
The
uniflux is the Universe's teelosphere.
-
At
this early time, the uniflux was expanding at many times lightspeed.
-
But
it was slowing down due to the Universe's gravity.
-
With
realspeed being converted to potentialspeed.
-
The fastest uniflux teels were nearer to the surface of the Universe.
-
The
slowest uniflux teels were nearer the centre.
PROTOPHOTON
EQUILIBRATION
Before
the Universe broke up into teel accretions, all the teels were
heading outwards from the centre. Consequently, when it broke up, all
the teel accretions were likewise heading outwards. However, this
tidiness didn't last long because the break up could never have been
100% clean. All it required was for a few of the accretions to be on
rogue vectors for the whole of the Universe to become chaotic. Just
one rogue vector would have resulted in two or more accretions
colliding. From those collisions would have come more rogue vectors
and more collisions. And from those yet more, and from those yet
more. Given the incredible speed at which the accretions were moving,
and the density of their packing, collisions and rogue vectors would
have quickly spread right through the entire Universe.
From
a Universe expanding harmoniously, the Universe had become one that
was expanding chaotically. Initially, each of the teel accretions had
been moving on a course that was taking them directly outwards from
the centre of the Universe. Now, due to the collisions, the course of
every clump was an ellipse which, if undisturbed by further
collisions, would take it right round the Universe. The lack of
disturbance was a bit of wishful thinking though. Such was the
density of their packing that collisions and changes in vector and
velocity would have been constant. Every moment, a new ellipse, so to
speak.
Along
with putting the accretions into elliptical orbits, and keeping the
orbits very short, the collision activity had another effect. It set
the accretions spinning and that led to the ejection of teels out
into the uniflux. The mix of spin and of the increasing density of
the uniflux began to reorder the accretions internally and give them
a proper structure: a core, a teelocean, and a teelosphere. Soon,
those accretions which had enough mass would become protophotons.
There
was a fundamental difference between the accretions and the
protophotons they would become. Accretions could spin, they could
have an internal structure, they could be ejecting and absorbing
teels, but they were not yet protophotons and would not be until they
had equilibrated.
In
an equilibrated protophoton, the realspeed of the teels immediately
inside the surface of a protophoton is the same as the realspeed of
those immediately outside in the uniflux. In this condition, the
number and speed of any teels being absorbed from the uniflux will be
matched by the number and speed of those being ejected.
When
the teel accretions first formed, their velocity and the velocity of
the uniflux through which they moved, was much the same. Then they
began to spin. The act of spinning did two things: it slowed the
accretions down and, because they were now ejecting their fastest
teels, it accelerated the uniflux. The accretions soon came to
resemble protophotons/photons in structure with a core, teelocean,
and teelosphere, but they were not yet protophotons because they were
not equilibrated. The uniflux teels immediately outside their
surfaces were moving faster than those immediately inside.
Automatically,
the equilibration processes ground into action. The teels being
absorbed from the uniflux were faster than the fastest teels already
possessed by the teel accretions. These plunged into the teelosphere.
Some may have plunged through it into the teelocean. Some may even
have got through the teelocean to the core. Wherever the teels came to lodge, they would have raised the teelspeed of the whole
accretion, from centre to surface. Even those that only came to lodge
in the teelosphere would, through collision mechanics, eventually
raise the speed of the core.
Raising
the speed of the core meant increasing the spinrate and this, in
turn, increased the teel ejection rate and thus reduced the teelmass.
It also, because the teels being ejected were so much faster, had the effect of raising the speed of the uniflux yet further. Thus was set in train a progressive cycle in which the increase in the accretion's
spinrate fuelled an increase in the speed of the uniflux, which in
turn fuelled an increase in the accretion's spinrate, which in turn
fuelled an increase in the speed of the uniflux, and so on.
Both
the uniflux and the accretions were accelerating. This, though, was at the cost of the clumps' teelmass which was being progressively
whittled away. Clearly, if this were to continue, the teel accretions
would carry on whittling themselves away to nothingness – and the
Universe would become one vast uniflux.
That it didn't happen was because of the changing character of the uniflux. The accretions were ejecting teelmass and teelspeed out into
the uniflux. This was producing a uniflux that was increasingly fast
and increasingly dense. The density, however, had the effect of
pushing the fastest teels outwards in the direction of least
resistance, out into the wide open space beyond the expanding sphere
of teel accretions, leaving behind a progressively slower and less
dense uniflux for the accretions to move through. At the same time,
the uniflux as a whole, was being decelerated by the gravity of the
Universe.
So,
from first accelerating, the uniflux through which the accretions
were moving was now actually decelerating. This affected the clumps' ejection/absorption imbalance. Consequently, the steady loss of both
teelmass and teelspeed was becoming progressively less and soon the
point was reached where the mass and the speed of the teels being
ejected out into the uniflux was the same as the mass and speed of
the teels being absorbed. This was the moment of equilibration. This
was the moment when the teel accretions became protophotons.
Once
achieved, equilibration was a self-maintaining condition. If a
protophoton moved from a region of slow uniflux to a faster one, the
speed of the teels being absorbed increased and this raised the
protophoton's spinrate. The raised spinrate then ejected enough teels
to balance the protophoton's spinspeed with the faster speed of the
uniflux. Conversely, moving into a region of slower moving uniflux
would reduce the protophoton's spinrate and lower the number of teels
being ejected, thus also maintaining the balance between the
spinspeed of the protophoton and the speed of the uniflux.
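The self-righting behaviour just described is, in effect, a negative feedback loop. Here is a minimal sketch in Python (the gain constant and the sample speeds are illustrative assumptions, not quantities from the text) showing how any gap between a protophoton's spinspeed and the local uniflux speed closes of its own accord:

    # Toy model: spinspeed relaxing toward the local uniflux speed.
    # The gain k is an illustrative assumption; units are arbitrary.
    def equilibrate(spinspeed, uniflux_speed, k=0.5, steps=10):
        for _ in range(steps):
            # Faster uniflux -> faster absorbed teels -> spinrate rises;
            # slower uniflux -> slower absorbed teels -> spinrate falls.
            spinspeed += k * (uniflux_speed - spinspeed)
        return spinspeed

    print(equilibrate(3.0, 5.0))   # climbs toward 5.0
    print(equilibrate(7.0, 5.0))   # falls toward 5.0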
Within
the expanding uniflux, there was now an expanding core of
protophotons. However, it was not a simple core. It already had a
form of complexity: a form with which we are already familiar.
Collisions between the protophotons would have produced the same
effect that they produced in the teels of the uniflux. The collisions
pushed speed outwards so that the fastest protophotons were towards
the surface of the protophoton core and the slowest were towards the
centre.
Also,
like the expanding uniflux, the expansion-rate of the core of
protophotons was falling. It was falling because of the Universe's
gravity – and it was falling because the protophotons were now
moving in ellipses around the Universe, rather than directly out from
the centre of it.
-
The
Universe was a uniflux, expanding much faster than lightspeed.
-
Within
the uniflux there was a core of teel accretions, also expanding
faster than lightspeed.
-
The
uniflux and the teel accretions were attuned.
-
Collisions
between teel accretions set them to spinning.
-
The
spin of the teel accretions ejected fast teels.
-
The
uniflux accelerated and the teel accretions decelerated.
-
The
uniflux and the teel accretions were no longer attuned.
-
The
teel accretions were losing teelmass and teelspeed.
-
Meanwhile,
the speed of the uniflux changed from overall acceleration to
deceleration.
-
Both the uniflux and the teel accretions were now decelerating – but at different rates.
-
The
speed of the accretions and the uniflux converged and they became
equilibrated.
-
The
teel accretions were now protophotons.
-
The
Universe was still expanding faster than lightspeed.
-
It
was now a uniflux inside which was a core of protophotons.
-
The
protophoton core was also expanding faster than lightspeed although
at a slower rate than the uniflux.
-
The
fastest protophotons were nearer the surface of the protophoton
core.
-
The
slowest protophotons were nearer the centre of the protophoton core.
EQUILIBRATION
There
is a protophoton phase in the creation of all photons. Currently,
photons are created during either the equilibration or the decay
processes of larger particles. In both cases, the protophoton phase
doesn't last long, a tiny fraction of a second at most, because the
distances being travelled, and the amount of time taken to do the
travelling, are small. Most current photons come from nucleons and
those are very tiny objects. By contrast, the Universe of 13 billion
years ago, from which the CBR photons came, was very big.
Inevitably, then, the protophoton phase lasted a lot longer.
The
processes by which protophotons evolve into photons were described
detail in the last chapter so we'll not go deeply into that again.
However, it is worth recalling that the principal difference between
a protophoton and a photon is that in the latter the spinspeed has
stabilised at a velocity that is, in open space, a shade under
300,000 kilometres per second and it will maintain this velocity no
matter what might be the speed of the uniflux through which it is
moving. By contrast, a protophoton can be moving at any speed as long
as it is equilibrated to the uniflux through which it is moving –
and this velocity will vary as the speed of the uniflux varies.
The faster the uniflux is moving, the faster a protophoton will move
– and conversely.
When
the CBR protophotons first equilibrated, the speed of the uniflux was
many times lightspeed. This meant that the velocity of the
protophotons was also higher than lightspeed. It also meant that they
could not become photons. They could only do that if the speed of the
uniflux fell sufficiently to allow the velocity of the CBR protophotons to fall to lightspeed.
The
speed of the uniflux was falling although this was not a simple
slowing down. It was yet another multiprocess at work. Consequently,
the uniflux was not slowing down at the same rate in all parts of the
Universe. It was falling everywhere because the gravity of the
Universe was slowing it down as it raced outwards. At the same time,
it was slowing down in some places and speeding up in others due to
the collision mechanics-inspired movement of speed outwards from the
centre of the Universe. Effectively, the nearer the centre of the
Universe the uniflux was, the slower it was moving. Towards the
surface of the Universe, as a consequence of the multiprocess, the
uniflux was slowing down but the rate at which it was slowing was
much less than at the centre.
So
far as the protophotons were concerned, this put the slowest near to
the centre of the Universe and the fastest towards the surface. Thus
it was that the central protophotons slowed to lightspeed first and
equilibrated into photons first. Equilibration then spread outwards
from the centre to the surface. Exactly how long the spread would
have taken to get from the centre to the surface – who knows? I
certainly don't but my guess is that it took quite a while. The
Universe was many billions of lightyears in diameter when
equilibration began and the variation in uniflux speed from the
centre to the surface would have been considerable. It might even be
that it took thousands, or hundreds of thousands, or even millions of
years for all the CBR protophotons to equilibrate into photons, for
equilibration to spread all the way out. (I assume the spread reached
the surface a long time ago – but I could be wrong.)
-
The
Universe is an expanding uniflux.
-
Within
the uniflux, there is an expanding core of protophotons.
-
The
speed of each is faster than lightspeed.
-
Each
is decelerating due to the gravity of the Universe.
-
Due
to collision mechanics, the fastest uniflux and the fastest
protophotons are nearer the surface of the Universe.
-
The
decelerating speed of the uniflux decelerates the protophotons.
-
When
the protophotons are decelerated to lightspeed, the deceleration
stops.
-
They
have equilibrated as photons.
-
Equilibration
happens first at the centre of the Universe.
-
It
then spreads outwards toward the surface.
-
Due
to the size of the Universe, and to the considerable variation in
uniflux speed from the centre to the surface, it may have taken some
time for all the CBR protophotons to become equilibrated.
THE
CBR BLACKBODY CURVE
The
CBR photons come at us from all directions in a range of wavelengths
which, when plotted against the electromagnetic scale, correspond as
nearly as makes no difference to a perfect blackbody curve. Why? How
did exactly the right quantities of photons get to be in exactly the
right wavelengths? And how long have they been like that? Was it
right from the moment of their first equilibration or have they been
subject to some subsequent circumstance which has moulded them into
this form? For the answers, we need to go back, almost to the very
beginning.
If
you look at a dried-out mudflat you will see something we have come
across before. What you will see is a form of isotropy. As the water
has evaporated, the mud has attempted to draw itself together but has
been prevented from doing so by a number of factors. Consequently it
has broken up into smaller pieces that resemble crazy-paving. The
pieces are not all the same in that they come in a range of sizes –
and there is a maximum size. Within the range, the differing sizes
are distributed evenly across the mudflat and, when the crazy-paved
mudflat is looked down on from a height, it really does look much the
same everywhere.
When
the Universe broke up into teel accretions, much the same happened.
The accretions were of many different masses but those masses were
not random. They were within a specific range which was dictated by
factors such as bonding and vergence. There was a maximum mass, and
below that mass the quantities of accretions for any given mass were
predictable. They were predictable by reference to a blackbody curve.
There is no secret as to how this happened. It was just another form
of random number generation, not unlike that found in roulette – given enough spins of the wheel, each of the 36 numbers on a roulette table (ignoring the zero) will come up extremely close to 1/36th of the time.
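The roulette analogy is easy to check numerically. A short Monte Carlo sketch in Python (a plain simulation, nothing from the text beyond the analogy itself) spins a 36-pocket wheel many times and shows every pocket settling toward 1/36th of the spins:

    import random
    from collections import Counter

    SPINS = 360_000
    counts = Counter(random.randint(1, 36) for _ in range(SPINS))

    # With enough spins, every pocket's share converges on 1/36 ~ 0.0278.
    for number in (1, 18, 36):
        print(number, counts[number] / SPINS)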
The
resemblance between the drying-out of a mudflat and the breakup of
the Universe into teel accretions is interesting. Do the sizes of the
mudflat pieces also conform to a blackbody curve? Very possibly. Many
of the circumstances of their formation are the same as those that
created the teel accretions. However, there are also differences that
could affect the purity of the curve, perhaps enough to render the
curve unrecognisable: the gravitational confinement of the mudflat,
variations in what is under the mudflat, the constitution of the mud,
the water evaporation rate, and so on.
When
the teel accretions first formed, they did so conforming to a
blackbody curve but these were turbulent times and the accretions now
had to go through some dramatic changes. In order to become
protophotons, they had to shed teelmass. So much teelmass indeed that
the lower-mass accretions would have evaporated all their teels away
into the uniflux. Then, once the accretions had become protophotons,
they had to shed yet more mass in order to equilibrate into photons.
Yet, notwithstanding all these changes, they remained locked onto the
blackbody curve. Since every clump and every protophoton was
subject to exactly the same factors, they all shed their teelmass at
the same proportional rate – the cosmological equivalent of a line
of chorus-girls in a Busby Berkeley musical.
The
key factor in distinguishing one photonic blackbody curve from
another is the wavelength peak, as is shown when the curve is related
to the electromagnetic scale. One end of the curve will be at or near
the base of the scale, at the red end, with the very long wavelength
radio photons. The other end, the wavelength peak, can be pretty much
anywhere on the scale. Currently, the wavelength peak of the CBR is
at the 1.9mm mark, among the microwaves.
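For reference, the 1.9mm figure can be recovered from the measured 2.725 Kelvin by standard blackbody arithmetic. The sketch below uses the Current Paradigm's Wien displacement laws (standard physics, nothing specific to the New Cosmology); note that the peak lands near 1.9mm when the curve is plotted per unit frequency, and nearer 1.1mm when plotted per unit wavelength:

    # Wien displacement laws for a 2.725 K blackbody (standard physics).
    T = 2.725                      # CBR temperature, Kelvin
    C = 299_792_458.0              # lightspeed, metres per second

    # Frequency form: peak frequency = 58.789 GHz per Kelvin.
    nu_peak = 58.789e9 * T
    print(C / nu_peak * 1000, "mm")    # ~1.87 mm - the 1.9mm quoted above

    # Wavelength form: peak wavelength = (2.898e-3 metre-Kelvin) / T.
    print(2.898e-3 / T * 1000, "mm")   # ~1.06 mm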
The
wavelength peak equates to the most massive photons on a particular
blackbody curve. When the CBR photons first equilibrated, some of
them were very massive – and possibly as massive as photons can
get. Consequently, the CBR wavelength peak would then have been much
higher up the electromagnetic scale than it is today – and possibly
right at the very top.
Since
then, the CBR photons have lost yet more teelmass. Over the past 13
billion years, the wavelength peak has crept two thirds of the way
down the electromagnetic scale to its present position around the
1.9mm mark, all the while maintaining that near-perfect blackbody
curve. The entire Cosmic Background Radiation has been "redshifted"
chorus-girl fashion. To understand how this can have happened, let
us take a closer look at the fundamentals of colourshifting.
-
The
accretions are in a range of masses that conform to a blackbody
curve.
-
The
teel accretions eject teelmass in order to equilibrate into
protophotons.
-
They
eject yet more teelmass in order to equilibrate into photons.
-
The
teel accretions have become the Cosmic Background Radiation.
-
The
photon masses of the CBR still conform to a blackbody curve.
-
The CBR wavelength peak was initially at, or near, the top of the electromagnetic scale.
-
As
the Universe expands, the CBR wavelength peak falls down the
electromagnetic scale.
-
The
CBR is "redshifted".
-
Today,
the CBR wavelength peak stands at 1.9mm.
THE
CBR REDSHIFT
In
the Current Paradigm there are four forms of colourshifting: Doppler
Colourshifting, Relativistic Doppler Colourshifting, Cosmological
Colourshifting, and Gravitational Colourshifting. However, having
four forms is just another consequence of topdown thinking: of phenomena measured but not understood. Without knowing what a photon
really is, without understanding how and why a photon does what it
does, the behaviour of a photon can only be predicted statistically –
e.g.: in the past, nine have gone that way and one has gone this way
so there is a nine out of ten chance that they will go that way in
the future. Statistics is not understanding. What follows is
understanding:
A
photon is a spinning ball of teels in which the teelmass, teelspeed
and velocity are all in equilibrium. The wavelength of a photon
equates to its mass. A photon's wavelength, at any stage in its life,
is measured against its "base" wavelength – that is,
against the wavelength at which it first equilibrated. Throughout
their lives, photons are continually changing their wavelengths as
they adjust themselves to the speed of the uniflux through which they
are passing.
Base wavelengths are constant. Photons emanating from a similar source will have a similar wavelength, no matter how different the conditions may be around the source. For example, hydrogen atoms emit photons at a wavelength of 10⁻²¹ and they do so no matter where in the Universe the hydrogen atom might be or in whatever horrendous conditions it might find itself. However, hydrogenic photons coming from beyond the Earth are never at exactly 10⁻²¹. Over its lifetime, such a photon's wavelength will have altered as it has adapted itself to the surrounding conditions. It will colourshift. If the wavelength has grown longer than 10⁻²¹, that is if the photon has lost mass, it is said to have redshifted. If the wavelength has grown shorter than 10⁻²¹, that is if the photon has gained mass, it is said to have blueshifted. Every 10⁻²¹ photon coming from outside the Earth has some degree of red or blue shift by the time we are able to detect it.
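In either direction, the size of the shift can be expressed as a single number measured against the base wavelength. A minimal sketch in Python (the function name and the sample values are mine, for illustration only):

    # Fractional shift against the base wavelength:
    # positive = redshifted (mass lost), negative = blueshifted (mass gained).
    def colourshift(base_wavelength, observed_wavelength):
        return (observed_wavelength - base_wavelength) / base_wavelength

    print(colourshift(1.0, 1.2))   #  0.2 -> redshifted
    print(colourshift(1.0, 0.9))   # -0.1 -> blueshifted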
Colourshifting
is the consequence of a multiprocess. It results from two entirely
different processes being underway at the same time. Sometimes the
two processes produce a colourshift in the same direction and
reinforce each other. At other times, they work against each other
and moderate the resulting colourshift. The two processes are
"uniflux colourshifting" and "gravity colourshifting".
-
UNIFLUX
COLOURSHIFTING: the wavelength/mass of a photon is
affected by variations in the speed of the uniflux through which it
is moving. When moving from a slower uniflux to a faster one, a
photon will redshift. When moving from a faster uniflux to a slower
one, it will blueshift.
-
GRAVITY
COLOURSHIFTING: the wavelength/mass of a photon is
affected by variations in the strength of the gravity fields through
which it is moving. When moving from a weak field to a stronger one,
a photon will redshift. When moving from a strong field to a weaker
one, a photon will blueshift.
Just
defining uniflux and gravity colourshifting doesn't actually help
much in increasing our understanding of what happens inside the
photon. This is because the two forms of colourshifting are
themselves multiprocesses. In each case, a number of different things are happening, some of which are reinforcing the result while others are weakening it. Here are descriptions of what happens (a short sketch after the four lists pulls the sign conventions together).
-
UNIFLUX BLUESHIFTING: Teels being absorbed from the uniflux are slower than the fastest teels in the photon.
-
The
teelspeed reduces.
-
The
density increases.
-
The
photon contracts.
-
Spinrate
increases because of the contraction but:
-
Spinrate
reduces more because of the reduced teelspeed.
-
The
escape velocity increases.
-
The
teelmass increases.
-
The
photon is blueshifted.
-
UNIFLUX REDSHIFTING: Teels being absorbed from the uniflux are faster than the fastest teels in the photon.
-
The
teelspeed increases.
-
The
density reduces.
-
The
photon expands.
-
Spinrate
reduces because of the expansion but:
-
Spinrate
increases more because of the increased teelspeed.
-
The
escape velocity reduces.
-
The
teelmass reduces.
-
The
photon is redshifted.
-
GRAVITY BLUESHIFTING: The stronger gravity behind "attempts" to decelerate the photon.
-
Realspeed
converts to potentialspeed.
-
The
teelspeed reduces.
-
The
density increases.
-
The
photon contracts.
-
Spinrate
increases because of the contraction but:
-
Spinrate
reduces more because of the reduced teelspeed.
-
The
escape velocity increases.
-
The
teelmass increases.
-
The
photon is blueshifted.
-
GRAVITY REDSHIFTING: The stronger gravity ahead "attempts" to accelerate the photon.
-
Potentialspeed
converts to realspeed.
-
The
teelspeed increases.
-
The
density reduces.
-
The
photon expands.
-
Spinrate
reduces because of the expansion but:
-
Spinrate
increases more because of the increased teelspeed.
-
The
escape velocity reduces.
-
The
teelmass reduces.
-
The
photon is redshifted.
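Taken together, the four lists reduce to a simple piece of sign bookkeeping. The sketch below (Python; the function and its inputs are illustrative, the real interplay being the multiprocess just described) treats a move into faster uniflux or stronger gravity as positive and simply sums the two contributions:

    def net_colourshift(uniflux_change, gravity_change):
        # uniflux_change > 0: moving into faster uniflux (redshifts);
        # gravity_change > 0: moving into stronger gravity (redshifts);
        # negative values work the other way. The two effects simply sum.
        net = uniflux_change + gravity_change
        if net > 0:
            return "redshift"
        if net < 0:
            return "blueshift"
        return "no net shift"        # the two processes cancel exactly

    print(net_colourshift(+1.0, -0.4))   # redshift: uniflux dominates
    print(net_colourshift(-0.5, +0.5))   # no net shift: equal and opposite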
If
all the circumstances are equal, the effect of gravity and uniflux
colourshifting will also be equal and the two will cancel each other
out. However, so far as this Universe is concerned, the circumstances
have never been equal since Moment Zero. From Moment Zero onwards,
speed has dominated gravity. Hence, the Universe is expanding. The
weighting has therefore favoured uniflux colourshifting. This is why
the wavelength peak of the Cosmic Background Radiation has been
redshifting. Were the Universe now contracting, the weighting
would be in favour of gravity and the Cosmic Background Radiation
would be blueshifting.
A
good, although somewhat perverse, confirmation of this is provided by
the Pound-Rebka Experiment – perverse in that the New Cosmology
interpretation of the result of the experiment is, in part at least,
the exact opposite of the interpretation found in the Current
Paradigm. The experiment was first performed in 1959 and subsequent
refined versions have confirmed the findings. The object of the
experiment was to test an aspect of the Relativity theories: that
photons travelling away from a strong gravity field will display a
redshift. In the New Cosmology, of course, photons travelling away
from a strong gravity field will be blueshifted. The experiment,
apparently, confirmed the Einstein contention.
In
the experiment, gamma photons were fired up a 74-foot-high shaft.
Since the gravity of the Earth was weaker at the top than at the
bottom, the theories suggested that the photons should display a
colourshift. A 74-foot shaft isn't very long when photons are
travelling at the speed of light. Nevertheless, when the measurement
was taken it was found that there was indeed a small redshift.
However, what was not understood at the time was that that small
redshift was the result of a multiprocess. As the photons zipped up
the shaft, the gravity of the Earth was actually blueshifting them
while, at the same time, the increase in uniflux speed that came with
increasing altitude was redshifting them. On balance, there was more
redshift than blue.
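For scale, the quantity being argued over is minute. Under the Current Paradigm the predicted fractional shift for the shaft is g times h divided by lightspeed squared; a quick check in Python (standard textbook figures, nothing New Cosmology-specific):

    # Magnitude of the Pound-Rebka shift under the Current Paradigm formula.
    g = 9.81                 # surface gravity, metres per second squared
    h = 22.5                 # height of the shaft, metres (74 feet)
    C = 299_792_458.0        # lightspeed, metres per second

    print(f"{g * h / C**2:.2e}")   # ~2.5e-15, about 2.5 parts in 1e15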
A
knowledge of the processes underlying the colourshifting of photons
is hugely important if we are to understand what photons are telling
us about the Universe at large. This section has described enough of
the basics to enable an understanding of the CBR redshift. However,
there is a lot more to know than this. The subject is dealt with in
much greater detail in Chapter 14 – Vision in the Universe. That
chapter explains why the Universe looks as it does from where we are,
why appearances can be deceptive, and how much we are currently being
deceived.
A
final point. A close reading of this section will throw up an
apparent anomaly. Because speed has moved out from the centre of the
Universe over the past 13 billion years, the speed of the uniflux
towards the surface of the Universe has accelerated and towards the
centre it has decelerated. At the same time, there has been a
movement of mass outwards from the centre, as is exemplified by the
growth in the strength of darkenergy as explained in Chapter Five.
These factors, logically, would make the CBR photons, as seen from
the vicinity of Planet Earth, more redshifted towards the surface of
the Universe than they are towards the centre. Yet, this is not what
we see. What we see is isotropy. The CBR looks much the same, no
matter where we look. There are good reasons for this which we'll
deal with in the next section.
-
The
CBR photons equilibrate out of the CBR protophotons.
-
Each
CBR photon has a base wavelength.
-
When
all the base wavelengths are plotted against the electromagnetic
scale, they form a blackbody curve.
-
As the Universe has expanded, more than 50% of the mass of the Universe has come to lie behind the CBR photons.
-
At the same time, the gravity of the Universe, area by area, has weakened as the density of the Universe has fallen.
-
This has blueshifted the CBR photons relative to their base wavelengths.
-
As
the Universe has expanded, the uniflux has slowed.
-
This
has also blueshifted the CBR photons.
-
However,
the general movement of speed outwards from the centre has
redshifted them.
-
On
balance, there has been more redshift than blue.
THE
CBR ISOTROPY
As
we see it from Planet Earth, the Cosmic Background Radiation is
large-scale isotropic. The CBR is also isotropic when seen from most
other locations within the Universe but – and this is a very big
but – it isn't the same isotropy. In all directions from the
vicinity of Planet Earth, we detect the CBR photons coming at us at a
temperature of 2.725 Kelvin and a wavelength peak of 1.9mm. However,
if we go somewhere else to take the measurements, say to the wide open
spaces between the galaxies, the CBR will still be isotropic but the
temperature and the wavelength peak will be different. This is a neat
trick and to understand how it is done, we need to look at how
wavelength peaks have been distributed across the Universe.
Immediately
before the CBR photons began to equilibrate, the whole of the uniflux
was expanding outwards at well over the speed of light. However, the
speed was not distributed evenly. The uniflux was fastest near to the
surface of the Universe and slowest near to the centre. It was also
decelerating. Within the uniflux, vast numbers of protophotons were
all, likewise, moving faster than lightspeed and they too were
decelerating. The high speed of the protophotons was due to their
being equilibrated to their surrounding uniflux. No protophoton could
become a photon until its velocity had fallen to lightspeed and this
could not happen until the speed of the surrounding uniflux had
fallen by the appropriate amount.
Because
the speed of the uniflux was slower towards the centre of the
Universe, it was here that it first became slow enough for
protophotons to equilibrate into photons. Thereafter, the
protophotons equilibrated at successively greater distances all the
way out to the surface. Imagine, if you will, a wave of equilibration
spreading out from the centre in the same way that circular waves
spread outward across the surface of a pond when you drop a stone
into the water. Since some of the photons would have been in the
visible wavelengths, it would have been the Universe lighting up for
the first time, from the middle outwards.
When
the Universe broke up into teel accretions, the accretions were in a
range of sizes/masses that conformed to a blackbody curve. That range
of sizes/masses applied throughout the Universe, from the centre to
the surface. The same applied when the accretions became
protophotons. The protophotons were smaller and less massive than the
accretions had been but those sizes and masses were still conforming
to a blackbody curve throughout the Universe. However, when the time
came for the protophotons to equilibrate into photons, the
circumstances had changed and new rules applied.
The
speed of the uniflux was lowest near the centre of the Universe and
so it had to shed less speed before its protophotons could
equilibrate at lightspeed. Conversely, the speed of the uniflux was
highest towards the surface and so had to shed a lot more speed
before its protophotons could equilibrate at lightspeed. Any fall in
the speed of the surrounding uniflux was accompanied, in the
protophotons, by a commensurate loss of mass. The farther the fall,
the greater the commensurate mass loss. Thus, the farther away from
the centre the protophotons were, the lower was the wavelength peak
of the blackbody curve on which they equilibrated.
It
helps to see the Universe of the time as a succession of shells, one
inside the other, from the surface to the centre, like a gigantic
gobstopper. Each shell consisted of photons in a range of masses
conforming to a blackbody curve, each with its own wavelength peak.
The central shells had the highest wavelength peak. Successively,
moving outwards towards the surface, each shell had a lower
wavelength peak. Interestingly, a plot of the successive wavelength
peaks from centre to surface would itself be a blackbody curve.
Thus
the Universe could then (and can now) be seen as consisting of
blackbody curves with two different orientations. There were the
"horizontal" curves, those represented by the shells, each
with its own wavelength peak having a plot on the "vertical"
curve that stretched from the centre of the Universe to its surface.
When the horizontal curves formed, they were as near to perfect as
they could be, all the way around each shell. Since then, with the
appearance of gravitational hotspots, each shell has developed
"blemishes" where the wavelength peak has gone up or down
the electromagnetic scale – although the curve will still appear to
be near-perfect when seen from any particular spot. I like to think
that when the vertical curve first formed, it too was a near-perfect
blackbody. It may have been but it might just as well have been less
sophisticated, just a straight line running from one end of the
electromagnetic scale to the other. Whichever it was, it is certainly
a long way short of near-perfect today. The position of a wavelength peak on the electromagnetic scale depends on the speed of the uniflux
and that, over the past 13 billion years, has become increasingly
less consistent.
It
cannot be emphasised enough that the vertical and horizontal
blackbody curves were, and remain so to this day, inextricably
interlinked. The wavelength peak of a blackbody curve will change if
there is any change in the speed of the surrounding uniflux or in the
local gravity strength. Any wavelength peak change will take place on
both the vertical and the horizontal curve. Any peak change on the
one curve will be exactly matched by a peak change on the other. This
interlinking is the direct cause of the apparent isotropy of the CBR
photons.
The
CBR photons are not static within their horizontal shells. Each one
is moving on an ellipse that, unless there are crises along the way,
will take them right around the Universe. During that time, they will
move from one horizontal shell to another, all the while adjusting
their mass to match any changes in gravity strength and uniflux
speed. Example: CBR photons first equilibrating in shell "A"
would have done so at wavelength peak "A". If those photons
then, in following their ellipse, moved out to shell "M",
the wavelength peak would also have become "M". If,
finally, the ellipse brought the photons back to shell "A",
the wavelength peak would likewise have returned to "A".
To
bring this closer to home, suppose that Planet Earth is in shell "E".
The CBR photons in the shell likewise have a wavelength peak of "E"
and this means that all CBR photons coming to the Earth from within
the shell will display wavelength peak "E". However, inside
shell "E" is shell "D" with its CBR photons at
wavelength peak "D" and to the outside is shell "F"
with its CBR photons at wavelength peak "F". Yet from Planet Earth we see no sign of wavelength peaks "D" or "F". This is because the wavelength peaks of any CBR photons originating in
those shells will, by the time they have journeyed to the Earth, have
adjusted their mass to match the gravity and uniflux speed of our
shell. Their wavelength peak will have become "E".
The
CBR photons first equilibrated over 13 billion years ago when the
Universe was a lot smaller and a lot simpler. A lot has happened
since then. The Universe today is a far more ragged place than it
used to be. The vertical and the horizontal blackbody curves are
still with us but each is now very different from its original
graceful form, bent and distorted, although this is not apparent to
us because the CBR photons, as we receive them, have adjusted
themselves to our local gravity and uniflux speed.
Meanwhile,
the speed of the uniflux, everywhere, has fallen hugely as the
Universe has expanded. However, it has not fallen consistently.
Concentrations of matter: atoms, stars, galaxies, etc: now act as
speed filters so that every photon finds the speed of the
uniflux changing from lightyear to lightyear. At the same time, there
has been a general movement of speed outwards towards the surface.
And
as for gravity: when the CBR photons equilibrated, the gravity of
the Universe was tidily gradated from centre to surface. It isn't
now. The matter concentrations are gravitational hotspots, some of
them hideously strong, more than strong enough to drag photons from
their courses and, if the circumstances are right, more than strong
enough to drag photons in and destroy them. At the same time, the
movement of the uniflux outwards has placed much of the mass of the
Universe nearer to the surface than to the centre so that the
Universe's gravity now tends to draw photons outwards rather than in.
As
we receive them today, the CBR photons have been travelling for a
long time. Given that their tracks are elliptical, and that the
Universe was once much smaller, most of those photons will have
actually gone right round the Universe, perhaps many times. During
their travelling, they will have been subjected to many changes in
both vector and mass as the gravity and uniflux around them have changed. On a large scale, the CBR photons are still isotropic. However,
on a smaller scale we can now see imprinted on them a little of the
history of their journey.
-
The
farther the protophotons were from the centre, the faster the
uniflux was and the longer they took to equilibrate.
-
The
nearer the protophotons were to the centre, the slower the uniflux
was and the quicker they equilibrated.
-
The
longer a protophoton took to equilibrate, the longer was its base
wavelength and the less its mass.
-
The
quicker a protophoton equilibrated, the shorter its base wavelength
and the more its mass.
-
At
any given distance from the centre, the protophotons were
equilibrating in a range of masses that equated to a blackbody curve
– the horizontal curve.
-
The
wavelength peak of the blackbody curve varied with distance from the
centre.
-
The
wavelength peak was highest near to the centre of the Universe.
-
The
wavelength peak was lowest near to the surface of the Universe.
-
On
a line drawn from the centre of the Universe to the surface, the
wavelength peaks of the CBR equated to another blackbody curve –
the vertical curve.
-
The
vertical and horizontal blackbody curves are permanently interlinked
– any change in one means a corresponding change in the other.
-
Nowadays
the interlinking is still there although it is less obvious than it
once was because of such factors as:
-
Collisions
have directed particles into elliptical orbits and slowed the
Universe's expansion.
-
The
creation of gravitational hotspots such as atoms, stars, galaxies,
etc.
-
The
creation of speed filters such as atoms, stars, galaxies, etc.
-
The
movement of speed and mass out from the centre of the Universe
towards the surface.
-
From
the vicinity of the Earth, the wavelength peak appears to be the
same in all directions.
-
This
wavelength peak is the Universe's horizontal blackbody curve for our
planet's distance from the centre of the Universe, as modified by
the above factors.
-
This
wavelength peak also equates to a specific point on the Universe's
vertical blackbody curve.
-
As
CBR photons move towards or away from the centre of the Universe,
they move along the vertical blackbody curve.
-
As
they do so, their mass changes to match the local horizontal
blackbody curve.
-
Which
is why the wavelength peak, as detected from the vicinity of Planet
Earth, appears isotropic.
THE
REALITY CHECK
As
has been said repeatedly in this chapter, the facts relating to the
Cosmic Background Radiation are few in number. They are that, from
all directions, the Earth is being bombarded by a constant rain of
photons – and that these photons come at us at a temperature of
2.725 Kelvin and in a range of wavelengths that make up a blackbody
curve with a wavelength peak of 1.9mm.
Around
those few facts, cosmologists have woven a pattern of explanation.
According to the Current Paradigm, the CBR dates back to the
Recombination Epoch, 300,000 years after the Big Bang, at which time
the density of the Universe had fallen sufficiently to allow at least
some photons to move at lightspeed for the next 13 billion years
without hitting some form of matter and being absorbed.
There
is no proof at all that this pattern of explanation is correct. Even
the Recombination Epoch itself is an entirely theoretical concept.
Some see the CBR being predicted prior to discovery as good
circumstantial evidence in its favour. Others see the very existence
of the CBR as evidence that the Current Paradigm is right. In truth,
though, there is more religion than science here. Such evidence as
there is, is as strong or as weak as people want it to be.
The
Current Paradigm pattern of explanation doesn't fail the Reality
Check but nor does it pass it. Rather, it is impossible to put it to
any form of reality checking because there is no reality to compare
it with. It is yet another example of the known having been
extrapolated back into the unknown. It is therefore just theory and,
in fairness, few claim it to be anything else. Nevertheless, it is
also the paradigm of the day and the problem with paradigms is that
they tend to be taught in a way that discourages attempts to find
alternatives.
In
contrast to that of the Current Paradigm, the pattern of explanation
provided by the New Cosmology can be put to a reality check and it
doesn't fail. It doesn't pass with flying colours, of course, because the absence of facts from the early time denies any
opportunity for absolute comparison. Nevertheless, the bottomup
character of the New Cosmology explanation does allow us one very
good test – if the pattern is allowed to run, will it naturally
develop into what we know?
In
the New Cosmology, starting with just a large quantity of teels and
the properties of gravity and rejectivity, we have been able to
progress forward. The teels and their properties have interacted and
provoked processes. These processes have ultimately produced
something that looks exactly like the Cosmic Background Radiation
that we can see today from Planet Earth. Along the way, we have
explained the mechanisms that have redshifted the CBR and seen how it
manages to appear isotropic to our sensors.
At
this point it is worth repeating a warning from earlier reality
checks. While I have some skills which the average cosmologist
doesn't possess, I also lack skills that the average cosmologist has
by default. Consequently, what has been written here has not been
properly tested. I am fully aware, therefore, that I could be
deluding myself and that what is written here is not as strong as I
think it is. Testing whether it is, or is not, is a task for someone
else.
GLOSSARY
ANTIELECTRON:
A
charged particle containing one axial and one centrifugal quark,
solidbonded together by their mutual gravity but kept apart by the
density of their teelospheres. It is identical to an electron but has
become misaligned to the vector of the surrounding uniflux. Antielectrons
will automatically realign with the uniflux, given enough time. If an
antielectron collides with a matter particle, it will annihilate.
ANTIMATTER
PARTICLE: A
charged particle that is misaligned to the vector of the surrounding
uniflux. An antimatter particle will automatically realign with the
uniflux, given enough time. If an antimatter particle collides with a
matter particle, it will annihilate.
AXIAL
QUARK: Axial
quarks have a modified photonic structure in which the primary intake of teels from the uniflux is at one pole and the ejection of any excess is at the opposite pole. Axial quarks are found in mesons,
electrons, neutrons, and protons. An axial quark equates to an
up-type quark in the Current Paradigm.
BASE
WAVELENGTH: The
base wavelength of a photon is the wavelength/mass at which it first
equilibrates.
BLACKHOLE:
In the Current Paradigm, a blackhole is a region of space in which
the gravitational field is so powerful that nothing can escape after
having fallen past the event horizon. In the New Cosmology, it is an
accretion of two or more teels solidbonded together.
CENTRIFUGAL
QUARK: Centrifugal
quarks have a photonic structure in which excess teels are ejected at
the equator. Centrifugal quarks are found in mesons, electrons,
neutrons, and protons. A centrifugal quark equates to a down-type
quark in the Current Paradigm.
CHARGE:
Charge
is a property of an axially-structured particle whereby it cannot help
but align itself with the uniflux through which it is moving so that
Pole A is "into the wind".
CHARGED
PARTICLE: A
charged particle has an axial structure as compared to an uncharged
particle which has a centrifugal structure. Axial quarks, electrons,
and protons are charged particles.
COLLISION
MECHANICS: Collision
Mechanics is underpinned by the notion that speed is conserved.
Immediately before two particles collide, each will possess a
specific quantity of speed which, added together, might come to a
notional speed quantity of 1.0. After the collision, and depending on
the circumstances, that speed quantity can be redistributed among the
two particles, subject to the total quantity continuing to be 1.0.
Similarly, Collision Mechanics conditions the post-collision vectors
of any pair of truly-fundamental particles. Because each particle is
identical and perfectly spherical, their post-collision vectors are
predictable by the use of Euclidean Geometry, Newton’s Laws of
Motion, etc.
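A minimal sketch in Python of the conserved-speed bookkeeping (the post-collision split fraction is illustrative; in practice it would follow from the collision geometry):

    # Speed-conserving collision: the total is redistributed, never changed.
    def collide(speed_a, speed_b, split=0.5):
        total = speed_a + speed_b            # conserved quantity
        return split * total, (1 - split) * total

    a, b = collide(0.7, 0.3, split=0.25)
    print(a, b, a + b)                       # 0.25 0.75 1.0 - total intact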
COMPLEX
PARTICLE: A
complex particle is any particle that is an assembly of numbers of
teels. Photons are complex particles, as are quarks, atoms, stars,
and galaxies. The largest of all the complex particles is the
Universe itself.
COSMOLOGY:
The
study of the past, present, and future structure of the Universe.
CURRENT
PARADIGM:
What is currently believed to be the most likely picture of the past,
present, and future structure of the Universe.
DEMOCRATIC
PRINCIPLE: Where
two groups are in opposition, more often than not the larger group
will prevail – and the larger the disparity between the groups, the
more likely is that prevalence.
DOWNQUARK:
See
centrifugal quark.
ELECTRON:
A particle containing one axial and one centrifugal quark,
solidbonded together by their mutual gravity but kept apart by the
density of their teelospheres. Overall, the teelosphere of an
electron is axial and aligned to the surrounding uniflux.
ESCAPE-VELOCITY:
Escape-velocity
is the minimum speed that Object A needs to possess in order to,
without any power, escape from the gravity field of Object B.
GASBONDING:
A
particle is gasbonded to a solid or liquidbonded accretion when its
realspeed is greater than its mutual escape-velocity with any similar
particles in the accretion but is less than the escape-velocity of
the solid or liquidbonded core.
GRAVITY:
Gravity
is the product of a law which states that “every object in the
Universe attracts every other object with a force directed along the
line of centres for the two objects that is proportional to the
product of their masses and inversely proportional to the square of
the separation between the two objects”. Why the law applies is
unknown.
GRAVITATIONAL
COLOURSHIFTING: The
wavelength/mass of a photon is affected by variations in the strength
of the gravity fields through which it is moving. When moving from a
weak field to a stronger one, a photon will redshift. When moving
from a strong field to a weaker one, a photon will blueshift.
GRAVITATIONAL
STRENGTH: The
gravitational strength of any object, as measured at its surface,
equates to the number of teels it contains moderated by the density
of their packing.
HORIZON
PROBLEM: The
CBR photons are extremely similar, no matter from which direction
they come. This suggests that they were once so close together that
they could equalise. The earliest measurable diameter for the Universe is one Planck Length. Taking account of lightspeed, and assuming an age for the Universe of 13.7 billion years, this gives the current diameter of the Universe as 27.4 billion lightyears. However,
actual measurements seem to show that the diameter of the visible
Universe alone is 156 billion lightyears. How, then, could the CBR
photons have once been so close together that they could equalise?
LIQUIDBONDING:
A
particle is liquidbonded into an accretion when its realspeed is
greater than its mutual escape-velocity with any similar particles in
the accretion but is less than the escape-velocity of the accretion
itself.
MICROHOLE:
A
microhole is a micromassive blackhole with a mass less than that of a
photon.
MINIHOLE:
A minihole is a
micromassive blackhole with a mass greater than that of a photon but
less than that of an electron.
NEW
COSMOLOGY: The
Current Paradigm restructured by way of an Organisation and Methods
"bottomup" analysis.
PHOTON:
A
particle comprising a solidbonded teelcore surrounded by a
liquidbonded teelocean (perhaps) and a gasbonded teelosphere. Because
the teelmass and teelspeed of a photon are in equilibrium, the
velocity of a photon in open space is always lightspeed.
PHOTON
EQUILIBRATION: A
photon is equilibrated when its teelmass and teelspeed are in
balance. The velocity of an equilibrated photon in open space is
lightspeed.
POTENTIALSPEED:
Potentialspeed
equates to “energy of position”.
PROTOELECTRON:
A
particle composed of two centrifugal quarks, bonded together by their
mutual gravity but kept apart by the density of their teelospheres. A
protoelectron is equilibrated to the adjacent uniflux which means
that its mass will rise or fall with changes in the speed of the
uniflux. For its continued existence, a protoelectron has to be
within a high-speed uniflux. If the uniflux speed falls below a
specific level, the protoelectron will decay into an electron.
QUARK:
A
particle comprising a solidbonded teelcore surrounded by a
liquidbonded teelocean (perhaps) and a gasbonded teelosphere. The
default quark structure is that of a photon, the only difference
being that a quark has a much greater mass. Quarks can only keep
their higher mass when solidbonded either to another quark as an
electron or a meson, or to another two quarks as a nucleon. Unbonded
quarks are unstable and decay into photons.
REALSPEED:
Realspeed
equates to “energy of motion”.
REJECTIVITY:
Rejectivity
is the product of a law which states that “one particle cannot
occupy a place in space and time that is already occupied by
another”.
SOLIDBONDING:
A
particle is solidbonded into an accretion when its realspeed is less
than its mutual escape-velocity with any similar particles in the
accretion.
SPEED:
Speed
is “movement” as a generalised property in a generalised
particle. Where a particular particle is moving at a particular
speed and in a particular direction, “speed” becomes “velocity”
and “direction” becomes “vector”.
SPIN:
Spin
is speed confined by gravity. The source of the confining gravity can
be internal and thus “intrinsic” or external and thus “orbital”.
SPINSPEED:
Spinspeed
is a measure found in any complex particle. It is the sum of the
realspeed of all the truly-fundamental particles that the object
contains divided by their number. Spinspeed does not include
potentialspeed. Spinspeed may express itself either in the forward
motion of the complex particle or as its spin or in a combination of
the two.
TEEL:
A
teel is a fundamental particle. It is eternal and it is indivisible.
It has only two properties: gravity and rejectivity.
TEELOSPHERE:
A
teelosphere is made of teels whose principal gravitational
relationship is with a substructure embedded within the Universe. All
substructures, from photons up to galactic superclusters, have a
teelosphere.
TEELOSPHERIC
EQUILIBRIUM: A
teelosphere is in equilibrium with its adjacent uniflux when the
velocity of the teels just inside its surface is the same as the
teels just outside. In this condition, the teelosphere’s teelmass
and teelspeed are also in equilibrium.
TOTALSPEED:
Totalspeed
is the sum of the realspeed and potentialspeed of any particle or
complex particle.
UNCHARGED
PARTICLE: An
uncharged particle has a centrifugal structure as compared to a
charged particle which has an axial structure. Photons, centrifugal
quarks, and neutrons are uncharged particles.
UNIFLUX:
The
uniflux comprises those teels outside the teelosphere of the
substructure currently being considered. The principal gravitational
relationship of those teels is either with another substructure or it
is with the Universe itself.
UNIFLUX
COLOURSHIFTING: The
wavelength/mass of a photon is affected by variations in the speed of
the uniflux through which it is moving. When moving from a slower
uniflux to a faster one, a photon will redshift. When moving from a
faster uniflux to a slower one, it will blueshift.
UPQUARK:
See
axial quark.
VERGENCE:
Vergence is the movement of objects toward, or away from, each
other – convergence or divergence. Vergence is a subproperty of
speed. Like speed, it is a conserved property in that it can be
transferred from one object to another by collision or by
gravitational attraction but it can never be destroyed or eliminated.
Vergence can come as realvergence or potentialvergence.
VERGENCE-VELOCITY:
The rate at which a pair of objects converge or diverge.