DJK
Office Phone: (319) 338-3092
Home Phone: (319) 354-7383
Email: Sales@DaveKreiter.com
Following are some of my reviews of books that have informed me about the nature of reality. Click on a title to read the review.
Decoding the Universe
Charles Seife
The Non-Local Universe:
The New Physics and Matters of the Mind
Robert Nadeau and Menas Kafatos
The Conscious Universe:
Part and Whole in Modern Physical Theory
Menas Kafatos and Robert Nadeau
Programming the Universe
Seth Lloyd
The Self-Aware Universe
Amit Goswami
The Quantum Brain
Jeffrey Satinover
Quantum Physics:
Illusion or Reality
Alastair Rae
Quantum Evolution:
How Physics’ Weirdest Theory Explains Life’s Biggest Mystery
Johnjoe McFadden
Quantum Enigma:
Physics Encounters Consciousness
Bruce Rosenblum, Fred Kuttner
Quantum Enigma
Wolfgang Smith
Train Your Mind, Change Your Brain
Sharon Begley
The Mind and the Brain
Jeffrey M. Schwartz, M.D. and Sharon Begley
Mind in Life
Evan Thompson
Mind as Mosaic:
The Robot in the Machine
Bruce H. Hinrichs
Unified Reality Theory:
The Evolution of Existence into Experience
Steven Kaufman
The Visionary Window
Amit Goswami
The Monkey and the Tetrahedron
David M. Jinks
The Physics of Consciousness:
The Quantum Mind and the Meaning of Life
Evan Harris Walker
The Fabric of the Cosmos
Brian Greene
The Essential David Bohm
Lee Nichol
The End of Time:
The Next Revolution in Physics
Julian Barbour
Everything Forever
Gevin Giorbran
The Bit and the Pendulum:
From Quantum Computing to M Theory
The New Physics of Information
Tom Siegfried
Supernatural:
Meetings with the Ancient Teachers of Mankind
Graham Hancock
Secrets of the Sacred Ark
Laurence Gardner
Nothing I See Means Anything:
Quantum Questions—Quantum Answers
David Parrish
The Mysterious Flame
Colin McGinn
Meaning, Medicine and
The “Placebo Effect”
Daniel Moerman
Information and the Nature of Reality:
From Physics to Metaphysics
Edited by Paul Davies and Niels Henrik Gregersen
Incomplete Nature: How Mind Emerged from Matter
Terrence W. Deacon
Genesis and the Big Bang
Gerald L. Schroeder
DMT: The Spirit Molecule
Rick Strassman
Crystal & Dragon
David Wade
Biocentrism:
How Life and Consciousness Are the Keys to Understanding the True Nature of the Universe
Robert Lanza with Bob Berman
The Mind Body Prescription
John E. Sarno, M.D.
Who's in Charge?
Free Will and the Science of the Brain
Michael S. Gazzaniga
The Hidden Records
The Star of the Gods
Wayne Herschel
Co-author Birgitt Lederer
The Brotherhood
of the Screaming Abyss:
My Life with Terence McKenna
Dennis McKenna
Ayahuasca and Depression:
The Clarity of Darkness
Justin Taylor (Kindle version only)
Spontaneous Evolution:
Our Positive Future
Bruce Lipton, Ph.D., and Steve Bhaerman (Kindle version)
The Cosmic Bridge:
Close Encounters and Human Destiny
Craig R. Lang
You are The Placebo:
Making Your Mind Matter
Dr. Joe Dispenza
Life’s Ratchet:
How Molecular Machines Extract Order from Chaos
Peter M. Hoffmann
Missing 411: A Sobering Coincidence
David Paulides
Consciousness and the Brain
Stanislas Dehaene
The Mysterious Epigenome
Thomas E. Woodward, James P. Gills
Life on the Edge:
The Coming of Age of Quantum Biology
Jim Al-Khalili and Johnjoe McFadden
The Vital Question: Energy, Evolution, and the Origins of Complex Life
Nick Lane
The 37th Parallel
Ben Mezrich
Origins of Consciousness:
How the Search to Understand the Nature of Consciousness is Leading
to a New View of Reality
Adrian David Nelson
The Epigenetics Revolution:
How Modern Biology Is Rewriting Our Understanding of Genetics, Disease, and Inheritance
Nessa Carey
The Biology of Belief:
Unleashing the Power of Consciousness, Matter & Miracles
Bruce H. Lipton, Ph.D.
The Enigma of Cranial Deformation:
Elongated Skulls of the Ancients
David Hatcher Childress
Brien Foerster
Our Mathematical Universe:
My Quest for the Ultimate Nature of Reality
Max Tegmark
The Spontaneous Healing of Belief: Shattering the Paradigm of False Limits
Gregg Braden
You Are the Universe: Discovering Your Cosmic Self and Why it Matters
Menas Kafatos
Deepak Chopra
Mind to Matter
The Astonishing Science of How Your Brain Creates Material Reality
Dawson Church
The Deeper Genome:
Why There Is More to the Human Genome Than Meets the Eye (Kindle)
John Parrington
Human by Design:
From Evolution by Chance to Transformation by Choice
Gregg Braden
The Field:
The Quest for the Secret Force of the Universe
Lynne McTaggart
Digital Physics:
The Meaning of the Holographic Universe and its Implications Beyond Theoretical Physics
Ediho Lokanga
Why Materialism Is Baloney
Bernardo Kastrup
How to Change Your Mind
Michael Pollan
The Case Against Reality
Why Evolution Hid the Truth from Our Eyes
Donald Hoffman
NOW
The Physics of Time (Kindle version)
Richard A. Muller
The Time Illusion
The Arrow That Points but Does Not Move (Kindle version)
John Gribbin
Rare Earth: Why Complex Life is Uncommon in the Universe
Peter D. Ward
Donald Brownlee
The Grand Biocentric Design:
How Life Creates Reality
Robert Lanza, MD
Darwin’s Doubt:
The Explosive Origin of Animal Life and the Case for Intelligent Design
Stephen C. Meyer
Evolution & Intelligent Design In a Nutshell
Lo, Chien, Anderson, Alston, Waltzer
QBism: The Future of Quantum Physics
Hans Christian von Baeyer
Brief Peeks Beyond:
Critical Essays on Metaphysics, Neuroscience, Free Will, Skepticism and Culture
Bernardo Kastrup
Unraveling QBism for the Uninitiated
Dr. Sanjay Basu
The Experience Machine:
How Our Minds Predict and Shape Reality
Andy Clark
The Romance of Reality
Bobby Azarian
Why? The Purpose of the Universe
Philip Goff
The Science of Self-Empowerment: Awakening the New Human Story
Gregg Braden
Decoding the Universe
Charles Seife
Charles Seife is not the first to proclaim that the most fundamental entity in the universe is information.
Physicists John Wheeler, David Bohm, and Tom Siegfried, among others, have held this view as well, but no other author I've read has gone to such lengths to establish this idea as an undeniable conclusion.
In a concise, straightforward format, Seife delves into biology, computer science, cosmology, Relativity, and quantum theory to establish the notion that information and the second law of thermodynamics are intricately linked. And he does this without ever allowing the reader to become lost or confused.
Information is always physical, whether it is marks on paper, holes
in a punch card, atoms in an electromagnetic state on a CD, photon
polarization, or up/down spin on an electron.
All information has a physical representation.
And like any physical thing in our universe, it abides by the
laws of nature, including the laws of thermodynamics and Relativity.
Information, like energy, can neither be created nor
destroyed. Information
always moves toward the most probable state: maximum entropy.
And no information can travel faster than the speed of light.
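The claim that information always moves toward maximum entropy can be made concrete with Shannon's formula; the numbers below are my own illustration, not Seife's:

```python
import math

# Shannon entropy H = -sum(p * log2(p)): the measure of information
# (surprise) in a probability distribution. A uniform distribution
# maximizes it, echoing the review's "most probable state" point.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

assert entropy([1.0]) == 0         # a certain outcome carries no surprise
assert entropy([0.5, 0.5]) == 1.0  # a fair coin carries one full bit
```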
The qubit, which is the quantum representation of the classical bit, abides by the laws of quantum physics, and despite the weird instantaneous quantum connection between particles in an entangled state demonstrated by Bell's inequality theorem, the qubit does not permit faster-than-light communication.
Oddly, the qubit does violate one tenet of Relativity—that no effect can precede its cause.
It seems that the qubit has no "before" or "after".
Unlike the
classical bit which resides in a binary, either/or state, the qubit
can be in a superposition of states:
two states simultaneously.
This fact is what makes the possibility of quantum computing
so enticing. By nesting
probable outcomes in a superposition of states many fewer yes/no
questions are needed in algorithms, making quantum computing many
orders of magnitude faster than classical computing.
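The contrast between bits and qubits can be sketched in a few lines of Python; this is my own toy illustration, not anything from the book:

```python
import math

# A classical bit holds exactly one of two values.
classical_bit = 0

# A qubit holds two amplitudes at once; here, an equal superposition
# of the |0> and |1> states.
qubit = [1 / math.sqrt(2), 1 / math.sqrt(2)]

# Measurement probabilities are the squared amplitudes (Born rule),
# and they must sum to one.
p0, p1 = (amp ** 2 for amp in qubit)
assert abs(p0 + p1 - 1.0) < 1e-9

# n qubits in superposition track 2**n basis states at once, which is
# the intuition behind the speed-up described above.
print(2 ** 10)  # 1024 simultaneous basis states for ten qubits
```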
But far wider implications exist for the qubit.
Seife believes that the qubit's superposition of states solves two contentious vagaries of the Copenhagen interpretation of reality: What constitutes an observer? And is there a difference between the classical and quantum worlds?
Seife says that there is no clear-cut demarcation between the subatomic and classical worlds, and no conscious observer is required to collapse the wave function. This leads directly to a resolution of the famous Schrödinger's cat paradox.
Since the universe at large is constantly probing with light waves, neutrinos, and zero-point energy, the universe itself acts as the observer.
Large macro objects such as cats undergo decoherence (a collapse of the superposition of states into a classical bit) very rapidly, while a single subatomic particle or photon takes much longer, being less likely to come into contact with nature's measurements.
Information is so fundamental that Seife believes Richard Dawkins' popular book "The Selfish Gene" would have been more aptly called "Selfish Information". Seife says that when it comes to biological organisms, information is even more selfish than the gene, and can run contrary to survival of the fittest.
He cites several examples of information reproducing itself
even though it is detrimental to the organism, and at times, to an
entire species.
Information will attempt to replicate even at the expense of the
proliferation of the organism carrying the information.
This book was
very enticing, and left me with some questions.
Is it "information" that is the most basic entity, or is it "meaning," as physicist David Bohm maintains?
Is there a difference between information and meaning?
Experiments with polarization of light would lead me to
suspect that there is a difference.
And, finally, is the brain really a classical machine, as Seife says, or is it a quantum machine, as Evan Harris Walker maintains?
(See my review of "The Physics of Consciousness" on Amazon.)
Either way, Charles Seife is right on the mark with this work.
The Non-Local Universe:
The New Physics and Matters of the Mind
Robert Nadeau and Menas Kafatos
If Albert Einstein had lived long enough to witness the results of Alain Aspect's experiments at the University of Paris-Sud in 1982, he would have had to concede that quantum theory is a self-consistent theory and that physical reality is non-local. The Aspect experiment, and the Nicolas Gisin experiments with twin photons in 1997,
validated John Stewart Bell’s non-locality theorem, which he
published in 1964. These
experiments confirmed that the physical universe is holistic and indeterminate, a fact that many
discovery in all of science.
This is the premise of a thoroughly enlightening book by
Nadeau and Kafatos, who by virtue of concise and clear writing
entice anyone willing to take the time and effort on a journey into
the new world of non-locality.
The authors state that in order to embrace their premises one must be committed to metaphysical and epistemological realism.
Metaphysical realism assumes that physical reality is real or
has an actual existence independent of observation, and
epistemological realism assumes that progress in science requires
strict adherence to scientific methodology.
This seems to be a bit of an inconsistency since one of the
main premises of their book is that metaphysical realism is invalid.
Einstein, who was himself a contributor to quantum theory,
and whose theory of relativity demonstrated that space, time, and
motion are relative concepts, was nonetheless not ready to give up
on the idea of deterministic principles.
His debates with Bohr concerning whether quantum theory was a
complete theory raged on until his death.
In his most famous thought experiment, called EPR, devised
with the help of two colleagues, he made his most brilliant effort
to save a classical causal reality.
The purpose of EPR was to show that both the position and
momentum of a particle could be deduced in violation of the
indeterminacy principle making quantum theory incomplete.
John Stewart Bell sounded the death knell for any hope that Einstein could be correct in his effort to save determinacy.
Bell’s theorem is predicated on two major assumptions:
Locality (signals or energy transfers between space-like separated
regions cannot occur faster than the speed of light) and realism
which assumes that physical reality exists independently of the
observer. As Nadeau and
Kafatos put it, both of these assumptions may be invalid.
While Bell’s theorem in no way violated Relativity’s ban on
information traveling faster than light speed, it did clearly
demonstrate that there is an indeterminate connection between
systems, which manifests in “no time”.
These connections are instantaneous.
Since the connections are indeterminate and are subject to
the laws of probability there is no way to transfer information in a
meaningful way under these random circumstances, therefore
Relativity itself remains intact.
Had Einstein witnessed the results of the experiments
validating Bell’s theorem he would have had to concede, however,
that realism—a reality existing independent of observation—could not
exist at the deepest levels of reality.
Quite simply, if nonlocality is a fact of nature,
indeterminacy is also an indisputable fact of nature.
Einstein would have reveled, however, in a second type of nonlocality—temporal nonlocality.
It has been found that in Wheeler-type delayed choice
experiments the past cannot be distinguished from the future—an
assumption that Einstein clearly maintained.
A common confusion concerning delayed-choice experiments is the idea that the observer causes an event to happen in the past.
There is no causal effect in this case.
What comes into existence in observation is one aspect of a complementary whole that always existed.
The demise of realism also puts to rest the paradox of the well-known Schrödinger cat experiment. Since
there is no one-to-one correspondence between physical theory and
reality and, since only measurement or observation determines
reality, as Bohr clearly maintained, we cannot assume that systems,
either the particle or the cat, have any reality in the absence of
measurement. Therefore,
nothing can be said about “the cat in a state of limbo.” Since no
measurement has been made, there exists no reality to this
unobserved situation.
Despite the fact that realism has been shown to be invalid,
Cartesian dualism, the reigning scientific paradigm since the
seventeenth century, remains steadfast among many scientists today.
Dualism has found form in the post-modern rationalist ontology of
positivism. Adherents of this philosophy believe that certain truth
about physical reality resides only in the mathematical description,
and contend that a one-to-one correspondence exists between every
element of physical theory and every aspect of the reality it
describes.
Neither dualism nor positivism can be considered valid in
light of the new reality of nonlocality for a variety of reasons:
First, as we noted earlier the positivist contention of an objective
reality is inconsistent with principles of indeterminacy.
Physical laws and theories have no independent existence and
are human creations useful only in coordinating observations with
experience. By
definition an objective reality suggests a consciousness separate
and distinct from the rest of reality.
This arbitrary distinction is invalid in a holistic universe.
If nonlocality is a property of the entire universe, then we
must also conclude that an undivided wholeness exists on the most
primary level of physical reality.
Second,
science can claim knowledge of physical reality only when the
predictions of a physical theory are validated by experiment.
In a non-local universe the indivisible whole cannot be
measured or observed, and nothing can be said about the character of
this reality. Quantum
systems, it seems, behave in just such a holistic way.
The wave and particle are the total reality of the situation—they are complementary.
The system is determinate when left alone, but upon
observation it behaves indeterminately according to the laws of
probability. Only one
aspect of the whole can be observed at any one moment in time.
Interestingly, as science has progressed in this century, more and more complementarities have been found.
In special relativity, mass and energy as well as space and time were found to be complementary; in mathematics real and imaginary numbers are complementary; and in biology Bohr himself
realized that organic and inorganic matter are constructs that need
to be applied independently to describe the phenomenon of life.
Additionally, life appears to be holistic, as any attempt to isolate its parts for scientific study tends toward a reductionist approach, which only creates incongruities in the study of the whole. This new
understanding of the relationship between part and whole in biology
mirrors nonlocality in physics.
Complementarity
demonstrates that the whole is not equal to the sum of its parts.
This is true in physical reality and it is true in biology.
The authors make the philosophical argument that since
consciousness is an emergent process of the brain, and the brain in
turn, is an emergent process of the undivided whole, it is not
unreasonable to believe that the universe is conscious.
The Conscious Universe:
Part and Whole in Modern Physical Theory
Menas Kafatos and Robert Nadeau
The theme of this book is that the universe is an undivided whole, and that John Stewart Bell's theorem of non-locality and its subsequent experimental demonstration by Alain Aspect at the University of Paris-Sud is a fact, not only of quantum physics, but of the universe as a whole.
Quantum physics undermined the classical belief that there exists a
one-to-one correspondence between every element of physical theory
and every element of physical reality.
It was this proposition most of all that Einstein could not
come to grips with.
The authors state that modern physics has gone into the realm of the unvisualizable. This
started with the special theory of relativity, general relativity,
and quantum theory.
The
authors lay down a very clear and concise history of the development
of quantum theory, including the two-hole experiments, Schrodinger's
cat, and Bell's theorem.
The
authors explain very well Bohr's position on the theory of
complementarity, and allowed me to see that I had erred in my own
book when I suggested that Bohr believed that there was a causal
effect in the collapse of the wave function.
No such thing. Instead, Bohr believed that we are merely measuring different aspects of reality—both and neither are correct—and that the observer and the measuring device, the total system, must be taken into account.
Complementary aspects are mutually exclusive, similar to thought and emotion, as the authors suggest.
Also, I might add, that awareness is exclusive in that only
one aspect of reality can be illuminated at one time, to the
exclusion of all else.
The following quote best summarizes the theme of the book.
Science is a dialogue with nature in which we can
only correlate relations between particulars, and thus any proof
that the parts constitute the whole is not and cannot be the subject
of science. How, then,
do we even grasp the notion of a whole?
The answer is, we think, quite obvious--we do so because that
sense of wholeness is a "given" in consciousness.
But science, again by definition, does not define wholeness
any more than mathematicians can define mathematically an empty set,
or cosmologists can define the universe before its origin.
Definitions require opposition between at least two points of
reference....This whole is literally indescribable in the sense that
any description, including those of ordinary language, divides the
indivisible.
This book was
well written and informative; however, it was a little disappointing
in that there were no real philosophical breakthroughs.
The one exception may have been the description of two types
of non-locality, spatial non-locality and temporal non-locality.
This was a revelation to me, but should have been apparent in
the Wheeler delayed choice experiment and the regular two-hole
experiment. The
measurement of the particle happens in what appears to be a reverse
temporal association-- backwards in our perceived time frame.
I made a leap of logic, however, when I recalled what Rudy Rucker stated: that it
doesn't matter whether you think of the fourth dimension as a
spatial dimension or a temporal dimension, because they are the same
thing. What this could
mean is that non-locality is really only spatial and that we must
entertain the notion of a higher dimensionality.
This could in fact be the solution to understanding non-locality—that the universe is connected through a higher dimension.
Programming the Universe
Seth Lloyd
The
universe we observe did not come about by a grand designer, or by
purely random classical events, according to Seth Lloyd.
Rather, the complexity we observe in the universe is a result
of random programs run on the largest computer imaginable—the
universe itself.
All physical
interactions of matter and energy produce information, and the
universe, just like the computer sitting on your desk, uses simple
Boolean logical operations such as "NOT", "AND", "OR", and "COPY".
For example, the spin of an atomic particle can be represented by a single bit: "0" for the "up" spin and "1" for the "down" spin.
An interaction with an electromagnetic field can “flip” these
spins from a “0” to a “1” and vice versa, just as a “NOT” operation
would do on any universal computer.
Likewise, an atom whose electron resides in the lowest energy state, represented by a "0" bit, can absorb a photon of a certain frequency and jump to a higher energy state, represented by a "1" bit.
That process can be reversed when an atom emits a photon and
returns to the ground state. The universe is constantly computing
and the programs it uses are the laws of nature.
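The operations named above can be mimicked directly in code. This is my own minimal sketch, not Lloyd's, with spin "up" encoded as 0 and spin "down" as 1:

```python
# Toy versions of the Boolean operations the review lists, acting on
# bits that stand in for particle spins.
def NOT(b):
    return 1 - b        # e.g., an electromagnetic "flip" of the spin

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def COPY(b):
    return (b, b)

spin = 0                # "up"
spin = NOT(spin)        # field interaction flips it to "down"
print(spin, AND(1, 0), OR(1, 0), COPY(1))  # 1 0 1 (1, 1)
```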
The designs most common in nature are those created by the simplest, most probable programs.
Information always has a physical representation and like energy it
is conserved. Although
information never diminishes and adheres to the laws of
thermodynamics, quantum theory, and Relativity, information can be
created from nothing.
For example, when a measurement is performed to determine the
dynamic attribute of polarization for a photon of light, information
simply materializes.
This helps explain many
of the perplexities posed by quantum measurement problems, such as
the “spooky action at a distance” that so bothered Albert Einstein
in his famous refutation of quantum theory called the EPR
experiment.
Specifically, he wondered how the measurement of one particle could
instantaneously affect its entangled partner half-way across the
universe in an apparent violation of causality.
Seth Lloyd says that Einstein can rest easy, because there is
no observable instantaneous effect from one particle to the next in
violation of Relativity.
Rather, a quantum bit called the qubit, residing in a superposition
of states (a “0” and “1” simultaneously) shares a single bit of
information. Neither
particle has any polarization before the measurement.
After a measurement or observation, decoherence occurs and
the superposition of states collapses into a single classical bit,
resulting in information about both particles.
Information
theory is so fundamental that it could encompass the so called
Theory of Everything that physicists have sought for the last
century. Lloyd explains
that there is a possibility that information theory can unite
gravity with both Relativity and Quantum theory.
This would be the final piece of the puzzle.
Programming the Universe was a wonderful complement to Jeffrey Satinover's book The Quantum Brain and Charles Seife's book Decoding the Universe. It has
become apparent from reading these books that information is the
most basic entity of reality, and that the universe evolved its
complexity from the bottom up.
The Self-Aware Universe
Amit Goswami, Ph.D.
It has been said
that philosophers never answer any questions; they simply pose them.
Amit Goswami does both.
Armed with a keen understanding of philosophy and an academic
background in theoretical sciences, Goswami is able to both
succinctly state the essence of a problem and logically hypothesize
an answer, while fending off the criticisms offered by others in his
field.
Goswami
tackles what I consider the most important question of our time:
What are the implications of quantum physics for our everyday
reality? Numerous
attempts have been made to make sense of the oddities and paradoxes
of quantum physics, and there have been as many as a dozen proposals
to explain the implications for our everyday reality.
Among the propositions have been Bohr’s Copenhagen
Interpretation, Everett’s many-worlds Interpretation, and what some
have called the most naïve explanation—Consciousness Created
Reality. The advocates of this Idealist philosophy, who include John von Neumann, Eugene Wigner, Fred Alan Wolf, and the author of this book, unashamedly insist that objects such as the moon don't exist until they are observed.
Goswami
doesn’t reject other interpretations of reality outright, but
rather, he incorporates some of the best points into his strong
anthropocentric philosophy of Monistic Idealism, which posits that
the universe exists in a transcendental domain of potentiality, and
it is we, the observer, who collapse this potential into the
corporeal world.
The fact
that observers have not been here during a majority of the
universe’s existence is no problem for Goswami, as he explains that
a myriad of universes have existed in a transcendental realm outside
of space/time, and an observation now can go "back in time" to create the universe we know today.
Strange as it may seem the notion that a choice in the
present can affect past events is strongly demonstrated in
Wheeler-style delayed choice experiments.
One of the thorny
issues that always crops up with Consciousness Created Reality is
the separation that seems to exist between the observer and the
observed. Why do we feel
discrete from what we observe, and why is there a sense of a
mind/body duality?
Goswami contends that the brain/mind is an object in the universal
transcendental domain like any other object, and that the brain/mind
is a measuring device with both classical and quantum components.
We remain largely unaware of the creative choice that our
subconscious mind makes when it exercises a collapse.
The choices we make are very creative when we are young, but
as we become more conditioned the choices are skewed toward
predictable patterns. It
is these conditioned patterns and our memories of past choices that
give us the sense of our separate “selves”—our individual egos.
Having read "The Conscious Universe" and "The Non-Local Universe," I was a bit leery about picking up yet another book with such a similar title. I was glad that I did. "The Self-Aware Universe" is one of those landmark publications that can change one's entire view of reality.
For me,
“Consciousness Created Reality” has always been the most
romantically satisfying explanation of the paradoxes of quantum
experiments. Goswami has
made it the most scientifically satisfying as well.
The Quantum Brain
Jeffrey Satinover
Prior to reading The
Quantum Brain I had supposed that evolutionary theory was an
incomplete theory. Just
as quantum theory is a broader theory than classical physics, I had
imagined that one day a broader, more inclusive theory would come
along to explain the shortcomings of evolutionary theory.
I certainly was no creationist, but how, I wondered, could
one explain such things as the perfectly engineered bird’s wing when
we are told that evolution happens incrementally over generations by
pure chance? After all,
a partially developed wing incapable of flight is useless to an
organism and therefore, according to the principle of survival of
the fittest, that organism should quickly die out.
In addition,
aren’t most genetic mutations harmful rather than helpful?
Must there not be some teleological principle involved where
organisms move toward some “goal” as the ancient Greeks believed?
No.
It has now become apparent to me from reading
The Quantum Brain that self-organizing systems with no guiding hand,
can, and do, form global complexity from initial random local
conditions. And they do
this through a learning process of reiterative algorithms.
Classical chaotic systems can be seen in inanimate processes, from fluid dynamics to the growth of crystals to cellular automata.
But it is life itself that takes full advantage of quantum
chaos. In fact, life
would not be possible without protein’s ability to take advantage of
the principles of indeterminacy, indistinguishability, and quantum
tunneling.
DNA, which itself is manipulated by quantum effects, holds
the blueprint for constructing proteins.
But once synthesized, proteins can manifest into trillions of
possible shapes to achieve their desired function in the cells.
In a classical deterministic universe, it would take proteins
billions of years to find the correct lowest energy folding
patterns, but with quantum tunneling, it takes only seconds.
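The scale of that classical search can be sketched with a Levinthal-style back-of-envelope count; the three-conformations-per-residue figure is my own assumption, not the book's:

```python
# If each of 100 residues had just 3 possible conformations, an
# exhaustive classical search would have to sift through 3**100
# candidate folding patterns.
n_residues = 100
shapes = 3 ** n_residues

print(len(str(shapes)))  # the count runs to 48 digits
```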
With the
help of phonons, sound particles used by cells to communicate, quantum
tunneling effects can be extended over the length of these huge
protein molecules. In an
upward amplification, these proteins change the shape of the
microtubules, the highways of transportation and information
processing in the cells. These local processes are further amplified
into the neural networks, which in the brain, give rise to global
organization and intelligence.
We are
machines, but we are not deterministic machines.
Pure chance, the progenitor of free will, can only take place as a result of quantum indeterminism at the atomic level.
At the instant of decoherence, what we think of as
consciousness, the collapse of the superposition of states creates
information and reality.
Life, the product of this reality, has in turn adapted to take
advantage of quantum effects, by amplifying them and creating ever
more complex life forms.
Life, as Jeffrey Satinover envisions it, is an information creating,
quantum machine.
Jeffrey
Satinover does an exquisite job explaining how life and intelligence
emerges from indeterminate quantum chaos, and he has changed my
philosophy of life, evolution, and consciousness.
Quantum Physics:
Illusion or Reality?
Alastair Rae
Since the
formulation of quantum theory in the 1920s, the Copenhagen
interpretation of reality has been the mainstream view among
physicists. But this
interpretation has been uncomfortable for many, for it raises many
paradoxes. The lack of cause and effect (indeterminism), the so-called "observer effect" (the quantum measurement problem), and non-locality are among them.
Wasting no time in this 118-page book, Alastair Rae grabs the reader with the very first sentence, quoting Albert Einstein's now-famous pronouncement: Does God play dice [with the universe]?
Using impeccable logic and only a bit of mathematical jargon, which the reader can skip, Rae sets out to solve many of these paradoxes. Citing
experiments with polarized photons of light, he asks:
What
exactly constitutes a measurement?
Does a measurement occur when a record is made?
Or does it take consciousness to collapse the wave into a
definitive particle? Is
there a resolution to the Schrodinger Cat paradox? How can we
explain nonlocality?
Rae
systematically entertains and rebuts in a convincing and objective
way many different philosophies put forward to make sense of quantum
reality. Some have
claimed, most notably Niels Bohr, that it’s the interaction of the
particle with a macro-measuring device that instigates the collapse.
Others believe that it takes a consciousness to create
reality. Still others,
looking for a way to save determinism, and circumvent the
measurement problem latch on to Hugh Everett’s many-world
interpretation.
In this outlandish scenario a parallel universe splits off with each measurement, creating an almost unimaginable number of universes. It is mathematically elegant, but as Rae quotes physicist Paul Davies, “most would believe that the losses involved in the ‘extravagance with universes’ heavily outweigh the gains from the ‘economy with postulates’.”
Ironically, as Rae points out, most scientists claim to be “positivists,” believing that it is meaningless to speculate about unobservable quantities. Yet they apparently have no problem believing in a myriad of unobservable and unmeasurable universes, completely and irreversibly cut off from our own.
In the final two chapters Rae objectively entertains what he believes is the most likely resolution of the quantum measurement problem. The idea was first proposed by Ilya Prigogine, who won the Nobel Prize for his work in the field of irreversible chemical thermodynamics. Prigogine’s classical idea states that there is an irreversible arrow of time and that the second law of thermodynamics is never violated.
Macro systems such as weather patterns or pots of boiling water are
intrinsically unpredictable and irreversible, while quantum systems
can be reversible.
Before a measurement of a quantum particle is made and recorded, the particle is not a part of the reality of space and time, and therefore physical law does not apply to it.
As Rae explains in his polarized photon experiments, if no
measurement is made of a quantum system no impression has been made
on the universe, and the information which could have been obtained
can be reversed and destroyed.
If, however, a measurement is made, a change of some sort has occurred, either in the measuring device or in our brain.
The measurement has impacted the universe in some manner, and
as a result the macro system must now follow the second law of
thermodynamics, which has an arrow of time and hence is
irreversible.
Rae states
that “If we follow Prigogine’s approach, indeterminism becomes an
implicit part of classical physics.
Our inability to predict the future motion of the components
of a many-particle system is no longer to be thought of as a
limitation on our experimental or computational ability, but as an
inevitable consequence of the laws of nature.”
Has Alastair Rae accomplished what he set out to do in this book? I don’t believe so. At the beginning of the book he states that he will tackle the problem of indeterminism, yet he spends most of his time attempting to explain the quantum measurement problem, which is something quite different.
His attempt falls short on several points. First, a Prigogine macro system is indeed unpredictable, but it is not indeterminate as Rae claims. Rather, it is a determinate and irreversible system having an arrow of time and an initial cause, no matter how subtle.
Secondly, he
fails to address the process of nuclear decay and the jump of the
electron from one orbit to another—both of which are “real” and
indeterminate.
Finally, in regard to the quantum measurement problem, Rae does not take into account recent experiments done with photons as cited in Scientific American (November 1991). In this particular experimental setup at the University of Rochester, researchers demonstrated that “The mere possibility that the paths can be distinguished is enough to wipe out the interference pattern.”
There is no measurement made, no record made, and no interaction
with a macro system.
Therefore, it seems that Rae’s explanation of a resolution to the
problem through a record in a Prigogine system is invalid.
This is still a
very well written and provocative book and I would recommend it for
those who are interested in an objective treatment of the quantum
paradoxes.
How Physics’ Weirdest Theory
Explains Life’s Biggest Mystery
JohnJoe McFadden
JohnJoe McFadden, professor of molecular genetics at the University of Surrey, England, identifies four major problems with Darwinian evolutionary theory. The first, and most well known, is the gaps in the fossil record. These are often explained away as simply an incomplete fossil record; however, as McFadden states:
“… recent exhaustive studies of well-preserved species, such as marine snails, tend to support the view that, generally, evolution seems to hop and jump, rather than crawl.”
McFadden points out that some paleontologists, such as Stephen Jay Gould and Niles Eldredge, have come up with a more compatible hypothesis called “punctuated equilibrium,” which coincides more closely with the fossil record. However, many prominent evolutionary biologists, including Richard Dawkins, see little in the way of a mechanism to account for these large jumps in the evolutionary tree.
The second
problem is the lack of precursors for the approximately one thousand
protein families.
McFadden states: “New
protein families must have arisen from existing proteins by some
kind of mutational process but how their sequence traversed this
vast empty sequence space devoid of Darwinian intermediates is a
mystery.”
The third problem is the irreducibility of metabolic pathways. McFadden gives the example of AMP (adenosine monophosphate), which is a precursor of ATP. Thirteen different enzymes are needed to synthesize AMP, yet there exists no evidence of advantageous precursors of this biochemical in other organisms. How these sequences arose within the present Darwinian theory of gradualism is difficult to explain.
Finally, there is
the problem of directionality of random mutations. Mutations happen
blindly with no regard as to whether the mutation will be harmful or
beneficial. Present
evolutionary theory accounts for adaptive changes in the organisms
through the process of natural selection.
Those mutations which turn out to be advantageous will survive, and the organisms carrying them will pass their genes on to the next generation. This is the
bottom-up idea, in which changes happen randomly in the DNA.
But there could possibly be something like a Lamarckian or
top-down process as well.
McFadden cites an experiment which lends credence to this idea:
John Cairns of the Harvard School of Public Health incubated
two cultures of E. coli. (“A” and “B”).
Both cultures were deficient in an enzyme needed to
metabolize lactose. He
fed culture A only lactose, and he fed culture B a yeast extract
that does not require the missing enzyme for metabolism. He was
investigating whether or not there would be a difference in the rate
of gene mutation between the two groups.
Culture A went into a latent phase and grew very slowly as
expected, and culture B thrived.
To his surprise, the stressed culture A produced many more
mutations for the specific gene responsible for the enzyme needed to
metabolize lactose than culture B.
The rate of mutation for other genes not related to the lactose enzyme was the same in both cultures, which suggests that there must have been a specific signal from the environment that caused a top-down change in the DNA. These findings have been supported by other epigenetic research.
McFadden
believes that quantum theory could help fill in the gaps of
evolutionary theory as well as explain how life originated.
He readily admits the possibility that life originated by
pure chance no matter how unlikely.
If this is the case, then we would expect that we were the
only life in the universe—a depressing proposition according to
McFadden. Of course it
is possible that life could have arisen only once in another part of
the universe and our planet was “seeded” by a comet or some other
means.
What are the chances that life could have arisen randomly? Chance and probability are often misunderstood. If I flipped a coin 10 times and each time the result was a head, one might think this very unlikely, and yet the chance of throwing 10 consecutive heads is no more remote than that of any other particular sequence of heads and tails. If, on the other hand, I asked you before I threw the coins what the chances are of throwing 10 heads in a row (or any other specific sequence, for that matter), the answer would be 1 in 2 to the 10th power, or 1 in 1,024.
So, it is possible that life could have happened once, but
the likelihood of it happening twice would be extremely low.
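The coin-flip arithmetic here can be checked in a couple of lines (my own sketch, not anything from the book):

```python
from fractions import Fraction

# Each fair flip halves the odds, so any one named sequence of
# 10 flips (e.g. 10 straight heads) has probability (1/2)**10.
p_named = Fraction(1, 2) ** 10
print(p_named)  # 1/1024

# After the fact, 10 heads is no more surprising than any other
# specific run: every length-10 sequence has this same probability.
```

The same reasoning is why a single improbable origin-of-life event is not ruled out, while expecting it twice multiplies already tiny odds together.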
Fred Hoyle, who championed the “steady state” theory of the universe, wrote in his book The Intelligent Universe: “A generation or more ago a profound disservice was done to popular thought by the notion that a horde of monkeys thumping away on typewriters could eventually arrive at the plays of Shakespeare. This idea is wrong, so wrong that one has to wonder how it came to be broadcast so widely. The answer I think is that scientists wanted to believe that anything at all, even the origin of life, could happen by chance…” The fact is, as he points out, that not enough space exists in the observable universe to hold the number of monkeys needed to randomly type even one scene of any Shakespeare play.
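Hoyle’s point can be made concrete with a rough back-of-the-envelope sketch; the alphabet size and scene length below are illustrative assumptions of mine, not figures from the book:

```python
import math

# Assume a monkey types uniformly at random from 27 symbols
# (26 letters plus a space), and a short scene runs ~10,000 characters.
ALPHABET = 27
SCENE_LEN = 10_000

# Log-probability that a single attempt reproduces the scene exactly:
log10_p = -SCENE_LEN * math.log10(ALPHABET)
print(f"about 1 in 10^{-log10_p:.0f}")  # roughly 1 in 10^14314

# For scale, the observable universe contains only ~10^80 atoms,
# so no physically possible number of monkeys closes that gap.
```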
Either way, life could have been a solitary occurrence.
Most, however, believe that life is ubiquitous in the
universe.
McFadden is one who believes that life is ubiquitous in the universe, and he calls upon quantum theory to explain how life and its evolution could have been directed. Some features of quantum processes have already been identified. For example, quantum tunneling is essential for the transport of protons (the nucleus of the hydrogen atom) in such molecular events as protein folding and DNA replication. As allowed by the uncertainty principle, a particle can simply jump the classical energy barrier and appear in another location.
Another interesting aspect of quantum theory is that information can be stored in a state of superposition, the promise of quantum computing and a feature of non-locality. In apparent violation of the first law of thermodynamics, information can be created from nothing.
For example, light has a dynamic (observer-created) attribute of polarization. It has no polarization until it is measured.
Before measurement the information exists as a qubit (a
quantum bit) which is stored everywhere and nowhere as a
superposition. Upon
measurement the superposition collapses into one of two possible
outcomes.
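As a toy illustration of that collapse (my own sketch, not anything from the book), a polarization qubit can be simulated as a pair of amplitudes that yields one of two outcomes with Born-rule probabilities:

```python
import random

def measure(amp0: float, amp1: float) -> int:
    """Collapse a qubit amp0|0> + amp1|1> to one classical outcome.

    Born rule: outcome 0 occurs with probability amp0**2
    (real, normalized amplitudes assumed for simplicity).
    """
    assert abs(amp0 ** 2 + amp1 ** 2 - 1.0) < 1e-9
    return 0 if random.random() < amp0 ** 2 else 1

# An equal superposition: the "angle of polarization" is undefined
# until measurement, which then yields one of two outcomes at random.
random.seed(0)
trials = [measure(2 ** -0.5, 2 ** -0.5) for _ in range(10_000)]
print(sum(trials) / len(trials))  # close to 0.5
```

Before the call to `measure`, nothing in the state picks out either outcome; the definite result exists only once the measurement is made.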
McFadden believes that quantum superposition resides within the cell among the various DNA and protein molecules. The cell itself can act as the observer and collapse the superposition into a single event in a process called decoherence, a popular variation of the Copenhagen interpretation of reality. Niels Bohr once stated that no subatomic event is a real event until it is observed, but recent experiments suggest that an actual observer is not necessary for the collapse of the wave-function. Many think that atomic particles interacting with the environment are enough to collapse a superposition into a single event. Millions of possibilities can exist in superposition, and when a cell senses an environmental need it can collapse those millions of possibilities into a single reality: a beneficial mutation.
Physics Encounters Consciousness
Bruce Rosenblum, Fred Kuttner
Quantum theory is responsible for one third of our economy and is the most successful theory of all time, yet few physicists want to face the quantum enigma, the “meaning” of this theory.
Although
the solution to this enigma remains unresolved after eighty years,
the problems can be simply stated, and the authors of this book do
an exemplary job of presenting the quantum enigma for anyone to
understand.
The “three pillars” of the standard Copenhagen interpretation of reality (quantum probability, the uncertainty principle, and complementarity), as well as Bell’s theorem of inequality, are all explained in great detail, along with the implications they offer for consciousness and free will.
The two most important questions of our era, the quantum enigma and consciousness, may be fundamentally related, and the resolution of one could have implications for the other. “There is no way to interpret quantum theory without in some way addressing consciousness” (156).
We all know
that we are conscious, because consciousness is a subjective
experience. This is the
first book I’ve read on this topic that offers an “objective” proof
that consciousness exists using the simple principles of quantum
theory. This alone makes
the book worth reading.
Wolfgang
Smith
In this scholarly, yet accessible, book, Wolfgang Smith draws a distinction between his own philosophical views and those of Werner Heisenberg. Believing that quantity and scale alone do not distinguish the quantum world from the everyday macro world of classical physics, Smith rejects Heisenberg’s view and aligns himself with the philosophy of Niels Bohr, who once asserted that there is no quantum world. Rather, Smith arbitrarily divides the world into three separate categories: the corporeal, the subcorporeal, and the transcorporeal.
The corporeal world is that which we perceive with our
senses, our everyday reality of sight, sound, touch, and smell.
Corporeal objects, Smith maintains, are not anything like the physical world, but merely occupy the same space. Though most idealist and representationalist philosophers, beginning with Descartes and John Locke and continuing to the present, consider secondary qualities such as taste, sight, and sound to be subjective qualities imparted by the observer, Smith considers these qualities just as objective as mass and quantity, maintaining, incredibly, that even the red color of an apple is an objective quality independent of observation.
This corporeal world of the senses is presented by the physical or subcorporeal world: Plato’s universal forms (nature in and of itself), perfectly described by mathematics yet imperceptible to the senses. Atomic and subatomic particles, the transcorporeal world, can never be perceived and must be measured by a subcorporeal measuring device, such as a Geiger counter or bubble chamber. These devices, in turn, make a presentation of themselves by making a transformation into the corporeal world of perception.
There is no indeterminacy as suggested by Heisenberg, nor is there
any wave/particle duality or quantum measurement problem as
described by Bohr. Smith
maintains that the state vector collapse does not happen at the
level of the atom, but occurs the moment a subcorporeal object
passes into the corporeal domain.
Macroscopic objects of classical physics are every bit as
“potential” as subatomic particles and it is measurement that
actualizes the potentia from the physical to the corporeal
level of reality.
As a
result, Smith believes that there is no mystery in the Schrodinger’s
Cat paradox. It is not
necessary he claims, for an observer to peer into the box to
determine if the cat is dead or alive, since the cat, which belongs
to the corporeal world, collapses its own state vector.
Just how the transition from the subcorporeal to the corporeal world is achieved isn’t addressed directly, but one must infer from statements such as “the entire universe is created for us” that he is an adherent of the strong anthropic principle.
As a result, the quantum measurement problem is not solved
but instead, is merely shifted from the quantum domain to a supposed
transformation between the subcorporeal and corporeal domains under
equally mysterious circumstances.
Smith
believes that “God plays dice” and that it is only an averaging
effect of large numbers at the classical level of physics that
accounts for the deterministic appearance of reality.
In
the end Smith disappoints somewhat by reverting to a deity to
explain what is at present still misunderstood, betraying his
rational sensibility.
Train Your Mind, Change Your Brain
Sharon Begley
Foreword by the Dalai Lama
“One of the cardinal assumptions of neuroscience is that our mental processes stem from brain activity: The brain creates and shapes the mind, not the other way around. But the data reported here now suggest there may be a two-way street of causality…” (xii, preface)
The time was 2004. The place was the exiled home of the Dalai Lama in Dharamsala, India. The event was the annual conference of the Mind and Life Institute, concerning the topic of neuroplasticity.
Perhaps it shouldn’t seem all that unlikely that two such disparate philosophies, Western science and Buddhism, should come together to discuss what one discipline discovered subjectively over a period of centuries and the other “objectively” over the past few decades: that the brain has the ability to change its own architecture over an entire lifetime. After all, both disciplines are dynamic in their search for knowledge.
Buddhism challenges the traditional belief
in an external,
objective, reality.
Instead, it teaches that our reality is created by our own
projections…
If we place all our hopes and fears in the outside world,
we have quite a challenge, because our control of the
outside world
is weak, temporary,
and even illusory (14).
Significant self-modification of the brain was thought to be absent in adults, but this is not the case for children.
The neuroplasticity of a newborn’s brain is legendary.
At birth, each of the 100 billion neurons connects to
approximately 2,500 other neurons and over the next two or three
years a flurry of dendritic branching takes place until, on average,
almost 15,000 connections are made by each neuron.
At the age of just three a human has reached the pinnacle of
neural connections.
After
age three a process of pruning takes place in which some 20 billion
connections are lost each day until early adolescence.
A second process of pruning takes place at around age 20.
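Taken at face value, the chapter’s figures can be tallied in a short sketch; the ten-year pruning window below is my assumption, since the text says only “until early adolescence”:

```python
# Figures as given in the chapter:
NEURONS = 100e9          # neurons at birth
CONN_BIRTH = 2_500       # connections per neuron at birth
CONN_PEAK = 15_000       # connections per neuron around age three
LOST_PER_DAY = 20e9      # connections pruned daily after age three
YEARS_PRUNING = 10       # assumed span, age ~3 to early adolescence

birth_total = NEURONS * CONN_BIRTH           # 2.5e14 connections
peak_total = NEURONS * CONN_PEAK             # 1.5e15 connections
pruned = LOST_PER_DAY * 365 * YEARS_PRUNING  # 7.3e13 connections

print(f"peak total: {peak_total:.2e}")
print(f"pruned:     {pruned:.2e} ({pruned / peak_total:.1%} of peak)")
```

On these numbers the daily pruning, dramatic as it sounds, removes only a few percent of the peak total, which is consistent with the brain remaining densely connected into adulthood.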
As a result of this diminution of neural connections after
childhood, it is not surprising that neuroscientists believed the
adult brain to be fairly static and unchanging, with little hope of
new growth. Other
evidence supported this conclusion.
First, neurons do not undergo reproductive division; therefore, it was reasoned that no new neurons could be born in the brain. The second and more fundamental line of reasoning goes right to the heart of the mechanistic paradigm of classical physics.
If the brain is a deterministic system in which connections
are established by a grand genetic design, then changing the
architecture of an estimated 100 billion neurons with 100 trillion
connections should have catastrophic results.
This would certainly be true for a circuit board of a
computer. It is no
wonder that the prospects for neuroplasticity in the adult brain
seemed dim.
The discovery overturned generations of conventional wisdom
in neuroscience.
The human brain is not limited to the neurons
it is born with, or even the neurons that fill it after the
explosion
of brain development in early childhood (65).
However, as
new experiments with both human subjects and lab animals began to
come in, it became clear that the brain does change during adulthood
as a result of external stimuli including physical exercise, and
these changes are not simply more dendrite connections, but new
neurons originating from stem cells in the hippocampus area of the
brain.
If you can get someone to exercise, his depression lifts.
Neurogenesis may be the ultimate antidepressant (70).
Neuroplasticity is evident in every part of the brain. The somatosensory cortex, the homunculus-like region of the brain which receives and interprets touch sensations from all parts of the body, the auditory cortex, the visual cortex, and, most surprisingly, the motor cortex, which is responsible for muscle movement, are all subject to change throughout life.
For example, it has
been demonstrated that when the visual cortex no longer receives
signals from the optic nerve because of blindness, the visual cortex
will reorganize to receive auditory signals.
Likewise, when deafness occurs, the auditory cortex will
begin to receive and interpret visual signals.
Mastery skills such as playing the piano or repetitive
athletic endeavors will increase the size of the area of the motor
cortex responsible for these movements.
But these
experiments were only of passing interest to the Dalai Lama. He
wanted to know if experiments could clearly show what he knew
through experience. Can
it be shown that the mind alone affects the brain?
New scientific evidence suggested that awareness is essential in producing brain changes. In experiments with monkeys, it was shown that only when the animals were focusing on, or paying attention to, external stimuli did they reap the benefits of increased brain function for performing the tasks involved.
Brain specialization is not a function of anatomy
or dictated by the genes.
It is a result of experience (108).
These were
the first steps in answering the Dalai Lama’s question, but more
data was needed, and perhaps it was the hope of verifying this
belief, as well as his life-long interest in science, that convinced
him and fellow Buddhist monks, lamas, and teachers to trek half-way
around the world to the University of Wisconsin, where such research
was being done. The
specific question the Dalai Lama had, as he inspected the strange
EEG and MRI machines was this:
Can these devices tell whether or not thoughts occur before
electrical activity is registered in the brain?
This question was never answered, nor could it be. The researchers had another question in mind.
They already knew that the brain state changes during
meditation, but does the practice of long term meditation
permanently change the brain? Does our overall sense of happiness
and well-being change as a result of meditation?
And more fundamentally, does mind affect brain as the Dalai Lama maintains?
The machine we call the brain is modified
throughout life (36).
Researcher Richie Davidson of Harvard and MIT wanted to know if cognition and mood were related. In the 1970s neuroscience treated the brain as very compartmentalized. The limbic system was considered to be the seat of emotions, while the frontal lobes were known to be responsible for higher-order cognition. But while at MIT, Davidson discovered that the frontal lobe and the limbic system were interconnected, and subsequent evidence from brain-injured people showed that injury to the frontal cortex could severely impact the mood of the individual.
This neuroanatomy is consistent with two thousand years of Buddhist thought, which holds that emotion and cognition cannot be separated.
Davidson found
that when brain waves were monitored by an electroencephalogram it
was possible to detect the difference in activity between the left
and the right frontal lobes of the brain.
A clear correlation was established between this brain wave
variability and the incidence of happiness or sadness of
individuals. Persons who have higher brain wave activity in the right frontal lobe compared to the left frontal lobe have a lower baseline of happiness. These people tend to exhibit feelings of sadness, anxiety, and worry. Those who have higher brain wave activity in the left frontal lobe tend to have more positive emotions. These individuals are generally more enthusiastic, energetic, and happy.
Of course life
circumstances can change our happiness, such as illness, the death
of someone close to us, financial distress and a multitude of other
unfortunate circumstances.
Likewise fortuitous events can make us happier, but these
things usually have only a temporary effect on our overall sense of
happiness and joy.
Eventually we tend to gravitate to what has been called the “base
line” or “set point” of happiness.
Davidson
now wanted to know if the emotional set point of happiness could be
changed by the cognitive process of mindfulness meditation.
In May of 2001 Davidson had a chance to answer these questions, thanks to the Tibetan Buddhist monks who traveled to his lab. When hooked to an EEG machine, the readings from the left prefrontal cortex of monks who had practiced compassion meditation for over thirty years went off the charts while they meditated. Activity in the left frontal lobe was over 90 percent higher than in any other individual ever measured!
Gamma waves, brain waves produced when having an “aha” experience, were very high for Davidson’s meditating subjects as compared with control groups. What is more, these levels of gamma waves, which normally last for just milliseconds in the control group of novice meditators, lasted as long as five minutes for the adept compassion meditators. Even more impressive is that once the subjects stopped meditating, the gamma waves continued, demonstrating that cognitive processes have enduring effects on the architecture of the brain.
From a scientific perspective
the honest answer is that we don’t know
how mental processes
influence the physical brain (156).
I suspect that neither the Buddhist practitioners nor the neuroscientists have it quite right. Buddhist philosophy concerning mind is similar to the philosophy of dualism, which in Western science goes back to Descartes. Put in the language of a Greek syllogism, Descartes’s reasoning on the dualism of mind and matter would probably have gone something like this: All matter is subject to the deterministic laws of nature. I have free will and can think any thought I please; therefore, my thoughts are different from matter and are not subject to the laws of physics. Mind and matter must be distinct.
The trouble
with “self-evident” propositions in the deductive reasoning process
is that initial
propositions are often untrue.
Modern physics has shown us that at the most basic level of
subatomic particles, matter has none of the familiar attributes we
usually ascribe to it.
Matter has no solidity or texture, smell, color, or even a discrete
location in space.
When matter becomes epiphenomenal, no distinctions can be made, and
the mind/matter dualism disintegrates.
If there is no difference between mind and matter, then one
cannot act on the other.
The second
proposition is flawed as well.
We can think many thoughts, but our thoughts are
not infinite.
We can only be aware of the thoughts we can conjure up, but
we cannot be aware of those we cannot imagine.
Even though our ego tells us that our thoughts are original
and novel, most of our thoughts come from experience, and
imaginative variations of familiar things.
We cannot think of things that are totally unfamiliar.
This is why our projections of life in the future always fall
short of reality. Our
ancient ancestors could not have imagined nuclear fusion, radio
waves, electron microscopes, or most other modern contrivances.
It is difficult to see how a mind that is restricted by
convention can freely act upon the brain.
Equally blatant are the shortcomings of cognitive scientists
who are looking for a conventional mechanistic explanation of how it
is possible for the mind, which they correctly contend is an
emergent property of the brain, to have a causal effect on that
which produced it in the first place.
If the brain is a classical system as they contend, a logical
inconsistency occurs. A
cause and effect loop is created, which is not supported by
classical physics.
Experiments with subjects who have electrodes placed on their heads to monitor brain activity have shown that brain activity begins even before a person decides to do something.
In other words, if you decide to get up out of your chair and
go to the refrigerator to get something to eat, brain activity has
already begun before you become aware of the desire to take this
action. The
Buddhist philosophy supports the notion of free will, yet this
experiment is contrary to such a belief.
If the subject in this experiment wasn’t even aware of his
own desire to go to the refrigerator, then whose mind is it that
preceded the brain activity?
If there is some universal mind dictating our actions, then
we have no more free will than the determinist cognitive scientists
would have us believe.
Free will cannot be justified in either the Buddhist
philosophy, or the determinist philosophy of the cognitive
scientists, whose advocates believe that all of our actions and
thoughts are created by chemical and electrical events in the brain.
But if we think of the brain as a quantum machine instead of
a classical machine, then we can bring to bear all of the quirky
properties of the quantum world.
In the
quantum world, it is not the particle or the field that is the most
basic entity, it is information.
Information is always physical, whether it is marks on paper,
holes in a punch card, atoms in an electro-magnetic state on a CD,
photon polarization, or up/down spin on an electron.
All information has a physical representation, and the most
basic unit of information is the qubit.
The qubit abides by the laws of thermodynamics and relativity, except for two principles that are relevant to our topic here.
The first law of
thermodynamics states that energy is conserved; it can be neither
created nor destroyed.
But in a process known as decoherence, information can be created
from nothing. For
example, light has a dynamic attribute called polarization.
Before it is measured, light has no angle of polarization.
It is the very act of measurement, in and of itself, that
gives light its angle of polarization.
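As a quick illustration (my own Python sketch, not from the book): a photon prepared at one angle has no definite answer at an analyzer set to a different angle, only a probability, and the measurement itself is what leaves it with a definite polarization.

```python
import math
import random

def measure_polarization(photon_angle_deg, analyzer_angle_deg, rng):
    """Born rule for a single photon: it passes an analyzer with
    probability cos^2 of the angle between photon and analyzer,
    and the measurement leaves it with a definite angle."""
    delta = math.radians(photon_angle_deg - analyzer_angle_deg)
    passed = rng.random() < math.cos(delta) ** 2
    # After measurement the polarization is definite: either the
    # analyzer angle (passed) or perpendicular to it (blocked).
    return passed, analyzer_angle_deg if passed else analyzer_angle_deg + 90

# A photon prepared at 30 degrees meets an analyzer at 0 degrees:
# cos^2(30 deg) = 0.75, so about three quarters of the photons pass.
rng = random.Random(0)
trials = 10_000
frac = sum(measure_polarization(30, 0, rng)[0] for _ in range(trials)) / trials
print(frac)  # close to 0.75
```

Before the analyzer, only the probability exists; afterward, the angle is a fact.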
Also, according
to Charles Seife, in his book
Decoding the Universe, information does not violate
Relativity, except for one strange quirk involving the rule that no
effect can precede its cause.
It seems that the time-symmetric qubit has no "before" or
"after". The famous
“double-slit” experiment in physics confirms this apparent
backwards-in-time effect.
These facts are
relevant to our discussion of free will.
The quantum brain is seething with potentia in the form of
trillions of qubits of potential information in a coherent state.
In a process known as decoherence, observation by our
conscious mind, which is itself a product of our brain, collapses
these potentia creating information and our reality in the process.
This information created through observation by our conscious
mind loops back and affects our brain and subsequent actions, giving
rise to the very consciousness that observed it.
There is no before or after in the quantum information world;
therefore, no causal paradox arises as it does in the classical
model. We need not
wonder if brain activity created consciousness or consciousness
created brain activity.
A quantum loop occurs, and frees us from the idea of classical
determinism. Free will
is restored.
This was
certainly a thought-provoking book, and I recommend it to anyone
interested in the brain and consciousness.
The Mind and the Brain
Jeffrey M. Schwartz, M.D. and Sharon Begley
Neuropsychiatrist
Jeffrey Schwartz stood at the precipice of a great divide: a divide
between classical physics and quantum physics, between behaviorism
and cognition, between physiological determinism and free will,
between John B. Watson and William James.
Behaviorist John
B. Watson, an adherent of the philosophies of classical physics,
functionalism, and determinism, believed that consciousness is
nothing more than the product of causal events in the brain.
These notions implicitly deny the idea of consciousness and
free will, and his writings served as the guiding philosophy for the
behaviorists who were providing therapy for obsessive/compulsive
(OCD) patients in the 1980s.
“In the United States, therapists in the forefront of developing
these techniques have had patients rub public toilet seats with
their hands and then spread—well, then spread whatever they touched
all over their hair, face, and clothes.
They have had patients rub urine over themselves.
They have had patients bring in a piece of toilet paper
soiled with a minuscule amount of their fecal material and rub it on
their face and through their hair during the therapy sessions….In
other cases patients are prevented from washing their hands for days
at a time, even after using the bathroom” (3).
Jeffrey Schwartz
was appalled at this inhumanity.
A student of Buddhism, Schwartz believed that mindfulness or
“bare attention” might be more suited in helping his OCD patients
than the cruel and unnecessary treatment at the hands of the
behaviorist therapists, and decades of experimental research
beginning in the 1980s backed up his ideas that the mind can change
the brain. These experiments conclusively showed that the adult
brain has the ability to form new connections as well as grow
entirely new neurons from stem cells in the hippocampus of the
brain. These findings were largely ignored by behaviorist therapists
who were treating patients with obsessive/compulsive disorder.
Schwartz began to
recruit his own OCD patients.
He convinced them that the obsessive thoughts were the result
of the faulty functioning of their brains.
In a four step process of relabeling,
reattributing, refocusing, and
revaluing, he was able to
instill in his patients a method of self-induced therapy.
Over time these procedures
altered brain chemistry and brain function, and produced favorable,
measurable long-term changes in the physical structure
of the brain.
These successes were not limited to OCD patients.
Similar procedures, which rely upon focused attention,
produced equally successful therapies for patients with stroke,
depression, Tourette's syndrome, and autism, as well as possible cures
for tinnitus.
Schwartz
coined the term "mental force," implying that the mind affects matter,
and he states: "Mental force affects the brain by altering the wave
functions of the atoms that make up the brain's ions,
neurotransmitters, and synaptic vesicles" (318).
Seeking a more rigorous quantum mechanical description of his
idea, it seemed only fitting that in the spring of 1988 he would
meet quantum physicist Henry Stapp.
Both he and Stapp had been profoundly influenced by the
writings of William James.
Stapp, like
James, believed that a materialist view of the mind was untenable.
The brain, Stapp insists, is a quantum mechanical system
subject to all the properties of quantum mechanics such as
non-locality, quantum tunneling, and the principle of uncertainty.
In particular,
the ion channels, which are the chemical gateways connecting neurons
across the synaptic gaps, are extremely narrow, providing a large
amount of uncertainty.
As a result a quantum superposition of states is created in
which the neuron both fires and does not fire, both possibilities
existing simultaneously.
At the moment of observation, the superposition of states of the
brain and of the aspects of nature being probed collapse into a
definitive state in which information or meaning is gained.
The dualistic boundary between mind and matter collapses.
The role of the
observer, Stapp suggests, is to decide which questions to pose.
The mind chooses which deterministic thoughts welling up in
the brain should be held in consciousness in what William James
called the “stream of consciousness” and what physicists call the
quantum Zeno effect, named
after the Greek philosopher Zeno of Elea.
The quantum Zeno
effect describes what happens when certain quantum attributes exist
in a superposition of states simultaneously.
An observation will collapse the two states into just one,
and continued closely spaced in time observations will tend to hold
it in that position indefinitely.
This procedure
has been carried out with an ammonia molecule.
The nitrogen atom, which sits either above or below a tripod of
three hydrogen atoms, exists in a superposition of being on top of the
tripod and being below the tripod.
An observation will find it at one place or the other, but
continual observation will freeze it in one location indefinitely.
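The freezing effect is easy to see numerically. A minimal sketch of the standard quantum Zeno calculation (my own, not from the book): if free evolution over some interval would completely flip a two-level system, then n equally spaced measurements leave it in its initial state with probability cos(θ/n)^(2n), which approaches 1 as the measurements become more frequent.

```python
import math

def zeno_survival(n_measurements, total_rotation=math.pi / 2):
    """Probability a two-level system is still found in its initial
    state after n equally spaced measurements, when unobserved
    evolution over the whole interval would rotate it by
    total_rotation (pi/2 = a certain flip). Each measurement
    projects the state back, giving cos(theta)^(2n)."""
    theta = total_rotation / n_measurements
    return math.cos(theta) ** (2 * n_measurements)

for n in (1, 10, 100, 1000):
    print(n, round(zeno_survival(n), 4))
# A single look is certain to find the flip; frequent looks hold
# the state in place, as with the nitrogen atom in ammonia.
```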
Free will allows choice in asking nature which questions to
ask and what thoughts to hold in consciousness.
Through
experiments he carried out in the 1980s, Benjamin Libet came to believe
that free will concerns which deterministic thoughts to act upon.
In his
experiments Libet hooked subjects up to an electroencephalograph
that measures brain activity in the cerebral cortex.
He asked subjects to make a voluntary movement at any time
they wished. Through
careful measurements he discovered that brain activity conducive to
a particular movement began even before the subject decided to act.
Some interpreted this to mean that free will is nothing more
than an illusion, since brain activity preceded the decision to act.
Libet had a different interpretation.
He surmised that free will is really about deciding whether
or not to act upon deterministic processes that occur in the brain.
In each occurrence of the brain’s activity, ample time
remains for subjects to veto the decision to make a movement.
This then is the essence of free will, and this is what gave
the OCD patients the prerogative not to act upon the unwanted
thoughts that welled up in their brains as a result of deterministic
processes.
Schwartz's effort
was a triumph of quantum physics over classical physics, of
cognition over behaviorism, of free will over determinism and of
William James over the ideas of John B. Watson.
This was a masterful and well researched book that documents
many years of Jeffrey Schwartz’s professional career from his early
influence by William James to his groundbreaking work with OCD
patients, and culminating with his fortuitous meeting with physicist
Henry Stapp. This book
had an influential effect on my philosophy of the nature of reality.
Mind in Life
Evan Thompson
Evan
Thompson draws from the disciplines of biology, philosophy,
psychology, and neuroscience to bring about a wide and varied
discussion of one of the most significant philosophical questions of
our time, the explanatory gap: the gap between our subjective
experience and the laws of nature.
“Exactly how are consciousness and subjective experience
related to the brain and the body?”
How is it that our subjective experience of the world sets us
apart from our environment, when our environment and life are
intricately coupled?
Thompson contends that there can be no dualistic separation between
the organizational properties of life and mind.
In fact, Thompson says in the preface: “…the self-organizing features of mind
are an enriched version of the self-organizing features of life.”
To understand mind it is necessary to understand life.
And to these ends, Thompson references the great
philosophical and scientific thinkers past and present in an attempt
to sort out questions of what constitutes life and consciousness,
and he carefully and respectfully points out what he believes could
be the strengths and weaknesses of each hypothesis.
Since it is
necessary to understand life in order to comprehend mind, it isn't
surprising that the philosophical methodologies used to explain
life are similar to those used to explain mind.
From my understanding of Thompson’s work, it seems that there
are two philosophically divergent paths that researchers have
pursued to explain these concepts.
One path, which encompasses the fields of cognitive science,
computation, and genocentrism, is mechanistic, reductive, dualistic,
and materialistic in nature.
The other more meaningful and holistic path favored by
Thompson encompasses principles including dynamism, autonomy,
autopoiesis, and enactive evolution.
The theory of
genocentrism supposes that the organism is merely a vehicle which
the "selfish gene" constructs and controls for purposes of its own
survival. Genocentrism
as a theory of life and evolution is similar to the view of
computationalists in respect to the mind and the brain.
Both incorporate the dualistic notion of hardware vs.
software, matter vs. information and body vs. mind.
Just as the genocentrist views the genes inside the cell as
the software that controls everything from phenotype to evolution,
so the computationalist views the mind as the controlling software
inside the head. The
author summarizes this idea by stating that “The view that life is
essentially a matter of the genes inside the cell nucleus is
homologous to the view that the mind is essentially a matter of a
computer brain inside the head” (173).
The main problem with the genocentrist view is that the
theory presupposes that the apparatus of the cell is already in
place for the DNA and RNA replication process.
DNA and RNA are not self-replicating and are entirely
dependent upon the self-replicating cell to establish an environment
for the process of protein synthesis and reproduction.
There is no one-to-one correspondence between the coding of
the genes and phenotypic expression.
In fact, while it was once believed that it took one gene to
produce one protein, it has since been discovered that one gene can
code for many proteins and the expression of these proteins is
dependent upon quantum processes that allow individual proteins to
fold into as many as a thousand different configurations to carry
out their specific tasks. The multitude of processes that are
carried out by the membrane and various organelles of the cell in
their totality are what provide the milieu for the function of the
genetic material. As
Thompson states: “This
notion of information as something that preexists its own expression
in the cell, and that is not affected by the developmental matrix of
the organism and environment, is a reification that has no
explanatory value. It is
informational idolatry and superstition, not science.” (187)
Thompson details the
shortcomings of genocentrism and espouses the viability of the
enactive approach to explaining mind and life.
The author states that self-organization and natural
selection are not mutually exclusive, but, are in fact,
complementary aspects of a unified process of enactive evolution.
The enactive approach takes into account the intentionality
of life as well as the emergence of mind in the self-organizing
processes that interconnect the brain, body, and the environment.
The expression of life is not merely a matter of information,
but a complementarity of information and meaning, an idea thoroughly
explored in my book, Confronting the Quantum Enigma: Albert, Niels,
and John (2011).
Thompson's
assumptions hinge on the many researchers who have attempted to
define life. The
consensus view is that for something to be alive it must be
"autopoietic".
Autopoiesis is defined as a dynamic, self-organizing,
self-replicating system.
Several researchers including Maturana and Varela contend that all
autopoietic systems are also cognitive systems.
Thompson states that if autopoiesis and cognition are what
distinguishes life from non-life, then the process of understanding
life and understanding mind are continuous.
This
dense book of five hundred pages took me several months to plod
through, but the effort was worth it.
Evan Thompson left no stone unturned in his quest to
understand life and mind in this well-researched masterpiece.
Mind as Mosaic:
The Robot in the Machine
Bruce H. Hinrichs
Mind as Mosaic is a monumental work that cuts through folk psychology and
myth to attain a scientific understanding of the relationship
between the brain and the mind.
Hinrichs's approach is to define terms and principles, and to
address long-held philosophical notions such as dualism and the
mind-body problem.
Descartes,
the founder of modern western philosophy, believed that the mind has a
supernatural origin, influences behavior, and is not necessarily
involved with the physical brain.
Hinrichs says the evidence is overwhelming that this is not
the case. Mental activities, including our emotions, are completely
and solely a product of the biochemical activities of the physical
brain. The mental never influences the physical.
We don't will
anything to happen; rather, our actions are a product of a
deterministic brain. Our
misconception that we will
our actions arises because the actions produced by our brain also
produce a conscious state that previews these actions and, in turn,
elicits the illusion that the conscious state caused us to act.
Mental
activity and will can be altered by physical changes that take place in the brain
by the simple process of learning, but behavior will rarely be
changed by simple suggestion or an individual’s desire to alter
behavior. Therefore,
contrary to the popular notion, there is no
ghost in the machine, but
rather a robot. Our mind
is a slave to our brain.
Still it is not
clear just how the physical brain produces our conscious state.
This is the essence of the mind-body problem.
Some, such as Colin McGinn, believe that we can never
understand consciousness because it is beyond our cognitive reach.
Many thinkers believe that consciousness, by definition, can
never be understood, citing Gödel's mathematical incompleteness
theorem, which in essence states that no independent system can
completely understand itself.
Other
researchers, Hinrichs among them I suspect, believe that there is no
reason why consciousness is beyond the realm of a scientific
understanding. One thing
seems to be certain. The
world we perceive, the movie in our heads that produces our
conscious reality is not the reality perceived by our senses.
In other words our reality is an altered, filtered, product
of the physical world.
Our brain might
in some sense be a computer, in that it calculates, but it is by no
stretch of the imagination anything like the silicon chip computer
on our desks. There is
disagreement, in fact, whether our brains are machines at all.
Physicist Roger Penrose maintains that no machine can ever be
conscious. He believes
that the brain is not computational, but operates on the principle
of nonlocality, and therefore is not a computer at all.
John Searle agrees, believing that machines can mimic the
thinking process by manipulating symbols, but can never understand
in a conscious manner.
There
are many differences that set the brain apart from traditional
computers. First, the brain is a living “machine” in which it is
impossible to separate the software from the hardware.
Unlike in an electronic computer, there is no single location for
memory storage. Instead, memory
is stored amongst the neural connections.
In addition the brain acts holistically.
If one were to remove a chip, or part of a program, a computer
would cease to run, but removing a few neurons from the brain would
have little effect.
Hinrichs refers to the holism of the brain as a mosaic, but I think
he has the wrong metaphor here, for the tiles of a mosaic form the
whole picture by simply adding the pieces together.
This, in essence, is the electronic computer metaphor. A more
accurate metaphor for the brain would be a hologram, in
which each minuscule part contains the entire image.
Hinrichs makes a
convincing argument that our minds are determined by the actions of
the brain, which in turn is a product of evolution.
His thesis raises some very interesting questions regarding
the notion of free will.
Unified Reality Theory:
The Evolution
of Existence into Experience
Steven Kaufman
Niels Bohr said that the work of the physicist is not to
explain how nature "is", but rather to describe physical phenomena
according to the type of measurement performed.
Steven Kaufman, apparently, is not willing to let the
physicist off the hook so easily.
In a bold and ambitious work, Kaufman invents what he calls
the Relational Matrix Model
to not only describe the phenomena of nature, but to give a
description of the “how” and the ”why” of phenomena.
In this all-encompassing ontological model, Kaufman offers a
physical description of the nature of:
Space/time, electromagnetic energy, mass, inertia, matter,
gravity, Planck’s constant, the wave/particle duality, and
consciousness.
It is often
stated that we have no better understanding of such phenomena as:
energy, gravity, matter, and inertia than the ancient Greeks.
Yet Kaufman's
Relational Matrix Model provides an explanation of what these
phenomena actually "are".
For instance, he demonstrates with his model of reality how
distortions of "absolute existence" provide a basis for: why
gravity bends space, why matter exists, why electromagnetic energy
propagates through space at the speed of light, why gravity and
matter are associated with each other, and why Planck's constant
must have its numerical value.
Everything seems to "work out" in Kaufman's physical model of
reality, which manifests as sets of hierarchical complementary
relationships between absolute existence and relative existence.
As Buddhist philosophers have long maintained: Nothing has
independent existence, and Kaufman takes this idea to its ultimate
fulfillment. He
states: “Realities exist
as part of a relationship, and exist as such only within the context
of that relationship.”
Existence is the process by which absolute existence dualizes to
form a relationship with itself.
This reality can only be actualized within the context of an
experience. Kaufman
says: “There exists no
experienced reality in the absence of an experiencer…”
Kaufman’s style
is to repeat over and over again various concepts, each time in a
slightly different manner.
This is by no means a shortcoming, but a necessity to help
the reader understand the complexity of his revolutionary system of
reality. His greatest
achievement in this work, I believe, is his treatment of the nature
of experience, awareness, and consciousness; all of which are
relative complementary aspects, each dependent upon the other for
their existence. He
states: “Consciousness isn’t a product of physical reality, but
reality is a product of consciousness.”
“Unified
Reality Theory” is a grandiose title, but Steven Kaufman delivers
the goods in this unique and extraordinary book.
The Visionary Window
Amit Goswami, Ph.D.
Seventy years after the quantum revolution began, Amit Goswami
peered through the visionary window to behold a truth that Eastern
traditions such as Hinduism, Buddhism, and Taoism have known for
centuries:
Consciousness is the ground of all being.
For him, this revelation synthesized the two disciplines of
science and spirituality.
Traditional western science has treated consciousness as an
epiphenomenon of matter, an emergent property of the brain.
But Goswami insists that this produces a paradox.
If consciousness is necessary for decoherence, the process by
which quantum possibilities become material reality, how is it
possible that consciousness can arise from the very material
consciousness creates?
If however, we
turn this idea on its head, and show that matter is an epiphenomenon
of consciousness, then the paradox disappears.
Matter is within consciousness.
“We don’t have consciousness, rather consciousness has us”
(52). It is only because
of our memory that we have a secondary awareness, which creates the
illusion that consciousness is an individual experience.
A universal
consciousness helps explain some quantum decoherence experiments in
which a conscious observer has been eliminated.
For example, in an experiment called the "Quantum Pinball" (Scientific
American, November 1991), the results showed that just the mere
possibility that knowledge
could be gained was sufficient to collapse the quantum potential
into reality. It appears
that decoherence is dependent only upon knowledge or meaning in and
of itself.
This idea
provides an answer to the question:
What constitutes a measuring device?
Is it any macro device, as Niels Bohr insisted; would
Schrödinger's cat suffice?
Or is human consciousness necessary for decoherence, as
physicist Eugene Wigner and mathematician John von Neumann
claimed? It appears that
universal consciousness, differentiation, and meaning are
responsible for decoherence.
If the
strong anthropic principle, the idea that
the universe was created for us and the species to follow, makes you
uncomfortable, then the
concept of universal consciousness relieves us of this
anthropocentric burden.
The universe is imbued with consciousness and meaning, and there is
a constant self-referential communication between consciousness and
matter in an endless creative drive toward greater and greater
complexity and evolution, including the optimal measuring device—the
human brain.
In the last portion of
the book Amit Goswami includes chapters on subjects such as
reincarnation, angels, and quantum healing.
If this makes you queasy, rest assured Goswami is a physicist
to the core. This book
was well worth the read, and a very good follow up to his book “The
Self-Aware Universe”.
(see my review on Amazon).
The Monkey and the Tetrahedron
David M. Jinks
I have reviewed a
dozen or more books of science in this forum,
(Amazon Customer Reviews) many of which I considered to be
outstanding in clarity, scope, and profundity, but I have remained
very stingy on granting the highest of exaltations on these works
until now. The Monkey
and the Tetrahedron by David M. Jinks warrants my very first
5-star rating.
In this momentous
work Jinks delves into topics ranging from the enigmatic Monuments
of Mars, the UFO phenomenon, cold fusion, archeology, and biology
to a science I had never heard of called archeocryptology, and
brings them all together in a tightly knit scheme.
Attempting to synthesize what Jinks has achieved in this work
would only be an injustice.
It's a book that must be read in its entirety.
There are enough unturned stones here that it could entice
you into pursuing your own investigations and scientific
experimentation. It did
me! What more can we ask
of a book than to have it compel us to pick up the torch of
scientific inquiry?
See Book Notes Below:
Page
15. Richard Hoagland,
director of the “Enterprise Mission”, has accumulated a body of
evidence which suggests that hyperdimensional physics is the driving
force in the universe.
Note:
The author says that he is not an expert in any one field in
particular, and says that there are no degrees for the subject
matter he discusses.
This should be exciting for any curious non-scientist, because
any one of us could conceivably advance the study of any of these
fields.
Page
17. The author covers
all of the Enigmatic Monuments of Mars discussed in Hoagland’s book
by the same title.
Page
20. The five main
pyramid points of the “City” form a pentagon.
Page
29. Horace Crater
reported that every internal angle created by lines drawn between
the Pentad of mounds is a simple function of 90-degree and
19.5-degree angles. The odds
of this happening by chance are one in 200 million.
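The recurring 19.5-degree figure has a simple geometric source, which can be checked directly (my own calculation, assuming the usual "tetrahedral latitude" reading): with one vertex of a sphere-inscribed tetrahedron at a pole, the other three vertices sit at about 19.47 degrees latitude in the opposite hemisphere.

```python
import math

# Angle between any two vertices of a regular tetrahedron, as seen
# from the center of its circumscribing sphere:
vertex_angle = math.degrees(math.acos(-1 / 3))  # ~109.47 degrees

# With one vertex at the north pole, the other three lie at a polar
# angle of ~109.47 degrees, i.e. at latitude ~19.47 degrees south:
latitude = vertex_angle - 90
print(round(vertex_angle, 2), round(latitude, 2))  # 109.47 19.47
```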
Page
30. Researchers closely involved in the Cydonia studies echoed
Hoagland’s conclusion that the structures on Mars are probably
intelligently designed.
Dr. Eugene Mallove,
formerly of TASC and MIT Lincoln Laboratory, concluded that “The
evidence is probably irrefutable that an extraterrestrial
civilization came here millions of years ago and established itself
on Mars.”
Pages
34 and 35 demonstrate government conspiracies to cover up the
investigations of the monuments.
Page
57. The messages of
Cydonia
Page
58. The
mathematical relationships of the monuments on Mars most often
include the irrational numbers and ratios of square root of 2, 3, 5,
and the relationships of e and pi.
The square roots of 2, 3, and 5 are the basis for all
volumetric forms.
Architects of antiquity believed that geometry and certain
mathematical relationships were crucial building blocks of the
Cosmos, and that architecture should reflect these symmetries.
e/pi =
.865 = the ratio of the surface area of a sphere to the surface area
of its inscribed tetrahedron.
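The arithmetic of the ratio itself is easy to verify (the geometric identification with the inscribed tetrahedron is the book's claim, which I have not checked):

```python
import math

# e divided by pi, to three decimal places:
ratio = math.e / math.pi
print(round(ratio, 3))  # 0.865
```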
One of
the most profound messages seems to be the relationship of a
tetrahedron inscribed within a sphere.
Each
of the four vertices of the tetrahedron forms 60-degree angles with
the other three vertices, a three-dimensional projection of an
equilateral triangle.
Page
63. Hoagland found that
if one vertex of an imaginary tetrahedron inscribed in the Mars
sphere is placed at Olympus Mons, which is located at 19.5
degrees north latitude, another vertex falls at the longitude of
Cydonia.
Page
68. Hoagland entertains
the idea that higher dimensions interact with lower dimensions
creating excess energy.
The more dimensions—the more information—the more energy.
Page
71. It’s possible
that the 4th dimension creates our three dimensional
world at the quantum level.
The anomaly posed by
superconductivity and superfluidity might not be explained by
altered electron energy states, but rather by the tapping of
hyperdimensional energy (the zero-point energy)
or New Hydrogen Energy (NHE), that is, cold fusion.
Page
79. A device called the
Patterson fuel cell, which creates extra energy, has been patented.
Patents have been quashed in this country, but in countries
such as Japan cold fusion is treated as another branch of academic
physics.
Page
96. It is suggested that
materials such as palladium, platinum, nickel, and carbon, which
have tetrahedral geometry, are responsible for the NHE effect.
In addition it is worth noting that many cold fusion
reactions are produced using superconducting materials.
Most superconducting materials exhibit tetrahedral geometry
in their atomic configurations.
Page
97. A litany of
descriptions of NHE and gravity altering devices on these pages.
Page
99. Alchemy may be
possible without nuclear reactions if cold fusion is valid.
For example, in some cold-fusion reactions rubidium was
transmuted to strontium.
Also potassium to calcium, and lead to gold and other metals.
Page
101. The evidence for
transmutations in experiments is overwhelming though the process is
not understood.
Page
102. Scientists are still
puzzled as to why Mars was warmer in the distant past.
Theories such as the greenhouse effects of erupting volcanoes
don't account for this effect.
Rather, Hoagland proposes that the Sun is dimming rather than
getting brighter: as the Sun spins, its corona
extends into space, slowing down its angular momentum and as a
result causing it to translate less energy from 4D to 3D.
Fossil records show a warmer past.
(This contradicts the
notion I had that the Sun is 50 percent more luminous now than in
Cambrian times.)
Since
we are carbon-based tetrahedral organisms, we might be
hyper-dimensional machines transferring 4D into 3D energy called
consciousness.
Page
104. The study of
astrology might have originally been the biological response to
hyperdimensional gating driven by the largest conductors: the
celestial bodies.
Page
105. Time can be thought
of as the human response to the gradual, yet continuous, degradation
of information transfer between the 4D and 3D due to a partially
closed hyper-dimensional gate.
Time would cease to exist in a perfect system of no
resistance.
Page
117. Carl Munck's
research shows beyond question that several hundred of the thousands
of earthen and rock buildings, temples, and mounds dotting the
planet are related to what he calls, simply, "the code": Munck's Code.
He found mathematical parallels between sacred Earth
monuments, and Cydonia.
Page
120. Discussion of the work of Stan
Tenen, who has spent 25 years studying mathematical codings
and geometric metaphors in ancient sacred texts.
Page
140. According to Braden
the electromagnetic field of the earth is too strong to allow
hyper-dimensional flow into our 3D world.
The Schumann Cavity
Resonance is a measure of the quasi-standing electromagnetic waves
in the Earth’s ionosphere.
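For reference, the fundamental Schumann frequency can be estimated from the cavity's size alone (my own back-of-the-envelope sketch, not from the book): a wave circling the Earth at the speed of light gives roughly 7.5 Hz, close to the observed fundamental of about 7.83 Hz.

```python
import math

c = 299_792_458.0        # speed of light, m/s
earth_radius = 6.371e6   # mean Earth radius, m

# Fundamental mode of an ideal Earth-ionosphere cavity: one wavelength
# fitting around the planet's circumference. The real, lossy cavity
# resonates slightly higher, at about 7.83 Hz.
f1 = c / (2 * math.pi * earth_radius)
print(round(f1, 2))  # ~7.49 Hz
```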
Page
142. The Fibonacci
Series:
1, 1, 2, 3, 5, 8, 13, 21, etc.: each number is derived from the
addition of the two previous
numbers. This series
describes many natural cycles including plant growth, breeding
patterns of rabbits, the reflection of light through mirrors, the
rhythmic laws of losses and gains in the radiation of energy.
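The series in the note can be generated in a few lines (my sketch, not the author's):

```python
def fibonacci(n):
    """First n terms of the Fibonacci series: each number is the
    sum of the two previous numbers."""
    terms = []
    a, b = 1, 1
    for _ in range(n):
        terms.append(a)
        a, b = b, a + b
    return terms

print(fibonacci(8))  # [1, 1, 2, 3, 5, 8, 13, 21]
```

Ratios of successive terms converge on the golden ratio (about 1.618), which is why the series turns up in so many natural growth patterns.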
Page
143. The strength of the
Earth's magnetic field is thought to depend on the rotation rate of
the planet's molten core.
It’s
been postulated that a pole reversal occurs when the earth’s
rotation gradually comes to a halt and begins to rotate in the
opposite direction.
There is some evidence to suggest that a complete pole reversal took
place 11,000 to 13,000 years ago.
Some predictions estimate that the next reversal could occur
in the year 2030. (This
doesn’t seem feasible.
What would cause the earth to stop and spin in the other
direction?).
Page
163. Five Platonic
solids shown. Fully
enclosed volumes having sides of equal length and interior angles of
identical size.
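The five solids in the note, with their vertex, edge, and face counts (my own tabulation); each satisfies Euler's polyhedron formula V - E + F = 2:

```python
# Vertex, edge, and face counts of the five Platonic solids.
platonic_solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (v, e, f) in platonic_solids.items():
    # Euler's formula holds for every convex polyhedron.
    assert v - e + f == 2
    print(f"{name}: V={v} E={e} F={f}")
```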
Page
170. Cairo, Egypt
was originally named El-Kahir, which means "Mars".
Page
171. Two scientists,
Richard Thompson and Michael Cremo, have compiled an impressive
volume full of evidence that supports the idea that modern humans
existed long before modern theories of evolution allow.
Forbidden Archeology:
The Hidden History of the Human Race
concludes that the evidence shows
that anatomically modern humans coexisted with other primates for
tens of millions of years.
Richard Leakey said it
does not deserve to be taken seriously by anyone other than a fool.
Michael Denton's book
Evolution: A Theory in Crisis
and Phillip Johnson's book
Darwin on Trial point out that transitional species
have not been found.
There seems to be little in the way of support for the gradual
evolution of species.
It was predicted that microbiology would eventually provide crucial evidence of the millions of chance mutations required by Darwinism to produce evolutionary changes. Instead, recent breakthroughs in the genetic sciences strongly refute the core assumptions of evolution by natural selection. For example, at the molecular level amphibians, traditionally thought to be intermediate between fish and other terrestrial vertebrates, are as far from fish as any group of reptiles or mammals.
Page 175. The scientific credo that extraordinary claims need extraordinary proof is simply a filter to reject unwanted ideas. Any scientific claim should be subject to the same standard of proof.
Page 177. Cremo and Thompson’s exhaustive research reveals hundreds of artifact discoveries that upset accepted theories. A list follows.
Page 185. The lack of evidence for advanced civilizations predating the Egyptians and Sumerians may be due to tremendous geological changes, such as glaciation, volcanism, and other violent activity, as well as small populations.
Page 189. A list of statistics of the Great Pyramid.
Page 192. The evidence linking the three pharaohs to the construction of the pyramids is dubious. The Great Pyramid is mentioned nowhere in any ancient text or painting.
Page 196. Evidence suggests that the Sphinx may be 7,000 to 10,000 years old because it shows signs of weathering by water.
Page 201. According to Bauval and co-author Adrian Gilbert in The Orion Mystery, the great pyramids of Giza were not glorified burial plots, but were part of an elaborate “star map” designed to emulate a region of the sky: Orion’s Belt. Orion represented the most powerful of gods, Osiris. The Nile River represented the Milky Way.
Page 203. Bauval found that the entire Giza Necropolis was a giant astronomical clock designed to draw attention to the epoch of 10,450 BC, a time when, according to the ancient Egyptians, humans consorted with the gods. The geometries of the sky and of Giza mirrored each other 13,000 years ago!
Page 206. The vernal equinox, possibly not coincidentally, falls in Leo from 10,970 to 8810 BC, the newly proposed date of the construction of the Sphinx. The precessional cycle is 25,920 years; one half of a cycle is 12,960 years. We are near the end of the half-cycle, or “last time.” These numbers and their divisors (5, 6, 12, 36, 72, 144, 360, 432, etc.) are special numbers which, in ancient texts, myths, and monuments, refer to the passing of the Earth through the ages.
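As a quick check (my own sketch, not the book’s), each of the listed numbers does divide the 25,920-year cycle evenly:

```python
PRECESSIONAL_CYCLE = 25920  # years for one full precession of the equinoxes
HALF_CYCLE = PRECESSIONAL_CYCLE // 2  # 12,960 years, the "last time" half-cycle

# The "special" numbers said to appear in ancient texts and monuments
special = [5, 6, 12, 36, 72, 144, 360, 432]

for n in special:
    assert PRECESSIONAL_CYCLE % n == 0  # every one divides the cycle evenly
    print(f"{PRECESSIONAL_CYCLE} / {n} = {PRECESSIONAL_CYCLE // n}")
```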
Page 208. A list of the precessional numbers. The ancients may have encoded this information into their measurements and timekeeping for future generations.
Page 211. The mystery of the Dogon culture is presented. Robert K. G. Temple, in his book The Sirius Mystery, publicized the puzzle of the Dogon.
Page 213. Carl Munck, who has joined Hoagland’s Mars Mission team, spent fifteen years as an archaeocryptologist studying the mathematically encoded monuments of the world. Specifically, Munck investigated the relationships between anonymous ancient sites around the world that appear to link them together through precessional and tetrahedral numbers. Munck called these patterns the “Code.” Munck’s Code is the picture of simplicity.
In order to determine whether a structure is included in
Munck’s global grid, it is necessary to note the most obvious
feature displayed by the particular monument.
For example, circular or spherical structures suggest 360 and/or pi; square architecture suggests right angles.
Certain monuments that prominently display fundamental
features—the number of stairs, staircases, terraces or columns, for
example—suggest certain numbers that should be used in determining
how (or if) the structure belongs in Munck’s grid.
Three other basic measurements are needed: the structure’s Grid Latitude, its Grid Longitude, and its Grid Point. Munck defines a structure’s Grid Latitude as the product of its degrees, minutes, and seconds of latitude. A structure’s Grid Longitude is the product of its degrees, minutes, and seconds of longitude. A structure’s Grid Point value is its Grid Longitude divided by its Grid Latitude.
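Munck’s three measurements reduce to simple arithmetic. A sketch of the computation (the coordinates below are hypothetical, chosen only to illustrate; they are not Munck’s published figures):

```python
def grid_value(degrees, minutes, seconds):
    """Munck's Grid Latitude or Grid Longitude: the product of
    the degrees, minutes, and seconds of the coordinate."""
    return degrees * minutes * seconds

def grid_point(lon_dms, lat_dms):
    """Munck's Grid Point: Grid Longitude divided by Grid Latitude."""
    return grid_value(*lon_dms) / grid_value(*lat_dms)

# Hypothetical monument at 29 deg 58' 45" latitude, 31 deg 9' 0.5" longitude
# (illustrative values only, measured from a Giza-based prime meridian)
lat = (29, 58, 45)
lon = (31, 9, 0.5)
print(grid_point(lon, lat))
```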
There’s one other important and essential point: the meaningful global grid arises only when the Prime Meridian is positioned precisely within the Giza Plateau, specifically through the apex of the Great Pyramid! Giza is at the exact center of Earth’s habitable landmasses. The east-west axis corresponds to the longest land parallel across the Earth, passing through Africa, Asia, and America; the longest land meridian on Earth passes through Asia, Africa, Europe, and Antarctica.
Page 214. The perimeter-to-height ratio of the Great Pyramid is pi. According to Munck, the Great Pyramid was built as a 3D model of pi.
Page 215. The Code conclusions:
The ancient designers of monuments knew where the equator was and used the Great Pyramid, at the center of the planet’s landmass, as the Prime Meridian.
They utilized the 12-inch foot, the 5280-foot mile, the 360-degree circle, and base-ten mathematics.
They knew the dimensions of the Earth and Moon and, probably, the distances from the Earth to the Sun, Moon, and Mars.
They knew about both the precession of the equinoxes and the structures of Cydonia.
Page 228. The tetrahedron generates all the precessional numbers.
Page
231. The creators of the
Code appear to have known precisely what they were doing when they
locked in the hyper-dimensional numbers according to an astronomical
clock keyed to the Earth’s wobble.
Page
232. Discussion of
Atlantis and the lithosphere pole shift theory.
Page 243. Description of crop circles. The samples of grain in crop circles have the following anomalous characteristics:
1. An abnormal enlargement of cell wall pits in bract tissue (the membrane surrounding the developing embryo).
2. The presence of “expulsion” cavities (holes at the plant stem nodes blown from the inside out).
3. Grossly enlarged plant stem nodes, in conjunction with a marked bending of the stems, from as little as 10 degrees to an extreme of 90 degrees or more.
4. Malformed seed-heads, in which in some cases the seeds are missing or, more often, severely stunted.
5. Significant changes in germination or seedling growth rates.
One particular crop circle at Barbury Castle was profoundly meaningful with regard to the message of Cydonia, for it encompasses subtleties of form which could only be known to someone well versed in hyper-dimensional physics and sacred geometry. The 0.865 ratio (e/pi), and the 19.5-degree and 49.6-degree buttress angles of the D&M pyramid, as well as the Stonehenge Avenue angles, are each plainly communicated in the Barbury Castle formation.
Page
281- 388. The UFO
phenomenon.
Page
389. Lunar artifacts.
Some of the moon’s mysteries have been solved thanks to
images leaked from the Clementine probe.
1) A paradoxical darkening and lightening of the lunar surface due to the “Venetian blind” effect of the dome.
2) Flashes of light reported for centuries could be due to collapsing glass structures, and may also be responsible for the release of the unexplained water vapor emissions detected by NASA in 1971.
3) Excessive “ringing” of the lunar surface produced by moonquakes and bombardments may be due to the vibration of glass structures.
The Quantum Mind and the Meaning of Life
Evan Harris Walker
In this insightful work,
The Physics of Consciousness, Evan Harris Walker concludes that
“consciousness is
reality”.
Many philosophers and scientists have suggested that consciousness can never be explained because of its subjective nature, but Walker disagrees.
Methodically
quantifying the various processes involved in information exchange
in the brain in terms of “bits”, Walker is able to extrapolate
approximations of the speed and capacities of the information being
manipulated in neural activities.
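The flavor of Walker’s bookkeeping can be suggested with a toy estimate. The numbers below are illustrative placeholders (except the synapse count, which Walker cites), not Walker’s actual calculation:

```python
# Toy information-rate estimate in the spirit of Walker's approach.
SYNAPSES = 24e12        # 24 trillion synapses (figure cited by Walker)
FIRING_RATE_HZ = 10.0   # assumed average synaptic events per second (placeholder)
BITS_PER_EVENT = 1.0    # assumed bits carried per synaptic event (placeholder)

bits_per_second = SYNAPSES * FIRING_RATE_HZ * BITS_PER_EVENT
print(f"~{bits_per_second:.1e} bits/s of raw synaptic information flow")
```

With these placeholder rates, the raw flow comes out around 10^14 bits per second; Walker’s own figures would of course differ.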
With this information in hand, Walker proceeds to treat mind/brain activities in Cartesian fashion, maintaining that in order to fully understand the relationship between the part and the whole, a dualistic notion of mind and matter must be entertained. The dualities of wave vs. particle and observer vs. observed in the strange quantum world of state-vector collapse give us a clue as to what is transpiring in the brain.
Walker contends
that the 24 trillion synapses in the brain delineate the transition
point between mind and matter.
Most have rejected quantum effects across the synapses
because of the relatively large distances and energy it takes to
make the leap, but these individuals have not taken into account
several known facets of quantum physics such as the principles of
tunneling, indistinguishability, and the state vector collapse.
Walker says that
the mind itself consists of two parts—consciousness and
will.
Consciousness does not have to be a part of a living or
thinking entity, but is a factor in all quantum events in nature.
The Will, which defines
what we are, is the catalyst that collapses the wave function into
one discrete, non-local event from a myriad of possibilities.
According to the principles established by Bell’s
non-locality theorem, this state vector collapse travels out into
the universe at large, and always maintains a link to our individual
and collective minds. As
a bonus, Walker has answered another mystery as deep as
consciousness itself—the nature of time.
Time is real and asymmetrical.
It is the irreversibility of the state vector collapse that
gives time its arrow.
This is without
exception the most satisfying and believable description of the
nature of consciousness I have read to date.
This work is a bit difficult at times for the non-scientist,
but in the end patience pays off.
With great effort he has made the principles of “indistinguishability” and “non-locality” understandable, and he has brought us to a definitive link between mind, time, and nature.
Even so, I found
his distinctions between consciousness, thought, and “will” a bit
confusing if not contradictory at times.
But he has something very important to say about the nature
of mind, and as a result of his thorough understanding of quantum
theory, he has come closer than any previous scholars to finally
nailing down the nature of consciousness.
Running parallel to his
main thesis is a delightful reminiscence of his high school lover
who died after graduation in the summer of 1952.
Excerpts from his diary and remembrances of the lost love of days gone by make Walker’s work both a scholarly and an endearing tale.
Brian Greene
This was an excellently written, comprehensive treatment of an expansive subject: the nature of reality.
Following are some key points in this book.
Page 36.
Newton thought that motion was relative to space, but Mach
thought that space was not a real concept.
Rather he believed that motion is relative to all matter in
the universe. The problem of the “spinning bucket of water,” in which centrifugal force causes the water to become concave when the bucket is spinning, has had various explanations.
Mach claims that the water would not become concave if there were no
matter in the universe.
He said that there would be no feeling of spinning, and in fact
spinning would have no meaning in an empty universe.
He said, “You feel acceleration only when you accelerate
relative to the average distribution of other material inhabiting
the cosmos.”
Page 51. Einstein probably should have stuck with his original name for relativity theory. He initially wanted to call his new theory the “theory of invariance,” but of course it became known as relativity theory.
This theory states that
the laws of nature are the same for every frame of reference.
It is only when observers in different frames of reference compare their observations that the peculiarities arise.
For example, space/time is absolute, and this is what
provides the reference for the acceleration and the spinning bucket
problem. Absolute
time does not exist, and absolute space does not exist, but absolute
space/time does exist according to Einstein.
An object that is not accelerating will trace a straight line through space/time, but an accelerating object will trace a curved line.
Page
67. Since gravity and
acceleration are equivalent, if you feel gravity’s influence you
must be accelerating. A
person falling down an elevator shaft is not accelerating, but is
weightless. It is the
building and the earth that are accelerating upward.
We are accelerating as we sit in our chairs here on Earth.
Page
99. The Einstein-Podolsky-Rosen paper was designed to demonstrate
that an exact position and momentum of subatomic particles could be
known.
Page
112. John Stewart
Bell proved EPR was incorrect.
Later experiments verified that the measurement of entangled
particles separated in space/time is not autonomous.
The measurement of one particle can have an instantaneous
effect on its entangled partner despite its spatial separation in
space/time.
Page
121. Einstein believed
that objects in the universe possess definite values.
Attributes do not hang in limbo waiting for an experimenter
to measure them. Most
physicists say that Einstein was wrong on this.
Page
141. A particular moment
can no more change in time than a particular location can move in
space. If the location
were to move it would be a different location in space; if a moment
in time were to change it would be a different moment in time.
Page
157. The second law of
thermodynamics seems to have given us an arrow of time.
More specifically it is the collapse of the wave function
that gives us the arrow of time.
This can only happen in one direction.
Page
171. The big bang
started the universe off in a state of extraordinary low entropy,
and that state appears to be the source of the order we currently
see.
Page 173. The more squeezed, dense, and massive the clumping of mass in the universe, the more overall entropy. Black holes are the extreme in entropy; they hold on to all the entropy they produce. The universe started out in a state of very low entropy, a highly improbable state.
Page 181. Richard Feynman showed that the sum over histories of a particle’s possible paths toward the detector gives us the probability of the resulting outcome.
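The idea can be illustrated with a toy two-path sum: assign each path a unit amplitude with phase 2πL/λ, add the amplitudes, and square the magnitude. This is my sketch with made-up path lengths, not Feynman’s or Greene’s numbers:

```python
import cmath
import math

def detection_probability(path_lengths, wavelength):
    """Toy sum over histories: add a unit complex amplitude
    exp(i * 2*pi * L / wavelength) for each path, then square
    the magnitude of the total."""
    amplitude = sum(cmath.exp(2j * math.pi * L / wavelength)
                    for L in path_lengths)
    return abs(amplitude) ** 2

wavelength = 1.0
# Paths a whole wavelength apart add constructively
print(detection_probability([10.0, 11.0], wavelength))  # ≈ 4.0
# Paths half a wavelength apart cancel destructively
print(detection_probability([10.0, 10.5], wavelength))  # ≈ 0.0
```

The interference pattern of the two-slit experiment falls out of exactly this kind of bookkeeping, with one path through each slit.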
Page
190. In quantum
mechanics it is not possible to erase the past—only the
potentialities can be erased.
In a gravitational lens, for example, a measurement taken now doesn’t affect which route the photon took billions of years ago, because the photons have been in a quantum hybrid state of possibilities.
Observation today causes one of the strands of quantum history to
gain prominence.
Page
208. The most likely
solution to the quantum measurement problem is decoherence.
No conscious observer is needed, because decoherence is a
result of environmental observation, by other photons, neutrinos,
etc.
Page 241. There are but three possibilities for the shape of the universe: positive curvature, negative curvature, or flat (zero curvature).
Page 254. Photons are the elementary particles of electromagnetic fields and can be thought of as the microscopic transmitters of the electromagnetic force. And just as electromagnetic fields are composed of photons, and gravitational fields are believed to be composed of gravitons, the strong and weak fields also have particulate constituents. The particles of the strong force are called gluons, and those of the weak force are called the W and Z particles. Fields respond to temperature much as ordinary matter does: the higher the temperature, the more ferociously the value of a field fluctuates.
Page 261. It is the Higgs field that scientists believe gives objects inertia. The Higgs field interacts with the quarks and electrons in objects and causes resistance to changes in their motion. Unlike the presupposed ether, the Higgs field does not interact with light, nor with objects in constant motion.
Page
272. A common
misconception is that the big bang provides a theory of cosmic
origins. It doesn’t.
The big bang is a theory explaining what happened a moment
after the beginning.
It’s quite possible that gravity acted as a repulsive force to cause
the universe to come into being.
Page
274. Einstein introduced
the cosmological constant to explain why the universe was static.
This he considered to be the biggest blunder of his life.
This cosmological constant gave gravity a repulsive force to explain why everything wasn’t pulled together.
Page
275. It’s not just
the mass of objects that contribute to the strength of the
gravitational field.
Energy and pressure also contribute.
Page 276. If you heat up a cube of gold, you increase its weight; you are adding energy to the mass, and energy equals mass.
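The effect follows from E = mc². A rough numerical sketch (the specific heat of gold, about 129 J/(kg·K), is an approximate textbook figure):

```python
C_LIGHT = 299_792_458.0      # speed of light in m/s
SPECIFIC_HEAT_GOLD = 129.0   # J/(kg*K), approximate

def mass_gain_from_heating(mass_kg, delta_t_kelvin):
    """Mass equivalent of the heat energy added, via E = m c^2."""
    heat_joules = mass_kg * SPECIFIC_HEAT_GOLD * delta_t_kelvin
    return heat_joules / C_LIGHT ** 2

# Heating 1 kg of gold by 10 K adds about 1290 J, i.e. roughly 1.4e-14 kg
print(mass_gain_from_heating(1.0, 10.0))
```

The increase is real but some fourteen orders of magnitude smaller than the cube’s mass, which is why we never notice it.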
Einstein showed
that the gravitational force depends not only on mass, and not only
on energy such as heat, but also on the pressures that may be
exerted.
Pressure, like mass and energy, is a source of gravity. And remarkably, if the pressure in a region is negative, it contributes a gravitational push to the gravitational field.
Einstein found that he
could precisely balance the usual attractive gravitational force
with the newly discovered repulsive gravitational force to produce a
static universe.
Page 284. Guth’s
inflationary cosmology theory is based on the Higgs field that
caused a huge negative pressure that drove every region of
space outward.
Page 292. In
inflationary cosmology, space stretched by such a colossal factor
that the observable universe is but a small patch in a gigantic
cosmos.
Page 299. Ever
since the universe was 7 billion years old, its expansion rate has
not been decelerating, but instead the expansion rate has sped up.
Page 300. Ordinary matter (protons, neutrons, and electrons) constitutes only 5 percent of the mass/energy of the universe; the rest is composed of dark matter and a mysterious dark energy.
Page
311. Gravity depletes
the energy of fast-moving particles of matter and radiation as the
universe expands, while an inflation field gains energy from
gravity. By mining
gravity for its energy the inflationary field (the Higgs field) can
produce all the ordinary matter and radiation in the universe from a
tiny twenty-pound speck of inflationary space.
Page
321. As the universe
expanded during inflation, the inflation field relinquished its
hugely amplified energy by filling space nearly uniformly with
matter and radiation.
Page
329. More than anything
else it is the Uncertainty Principle that encapsulates the break
with classical physics.
Quantum uncertainty applies to fields as well as particles.
Page 396. Newton’s inverse square law for gravity is thus a reflection of a geometrical property of spheres in three-dimensional space.
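The geometric point is that a field spreading from a point source is diluted over a sphere’s surface area, 4πr². A short sketch:

```python
import math

def intensity(source_strength, r):
    """Point-source intensity: the source spread over a sphere of area 4*pi*r^2."""
    return source_strength / (4 * math.pi * r ** 2)

# Doubling the distance spreads the same flux over four times the area,
# so the intensity drops to one quarter: the inverse square law.
print(intensity(100.0, 1.0) / intensity(100.0, 2.0))  # → 4.0
```

In a space of a different dimensionality the surface area would grow at a different rate, and the force law would change accordingly, which is Greene’s point.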
Page 452. Time: each moment exists, each event exists, just as each point in space exists. Moments don’t momentarily come to life when they are illuminated by the spotlight of an observer’s present; instead, once illuminated, always illuminated. Moments don’t change. Moments are. All the events making up the universe are on view; they are all there, static and unchanging, but different observers might disagree about what happened first.
Page
458. Is time travel to
the past possible? Most
physicists say NO!
Lee Nichol
I’ve read several books by and about physicist David Bohm, including Wholeness and the Implicate Order, but it wasn’t until I read The Essential David Bohm that I began to comprehend Bohm’s philosophy.
If I were to attempt a one-line summary of his philosophy it
would be that nature is an undivided whole. This is not a new idea
as it has its roots in monistic traditions, but it has always been
difficult for me to understand just how we, as individual observers,
fit into the wholeness of the universe.
How is it, as Einstein himself wondered, that we are able to
make the universe comprehensible by doing objective science if we
are a part of what we are studying?
And if matter and energy scurry around in a cold, purposeless
fashion as most modern orthodox physicists proclaim, why do we, as
one of the most complex inhabitants of this universe, seem to aspire
to creativity and purpose?
The answer
according to David Bohm, is that the universe is organized at all
levels of complexity according to “meaning”, and this includes life
itself. If “meaning” is
enfolded within all matter and energy, in what Bohm calls the
implicate order, then there is no separation of mind and matter.
Nor, can objectivity and subjectivity be discrete. If the
entire universe is organized according to meaning, then the universe
is contextual and therefore subjective at all levels.
Objectivity becomes a false endeavor.
Yet, it
is undeniable that “objective” science has taken us a long way in
the twentieth century, from an understanding of the workings of the
atom, to the marvels of DNA. This approach has its limits, however.
A growing realization in many scientific disciplines is that
the ultimate building blocks of nature are unattainable.
The philosophy of reductionism has by definition a major
flaw. The part can never
abstract the whole.
Therefore, a new holistic approach must be adapted in which meaning
is considered at all levels of complexity.
How is it possible for us to observe nature if all of nature is subjective? Bohm’s explanation is that “meaning” is self-referential, allowing consciousness to observe itself. And contrary to what most of us intuitively believe, the process of observation is not passive. Similar to the scientific method, experimentation between our brain and the environment is constantly taking place in a process of active “attunement.”
Such
attunement is a skill that requires practice. For example, visual
observation requires a subtle unconscious movement of the eyeball
itself. When subjects are placed in sensory deprivation, where practice of these skills ceases, perception can completely break down. Taking a Kantian viewpoint, Bohm says that observation allows us only an abstraction of the universe filtered through our senses.
We are not creating reality through this continual interplay
between our nervous systems and nature.
We are only creating an “inner show”, which allows us a
subjective insight into the universe.
Therefore, an abstract comprehension of the universe is
possible due to the constant interplay between nature and ourselves,
and because the universe is subjective and contextual.
The idea
of a contextual universe is not such a radical step for two reasons.
First, Niels Bohr himself made the position and momentum of
particles context dependent by bringing in the measuring device (the
observer) as the determining factor for the outcome of the
experiment. An
“observation” is required in quantum experiments to make the
particle determinate.
And second, Bohr realized that there could be no division between
the classical and quantum worlds.
In fact, in his later years Bohr proclaimed: “There is no
quantum world.” And yet
there seems to be a dichotomy.
Classical physics is supposed to be causal, objective, and
deterministic, while quantum physics is non-local, acausal, and
indeterministic.
For
example, two similar processes exemplify one of the most mysterious
aspects of quantum theory: The acausal jump of the electron from one
orbit to another around the nucleus of an atom, and the acausal
process of nuclear decay.
Both processes are indeterministic in that there is no cause for an individual electron to make a jump, nor for an unstable atom to decay. Only a statistical average can be determined.
But if, as Bohm claims, the universe is context driven then
“meaning” can be the unifying factor of the quantum and classical
worlds. For if we
realize that the original source of all cause and effect is
“meaning”, then both classical and quantum physics would be
contextual, and therefore no division would exist.
One of David Bohm’s colleagues once said of Bohm’s ideas: “Some are brilliant, many are obscure, and some are just plain nonsense.” In reading this book, I discovered much of the brilliance as well as some of the obscurity of David Bohm.
And though I found his description of physics’ famous double-slit experiment, as well as his “pilot wave” idea, unsatisfying, I would not want to proclaim them “nonsense.”
This book was truly mind-expanding, and I highly recommend it
for the contemplative individual.
The Next Revolution in Physics
Julian Barbour
Julian Barbour’s book, The End of Time: The Next Revolution in Physics, is truly revolutionary. Barbour’s simple idea is that time does not exist in nature.
But since time is explicitly entwined with motion, Barbour
faces a much more monumental task—banishing the very motion
responsible for the abstract concept of time.
All of this he does quite convincingly but, as yet, his world
picture is not supported by mathematical theory or experiment.
Speculation that time and motion are illusory is not a recent proposition. One of the first systematic treatments of motion was presented by the rationalist Greek philosopher Zeno (550-451 BC). His famous paradoxes attempted to demonstrate that an arrow cannot fly and motion cannot exist (Quantum Reality: A New Philosophical Perspective, p. 4).
Twenty-five hundred years later, Barbour would agree with the results, but not the premises.
Zeno’s vision hinged on the concepts of objective time and
space, both of which have been discredited since the advent of
Relativity.
Objective or absolute space and time remained at the heart of scientific theory from the time of the Greek philosophers, through the Copernican revolution, and up through the Newtonian era. But the idea of absolute motion became suspect well before the arrival of Newton as a result of observations by Galileo. He realized, as he sat in the cabin of a sailing ship, that the motion of objects, birds, and insects was unaffected by the constant, steady motion of the ship through the water. This was the first formal realization that motion was relative. But what was it relative to? Most thought that the motion of objects was relative to the fixed stars. When it was found that the stars moved also, Newton proposed that objects moved relative to absolute space.
The next great leap came with Mach, who suggested that motion must be relative to the sum total of all other matter in the universe.
Mach states: “It
is utterly beyond our power to measure the changes of things by
time. Quite the contrary,
time is an abstraction, at which we arrive by means of the changes
of things.” And this is
our understanding of time today.
Time is motion and nothing more.
This is more or less the crux of Julian Barbour’s paradigm.
To
understand his idea, imagine that the entire universe is made up of
just three objects. You
can imagine, if you’d like, that you are on one of these floating in
the vastness of “space”.
(Barbour denies space thus the quotes.)
The three objects form a triangle. You see the other two objects appear to change position, getting smaller then larger with respect to each other as your triangle universe changes configurations, but it’s impossible to tell if you are moving or if the other objects are moving.
From this
vantage point it’s easy to grasp the idea of relative motion, and
you remember Galileo’s notion that there is absolutely no difference
between an object in motion and an object at rest.
There is nothing to judge the various configurations of the
three objects: no reference point, no background space, just change
of the entire universe—the three objects forming your triangle
universe. Just as
it is meaningless to ask what lies beyond the universe, or if there
was time before the big bang, it is meaningless to ask, “what was
the configuration prior to the one you are in at this very moment?”
Each snapshot, each configuration,
is the universe.
There is nothing that lies within this universe.
The objects themselves
are the universe.
There is nothing outside your universe to “time” it as it
goes through its phases.
Only internal change relative to the universe can have “time”.
But nothing lies within; it’s everything that is. There is no clock, no God’s-eye view. There is no motion, only different configurations of your small universe.
Now
imagine expanding this concept to our present universe.
The scenario is the same as above.
Every object in the universe
is the universe.
Nothing lies within this “container.” There is no motion, only different configurations of all objects in the universe.
Motion is time and nothing more.
Therefore no motion—no time.
The things we see around us that we perceive as motion are
simply part of the great configuration of the universe.
Each instant of “time” is simply a different configuration.
Implications and speculations
It
is often stated that Relativity destroyed the concept of “now” and
this bothered Einstein greatly.
Physicist David Bohm described this idea succinctly: Imagine
a line representing time.
The line is divided into two equal parts.
The part of the line to the left of the midpoint represents
the past. The part to
the right represents the future.
This leaves nothing for the “now”.
Einstein imagined that the past and future exist all at once (the block-time concept). Barbour says that Einstein’s view is very close to his own. The difference is that, whereas Relativity denies the simultaneity of events, Barbour’s world doesn’t rule out the relative simultaneity of events.
There aren’t multiple “nows” for different observers in motion; rather, there is only one “now” for all observers in a static universe. Any other “now” in a different region of space/time is simply a past world that no longer exists. Barbour refers to this as the “many instants interpretation” of reality.
Barbour says that his many instants interpretation differs
from the more famous version by Hugh Everett called the many worlds
interpretation, in several ways.
While Everett’s view of the two-hole experiment is that the
wave function never collapses and that all possibilities are
realized, each in its own separate universe which splits off along
with the observer at that instant, Barbour believes that it’s the same universe en masse, but in a different configuration.
The results of a sub-atomic experiment cannot be explained
from what happened previously.
There is no causality from past to future.
There is only “horizontal causality” from the totality of
things happening at this instant.
Let’s imagine what a typical two-hole experiment might look like in a Barbour universe, if I understand what he says. I will call Barbour’s universe the “B” universe.
A photon
of light is emitted from a source and travels on its way to a
detection screen.
Between the source and the screen there is a barrier with two holes.
The photon will appear to go through both holes as if it were
a “wave” unless there is an observer on the far side of the barrier
in which case the photon will “choose” one hole or the other and
behave more like a particle.
How does the photon “know” in advance if there will be an
observer? This has been
one of the paradoxes of the two-hole experiment.
In the B universe
each particular setup of the experiment is one configuration of the
universe. There is no
unfolding of events as the photon passes through the experiment.
There is no motion in the
B universe.
Therefore, one entire configuration—one snapshot—is the
entire experimental setup with an observer in place.
The results are always the same in that situation.
The photon goes through one of the holes.
When no observer is present the probability wave is the
result. There is no need
for the photon to “know” anything in advance because no time passes.
This is also consistent with Relativity.
For a photon traveling at the speed of light, no time passes
and all distances shrink to zero; the source, the holes, the
observer, the screen, are all in one place simultaneously.
There is no before or after —A “Barbour relativistic ‘now’.”
This is not a completely satisfying description because not
all the paradoxes are cleared up.
Why the wave particle duality?
One inference is Barbour’s static-wave idea. According to Barbour, the
appearance of movement is caused by two separate static probability
waves out of phase by one quarter of a wavelength.
If that is so, then we could surmise that when no observer is
present both particles, which are traditionally called the wave
function, go through their separate holes and interfere with one
another. The results are
recorded on the screen. That’s one experiment—one configuration.
When an observer is present in a different configuration, a
different experiment, and a different world, only one possibility
occurs. Since the wave,
which is really two static waves slightly out of phase, “goes through”
the holes, the passage is not simultaneous.
It is not one configuration of the universe, but rather two.
The “lead” particle goes through the setup “first” and that
is the result. There is
no “time” for the second one to go through.
That is a different configuration, so it never occurs.
The universe camera has already taken its “snapshot” and the
result recorded. It’s
difficult with our sense of time to get away from words like “goes
through” and “first.”
These are in effect just different configurations of the universe.
Barbour himself never addresses what precisely would happen
in the two-hole experiment, and I admit this is a stretch on my part,
but I had anticipated a B
universe back in 1994
(Quantum Reality, pages 89-100).
A few
final points: First, there were two things that bothered Einstein
about quantum physics. One was the observer effect.
The other had to do with probability.
As a believer in a deterministic universe of cause and
effect, he wondered how it was possible that the laws of probability
governed the universe at the most fundamental level of reality.
He would have appreciated the
B universe.
To speculate once again, in a
B universe there is no
way for probabilities to build up over time, because time is
nonexistent. There is
not a 50/50 probability of the photon going through one hole or the
other, as experiment suggests.
There is only one determined possibility for each individual
configuration. When we
throw a coin a hundred times we know that the compiled, recorded
results over time will produce about 50 heads and 50 tails.
These records are not past records in a
B universe, but present records of different worlds,
just as if 100 people individually threw a single die
simultaneously. The same scenario holds for the photon.
Each configuration could be deterministic, but appear to be
probabilistic when the records of many different universes are
recorded and observed in the present universal configuration.
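The statistical point here can be sketched with a short simulation (a toy illustration of my own, not anything from Barbour’s book): whether 100 coin flips are recorded one after another “in time,” or gathered from 100 independent single flips made “simultaneously,” the compiled records show the same roughly 50/50 statistics.

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

# One observer flipping a single coin 100 times in sequence.
sequential_heads = sum(random.randint(0, 1) for _ in range(100))

# 100 observers each flipping one coin "simultaneously":
# statistically, just 100 independent single trials.
simultaneous_heads = sum(random.randint(0, 1) for _ in range(100))

print(sequential_heads, simultaneous_heads)  # both cluster near 50
```

Either way the records are compiled, the apparent probabilities look the same, which is all the “many configurations” picture requires.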
On a
separate point, it seems to me that not even Barbour took his ideas
far enough. Relativity
tells us that space, time and motion are inseparable.
He took care of space and time.
What about distance?
Is distance really anything concrete?
For example, imagine that we double the size of the entire
universe. Would we
notice? No, every
measuring stick would likewise be doubled! How about if we measured
things with a beam of light? Light speed is invariant and not
relative. That wouldn’t
change things either because time would double along with space.
A second of “time” would double in duration. Distance seems
arbitrary as well.
Finally,
contrary to what one might think, Barbour says Einstein was
absolutely correct in his theory of Relativity.
Why? I’m not
sure. There was much of
the book I didn’t understand.
My guess is that Barbour would say that everything works out
as Einstein predicted if motion is considered “locally” within
individual configurations and not in a static holistic universe.
The B universe
theory is simply a more comprehensive addition.
Gevin Giorbran
In this
exhaustive masterpiece Gevin Giorbran gives us a unique and original
“hypothesis of everything”.
From the big bang, or Alpha state, to the final Omega state,
the universe is not winding down as the traditional explanation of
the second law of thermodynamics is currently understood.
Rather, the universe is going from one type of order,
“grouping order,” to another type, “symmetry order.”
Time, energy, the
forces of nature, and meaning are all a result of the universe
moving toward its most probable future, a future of perfect balance.
According to Giorbran,
energy is a product of imbalance, and time is simply the
transformation of matter into the fullness of space symmetry.
Gravity can be
understood as pockets of “time in reverse”.
Gravity represents our past order, pulling time backwards.
The
electromagnetic force represents the arrow of time moving toward the
future. The opposite
charges of the electron and proton are simply the tendency of these
particles to seek balance.
“Forces are simply the shapes of probability waves, and those
shapes bond particles together, in groups, in lattices, in
symmetries.”
The past and the
future are quantum potentials, and conscious beings are constantly
creating the most likely futures and the most likely pasts.
Meaning arises as a result of the decoherence of these
potential states.
Everything Forever is the most significant book concerning the nature of
reality I’ve read in years.
I highly recommend this book for those looking for a simple
and elegant hypothesis of the infinite, meaningful universe.
The Bit and the Pendulum:
From Quantum Computing to M Theory
The New Physics of Information
Tom Siegfried
Just as the
clock defined society in medieval times, and the steam engine
defined life in the nineteenth century, so the computer has defined
society today. But Tom
Siegfried believes that the metaphor of the computer is much more
profound than the tools of previous eras.
In an interdisciplinary effort, Siegfried interviews some of
the most renowned scientists and thinkers of our day to illuminate
his belief that information is what the universe is made of at the
most fundamental level of reality.
An interview with
physicist John Wheeler reinforces his hypothesis.
Wheeler states that his view of the universe has changed over
his lifetime. Initially,
Wheeler believed that the most basic entity was the particle, then
in later years he believed that the universe was made up of fields,
and now he believes, as does Siegfried, that everything is
information. To state it
another way, information is real—a physical aspect of the universe
as real as atoms.
Wheeler, in turn,
addresses the quantum measurement problem.
In a departure from a rising tide of opinion among scientists
that quantum measurements merely select a reality that already
exists, he believes that we are not acquiring information from the
great void, but in actuality, “creating” information out of a
chaotic confusion of “yes—no” possibilities.
Information is not an
abstraction, but is always tied to something physical, whether it be
ink on paper, holes in a punch card, magnetic patterns on a floppy
disc, or the arrangement of atoms in DNA.
Information is always physical in some way.
Just what
constitutes an observer has been a controversial subject in quantum
theory since the Copenhagen Interpretation of reality.
But Murray Gell-Mann, considered one of the deepest thinkers
in the world of physics, maintains that an observer is simply a
complex system that has the ability to compress information about
its environment, an entity that can take bits and construct an
algorithm.
From Wheeler to
Hugh Everett, from Physics to Biology, Tom Siegfried makes a strong
case for the hypothesis that information is physical.
Supernatural:
Meetings with
the Ancient Teachers of Mankind
Graham Hancock
Graham Hancock’s journeys have taken him into some of the
most inaccessible caves of Europe and to the farthest reaches of the
mind in an attempt to answer the question: What happened around
40,000 years ago that changed humanity?
Before this period in history there is little that we would
recognize as modern human activities, but quite abruptly after this
period, modern human behavior emerged.
Hancock believes that this sudden change was the result of
the widespread use of psychoactive plants.
Hancock
begins his guided tour into the Pech Merle cave in southwestern
France where, around 25,000 years ago, during the epoch known as the
Upper Paleolithic, ancient artisans began to use the cramped,
inaccessible, dimly lit, and damp walls of the caves as a canvas for
their paintings of red ochre and black manganese.
There, Hancock discovers many of the traditional scenes of
hunters and game animals, such as the woolly mammoths and bears, but
there are also surrealistic drawings of animals with
disproportionately small heads, cartoon-type figures of fairy-like
humans, the “wounded man” paintings of humans pierced with arrows,
and the half-man, half-beast images, known as therianthropes, that
are common to much prehistoric art.
It appears that around 40,000 years ago a symbolic revolution
began in which humans began to be concerned with more than just
survival. They began to
ponder the after-life, gods, deities, and supernatural beings.
Since our anatomy and genetic make-up have not changed in at
least 200,000 years, it is the belief of Professor David
Lewis-Williams and a growing number of anthropologists and
paleoarcheologists that about 40,000 years ago religions sprang up
as a result of the hallucinogenic visions brought on by the
widespread ingestion of psychoactive plants.
It is now a widely held view among scholars that these
ancient cave paintings were an expression of the psychedelic visions
of shamans.
Lewis-Williams, the architect of the neuropsychological model of
cave art and religion, has, since the 1980s, spent an enormous amount
of time and effort defending his proposition.
With so much invested, Graham Hancock was miffed that
Lewis-Williams had never consumed any hallucinogenic products
himself. Hancock felt
that for the sake of research it was necessary to experience these
substances firsthand, so he took it upon himself to become a
psychonaut, and over the next few years began experimenting with ibogaine,
psilocybin, DMT, and ayahuasca.
By this time, Hancock was already familiar with psychiatrist
Dr. Rick Strassman’s research program at the University of New
Mexico, in which
he administered DMT (Di-methyl-Tryptamine), a highly
psychoactive drug, to volunteer subjects in the early 1990s.
DMT is one of the active ingredients in ayahuasca, a
psycho-active plant used by the shamans in the Amazon regions of
Brazil. DMT is found in
many plants and is, in fact, produced in small quantities in the
human body; however, it is only when it reaches a certain critical
threshold that it produces a storm of hallucinations.
The similarities between what was reported by Strassman’s subjects,
what has been reported by abductees in the UFO research field,
and what has been seen in the shamanistic cave art from antiquity are
remarkable. These
visions included: therianthropes (half man, half beast), bird-like
creatures, threads of light, tunnels, stairways, spaceships,
small humanoids about four feet tall with large heads and
black almond-shaped eyes, insectoids, reptiles, and, most
importantly, coiled serpents, a staple of most hallucinogenic
experiences.
Strassman’s subjects were not satisfied with the explanation that
these were mere hallucinations hardwired into the neural structure
of the brain only to be released upon the introduction of a chemical
substance. To them the
experiences were too real.
Nor was Hancock satisfied.
He points out that this reductionist viewpoint makes no sense
either evolutionarily or culturally.
Consumption of psychoactive plants serves no survival
purpose, nor is it likely that these visions are culturally induced,
since the same images appeared to the ancient shamans in their cave
art 25,000 years ago as they did to their counterparts in
Strassman’s experiments in the 1990s.
Yet, no two experiences are ever alike.
One of Strassman’s subjects who had a multitude of sessions
said that when she returned for a visit with the entities, it wasn’t
like things picked up where they left off, it was like returning to
what had happened since her last encounter.
Strassman, himself, wondered if it were possible that these
DMT induced experiences were not necessarily hallucinations in the
conventional sense, but rather, an aid to bringing other realities
into our awareness.
Hancock, too, wondered if these alternate realities are actually
real, existing parallel to our everyday reality but imperceptible to
our senses. This is just
one of several ideas that Hancock wanted to explore through his own
use of psychoactive substances—substances that are illegal in most
parts of the world, with extreme punishments for offenders, often
resulting in incarceration for long periods of time—a situation that
Hancock feels is a violation of human rights if not outright
barbaric.
Lewis-Williams, the
founder of the neuropsychological model of cave art and religion,
believes that the cave art is nothing more than the illusions of
brain chemistry, but like Hancock,
other renowned specialists in the field believe that this
explanation is too simple.
William James, the renowned psychologist and philosopher,
believes that our normal waking consciousness is just one type of
consciousness shaped by evolutionary processes, but
he contends that we should not infer that it is the only
consciousness. Author
Aldous Huxley agrees.
Drawing on his experiences with peyote, mescaline, psilocybin and
LSD, Huxley has come to the conclusion that our brains and central
nervous systems act more like a filter, screening out other forms of
reality that we cannot deal with, leaving us with a very narrow view
of reality.
Others, such as Albert Hofmann, who was the first to synthesize LSD,
stated that “The true importance of LSD and related hallucinogens
lies in their capacity to shift the wavelength setting or the
receiving ‘self’,” and thereby to evoke alterations in reality
consciousness (103). Both
Huxley and Hofmann view the brain as not only a producer of
consciousness, but a finely tuned receiver.
It should
be noted that ingesting hallucinogens is not the only way to reach
altered states of consciousness.
Many cultures around the world, from Africa to the Americas,
have used other methods to reach altered states, including
self-inflicted pain, fasting, drumming, and rhythmic dancing.
Remarkably, 2% of the population can spontaneously go into
altered states of consciousness, and Hancock is one who thinks that
these are the same people who report abduction experiences.
He says up front that he does not believe that UFOs are
nuts-and-bolts spaceships from other planets.
UFO advocates need not be
discouraged, because Hancock does entertain another idea that opens
new possibilities of alien interaction—the possibility that there is
an encoded message on our DNA.
If
some advanced civilization wanted to leave us a message to read when
our civilization had attained a certain scientific sophistication,
where would they leave it? It is plausible that they might encode a
message in some of the great megalithic structures found around the
world, such as the pyramids on the Giza Plateau in Egypt or the
pyramids of the Sun and Moon at Teotihuacan, Mexico.
These messages could be in the form of a hall of records
concealed within or buried underneath these structures in deep
caverns. They could have
left mathematical messages incorporated into the very design of the
structures themselves, or in the precise placement of the structures
in relation to points on the globe or alignments with stars.
They might well have done this, but the most ideal place to
store a vast amount of information that won’t be degraded or
compromised by the passage of time is to store it on the DNA
molecule—the double helix.
Coiled up in each and every cell in our body are two meters
of DNA just ten atoms wide.
DNA has the capacity to store over one hundred trillion times
more information by volume than our most sophisticated computers. It
is the perfect storage system because the information can be copied
nearly flawlessly and passed anew from one generation to the next.
During this process there are proteins whose sole
responsibility is to proofread the message and correct for errors.
It is a remarkable system that we are only now beginning to
understand. It is
noteworthy that many people in altered states of consciousness
report seeing serpents joined together in the configuration of a
double-helix DNA molecule, the serpents intertwined in reverse,
heads facing tails, just like the reversed alignment of the double DNA
strands. It is now
legend that the co-discoverer of the structure of the DNA molecule,
Francis Crick, envisioned the double helix structure in a dream;
however, Crick confided in his friends that when he worked at the
Cavendish Laboratory in Cambridge in the early 1950s, he frequently
used LSD, a substance that was legal at that time.
He, in fact, told his colleagues that he was under the
influence of LSD when he saw the double helix shape.
Interesting too is the remarkable coincidence that
tryptophan, the amino acid that is involved in the construction of
the DNA molecule, is the parent molecule of tryptamine, the active
molecule in nearly all psychoactive substances, including DMT,
ayahuasca, and LSD.
Perhaps tryptamine, which is ubiquitous in nature, provides the key to
tap into the message left on the DNA molecule.
One of Strassman’s subjects felt that “…the plant was a being
in its own right and that it was passing knowledge to him.
This is in line with the indigenous conception of plants as
teachers…”
(280) If this is
true, then we can say that tryptamine doesn’t
cause hallucinations, but instead, provides access to an alternate
realm of reality.
Is there evidence that a message was transcribed on DNA for us to
read; furthermore, if there is a message, who or what left the
message?
In 1939,
George Zipf discovered an interesting aspect of all languages.
He found that in each and every language there is a formula
for determining the frequency of a word in a text.
Graham Hancock describes Zipf’s law as follows:
If one searched a novel of any of the world’s
languages, for example, one would find that if the first
ranked word—the most frequently used word—occurs 10,000 times then
the tenth ranked word will appear 1,000 times and the 100th
most frequently used word will appear 100 times.
If word frequency is plotted against rank on a log-log graph, one
gets a straight diagonal line. In the above example the frequency
decreases by a factor of ten for each tenfold increase in rank.
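The rule Hancock describes amounts to a one-line formula: a word’s frequency is inversely proportional to its rank. A minimal sketch (the word counts are the example’s, not from a real corpus):

```python
def zipf_frequency(top_count, rank):
    """Zipf's law: the rank-th most common word appears
    about top_count / rank times."""
    return top_count / rank

# Reproducing the example: the most common word occurs 10,000 times.
for rank in (1, 10, 100):
    print(rank, zipf_frequency(10_000, rank))
# rank 1 -> 10000.0, rank 10 -> 1000.0, rank 100 -> 100.0
```

Because frequency is proportional to 1/rank, taking logarithms of both axes turns the curve into the straight diagonal line mentioned above.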
When researchers at Boston University and Harvard Medical
School examined DNA base pairs, they found that Zipf’s law did
not apply to the 3% of
our DNA that codes for the proteins that run the machinery of life.
However, when they analyzed the 78% of our DNA that is called
“junk DNA,” they discovered that Zipf’s law corresponded perfectly.
The research suggests that the so-called junk DNA might be a
language. But this was
not the only indication that a language is encoded on our DNA.
Claude Shannon, who was the first to realize that the second
law of thermodynamics could be applied to information, discovered
that any true language has a standard measure of redundancy built
into any string of characters.
This redundancy distinguishes a language from a string of
nonsense. When Shannon’s
law was applied to the sequences of DNA that code for proteins, the
redundancy was, again, not
found. Redundancy isn’t
necessary for perfect copies of DNA, but the redundancy required for
Shannon’s principle did correspond with the junk DNA.
So, we have two separate tests predicting that the junk DNA
is encoded with some kind of language.
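Shannon’s redundancy measure can be sketched in a few lines. This is a toy version of my own working on short character strings rather than real DNA base pairs; the sample strings are made up for illustration:

```python
import math
from collections import Counter

def entropy_per_symbol(seq):
    """Shannon entropy H, in bits per symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def redundancy(seq):
    """1 - H/H_max: 0 means the alphabet is used uniformly;
    higher values mean more built-in repetition."""
    h_max = math.log2(len(set(seq)))
    return 1 - entropy_per_symbol(seq) / h_max if h_max > 0 else 0.0

# A string using each symbol exactly once has no redundancy...
print(redundancy("acgt"))               # 0.0
# ...while skewed symbol usage shows the kind of built-in
# repetition that distinguishes a language from noise.
print(round(redundancy("aaacgt"), 3))   # 0.104
```

A real language scores well above zero on this measure, which is the signature the researchers looked for in the junk DNA.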
Why would this so-called junk DNA be in our cells in the
first place? Nature
tends to be very frugal, and it seems very unlikely that all of this
DNA is in our cells for absolutely no purpose.
All of
the evidence seems to point to the fact that DNA contains some sort
of message. The next
question is how did it get there?
We can only speculate at this point.
Some UFO researchers contend that it was encoded on our DNA
by an extraterrestrial intelligence.
Francis Crick suggests that the molecules of life are so
unlikely to have occurred by random chance in the relatively short
time that our earth has been a host for life that some sort of
intelligence might be involved.
But even if we assumed that life on earth was seeded by some
far-flung civilization that has been around since the dawn of the
universe, the likelihood of accidental occurrence of life is only
slightly enhanced by this increased time frame, in my opinion.
I believe
that part of the mystery of how life began lies in the fact that we
still don’t have a good definition of life.
We still take the reductionist view of a bottom-up approach
completely leaving out the idea of self-organizing principles of
cognition. In his
book Mind in Life Evan
Thompson suggests that the process of understanding life and
understanding mind are continuous—to understand life is to
understand mind. What
distinguishes life from non-life, according to several contributors
in his book, is that living organisms are self-organizing,
self-replicating systems that are cognitive or have “purpose.”
But this is not purpose in the conventional sense.
In other words, the DNA molecule is not purposeful in
relation to our needs and goals. DNA cares nothing about our
individual goals. But DNA is purposeful to proteins, and proteins
are purposeful in relation to the individual cells, and the cells
are purposeful to tissue, and tissue to organs, and organs to the
organism, and the organism is purposeful to mind.
Purpose is an idea that science does not like because it
reverses the principle of cause and effect and it smacks of
religion. Science
long ago discounted Aristotle’s notion of teleology, which holds
that the acorn strives to become a tree.
Yet, when you watch a video of the machine-like ribosome
factories in the cells reading, unzipping, transcribing, and
spitting out strings of DNA code, it certainly looks purposeful.
One of
the mysteries of the origins of life is that life requires both DNA
and proteins working together to create and maintain an organism.
DNA is required to make proteins that, in turn, are necessary
to transport, transcribe, and replicate the DNA. Neither is possible
without the other. It is
the chicken and the egg riddle.
How did either independently arise without the other?
Niels Bohr borrowed the Buddhist concept of the yin and the
yang, an idea he called, “complementarity” to explain the
wave/particle duality.
The wave and particle exist only in relation to each other,
each exclusive of the other.
Many complementarities are recognized in nature.
The quark, for example, does not exist in isolation but
exists only in relation to the atom.
This idea of complementarities might be what we are looking
for to explain how DNA and proteins evolved at the same instant, and
in addition, it can explain the concept of purpose.
If we recognize that the DNA molecule and the proteins are a
complementary system, exclusive of each other, yet dependent upon
each other for their existence, then it is easier to understand how
they co-evolved. Any
physical system is a system that exchanges information and meaning.
These systems are information/meaning complementarities; they
compute; they are intelligent and they have purpose.
Life didn’t occur by accident, therefore, but came about through
self-organizing and cognitive principles of nature.
Complementarities appear to show purpose, from the DNA and proteins,
to the cells, tissues, organs, and the mind.
But what if complementarities do not end there?
What if the chain of complementarities forms a loop with the
mind being complementary to the DNA itself?
This idea would link the whole system into a continuous
complementarity and it would explain how people who go into altered
states of consciousness can directly access the DNA code.
It is conceivable that the psychoactive molecules might block
our ordinary perception of reality and allow direct contact with our
own DNA. If our DNA is a
great hall of records, and psychoactive plants are the key to
unlocking the vault, then the reports of hybrids, reptiles,
entities, abductions, and spaceships by psychonauts, might not only
be an indication of our past interactions with extraterrestrial
life, but it might contain a window into our future interactions
with extraterrestrial life as well.
References:
Thompson, Evan. Mind in Life: Biology, Phenomenology, and the Sciences of Mind. London: Belknap Press of Harvard University Press, 2007.
Lloyd, Seth. Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos. New York: Alfred A. Knopf, 2006.
Secrets of the Sacred Ark
Laurence Gardner
International best-selling author
Laurence Gardner has fulfilled all expectations with this well
researched and scholarly approach to a mystery that spans thousands
of years.
Gardner’s
contention is that the Biblical Ark of the Covenant was in reality:
a capacitor that made it a powerful and deadly weapon, a
superconductor that allowed it to levitate above the ground, and a
machine for manufacturing a substance called:
mfkzt by the ancient Egyptians,
Shem-an-na by the
Mesopotamians, manna by
the Hebrews, and the philosophers’
stone by alchemists.
At first glance, these propositions seem preposterous, and
yet, after following his trail of logic, one wonders how it could
have been anything other than what he insists it was.
Using his vast
background in antiquities and history, and his recently acquired
research into quantum physics, Gardner is able to convincingly offer
a solution to the riddle of the Sacred Ark of the Covenant.
In recounting the history of the Ark the author makes it
clear that biblical stories only loosely correlate with historical
facts.
Gardner begins
his tale with a 1904 expedition, funded by the Egypt Exploration
Society and headed by Sir W. M. Flinders Petrie.
The Petrie expedition, whose initial purpose was to explore
the turquoise and copper mining region of the Sinai Peninsula,
scaled Mount Horeb.
Mount Horeb, according to the King James Version of the Bible, was
the place where Moses received the Ten Commandments.
On the plateau of the 2,600 ft. mountain, Petrie and his
party were astonished to find the ruins of an Egyptian temple, with
hundreds of artifacts including: huge statues, obelisks, a man-made
cave with inscriptions dating to 2,600 BC, round tables, trays,
crucibles, vases, wands, and significantly, as we shall discover
later, conical stones and tons of a pure white powder concealed
underneath flagstones.
Clearly this was an active site long before biblical times.
What was this
mysterious white powder?
Many of the inscriptions on the temple walls refer to a substance
called mfkzt. Scholars
and alchemists have ascertained that it was made from gold; a fine
powder that was often referred to as
white bread, or manna, and
called the giver of life. Numerous stelae at the temple show it as a
conical loaf being presented to the king.
But this is not the only reference to mfkzt.
The substance is referenced throughout ancient history.
For example, there are depictions at the Temple of Karnak.
As Gardner explains:
“As reproduced in a bas-relief at the temple of Karnak…There
are a number of cone-shaped items.”
They are explained as being made one cone of silver and
thirty of gold, and carrying a description: “white bread.”
Other biblical
references refer to the white powder as
manna.
“And Moses said unto them, this is the bread which the Lord
hath given you to eat.”
In Revelation (2:17) it is
said, “To him that overcometh will I give to eat of the hidden
manna, and will give him a white stone.”
“Stone” in the ancient Egyptian language was defined as a
valuable unstable substance.
The “paradise
stone” was featured in a parable of Alexander the Great, in which it
is said to, “give youth to the old and to outweigh its own quantity
of gold, although even a feather could tip the scales against it.”
Other
Egyptian writings refer to mfkzt as the purest of pure gold that
resists fire like a stone, but is of very fine powder.
In Exodus of the Old Testament it is stated that “shewbread,”
or “bread of the presence,” was made at Mount Horeb by the skilled
goldsmith and craftsman Bezaleel, who, as no small matter, is also
given credit as the builder of the Ark of the Covenant.
And Moses too had the secret, for it was written that
Moses burnt the golden calf
with fire, ground it into a fine powder, mixed it with water, and
made the children of Israel drink it.
Obviously, people
were consuming a white powder made from gold, but what were the
benefits, and how can a white powder be made from gold?
Heating gold simply turns it into molten gold. Was the white
mfkzt found at Mount Horeb gold?
All references indicate that it was, yet when the substance
was initially analyzed it was said to be a combination of iron,
silica, and aluminum.
The author leaves us in suspense at this point and turns to modern
physics for answers.
Gardner
explains that when heated with an electric arc under the right
conditions, atoms of certain elements enter into what is referred to
as a “high spin state”.
Normally the outer shell of electrons is outside of what is called
the screening potential of the nucleus.
These outer electrons usually travel in pairs with opposite
spins. When the atom
goes into a high spin state all of the electrons come under the
influence of the nucleus and they all begin to resonate as one.
The electrons turn into pure white light and the substance in
effect becomes a superconductor.
The individual atoms of the substance can no longer bond
together and the substance falls apart into a fine powder of
individual atoms—the mfkzt.
Other metals exhibit these properties as well, and they are
called the platinum group metals (PGMs). These metals include
platinum, palladium, rhodium, iridium, and osmium.
Here again is more evidence that the substance at Mount Horeb
was gold or one of the platinum group metals.
Could it be, as Gardner contends, that the ark itself was used
to manufacture mfkzt? If
so, how would this be done?
To make gold or PGMs become monatomic it is necessary to
provide direct electric arc heating over a period of time.
The ark must have somehow provided this power.
Biblical
references are quite specific as to the dimensions of the ark, and
these are confirmed by the writings of King Solomon.
The dimensions of the ark were about 45 inches in length, 27
inches wide, and 27 inches in height. The ark box was made of wood
sandwiched by pure gold on each side.
On top was the “mercy seat,” a lid made of pure gold. On each
end were two wooden poles for transport by four or possibly
eight men. It so happens
that these are also the specifications for a capacitor.
A capacitor is basically two conductive materials separated
by an insulator.
Atmospheric charges, which would have been common on Mount Horeb,
would be all that was needed to make the ark into a powerful and
deadly device. And more
importantly, it has been estimated that a capacitor the size of the
ark could charge to several thousand volts, enough arcing power to
give it the capability to manufacture mfkzt – the pure white
monoatomic powder of gold.
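As a rough back-of-the-envelope check, the parallel-plate capacitor formula C = ε0·εr·A/d can be applied to the ark’s stated dimensions. The wood thickness and its dielectric constant below are my own guesses, not figures from Gardner, so this is only an order-of-magnitude sketch:

```python
# Parallel-plate estimate: C = eps0 * eps_r * A / d
EPS0 = 8.854e-12   # vacuum permittivity, farads per metre

INCH = 0.0254                        # metres per inch
area = (45 * INCH) * (27 * INCH)     # top plate, 45 in x 27 in
d = 0.05      # assumed wood thickness between gold layers, 5 cm (my guess)
eps_r = 2.0   # assumed dielectric constant of dry wood (my guess)

capacitance = EPS0 * eps_r * area / d
energy = 0.5 * capacitance * 10_000**2   # stored energy at 10 kV

print(f"C = {capacitance:.2e} F, E = {energy:.3f} J at 10 kV")
```

Under these assumed values the capacitance comes out at a fraction of a nanofarad; the actual figure would depend strongly on construction details that the biblical description leaves open.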
It is interesting
that the ancients made special mention that the mercy seat was made
of “pure gold.” How did
they know, for instance, that the gold they mined was not pure?
They in fact had seven nouns for gold, of which only one
referred to pure gold.
And if they somehow knew that it was impure how did they make it
pure? The process for
making pure gold wasn’t developed until the 19th century.
The process called “electrolysis” requires a direct electric
current to be applied to a tank containing an ionized salt and the
molten metal, with the positive anode at one end and the negatively
charged cathode at the other end.
The atoms of the element build up on one of the metal plates
in a process called electroplating.
In light of these facts, it is perplexing to note that over 2,500 years ago
craftsmen were making gold leaf.
Gold leafing, which is pure gold beaten to a thickness of only one micron,
cannot be accomplished unless all of the impurities are removed from the gold.
And electrolysis is the only known method to accomplish this purification.
Gold-plated chisels dating back to 3000 BC have been found in Mesopotamia
as well.
At this point
Gardner interrupts his historical account, to tell the extraordinary
story of David Hudson, a wealthy cotton farmer from Phoenix,
Arizona. The soil in the
Yuma Valley is very alkaline, and in order to make the ground
suitable for planting, farmers like David Hudson contract chemical
companies to inject sulfuric acid into the soil.
Calcium carbonate is then added to act as a buffer.
Hudson noticed
upon analyzing the soil that some of the soil products were not
broken down by the acid.
When the soil was dried in the hot Arizona sun at low humidity a
remarkable thing happened.
The sample disappeared in a burst of white light!
Intrigued, Hudson
decided to have the soil sample analyzed in a laboratory.
The only thing that could be determined by the analysis was
that it was denser than lead.
At first he thought that the material might be gold or silver because,
even though lead has a higher atomic weight on the periodic chart,
gold is denser and has a higher specific gravity than lead because
the individual atoms form tighter bonds.
But this hypothesis was
short-lived because, when the substance was heated, allowed to cool,
and then struck with a hammer it shattered like glass.
Hudson then took the material to another laboratory, where, after
extensive testing, it was determined that the material was made up of
common iron, silica, and aluminum, precisely the same analysis as the
white powder found at Mount Horeb.
Yet, Hudson knew that the sulfuric acid would have destroyed
those elements.
The substance was then tested by emission spectroscopy, in which it was
subjected to an electric arc for 15 seconds at 5,500 degrees centigrade,
but still there were no satisfactory results.
From there it went to a lab in Oxfordshire, England for
neutron activation analysis, but no suitable reading could be
obtained. It had been
suggested by the Soviet Academy of Sciences that the arcing time
should be increased to 300 seconds, twenty times longer than
previous tests. The test
was repeated and Gardner describes the dramatic situation as
follows:
Then using an
original raw sample, the tests were done again.
As expected, during the first 15 seconds it read iron, silica, and
aluminum… 20 seconds went by, 25, 30, 35, 40—and all the way to 70
seconds: still nothing.
Then, suddenly, the substance became real again, registering
as palladium: after another 20 seconds, it recorded platinum, and
after that came ruthenium, rhodium, iridium, and osmium at 220
seconds. It transpired
that the tiny white bead was composed entirely of platinum group
metals, which by standard Western testing had previously registered
nothing.
The richest known platinum group deposits in the world are
half a mile underground at the Bushveld Igneous Complex in South
Africa…It was discovered that the Phoenix farm soil contained 7,500
times this amount, at an astonishing 2,400 ounces per ton!
Is there any doubt that the white powder at Mount Horeb, said to be iron,
silica, and aluminum, was anything but PGMs?
Subsequently, David Hudson filed 22 patents for the newly discovered
monatomic elements, for which six more years of testing were required
to provide the necessary data.
More astonishing results were in store.
Gardner describes what happened when the substance was heated
and then cooled:
But the great surprise came when the substance changed from
its original dullness to the familiar whiteness of its bright bead
and subsequent powder.
At that moment the sample’s weight fell dramatically to 56% of its
starting weight. Where
did the other 44% go?
Further heating at 1160 degrees centigrade in a vacuum then
transformed the precious substance into a wonderfully clear glass,
at which point the material weight returned to its original 100%.
It was seemingly impossible, but it happened time and time again.
When repeatedly heated and cooled under inert gases, the cooling
processes took the sample to an amazing 400% of its starting weight.
But when heated again it weighed less than nothing—way below
zero. When removed from
its pan, the pan actually weighed more than it did with the material
in it, and it was perceived that the white sample had the ability to
transfer its weightlessness to its supporting host.
Even the pan was levitating!
Surprisingly, a superconducting material is not conductive.
For example, when a voltage potential was applied to the
white powder it acted like an insulator.
A material becomes superconductive when an external magnetic
force is applied, and this can be as slight as the earth’s own
magnetic field. When this force is applied, the electrons turn into
single-frequency light and the flow will continue forever without losing
energy. It is in effect a perpetual-motion engine.
When a material is superconducting, it repels all magnetic
fields, and therefore will levitate over a magnet.
This is why the white powder lost 44% of its weight.
It was repelling the earth’s own magnetic field. And if this
isn’t bizarre enough, a superconductor exhibits a quantum oddity
called non-locality.
When two or more superconductors are correlated they act as
one single quantum entity.
Time and distance are irrelevant.
Any change in one will instantaneously affect the other no
matter how far they are separated in the universe.
It is no wonder that superconductors have been described by
some physicists as having the strangest properties of any substance
in the universe.
Here again
is another clue to the puzzle of the ark.
You’ll remember that the lid of the ark was made of pure
gold. It has been
calculated that the lid alone would have weighed around 2,714
pounds. The box of the
ark probably weighed an equal amount or more, yet we know that the
ark was often depicted being toted around by four men, and was even
marched around the walls of Jericho.
This would seem highly unlikely unless the ark had the
ability to reduce its own weight by levitation.
Gardner speculates that not only was the ark a manufacturer
of mfkzt, but probably stored the substance as well.
This might have been enough to cause the ark to levitate,
making it easier to move.
Many biblical references refer to two portable hand-held items that often
accompanied the ark, called the Urim and the Thummim. These were said
to emit charges of lightning, and they had the power of levitation.
If the mfkzt in the ark was acting as a superconductor, it is
quite possible that the two devices had a non-local quantum coupling
allowing them to be used as fierce weapons of destruction.
Was this what caused the collapse of the walls of Jericho?
It seems that
there is only one great mystery left for Gardner to tackle.
Why were the ancients consuming mfkzt?
Could there be any benefits to eating powdered monatomic
gold?
Recently,
biochemists have become very interested in monatomic PGMs.
Research has shown that when a single atom of ruthenium or platinum
interacts with the DNA in tumor cells, the atom begins to resonate with
the malignant cell, causing it to correct its deformity. Additionally,
a single atom of ruthenium attached to each end of a DNA strand will
make it 10,000 times more conductive, in effect turning it into a
superconductor.
It has been
demonstrated, as well, that monatomic PGMs have anti-aging effects.
They increase melatonin production by activating the endocrine system,
stimulating the pineal and pituitary glands, thereby reactivating
“junk DNA”. Gardner and others are themselves experimenting by
consuming PGMs.
It is interesting to note that Moses was said to have lived
about eight hundred years.
Was this due to the consumption of monatomic PGMs?
We won’t be finding out very soon, apparently, because the saga of
David Hudson as told by Gardner isn’t a happy one. After sinking millions
of dollars of his own money into a processing plant to extract and
manufacture the PGMs, the United States government, through EPA
regulations, effectively shut him down.
Presumably our government could not afford to have this type
of technology in the hands of just a few private citizens.
Hudson is now nearly broke but continues to lecture around
the country.
How was it that
the ancients had this advanced technology?
Gardner speculates that they didn’t really have a
sophisticated knowledge of what they possessed, but rather stumbled
into it. This seems
plausible, but one wonders, if perhaps there was a previous highly
advanced culture, either terrestrial or extraterrestrial, that
passed the technology down to the cultures of the biblical era.
And Gardner himself speculates that the Great Pyramid, and
other huge monoliths around the world might have had help from
levitation. As the author explains, the common myth that ramps were used
to build the Great Pyramid has for all practical purposes been discredited.
A ramp to build the Cheops pyramid would have had to be about a mile in
length and contain three times the material of the pyramid structure
itself.
We’re told
that the pyramids were tombs for the kings, yet no pyramid has ever
been found with a corpse.
What then was found in the granite coffer in the Great Pyramid?
Only a granular substance that, when submitted to conventional analysis,
was said to be feldspar and mica, which are minerals of the aluminum
silicate group!
Lost Secrets of the Sacred Ark has all the intrigue of a detective novel.
The sometimes tedious historical references are certainly
worth the effort to attain the rewards this book holds.
I highly recommend it.
Quantum Questions—Quantum Answers
David Parrish, M.D.
David Parrish says that the fields of psychology, mysticism,
and quantum physics have all contributed to the notion that the
reality we perceive in the world is not an objective reality, but a
reality which is colored by our beliefs and our conditioning.
Modern psychology, mysticism and quantum theory have all
elucidated the fact that we are participants in creating and
altering our reality.
Parrish says that
modern psychology has shed the Freudian notion that our behavior is
determined by subconscious obsessions and desires, and external
events that are beyond our control.
In its place existential and cognitive psychology have for
the most part disavowed the subconscious mind and have embraced a
more seamless and holistic consciousness, a consciousness of freedom
in which we are responsible for our own acts.
Likewise
mysticism, which can be defined as a pathway to understanding
reality through a transcendence or direct experience, has for
centuries taught us that the subject/object duality is a grand
illusion, and quantum theory has once and for all demonstrated
through theory and experiment that in the subatomic realm there are
non-local connections between particles that have been intertwined.
Parrish does an
admirable job of drawing together the three disciplines, especially
psychology and mysticism; however, his understanding of the principles of
physics is a bit disappointing for the knowledgeable reader when he
attempts to describe such subjects as entropy, the EPR proposal,
Bell’s theorem of inequality, and some of the basic tenets of quantum
theory.
For example, he
defines entropy as a state of chaos when, in fact, entropy is simply
the tendency of systems to move toward equilibrium—a system’s most
probable state. He says
that the old scientific paradigm is characterized by chance and
separateness, while the new paradigm is one of wholeness and
harmony, when in fact the old scientific paradigm was deterministic
while the new paradigm --quantum theory-- is characterized by chance
and probability of the purest form.
He describes Bell’s theorem in terms of “non-local causation” and energy
fields, but Bell’s theorem of inequality put the final nail into the
coffin of hidden variables, such as energy fields, as an explanation for
non-local events. And he misrepresents the EPR argument when he says,
“They proposed through errorless
mathematical reasoning that if quantum theory were correct, then a
change in the spin of one particle in a two-particle system would
affect its twin simultaneously, even if the two had been widely
separated in the meantime.”
Instead, Einstein argued the opposite. And, not to belabor the point,
he states:
“Research has found that in the world of sub-particle matter, the
state of consciousness of the observer determines the outcome.”
This completely misstates the “observer effect”.
Many experiments have been conducted in which a conscious
observer is unnecessary in the collapse of the quantum wave
function. In an
experiment conducted by investigators at the University of Rochester
and featured in the November 1991 issue of “Scientific American”,
researcher Leonard Mandel said, “The mere possibility that the paths
[of the photons] can be distinguished is enough to wipe out the
interference pattern.”
Extracting information from a system alone can collapse the wave
function.
Parrish’s
main premise is valid and I think he succeeds in demonstrating that
the emerging fields of science have come to an understanding that
the universe is an undivided whole, and that we are responsible for
creating our own reality, but by the end of the book he destroys his
own premises by dividing and subdividing consciousness into layers,
and making religious references to God, spiritualism, and miracles.
The book began with so much promise, but it quickly spun out
of control. For that
reason I give it three stars.
Colin McGinn
The Mysterious Flame
is a clear and intuitive treatment of the mystery of
consciousness. Colin McGinn believes that consciousness is ubiquitous
among biological
organisms and that it is no more complex than digestion or sexual
reproduction.
McGinn discounts both materialism and dualism as rational explanations
of the existence of consciousness.
Materialists contend that the brain and the mind are
identical and that conscious experience can be reduced to neural
processes, a hypothesis that does not explain the subjective
experience of the senses, such as color perception.
Dualists, in contrast, believe that consciousness is an extra feature of
the universe and that the brain is not necessary for consciousness to
occur; both untenable propositions, as McGinn clearly demonstrates using
extremely clear examples.
We
may have run up against our cognitive limitations when it comes to
understanding consciousness, according to McGinn, because conscious
experience cannot be reduced to electrochemical processes of the
familiar kind.
There may be a link between space and consciousness, perhaps a
higher-dimensional reality that we don’t perceive. Just as our
spatial universe was created in the Big Bang by a non-spatial
entity, so, congruently, our spatial brain gives rise to a
non-spatial consciousness.
McGinn believes that the Turing test, devised by the English
mathematician Alan Turing, does not provide a necessary condition for
determining whether a machine is intelligent, since a conscious being
such as a cat would flunk the Turing test.
Neither does it provide a sufficient condition since a
machine can mimic conscious behavior without being conscious—a
materialist claim which McGinn successfully rejects.
The question of machine intelligence is dependent upon the
definition of “machine.”
If by machine one means an “artifact”, then it would be possible to have
an intelligent machine if man could create a brain, but it is unlikely
that a conscious machine could be created from silicon chips.
The Mysterious Flame is one of the best books I’ve read on the subject
of consciousness.
Meaning, Medicine and the “Placebo
Effect”
Daniel Moerman
Daniel Moerman places the words
placebo effect in
quotations because he
believes that the placebo effect should be redefined.
A placebo, he explains, is inert.
It has no causal effect.
A more appropriate definition of the “placebo effect”, he asserts, is
the meaning response.
It is our beliefs and the meaning we associate with a placebo that
determine its effectiveness.
Despite this simple formula for determining who will respond
to a placebo, it is not a very good predictor for a given individual
at a given time. Studies
indicate that there is no method to determine which individuals will
respond to a placebo.
Attempts have been made to remove placebo responders from studies.
Occasionally, researchers will conduct a precursor trial run
with a completely unrelated substance to identify those who might
respond to a placebo in an effort to cull these responders from the
“real” study. These
attempts have been futile.
No reliable indicators have ever been found that identify individual
placebo responders. In
fact, a person who responds to a placebo in one study has no
increased likelihood of responding to a placebo in subsequent
studies. More
remarkably, if one eliminates the approximately one third of the
populace who initially respond to a given placebo, the remaining
group will contain about the same proportion of responders in
subsequent studies.
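The statistical pattern Moerman reports is exactly what one would expect if responding were an independent chance event in each study. A minimal simulation, assuming (my assumption, not Moerman's figure) a flat one-third response probability per study:

```python
import random

random.seed(0)
P_RESPOND = 1 / 3  # assumed: roughly one third respond in any given study

def run_study(group):
    """Return the members of the group who respond in this study."""
    return {s for s in group if random.random() < P_RESPOND}

population = set(range(100_000))
first = run_study(population)

# Cull everyone who responded the first time, then run a second study.
remaining = population - first
second = run_study(remaining)

print(round(len(first) / len(population), 2))   # ~0.33
print(round(len(second) / len(remaining), 2))   # still ~0.33
```

Because each draw is independent, removing the first round's responders leaves a group that responds at the same rate, which is just what the placebo trials described above show.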
Moerman never makes the connection between these facts and the
parallels to natural physical laws at the quantum level.
And though they might be only coincidental, I think it worth
the comparisons.
Note
that a placebo has no causal effect, but instead it is
meaning that determines
the “effect” of a placebo.
The late physicist David Bohm asserted that the entire
universe is organized at all levels according to meaning.
If this is true, then it substantiates Moerman’s claim that
meaning is operating at the macro level.
But the similarities to physical law don’t end here.
Moerman observed
that when placebo responders are eliminated from a group, the same
statistical relationships hold for the remainder of the
group—approximately one third of the remaining group will still be
responders in the next study.
Similarities can
be drawn with quantum processes such as the jump of the electron in
orbit around the nucleus of an atom or the well-known process of
nuclear decay. If one knows the half-life of a mass, it is possible
to calculate exactly what proportion of the substance will remain
after a given amount of time, yet nothing can be said about the
transmutation of any given atom.
Divide the mass into two portions, and the half-life of each
portion remains the same.
As Moerman has shown, this is exactly what we witness in
placebo studies. It is
possible to calculate statistically how many in a group will
respond, but nothing can be said about which specific individuals
will respond. In both
cases, whether dealing with the placebo responders or nuclear decay,
the process is determinate for the whole, but indeterminate for the
individual person or particle.
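The half-life arithmetic behind this analogy is easy to state directly. A small sketch, in arbitrary time units:

```python
def remaining_fraction(elapsed, half_life):
    """Fraction of a radioactive sample still undecayed after `elapsed` time."""
    return 0.5 ** (elapsed / half_life)

# The whole is determinate: after one half-life exactly half remains,
# after two half-lives a quarter, and so on.
print(remaining_fraction(1, 1))  # 0.5
print(remaining_fraction(2, 1))  # 0.25

# Dividing the mass changes nothing: each portion keeps the same
# half-life, while which individual atom decays stays indeterminate.
whole = 1000 * remaining_fraction(3, 1)
split = 2 * (500 * remaining_fraction(3, 1))
print(whole == split)  # True
```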
I have previously described this as a new law-- “nature
conserves meaning”.
Moerman documents
many studies involving placebos from around the world.
He notes that cultural differences, knowledge, and the
practitioner all statistically contribute to the
meaning response. In
particular, it has been demonstrated that the character and
personality of the physician has more to do with the outcome of
placebo studies than the make-up of the patient.
Moerman contends that a positive and upbeat clinician or
doctor transmits subtle cues to the patient making for a more
positive outcome. He
states that it is what the doctor “knows” that is important.
If the doctor believes his patient has a possibility of
getting a powerful drug, patients will do better than if he knows
they will only be getting a placebo.
Even though both groups get a placebo, the doctor’s knowledge
is what is important.
The conclusion is sound, but the mechanism, I believe, is dubious.
I’m doubtful that some sort of “subtle” cues are passed on to the
patient in such a consistent manner.
I wonder if it is possible once again to find the answer in
physical law.
In the famous “double-slit” or “two-hole” experiment, as it is often
called, it has been demonstrated that an “observer” is not necessary to
change the behavior of particles. In fact, it is the mere possibility
that the path or route of the particle can be determined at some point
in the future that determines the outcome of the experiment. In
experiments done by Marlan Scully at the University of California at
Berkeley it was found that it is our knowledge that determines the
behavior of particles.
Surprisingly, it is not necessary for
the experimenter actually to go ahead and measure the photons’
polarization to determine the paths they have taken--for the change
in detector behavior to be observed.
The mere threat of obtaining such information is enough to destroy the
ghostly superposition of hybrid phantom realities. It is our
potential knowledge of the
quantum system, not our actual knowledge that helps decide the
outcome. (Davis, 1996)
Of course these
associations with quantum experiments are merely conjecture.
Daniel Moerman’s book is well documented, and it is obvious that much
research went into this publication.
If one wants a sound understanding of the placebo effect, or the
meaning response, this
book is the one to read.
Well done.
Information and the Nature of
Reality:
From Physics to Metaphysics
Edited by Paul Davies
and Niels Henrik Gregersen
This book is divided into four main categories: History, Physics,
Biology, and Philosophy and Theology, with contributions by 15 prominent
authors including the two editors, Paul Davies and Niels Henrik Gregersen.
Information, like the concepts of matter and energy, has been difficult
to define.
According to Terrence Deacon, the definition of energy wasn’t fully
realized until it was discovered that energy is not a substance, but
rather, a dynamic process of change that is always conserved.
Just as with the concept of energy, he said, we must give up
the idea of thinking of information as some “artifact” or
“commodity”. In the broadest sense, says John F. Haught, information can
mean whatever gives form, order, pattern, or identity to something.
Today most
physicists divide information into two broad categories:
syntactic information and semantic information.
Syntactic information is sometimes called Shannon information, after
Claude Shannon, who discovered that information can be thought of as a
measure of entropy and probability.
This is both a quantitative and physical definition, which
describes how much information any system can carry and is not
concerned with the meaning of the information.
The more information a system carries the less entropy it
contains, which also happens to be the least probable state of the
system. Likewise, the
most probable state of a system has a high degree of entropy and
carries little information.
So we can think of information as a complementarity between
the message and the medium.
Both are needed for a complete description of information.
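Shannon's quantitative measure can be computed directly from the probabilities of a message's symbols. A minimal sketch of the standard formula, giving the average information per symbol in bits:

```python
import math
from collections import Counter

def shannon_entropy(message):
    """Shannon entropy of a message, in bits per symbol."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Four equally likely symbols need log2(4) = 2 bits each: this is the
# least probable, most information-rich arrangement.
print(shannon_entropy("abcdabcd"))  # 2.0

# A highly repetitive (highly probable) message carries far less
# information per symbol.
print(round(shannon_entropy("aaaaaaab"), 3))  # 0.544
```

This is the purely syntactic measure the text describes: it counts how much a message carries, and says nothing about what the message means.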
The second type of information is called semantic
information, and it deals with the content of the message—what it
means.
Paul Davies says that most physicists now believe that information, and
not particles and fields, is the ground of all being.
Beginning with the ancient Greeks up until recent times, it has been
assumed that the laws of physics, and their mathematically descriptive
counterparts, were objective aspects of the universe cast in stone, and
that it was the job of the physicist to uncover these objective truths.
This idea was furthered by monotheistic thinking, which suggested that
the discovery of these objective truths was a window into the mind of
God, an idea that has gone unchallenged for three centuries.
Davies states:
“The fusion of Platonism and Monotheism created the powerful orthodox
scientific concept of the laws of physics as ideal, perfect, infinitely
precise, immutable, eternal, unchanging mathematical forms that reside
in an abstract platonic heaven beyond space and time.
All of these assumptions must be jettisoned to come to an
understanding that the laws and states of the universe co-evolve.”
For many—from Plato to the physicist and mathematician Roger
Penrose—mathematics has
been assumed to be an objective construct of the universe from which
matter and information find expression, but an evolving view among
physicists is that information is the basic entity of reality from
which the laws of physics, and matter emerge.
After all says Davies, “Laws are an informational statement.”
Mathematics has been successful in describing the laws of
physics, not because mathematics is somehow an objective aspect of
the universe, but because mathematics and the laws of physics
co-emerge from computations carried out since the beginning of time
by the ultimate quantum computer—the universe at large.
There can be no separation between the information processing
nature of the universe and the information processing revolution of
life itself. Both the syntactic and the semantic concepts of information
are involved in the interplay between organisms and their environment,
in the sense that far-from-equilibrium systems (organisms) need to be
associated with an environment that supports the organism’s condition.
Both the environment (the signal medium) and the organism
(the message) are needed for the co-evolution of the
organism/environmental system.
According to Keith Ward and Arthur Peacocke, the information contained
in DNA is not semantic information because no understanding is required
for the translation and transcription processes that code for proteins.
This kind of information belongs to a third category, which they call
“shaping” or coded information, and it requires no sentience.
The functioning of the parts can only be explained by how
they contribute to the organism as a whole, and this is true whether
we are speaking of the universe as a whole or a living organism.
Since consciousness is primordial and contains all possible
states, we should not look to the simple to explain the complex, but
rather the complex to explain the simple.
John Haught maintains
that the idea of “God” as a designer is getting harder and harder to
defend in light of the fact that the universe is constantly
evolving. Information is
a complementarity of order and disorder.
Too much order is too rigid and does not allow for novelty
and evolution. “If the
universe or life were simply designed,” says Haught, “it would be
frozen in a fixed and eternally unchanging identity. Design is a
dead end.” Though Haught says that whether or not one calls such a
primordial consciousness “God” is partly a matter of taste, it hasn’t
stopped him and other contributors to the last section of this work from
making a desperate attempt to shoehorn God into the equation.
This work was a
very exhaustive and comprehensive treatment of the topic of
information, and it greatly informed me on the subject.
I would highly recommend this to anyone willing to wade
through some fairly dense material in order to get to a clear
understanding of the nature of information.
Chapter 2 History:
Matter was seen by Aristotle as the bearer of potentiality (16).
Aristotle’s idea was that of complementarity.
Aristotle’s substance had form and matter each requiring the
other in order to constitute a thing.
Prime matter was supposed to be indeterminate.
Descartes said the stuff of the world was matter with its extension, a
combination of volume and shape.
But there were problems.
He later added the characteristics of impenetrability, mobility,
inertia, and density (19).
In the corpuscular model, density was explained by the degree
of packing. These were
the primary qualities of matter--the objective attributes.
The secondary qualities are observer dependent.
Newton brought in the
concept of “mass” to explain inertia and gravitation.
Einstein said that mass and energy are equivalent.
A photon which has zero rest mass will exert gravitational
force as a result of its kinetic energy.
Chapter 3—Philip Clayton.
Materialism consisted of five central theses:

1) Matter is the fundamental constituent of the natural world.

2) Forces act on matter.

3) The fundamental entities of reality are the particles and their
fields. These forces determine the motion of objects in nature
(determinism).

4) Nothing exists that is not part of the forces and fields. There is
no such thing as vitalism or consciousness (dualism).

5) Materialism excludes top-down causation or strong emergence.

Materialism is an ontological position because it describes all that
can exist.
Plato, Aristotle, Thomas Aquinas, Descartes, Leibniz, Hegel,
and Whitehead all fail to give a consistent definition of matter.
(40)
For Thales, all was ultimately water; for Empedocles, it was the four
elements of earth, air, fire, and water; for Parmenides, it was the
logos or reason; and for Heraclitus, it was change.
Plato thought
that the ultimate reality was the world of “Forms.”
Plato’s notion implied that the material world is in some
sense illusory. “The further one descends from the realm of the forms,
the less reality is possessed by the objects one encounters.”
Aristotle was disturbed by Plato’s notion that matter might
be illusion. He believed
that each object was a unity of form and matter—a complementarity.
Thomas Aquinas adopted Aristotle’s notion of form and matter, but he
believed that God created the matter of the universe; yet if God is pure
Spirit, as he proclaimed, how could God create something different from
himself?
Rene Descartes, “the father of modern philosophy,” said there were two
kinds of substances: “thought” and “matter”. Descartes
could never solve the problem of the interaction of the mind and
body because he defined each as two separate and distinct features
of the world.
Leibniz tried to solve the problem of matter by combining consciousness
and matter.
He suggested that all matter possesses some understanding, and the
smoothly running machine of the universe is credited to God.
The philosophy of Leibniz is a monism of spirit.
German philosopher Georg Wilhelm Friedrich Hegel, offered a
synthesis of mind and matter, yet the idea of the spirit dominated
his philosophy.
The definition of matter remains elusive (50). Clayton says that every
attempt to
understand matter ends up focusing on the active principle of the
intellect—that which makes understanding possible—rather than on
what was to be understood which is matter.
Hobbes began with the premise that all is “matter in motion”; the
crucial task that then arises is to understand exactly how things move.
Newton’s mechanics seemed to offer support for the
metaphysical position of materialism, the view that all things are
composed of matter. His
three laws of motion could explain the motion of all objects in the
universe, from inertia to gravitation.
And Darwin’s idea of natural selection gave a naturalist
explanation of the evolution of life itself.
Consciousness was left out of the equation. (53)
When the knowledge of all things has been reduced to
fundamental particles and to
universal laws, the
victory of materialism, they maintained, would be complete.
The project of
materialist reduction began to run into difficulty with special and
general relativity and quantum mechanics.
Radioactivity demonstrated that one substance can transmute into another
without an initial cause.
Einstein showed that matter and energy are not discrete, but are one and
the same. The idea of complementarity demonstrated that
one phenomenon can be described in multiple ways depending upon the
observation or measurement performed.
Heisenberg demonstrated that particles do not have a precise location in
space; they are indeterminate until an observation is made.
“Physics may also suggest an entanglement of matter and
meaning” (59)
Part II: Physics
Chapter 4: Paul Davies
What is the
underpinning of reality?
Most theologians have abandoned the idea that God exists necessarily.
The concept of a necessary being is fraught with
philosophical and theological difficulties, not least of which is
the fact that such a being does not bear a resemblance to
traditional notions of God.
Science evaded these complications by resting content with accepting the physical universe itself, at each instant of time, as the basement level of reality, without the need for a god (necessary or otherwise) to underpin it.
A subtle shift occurred among theoretical physicists in which the ground of reality first became transferred to the laws of physics themselves, and then to their mathematical surrogates, such as Lagrangians, Hilbert spaces, etc.
The logical conclusion of going down this path is to treat the physical universe as if it simply is mathematics.
For many physicists mathematics is the ground of being.
What is the
origin of these laws, and as Einstein asked:
Could the laws have been different? Einstein’s suggestion has
little support. It is
easy to construct universes, according to Davies that are
mathematically and logically self-consistent.
The long-held idea that the laws of the universe are perfect and immutable, "cast in tablets of stone from everlasting to everlasting," is a relic that comes directly from monotheism, which asserts that a rational being designed the universe according to a set of perfect laws.
Newton
and his contemporaries believed that they were uncovering the divine
plan for the universe in the form of its underlying mathematical
order.
Rene Descartes:
It is God who established the laws of nature.
This view has
gone largely unchallenged for 300 years of secular science.
Many
mathematicians are Platonists, believing that mathematical objects
have real existence, even though they are not situated in the
physical universe.
(72) The fusion of Platonism and monotheism created the powerful orthodox scientific concept of the laws of physics as ideal, perfect, infinitely precise, immutable, eternal, unchanging mathematical forms that reside in an abstract Platonic heaven beyond space and time.
All of these
assumptions must be jettisoned to come to an understanding that the
laws and states of the universe co-evolve.
The challenge
to these assumptions builds on the work of John Wheeler and Rolf
Landauer that originally sprang from the theory of information and
computation (75).
The traditional symbolic expression is: Mathematics → Physics → Information.
According to
the orthodox view, mathematical relationships are the most basic
aspect of existence. The
physical world is an expression of a subset of mathematical
relationships, whereas information is a secondary or derived,
concept that characterizes certain specific states of matter.
The alternative
view gaining ground is that information is the primary entity from
which physical reality is built.
The new expression is: Information → Laws of physics → Matter.
After all, the laws are informational statements.
Is the
information content of the universe finite or infinite?
No information
can travel faster than the speed of light.
Quantum
mechanics says that the states of matter are fundamentally discrete
rather than continuous, so they form a countable set.
It is then possible to work out how many bits of information any given volume of the universe contains by virtue of quantum discreteness. The answer is 10 raised to the 122 power bits for the region within the horizon at this time.
The holographic
principle implies that the total information content of a region of
space cannot exceed one quarter of the surface area in Planck units
that confines it. So the information content of the universe is
finite. The implications
are that nothing in the universe can be specified or described by
more than 10 raised to the 122 power bits of information.
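Davies's figure can be sanity-checked with a rough back-of-the-envelope calculation. The sketch below assumes standard approximate values for the Planck length and the radius of the cosmic horizon; neither number comes from the book.

```python
import math

# Holographic bound: information in a region <= horizon area / (4 Planck areas).
planck_length = 1.616e-35   # meters (standard approximate value)
horizon_radius = 4.4e26     # meters, rough radius of the observable universe

horizon_area = 4 * math.pi * horizon_radius ** 2
bits = horizon_area / (4 * planck_length ** 2)

print(f"~10^{math.log10(bits):.0f} bits")
```

The result lands within an order of magnitude of the 10 raised to the 122 power figure quoted above, which is all a calculation this rough can promise.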
The
information content of the universe is a fundamental limitation on
all of nature, and not just on states of the world that humans
perceive. (82) The laws
of physics are inherent in and emergent with the universe, not
transcendent of it.
The conclusion
is stark. If the cosmic
information bound is set, and if information is ontologically real,
then the laws of physics have intrinsically finite accuracy.
It would not be possible to create a state of more than 400
entangled quantum particles as quantum computer scientists want to
do. Davies predicts a breakdown of the unitary evolution of the wave
function at that point.
If one
accepts that mathematics is meaningful only if it is the product of
real computational processes, then there is a self-consistent loop:
the laws of physics determine what can be computed, which in turn,
determines the informational basis of those same laws of physics. It
appears that mathematics and the laws of physics co-emerge.
Chapter 5: Seth
Lloyd.
The invention of the printing press was an information-processing revolution of the first magnitude.
A small town of
1000 people, reproducing sexually with a generation time of 30 years
produces the same amount of genetic variation as a culture of one
trillion bacteria reproducing asexually every 30 minutes.
The mechanism
of storing genetic information in DNA, and reproducing with
variation, is a remarkable invention that gave rise to the beautiful
and rich world around us.
Life is the original information-processing revolution.
But the very first information-processing revolution began
with the universe itself.
Each of these
particles carried with it bits of information, and every time two
particles bounced off each other, those bits were transformed and
processed. The big bang
was a bit bang. The
universe computes.
To think of the
universe not just as a machine, but also as a machine that computes,
is a potentially powerful extension of the mechanistic paradigm.
(96)
Could the
universe have originated from randomness alone?
No! Randomness, taken on its own, gives rise to gibberish.
A million monkeys typing on typewriters since the beginning of the universe would produce at most one line of Hamlet's soliloquy. But if they typed short program instructions instead of single characters, the results would be different.
The universe
provides fresh bits through quantum decoherence.
Chapter 6:
Henry Stapp
If one accepts as fundamental this Aristotelian idea of potentia, of objective tendencies, then the whole scheme of things becomes intuitively understandable.
There is nothing intrinsically incomprehensible about the
idea of tendencies.
No one has yet shown how the interventions of our minds, or some surrogates, required by contemporary orthodox quantum theory can consistently be eliminated.
This unavoidable entry of mental realities into the laws of
physics arises in connection with the choice of which action will be
performed on a system being observed.
This theory offers a conception of nature that enforces
everything that physics says about the structure of human
experience.
At the
atomic level deterministic laws describe actions.
It is only when the consequence of the atomic level processes
is extended to the macro level that the radically new quantum
features come into play.
It is only then that one encounters the shift from deterministic
processes to the Heisenberg/Aristotelian notion of the potentia for
psycho-physical events to occur. (108)
(My thought)
If meaning is woven into every level of complexity, then observer-created reality and decoherence are one and the same.
Human
experience is an aspect of the reduction of the cloud of potentia to
those that contain the neural correlate of that experience.
The psycho-physical actions/events are of two kinds.
The first kind is a choice of how the observed system is to
be probed. The actions
of the second kind are predicted to conform to certain quantum
probability rules.
The quantum Zeno
effect is a decoherence effect, and it is not diminished by the
environment-induced decoherence: It survives intact in a large,
warm, wet brain.
The physically described world is not a world of material substances, as normally conceived, but is rather a world of potentialities for future experiences.
I claimed above
that quantum mechanics, properly presented, and more specifically
the quantum mechanical conception of nature, is in line with
intuition. It is rather
classical physics that is non-intuitive. Any conception of reality
that cannot explain how our conscious effort influences our bodily
actions is problematic.
Heisenberg said
that reality is not built out of matter, but out of psycho-physical
events.
Part III: Biology:
Chapter 7: John
Maynard Smith
Smith disagrees with some biologists who believe that classical information theory does not apply to biology.
In both
protein synthesis and later development, genes are symbols, in that
there is no necessary connection between their form (sequence) and
their effects. The
sequence of a gene has been determined by past natural selection. In
biology, the use of informational terms implies intentionality, in
that both the form of the signal, and the response to it, have evolved by selection.
Where an engineer sees design, a biologist sees natural selection.
Natural selection in the past produced the sequence of bases
out of many possible sequences that, via the information channel, specifies a protein that has a "meaning," in the sense of
functioning in a way that favors the survival of the organism.
Molecular biologists do make use of the information analogy in their daily work, with terms such as transcription, translation, code, redundancy, synonymous, messenger, editing, proofreading, and library.
The information
coded in DNA came about as a result of past natural selection.
The difficulty
of applying information theory to genetics is not in the transfer of
information, but in its meaning.
In the
early days it was customary to assert that the theory was not
concerned with meaning, but only with quantity of
information—Shannon.
In biology, the question is:
how does genetic information specify form and function?
Some claim the chemical processes are not a signal that carries a message. Why not? If a message can be carried by a sound wave, an electromagnetic wave, etc., why not by a set of chemical molecules?
A gene requires the translating machinery of a cell (ribosomes, tRNA, etc.), but this does not invalidate the analogy: a computer program needs a computer before it can do anything.
In evolution
changes in genes can cause changes in proteins, while the laws of
chemistry remain unchanged.
The image of
development that is emerging is one of a complex hierarchy of
regulatory genes, and of a signaling system that is essentially
symbolic. (142)
The
notion of information as it is used in biology implies
intentionality. (143)
Genes carry
information but enzymes do not because there is no “receiver” of a
“message” from the enzyme.
Intentional
information seems like a better candidate for the sense in which
genes carry developmental information and nothing else does.
In biology there is an evolved structure carrying information
and an evolved structure that receives it. (143).
Chapter 8: Deacon
The
concept of energy took many decades to clarify.
It was ultimately demystified by recognizing that energy was
not a substance, but rather a constant dynamic parameter that was
transformed and yet conserved in processes of induced change.
Similarly, to develop a full scientific understanding of information we will be required to give up thinking about it as some artifact or commodity.
Two kinds
of information must be discussed.
Information can refer to the signs or signals themselves, irrespective of any meaning or reference, as in the information content in bits. This is referred to as syntactic information. Or it can refer to the ideas I am hoping to communicate: semantic or pragmatic information.
All
physical processes are potential computations, but no physical
process is intrinsically a computation.
In both classical physics and quantum physics only the syntactic concept of information is involved.
Claude Shannon’s statistical approach to the analysis of signals and
their capacity to carry information has excluded any reference to
problems of defining content or significance.
Information is, by definition, something in relation to something else, but it can mean "what is conveyed" or "what provides the conveyance." Shannon's concept totally ignores the issue of what information is about.
It is about the amount of uncertainty that is removed with
the receipt of a given signal. So to measure information requires
comparing the potential variety of signals that could have been
transmitted with what was transmitted.
It is about measuring the entropy of the physical system.
According to Shannon the information-bearing capacity of a
signal is proportional to the improbability of its current physical
state.
The generation of
information is an emergent property of life.
The process of
evolution, in its most general form, can be understood as the
necessary originative source for information.
Where there is no evolutionary dynamic there is no
information in the full sense of the concept.
In the process
of natural selection an analogous role is played by conditions in
the environment that favor the reproduction and persistence of some
variants and not others. It is in this sense that we feel justified
in claiming that the traits that are present in any given generation
of organisms are adaptations to certain of those conditions.
They could be said to be “about” those conditions.
The capability
of the Darwinian process to generate new information about
organism-environment interdependency is the ultimate demonstration
of the post-hoc nature of information.
(167)
(My words)
Both the syntactic and the semantic concepts of information are involved in the interplay between organisms and their environment, in the sense that far-from-equilibrium systems (organisms) need to be associated with an environment that supports the organism's condition.
Both the environment (the signal medium) and the organism (the message) are needed for the full expression of information.
Both are needed to describe what it is about. The organism/environment coupling can be seen as the signal/medium system necessary for an expression of information.
(My conclusion)
Both the syntactic and semantic definitions of information
are expressed by the environment/organism system in that the signal
and the medium are necessary for an expression of information, and
the emergence of intelligence is the semantics of the equation.
Chapter 9: Bernd-Olaf Küppers
Information exists only in a relative sense—that is, in relation to
some other information.
From an
evolutionary standpoint, there are good grounds to assert that
“language” is indeed a natural phenomenon, which originates in the
molecular language of the genome.
The information
content of a message is coupled to its expectation value:
the lower the expectation value of a message, the higher its
novelty and thus its information content.
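Küppers's point is Shannon's surprisal in words: information content grows as probability falls. A minimal sketch, with message probabilities invented purely for illustration:

```python
import math

def surprisal_bits(probability):
    """Information content, in bits, of a message received with this probability."""
    return -math.log2(probability)

# A likely (expected) message carries little information; a rare one carries much.
print(surprisal_bits(0.5))       # 1.0 bit
print(surprisal_bits(1 / 1024))  # 10.0 bits
```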
Information is
always related to an entity that receives and evaluates the
information. This in
turn means that evaluation presupposes some other information that
underlies the process of registration and processing of the incoming
information.
Information conveying meaning cannot be compressed without change in, or even loss of, its meaning. Viewed technically, no algorithms exist that can extrapolate arbitrarily chosen parts of the message and thus generate the rest of the message.
Therefore, to understand a piece of information of a certain
complexity, one always requires background information that is at
least of the same complexity.
Chapter 10:
Jesper Hoffmeyer
Charles Sanders Peirce introduced the concept of a sign as a substitute for information, to make sure that the interpretative side of information is not neglected. A sign is simply something that refers to something else: semiotics = sign activity. (189)
It is a fact
that the universe has given rise to entities that do, daily, modify
the universe to their own ends.
This is a measuring process, whereby the agent is enabled to
modify its environment in a select way, furthering its own ends.
This is not derived from natural selection for the simple
reason that it is required for natural selection in the first place.
If an organism did not strive, there would be no competition for survival. (suspect)
The evolutionary formation of this kind of autonomous macro-entity is the quintessence of what is called downward causation.
(Page 201:
interesting account of bacteria’s symbiotic relationship with
a shark.)
The
point of all of this is that the semiotic emergence whereby this
system was gradually established in the course of evolution
necessarily also demanded the invention of semiotic means for
assuring the stability of the system, and these semiotic means are
precisely what we understand by downward causation.
The biosemiotic
approach thereby overturns the need for intelligent design.
Chapter 11:
Holmes Rolston, III
Every biologist
must concede that “care” is there.
A consensus
claim by those in complexity studies is that complex systems must be
understood at multiple levels.
Another is that simpler systems can generate more complex ones. Even so, neither complexity nor caring can be formalized into any comprehensive algorithmic routine.
Where there is
information being transmitted there arises the possibility of error.
The genetic code is a “code for” something, set for control
of the upcoming molecules that it will participate in forming.
There is proactive “intention” about the future.
The human brain is of such complexity that descriptive numbers are astronomical and difficult to fathom. A typical estimate is 10 to the 12th power neurons, each with several thousand synapses.
Part IV: Philosophy and Theology
Chapter 12: Arthur Peacocke
They are "emergent" in that we did not anticipate the properties exhibited by the whole system given what we knew of the parts.
The term downward causation denotes the way in which the network of an organism's relationships to its environment and its behavior patterns together determine, over the course of time, the actual DNA sequences at the molecular level present in an evolved organism.
The parts would not be behaving as observed if they were not parts of that particular system (the whole).
The state of the system as a whole is influencing what the
parts actually do.
Naturalism: a view of the world, and of man's relation to it, in which only the operation of natural laws and forces is assumed.
It was Darwin’s eventually accepted proposal of a plausible
mechanism for the changes in living organisms that led to the
ultimate demise of the external deistic notion of God.
Chapter 13: Keith Ward
Scientists who
speculate on philosophical questions usually agree that classical
materialism—the view that reality consists of nothing but small
massy particles bumping into one another in an absolute and unique
space-time—is intellectually dead.
Most contributors to
this volume distinguish three main types of information—Shannon
information, “shaping” information, and semantic information.
Shannon information is a matter of how to input the maximum amount
of information into a closed physical system.
It is concerned, it might be said, with quantity rather than quality, in that it totally ignores questions of the significance or function of the information that a physical system might contain.
"Shaping" or "coding" information is the sort of thing we might have in mind when thinking of how DNA carries the information for constructing proteins and organic cells and bodies.
We should not necessarily look to the simplest of organisms to explain the complex; perhaps we should instead try to understand the simplest by looking at the complex.
The most comprehensive state would include the specification of all possible states.
The laws of nature would not then be arbitrary, but necessary to the fruition of a coherent, complex, organized, and integrated universe of unique and inestimable value. The set of all possible mathematical states, plus a selective principle of evaluation (a rule for ordering these states), would provide the informational code for constructing an actual universe.
So if we look at the universe in terms of the types of information, we have all possible states, which would be the Shannon information; the shaping information, which would be the code to select the correct principle; and then the semantic information, which would be the meaning.
This sense of information would not be part of the physical system for which it was a container and transmitter of information; rather, analogous to shaping information, it would contain the patterns of all possible physical configurations, and a principle of selection between possibilities.
"To me," writes Penrose, "the world of perfect forms is primary…its existence being almost a logical necessity."
If we posit
consciousness as a distinctive kind of existent, we move to the
third use of the term “information”—the semantic use.
Without that
act of understanding, there is no information.
There is only the material substratum that stores information
(perhaps Shannon information).
That is why the
information carried by DNA molecules is not information in the
semantic sense. The code
does provide a program for constructing an organism, but no person
has constructed it and no consciousness needs to understand and
apply the program. Like
a computer program, it operates without the need for conscious
interpretation. (288).
In my view,
such things as conscious intention and understanding have real
existential status. They are irreducible and distinctive forms of
reality.
Consciousness could be explained as a cosmic, holistic entity, in which the development of the parts is explained by their contribution to the existence of an integrated totality. (huge)
This suggests the idea of a primordial consciousness that is ontologically prior to all physical realities and that contains the "coded" information for constructing any possible universe.
Whether or not
one calls such a primordial consciousness “God” is partly a matter
of taste. For some the
idea of God is too anthropomorphic, too primitive and sentimental,
to be of use. (good quote.)
As Wheeler
suggests the simple originating phenomena of the universe may not
even exist unless they are conceived, evaluated, and intentionally
actualized by consciousness.
For some physicists like Wheeler, it is the final conscious
state of the universe itself that is a causal factor in its own
physical origin. The
universe generates a cosmic intelligence that then becomes cause of
its own originating processes. (Huge)
Chapter 14: John F. Haught
The image of God as "designer" has become increasingly questionable, especially in view of evolutionary accounts of life.
In the broadest sense, "information" can mean whatever gives form, order, pattern, or identity to something, whether it be an electron, a crystal, the human mind, a civilization, or the economic system.
In
communication theory information means the reduction or removal of
indefiniteness or uncertainty.
The more uncertainty is removed, the more information.
The amount of
information in a message varies in direct proportion to the
improbability of its content.
The paradox is
that the more informative a message is, the less immediately
comprehensible it may be in terms of meaning.
It makes sense that
the most significant information carried by the cosmos transcends
scientific understanding.
The real world
is a blend of order and indefiniteness.
In order to function as potential bits of information, the letters of the alphabet must have a random, noisy nature—the characters must be capable of being figuratively disassembled and placed in a mixing pot. A code has to have the capacity to disassemble in order to reassemble.
As long as it remains impossible to break down such rigidity, information would be impossible—in other words, too much order, or design, would prevent the transmission of information.
If the universe or life were simply designed it would be
frozen in a fixed and eternally unchanging identity.
Design is a dead end.
Theology might
be understandable as an influence of information flow.
The novelty in
an emergent universe is analogous to what is now called information
(313)
Just as
information can be given to a word processor without modifying the
rules governing the lower levels in the computation, so new
information in the universe effects higher level emergent
development without altering the laws of nature.
Amen.
Incomplete Nature: How Mind Emerged from Matter
Terrence W. Deacon
Terrence Deacon’s essential idea concerns a philosophy he
calls “absentialism”, a Taoist notion that requires that we pay
attention to what is absent.
What matters is what is not evident.
Concepts such as information, purpose, meaning, and
consciousness are examples of concepts that are defined in terms of
what they are not, and Deacon invents the term “ententional” to
describe these phenomena.
Ententional phenomena are purposeful, end-directed, and
teleological and are usually associated with evolution, biological
processes and consciousness.
Biologists, for example, have no problem using ententional or
teleological language when describing biological processes or
events. They might say
that the bird builds a nest in order to protect its eggs, or that
DNA emerges from the nucleus in order to be replicated, yet when
asked directly, these same biologists will completely deny any
involvement with teleological or ententional explanations for these
occurrences. Deacon
contends that rather than hiding from what seems evident, we should
instead try to explain how ententional processes arise from physical
matter and energy.
Deacon explains that the processes involved from nonlife to life and
matter to consciousness proceed in a ratcheting-up of a step-by-step
orthograde process beginning with thermodynamic constraints that
step up to morphodynamic constraints and evolve to teleodynamic
constraints. (An example
of thermodynamic constraint would be a piston in an engine that
constrains an expanding gas to produce work.) Each of these steps
from the thermodynamic to the teleodynamic and from life to
consciousness is irreducible.
One cannot deduce the parts from the whole when considering
life or consciousness, because something is lost or constrained at
each stage of the evolutionary process.
Quite literally, according to Deacon, the whole is less than
the sum of the parts.
Whereas a
machine must be assembled from various parts by a designer with a
pre-determined purpose, living organisms intrinsically
differentiate, beginning with the whole and subdividing into parts,
the most basic of which is the cell wall, a constraint necessary for
all living organisms on earth.
Once life emerged it developed from simple organisms to more
advanced, organized, and specialized forms through progressive
series of natural constraints called evolution.
Deacon has
produced a profound addition to our understanding of life and
consciousness. My only
disappointment apart from the lack of constraints on the length of
this work—nearly 600 pages—is his reluctance to include quantum
theory. Deacon seems to
want to disregard what he apparently views as some kind of fringe
aspect of physics when he states:
"These [quantum physical phenomena] must be excluded for three reasons: First, if we were to make use of any such extraordinary physical phenomena, we would at best only have substituted one mystery for another…"
Also, Deacon states, “Although quantum phenomena often seem
strange, they are not strange in the sense of exhibiting the kind of
causality that organisms and minds do…more important the scale at
which we do unambiguously recognize ententional properties is vastly
larger than the scale of quantum events.”
These propositions, as I understand them, are not correct. First, classical physics, which encompasses the laws of thermodynamics and cause and effect, is a special case in physics.
Classical physics is a subset of quantum theory—not the other
way around. Secondly,
the scale of quantum effects is not too small to account for the
processes involved in the macromolecules of biological life as
Deacon says. Quantum
effects are essential for the alignment of the complementary strands
of DNA during replication, and for the two- and three-dimensional folding
of proteins that carry out most of the functions of the organism.
Deacon did a wonderful job applying his thesis to explain many of the mysteries of life and consciousness, and it advanced my own thinking on the nature of reality.
Gerald L. Schroeder, Ph.D.
Gerald Schroeder's premise is that there is no conflict between science and the Bible in interpreting the origin of life and the universe itself. And just as scientists are constantly updating scientific theories, the Bible must also be reinterpreted as new texts come to light and the secrets of the Hebrew alphabet are ascertained.
Schroeder’s
credentials are sound and his scientific expertise and easy style of
writing can draw in even the most skeptical atheists.
But there are many statements in his book with which one can take issue. He states that
“There is now overwhelmingly strong evidence, both statistical and
paleontological, that life could not have been started on earth by a
series of random chemical reactions”
(25).
And in this he is correct.
Fred Hoyle, author of The Intelligent Universe and creator of the steady state theory of the universe (which has since been overturned by the Big Bang theory), wrote in his book:
A generation ago a profound
disservice was done to popular thought by the notion that a horde of
monkeys thumping away on typewriters could eventually arrive at the
plays of Shakespeare.
This idea is wrong, so wrong that one has to wonder how it came to
be broadcast so widely (11).
It has now been
calculated that given the entire history of the universe not even
one line of a Shakespearean play would be composed by this random
method. And life is even
more improbable by random means.
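Hoyle's claim is easy to check with rough arithmetic. The numbers below (a million monkeys, ten keystrokes per second, a 40-character line, a 27-character alphabet) are illustrative assumptions, not figures from the book:

```python
alphabet_size = 27              # 26 letters plus a space
line_length = 40                # characters in one line of verse
monkeys = 1e6
keystrokes_per_second = 10
universe_age_seconds = 4.3e17   # roughly 13.7 billion years

# Chance that any single 40-character attempt matches the target line,
# times the total number of attempts ever made.
chance_per_attempt = alphabet_size ** -line_length
total_attempts = monkeys * keystrokes_per_second * universe_age_seconds
expected_lines = total_attempts * chance_per_attempt

print(f"{expected_lines:.1e} expected correct lines")
```

Even with generous assumptions, the expected number of correct lines over the entire history of the universe is on the order of 10 to the minus 33rd power, i.e., effectively zero.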
But the unlikely odds of spontaneous generation do not
suggest some teleological guiding hand.
In his
landmark book, Programming the Universe, quantum computer scientist Seth Lloyd says
that the universe we observe did not come about by a grand designer,
or by purely random classical events.
The complexity we observe in the universe is a result of
random programs run on the largest computer imaginable—the universe
itself. Like a giant computer, the universe is constantly computing, and the laws of nature (gravity, the electromagnetic force, and the two nuclear forces) are its algorithms.
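The idea that simple algorithmic rules can generate rich structure can be illustrated with a toy example. The sketch below runs Wolfram's elementary Rule 30, chosen purely for illustration (it is not an example from Lloyd's book): a one-line deterministic rule that produces a complex, seemingly random pattern from a single seed cell.

```python
RULE = 30  # Wolfram's elementary cellular automaton Rule 30

def step(cells):
    """Advance one row: each cell's new value is the rule bit indexed
    by its (left, center, right) neighborhood, with wraparound."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # a single "on" cell as the seed
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Each printed row follows deterministically from the one before, yet the resulting triangle of cells is famously irregular, a small-scale analogue of simple laws computing complex outcomes.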
Our very
existence has come about as a result of the play of these forces.
In the vast reaches of space unimaginable quantities of
matter are drawn together by the force of gravity.
At a critical point, the crushing gravity becomes so intense
that nuclear fusion takes over and a star is born.
Hydrogen is converted to helium, as the outward pressure of fusion holds off the force of gravity.
If the star is large enough, fusion continues, creating all the elements of the periodic chart through element 26, iron.
When all fuel is exhausted and no more fusion can take place,
gravity once again takes over, and the star collapses upon itself
setting the stage for an enormous rebound explosion called a
supernova, in which the heavier elements of the periodic chart are
created and flung into space.
This matter eventually coalesces once again, as happened in
the formation of our own solar system.
All the elements, hydrogen, oxygen, carbon, nitrogen, and
sulfur, which were created by gravity and fusion, are now available
to form the organic compounds of life by the chemistry of the
electromagnetic force.
The
algorithms, the laws of nature, have brought about the elements of
life without a supernatural force, but how could these laws turn
inorganic molecules into life?
I have been all too aware of some of the shortcomings of
evolutionary theory:
1)
If evolutionary theory is a complete theory, why does man
need such a highly developed eye?
Evolution is a very frugal process and no new adaptation
happens without a survival advantage.
Would an eye which is not quite so advanced have caused man
to lose the battle of survival?
2)
If evolutionary theory is a complete theory, how did a wing
develop? Evolutionary
theory teaches us that advances come in gradual incremental states
over many generations.
How then, according to natural selection could a wing evolve?
Certainly nothing short of a full-blown wing capable of
flight would be of any use to a creature.
How could this wing have evolved in stages over thousands of
years as present theory suggests?
3)
If evolutionary theory is a complete theory, how do we
explain cell specialization?
In a dividing blastula, which develops from a fertilized egg, all
the genetic material is duplicated and passed to each new
cell, so that every cell contains the same genetic information.
Initially, division happens without any specialization but
after a certain amount of time cells begin to cluster to form
specific body parts with specific functions.
Where does the new information for this specialization come from?
4)
If evolutionary theory is a complete theory, how do we
explain the more highly organized complex forms of life?
Simple bacteria have proved the most hardy and adaptable life
forms. What is the
mechanism that would allow for the appearance of more complex
biological systems which are less efficient, and less adaptable?
According to the second law of thermodynamics, systems always tend
to move toward more and more entropy, not more complexity.
All of this,
along with missing links in the fossil record, casts a shadow on
evolutionary theory. But
recent advances in information theory have put bottom-up evolution
on solid ground once again.
In his book, The Bit and the Pendulum, Tom Siegfried interviewed quantum
physicist John Wheeler, who took over Albert Einstein’s chair at
Princeton and is best known for coining the term “black hole”.
Wheeler says that in his early days he believed that the most
basic entity in the universe was the particle.
In later years he believed it to be the “field”.
Now Wheeler believes that information is the essential
fundamental unit in the universe.
It is
difficult to think of information as a quantity, but information is
real and it always has a physical representation,
whether it be markings on paper, holes in a punch card,
magnetically arranged atoms on a CD, or the up/down spin of an
electron. The smallest
unit of information is the bit which can be represented as a “1” or
“0”. Like any physical thing in our universe, the bit is subject to
the laws of nature, including the laws of thermodynamics and
Relativity. Information,
like energy, is conserved; information always moves toward the most
probable state, maximum entropy; and no information can travel
faster than the speed of light.
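As a concrete illustration of the bit as the unit of information, Shannon’s entropy formula gives the average number of bits carried per symbol of a message. A minimal sketch in Python (the function name is mine, not from Siegfried’s book):

```python
import math
from collections import Counter

def shannon_entropy_bits(message: str) -> float:
    """Average information per symbol, in bits: H = -sum(p * log2(p))."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A fair coin flip carries exactly one bit per symbol:
print(shannon_entropy_bits("01" * 50))   # 1.0
# A message that never varies carries no information at all:
print(shannon_entropy_bits("0" * 100))
```

The second call returns zero: a maximally predictable message is already at its most probable state and tells us nothing new.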
And as it turns out, information tends to be self-organizing.
In the
1970s mathematician John Horton Conway devised the Game of Life, a
self-organizing system of the kind known as a
cellular automaton.
He found that arrays of
inanimate objects can evolve complex global designs using simple
local Boolean instructions.
For example, one can randomly wire together an array of light
bulbs. Simple Boolean
logical rules or instructions can then be implemented to control the
flow of electricity to the light bulbs.
An instruction could be something as simple as, “If any of
the light bulbs in the adjacent cells are turned on, then turn off
the light bulb in this cell.”
The instructions are reiterative and after each instruction
the lights will take on a different pattern.
The instructions are again implemented, perhaps hundreds of
times every second.
Depending on how they are wired and the type of instruction, the
array of lights will often take on complex patterns that appear to
be “alive”.
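The light-bulb rule described above can be sketched in a few lines of Python. This is only an illustration of the idea from the text, not Conway’s actual construction; the wiring and the rule (“if any wired neighbor is on, turn this bulb off, otherwise turn it on”) follow the example given:

```python
import random

def step(cells, neighbors):
    """Apply the Boolean instruction once: a bulb turns off if any of its
    wired neighbors is on, and turns on otherwise."""
    return [0 if any(cells[j] for j in neighbors[i]) else 1
            for i in range(len(cells))]

random.seed(1)
n = 16
# "Randomly wire" each bulb to two other bulbs.
neighbors = [random.sample([j for j in range(n) if j != i], 2)
             for i in range(n)]
cells = [random.randint(0, 1) for _ in range(n)]
for _ in range(8):  # reiterate the instruction, as the text describes
    cells = step(cells, neighbors)
    print("".join("*" if c else "." for c in cells))
```

Even two bulbs wired to each other already produce a blinking pattern under this rule: the state [1, 1] becomes [0, 0] and then [1, 1] again, forever.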
In this manner, the universe is constantly computing using
reiterative programs, and the simplest programs are the ones we see
most often repeated in nature. The spiral can be seen in a nautilus
or in the giant pin-wheel galaxies in deep space.
The Fibonacci series (1, 1, 2, 3, 5, 8, 13, 21…), a
reiterative process in which each number is the sum of the two
numbers before it, is ubiquitous in nature,
describing everything from the growth of bacteria in a Petri dish
to the growth over time of most plants.
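The reiterative rule is easy to state in code. A minimal sketch (function name mine):

```python
def fibonacci(n: int) -> list:
    """First n terms of the Fibonacci series: each term is the sum of
    the two terms before it."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

print(fibonacci(8))  # [1, 1, 2, 3, 5, 8, 13, 21]
```

Ratios of successive terms converge on the golden ratio, which is one reason the series turns up in spiral growth patterns like the nautilus shell mentioned above.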
These simple programs occur spontaneously over and over in
nature.
Self-organizing
cellular automata-like processes occur at all levels of reality
including the neural networks of the brain.
In the Quantum Brain, Jeffrey Satinover writes:
From the computational perspective,
physical reality is inherently like a cellular automaton, and thus
facilitates computation and self-organization at all scales.
The implementation of basic brain structure—neural
embryogenesis—occurs via cellular automata-like processes….From a
mathematical point of view, neural networks and cellular automata
are almost identical—only the physical basis and appearance are
different:
Self-organization at one scale yields the capacity for
self-organization at the next (87).
From these
processes we can imagine that life really could have developed
spontaneously, but still we must deal with those embarrassing gaps
in the fossil record.
Schroeder states:
“Yet today’s fossil record is as discontinuous as that of
Darwin’s time” (134). “Punctuated equilibrium [the idea that a new
species bears little resemblance to the species that predated it] is
now the accepted theory for many paleontologists.
Smooth evolution is admittedly inadequate as a theory” (163).
Schroeder then goes on to say that “Biblically, the transition from
the nonliving to the living and the appearance of plants and animals
are all marked by statements of God” (136).
Yet,
Schroeder is firmly grounded in the biochemistry of life, and does
not deny in any way the power of genetics in the evolution of life.
In what seems to be a contradiction he states:
Based on the advances in molecular genetics during the past
thirty years, it is now believed that the adaptive variant was
already contained within the genetic material of the parent
generations….When the environment demanded this trait for survival,
the number of individuals with this trait increased greatly. (137)
While Schroeder might
have cast some doubt on the validity of traditional evolutionary
theory with this statement, he simultaneously opened the door to a
scientific explanation for gaps in the fossil record, which require
no need for supernatural “words from God”. What Schroeder is
implying here is that “God’s plan” is being carried out now as it
was before, through deterministic DNA codes.
This
deterministic method may or may not be proven to be true.
Either way it doesn’t warrant a supernatural cause. But there
is another possibility on the scientific horizon that does not
require determinism.
In his book, The Biology of Belief, Bruce Lipton wrote:
Geneticists experienced a comparable
shock when, contrary to their expectations of over 120,000 genes,
they found that the entire human genome consists of approximately
25,000 genes. More than
eighty percent of the presumed and
required DNA does not
exist! (62)
It was
thought that one gene codes for each protein in the cells.
But new evidence shows that one gene can create over 2,000
variations of proteins.
Using quantum tunneling, these proteins can fold into trillions of
different shapes to carry out their function in the cell, and
environmental factors can modify these functions.
Most of
the functions of the cells can be carried out without the nucleus.
For example, a prokaryotic bacterium has no nucleus
whatsoever, yet a bacterium eats, digests, breathes and excretes.
All this action takes place across the cell’s membrane which
Lipton calls the “brains of the cell”.
He says:
So the nucleus is not the brain of the cell—the nucleus is
the cell’s gonad!
Confusing the gonad with the brain is an understandable error
because science has always been and still is a patriarchal endeavor.
(66)
This could explain the leaps in the fossil record, and demonstrates
that organisms are in flux and constantly changing according to
information gleaned from their environment.
No “command” from the genetic code is needed for change.
Schroeder saved his most spectacular sleight of hand
to explain the six days of creation of Genesis.
When the Bible describes the
day-by-day development of our universe in the six days following the
creation, it is truly referring to six 24-hour days.
But the reference frame by which those days were measured was
one which contained the total universe.
This first week of Genesis is not some tale to satisfy the
curiosity of children, to be discarded in the wisdom of adulthood.
Quite the contrary, it contains hints of events that mankind
is only now beginning to comprehend (53).
He explains that
this temporal incompatibility is due to relativity theory; that God
is in some “other place” in the universe where, apparently, time is
measured differently.
What other place does an omnipresent God occupy?
Most
people are surprised to learn what relativity theory actually
states: The
laws of the universe are the
same for observers in all frames of reference.
In fact, Einstein had originally planned to call his theory
“the theory of invariance”.
A
person in a rocket ship traveling at close to the speed of light
will find no abnormalities in the laws of physics.
The strange oddities of Relativity only occur when
comparisons are made between observers in different frames of
reference. What
comparisons can be made for a God who is all places at once?
Schroeder really stretches the bounds of credulity here.
Gerald L.
Schroeder’s command of Relativity is excellent and I thoroughly
enjoyed this chapter, but he certainly didn’t attain his goal of
convincing me that science and the bible are compatible.
Even with these limitations, this was a fun and informative
book, and I was glad I gave it a chance.
The Spirit Molecule
Rick Strassman, M.D.
Rick
Strassman’s account of his ground-breaking research program with the
psychedelic drug DMT is more than a scientific chronicle.
His tale of the rigors of such an extraordinary undertaking,
from the years he spent gaining permission for the study, getting
access to the drug, and carrying out the experiments with human
subjects, is suspenseful to the end.
DMT, or dimethyltryptamine, is produced by many organisms and
is found abundantly in plant and animal tissue.
In humans it is believed to be produced in the tiny
pineal gland, which is situated deep within the brain.
Its location in the brain corresponds with Eastern Ayurvedic
traditions of the highest
chakra. In reptiles,
the pineal is a light sensitive organ, and though this function has
been lost in humans, it has been referred to as the “third eye”.
Descartes called it the “seat of the soul”, and in modern
times it has been a focal point of the study of consciousness.
Though the academic purpose for the research at the University of
New Mexico from 1990 to 1995 was to find the purpose and function of
endogenous (produced by
the body)
DMT, as well as its
relationship to psychosis, it was Strassman’s immersion in the
teachings of Buddhism that was his inspiration throughout.
Ironically, the findings of his research undermined many of
his beliefs and left him alienated from the Buddhist community.
Many of his
research subjects were experienced psychedelic users, but they were
unprepared for the intensity of DMT.
One volunteer described his acute ten-minute voyage into
another dimension as being hit by a “nuclear cannon.” While LSD
allows the user a self-guided trip, the DMT experience has its own
agenda, stripping the subject of any goals, expectations, and ego.
As Strassman said, “DMT, as the true spirit molecule, gave
the volunteers the trip they needed, rather than the one they
wanted.”
But it was not only the
volunteers of the experiments that were shocked.
Strassman was completely unprepared for his subjects’ reports
of contact with alien beings, reptiles, and other strange entities.
In other cases, they described out-of-body experiences, going
through tunnels of white light, and meeting relatives, spirits, and
angels. Strassman soon
realized that these reports were very similar to the modern cultural
phenomena of alien abductions, and near death experiences (NDE)
—neither of which were familiar to him prior to his research
project.
He theorized that these
two phenomena might be caused by excess releases of endogenous DMT
from the pineal gland under conditions of stress, such as child
birth or severe trauma, precipitating an NDE.
Similarly, melatonin, a close cousin of DMT, and
perhaps DMT itself, is released during the nighttime hours.
This, Strassman thought, could give rise to the alien
abduction experience, which most often happens in the early morning
hours. This seems to be a very likely explanation.
Near death
experiences have often been used to support religious teachings.
Perhaps the reverse should be considered.
Religion was invented to explain the strange phenomena
experienced due to an excess release of DMT in the body.
Because of the
intense reality of the DMT experience, and the volunteers’ rejection
of psychological, or biological explanations for their journey,
Strassman was forced to consider an even more intriguing
explanation. Perhaps,
DMT does not cause hallucinogenic experiences, but rather, allows
our brain to sense different forms of existing reality.
The information we receive from the world is limited by our
five senses. It is
possible that DMT allows us to sense other dimensions and other
realities.
In the end the
rigors and stresses of the study took a huge toll on Strassman
costing him his job and alienation from the Buddhist community.
Even though many of the monks had entered the monastic
community as a result of LSD use, their rise to elected power left
them unable to support Strassman’s research.
As Strassman stated, “Holiness won out over truth.”
Rick Strassman’s
writing style makes this book extremely readable, and the DMT
subjects’ reports and anecdotal stories make the book difficult to
put down. This book
warrants a “5 star” rating.
I highly recommend DMT: The Spirit Molecule.
Crystal & Dragon
David Wade
As I began to
read Crystal & Dragon, a large beautiful book laden with wonderful
illustrations, art from eastern and western cultures, and wide
margins filled with quotations to support David Wade’s work, I
looked for a singular meaning to the terms crystal
and dragon. I soon
realized that David Wade had imbued these two terms with layers of
meaning.
Primarily, crystal and
dragon symbolize the opposing views of eastern and western modes of
thought. The dragon, the
symbol of eastern culture, is representative of the smooth
continuous ever changing universe, a universe that is holistic,
spontaneous, and integral to the life it supports.
The crystal is symbolic of western thought.
Crystalline
structures, which comprise most of the solid matter in the universe,
are characterized by rigid symmetrical arrangements of molecules in
lattice-like layers that produce very distinct and enduring
geometric forms. These
precise symmetrical forms are a metaphor for many of the political,
religious, and scientific aspects of western cultures.
Secondarily, and
I think more importantly,
crystal and dragon reflect the Buddhist and Taoist philosophy
that nature is neither dualistic nor monistic, but rather a unity of
opposing forces which the Chinese refer to as the ti and the t’ien.
These opposing, natural, self-organizing principles express
themselves in the interplay between symmetry and chaos, between form
and energy, and between determinism and chance.
The ti and the t’ien are responsible for the creation of the
universe and for the very definition of asymmetrical time itself:
the change of one type of energy into another in the relentless
drive toward entropy.
Since the dawn of
the twentieth century, western science, which has become synonymous
with the philosophy of dualism, has come full circle to embrace many
aspects of the ti and the t’ien.
Niels Bohr, himself a student of eastern philosophy, used the
word complementarity to describe the mutually exclusive properties
of the particle/wave duality.
Not only did he see that the wave and particle were
complementary aspects of the same thing, but in the process he rid
the world of the mind/matter duality.
In a holistic universe there can be no separation between the
observer and the observed.
The seeds
of western science were sown by the Greek philosophers, who provided
us with a crystal view of reality that was achieved by nothing more
than passive observation and pure rational thought.
According to Wade, the earliest proponent of this school of
thought was Parmenides of Elea, who thought that change was merely a
temporary expression of the more perfect world of permanent forms.
Plato crystallized
Parmenides’ ideas when he said that the world of perfect forms is
part of the objective external realm which is at least partially
accessible by the intellect.
The best chance of understanding these forms, he said, was
through the study of objective numbers.
This mind/matter duality provided a clear and necessary
distinction between the observer and the observed, setting the stage
for an objective analysis of the material world.
Many of the
principles ascertained by the early Greeks through the deductive
method of science were resurrected in the early twentieth century
and were verified by an entirely different mode of scientific
analysis—the inductive experimental method. In fact, a whole new
revolution in science emerged based on the Latin word “quanta”,
meaning “packet” or “smallest unit”:
Almost twenty-five hundred years later Zeno was vindicated by
Werner Heisenberg’s uncertainty principle, which showed that a
particle’s position and momentum cannot be simultaneously
determined with unlimited precision.
The more we know about the momentum of a particle, the less we know
about its position, and vice versa. This was not a shortcoming of
technology, but rather an all-encompassing mathematical law and
philosophical principle.
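Stated in modern notation, Heisenberg’s relation puts a hard lower bound on the product of the two uncertainties:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

Here \hbar is the reduced Planck constant; shrinking \Delta x necessarily inflates \Delta p, and vice versa, no matter how good the instruments are.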
Max
Planck, the father of the quantum revolution, mathematically
demonstrated that emissions of heat and light from a black body
could be explained only if energy were not continuous, but came in
“quanta” or chunks.
Just a few
years later Albert Einstein, in one of three papers published in
1905, theorized that the buffeting of tiny pollen grains seen under
powerful microscopes was the result of molecular motion.
This finally convinced most physicists of the atomic theory
of matter.
In his
paper on the photoelectric effect, which later won him the Nobel
Prize, Einstein found that when light is shone on a metal, electrons
are “kicked out”. The energy
of these emissions is not dependent upon the amplitude of the light,
but only upon the frequency of the light in multiples of Planck’s
constant. These
particles of light were soon to be called
photons. The
quantization of the world had seemingly become established, and the
holistic paradigm of the east seemed distant indeed.
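Einstein’s relation for the effect is compact: a photon of frequency \nu carries energy h\nu, and an ejected electron’s maximum kinetic energy is whatever remains after paying the metal’s work function \phi:

```latex
E_{k,\max} \;=\; h\nu - \phi
```

Raising the light’s amplitude ejects more electrons but leaves E_{k,\max} unchanged; only raising the frequency increases it, which is what ruled out a purely wave picture.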
Two
major paradigms in the sixteenth and seventeenth centuries
solidified the reductionist crystallization of reality: the
Copernican heliocentric revolution, and the “clock-work” universe of
Francis Bacon and Rene Descartes, which provided the foundation of
scientific thought during the “age of reason”.
Copernicus removed us from the center of the universe in a
physical way, and the clock-work model removed us in a psychological
way by reducing our humanity to nothing more than the result of the
deterministic laws of matter and energy.
Few can
deny that the dualist, reductionist, and deterministic approaches to
the world have been extremely successful.
The inductive scientific method of inquiry has divined the
basic structure of matter, explained how chemical interactions take
place, provided a cosmological explanation of how the universe
evolved, and given us an understanding of biology from the
perspective of the cell and DNA, but it has come at a price. It has
separated us from nature and made free will untenable.
It seemed that humanity and life in general were the result
of nothing more than the mindless scurrying of matter.
The
pervasiveness of reductionism, and the descent into the crystal-like
world of dualism, seemed to go unchecked, but there was a glimmer of
change on the horizon.
After twenty-five hundred years of materialistic determinism, a
synthesis was emerging in the West.
The early twentieth century saw the advent of two major
scientific theories that changed the world: Relativity and quantum
theory. Both turned the
tide toward a more participatory role for the observer, and began
the process of merging the crystal West with the dragon East.
Ironically, this
synthesis fell to one of the most prominent determinists of the age,
Albert Einstein. He,
more than anyone, was responsible for slowing the tide of the
reductionist wave, and reconciling the dichotomy of the crystal and
the dragon. Two of the
principles that emerged from special relativity were his famous
equation showing that form and energy were two aspects of the same
thing, and that the subjective qualities of space and time could be
synthesized into a more objective principle of space/time.
In his
theory of general relativity he proposed the equivalency of gravity
and acceleration. Though he sought objectivity and invariance in
nature, he discovered many concepts of the world that were relative
to the observer.
It was during
this period of time that Einstein’s friend and colleague, Niels
Bohr, presented his revolutionary idea that elementary particles, or
more correctly, entities of matter and light, are neither particle nor wave.
As a matter of fact, elementary particles of matter and light
do not manifest as “real” classical waves, but are only statistical
waves, and the determination of which of these two aspects of
reality are revealed, most astonishingly, depends entirely upon the
observer.
Clearly, the observer was
once again taking a more central and integrated role in the universe
of which we are a part, but the quantum revolution did not concern
only the infinitesimal; it also included the infinite universe on
the grandest of scales.
In what was one
of the most important discoveries of the age, Edwin Hubble
discovered that the entire universe was expanding at a prodigious
rate. Born in an
unimaginably violent explosion from an infinitesimally small
point of space, from the embryonic seed, the alpha, the black
egg, the void, the entire universe and everything in it is expanding
outward, dragging space and time along with it.
This discovery was a blow to Fred Hoyle’s “steady state”
universe and the crystal idea of permanence of form.
Motion, as Heraclitus of Ephesus in Asia Minor intuited
almost twenty-five hundred years ago, and Galileo confirmed by
experiment many years later, is the natural state of matter.
Though the new
scientific understanding has not filtered down to the public at
large, most of the scientific community has come to fully embrace
the dragon to the extent that it is not at all surprising to hear
scientists and philosophers discussing the strong anthropic
principle with complete and utter sobriety.
Western science
has merged with the ancient traditions of Confucianism, Buddhism,
and Taoism to once again entertain the idea that the universe has
purpose, not the monotheistic notion of a grand designer, but
purpose in the Buddhist sense in which consciousness is the ground
of all being.
Consciousness
might be the ultimate link in the interplay between two dualisms
that have garnered much study in recent years:
information and
meaning.
Many physicists now consider
information to be the most basic entity in the universe.
Information is not an abstraction according to the modern
interpretation.
Information is crystal-like in that it always has a physical
representation, whether it is petroglyphs on a rock, ink on paper,
magnetic alignments of atoms on a CD, the up/down spin of electrons,
or the angle of the polarization of light.
Like energy, information adheres to the laws of
thermodynamics and relativity with one interesting exception:
information can be created from nothing.
Meaning, on the other
hand, is more subjective and dragon-like. Petroglyphs are
meaningless until the information they encode is comprehended, words
on paper are merely scrawling until the meaning of the alphabet is
learned, and magnetic arrangement of atoms on a CD are meaningless
until they are turned into music.
Quantum
nonlocality experiments with polarized photons demonstrate that
information can be extracted in a binary form. Counter-intuitively,
light has no polarization until it is measured. When measured, light
is polarized either to the angle measured or 90 degrees from the
angle measured. The
dynamic, observer-created
attribute of polarization is a binary either/or proposition.
When two
correlated photons are shot in opposite directions the measurement
of the angle of polarization of one photon will instantaneously
affect the angle of polarization of the other photon despite their
distance of separation in time and space.
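The binary, probabilistic character of a single polarization measurement can be sketched with Malus’s law, under which a photon prepared at one angle passes an analyzer set at another angle with probability cos² of the relative angle. This toy Python model (names mine) reproduces single-photon statistics only; notably, no local simulation of this kind can reproduce the correlations between the two separated photons, which is exactly what makes the nonlocality experiments remarkable:

```python
import math
import random

def measure(photon_deg: float, analyzer_deg: float,
            rng: random.Random) -> int:
    """Binary outcome of one polarization measurement: 1 = photon passes.
    Pass probability follows Malus's law, cos^2 of the relative angle."""
    rel = math.radians(photon_deg - analyzer_deg)
    return 1 if rng.random() < math.cos(rel) ** 2 else 0

rng = random.Random(0)
trials = 100_000
# Photons prepared at 0 degrees, analyzer set 30 degrees away:
passed = sum(measure(0.0, 30.0, rng) for _ in range(trials))
print(passed / trials)  # close to cos^2(30 deg) = 0.75
```

At a relative angle of 90 degrees the pass probability drops to zero, matching the either/or character of polarization described above.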
While information cannot travel faster than light,
meaning is not
relativistically constrained and is conserved holistically.
Physicist David Bohm referred to the ubiquitous and non-local
aspects of meaning as the “implicate order”.
Ironically,
quantum physical experiments are beginning to suggest that the
subjective qualities of consciousness and meaning are an objective
aspect of the universe.
Quite possibly the very nature of consciousness itself might be
bound up with the information/meaning
complementarity.
I would be remiss
not to mention one of the most prominent features of this
wonderfully visual book, and that is its art.
Wade says that there is no clearer example of the duality of
crystal and dragon than in the art of eastern and western cultures.
He says of the two cultures of art:
“The art of Islam is to a great extent the art of
decoration…Islam has demonstrated an almost obsessive fascination
with pattern and symmetry, a genre that seems to refer directly to
the pure geometry of the crystalline order.
By contrast those arts of China that show the influence of
Taoist sentiments are aschematic and, if anything, show a preference
for asymmetric composition.
In this art there is no underlying structure and no adherence
to form” (50).
No aspect of
humanity escapes the crystal and the dragon. Perhaps this is a
result of what philosophers and scientists from John Locke to
Immanuel Kant and Niels Bohr to David Bohm have suggested:
We can never know nature directly, because our investigation
of nature must always be subjective.
It has been stated that we have no better understanding of
what gravity is than did the ancient Greeks, nor is there any clear
scientific understanding of the terms form and energy.
Science can only describe how these aspects of reality
interact with nature.
The scientific process, which by its very nature must be dualistic,
will always be a subjective endeavor in a holistic universe.
It might be that
we see the world as dualistic because of our physical brains. The
left half of our brain, the crystal, is more concerned with logic
and language, while the right half tends to be more holistic and
intuitive. Author Julian
Jaynes hypothesized that the integration of our brains via the
corpus callosum is a recent evolutionary advance.
Perhaps this is the reason that our science is moving toward
a more complementary and integrated synthesis of the crystal and the
dragon.
The
longevity of our republic might also be a result of the perfect
combination of the crystal and dragon.
The Constitution of the United States was hammered out with
equal consideration for those who wanted a strong central government
and those who favored states’ rights.
The Constitution provides for a rigid system of three
branches of government, and yet, each of these branches is
responsive to the people it serves.
Finally,
one last dualism remains: The very physical layout of David Wade’s
book is the epitome of the crystal and dragon.
Each of the chapters of his book could stand alone, and yet,
each is intricately woven into a holistic tapestry of pure elegance.
The illustrations alone are enough to recommend it, but you
won’t want to miss out on every last detail of the content.
Biocentrism: How Life and Consciousness Are the
Keys to Understanding the True Nature of the Universe
Robert Lanza, M.D.
With
Bob Berman
One of the most intriguing mysteries in science involves our
existence in the universe.
How did it happen that all the conditions necessary for the
emergence of life, from the values of the four forces of nature, to
the mass of elementary particles, to the conditions within stars,
are tailor made for the existence of life?
If any one of the hundreds of fundamental values of nature
were slightly different we wouldn’t be here to ponder our existence.
Robert Lanza says
that, currently, there are only three hypotheses to explain these
remarkable coincidences:
The first idea, which merely averts any serious treatment of the
subject, is to simply say God did it all.
The second
contender is the anthropic principle which has two versions, the
weak and the strong:
The weak anthropic
principle states that we would not be here to ponder our existence
unless the conditions were just perfect to spawn life.
In a Darwinian sense, it is possible that our universe is one
of trillions of universes that popped into existence, and it is
simply mere chance circumstances that our universe supports living
organisms. Not only is
there no observational data to support this idea, but it is no more
satisfying than chance mutation for an explanation of biological
evolution.
The strong
anthropic principle, which is similar to the
participatory universe advocated by the renowned physicist John
Wheeler, concludes that it is no accident that the universe has the
conditions to support life, because it is we, the observers, who
have brought forth our present universe. Those unfamiliar with
quantum experiments might find this idea a bit outrageous, but ample
experimental evidence exists.
Thought experiments initially proposed by John Wheeler and
others were put to the test in the 1980s, and results conclusively
indicated that an observation in the present can retroactively and
instantaneously change the path of a particle in the distant past
without regard to time and space.
The third
proposition, similar to the strong anthropic principle, and
advocated by Lanza is called Biocentrism.
Lanza professes that the universe exists in a state
of superposition. The
observer is necessary to collapse this state of probabilities into a
discrete reality. The
idea that a particle only exists as a probability until observed was
first proposed by Max Born, who proposed, to Einstein’s dismay, that
the electron exists only as a statistical probability until it is
observed. This is now
the cornerstone of the Copenhagen interpretation of reality.
Lanza proposes that the universe itself exists in a state of
probability until it is observed:
“There is no separate physical universe outside of life and
consciousness. Nothing
is real that is not perceived. There was never a time when an
external, dumb, physical universe existed, or that life sprang
randomly from it at a later date.
Space and time exist only as constructs of the mind, as tools
of perception.” (159)
Relativity
theory, which treats space and time as malleable and dependent upon
the frame of reference of the observer, lends credence to Lanza’s
notion that space and time are merely perceptual constructs. It is
very counterintuitive to imagine that space and time are not part of
reality, yet Isaac Newton’s three laws of motion, Erwin
Schrodinger’s wave mechanics, Werner Heisenberg’s matrix equations,
Richard Feynman’s sum over histories equations and Albert Einstein’s
field equations are all time symmetrical.
We have always equated change and movement with time whether
it be the rotation of the earth to measure one day, the revolution
of the earth around the sun to indicate one year, or the mechanical
movement of the hands of a clock to mark off the minutes and hours,
yet the relationship between time and motion is nothing more than a
persistent illusion.
Only the second law of thermodynamics appears time asymmetrical.
Systems tend to move from a state of order to a more probable
state of entropy. Lanza
says this fact has nothing to do with time.
The second law of thermodynamics does not necessitate
time—“Time” necessitates the second law of thermodynamics.
I think that
“Biocentrism,” both the book and the word itself, will become as
common in our language as “the big bang” or “black holes,” but I’m not
certain that this work is that unconventional or revolutionary,
because of its similarities to John Wheeler’s participatory universe idea.
The difference between John Wheeler’s participatory universe
and Lanza’s biocentrism is slight but significant.
While Wheeler believes that the material universe has come
into its present state as a result of initial observations, Lanza
believes that observations are ongoing and continual and that
consciousness and the material world are “correlative.”
The moon, he would contend, ceases to exist when we are not
looking. I, for one,
tend to believe that my bed is there even as I sleep. Even so, Lanza
makes an extraordinarily rational argument for his Biocentrism
thesis. This is
one of those intriguing books that come along all too infrequently.
John E. Sarno, M.D.
Based on
24 years of successfully treating psychogenic disorders, John Sarno
has concluded that Tension Myositis Syndrome (TMS) is the most
common emotionally induced disorder in the Western world.
Tension
Myositis Syndrome is a painful but harmless change in the state of
muscles which manifests itself in many forms including fibromyalgia
(a muscular condition in which most of the muscles of the torso are
affected), gastrointestinal maladies, heartburn, pre-ulcer symptoms,
hiatus hernia, colitis, spastic colon, tension-induced headaches,
back aches, knee and hip problems, asthma, and a host of other
illnesses.
TMS is a
strategy used by the irrational unconscious mind to focus attention
on the body through physical pain to prevent fearful thoughts from
emerging from the unconscious mind into the rational conscious mind.
Childhood rage, self-imposed pressures, and the stresses and
strains of everyday life are triggers for TMS.
In one
study MRIs were done on ninety-eight people with no history of low
back or leg pain.
Thirty-six percent had normal discs at all levels, 52 percent had a
bulging disc at one or more levels, 27 percent had a disc protrusion
and 1 percent had an extrusion.
The conclusion is that back pain and the occurrence of bulges
or protrusions might be coincidental.
In treating patients with TMS, Sarno noticed that there
were several steps to alleviating the physical pain.
The most important of these is to merely acknowledge that
there is no structural abnormality, and accept that the pain is
psychosomatic. By doing this we change the focus of attention from
the body to the psyche, rendering the pain useless and thwarting the
brain’s strategy. Sarno
found that merely acknowledging that the pain was not “physical”
often resulted in the alleviation of the pain completely and
permanently. This is not
a placebo effect, since, according to Sarno, placebo relief is usually
temporary and the pain often returns.
In addition
Sarno noted that physical rehabilitation was often
counter-productive since exercises focus on the “physical”,
reinforcing the mind’s attempt to draw attention toward the body and
away from the emergence of fearful thoughts into consciousness.
People
suffering from a psychogenic illness such as TMS must realize that
it is very common in Western society, and does not in any way imply
that one has psychological disorders.
Who’s in Charge?
Free Will and the Science of the Brain
Michael S. Gazzaniga
Philosophers have debated the concept of free will at least since
the time of the ancient Greeks. As with any philosophical issue, one
must first define and agree upon the definition of the term “free
will” before any headway can be made either scientifically or
philosophically. Until the 1960s the debate over whether or not
human beings have free will was at a stalemate, because neuroscience
had not progressed far enough to accurately define what is meant by
free will. This all changed as a result of experiments designed by
cognitive neuroscientist Michael Gazzaniga when he was an
undergraduate student at Dartmouth College.
Michael Gazzaniga had a unique opportunity to use his newly devised
set of tests on patients who had undergone commissurotomy surgery—a
surgery that severs the corpus callosum dividing the two hemispheres
of the brain. A patient who has undergone this type of surgery feels
normal with little loss of cognitive abilities.
Gazzaniga’s first test on a patient who had undergone split-brain
surgery demonstrated that things were not as normal as the patients
reported. For example, when presenting a visual image of an object
to the right field of vision that travels only to the left
hemisphere of the brain, “split-brain” patients will invariably
respond that they saw the object; however, when the same image is
presented to the left field of vision which travels only to the
right side of the brain, patients (speaking from the language area
of the brain located in the left hemisphere) state that they see
nothing. But the right hemisphere is not blind to the object. When
asked to draw the object using the left hand controlled by the right
brain, these patients can draw the object presented to the right
hemisphere. In general, Gazzaniga’s tests show that visual, tactile,
auditory and olfactory information received by only one half of the
brain is processed by that hemisphere of the brain without awareness
by the other half of the brain. The two hemispheres act
independently and have specialized systems of operation, but
severing the connection between them does not create two conscious
individuals in one body. The two hemispheres are not equal. The left
hemisphere is superior at logic, language, and making inferences,
while the right brain is superior at spatial skills and is very
literal. The new model of the brain suggests that the conscious
experience is not so much left/right brain oriented, but is a result
of specialized systems or modules that compete for conscious
awareness across the entire brain.
Gazzaniga says that what makes us different from other animals is
not brain size, but brain specialization. Because brain size is
limited for many practical reasons, evolutionary processes have
evolved a brain that specializes and automates by relegating many
functions to modules that operate unconsciously and more
economically in terms of energy consumption. This specialization is
not only in the way the brain makes connections, but it is also in
the very neurons themselves—a fact that has only recently been
discovered.
Like all complex systems the brain has no external organizing
principle or central control; therefore, consciousness arises not
from a central command center, but as an emergent property of a
decentralized, self-regulating system of modules all competing for
attention. Gazzaniga admits that calling consciousness an emergent
process of the brain doesn’t explain what consciousness is, but it
puts the problem into the same framework as emergent processes in
other fields of study such as biology, chemistry, sociology, and
even art. Emergent processes are irreducible in that the parts
cannot be extrapolated from the whole. For example, one could not
predict Newtonian law by studying the atom or the complexity of a
cell by studying the chemistry that makes up the cell. At each step
of complexity entirely new properties emerge such that the whole is
greater than the sum of the parts.
From his experiments, Gazzaniga has come to understand at least two
important principles. First, that the left brain is the interpreter.
The left hemisphere is constantly confabulating and concocting
stories to rationalize the actions and beliefs of the deterministic
brain ad hoc. If a person survives a plane crash, he will often
attribute the good fortune to divine intervention rather than simply
saying it was random luck. The left hemisphere attempts to find
meaning and patterns of order in chaos. And second, consciousness is
relatively slow compared to the autonomic and automatic processes
taking place unconsciously. We will jump back when we think we see a
snake or we will quickly remove our hand from a hot stove long
before it comes to our awareness. Our conscious selves believe that
these actions occur in real time in relation to our consciousness,
but these responses occur well before we are consciously aware of
them.
Gazzaniga maintains that the brain is deterministic in nature, but
this is not your grandfather’s determinism. It isn’t the determinism
of Descartes, Newton or Dawkins; it is the new determinism of
emergent properties that are irreducible. So do we have choice?
Gazzaniga might answer yes and no. Both bottom up and top down
processes are constantly taking place, but it is difficult to say
that one is the cause of the other. For instance, DNA makes RNA, and
RNA makes
proteins that carry out the life processes of the cells and organs
of the body. The RNA and
the proteins are in turn responsible for transcribing DNA, translating
RNA, and maintaining and replicating new strands of DNA. Neither is
possible without the other. So which came first, the DNA or the
protein? And which one causes the other? This is the
dilemma that has led to complementarity—a concept that fits nicely
into the interplay between the deterministic bottom-up processes of
the brain and the conscious top-down processes of choice. One is not
necessarily the cause of the other. They exist as complementarities.
With this type of scenario in place, Gazzaniga suggests we simply
abandon the whole concept of free will, because it is no longer
relevant to even pose the question. We like to believe that we are
somehow independent of the deterministic brain, and it is we who
are calling the shots. Well, in fact we are, but it isn’t the
Freudian “I” who is calling the shots. Most of our choices and
decisions are not made by the aware conscious mind. As the author
states: “What on earth do humans want to be free from? Indeed what
does free will even mean? However actions are caused, we want them
to be carried out with accuracy, consistency, and purpose.” We don’t
want to be free of our anatomical systems, our emotions, or our life
experiences, for these are all involved in consistent and sound
decision making despite the fact that they are often not conscious.
Michael Gazzaniga does a fantastic job of helping us to understand
why the concept of free will belongs in the trash bin of antiquity
right alongside concepts such as the aether of space and the vital
force of life.
The Hidden Records
Wayne Herschel
Accumulating evidence in the fields of archeology, anthropology, and
archeo-astronomy is giving credibility to the idea that one or more
advanced civilizations have lived on this planet in pre-recorded
history. New archeological finds such as the twelve thousand year
old stone circles of Gobekli Tepe in Turkey, and the reexamination
of megalithic structures in Egypt; at Baalbek in Lebanon; at
Teotihuacan in Mexico; at Machu Picchu in Peru; and at Tiahuanaco and
Puma Punku in Bolivia are providing incontrovertible evidence that a highly
advanced civilization lived in our remote past. What has not been
clear up until now is whether these advanced civilizations were a
product of the cyclical rise and fall of terrestrial civilizations,
or the result of alien intervention from an extraterrestrial race.
In a methodical and scientific approach, Wayne Herschel has unleashed upon the world a magnificent and boldly illustrated work of scholarship and art called “The Hidden Records.” In his first chapter, ‘Star Map Egypt’, Herschel graphically shows how the pyramids of Egypt are laid out as a representation of the constellations. Herschel began his own quest into the subject after reading Robert Bauval’s book “The Orion Mystery,” in which Bauval claimed that the pyramids on the Giza plateau mimicked the layout of the stars of the constellation of Orion. Bauval’s book was a sensation, but as time passed and the details of his work were more fully scrutinized, it was discovered that some of the stars of Orion did not have corresponding pyramids and some pyramids did not have corresponding stars.
Other researchers such as Graham Hancock and Richard Hoagland picked up the torch in an attempt to find other celestial correlations representing the structures on the Giza Plateau. The groundbreaking work by John Anthony West and geologist Robert Schoch had already given geologic support to the notion that the Sphinx was constructed in an earlier era than conventional Egyptologists had dated the structure, and this opened the door for a whole new round of speculation about this monument. Hoagland and other researchers, who believe that the Sphinx was constructed as a representation of the constellation of Leo, corroborated West and Schoch’s date for the Sphinx’s construction from a cosmic perspective. Using a sophisticated computerized star map program, Hoagland and his colleagues were able to run their star program backward in time. They discovered that the last time the Sphinx would have been able to look into the eastern horizon and see the sun rising in the constellation of Leo would have been about 12,000 years ago.
This, they claimed, demonstrated that the Sphinx was built at a time when the constellation of Leo was in the house of the rising sun, thus bolstering West and Schoch’s geological hypothesis for the earlier dating of the construction of the Sphinx. But Herschel, who concurs that the Sphinx represents the constellation of Leo, believes that the reason for the Sphinx’s positioning is much simpler than that. He says that the other researchers failed to look at the complex of 50 pyramids in Lower Egypt in their entirety. If they had done that, they would have realized that the Sphinx is looking at its own reflection in the Nile River (the Milky Way) at the same angle as Leo is looking toward the middle star of Orion’s belt. Both the pyramids and the belt stars appear to be lying flat while Leo and the Sphinx are in an upright position. What Bauval, Hancock, Hoagland and others failed to understand, according to Herschel, is that the entire complex of pyramids in Lower Egypt taken together forms a star map of the constellations on one side of the Milky Way Galaxy. As Hermes wrote thousands of years ago: All pyramids represent the stars!
Realizing that his own hypothesis would be met with skepticism among
scholars, Herschel imposed on himself a set of quantifiable criteria
that would withstand the onslaught of scientific scrutiny. These criteria
include the following: 1) there should be a correlation between the
apparent magnitude of the stars and the sizes of the pyramids;
2) the angles of the layout plan of the pyramids should be very
similar to the angles between the stars that they represent; and
3) the relative distances between the pyramids should correlate with
the apparent relative distances between the stars.
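Criteria 2 and 3 are mechanical enough to check by computation. The sketch below is my own minimal illustration, with made-up coordinates rather than Herschel’s actual survey data: it compares the pairwise angles and normalized pairwise distances of two point layouts.

```python
import math

def pairwise_features(points):
    """Return (angle in degrees, distance) for every pair of 2-D points.
    Angles are folded into [0, 180) so a line and its reverse agree."""
    feats = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[j][0] - points[i][0]
            dy = points[j][1] - points[i][1]
            feats.append((math.degrees(math.atan2(dy, dx)) % 180.0,
                          math.hypot(dx, dy)))
    return feats

def layouts_match(stars, pyramids, angle_tol=5.0, dist_tol=0.1):
    """Herschel-style criteria 2 and 3: corresponding pairs should have
    similar angles, and relative distances should agree once each layout
    is normalized to its own largest separation."""
    a = pairwise_features(stars)
    b = pairwise_features(pyramids)
    max_a = max(d for _, d in a)
    max_b = max(d for _, d in b)
    for (ang1, d1), (ang2, d2) in zip(a, b):
        if abs(ang1 - ang2) > angle_tol:
            return False
        if abs(d1 / max_a - d2 / max_b) > dist_tol:
            return False
    return True
```

A uniformly scaled copy of a layout passes, while a distorted one fails; criterion 1 (magnitude versus pyramid size) would need brightness and volume data and is not modeled here.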
The most recognizable and brightest stars in the constellation of
Orion are the three “belt” stars. Located on opposite corners of the
constellation of Orion, perpendicular to the belt stars, are two
bright stars, Rigel and Betelgeuse. Flanking Orion on one side is
Sirius, the brightest star in the heavens, and on the opposite side,
relative to Orion, is the star Aldebaran in the constellation of
Taurus the Bull. If you draw an imaginary line from Sirius, through
the belt of Orion, and then through Aldebaran, you come to a faint
cluster of stars called the Pleiades, “the seven sisters” as they are
often called.
All these stars were important to the Egyptians, but as Herschel was
soon to discover it wasn’t because the stars in and around Orion
were sacred in and of themselves. They were important because they
pointed the way to the Pleiades, the home of our ancestors. The
Pleiades in the constellation of Taurus the bull were so revered
that bull worship was practiced by ancient people from the Far East
to the Mideast and from the Mideast to the Americas.
It was in 1997 that Herschel first noticed that the northern
pyramids of Lower Egypt matched the stars of Sirius, Orion, and the
Pleiades. The magnitude of the stars was in correlation with the
sizes of the pyramids and the angles between these stars were the
same as the angles between the pyramids. The only discrepancies were
related to the size of the pyramids representing the Pleiades. Here
the scale of the pyramids increased in size. Herschel discovered
that this might have been a result of the importance the ancients
placed on the Pleiades. On his first map depicting the northern half
of the Egyptian complex of pyramids you’ll notice the similarity of
the celestial and terrestrial plans. The most northern pyramid of
the complex at Abu Ruwash has as its counterpart the star Sirius.
The three main pyramids, which include the Great Pyramid, Khafre,
and Menkaure, are representative of the three stars of Orion’s belt,
Alnitak, Alnilam, and Mintaka, respectively. Aldebaran, the brightest
star in Taurus, matches the pyramid at Zaweyet Al-Aryan that points
the way to the Pleiades represented by the complex of pyramids at
Abusir.
Everything seemed to be falling into place for Herschel, but there
were a couple of problems at this early stage of his investigation.
The first is that the angle of Orion’s belt did not match the layout
of the angle of the three pyramids of Giza. At first Herschel
assumed that the pyramids were built at this angle because of
practical considerations. If they were built at the angle of Orion’s
belt one of the pyramids would be in the Nile. Later we will see how
he reconciled this incongruity. The other problem was that two of
Orion’s most prominent stars, Rigel and Betelgeuse, were not
represented by pyramids.
Herschel began to wonder if the desert had covered up the missing
pyramids. Through meticulous research of ancient maps, Herschel
found the remnants of two pyramids that were in the correct position
to replicate the missing stars of Rigel and Betelgeuse. He labeled
these ruin A and ruin B. He noticed that if one draws an imaginary
line connecting the two ruins, it intersects the line drawn between
the apex of the pyramids Khafre and Menkaure at a right angle just
as the line joining Rigel and Betelgeuse intersects the imaginary
line drawn between the two stars of Orion’s belt at a right angle.
Herschel noticed another geometric relationship showing that the
Giza complex represents the constellations. An imaginary line
linking the brightest star of Leo, Regulus, with the middle star of
Orion’s belt is at the same angle as an imaginary line linking the
Sphinx with the middle pyramid, Khafre. The link between the Sphinx
and the constellation of Leo is now undeniable.
At this point Herschel realized that there were two problems for his
hypothesis. Remember that two of the rules Herschel imposed upon
himself were that the sizes of the pyramids should correspond to the
apparent magnitude of the stars they represented, and second, that
the scale of the distances between the pyramids should be
proportional to the apparent distances between the stars. Both of
these rules were violated at the Abusir grouping representing the
Pleiades. The pyramids at Abusir were on a larger scale, and the
scale of the map itself had increased. He solved the second problem
right away. Taking the entire complex of pyramids as a whole,
Herschel realized that if the center of the map, which includes
Abusir and the Sun Temple, is considered to be on a scale of “one,”
then the distances between the pyramids to the north are at a scale
of 3/4 and the pyramids to the south are on a scale of 1/8. This
scaling was no doubt purposeful. Herschel believes the reduced scale
is likely due to the vast distances between the groupings of
pyramids. If they were placed on a one-to-one scale, someone
looking from above would not be able to tell that they were a
representation of the constellations. The resolution of the first
problem having to do with the sizes of the pyramids at Abusir,
representing the Pleiades, was a result of their importance. In
almost all ancient murals and paintings, the Pleiades were depicted
larger than they should be relative to the stars. This he came to
realize was because of their special significance to the ancients.
Herschel believes that the center of the pyramid complex at Abusir
was the most important for the Egyptians. This was the earliest city
called Memphis meaning “first place” and it was the most sacred to
the Egyptians. Offset from the Abusir grouping is what Herschel has
called the Sun Temple whose most prominent feature is an Obelisk,
now in ruins. True to its designation as the “first place,” Herschel
believes that this structure represents the location of the star
system from which our ancestors arrived over 17,000 years ago. The
sun temple in Egypt was revered more than any of the other pyramids.
Its name can be translated as the ‘pyramid that is established of
places,’ or in other words, our place of origin. Herschel calls this
obelisk structure the ‘Sun Pyramid of Ra.’ According to his
hypothesis, the people of the time who worshipped the sun were not
worshipping our sun, but the sun from the solar system our ancestors
came from in the constellation of Taurus the bull. Historians have
thought that Ra was the Sun, but Herschel thinks that Ra is the
Obelisk itself, or at least the star that the obelisk represents. In
Egyptian legend Ra was the god of the sun that visited the earth in
his celestial ‘boat.’
Herschel became interested in the off-set angle of the obelisk
representing the mystery star, in relation to the Abusir pyramid
groupings representing the Pleiades. Was it possible that the
mystery star’s location could be determined by measuring the angle
between the Abusir pyramids and the Sun Temple? And if so, are there
other pyramid complexes with associated star maps that would verify
the mystery star’s location? He was not disappointed. Herschel found
numerous complexes around the world that have associated star maps
at Teotihuacan in Mexico, Tikal in the Yucatan Peninsula, Angkor in
Cambodia, Nazca in Peru, Stonehenge in Britain, and Cydonia on Mars.
Star map Teotihuacan
The largest pyramid culture in the world is not in Egypt or the Far
East but in Central and North America. An exact count of the number
of pyramids in Central America is not possible because so many ruins
are still buried in the undergrowth of jungle. As a result,
estimates range widely, from tens of thousands to hundreds of
thousands. Herschel decided to limit his research to the oldest
pyramids located at Teotihuacan, because many of the smaller
pyramids were built in more recent times by post-apocalyptic peoples
who were unaware of the original reasons for building these
structures. Herschel decided to concentrate his efforts on the
pyramids at Teotihuacan in Mexico, said to have been built by the
Aztecs, despite the fact that the Aztecs themselves have always
maintained that the pyramids were there when they arrived.
Herschel’s interest was piqued when his research indicated that the
minor causeway was designed to align with the barely visible star
cluster—the Pleiades. This was a start. But the main causeway was a
bit mysterious in that it was not aligned to the East-West Axis, nor
did it seem to be aligned with any particular star in the sky.
Believing that the three main pyramids must represent the belt stars
of Orion, he used his star program to find the outermost star of
Orion’s belt and then ran a line at the same angle as the prime
causeway. He was pleasantly surprised to find that the angle of the
prime causeway aligned with the ‘mystery star’ at the same angle as
the sun temple in Egypt aligned with the pyramids at Abusir in
Egypt. Now he went looking for a representation of the Pleiades.
From an aerial view, Herschel noticed that there was a cluster of
hills at the end of the main causeway. When he compared his star map
of Orion with the terrestrial view he discovered that the three
pyramids of Teotihuacan and the causeway were mirror images of
Orion’s belt, the mystery star, and the cluster of hills
representing the Pleiades. But there was a problem. According to
Herschel’s own standards, the magnitude of the stars had to match
the size of the pyramids and this was obviously not the case. Then
Herschel had an idea. Suppose the ancients were using absolute
magnitude rather than apparent magnitude as the Egyptians did.
Apparent magnitude is how bright the star appears to an observer.
For instance, a small star might seem much brighter than a much
larger star simply because the former is much closer to earth.
Absolute magnitude, on the other hand, is a measure of the star’s
intrinsic brightness, the brightness it would show from a standard distance.
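The distinction can be made concrete with the standard distance-modulus formula, M = m - 5*log10(d) + 5, with d in parsecs. The snippet below uses widely published catalog values for Sirius and Rigel; these figures are my own illustrative inputs, not numbers from the book.

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    """Distance modulus: the magnitude a star would have at the standard
    distance of 10 parsecs (lower magnitude means brighter)."""
    return apparent_mag - 5 * math.log10(distance_pc) + 5

# Catalog values (illustrative, not from the book):
# Sirius: m = -1.46 at about 2.64 pc; Rigel: m = +0.13 at roughly 260 pc.
sirius = absolute_magnitude(-1.46, 2.64)   # about +1.4: intrinsically modest
rigel = absolute_magnitude(0.13, 260.0)    # about -6.9: intrinsically brilliant
```

Sirius dominates our night sky only because it is nearby; by absolute magnitude Rigel outshines it by a wide margin, which is exactly the distinction Herschel suggests the Teotihuacan builders encoded.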
Herschel found that if the absolute magnitude is used as a
determining factor then the sizes of the pyramids are in
correspondence to the absolute brightness of the stars. Is this just
coincidence or did the ancients have knowledge as a result of their
interaction with their god Quetzalcoatl, the representative from the
mystery star?
Quetzalcoatl’s counterpart among the Mayan people of Central America
was the god-king Pacal, who is said to have traveled amongst the
stars of the Milky Way. According to some interpretations, the lid
of Pacal’s sarcophagus depicts him clad in a space suit, breathing
apparatus, and hands at the controls of a space ship spewing exhaust
as it travels through space. Like the Aztecs, the Mayans also held
the star cluster of the Pleiades in special regard. They believed
that their gods were from the stars and that they came to visit
periodically. Tikal, the pyramid ruins in the Guatemalan jungle,
also feature a cluster of pyramids referred to as the “plaza of the
seven pyramid temples,” a possible reference to the ‘seven sisters’
of the Pleiades. Herschel noticed that the causeway emanating from
the temples representing the Pleiades and leading to temple 4 at
Tikal has the same angle as the pyramid structures at Abusir in
relation to the Sun Temple of Ra in Egypt. This was quite
remarkable, but Herschel was now considering another possibility. He
began to wonder if it were possible that our ancestors came from two
separate star systems as suggested by the two causeways leading from
the “plaza of seven pyramid temples.” This idea would have to wait
for more evidence.
The name Cambodia means descendants of Kambu, who is said to have
come from above. This is yet another indication from a completely
different culture that their civilization was established by
visitors from another star system. This did not go unnoticed by
Graham Hancock. In 1998 Hancock published his book called “Heaven’s
Mirror” whose premise was that the layout of the pyramids at Angkor,
Cambodia mirrored the constellation of Draco. The book was published
in haste, and his work was met with skepticism among scholars and
astronomers for good reason. The accuracy of the alignments was not
at all convincing and his work soon fell into disrepute. This left
an opening for Herschel to apply his tried and true methods. As
before, Herschel started his search looking for the tell-tale
pyramids that might represent the Pleiades. He found this
relationship in the pyramids at Roluos in the southeastern part of
Angkor. At a site called Preah Ko he came upon a cluster of stupas
known as “the seven temples of the ancestors.” Nearby, stands the
most sacred structure of the ancient Khmer, the Bakong temple.
Herschel believed that this temple might be similar to the Sun
Temple of Egypt or temple 4 at Tikal in its significance for
representing the mystery star of our ancestors. Herschel tried many
transparencies to find a star map that would fit. Finally, he found
a match in the constellation of Perseus. Unlike the Egyptians,
Aztecs, and Maya, the Khmer did not use the constellation of Orion
to point the way to the Pleiades. Rather, they used the
constellation of Perseus, a constellation that modern astronomers
use to find the Pleiades when Orion is below the horizon. As it
turned out, the Perseus constellation is a near-perfect match for the
layout plan of the structures of Angkor. A line drawn from Angkor
Wat to Preah Ko, the structure representing the Pleiades, is at the
same angle and apparent relative distance as a line drawn from a
star designated as ATK, in the constellation of Perseus, to the
Pleiades.
As a thorough young researcher, Herschel decided to seek out a
professional South African astronomer by the name of Bill Hollenbach
to verify his work. Hollenbach is well known for his images in
Astronomy Magazine and is proprietor of Kromdraai observatory near
Johannesburg. Hollenbach concurred with Herschel’s hypothesis and
appeared on a television show with Herschel agreeing that his star
maps were right on target and could not be coincidence.
Two common themes emerged as a result of Herschel’s research. Not
only did diverse cultures on almost every continent on earth build
monumental structures of the same style, but these cultures
apparently all built these monuments because of a similar belief
system, namely, the belief that our ancestors came from the sky.
Herschel’s developing premise was that these ancient people all
believed that our ancestors came from the stars, more specifically,
that they came from a star located in the sky near the star cluster
Pleiades in the constellation of Taurus the bull. These beliefs
spawned bull worship among the ancients. Bull worship was practiced
by the Greek and Roman civilizations and, in fact, bull worship was
practiced by every ancient cradle civilization in the northern
hemisphere persisting through the ages long after the original
meaning for the ceremony had been forgotten. Most of us remember the
biblical story of Moses reprimanding his followers for worshipping
the golden calf. And even today the cow is considered to be sacred
by Hindus in India. Herschel set out to find evidence in the form of
ancient texts, artifacts, and murals to support or refute the idea
that the ancients’ religious beliefs were a means to acknowledge
their heritage.
Incontrovertible evidence came from the Takamatsu Zuka Kofun pyramid
in Japan, a pyramid which has been closed to human access since its
discovery. The pyramid, which was probed by a remote camera, showed
a sophisticated star map on the ceiling. The Pleiades, in the
shape of the leg of the bull, were depicted much larger than they
would normally look relative to the other stars.
On one of the oldest known tablets in the Egyptian Museum in Cairo,
a tablet whose inscription has resisted scholars’ attempts at
translation, Herschel found the telltale signs of bull worship. This included
the leg-like object representing the Pleiades at the center top of
both sides of the artifact. The leg was flanked by two bulls’ heads
on each side.
Even earlier evidence has been found by Herschel and Graham Hancock
demonstrating the ancients’ fascination with the Pleiades.
Seventeen-thousand-year-old cave paintings in Lascaux, France, near the town of
Cro-Magnon, known for the excavation of ancient human remains,
depict a star map of Orion and the constellation of Taurus with
seven stars representing the Pleiades. The sophistication of the
paintings and the difficulty of accessing them deep in the unlit
caves make it obvious that these stars were of extreme importance to
early Cro-Magnon man. How did these early people,
supposedly without language skills, develop the ability to create
these paintings without artificial light, in the darkest recesses of
the caves? Through the lens of his developing hypothesis, Herschel’s
careful examination of hundreds of murals and artifacts led him to
an unorthodox interpretation of many ancient Egyptian symbols. Among
these findings is that the leg of the bull depicted in so many
paintings and drawings is synonymous with the Pleiades.
Herschel wondered if the same system of star maps could apply to
Stonehenge. He rented a plane so that he could get an aerial view of
the layout plan. He wanted to investigate the barrows near
Stonehenge to see if he could detect the now familiar leg of the
bull representing the Pleiades. The barrows are manmade mounds, some
pyramid shaped, some conically shaped, with interior chambers whose
purpose is still unknown. The oldest mounds are about 6,000 years
old, predating Stonehenge itself. From his aerial position, Herschel
was pleasantly surprised when he spotted the familiar leg of the
bull among the mounds. What Herschel needed to know now was whether
there was a mystery star in the vicinity. What he came to
realize was absolutely stunning! Stonehenge itself was at the
correct angle to be the mystery star! Herschel went looking for
additional evidence. He found it in full public view in Wiltshire,
Britain. Engraved on a rock, he found the three stars of Orion and a
serpent pointing the way to the Pleiades, with a single offset star
representing the mystery star. Next to this display he noticed a
double-ring circle seemingly representing Stonehenge itself, with a
causeway emanating from the center.
This was enough evidence for Herschel to begin his search for the
mystery star. From his research, it was now possible to calculate
the approximate angle and approximate ‘apparent’ distance of the
mystery star from the Pleiades. Using his computerized star map,
Herschel was disappointed when he could not find an appropriate star
in the sky representing the position of Stonehenge. Herschel’s
criteria for finding the mystery star were these: 1) The mystery
star would have to be sun-like—at least a G-2—in other words, at
least a second- or third-generation star. 2) The mystery star would
have to be as old as, or older than, our sun, presumably to give the
process of evolution time to develop an advanced civilization. And
3) the star should be within about 20 light years of Earth, allowing
for the possibility of a journey from their planet to Earth in a
reasonable amount of time. It is possible, of course, that some
advanced civilization might have gotten around the law of special
relativity that prevents any mass from traveling faster than the
speed of light, but this cannot be assumed.
Herschel’s initial searches for our ancestors’ homeland proved
fruitless. He had failed to take into account that a star within 20
light years of Earth would have changed its relative position over
the millennia. He knew that the Pleiades would not have changed much
in relation to our position in the galaxy, because that cluster of
stars is almost 400 light years away, but if the mystery star was a
mere 20 light years away or less, then it would have made a
significant shift in position.
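The size of that shift is easy to estimate with a back-of-the-envelope calculation. A minimal sketch, assuming an illustrative transverse velocity of about 20 km/s (a typical figure for nearby stars; the value is my assumption, not one taken from the book):

```python
import math

SECONDS_PER_YEAR = 3.156e7    # one year in seconds
KM_PER_LIGHT_YEAR = 9.461e12  # one light year in kilometers

def angular_shift_deg(distance_ly, years, transverse_kms=20.0):
    """Approximate apparent drift of a star due to proper motion,
    using the small-angle approximation:
    shift (radians) = distance traveled / distance to the star."""
    traveled_km = transverse_kms * years * SECONDS_PER_YEAR
    return math.degrees(traveled_km / (distance_ly * KM_PER_LIGHT_YEAR))

# A star 20 light years away drifts by over a degree in 7,000 years,
# a significant shift on a star map...
print(f"{angular_shift_deg(20, 7000):.2f} degrees")
# ...while the Pleiades, roughly 400 light years away, barely move.
print(f"{angular_shift_deg(400, 7000):.3f} degrees")
```

Because the apparent shift scales inversely with distance, a 7,000-year rewind matters only for nearby stars; the distant Pleiades stay effectively fixed.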
In the meantime, Herschel began to wonder if there were other things
he could discover about the geometric layout of Stonehenge. Working
with photographs and drawings of Stonehenge he identified key
astronomical points having to do with sun and moon positions relative
to lines of sight within the monument. Through a series of steps he
joined key points with lines and discovered that the entire complex
was designed as a five-pointed star representing the human codex.
Like the pyramid complexes, Stonehenge was meant to be deciphered
from above. Just as we left our flag on the moon and inscriptions on
our deep space probes to identify our species, so the descendants of
the gods left their marker for anyone who could decipher it. The
human codex suggests that the ancients who arrived were of human
form.
Bolstered by his discoveries, Herschel revisited his maps of the
Egyptian pyramids and found the human codex incorporated into the
geometry of the structures of Giza as well. By linking various key
points on the Giza complex, the five-sided human codex emerged once
again. Now the question was: Could the human codex be found on other
planets in our solar system?
Most people are now familiar with an area on Mars called Cydonia,
photographed by NASA’s Viking orbiter. Richard Hoagland and Graham
Hancock have extensively investigated this area and have produced
works such as “The Monuments of Mars” and “The Mars Mystery.”
When looking at the layout of the Cydonia complex, Herschel almost
instantly noticed a star map. The area known as the city, a cluster
of structures almost perfectly replicating the Pleiades star
cluster, was pointing to the mystery star represented by the face.
It wasn’t long until Herschel decoded two human codices. The first
and most obvious was the five-sided pyramid called the D & M Pyramid
after its discoverers, Vincent DiPietro, an electrical engineer, and
Gregory Molenaar, a computer scientist. The second codex involved
the Cydonia complex in its entirety. Drawing lines from the apexes
of the structures within the so-called city produces a perfect human
codex. From the evidence on Mars and Tikal,
Herschel wondered if there were possibly two stars from which our
ancestors arrived. To Herschel the map on Mars reads: “The origins
of human form are from the star system corresponding with the
five-sided D & M anomaly. Later space explorers from this star
system colonized the ‘face’ star system. The message now suggests
that these people, from the ‘newly’ colonized ‘face’ star system,
were the people who colonized Mars and who would later colonize
Earth.”
The hunt was now on for not one, but two, mystery stars, and the
criteria were set. The stars had to be at the correct angles
established by the angles of the monuments and the pieces of artwork
Herschel investigated. In addition, the stars had to be sun-like, a
G-2 or more, as old as, or older than, our Sun, and within 20 light
years of Earth to make travel practical.
The Final Pieces of the Puzzle
Herschel went to his star program and began running it backward in
time to compensate for the vast amount of time that had passed since
our ancestors left us the message. The program had a limitation of
7,000 years, and he hoped that this would be enough to locate the
mystery star. Since there are only a handful of sun-like stars
within 20 light years that astronomers can identify, Herschel knew
that the chances of finding a star, let alone two stars, in the
vicinity where he was looking were extremely small. He waited as the
program went back in time. He watched the computer screen in great
anticipation until he saw Leo coming up over the horizon. The
program continued to run, and then he paused it as he looked on in
amazement. There they were in all their glory! Leo, Orion, and
Taurus were all in their upright positions in the sky, and the
cosmic scene matched the layout of the structures at Giza. Now, for
the first time, the angle of the pyramids matched the angle of
Orion’s belt. The pyramids point the way to the Pleiades, and Leo
looks over the Milky Way just as the Sphinx looks over the Nile
River. And sure enough, there were Herschel’s two stars, HD 283271
and HE 282943, both sun-like stars within 20 light years of planet
Earth. Herschel’s hypothesis had been elevated to a theory.
Our earthly ancestors built monolithic structures as welcome mats
for the return of the gods and coded messages for later generations
to decipher. Since that time, the meaning of these star maps has
been lost, and superstition and monotheism have taken the place of
the flesh-and-blood gods. At last we have come full circle to
understand that these ancients were not heathens, but guardians of
the knowledge of a high-tech race who came to Earth to teach and
guide us on our own journey into the future. Wayne Herschel has done
his part in bringing us closer to answering two of life’s great
mysteries—who are we, and where did we come from?
My Life with Terence McKenna
Dennis McKenna
If I had never heard of Terence McKenna, this romantic biography by
his brother Dennis would be no less compelling. Dennis McKenna
recounts his experiences with his brother from their early days
growing up in Paonia, Colorado, to their experiments with mushrooms
and ayahuasca at La Chorrera, Colombia, in the Amazon, and to the
emotional end of Terence’s last days on earth. Being two years older
than Dennis and two years younger than Terence, I imagined myself as
the phantom “middle brother.” Though I might not have been quite as
adventuresome, I felt that I was right there with them, reliving my
own days of the 1960s and 1970s counterculture with all that it
encompassed—free love, psychedelic drugs, and unbridled optimism.
What more could one ask of a book?
Dennis and Terence’s experimentations started at a very young age.
Both were well read in philosophy and were more concerned with the
philosophical implications of substances such as psilocybin, LSD,
and DMT than with their use as recreational drugs. Their grand
adventure culminated with “the experiment at La Chorrera.” It had
such an effect on the two brothers that Dennis states: “In some
aspects everything in life before we arrived at La Chorrera was a
prelude to the events that engulfed us there; and everything
afterward has been a reflection of them.”
Their experiment at La Chorrera raises an innate question about the
nature of reality that can be stated thus: Are the mushroom and the
teacher one and the same? According to McKenna, science has yet to
close the gap between subjective conscious experience and
“objective” reality, if indeed such a demarcation between subjective
and objective reality exists.
In his reflections on La Chorrera, Dennis wonders if there was more
to their experience than mere pathological delusion. Were any of the
phenomena “real” in an objective sense? Is there a symbiosis between
DNA and tryptamine-based drugs? Were psilocybin and DMT, substances
ubiquitous in nature, purposely put here by an alien civilization to
interact with our DNA to enhance our cognition and consciousness? Is
there a higher-level language or an encoded message in our DNA that
goes beyond the known mechanistic processes of gene replication and
protein synthesis? Do the psilocybin mushrooms unleash this message
and give us a glimpse of our own destiny? A growing number of
psychedelic experiencers have a tentative answer to these questions.
Many have come to the conclusion that we are an immature species
attempting to find our way to a much older and wiser galactic
civilization through the psychedelic experience.
McKenna says, “The mushroom…demands that we reject faith. All we
need is the courage to experience the phenomenon and judge it
ourselves…[We may discover a model] that suggests there just may be
such a thing as human destiny—and our existence, as individuals and
as a species, may have meaning after all.”
McKenna describes one of his most memorable “trips” at a conference
in São Paulo, Brazil, in which 500 people, many of whom were
physicians and academics, participated in the ayahuasca ceremony.
After his second dose of ayahuasca, McKenna had one of his most
intense and prophetic journeys when he experienced the chemical
process of photosynthesis from the perspective of a molecule of
water. Near the end of this molecular journey through the vascular
system of the plant, McKenna felt an overwhelming sense of
depression and rage at our destructive species. We are destroying
the source of our own existence. What would happen, he wondered, if
we destroyed the Amazon rain forest? Then the answer came in another
vision. The plant teacher said, “You don’t really think we would let
this happen, do you?” A sense of relief returned as McKenna realized
who was really in charge. He experienced the “force” firsthand, and
it brought him great comfort.
Their experimentations profoundly influenced their lives. Terence
became an icon of the counterculture, co-authoring books with his
brother, becoming a popular international lecturer on the topic of
psychedelic drugs, and collecting specimens of entheogenic plants
for propagation and scientific investigation. Dennis, an
ethnopharmacologist, in turn, gathered and studied hallucinogenic
plants for forty years, published many scientific papers in the
fields of pharmacology, ethnobotany, and neuroscience, and holds a
doctorate from the University of British Columbia. What an exciting
and productive life these two brothers led.
Ayahuasca and Depression:
The Clarity of Darkness
Justin Taylor
(Kindle version only)
Justin Taylor highlights his personal quest to find meaning in life through his experience with ayahuasca, the psychoactive brew of the indigenous people of the Amazon. Taylor’s easy-flowing yet nonlinear writing style is narrowly focused, keeping the reader engrossed throughout the book. Perhaps this style of writing came about as a direct result of his ayahuasca experience, which taught him that we should appreciate what the universe has to offer and not be distracted by the superficiality that surrounds us in this modern world.
During his graduate work in Honolulu, Hawaii, Taylor was hit with bouts of gastrointestinal problems that he eventually became convinced were due to stress. Years of ill health and negative emotions led to his dependence on prescription medications that only masked the problem of depression. In desperation, he turned to other modalities such as behavioral therapy, homeopathic tinctures, dietary changes, and yoga, but the joy and synchronicities that had pulsed through his life as a young adult did not return. He became numb to the world.
We next find the author in Brazil, where he meets practitioners of the ayahuasca ceremonies who espouse the benefits and healing potential of this plant-derived substance. He is enthralled by the possibilities but knows that he cannot partake due to his continued use of antidepressants, which often produce unwanted side effects when combined with DMT, the active ingredient in ayahuasca.
His research into a 1994 Finnish study that demonstrated improvements in the mental and physical health of its participants, and his interactions with friends he met in his travels who used ayahuasca, influenced and inspired him to seek out the psychoactive plant for his own consumption.
In one particular instance, Taylor noticed that a friend who had lived through a series of traumatic experiences was much more serene and grounded after she consumed ayahuasca. This bolstered his determination to experiment with the plant substance. Freeing himself from both his addictive cigarette habit and his antidepressant medications, Taylor solicited the help of Shaman Alia to guide him in his journey. After first consuming a less intense substance called huachuma, derived from a cactus native to the Andes Mountains in Peru and Ecuador, Taylor decided he was ready for ayahuasca, dubbed the mother plant by shamans of the Amazon because, purportedly, its consumption allows them to go into the forest and find other plants that have healing properties.
After his ayahuasca experience, Taylor had an intense sense of gratitude to the plant, Alia, and the universe for bringing him this opportunity. He says that there is nothing linear or predictable about the ayahuasca experience. He writes: “…The consciousness that animates our human minds and the consciousness in the rest of the Universe, plants, animals and other creatures are vitally connected and inseparable. Indeed, in this state of being one cannot fully distinguish between what is self and what is not self.” He found himself craving solitude, nature, and simplicity even more than he did before the ceremony, and the plant’s beneficial effects have lasted long after the experience.
This book is enjoyable, informative, and will hold the reader’s attention throughout.
Spontaneous Evolution:
Our Positive Future
(Kindle version)
Bruce Lipton, Ph.D., and Steve Bhaerman
In 1990 the U.S. National Institutes of Health announced with great
fanfare the Human Genome Project (HGP), initially headed by James
Watson, the co-discoverer of the structure of the DNA molecule. The
purpose of the project was: 1) to identify the genetic basis of all
human traits, 2) to create a research database to share with the
biotechnical industry and the private sector, and 3) to foster the
development of new medical applications around the world.
Simultaneously, private companies announced their own privately
funded gene-sequencing projects as bio-tech companies and drug
companies alike lined up to cash in on the fortune to be made from
the bio-tech companies’ plan to patent the more than 100,000
expected gene sequences and sell them to the drug companies for the
manufacture of drugs to cure diseases. The grand project ended in a
whimper when it was discovered that there were only about 23,000
genes in the human genome—about the same number as in the lowly
roundworm. The Human Genome Project was certainly money well spent,
but its limited success marked the end of the standard deterministic
model of biology championed by the likes of Francis Crick, James
Watson, and Richard Dawkins. Though strict determinism had been
banished from the field of physics with the advent of quantum theory
over a century ago, biologists have stubbornly held on to the
outmoded deterministic model.
A more purposefully driven top-down paradigm called epigenetics
has emerged to temper the deterministic model of biology. As
biochemists are beginning to understand, genes do not control our
individual destinies, nor are they primarily responsible for
evolutionary change. Genes are not self-emergent; they cannot turn
themselves on and off, they are not the control center of cellular
activity, and they are not self-replicating. No one-to-one
correspondence exists between DNA instruction and the creation of a
specific protein, since one gene can code for multiple proteins. In
addition, once a protein is constructed it can take on over 30,000
different folding configurations as dictated by the specific needs
of the cell. Genes are merely blueprints, the gonads of the cell,
whose processes are orchestrated by environmental signals from the
brain of the cell—the cell membrane. Communication with the
organelles of the cell is accomplished by means of thousands of
protein-based receptor and effector switches embedded in the lipid
plasma membrane. Signals sent from outside the cell are received by
receptor proteins that modify their shape to connect with the
effector proteins. The effector proteins then send secondary signals
through the cytoplasm resulting in the regulation of cell
metabolism, a fact that the authors call the real secret of life.
The idea of environmentally controlled, top-down genetic information
runs counter to two of Darwinian evolution’s most cherished
principles: 1) adaptation (survival of the fittest)—that evolution
occurs only as a result of a species’ ability to survive and pass on
its genes to the next generation, and 2) random mutation—that
favorable traits are expressed in an organism only as a result of
random mutations in the process of genetic replication and coding,
and that this information flows in only one direction, from DNA to
RNA to proteins. This is the essence of genetic determinism.
According to biologist Lynn Margulis, however, evolution has rarely
occurred in a Darwinian or Malthusian way in which species battle
for limited resources. Paleontological history demonstrates that
most evolutionary advances occur as a result of cooperation and
symbiotic relationships. Simple prokaryotic bacteria evolved by
banding together into communities protected by a biofilm membrane.
By doing this they were able to enhance their survivability by
sharing genetic information, specializing in task functions, and
increasing their collective awareness. A major evolutionary advance
occurred when these loose communities of prokaryotes underwent
further specialization by creating organelles such as mitochondria,
ribosomes, and nuclei within a single large cytoskeletal membrane
giving the eukaryotic cells thousands of times more membrane surface
area than their precursors and vastly increasing their awareness.
During the Precambrian period, simple colonies were able to increase
their awareness by banding together into mats or layers of identical
cells, such as the stromatolites that produced the oxygen in our
early atmosphere. The law of diminishing returns put a limit on the
size of these communities, and as a result, various cells within the
community began to group into specialized epithelial, bone, muscle,
and brain cells to carry out specialized tasks. As Margulis points
out, evolution did not proceed by struggle, but by networking, an
ongoing process that, according to the authors of this book, will
happen in a similar way for our own destiny, not by producing a new
species of man, but as a result of increasing levels of communal
complexity and interrelationships.
In the late 1960s, geneticist Howard Temin at the University of
Wisconsin challenged the second tenet of Darwinian evolution—the
postulate that DNA information can travel in only one direction—when
he suggested that RNA information could be transcribed from the RNA
molecule into an organism’s DNA. He was awarded the Nobel Prize in
Physiology or Medicine in 1975 for discovering reverse
transcriptase, the enzyme that does exactly that. Information from
viruses can enter a cell and change the DNA of the host organism,
demonstrating that information can flow from the environment into
the DNA of a cell.
A second, broader challenge to the dogma of random mutation was
established as a result of experimental research conducted by John
Cairns in 1988. Cairns placed bacteria that were unable to
metabolize lactose in a medium where only lactose was present. It
was assumed that the colonies of bacteria would perish, but
information from their environment was feeding back into the
organisms and accelerating the bacteria’s mutation mechanisms. It
soon became apparent that stressed, non-dividing bacteria can
purposely engage a unique error-prone DNA copying enzyme to make
mutated copies of genes associated with a particular dysfunction.
Through this process of generating genetic variants, the organism
attempts to create a more functional gene that will allow it to
overcome the environmental stressors. This purposeful, accelerated
generation of random mutations is called somatic hypermutation.
When one of these gene variants is able to produce a protein product
that can effectively resolve the stress, the bacterium cuts the
original ineffective gene out of the chromosome and replaces it with
the new version. So, yes, random mutations do occur, but those
random mutations can be purposefully accelerated through awareness
of the environment. This process is a reflection of quantum physics’
discovery that a single reality can be created from a probability
wave by simple observation or measurement. Our biological destiny is
driven by bottom-up determinism and top-down intelligence acting in
a complementary interplay of both processes. The principle of
complementarity, established by Niels Bohr in physics, has gained a
foothold in many other disciplines and is likely to be a major
factor in answering many mysteries in biology.
Perhaps it should not be so surprising that biology strayed so far
afield from the philosophical niche that physics has carved out of
reality. After all, it is difficult to do objective science when
studying the very principles that create the scientist and life in
general. We humans are attempting to study ourselves with the same
tools that created us. It is time to step out of our shells, just as
humankind did during the enlightenment that followed the Copernican
Revolution. Before that revolution, humanity believed we were the
center of our solar system and of the entire universe. Thereafter,
humankind came to realize that we were not the center of our solar
system, the center of our galaxy, nor the center of the universe. We
are simply
self-repeating patterns in an intelligent fractal universe with no
central control. With this in mind, we might imagine that the next
great paradigm shift will occur when we entertain the possibility
that we are not even at the controls of our own egos. Ninety-five
percent of our decisions, actions, and emotions are unconscious,
subject to programming established from the time we were in the womb
to the age of six. Does the idea of free will even make sense under
these circumstances? What do we want to be free from—ninety-five
percent of our cognitive being? We must acknowledge that there is no
demarcation between the observer and the observed, and this was the
essence of the great battle between Albert Einstein and Niels Bohr.
Einstein felt that the universe exists independent of observation
and that the field was the ultimate reality. He stated,
“There is no place in this new kind of physics both for the field
and matter, for the field is the only reality.” The authors
apparently agreed with this statement when they equated the concept
of the field to what happens to entangled particles in quantum
physics experiments. They state for example, “No structure from a
drop of water to a human being can ever be separated from the field
which is the source.” The concepts of a field and of nonlocal
actions in quantum theory are completely different, however. A field
is something physical that obeys the laws of thermodynamics and
relativity. On the other hand, nonlocal actions, as described by
John Stewart Bell in his inequality theorem, are not physical; they
are not part of a field, and they do not obey the laws of
thermodynamics or relativity. Explaining our existence as part of
some morphogenic field is nothing more than reification. It explains
nothing. Rather, what is more convincing is the authors’ contention
that a cell and a human body are self-similar fractal images that
share self-similar functions. The functions that the organs of our
bodies carry out—awareness, digestion, respiration, and
reproduction—are also carried out by most of the 50 trillion cells
in our bodies. We are a community of cells.
Lipton and Bhaerman explain how important our belief systems have
been throughout recorded history. According to archaeologists and
historians, four major paradigms have reigned, and each has had its
own belief system reflecting its response to three of the major
questions that have beset mankind since the beginning of
civilization. The four major paradigms are animism, polytheism,
monotheism, and materialism, and each had its own answers to these
key questions: 1) How did we get here? 2) Why are we here? 3) Now
that we are here, how do we make the best of it?
Animism, which had its origins around 8000 B.C.E. and is founded on
the belief that spirit is universal, existing in all things animate
or inanimate, with little distinction between the self and the
environment, answered the questions this way: 1) We are children of
mother earth and father sky. 2) We are here to tend the Garden and
thrive. 3) We are here to live in balance with nature.
Polytheism, which came to prominence in the Greek era around 2000
B.C.E., separated spirit from matter into an earthly realm and a
heavenly realm ruled by gods representing nature’s elements.
Polytheism answered the three questions this way: 1) We came from
chaos. 2) We are here to please the whimsy of the gods. 3) We should
not anger the gods. Monotheism followed, ushering in the belief in
an omniscient, omnipotent, and omnipresent universal God, and
answered the questions this way: 1) We got here by divine
intervention. 2) We live here to serve God. 3) We should obey the
scriptures. And
finally, our present paradigm, materialism, answers the questions
this way: 1) We got here through random events. 2) We are here to
enjoy life, be creative, and proliferate. 3) We live by the laws of
the jungle. New scientific advances in many disciplines dispute
materialism’s answers to the three major questions listed above. It
is quite evident that we are on the verge of yet another paradigm
shift.
Beliefs are not just incidental to our thinking processes; they are
the creators of physical reality in many respects. To demonstrate
this concept, Lipton and Bhaerman recount several placebo
experiments, of which I will mention only one to emphasize the
effect of our belief systems on physical reality. In one study,
Japanese children
who were allergic to poison ivy-like plants took part in an
experiment in which they were told a poisonous plant leaf would be
rubbed onto their right forearm and a nonpoisonous plant leaf that
looked similar to the poisonous plant would be rubbed onto their
left forearm. Unknown to the children, the plants were switched and
the left forearm was rubbed with the poisonous plant, and the right
forearm was rubbed with the nonpoisonous plant. In most of the cases
the left forearm treated with the poisonous plants did not produce a
rash, while the right forearm treated with the nonpoisonous plant
broke out in a rash! It was the suggestion alone that produced the
rash on the right forearm where the innocuous plant was rubbed, and
the lack of a rash response on the left forearm where the poisonous
plant was rubbed. The authors ask us to consider the percentage of
our illnesses and diseases that might be attributed to negative
thoughts.
This far-flung and wide-ranging work contains almost too much to put
between the covers of one book, making it difficult for any reviewer
to encapsulate its essence. Much more lies between its covers, and I
will leave it to the reader to discover the treasure trove of
interesting and well-documented material.
The Cosmic Bridge:
Close Encounters and Human Destiny
Craig Lang