




The Field: The Quest for the Secret Force of the Universe

Lynne McTaggart

A field is a matrix or medium that connects two or more points in space, usually through a force like gravity or electromagnetism. An electromagnetic field, for example, is simply an electric field and a magnetic field intersecting, sending out waves of energy at the speed of light. Einstein recognized that matter is an intense disturbance in the Zero Point Field, a repository of all the fields and particles in the universe, a field of fields. He once said that the Zero Point Field is the only reality, and physicist Richard Feynman said that there is enough energy in a single cubic meter of the Zero Point Field to boil all the oceans of the world.

Hal Puthoff, a self-made man who graduated from the University of Florida and co-authored a book on quantum electronics, received a Ph.D. in electrical engineering at Stanford University, where he was involved in laser research. Puthoff first became interested in the Zero Point Field when he read a paper by Timothy Boyer of the City University of New York. Boyer believed that, if the Zero Point Field were taken into account, classical physics could account for reality without invoking some of the more bizarre aspects of quantum theory. For example, the hypothesis developed by Boyer and Puthoff proposed that the stability of the hydrogen atom, and of all matter, can be explained by a constant exchange of energy with the Zero Point Field. Puthoff went further and used the Zero Point Field to explain inertia and gravity in a revolutionary way.

Though Newton described the effects of inertia on moving bodies with his three laws of motion, physicists have remained perplexed as to exactly what inertia is. What does an accelerating object push against? Puthoff said the presence of the Zero Point Field could provide the answer. Inertia, he said, is simply resistance to being accelerated through the Zero Point Field. This resistance arises when the charged subatomic particles of matter move through a magnetic component of the Field. The larger the object, the more particles it contains and the more resistance it encounters. There is no mass, according to Puthoff, only electric charge. The force of gravity, in turn, could be explained by a mutual shielding of the Zero Point Field, an effect analogous to the Casimir effect. Two objects are not pulled toward each other but pushed from all directions, with the least force coming from the direction of the opposing object. As a result, each shields the other and they are pushed together.

Puthoff understood that if all the subatomic particles in the universe are interacting with the Field, then the subatomic waves of the Field are constantly imprinting a record, a memory, of everything in the universe.

All particles interact with each other by exchanging energy through quantum particles that are believed to appear out of nowhere, combining and annihilating one another and causing random fluctuations. The fleeting particles generated in this process are known as 'virtual particles'; they exist only during that exchange, within the window of "uncertainty" allowed by the uncertainty principle. Puthoff believed that if one could tap into the Zero Point Field, it would be an endless supply of free, non-polluting energy.

Fritz-Albert Popp, a theoretical biophysicist at the University of Marburg in Germany, discovered that each and every molecule in the universe has a unique frequency. This fact has helped astronomers determine the composition of stars and planets, since these frequencies (their spectra) can be detected across light-years. More surprisingly, Popp found that coherent light was being emitted from biological systems and that DNA was capable of sending out a large range of frequencies, some of which seemed to be linked to particular functions. This was astonishing because coherent light was believed to be emitted only by materials such as superfluids or superconductors, in which a substance cooled to nearly absolute zero has all its particles behaving as one, a state known as a Bose-Einstein condensate.

Since every molecule has a unique frequency, and water is so essential to life, scientists began to wonder whether the frequencies of water molecules could explain how biochemical reactions occur so quickly. Biochemical chain reactions happen almost instantaneously and, as it turns out, this is because the molecules in a chain all tune into and vibrate in unison with frequencies in the audible range of about 20 hertz.

French scientist Jacques Benveniste wondered if the frequencies of water molecules could explain the mechanism by which highly diluted substances retain their potency, so in the 1980s he set up laboratory experiments to test the efficacy of homeopathic medicine. In the process of preparing a homeopathic medicine, an active substance is put into a solution of water and incrementally diluted, with the solution shaken vigorously at each step. After multiple dilutions, little or none of the active ingredient remains in the final preparation. Benveniste found that the diluted preparation still exerted a potent effect on other chemicals even when not a single molecule of the active substance remained in solution. He hypothesized that the water molecules act as a template, vibrating at the signature frequency of the active ingredient that had once been present.
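To see why this result startled everyone, it helps to run the dilution arithmetic. Below is a minimal sketch in Python (assuming, purely for illustration, a "30C" protocol of thirty successive 1:100 dilutions and a starting dose of one mole of active substance; both figures are my assumptions, not Benveniste's):

    # Serial-dilution arithmetic: a minimal sketch.
    # Assumed protocol: thirty successive 1:100 ("centesimal") dilutions,
    # starting from one mole of active substance. Both assumptions are
    # illustrative, not taken from Benveniste's papers.
    AVOGADRO = 6.022e23  # molecules per mole

    molecules = AVOGADRO
    for step in range(1, 31):
        molecules /= 100.0  # each dilution keeps 1 part in 100
        if molecules < 1.0:
            print(f"After dilution {step}, fewer than one molecule remains on average.")
            break

    # Since 100**12 = 1e24 exceeds Avogadro's number (~6e23), the expected
    # count falls below a single molecule by the twelfth dilution, which is
    # why a persisting biological effect was so startling.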

If water molecules speak to each other through vibrational frequencies, as he and Popp theorized, Benveniste wondered if molecular substances even needed to be in physical contact with each other. Over thousands of experiments, Benveniste recorded the vibrational activity of molecules on a computer and replayed the frequency to biological systems ordinarily sensitive to that substance. In every instance, the biological systems were fooled into thinking they had been interacting with the substance itself and acted accordingly, initiating biological chain reactions just as if they were in the actual presence of the genuine molecule.

Researchers have found that water molecules form single-wavelength coherent domains that make them "aware" of other molecules: they polarize around charged molecules, storing and carrying their frequencies. Water is like a tape recorder, imprinting and carrying information whether the original molecule is still there or not. The shaking of the containers, as is done in homeopathic preparation, appears to speed up this process. Benveniste's own studies demonstrate that molecular signals cannot be transmitted in the body without water as the medium. Water acts as the essential conductor of a molecule's signature frequency in all biological processes, and water molecules organize themselves to form a pattern on which wave information can be imprinted. If water molecules act as receivers and senders of information through vibrational frequencies, can this give us a clue as to how the brain processes information from the environment?

In the 1920s Wilder Penfield thought he had discovered that memories had a particular location in the brain. In experiments using human subjects, he stimulated their brains with electrodes which elicited scenes from the subject’s past in great detail. Stimulating the same area of the brain always produced the same flashback with the same level of detail.

Years later, Karl Lashley, a renowned American neuropsychologist, attempted to find the exact location in the brain where such memories are stored. In trials spanning nearly thirty years, Lashley experimented with rats that had been trained to perform certain tasks, deactivating parts of their brains with a hot curling iron. By this technique of elimination, he believed he could find the area of the brain where memories are stored; but no matter where or how much of a rat's brain he destroyed, the rat could still perform the tasks it had learned.

In other experiments, Lashley discovered that he could sever virtually all of a cat's optic nerve without apparently interfering whatsoever with its ability to see and comprehend. It appeared that vision was not like a camera recording an image on the retina: if only a small part of the visual apparatus was left intact, the cat's vision seemed to be normal.

In over 700 laboratory experiments, Paul Pietsch carried out research similar to Lashley's and found that when he removed the brains of salamanders they became comatose, but when he put the brains back the salamanders resumed normal function. In several experiments he reversed, cut out, sliced away, shuffled, and even sausage-ground the brains before returning them, and still the salamanders resumed normal function.

These experiments provided Karl Pribram, a researcher and neurosurgeon, with an idea. Performing his own experiments with monkeys and cats, Pribram became convinced that no images were being projected internally and that information must be distributed more holistically. He came up with the idea that information from the environment was being read by transforming ordinary images into wave interference patterns, and then transforming them again into virtual images, just as what happens in holography.

In a classic laser hologram, a laser beam is split. One portion is reflected off an object; the other is reflected by several mirrors. The two are then reunited and captured on a piece of photographic film, which records the interference pattern of these waves, a series of concentric circles. When laser light is shone through the film, a three-dimensional virtual image appears to float in space. If the film is cut into multiple pieces and laser light is shone through any one piece, the entire image still appears, albeit a little hazier. It was the unique ability of quantum waves to store vast amounts of information that interested Pribram. For him, the hologram provided a mechanism that seemed to replicate how the brain actually works. In a sense, holography is just a convenient shorthand for wave interference, the language of the Field.
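The "series of concentric circles" is just the recorded intensity of the two summed light waves. A short numpy sketch (with an idealized geometry chosen for illustration: the spherical wave from a single object point interfering with a plane reference wave) reproduces the ring pattern:

    import numpy as np

    # Interference pattern recorded by holographic film: a minimal sketch.
    # Idealized setup (my assumption): a plane reference wave meets the
    # spherical wave scattered by a single point on the object.
    wavelength = 0.5                 # arbitrary units
    k = 2 * np.pi / wavelength       # wavenumber
    z = 50.0                         # distance from point object to film

    x = np.linspace(-10, 10, 512)
    X, Y = np.meshgrid(x, x)
    r = np.sqrt(X**2 + Y**2 + z**2)  # path length to each point on the film

    reference = np.exp(1j * k * z) * np.ones_like(r)  # plane wave
    object_wave = np.exp(1j * k * r) / r              # spherical wave

    intensity = np.abs(reference + object_wave) ** 2  # what the film records
    # Plotted, `intensity` shows concentric bright and dark rings. Every
    # patch of film receives light from the whole object, which is why a
    # cut fragment still reconstructs the entire (hazier) image.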

Pribram says that when we look at something, we don't see the image in the back of our heads or on the backs of our retinas but in three dimensions, out in the world. It must be that we are creating and projecting a virtual image of the object out into space, in the same place as the actual object, so that the object and our perception of it coincide. As with a hologram, the lens of the eye picks up certain interference patterns and then converts them into three-dimensional images.

According to Pribram’s theory, when you first notice something, certain frequencies resonate in the neurons in your brain. These neurons send information about these frequencies to another set of neurons. The second set of neurons makes a Fourier translation of these resonances and sends the resulting information to a third set of neurons which then begins to construct a pattern that makes up the virtual image.
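Whatever one makes of the neurophysiology, the Fourier step itself is easy to demonstrate: a signal can be decomposed into its component frequencies and rebuilt from them without loss. A toy round trip in Python (purely illustrative; it claims nothing about real neurons):

    import numpy as np

    # Toy version of the forward-and-reverse Fourier step in Pribram's
    # account: pattern -> frequency components -> pattern. Illustrative
    # only; no claim is made about actual neural circuitry.
    t = np.linspace(0, 1, 1000, endpoint=False)
    pattern = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

    spectrum = np.fft.rfft(pattern)   # decompose into component frequencies
    rebuilt = np.fft.irfft(spectrum)  # reconstruct the original pattern

    assert np.allclose(pattern, rebuilt)  # the round trip is lossless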

Stuart Hameroff, an anesthesiologist at the University of Arizona, had been thinking about how anesthetic gases turn off consciousness. It fascinated him that gases with disparate chemistry could all bring about loss of consciousness. Hameroff guessed that general anesthetics must interfere with the electrical activity within microtubules, and that disrupting this activity is what switches consciousness off.

Microtubules are the scaffolding of the cell, maintaining its structure and shape. These microscopic hexagonal lattices of fine protein filaments, called tubulins, form tiny hollow cylinders of indefinite length. Thirteen strands of tubulin wrap around the hollow core in a spiral, and the microtubules in a cell radiate outward from its center to the cell membrane like the spokes of a cartwheel. These structures act as tracks for transporting various products along cells, particularly in nerve cells, and they are vital for pulling apart chromosomes during cell division.

Microtubules appeared to be exceptional conductors of pulses: pulses sent in at one end traveled through pockets of protein and arrived unchanged at the other. Hameroff also discovered a great degree of coherence among neighboring tubules, so that a vibration in one microtubule would tend to resonate in unison throughout its neighborhood.

It occurred to Hameroff that the microtubules within dendrites and neurons might be "light pipes" acting as waveguides for photons, sending these waves from cell to cell throughout the brain without loss of energy. They might even act as tiny tracks for light waves throughout the body. Like Pribram, Hameroff concluded that brain processes occur at the quantum level; his equations showed that the dendritic networks in the brain operate in tandem through quantum coherence.

Pribram, Yasue, Hameroff, and Scott Hagan, of the Department of Physics at McGill University, assembled a collective theory about the nature of human consciousness. According to their theory, microtubules and the membranes of dendrites represent the Internet of the body: every neuron of the brain can log on at the same time and speak to every other neuron simultaneously via the quantum processes within.

Microtubules help to marshal discordant energy and create global coherence of the waves in the body, allowing these coherent signals to pulse through the rest of the body. Once coherence is achieved, photons can travel along the light pipes as if they were transparent: they can penetrate the core of a microtubule and communicate with other photons throughout the body, producing collective cooperation of the subatomic particles in microtubules throughout the brain. If this is the case, it would account for the unity of thought and consciousness.

Hameroff observed that electrons glide easily along these light pipes without getting entangled in their environment or settling into any one state. This means they can remain in a quantum state, a condition of all possible states, enabling the brain to choose among them, which provides a possible explanation of free will. At every moment, our brains are making quantum choices, taking potential states and making them actual ones.

Dean Radin performed experiments with subjects who were hooked to physiology monitors recording skin conductance, heart rate, and blood pressure. A computer randomly displayed color photos designed either to calm, shock, or arouse. What Radin discovered was that his subjects were anticipating what they were about to see, registering physiological responses before disturbing or erotic photos appeared, possibly indicating that time and space are illusory.

Helmut Schmidt conducted experiments with a machine that randomly produced ones and zeros and converted them into clicks sent to headphones. The participant's job was to attempt to influence the output by producing more clicks in either the left or the right ear, and participants succeeded in doing so. Wanting to determine whether time and space influenced the outcome, Schmidt recorded random clicks on tape and asked the participants to perform the task again, this time using the prerecorded tape. To his surprise, the subjects were able to influence the outcome with the same degree of success. Many subsequent tests backed up these results, although the influence was small.

Schmidt stated that the participants didn't change what "had" happened; they affected what happened in the first place. Present or future intentions act on initial probabilities and determine which events actually come into being. These results were especially disturbing to Hal Puthoff, because he thought of the Zero Point Field as an electromagnetic field of cause and effect. Instead, these experiments suggest that we live in a holographic universe, where information stored on a two-dimensional surface projects our three-dimensional reality and where time and space are emergent.

Ervin Laszlo, director and founder of The Laszlo Institute of New Paradigm Research and author/co-author of 90 books, proposes that the Zero Point Field provides the ultimate holographic blueprint of the world for all time, past and future. It is this that we tap into when we see into the past or future. The mere act of observation creates consciousness and consciousness creates time and space.



Digital Physics: The Meaning of the Holographic Universe and its Implications Beyond Theoretical Physics

Ediho Lokanga

Dr. Lokanga is an associate member of the Institute of Physics and is engaged in doctoral research in theoretical physics and computation. He and his family reside in the United Kingdom.

Ediho Lokanga says that a new and rich paradigm, which he calls the quantum holographic informational universe, has evolved from research in the seemingly disparate fields of information theory, astrophysics, holography, and quantum mechanics, together with recent discoveries in brain chemistry.

One of the first clues that the universe might be holographic came from information theory, where it became understood that information is physical. This idea provided the framework for recognizing that Ludwig Boltzmann's entropy theory of matter and energy and Claude Shannon's information theory, measured in 'bits,' are conceptually and mathematically equivalent. As Lokanga describes it, "The total quantity of information measured in bits is related to the total number of degrees of freedom of matter-energy."

Since information is physical, and since the first law of thermodynamics maintains that energy, and therefore information, is conserved, physicist John Wheeler wondered how a black hole could permanently destroy information, thereby violating the laws of thermodynamics. Wheeler's student Jacob Bekenstein proposed that the entropy of a black hole is proportional to the area of its event horizon, so that if an object falls into a black hole and is absorbed, the surface area of the black hole increases by an amount equal to or greater than the entropy, or information, carried by the falling object.
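In symbols, Bekenstein's proposal (with the numerical coefficient later fixed by Hawking) ties a black hole's entropy to the area A of its event horizon measured in units of the Planck length:

    S_{BH} = \frac{k_B c^3 A}{4 G \hbar} = k_B \frac{A}{4 \ell_P^2}, \qquad \ell_P = \sqrt{G \hbar / c^3}

So when an object carrying entropy falls in, the horizon area must grow by at least enough to compensate, and the total entropy of the hole plus its surroundings never decreases.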

Bekenstein's proposal led to mathematical work by Stephen Hawking and Leonard Susskind showing that all of the information inside a black hole, in fact inside any volume of space, can be completely described by the information encoded on its surface area, without reference to its volume. This became known as the holographic principle, possibly the most significant idea in physics of the last thirty years.

The comparisons between the holographic principle, as it relates to black holes and our universe, and the optical hologram became too obvious to ignore. In a typical optical hologram, laser light is split into two separate beams by a prism. One beam strikes a material object, such as a bicycle, and is reflected toward a photographic plate; the other is reflected off a mirror toward the same plate. If the two paths are of equal length, an interference pattern is created on the plate. When a second laser is shone through the photographic plate, a complete three-dimensional image of the bicycle appears, seemingly floating in space. If the plate is broken into hundreds of pieces and laser light is shone through any one of them, a complete three-dimensional image of the bicycle will still appear, demonstrating that the whole is recorded in each and every piece of the hologram, down to a single wavelength of light. Since an enormous amount of information can be stored as waveforms on a two-dimensional photographic plate, and since it has been discovered that all of the information in a black hole can be stored on its surface area, it doesn't take too big a leap of imagination to wonder whether we are a three-dimensional projection of a hologram that makes up the boundary of our universe.

The holographic principle has found its way into theories of the brain and memory. Through decades of experimentation with rats beginning in the 1920s, Karl Lashley discovered that no matter how much of a rat's brain he destroyed, the rats were still able to remember tasks they had learned before the surgery. Lashley also found that severing most of a cat's optic nerve did not affect its vision or its ability to comprehend. This research led him to the realization that memories could not be stored in any particular area of the brain.

This evidence provided researcher and neuropsychologist Karl Pribram with an idea. Performing his own experiments with monkeys and cats, Pribram became convinced that no images were being projected internally and that information must be distributed more holistically. Pribram believes that the image of an object forms on the retina of the eye and is then converted into the spectral domain via a Fourier transformation. This information causes neurons to fire, distributing the information holistically across the brain. A reverse Fourier transformation then takes place, and the image is projected back out into the environment as a spatial three-dimensional object, just as happens when a laser is shone through a hologram. Pribram realized that his ideas were similar to physicist David Bohm's concepts of the implicate order and the holographic universe, and the two collaborated to formulate the holonomic theory of the brain.

Because all of the above disciplines are converging to suggest that we might live in a holographic universe, more and more scientists in various fields of study are beginning to accept the possibility that the illusory reality we perceive is a projection from a universal hologram representing a deeper more fundamental reality.


[Lokanga has a propensity for using an astounding number of abbreviations throughout the book (CMBR, HP, HU, QI, QM, QG, etc.). This style might be convenient for the writer, but it is, at times, a confounding aggravation for the reader.]




Why Materialism Is Baloney

Bernardo Kastrup

Bernardo Kastrup has a Ph.D. in computer engineering with specializations in artificial intelligence and reconfigurable computing. He has worked as a scientist in some of the world's foremost research laboratories, including the European Organization for Nuclear Research (CERN) and the Philips Research Laboratories.

During the heyday of British empiricism in the seventeenth and eighteenth centuries, philosophers such as John Locke, George Berkeley, and David Hume found the philosophy of idealism a difficult sell; as legend has it, it was naïvely challenged by a materialist who kicked a boulder and declared, "I refute it thus." That was an era preceding the quantum revolution, when the philosophy of materialism and realism, advocating a one-to-one correspondence between objective reality and perception, seemed self-evident; a time before it was understood that matter is not "solid" but mostly empty space; before the understanding that an observer can change reality instantaneously across space and time; and before the realization that knowledge is limited by indeterminacy. It turned out that reality was not so objective after all. Physicist and mathematician James Jeans pronounced "...that the stream of knowledge is heading toward a non-mechanical reality; the universe begins to look more like a great thought than like a great machine."

These post-quantum-revolution ideas are certainly changing our view of reality, but they are nevertheless still based in materialism. Modern science considers matter and information to be fundamental, and consciousness, if it is acknowledged as a phenomenon at all, is thought by most to be an emergent property of matter. For example, the philosophy of epiphenomenalism asserts that consciousness is a result of neural processes in a one-to-one correspondence with certain brain states, but recent studies indicate that there is no consistency in this relationship, and even if such a relationship exists, correlation does not necessitate causation.

Materialism simply cannot explain the "hard" problem of consciousness, which asks how a two-and-a-half-pound mass of tissue creates our subjective experience of sight, sound, touch, smell, and taste. Materialism must also confront the fact that nothing physical, including information, can have objective reality without subjective experience. To suggest otherwise implies the paradoxical notion that matter, from which consciousness apparently arises, requires consciousness to exist a priori.

Bernardo Kastrup proposes a philosophy that more closely coincides with observation. He suggests that mind is a fundamental aspect of the universe and that matter, including our bodies and brains, consists of localized regions of the universal mind, in the same way that whirlpools are self-sustaining, localized regions of a river. As Kastrup notes, the mind is not in the brain; rather, the brain is in mind. Our brains simply act as localized filters that select information from the universal mind, just as a radio receiver selects a particular station from among multiple broadcast signals.

Kastrup makes a convincing argument for idealism. I highly recommend this book for those looking for a consistent view of reality.



How to Change Your Mind

Michael Pollan

Those of us who grew up in the 1960s and were part of the counterculture believed at the time that Harvard professor turned guru Timothy Leary was at the forefront of the psychedelic revolution. But by the time he established the Harvard Psilocybin Project in 1960, there had already been a decade of research accompanied by hundreds of academic papers.

Michael Pollan gives a thorough and balanced account of the history of psychedelics, from the Swiss chemist Albert Hofmann, who synthesized LSD while working at the Sandoz pharmaceutical laboratories, to Al Hubbard, who became the Johnny Appleseed of LSD, travelling the country with his liter-sized satchel of pure Sandoz LSD in a quest "to liberate human consciousness" by turning on the elite of Hollywood and the movers and shakers of Silicon Valley, including Steve Jobs, who called his consumption of LSD one of his most profound life experiences. The account ends with Timothy Leary himself, whom most blame for the dramatic end of all academic research into psychedelics.

Leary fanned the flames of moral outrage when, in January of 1967, he and Allen Ginsberg were invited to speak at the first Be-In, in San Francisco's Golden Gate Park, attended by approximately twenty-five thousand people, most of whom were tripping on freely distributed LSD. Leary stirred the passion of the crowd with his oft-repeated mantra: "Turn on, tune in, and drop out." He proclaimed that the young were no longer going to fight the establishment's wars or join big corporations. No wonder he became a target of Richard Nixon and his so-called "silent majority."

Just five years later, all academic research had collapsed or been driven underground by the political and moral panic that swept the country. It would take another 13 years before another federally sanctioned trial, involving 60 subjects, was carried out, at the University of New Mexico under Dr. Rick Strassman, who injected subjects with the potent psychedelic compound DMT.

In the 1950s, before the collapse of academic research, psychedelics were used to treat many conditions, including autism, alcohol addiction, obsessive-compulsive disorder, and anxiety, with remarkable success. Pollan writes, "A 1967 review article summarizing papers about psycholytic therapy…estimated the technique's success rate ranged from 70 percent in cases of anxiety neurosis, 62 percent for depression, and 42 percent for obsessive-compulsive disorder." Obviously, much is to be gained by continued research into the benefits of these compounds for mental health, but the lessons learned about these molecules, collectively called 'tryptamines,' and their effects on consciousness and the nature of reality may be their greatest contribution.

In his decades of experience, both personal and as a guide for others on psychedelic journeys, Bill Richards, who has degrees in both psychology and divinity, says he has come away with several convictions: a psychedelic journey is not something we generate; it is not a property of the brain; it is a property of the universe at large.

Michael Pollan says of his experiences with these substances, "I have no problem using the word 'spiritual' to describe elements of what I saw and felt, as long as it is not taken in a supernatural sense. For me, 'spiritual' is a good name for some of the powerful mental phenomena that arises when the voice of the ego is muted or silenced." And at the conclusion of his experiments, Rick Strassman said he was forced to consider that perhaps DMT does not cause hallucinogenic experiences but rather allows our brains to sense different forms of existing reality. Both he and Pollan agree that every trip is different, and that we don't necessarily get the trip we want, but the trip that we need.

Michael Pollan’s conversational style makes this massive work of over 400 pages a joy to read. So, “Turn off your mind, relax and float downstream.” (John Lennon)





The Case Against Reality

Why Evolution Hid the Truth from Our Eyes

Donald Hoffman
Donald Hoffman is professor of cognitive science at the University of California, Irvine.

Expressing his misgivings about the meaning of quantum theory, Albert Einstein wrote a letter to his friend Max Born saying in part, “I can’t believe that the moon doesn’t exist when I don’t look.” We don’t know how Born responded, but we can imagine how Donald Hoffman might have replied. “Albert, it’s true, the moon is not there when no one looks.”

At first glance this seems incredible, but fundamentally it is what quantum experiments have unequivocally demonstrated. Many philosophers claim, for instance, that we could never prove whether or not something exists when no one looks, but Hoffman points out that John Stewart Bell's 1964 theorem, and the experiments it inspired, showed that an unobserved electron has no definite spin. Similar experiments suggest that quantum particles exist only when observed.

Hoffman does not deny objective reality. He believes that something is there when we don’t look, but it is nothing like what we perceive. In fact, evolution has hidden the truth about reality in favor of fitness. In every instance, mathematical models have proven that organisms that perceive truth over fitness go extinct.

Hoffman uses the metaphor of a computer to illustrate how an interface hides the truth in favor of survival. Without the help of an interface, he says, a computer would be useless to most of us: we would have to use binary code for every number and every letter of the alphabet, and it could take hours just to compose a couple of sentences. Thanks to the interface, we can type letters with the stroke of a key and launch entire programs with a click of the mouse. Like the reality we perceive, the interface and the icons on the computer screen hide the truth of how the computer works in favor of simplicity and productivity. As Hoffman says, fitness beats truth in the game of evolution. But one wonders: hasn't science delved into the nature of the physical world with its particle accelerators, giving us a window into deep reality?

Yes, Hoffman would say, but until now, all of our knowledge about the physics of material reality, atoms, quarks, fields, and the space/time they occupy, and all of our knowledge about life, including DNA and neural processes, have been nothing more than the science of icons. Hoffman says, “Space, time, and the physical objects are not objective reality. They are simply the virtual world delivered by our senses to help us play the game of life.”

Many prominent physicists agree, proclaiming that space/time is doomed. Physicist Leonard Susskind, who along with Gerard 't Hooft developed the holographic principle, says, "The three-dimensional world of ordinary experience—the universe filled with galaxies, stars, planets, houses, boulders, and people—is a hologram, an image of reality coded on a distant two-dimensional surface."

Hoffman uses many examples of visual illusions to show that our brains do not recreate the world “out there” as most neuroscientists believe; rather, we construct reality according to our species-specific interface giving us survival fitness points. We humans agree for the most part on our perception of reality due to our similar interfaces. Other animals are likely to have different interfaces giving them different perceptions of reality allowing them to navigate through their world. The reality we construct, according to Hoffman, is a result of ‘conscious agents’ interacting, not the result of neural processes that seem to correlate with our experiences.

No scientific, materialistic theory has emerged, nor will it ever emerge, according to Hoffman, that gives the slightest hint as to how three pounds of matter inside our skulls gives us the experience of the taste of chocolate or the experience of a sunset. Hoffman states, “As we have discussed, all attempts at a physicalist theory of consciousness have failed…you cannot cook up consciousness from unconscious ingredients.”

Though counterintuitive, Hoffman's fitness-beats-truth hypothesis is consistent with quantum experiments, evolution, and the holographic principle. I believe his ideas will become mainstream with the passage of "time" in our never-ending search for deep reality.



NOW

The Physics of Time

Richard A. Muller
(Kindle version)

The concept of now, which we take for granted, worried Albert Einstein immensely, because he realized that it is different from the past and the future and that its definition might lie outside the purview of physics. Richard Muller agrees with Einstein's sentiments and points to Einstein's own theory of relativity, not the concept of entropy, as the key to understanding the meaning of now.

Most physicists today accept Arthur Eddington's hypothesis that the Second Law of Thermodynamics drives time from the past to the future, but Muller believes that the relationship between time and entropy is only a correlation. He wonders why physicists rely on this tenuous correlation when every other law of physics, with the possible exceptions of the weak nuclear force responsible for nuclear decay and of quantum measurement, is time-reversible. Quantum measurement, nuclear decay, and the expanding universe all appear to have an arrow of time moving from past to future, prompting Muller to wonder whether one or all of these set time's arrow. But one thing he says with conviction is that Eddington's explanation is wrong.

Muller says that the Second Law of Thermodynamics, unlike the Law of Conservation of Energy, doesn't even rise to the level of a law, nor does it add anything to physics, because it is a trivial tautology: it simply restates the idea that whatever is more probable is more likely to happen. Even Eddington realized that entropy is not a primary law and does not stand on its own.
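The statistical reading Muller has in mind goes back to Boltzmann, for whom entropy is just a logarithmic count of the number of microscopic arrangements W compatible with a macroscopic state:

    S = k_B \ln W

A system drifts toward macrostates of larger W simply because there are overwhelmingly more of them; on this reading, saying that entropy tends to increase is saying that the probable is probable.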

Two years before Edwin Hubble found convincing evidence that the stars and galaxies are receding, a Belgian priest, Georges Lemaître, formulated the hypothesis that space itself is expanding. Time moves forward, according to Muller, because the early universe was, and still is, in an improbable state of low entropy, with lots of empty space to fill due to the expansion of space itself. As the universe expands, the microwave signature of the Big Bang fills more and more space but loses energy in the process. The net result is that entropy remains constant, yet time goes forward, in violation of the idea that entropy drives time's arrow.

A case in point: after the Big Bang, prior to the manifestation of the Higgs field, the universe was filled with massless particles, whose collective entropy does not change as the universe expands. Muller says that if the arrow of time were truly driven by an increase in entropy, time would have stopped!

If space expands, then according to relativity, time expands as well. The Big Bang was in essence an explosion of four-dimensional space/time, the leading edge of which we refer to as now. We shouldn't imagine this expansion as only a shock wave at the furthest boundaries of space/time, however; space/time is expanding uniformly, creating nows everywhere simultaneously. The expanding and accelerating universe is creating its future, which is our present, what we call now; the future, therefore, exists only as a probability. If this is true, according to Muller, relativity must be modified.

Muller goes off topic at times, but his long, illustrious career gives him license, and his digressions serve only to enlighten the reader.



The Time Illusion

The Arrow That Points but Does Not Move

John Gribbin
(Kindle version)

Around 500 B.C. the Greek philosopher Parmenides argued that change is an illusion, and his student Zeno of Elea took this idea to its extreme with his well-known paradoxes concerning an arrow in flight. Zeno said that at any instant the arrow occupies a given place; if it occupies a given place, it must be at rest at that instant and therefore cannot be in flight. His argument hinged on two suppositions: first, that time and space are objective; and second, that a finite interval cannot consist of an infinite number of parts.

Aristotle argued that time could indeed be divided into an infinite number of units, allowing continuous, smoothly flowing motion through a continuum of time. This notion went largely unchallenged until the mid-1920s, when the young German physicist Werner Heisenberg, in the work that would earn him a Nobel Prize, demonstrated that at the subatomic level particles exhibit discontinuous behavior. Matter does not move smoothly from one place to another. Does this mean that time does not flow?

In Albert Einstein's block universe, the past and future are real. Everything that has ever happened and everything that ever will happen is objectively present in some slice of the universe. Advocates of this philosophy of time, called Eternalism, believe that the future is fixed, free will is illusory, and time does not flow. Gribbin disagrees, arguing that the block universe does imply a flow of time, which prompts him to wonder whether something is wrong with a concept that suggests the future is fixed and free will illusory.

Gribbin notes that most of the laws of physics are time-symmetrical; Erwin Schrödinger's famous equations describing the behavior of electrons, for example, make no distinction between past and future. Yet Marina Cortês of the University of Edinburgh and Lee Smolin of Canada's Perimeter Institute have recently developed models in which certain events, when reversed, do not return to a previous state but instead give rise to entirely new events. Time continues to point in one direction and there is no going back, implying that there is indeed a direction of time. But does it flow? To answer this question, Gribbin turns to Julian Barbour, author of the popular book The End of Time.

Barbour set out to banish the notion of time, and since all of our ideas of time are associated with movement (the tick of a clock: one second; the revolution of the earth on its axis: one day; the earth's journey around the sun: one year), he realized that in order to banish time he must banish motion itself. Barbour proposes that there is no motion, only different snapshots, different configurations of the universe. Each "time capsule" contains information about its place in the temporal order of things, but there is no motion and no flow. Everyone's now is the same. This idea would probably have pleased Einstein, who worried about the notion of now within the concept of the block universe. Do experiments support Barbour's universe?

Gribbin says, "…Nobody has ever seen an atom in the act of changing from one state to another." For example, an atom in an excited state can release a photon and fall to a lower energy state. Mathematical equations describe the atom's energy in the excited state, and the shared energy state of the atom and photon after the drop to a lower orbital, but there is no mathematical description of the energy state of the atom and photon in transition.

Carefully controlled experiments have been devised to catch this transition from one state to the next, but all have failed to observe it. A watched pot never boils.

Gribbin describes several quantum Zeno effect experiments designed to capture the transition from one quantum state to another, but for brevity I will describe a simpler one, portrayed by Jeffrey M. Schwartz in his book The Mind and the Brain:
An ammonia molecule consists of three atoms of hydrogen and one atom of nitrogen, which sits either above or below the tripod of hydrogen atoms with equal probability. When no one is watching, the nitrogen atom is in a superposition of being both on top and on the bottom of the hydrogen atoms. An observation will find it either on top or on the bottom with 50/50 probability. But if the molecule is rapidly and repeatedly observed, the nitrogen atom freezes in place and never makes the transition.
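The arithmetic behind this "freezing" is worth a quick sketch. Between measurements the probability of still finding the nitrogen atom on top falls off as cos²(ωt), and each observation resets the clock. A few lines of Python (with the oscillation rate ω and total time T chosen arbitrarily so that, unwatched, the atom would certainly have flipped) show the survival probability climbing toward 1 as the measurements become more frequent:

    import numpy as np

    # Quantum Zeno effect, minimal sketch for a two-level system such as
    # the ammonia nitrogen atom. Between measurements the survival
    # probability is cos^2(omega * t); each measurement collapses the
    # state. omega and T are arbitrary illustrative choices.
    omega = 1.0
    T = np.pi / 2   # unwatched, survival after T would be cos^2(pi/2) = 0

    for n in [1, 10, 100, 1000]:
        dt = T / n                                 # interval between measurements
        p_survive = np.cos(omega * dt) ** (2 * n)  # survive all n collapses
        print(f"{n:5d} measurements: P(still on top) = {p_survive:.4f}")

    # The printed probability rises toward 1.0 with more frequent
    # observation: the watched quantum pot never boils.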

Gribbin says the lesson we should take from these experiments, and from the theory that underpins them, is that quantum systems do not move at all! The quantum Zeno effect is real; a watched quantum pot never boils. Time has an arrow, but it does not flow.



Rare Earth: Why Complex Life is Uncommon in the Universe

Peter D. Ward
Donald Brownlee

During the Middle Ages, under the Ptolemaic system, it was widely believed that the stars and planets revolved around a stationary Earth and that we humans were the crowning glory of creation. That notion began to unravel when, in 1514, Nicolaus Copernicus proposed the heliocentric model, in which the planets move around the sun in perfect circles, later revised by Johannes Kepler, who showed that the planets' orbits trace out ellipses. These revelations were just the beginning of the Copernican Principle of mediocrity that has flourished ever since, holding that we live on an average planet, rotating around an average star, in an average galaxy, one among billions of such galaxies in the vast universe. But are these assumptions true? Peter D. Ward and Donald Brownlee weave a convincing argument that they are not.

Ward and Brownlee maintain that our good fortune in occupying this particular rare planet, located in the habitable zone of our solar system, often called the Goldilocks Zone, where temperatures are not too hot and not too cold, together with our favorable position in our galaxy, provides the unique conditions necessary to evolve complex life. They believe that microbial life could be ubiquitous throughout the universe, because simple organisms can exist in very harsh environments; the evolution of larger, more fragile, complex life, however, requires very special conditions that persist for billions of years, and such conditions do not seem to be common in other parts of the universe. It took billions of years for bacterial life to evolve into complex life on our planet, so it is reasonable to assume that planets must remain in the habitable zones of their home stars for billions of years. Because stars like our sun become hotter and brighter over time, the habitable zone moves outward as a star ages; four billion years ago, in fact, our sun was about 30% fainter than it is at present.

We often hear that we are an average planet orbiting an average star, but this is not factual. Approximately 95% of all stars are less massive than our sun, and their habitable zones lie much closer in, which is not at all favorable for life: as planets get closer to their suns, gravitational tidal effects induce synchronous rotation, in which one side of the planet always faces the star it orbits. (Mercury was long believed to rotate this way; it is actually trapped in a related 3:2 spin-orbit resonance.) On a planet tidally locked to its star, the dark side gets so cold that the atmosphere freezes, while the side facing the star becomes extremely hot, making such planets uninhabitable for complex life.
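The scaling behind this claim is straightforward. For a planet to receive an Earth-like flux of starlight, its orbital distance must grow with the square root of the star's luminosity, and for main-sequence stars luminosity falls off steeply with mass (the exponent below is a rough textbook value, not the authors' figure):

    d_{HZ} \approx \sqrt{L_*/L_\odot} \ \text{AU}, \qquad L_* \propto M_*^{3.5}

A star of half a solar mass therefore shines at roughly 0.09 L_\odot and has its habitable zone near 0.3 AU, close enough for tidal effects to lock a planet's rotation.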

Stars that are more massive than our sun have more distant habitable zones, but it is doubtful that planets in such zones could support complex life, because stars 50% more massive than our sun enter the red-giant stage, increasing their brightness a thousand-fold, after only about 2 billion years. A star like our sun has a stable lifetime of about 10 billion years, plenty of time to evolve animal life. Additionally, massive stars are hotter and radiate substantially more ultraviolet light, which is very damaging to biological molecules and can strip away an earth-like atmosphere.

Approximately two-thirds of solar-type stars are binaries, two stars orbiting each other. The authors state that recent simulations suggest planets may not be able to form in such systems unless they are 50 astronomical units from their suns, probably too far to support life. Variable stars, neutron stars, and white dwarfs are also unlikely to be home to any kind of life.

We have already eliminated most star systems as hosts for complex life, and we have not even addressed the habitable zones within the galaxies themselves. Our sun lies about 25,000 light-years from the center of our galaxy, in a region where star density is low but heavy elements are abundant. During the process of accretion, enough heavy elements existed to form a planet like ours, with a solid and liquid metal core and ample radioactive elements that decay and radiate heat in the process. The metal core produces a magnetic field that protects the surface of the planet from radiation from space, and the radioactive heat from the core fuels plate tectonics. Both attributes seem to be necessary for the development of complex life.

The inner regions of the galaxy, where stars are more densely packed, would be a very dangerous place to be, with nearby supernovae, magnetars, and other collapsed neutron stars emitting gamma rays and X-rays detrimental to any life form.

In the outer regions of the galaxy, the concentration of heavier elements declines, and planets as large as Earth probably could not form there, let alone planets with the organic compounds essential for life. Had Earth formed around a star with a lower heavy-element abundance, it would have been smaller, because there would have been less solid matter in the ring of debris from which it amassed. Smaller size can adversely affect a planet's ability to retain an atmosphere, and it can also have long-term effects on volcanic activity, plate tectonics, and the ability to sustain a magnetic field.

Only 6% of all galaxies in the universe are spiral galaxies like ours, so we are once again very fortunate, because other kinds of galaxies are unlikely to have solar systems that could support life. Elliptical galaxies contain little dust and exhibit little new star formation. They are nearly as old as the universe itself and have a low abundance of heavy elements, because the elements required for life, such as carbon, oxygen, nitrogen, phosphorus, potassium, sodium, iron, and copper, are created in stars. These elements were not created in the Big Bang and were not abundant until at least 2 billion years after the birth of the universe.

Other regions of the universe with a high density of stars, such as open star clusters and globular star clusters, are also unlikely to harbor planets with life, according to the authors. Open clusters are too young and have not yet produced heavy elements. Globular clusters pack too many stars into a given amount of space, have too much radiation and gravitational disturbance to allow stable solar systems, are low in the heavier elements required for life, and carry a high probability of a neighboring star going supernova, sterilizing any planet within one light-year and making conditions for life untenable around stars within 30 light-years. We Earthlings are unlikely to suffer such a fate, as no stars with supernova potential lie within 30 light-years of Earth.

It seems that we really do live in the Goldilocks Zone, both in our solar system and in our place in the Milky Way. Atmospheric "thermostats" can widen that zone. Our planet, for example, has a thermostat called the carbon dioxide-silicate cycle. When calcium reacts with carbon dioxide, it forms calcium carbonate, or limestone; calcium thus draws CO2 out of the atmosphere. The removal of carbon dioxide from the atmosphere cools the planet, reducing the amount of weathering taking place on the surface and thereby limiting the amount of calcium-bearing material available for these reactions. As the planet slowly warms again from carbon dioxide supplied by sources such as volcanic activity, more weathering of the surface takes place, exposing more calcium, which draws more carbon dioxide from the atmosphere. Mountain building driven by plate tectonics ensures that calcium-bearing material keeps being brought to the surface. On worlds without plate tectonics, buried limestone stays buried, removing calcium from the system and allowing carbon dioxide levels in the atmosphere to rise unchecked.

It is difficult to overstate the importance of plate tectonics for life. Besides contributing to a stable climate by regulating carbon dioxide levels in the atmosphere, plate tectonics created our continents, without which our planet would be a water-world with a few scattered volcanic islands poking up through the water. Plate tectonics promotes high levels of global biodiversity, a major defense against mass extinction, and, most importantly, plate tectonics is essential to maintaining our magnetic field, which protects us from the lethal cosmic radiation and solar wind that would otherwise strip away our atmosphere and eventually boil away the oceans, as happened on Mars. The dynamics of this system in relation to our magnetic field are complex.
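The thermostat the authors describe is a negative feedback loop, and its stabilizing character can be caricatured in a few lines of Python. Everything here is invented for illustration (arbitrary units, made-up coefficients, nothing calibrated to real geochemistry): warming speeds weathering, weathering draws down CO2, and falling CO2 cools the planet back toward balance.

    # Toy negative-feedback model of the carbon dioxide-silicate cycle.
    # All numbers are invented for illustration; this is a caricature,
    # not a calibrated geochemical model.
    co2 = 2.0             # atmospheric CO2, arbitrary units
    volcanic_input = 1.0  # steady CO2 source from volcanism

    for step in range(20):
        temperature = 10.0 + 5.0 * co2         # more CO2 -> warmer (toy greenhouse)
        weathering = 0.1 * temperature * co2   # warmth and CO2 speed weathering
        co2 += 0.1 * (volcanic_input - weathering)  # weathering removes CO2
        print(f"step {step:2d}: CO2 = {co2:.3f}, T = {temperature:.2f}")

    # CO2 settles where weathering balances the volcanic source; nudge it
    # up or down and the loop pushes it back. Remove plate tectonics (no
    # fresh calcium-bearing rock exposed) and the weathering term withers,
    # letting CO2 climb unchecked.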

The authors explain that, as Earth spins, it creates convective movement in the liquid part of the core, which flows around an innermost core of solid iron. Heat created by radioactive decay deep within the earth must be released from the core. Plate tectonics maintains temperature differences between the mantle and the core, in turn driving the convection currents that sustain our magnetic field. Joseph Kirschvink of Caltech says, "no plate tectonics, no magnetic field." This seems to be a spurious argument, however, as most sources say that plate tectonics has nothing to do with Earth's magnetic field, which is created by the movement of the liquid core.

We are also fortunate to have a rare Moon. Relative to the size of our planet, we have by far the largest moon in the solar system. Because our Moon is so large and relatively close, it stabilizes the Earth's spin axis at about 23 degrees relative to the plane of the ecliptic. Without the large Moon, or if the Moon were more distant from Earth, the tilt angle would wander by as much as 90 degrees under the gravitational influence of the sun and Jupiter. Like a spinning top, the Earth wobbles over periods of thousands of years, creating what we call the precession of the equinoxes, but the tilt angle itself remains constant relative to the ecliptic.

Planets with a tilt of 45 degrees or more, such as Uranus, whose tilt is a full 90 degrees, have one pole exposed to sunlight for half the year while the other experiences cryogenic darkness; such extreme tilts lead to a total freezing-over of the oceans. On the other hand, planets with virtually no tilt have extreme heating in the mid-latitude zones and year-round ice at the upper latitudes, stagnant climatic conditions that are not conducive to complex life. In just one more unlikely coincidence, Earth's tilt seems to be perfect for the stability of our climate and the evolution of life. Our Moon is gradually moving away from us, and the sun is gradually getting hotter; in tandem, these effects will decrease the habitability of Earth.

The Unlikely Emergence of Life on Our Planet

During the first half billion years after Earth's formation, the planet was heavily bombarded by comets and asteroids, creating a hellish environment that melted its outer layers into magma oceans while delivering organic molecules and water that turned to steam, contributing to an atmosphere of mostly carbon dioxide and water vapor, not the ammonia and methane previously assumed. The bombardment gradually lessened about 3.8 billion years ago, and as the earth cooled, the steam condensed as rain and created our oceans. Land masses emerged as volcanic activity formed islands and plate tectonics formed continents of low-density rock floating on the underlying mantle.

Life on this planet began soon after the Earth cooled, about 3.5 billion years ago, probably around hydrothermal vents, under extreme conditions of high pressure, cold, high levels of dissolved carbon dioxide, and no free oxygen. These organisms, called extremophiles, did not use sunlight to photosynthesize sugar for energy and release oxygen, as modern cyanobacteria do, but took a different chemical pathway, harvesting their energy from hydrogen sulfide and releasing methane as a by-product.

The taxonomy that many of us learned years ago, with just two kingdoms, plants and animals, and a multitude of phyla under each, has been completely reorganized in recent years. Biologist Carl Woese, who recognized that archaea are biologically distinct from bacteria, devised a new classification system with three domains of life: Bacteria (simple prokaryotes with no internal organelles), Archaea (prokaryotes, most of which are extremophiles), and Eucarya (which includes animals, plants, fungi, and algae). Eukaryotes all have complex cells with internal organelles, including a nucleus; they reproduce sexually, have flexible cell walls, and contain about 1,000 times more DNA than prokaryotic cells. During Earth's earlier time of harsh conditions, surface life was probably nonexistent, as the planet was still being bombarded by comets and asteroids that would have sterilized the land; any life would have had to be deep in the oceans or well below ground.

Anaerobic archaeans are the likely candidates for the first life on the planet. These extremophiles are remarkable. They can live deep in the oceans, harvesting their energy from hydrogen sulfide, or they can live inside solid rock. Not only can they live in sedimentary rock, using the organic compounds found within it, but they can live in igneous rock such as basalt, remarkably producing their own organic compounds from hydrogen gas and carbon dioxide dissolved in the rock and releasing methane as a by-product. These autotrophs (organisms that can produce organic material from inorganic compounds) could theoretically live deep in the crust of Pluto, warmed by radioactive decay. The authors state: "These extremophiles show that life can exist in regions previously thought too hot, cold, acidic, basic, or saline. They have rendered the original concept of the habitable zone obsolete." This is good news for the ubiquity of simple life in the universe; complex life, however, is another story.

The cyanobacteria, whose remains are preserved in the ancient fossil record of Australia in the form of stromatolites, ruled the Earth for billions of years, using sunlight and carbon dioxide to produce their energy in the form of simple sugars and releasing oxygen as a by-product. For many eons, free oxygen in the oceans and the atmosphere remained scarce because the environment was reducing: the oxygen molecules combined with iron dissolved in the oceans to form iron oxide, or rust, leaving the sedimentary deposits known as banded-iron formations that can be seen today throughout the world. This banding, the result of these organisms producing oxygen, began about 2.5 billion years ago, but by about 1.8 billion years ago the reducing compounds such as iron had been used up.

With the reducing environment gone, free oxygen from these photosynthetic organisms began to accumulate in the oceans and the atmosphere in what has become known as the Oxygen Revolution. The situation was catastrophic: anaerobic bacteria and most archaea had to adapt or die. Many organisms retreated from their widespread habitats to more extreme places such as lake and ocean bottoms, but others found ways to use oxygen for metabolism.

About 1.6 billion years ago the first eukaryotes, similar to red and green algae, appeared in the fossil record. These eukaryotic species were able to adapt to the Oxygen Revolution by losing their rigid cell walls in favor of a soft, pliable outer membrane, allowing them to attach to, and in some cases engulf, other organisms to form multicellular symbiotic communities. The first of these to appear in the fossil record were sponges. In recent deep-sea dives, some of these ancient sponges, called glass sponges because of the crystalline appearance their high silicate content gives them, and long thought to be extinct, were found living in reefs stacked one on top of another. These sponges can live to be 11,000 years old, making them among the oldest living organisms on Earth.

Then suddenly, about 600 million years ago, without any precursor diversification in the fossil record, an abundance of new body plans erupted across the planet in what is known as the Cambrian Explosion. What could possibly have been the impetus for such biodiversity?

Two major biological diversifications happened: one 2.5 billion years ago and another around 600 million years ago. These events coincided, perhaps not coincidentally, with the ends of the so-called Snowball Earth episodes, during which the Earth almost completely froze.

After the first Snowball Earth episode, around 2.5 billion years ago, the ice-covered seas began to melt, and dust particles of iron and magnesium that had accumulated over centuries fertilized the oceans, causing huge blooms of phytoplankton that in turn released massive amounts of oxygen into the atmosphere. For most organisms this oxygen was a poison, but some adapted by producing enzymes to deal with the oxygen and hydroxyl radicals in their environment. Astoundingly, geneticists found evidence of these enzymes in archaeans and eukaryotes, but not in older bacteria. This find has upset the Tree of Life, for it appears that the domains of Archaea and Eucarya arose only after this Snowball Earth episode. Though the authors don't state this hypothesis, this might be the period when some archaean was able to get inside a bacterium to create the first organism with internal cellular organelles, namely the first eukaryote. Either way, the Snowball Earth event of 2.5 billion years ago probably launched the eukaryotic cells necessary for the animal life that came much later.

The second Snowball Earth episode happened between 800 and 600 million years ago. By this time animal life was already present but sparsely distributed. When the deep freeze came, these organisms were forced to find havens around areas of warmth, such as near volcanoes and hydrothermal vents under the sea. The stresses on these isolated populations might have been responsible for the diversity of phyla that emerged during the Cambrian Explosion.

After nearly 3.5 billion years, our planet was now teeming with newly created life forms, from ammonites to trilobites, while the earliest forms of life, the layered bacterial structures known as stromatolites, began to diminish, becoming food for grazing nematodes.

The Cambrian Explosion created more phyla than any other time in history; some went extinct, but none have appeared since. Tens of millions of species have evolved from about 30 phyla, and there have been 15 mass extinctions in the last 500 million years, five of which exterminated over half of all species on Earth. Most of the kill-off was of fragile animal life; microorganisms survived, their basic body plans remaining stable throughout time. One effect of mass extinctions is that reefs disappear, and this occurred after the Cambrian, Ordovician, Devonian, Permian, Triassic, and Cretaceous mass extinctions.

Mass extinctions can result from a variety of causes: a change in a planet's spin rate, a planet moving out of the habitable zone, a change in the energy output of a planet's star, asteroid or comet impacts, a nearby star going supernova (which could destroy a planet's ozone layer from as far as 30 light years away), bursts of cosmic rays that are lethal to life, and, last but not least, the rise of intelligent beings and their technology.

Adding up all of the possibilities that could lead to the total destruction of life on our planet, the odds are that such an event should occur about once every 2 billion years. Life on our planet has lasted nearly 4 billion years, so we are lucky indeed.
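One hedged way to see where a figure like that might come from: if each type of catastrophe strikes independently at some small average rate, the combined rate is the sum of the individual rates, and the expected waiting time is its reciprocal. The rates below are my own placeholder numbers for illustration, not the authors':

$$R = \sum_i r_i, \qquad T = \frac{1}{R}$$

For example, three hazards each striking at roughly $r_i \approx 1.7 \times 10^{-10}$ per year give a combined rate $R \approx 5 \times 10^{-10}$ per year, or one sterilizing event about every $2 \times 10^{9}$ years.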

Peter D. Ward and Donald Brownlee make a good case for the rarity of complex life in the universe, and in the process they have narrowed the scope of the search for planets with some probability of harboring complex life.

We should first look for spiral galaxies similar to our Milky Way, which belong to a small class comprising only about six of every hundred galaxies in the universe. As we saw, other types of galaxies are probably not conducive to life. Next, we should search the band of stars in a region about 15,000 to 30,000 light years from the galaxy's center, the habitable zone of the galaxy, and then find stars in that band that are similar to our sun; out of every one hundred candidates, we should find about five. From those stars, we should try to find planets in the habitable zone of each. Since the star's mass will be similar to our sun's, the habitable zone should be about the same distance from the star as our planet is from the sun. Finally, we should do a spectral analysis of each planet's atmosphere.

We would not be able to look directly for nitrogen and oxygen because they do not produce detectable absorption bands, but the detection of ozone, which has a strong absorption band, would indicate the presence of oxygen.

When ultraviolet light interacts with O2, it splits the molecule into individual atoms, which in turn interact with other oxygen molecules to produce O3, ozone. Since ozone is unstable, the oxygen that feeds it has to be constantly replenished, and on Earth that replenishment comes from photosynthesis. Therefore, the detection of ozone, along with small amounts of water and carbon dioxide, would be a good indicator that the planet supports life.
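Written out as reactions, the photochemistry described above is the standard Chapman mechanism, where $h\nu$ is an ultraviolet photon and $M$ is any third molecule that carries off excess energy:

$$\mathrm{O_2} + h\nu \rightarrow \mathrm{O} + \mathrm{O}$$
$$\mathrm{O} + \mathrm{O_2} + M \rightarrow \mathrm{O_3} + M$$

Because ozone is continually destroyed, a standing supply of it implies a standing source of O2, which on Earth is photosynthesis.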

This book was sobering for those of us who believe that we are being, or have been, visited by extraterrestrials. The information it contains does not rule out the possibility that intelligent life exists in other regions of the universe; it only restricts the places where it could exist, if it is even possible to predict such a thing. Physicist Paul Davies once said that at some point in the future we might have enough knowledge of the universe to predict fairly accurately the number of habitable planets in the universe, but until we know what life is, we can make no predictions as to how many planets are actually inhabited. We still don't know how life arose on this planet. Some scientists believe that the probability that life arose on this planet by chance is so remote we could safely say the chance is zero! But no matter how small the chances might be, life is here!

In addition, though most of the authors' assumptions might be considered irrefutable, some have recently been challenged by other researchers. Still, they have made a good case for the rarity of complex life in the universe. But rare does not mean nonexistent. Even if we limit our search to stars in the habitable zone of our own galaxy, between 15,000 and 35,000 light years from its center, and in that region limit our search to the 5% of stars that are similar to our sun, then in a galaxy 200,000 light years in diameter, made up of 200 billion stars and at least as many planets, a rough estimate still leaves us with some 1.5 billion stars, and about an equal number of planets, that could support complex life in our galaxy alone.
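For concreteness, here is a minimal sketch of that back-of-the-envelope estimate in Python. The fraction of stars lying in the galactic habitable band is my own assumption, chosen so the arithmetic reproduces the rough 1.5 billion figure above; Ward and Brownlee do not supply these exact inputs.

    # Rough Drake-style head count, not a rigorous calculation.
    total_stars = 200e9        # stars in the Milky Way (figure cited above)
    band_fraction = 0.15       # assumed share of stars in the 15,000-35,000 ly band
    sunlike_fraction = 0.05    # about 5 in 100 stars resemble the Sun (cited above)

    candidates = total_stars * band_fraction * sunlike_fraction
    print(f"{candidates:.1e} candidate stars")  # about 1.5e+09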



The Grand Biocentric Design:
How Life Creates Reality

Robert Lanza, MD

For nearly half a century, astrophysicists and mathematicians alike have puzzled over the improbable coincidences regarding the fine-tuning of nature. If any of the approximately two hundred parameters and constants of nature differed by a tiny fraction of their values, we wouldn't be here to ponder such questions.

Some scientists have swept these improbabilities under the rug by suggesting that we live in one of an infinite number of universes most of which do not support life. We, they contend, are just lucky enough to live in the one that has the right conditions to have evolved life.

Robert Lanza has developed a simpler explanation for the fine-tuning of the universe, a hypothesis he calls biocentrism, in which life and consciousness are fundamentally responsible for creating the necessary conditions to evolve life. He is not bothered by the fact that life has arisen only recently in the history of the universe. As quantum experiments have shown, the past history of a particle can be determined by an observation in the present. Lanza states, “When the worldview catches up with the facts, the old paradigm will be replaced with a new biocentric model, in which life is not a product of the universe, but the other way around.”

The apparent temporal retroactivity of experimental results is not confined to elementary particles. Lanza cites the famous experiments conducted by Dr. Benjamin Libet in the 1980s, which at first sight seem to confirm the view of most modern neuroscientists that we have no free will. In these experiments, subjects hooked up to electroencephalographs to monitor brain activity were asked to perform a particular movement, such as raising a finger, at any time they pleased. Researchers observed that about six seconds before a subject consciously decided to perform a movement, the brain activity associated with that movement had already begun. This, researchers contended, showed that our actions are a product of our deterministic unconscious brain.

Lanza has a different interpretation of the Libet results. No linear unfolding of events takes place. As in the elementary-particle experiments, what the experimenters are monitoring with the EEG equipment is the potential, the superposition of states, of the subject's decision to move or not to move. The superposition collapses into a definitive decision the moment the subject becomes aware. As the first principle of biocentrism states, space and time are not independent realities but tools of the human and animal brain. No reversal of time takes place because only the “now” exists in reality. A kind of connectedness exists outside of space and time, and the present act of observation creates our individual reality.

Lanza states: “…quantum effects in the brain strongly suggest that decisions, and even the mere fact of awareness, causes an entire cascade of quantum consequences that can even seemingly ‘overwrite’ previous configurations. The important point here is that what’s in our consciousness now collapses the spatiotemporal logic of what happened in the past.” Reality itself manifests as a result of the collapse of the universal wave function.

The fine-tuning of the parameters and constants of nature can best be explained if consciousness is fundamental within the principles of biocentrism.



Darwin's Doubt:
The Explosive Origin of Animal Life and the Case for Intelligent Design

Stephen C. Meyer

The twin pillars of the standard theory of evolution are universal common ancestry, the idea that all of life evolved from an ancient single-celled life form, and natural selection, the idea that morphological changes result from incremental random genetic mutations, a proposition that Stephen Meyer disputes.

Meyer contends that neo-Darwinian theory explains neither the source of genetic information nor the sudden appearance of complex body forms during the geologic event known as the Cambrian explosion, a concern expressed by Darwin himself because of the lack of intermediate species in the fossil record. Darwin assumed that as time passed, the gaps would be filled by new fossil discoveries.

Hopes were raised in this regard when, in 1909, Charles Doolittle Walcott made one of the most astonishing paleontological discoveries in history in the Burgess Shale of British Columbia. Twenty of the twenty-seven phyla alive today, comprising both soft-bodied and hard-bodied organisms, were found in a state of extraordinary preservation. But among the more than 65,000 specimens collected by Walcott and his team, the mystery of the missing precursor fossils only deepened. As Meyer states, “The problem of the Burgess shale is not the increase in complexity, but the sudden quantum leap of complexity, the jump from the simpler Precambrian organisms to the radically different Cambrian forms.” A similar paleontological find in China in 1995, which eclipsed the Burgess Shale in its astonishing preservation of fossils, told a similar story, establishing that Cambrian animals appeared even more explosively than previously thought.

Many paleontologists still held out hope that precursor fossils would be found. Walcott, for example, argued that the ancestral organisms were missing because they had evolved in the early seas, and only after the seas rose and covered the land in Cambrian times were their remains deposited on the continents. Since that time, however, deep-sea drilling has failed to reveal any fossils predating the Jurassic period, because oceanic crust plunges under the continents and is recycled in the molten mantle, destroying any fossil record.

The lack of sea-sediment fossils should not pose a problem, however, because both soft-bodied and hard-bodied fossils have been preserved in the oldest rocks of western Australia, and these Precambrian organisms, lacking a head, mouth, bilateral symmetry, a gut, and sense organs, bear no resemblance to Cambrian animals. Even fish and chordates, thought to have appeared only later in Ordovician and Devonian times, have been found in the Cambrian fossil record. The iconic trilobite of Cambrian times looks very similar to trilobites found in Devonian rocks 300 million years later. Where is the gradual change? Where are the early precursor fossils leading up to the complex chordates, fish, and trilobites? And why, as the great Swiss-born paleontologist Louis Agassiz wondered, is the fossil record incomplete at the supposed branching junctions between simpler organisms and newer, more complex organisms, but nowhere else? Could it be that these intermediate specimens never existed in the first place?

In 1966 a group of mathematicians, engineers, and scientists who were not convinced that random mutation could account for the complexity of organisms convened a conference at the Wistar Institute in Philadelphia. They were well aware of the extreme improbability that the four biochemical bases A, T, G, and C, acting as a digital code, could randomly rearrange themselves over time to produce new viable code for the long chains of amino acids that form proteins, because they also knew that a random change to computer code, for example, is much more likely to degrade a program than to improve it. Meyer states: “They realized that, if mutations are truly random—that is, if they were neither directed by intelligence nor influenced by the functional needs of the organism, then the probability of mutation and selection mechanisms ever producing a new gene or program could be vanishingly small.” The only way out of this dilemma, other than invoking intelligent design, was to propose that many different proteins might be able to perform the same functions. Since that time, however, experimental work has shown that the chance of a randomly assembled sequence of amino acids making a functional protein is about 1 in 10 raised to the power of 77—in other words, so unlikely that for all intents and purposes it is zero!
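To get a feel for the scale of that number, consider a naive count for a protein of, say, 150 amino acids (a length I am assuming here for illustration; Meyer's 1-in-$10^{77}$ figure comes from experimental estimates of how many sequences fold into a working protein):

$$20^{150} \approx 10^{195}$$

possible chains exist for one exact sequence of that length, and the fraction of such sequences estimated to fold into a functional protein is roughly one in $10^{77}$.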

An even larger challenge to the idea that random genetic mutation produces functional body plans comes from the emerging field of epigenetics. Information in the non-protein-coding regions of the genome, and on the surface of the cell membrane in the form of sugar molecules, regulates the expression of genes in the various tissues of the body and controls the timing and regulation of a hierarchical network of genes during embryonic development. Because one gene can affect many others in this regulatory hierarchy during the early development of an embryo, a mutation in such a gene would be catastrophic for the organism, making it nonviable. Yet, ironically, only mutations expressed early in development can produce the kind of large-scale evolutionary changes evident in the organisms of the Cambrian explosion. This suggests that epigenetic programs are inherited but static and resistant to genetic mutation.

Other attempts to explain the Cambrian explosion such as punctuated equilibrium, self-organization, nonadaptive evolution, and epigenetic inheritance have been dismal explanatory failures. Meyer says, “Intelligent agency is the only cause known to be capable of generating information at least starting from nonliving chemicals. Intelligent design offers the best explanation for the origin of the information necessary to produce the first organism.”

Meyer reiterates that the theory of intelligent design is not a biblically based theory but a scientific one. In my estimation, appealing to a creator God only leads to infinite regress when one tries to explain such a miraculous entity. I believe that we can have design without a designer, and that its source will be realized when we understand the most perplexing problem known to mankind—the puzzle of consciousness.

This five-hundred-page book took me a long time to navigate, but it was packed with valuable information. I will never look at my shelves of Devonian and Ordovician fossils, collected over a period of a decade near my hometown, in the same light after reading Darwin’s Doubt.



Evolution & Intelligent Design In a Nutshell

Lo, Chien, Anderson, Alston, Waltzer

“In China we can criticize Darwin but not the government;
In America you can criticize the government but not Darwin.”
[paleontologist J.Y. Chen]

Many of us remember the famous Stanley Miller and Harold Urey experiment taught in our high school biology classes. These two researchers believed that if they could replicate the component gases of the early Earth's atmosphere, which they believed was made up of methane, ammonia, hydrogen, and water vapor, and stimulate these gases with electricity, mimicking lightning strikes, they could demonstrate how life evolved from inorganic matter. They were indeed successful in making almost half of the twenty amino acids, but this is not too surprising, according to the authors, because these basic building blocks are abundant throughout the cosmos. The researchers were still light years away from showing how life emerged naturally with only the laws of physics at work.

Only a century ago, most people believed in the idea of spontaneous generation. They noticed, for example, that maggots seemed to arise spontaneously in dead animals and that fruit flies seemed to appear out of thin air around spoiled fruit. It wasn't until Louis Pasteur demonstrated through rigorously controlled experiments that life arises only from prior life that spontaneous generation was abandoned. Or was it? Nowadays we make light of those beliefs, but an equally unsubstantiated idea called abiogenesis holds that when conditions were just right, almost three billion years ago, inorganic matter driven by the laws of physics miraculously self-organized into life. Apparently, according to biologists, this happened only once in the entire history of the Earth.

The authors state that researchers have identified more than a dozen serious problems with the abiogenesis hypothesis. One of these, identified by Francis Crick, one of the discoverers of the double helix, involves specified information. He recognized that the sequence of bases, abbreviated A, T, C, and G, in the strands of DNA is a four-letter code that specifies the arrangement of amino acids, which in turn construct proteins.

This revelation presents a problem because, like computer code, the information has to be specific to carry out its instructions, and any random change will almost inevitably degrade the program rather than enhance it. So how does this fact square with the blind, random mutation espoused by Darwinian theory? And where did this information originate? Neo-Darwinism makes no claims as to how life began; it describes only what happened after life emerged on this planet. The encoded information in the most primitive cellular life forms predates the point where evolution is supposed to have begun. It seems to have simply sprung up, giving credence to the idea that the only known source of specified information is intelligence.
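The "four-letter code" can be made concrete with a toy sketch: non-overlapping triplets of bases, called codons, specify amino acids. Real translation proceeds through messenger RNA, and the table below shows only four of the sixty-four codons, with a made-up gene sequence purely for illustration, but the mapping idea is the same.

    # A tiny slice of the genetic code: codons -> amino acids.
    codon_table = {"ATG": "Met (start)", "GGC": "Gly", "TGG": "Trp", "TAA": "(stop)"}
    gene = "ATGGGCTGGTAA"  # an invented 12-base sequence for illustration
    codons = [gene[i:i + 3] for i in range(0, len(gene), 3)]
    print([codon_table[c] for c in codons])
    # ['Met (start)', 'Gly', 'Trp', '(stop)']

A single-base change anywhere in the string generally changes which amino acid is specified, which is why random edits to the code tend to degrade rather than improve it.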

As the authors state, “No credible naturalistic process has ever been identified that can produce information-rich systems.”

The source of intelligent design appears to go all the way back to the creation of the universe. The universe seems to be fine-tuned. If any of the parameters or laws of physics were slightly different, the stars would not shine and the elements required for life would not have come into being.

The hemoglobin protein is an example of an extraordinarily complex molecule that could not have arisen by trial and error. Hemoglobin can bind four oxygen molecules and release them where they are needed to replenish the oxygen supply of our cells. The code for the specific arrangement of the 574 amino acids that make up the hemoglobin molecule, of which there are some 280 million copies per red blood cell, is written in the DNA molecule. The chance of such a molecule arising with just the right sequence has been mathematically calculated to be so remote that we can safely say it is zero!
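A naive version of that calculation: with twenty amino acids possible at each of 574 positions, the number of distinct chains is

$$20^{574} \approx 10^{747},$$

so the chance of hitting one exact sequence at random is about one in $10^{747}$. (This crude exact-sequence count overstates the difficulty, since many sequences near hemoglobin's would still function, but even generous corrections leave the odds astronomically small.)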

Complex molecular machines inside our cells, such as the ribosomes, and the flagella that propel single-celled organisms, are another problem for the idea of natural selection. Michael Behe, a longtime advocate of intelligent design, calls these molecular machines irreducibly complex: if any part of the machine loses function or is removed, the entire machine becomes dysfunctional. How could they have evolved one step at a time, Behe wonders, if the individual parts confer no selective advantage on the organism?

Among the sticking points of gradual evolutionary change, identified by Darwin himself and remaining unsolved to this day, is the so-called Cambrian explosion. Five-hundred-million-year-old fossil sites such as the Burgess Shale in Canada and the Chengjiang in China have exposed pristine fossils of animals with complex body plans that seemingly sprang up overnight. The simpler ancestors of these fossils have never been found. According to the traditional evolutionary tree of life, the number of animal phyla should increase over time as favorable mutations produce more complex body forms; the reverse, however, seems to be true. Many of the phyla of the Cambrian period have since gone extinct, leaving fewer phyla today.

The traditional tree of life, depicting a simple ancestor that branches out over time through mutation, does not fit the fossil record. The pattern most representative of the fossil record is a series of trees with no branches, a simple linear configuration with only minor changes. As the famous paleontologist Charles Walcott, who during his lifetime collected over 65,000 beautifully preserved hard-bodied and soft-bodied specimens at the Burgess Shale site in British Columbia, commented, “Why is it that the missing fossil record always occurs at the supposed branching nodes of the evolutionary tree?” He knew already at the turn of the twentieth century that the Darwinian paradigm was in trouble.

Still, many evolutionary paleontologists, clinging to the materialist viewpoint, have maintained that the ancestors of these animals have not been found because they were too delicate, too soft-bodied, or too small, and that Precambrian conditions were not conducive to preserving such fossils. Both of these objections have been thoroughly researched and rejected by Paul Chien and others. He and his colleagues searched through thousands of slides of Precambrian rock under a scanning microscope, finding only sponges and algae, none of which bore any ancestral resemblance to the animals of the Cambrian. Since then, other researchers have found sponge eggs and embryos. As Paul Chien says, “But if the conditions [in the Precambrian rocks] for fossil preservation were so poor, why did they manage to preserve soft, delicate sponge eggs and early embryos, and preserve them extremely well, including the nucleus in eggs and embryo cells?”

Paul Chien and the other authors of this book make a very good case that the neo-Darwinian paradigm is in need of revision.




QBism: The Future of Quantum Physics

Hans Christian von Baeyer

QBism, short for Quantum Bayesianism, is a new interpretation of quantum theory based on the probability theorem of the eighteenth-century Presbyterian minister and mathematician Thomas Bayes.

Drawing on Bayesian probability, Christopher Fuchs, the primary developer of QBism, has put forth the most promising understanding of the meaning of quantum theory, eclipsing earlier models such as the Copenhagen interpretation developed by the early founders of quantum mechanics, among them Niels Bohr, Werner Heisenberg, and Max Born.

Hans Christian von Baeyer, the author of this book, first learned of QBism in 2002 and was immediately impressed. Von Baeyer is well versed in quantum mechanics, having taught it at the university level for fifty years, and over that time, through books, lectures, and interviews, he has attempted to bring the subject to the general public. But like many of his colleagues, he had always felt a sense of unease about its meaning. As the Nobel laureate Richard Feynman once proclaimed, “No one understands quantum physics.” But once von Baeyer understood the deep implications of QBism, it brought him a sense of relief. He says, “When I began to understand QBism and realized that by simply switching to a better definition of probability I could finally stop puzzling over the meaning of the collapse of the wavefunction, I felt a sense of liberation bordering on exhilaration.”

QBism's insightful departure from the Copenhagen interpretation involves its rejection of non-locality and its replacement of the mysterious wavefunction with Bayesian probability, while making a strong case for an observer-dependent universe. The author states: “The principal thesis of QBism is simply this: quantum probabilities are numerical measures of personal belief.” Consider, for example, the quantum wavefunction that supposedly describes the cloud-like distribution of the electron around the nucleus of an atom. In reality, the wavefunction does nothing of the sort; it describes, instead, the probability of finding the electron when one looks, based on previous experimental results. In fact, the electron has no reality-based location before an observation. Once the observation takes place, the belief of the agent, or observer, changes from uncertainty to certainty. No collapse takes place, only a change in the knowledge of the observer.

Classical physics hinges on two tenets, locality and realism, both of which Einstein and his two colleagues Nathan Rosen and Boris Podolsky attempted to rescue from the onslaught of quantum mechanics with their celebrated EPR paper of 1935. Einstein thought that he had rid nature of Newtonian action at a distance with his general theory of relativity, which required no mysterious force propagating across empty space. Yet quantum theory and experiment seemed to revive nonlocality in cases of quantum entanglement.

Even more upsetting to Einstein was the proposition that quantum mechanics violated realism, the assumption that nature has definite properties independent of observation. Certainly, Einstein said, the moon must be there even if no one looks.

Von Baeyer details an experiment conducted in the laboratory by Anton Zeilinger and his team in 2000, involving the measurements of electrons that were put in a state of entanglement. The results of the experiments proved Einstein half-right; locality was not shown to be violated, but realism, “the assumption that objects have physical properties that are unaffected by measurement, observation, and even thoughts and opinions, was indeed violated” as the QBists’ interpretation predicts.

QBism doesn't deny an external world; rather, it suggests, in the light of experiment, that we live in a participatory universe in which laws are invented to reflect our recurrent experiences of nature.

In his book, QBism: The Future of Quantum Physics, Hans Christian von Baeyer has succeeded in bringing the QBists' interpretation of quantum theory to the general public. This book has re-invigorated my quest to understand the nature of reality.

Brief Peeks Beyond:
Critical Essays on Metaphysics, Neuroscience, Free Will, Skepticism and Culture

Bernardo Kastrup

Every scientific theory begins with a certain set of assumptions. The unspoken assumption of the philosophy of materialism amounts to this: “Grant me the proposition that particles, fields, and information are fundamental, and I can explain every natural process in our world.” The songbird's melody is merely air disturbances that vibrate our eardrums, travel to our inner ear, and are interpreted by the brain as sound. The color of the red apple we see in the distance is not inherent in the apple but is a certain frequency of light reflected off the apple, impinging on our retinas, and interpreted by the brain as red. The brightness that fills the room when the light is switched on is only invisible photons whizzing around, which our brain interprets as brightness. The world we perceive, according to materialism, is a projection of a reality created by our physical brain, and this includes consciousness itself, insofar as materialists even recognize consciousness as something real.

Brain-imaging technology has proven without a doubt, according to Bernardo Kastrup, that tight correlations exist between brain activity and subjective experience. At first glance, the materialist paradigm, in which the material brain is responsible for our subjective experience, seems to make sense, but correlation is not necessarily causation. The early-morning rooster's crow is correlated with the rising sun, but the crowing is obviously not the cause of the sunrise. No materialist explanation has come even close to showing how three pounds of matter inside our skulls can produce conscious experience. In this regard, materialism has been a dismal failure.

Kastrup, an idealist, says that if we instead assume that consciousness is fundamental, a simpler, more parsimonious explanation arises for our experience of the world. The reason we all agree on our perceptions of nature, according to Kastrup, is that the universe is exactly as it appears, with qualities of color, sound, texture, smell, and brightness. The apparently objective universe we all perceive is merely a second-hand representation of mind-at-large, analogous to the second-hand view of neural firings that neuroscientists, using fMRI technology, are able to see when we have subjective experiences.

Starting with the premise that consciousness is fundamental not only alleviates the intractable “hard” problem of consciousness (the materialist notion that when matter becomes sufficiently complex it miraculously boots up consciousness), but it also addresses the fine-tuning problem (the hundreds of physical parameters of matter, as well as the laws of physics, necessary for the emergence of life). Kastrup says, “…that irreducible consciousness generates the world poses no more problems than to say that irreducible laws of physics generate the world.”

Kastrup addresses the question of how our consciousness relates to the consciousness of mind-at-large. Our personal, individual consciousness is part of universal consciousness, much as a whirlpool is part of the larger stream. Consciousness is not in the brain-body system; rather, the brain-body system is within consciousness, or “mind-at-large.”

As for my personal quest to understand the nature of reality, I find Kastrup's position on free will most interesting. Extrapolating from his essay on free will, my understanding of his philosophy is that mind-at-large is not some kind of super-intelligence. In fact, it is simple deterministic awareness, much like the awareness of any life form on Earth that reacts instinctively to its environment, playing out in the form of the four “f's”: feeding, fighting, fleeing, and mating. We humans, on the other hand, have evolved a higher level of consciousness that I refer to as self-awareness: our innate ability to ponder, to be aware that we are aware, our science and philosophy. Self-awareness can be intuited from the following statement: you might wonder if your dog is conscious, but it is unlikely that your dog is wondering if you are conscious.

In light of these assumptions, we can, in some way, guide our own reality in a sort of feedback loop. Wishing and praying for some ego-driven outcome is pointless, but identifying with mind-at-large and simply expressing gratitude and compassion facilitates our connection with the perfect universal mind.

Bernardo Kastrup's book, Brief Peeks Beyond, has made me reconsider the philosophy of idealism.

Unraveling QBism for the Uninitiated

Dr. Sanjay Basu

I very much appreciate an author who gets straight to the point, and Sanjay Basu does just that in his concise yet comprehensive treatise, Unraveling QBism for the Uninitiated.

Beginning with a brief history of quantum theory, Basu introduces us to its core principles, such as the uncertainty principle, quantum superposition, and wave-particle duality, as well as the theory's paradoxes, including the quantum measurement problem, entanglement, and nonlocality.

Basu then proceeds to the various interpretations of quantum theory, such as the Copenhagen, Many-Worlds, and Bohmian mechanics interpretations, and their newest rival, Quantum Bayesianism, or QBism for short.

QBism's main tenet involves its interpretation of probability. While conventional probability is viewed from an objective, “frequentist” perspective, QBists view probability as an individual's subjective degree of belief. This new idea has implications for many of the paradoxes of quantum theory named above.

In this new view, no collapse of the wavefunction occurs because no physical wavefunction exists, only one's personal knowledge of the probabilities attached to the outcome of an observation of a previously unmeasured particle. I like to think of this as the passing of information about an event from the deterministic, nonconscious aspects of our mind to the subjective, aware conscious mind.
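QBism's actual formalism is more elaborate than this, but the core move, treating probability as a personal degree of belief revised in the light of experience, is just Bayes' rule. A minimal sketch in Python, with numbers invented purely for illustration:

    # Bayes' rule: posterior belief = likelihood x prior / evidence.
    def bayes_update(prior, p_data_if_true, p_data_if_false):
        numerator = p_data_if_true * prior
        evidence = numerator + p_data_if_false * (1 - prior)
        return numerator / evidence

    belief = 0.5  # the agent's initial degree of belief that a detector will fire
    for _ in range(2):  # two observations that favor the hypothesis
        belief = bayes_update(belief, 0.9, 0.2)
        print(f"updated degree of belief: {belief:.3f}")  # 0.818, then 0.953

Nothing in the world "collapses" here; only the agent's number changes as experience comes in, which is the intuition the QBists apply to the quantum wavefunction.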

This subjective interpretation of quantum theory is consistent with experiments demonstrating that subatomic particles have no attributes independent of observation, a fact established by physicists Alain Aspect, John Clauser, and Anton Zeilinger, who won the 2022 Nobel Prize in Physics for experimentally closing the possible loopholes in John Stewart Bell's inequality theorem and demonstrating, once and for all, that local realism is violated.

Concerning the paradox of nonlocality, according to QBists no nonlocal influence passes between correlated particles in a superposition of states; rather, the seemingly instantaneous outcome is a result of information obtained about one of the correlated particles, which in turn updates one's personal knowledge of the other in accordance with ordinary probability theory.

If you are unfamiliar with one of the latest interpretations of quantum theory, Sanjay Basu’s book, Unraveling QBism for the Uninitiated is an excellent place to start.



The Experience Machine:
How Our Minds Predict and Shape Reality

Andy Clark

Andy Clark is a professor of cognitive philosophy at the University of Sussex in the United Kingdom and at Macquarie University in New South Wales, Australia.

Andy Clark's new book, The Experience Machine, has taken the first steps in merging materialistic cognitive science with ontological philosophy to answer humankind's deepest questions concerning the relationship between our minds and the nature of reality.

The brain, according to Clark, is not passively waiting for sensory inputs and then processing that information, as proponents of the standard computer-like model of late-twentieth-century cognitive neuroscience once believed; rather, it is constantly anticipating signals from our bodies and the world, building a reality based on its predictions.

Clark says, “Instead of constantly expending large amounts of energy on processing incoming sensory signals, the bulk of what the brain does is learn and maintain a kind of model of the body and the world…A predictive brain is a kind of constantly running simulation of the world around us…” For every neural pathway coming into our brain from the environment, four neural pathways originating from deep within the brain are outgoing to the peripheral sense organs.

The neural wiring of the motor cortex, which governs bodily action, turns out to be very similar to the wiring of perception, upsetting the traditional cognitive idea that perception is an inward flow of information while action is an outward flow. As Clark says, “Actions are simply the brain's way of making its own proprioceptive predictions come true.” Think of how coaches ask their players to imagine the outcome of a good golf swing, or the sight of a basketball going through the hoop, without overthinking the mechanics of the action. The predictive model of the brain works equally well for perception as for bodily action.
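The predictive picture Clark describes is often demonstrated with a minimal error-correction loop: the brain issues a prediction, compares it with the incoming signal, and nudges its internal model in proportion to the prediction error. A toy sketch under those assumptions (the signals and learning rate are invented for illustration, not taken from Clark):

    # Minimal predictive-coding loop: the prediction is pulled toward the signal.
    learning_rate = 0.3
    prediction = 0.0                        # internal model's current guess
    sensory_stream = [1.0, 1.0, 1.0, 0.2]   # incoming signals; the last one surprises

    for signal in sensory_stream:
        error = signal - prediction          # prediction error (the "surprise")
        prediction += learning_rate * error  # update the internal model
        print(f"signal={signal:.1f} prediction={prediction:.2f} error={error:+.2f}")

When the world is stable, the errors shrink and the model settles; a surprising input produces a large error that drives a fresh update, which is the sense in which experience is built from predictions corrected by the senses.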

Clark cites several examples of the well-documented effects of inert medications and procedures, known as placebo responses, which result from nothing more than our conscious and unconscious beliefs and expectations and which, in my opinion, most emphatically demonstrate the validity of the predictive-brain model. Clark states, “Since experience is always shaped by our expectations, there is an opportunity to improve our lives by altering some of these expectations, and the confidence with which they are held.”

Clark makes it clear that he did not want to wade into the quagmire of the philosophical debate over the cause of our experience of qualia, and this was probably a wise decision; yet I can't help but wonder whether he could have taken this new understanding of the brain one step further and called it the “creative brain” rather than the “predictive brain.” After all, the 2022 Nobel Prize in Physics was awarded to three researchers who demonstrated experimentally that realism, the assumption that objects have physical properties unaffected by measurement, observation, and even thoughts and opinions, is violated. Local realism was proven false.

Either way, Andy Clark's book The Experience Machine was ground-breaking for my understanding of how the brain predicts and shapes our reality.



The Romance of Reality

Bobby Azarian

In this masterful work, Bobby Azarian sets out to propose a non-reductionist, unifying theory of reality he calls the integrated evolutionary synthesis, suggesting that the universe is Darwinian in nature and has spontaneously evolved life and mind. He states: “The overreaching thesis is that we live in a computational universe that is continuously evolving into an increasingly complex, functional, and sentient state. This means that humans are neither a cosmic accident nor the end goal of evolution…”

Azarian's bold unifying theory will not come from hurling tiny bits of matter at near the speed of light around the seventeen-mile ring of the particle accelerator at CERN near Geneva, Switzerland; rather, it will come from bringing together disparate scientific disciplines that are attempting to answer some of the most compelling questions in science: the origin of life, the fine-tuning problem, the question of free will, the quantum measurement problem, and the nature of consciousness. The integrated evolutionary synthesis makes a credible attempt at answering all of these.

Whereas many reductionist scientists, such as Sean Carroll, Brian Greene, and Neil deGrasse Tyson, believe that life is insignificant and fleeting in a purposeless universe of ever-increasing entropy, Azarian believes that entropic forces are the engine that creates far-from-equilibrium dissipative structures, generating order out of chaos. These entropic forces, he argues, were the catalyst that drove the geochemical and biochemical processes spawning the first life in alkaline hydrothermal vents on the ocean floor, and it was at this abiogenesis moment that information gained causal power and the universe became conscious of itself.

But elementary consciousness has no causal power and is not self-referential. Causal power and free will require a second level of consciousness or self-awareness—to be aware that we are aware. Azarian says, “It is self-reference through self-modeling that brings subjective experience into the world by creating an observer out of thin air… Once we understand the hierarchical structure of life, mind, and cosmos, we begin to see that individual freedom and cosmic destiny are not incompatible.”

Although Azarian believes that the mind emerges from the brain, he does not believe that this epiphenomenal stance is dualistic, because information must always be associated with a physical substrate, and thoughts are just “instances of information in action.” But this information processing is unlike that of a Turing machine.

Citing Gödel's incompleteness theorem, which states that there are mathematical truths that cannot be derived by symbolic logic alone, Azarian concludes that, unlike computers, which merely process information without any real understanding, we can comprehend meaning because evolution has given us the ability to continually update our fitness payoffs through recursive trial-and-error loops.

Having explained abiogenesis, consciousness, and free will, Azarian concludes with an explanation of the quantum measurement problem and the fine-tuning problem, saying that, “If the unifying theory of reality can explain both the fine-tuning problem and the measurement problem, it’ll be more deserving of the title of the theory of everything.” I believe that Bobby Azarian has done just that.



Why? The Purpose of the Universe

Philip Goff

Philip Goff is a professor of philosophy at Durham University in the UK.

In this work, Philip Goff addresses several of the most important philosophical questions that have eluded reductionist science. Intricately related, these questions concern universal fine-tuning, the mind-body problem, and purpose as a driving force in the universe.

How did it happen that the cosmological constants, involving the masses of particles and the laws of nature, turned out to be just right for life to emerge on this planet? The odds of this fine-tuning arising by pure chance are 1 in 10 raised to the power of 136, according to Goff. Such an unlikely outcome must be due either to design, to irrational coincidence, to inherent universal purpose, or to the popular but non-evidential idea of multiverses.

The rationale put forth by advocates of the multiverse hypothesis appeals to the laws of probability: there are an infinite number of universes, most of which have constants and laws not conducive to the emergence of life, and we just happen to live in one in which the laws are perfectly attuned for life—a very convenient idea with little scientific support.

Goff says that advocates of the multiverse idea have fallen victim to at least two common logical errors: the “inverse gambler's fallacy” and the “weak anthropic principle.” The weak anthropic principle states that the laws of the universe must be as they are because, if they were different, we would not be here to ponder them. This rationale does not rise to the status of a principle; it is just a tautology, in which the premise is restated in the conclusion, so nothing new is learned. Likewise, those who fall victim to the inverse gambler's fallacy are amazed that the universe is fine-tuned for life and conclude that there must be many other universes that are not fine-tuned—an erroneous argument at its core. Eliminating these fallacious responses to the fine-tuning conundrum, as well as arguments from blind coincidence and from God as designer, Goff advocates for a rational universe that fine-tunes itself.

Goff argues for a form of panpsychism he calls pan-agentialism, in which particles of matter respond to their experiences. He claims that observational evidence supports the idea that particles are purposeful, but purpose in Goff’s view does not imply design. Goff says, “If the laws of physics had been fine-tuned for life but the universe did not contain rational matter…it is highly unlikely that experiential understanding would have evolved. Fine-tuning and rational matter need each other to produce creatures that can understand and respond to what things are and mean.”

Panpsychism also offers a solution to the mind-body problem, according to Goff. Materialist science has not come even close to explaining how the material brain produces conscious experience, though Goff thinks that consciousness will be discovered to be strongly emergent. I was unclear on Goff's explanation of how panpsychism solves the mind-body problem in this regard.

I was not at all convinced by Goff's panpsychist ideas, but I was receptive to, and enlightened by, his arguments that the universe is purposeful and fine-tunes itself.