DJK

Office Phone: (319) 338-3092
Home Phone: (319) 354-7383
Email: Sales@DaveKreiter.com




Life on the Edge: The Coming of Age of Quantum Biology

Jim Al-Khalili and Johnjoe McFadden

 

   In 2007, students and faculty at the Massachusetts Institute of Technology who had been working on the problems involved in developing a quantum computer gathered at one of their regularly scheduled journal club meetings. Members were encouraged to present novel papers from diverse fields of study. One member presented an article from the New York Times suggesting that plants were themselves quantum computers. (144) As the authors of this book, Jim Al-Khalili and Johnjoe McFadden, recount, the group “exploded in laughter.” One of the group was Seth Lloyd, professor of mechanical engineering at MIT, who proclaimed, “We thought that was really hysterical…it’s like, Oh my God, that’s the most crackpot thing I’ve heard in my life!” 

  The MIT group’s incredulity and arrogance had to do with a long-held belief that a physical property known as coherence could not be maintained in the warm environment of biological cells, where particles are under constant thermodynamic jostling. Coherence has to do with the wave aspect of the wave/particle duality, which can be maintained only while a system is unobserved. In this context, the term “observation” is synonymous with “measurement,” and it constitutes any interaction with another particle or an electromagnetic field; it could mean simply bumping into another particle. In any of these events the wave-like aspect of the duality collapses, or decoheres, into a definite particle at a particular location in space and time. The warm, crowded environment within the confines of the cell is not conducive to coherence.

 

 The MIT group had been working on ingenious ways to fend off decoherence, including cooling their apparatus to a temperature close to absolute zero (around -273 degrees Celsius), thereby suppressing the thermodynamic buffeting of other particles. To the MIT group, the idea that a plant at room temperature or above could maintain coherence long enough to take advantage of quantum effects was laughable. Ah, but as we know, nature usually gets the last laugh, even when it comes to bright MIT students. The authors muse, “No wonder those MIT researchers were incredulous. They might not be able to build a quantum computer but, if the article was right, they could eat one in their lunchtime salad!” (144)

 

 Quantum physics and molecular biology have remained insular disciplines since the quantum revolution of the early twentieth century. Only a few, such as quantum physicist Erwin Schrödinger, inventor of wave mechanics, have dared to cross the line by suggesting that life might take advantage of quantum effects. In 1944 Schrödinger published his book What Is Life?, in which he theorized that life could not have arisen as a result of the established statistical thermodynamic laws of ‘order from disorder.’ He proposed that genes and molecular machines are so small that they would behave more like individual atoms and, therefore, would necessarily be subject to the laws of quantum mechanics. His work, published all those years ago, was not taken seriously until very recently.

 

 Research is revealing that the cells of plants and animals take full advantage of all the weird aspects of quantum theory including coherence, superposition, entanglement, and quantum tunneling.

 

   All life on earth owes its existence to the process of nucleosynthesis within stars like our sun. This process would not be possible were it not for quantum tunneling. Nuclear fusion takes place in our sun when hydrogen nuclei fuse together to make helium, resulting in a tremendous release of energy in the form of various types of radiation. The immense gravitational force of the sun creates the heat and pressure that squeeze the protons close together, but the strong electrical repulsion between the positively charged nuclei creates an energy barrier too great for the protons to fuse under classical conditions. Subatomic particles such as protons, however, exist in the quantum world, where the wave-like nature of matter becomes more important. Because protons exist as a wave/particle duality, they can seemingly exist in several locations at once, as determined by the laws of probability. This aspect of quantum particles was first described by Albert Einstein’s friend Max Born in the 1920s. Born theorized that the wave state of a particle is not a “real” wave but rather a mathematical construct. For example, upon observation, one will most likely find an electron in its most probable location at various energy levels around the nucleus of an atom; however, though the probability is vanishingly small, one might observe the electron halfway across the universe. Such is the case for the protons. On occasion the protons will “tunnel” through the energy barrier that keeps them apart. They simply appear in a new location without physically passing in between.
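The flavor of this argument can be made concrete with a back-of-the-envelope sketch. The snippet below uses the standard WKB estimate for tunneling through a rectangular barrier, T ≈ exp(-2κL); the choice of an electron and the barrier height and width are illustrative values of my own, not figures from the book or from solar physics, where the real calculation is far more involved.

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # one electronvolt in joules

def tunneling_probability(mass, barrier_height_ev, energy_ev, width_m):
    """WKB estimate T ~ exp(-2*kappa*L) for a rectangular barrier.

    Classically a particle with E < V can never cross; quantum
    mechanically the probability is small but nonzero.
    """
    deficit = (barrier_height_ev - energy_ev) * EV
    if deficit <= 0:
        return 1.0  # above the barrier: classically allowed
    kappa = math.sqrt(2 * mass * deficit) / HBAR
    return math.exp(-2 * kappa * width_m)

# An electron with 0.5 eV of energy facing a 1 eV barrier, 0.5 nm wide
t = tunneling_probability(M_E, 1.0, 0.5, 0.5e-9)
print(f"T = {t:.3e}")  # small, but not zero
```

Widening the barrier or raising its height shrinks T exponentially, which is why tunneling only matters at atomic scales.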

 

  When protons tunnel through the barrier, one of them will undergo beta decay and turn into a neutron, producing a deuteron, the nucleus of heavy hydrogen, and allowing the pair to fuse. This is the first step in the process of nuclear fusion by which helium is created in the nuclear furnace.

 

  Neutrons and protons can bind together because they can exist in two “spin states” simultaneously, known as a superposition state (27). Superposition and quantum spin are two more bizarre aspects of quantum theory. The authors state:

   “‘Quantum spin’ is unlike anything we can visualize on the basis of our everyday experience of spinning objects such as tennis balls or planets…electrons can only—in a loose sense—spin in either a clockwise or an anticlockwise direction, corresponding to what is usually referred to as ‘spin up’ or ‘spin down’ states. And because this is the quantum world, an electron can, when not being watched, spin in both directions at the same time. We say that their spin state is a superposition.”

 

 The Pauli exclusion principle states that electrons in the same energy level of an atom must have opposite spins, in what is called a singlet spin state. A measurement of one of the electrons will find it spinning either clockwise or counterclockwise, but before any measurement the pair exists in a superposition of spinning in both directions at once.

 

  Electrons that are in a superposition can be separated from an atom by great distances, and they will remain in what is called a quantum entangled state. A measurement or observation of one of the particles will instantaneously affect its twin even if the two are separated by light-years of distance. This odd characteristic of quantum particles does not violate relativity because no information can be communicated instantaneously; the second particle's state remains indeterminate until it, too, is measured.
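The same-axis case of this perfect anti-correlation is easy to mimic, as in the toy sketch below. Note that this is only an illustration of the anti-correlation itself: Bell's theorem shows that no local model of this kind can reproduce the full quantum correlations that appear when the two spins are measured along different axes.

```python
import random

def measure_singlet_pair(rng):
    """Simulate measuring both electrons of a singlet pair along the
    same axis: each outcome is random, but the pair always disagrees."""
    first = rng.choice(("up", "down"))
    second = "down" if first == "up" else "up"
    return first, second

rng = random.Random(42)
pairs = [measure_singlet_pair(rng) for _ in range(10_000)]

# Each individual result is close to 50/50, yet the pair never agrees
ups = sum(1 for a, _ in pairs if a == "up")
print(f"fraction 'up' on first particle: {ups / len(pairs):.3f}")
print(f"pairs that disagree: {sum(1 for a, b in pairs if a != b)}")
```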

 

 Coherence, superposition, entanglement, and quantum tunneling, the properties mentioned above, are all necessary components of life.

Enzymes

 

  Enzymes are made of amino acid molecules composed of carbon, nitrogen, oxygen, and hydrogen atoms arranged in a three-dimensional shape. Each amino acid is in turn joined to the next by a strong peptide bond between the carbon atom of one amino acid and the nitrogen atom of the next, forming a protein chain.

 

 Enzymes are the engines of life, tiny molecular machines that speed up chemical reactions; they build most of the substance of our bodies, including collagen, the intercellular matrix that holds our bodies together, as well as fats, carbohydrates, proteins, and DNA. They orchestrate the synthesis of proteins, and they are responsible for digestion, respiration, photosynthesis, and metabolism (102). Life depends upon enzymes.

 

 The function of enzymes inside plant and animal cells is to move electrons and protons from one location within the enzyme to the next. Until recently, researchers could not account for the efficiency with which these particles move, nor could they explain how the particles were able to traverse the relatively large gaps between some of the molecules under conditions of constant thermodynamic buffeting. Research is beginning to solve the mystery. The environment within the enzyme is not chaotic, as formerly thought, but a “choreographed dance” in which coherence is maintained and quantum tunneling takes place, allowing particles to move across classically prohibitive energy barriers. The authors state,

 “Enzymes are as close as anything to the ‘vital factors’ of life. So the discovery that some, and possibly all, enzymes work by promoting the dematerialization of particles from one point in space and their instantaneous materialization in another provides us a novel insight into the mystery of life.”

Photosynthesis

 

  Quantum effects also explain how plants are able to carry on photosynthesis with such extraordinary efficiency. The transfer of captured photon energy from the chlorophyll molecule to the reaction center, where photosynthesis takes place, is almost 100% efficient, the highest rate of any natural or human-made reaction. (172) The reason for this adeptness is that the captured energy does not take a random path through the chlorophyll molecule but takes all possible paths simultaneously, a feature of coherence.

 

  The authors detail the famous two-hole experiment to illustrate this process. When electrons are shot one at a time at a screen with two holes, they will appear to traverse both holes simultaneously and create an interference pattern as they merge and are recorded on a detection screen. But as soon as a measurement or observation is taken to determine whether they really do go through both holes at once, even if the measurement occurs well after they pass through the holes, they will be found to choose one hole or the other with a 50/50 probability. Likewise, the unobserved chlorophyll molecule takes advantage of coherence, allowing the photon waves to take all possible routes simultaneously to the reaction center where photosynthesis occurs.
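A short numerical sketch makes the contrast vivid: add the two path amplitudes before squaring (coherent) and fringes appear; square each path first (the decoherent, "which-hole" case) and the fringes vanish. The geometry and wavelength below are arbitrary illustrative units of my own, not an attempt to model a real apparatus.

```python
import cmath
import math

WAVELENGTH = 1.0     # arbitrary units
SLIT_SEP = 5.0       # distance between the two holes
SCREEN_DIST = 100.0  # distance from holes to detection screen

def intensities(x):
    """Coherent vs decoherent intensity at screen position x for two
    equal-amplitude point sources."""
    r1 = math.hypot(SCREEN_DIST, x - SLIT_SEP / 2)
    r2 = math.hypot(SCREEN_DIST, x + SLIT_SEP / 2)
    k = 2 * math.pi / WAVELENGTH
    psi1 = cmath.exp(1j * k * r1)
    psi2 = cmath.exp(1j * k * r2)
    coherent = abs(psi1 + psi2) ** 2              # both paths at once
    decoherent = abs(psi1) ** 2 + abs(psi2) ** 2  # "which path" known
    return coherent, decoherent

xs = [i * 0.05 for i in range(-200, 201)]
coh = [intensities(x)[0] for x in xs]
dec = [intensities(x)[1] for x in xs]
# Coherent intensity swings between ~0 and ~4; decoherent is flat at 2
print(f"coherent min/max:   {min(coh):.2f} / {max(coh):.2f}")
print(f"decoherent min/max: {min(dec):.2f} / {max(dec):.2f}")
```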

 

 How is coherence maintained long enough to allow the unobserved photons to remain in a wave state so that they can take all possible paths? The authors state, “We have discovered that the answer seems to be that living systems don’t try to avoid molecular vibration; instead, they dance to its beat.”

 

  The photosynthetic reaction centers exploit two kinds of “noise,” or molecular vibrations, to achieve this dance. The first is thermal, or “white,” noise that comes from the jostling of surrounding molecules; the second is “colored” noise, a higher-amplitude noise limited to particular frequencies and associated with larger molecules such as the chloroplasts and the surrounding protein scaffolding molecules, which are bent into three-dimensional shapes with precise vibrational frequencies. (391)

 

 Ironically enough, one group that looked into how plants harness thermal noise was the MIT group led by Seth Lloyd, who at first thought that quantum coherence in plants was laughable. The group found that a Goldilocks zone exists where just the right amounts of white noise and colored noise produce a sort of rhythmic entrainment, or harmony, keeping decoherence at bay long enough for the photosynthetic process to proceed.

Biological Compasses

 

  Features of quantum theory have also been called upon to solve a century-long mystery concerning the ability of birds, mammals, and insects to navigate across the globe. The suspicion was that these animals must have some sort of internal compass tuned to the earth’s magnetic field that guided them along their migratory routes. The problem with this explanation is that the earth’s magnetic field is extremely weak; no known classical mechanism could be found that would allow animals to navigate by this natural phenomenon, nor was it known where such a chemical compass might be located in the animals’ bodies.

 

  Years of research by many teams of investigators located the compass in the eyes of birds (and the antennae of butterflies) in the form of a protein molecule called cryptochrome. Various ingenious experiments showed that the compass used by birds and other animals is an inclination compass. Unlike a standard bar-magnet compass, which can distinguish north from south, the inclination compass is sensitive to the angle of the earth’s magnetic field lines. If one looks at a diagram of the earth’s magnetic field, one notices that the lines of force run nearly parallel to the earth’s surface in the equatorial regions, but as one nears either pole the lines of force become more vertical. The angle of these lines relative to the surface of the earth is the distinguishing feature measured by the sensitive molecular inclination compass of migratory animals.

 

 Similar to the light-capturing process that occurs in the chlorophyll molecule during photosynthesis, photons of light enter the eye of a bird, for example, and eject electrons from an area inside the cryptochrome protein. This creates an electron vacancy, which can be filled by an electron from a donor amino acid called tryptophan. The accepted electron remains quantum entangled with the donor amino acid, and it is also entangled with its newly paired partner electron in the cryptochrome protein, in what is known as a superposition singlet/triplet quantum entangled state. This state is extremely sensitive to electromagnetic fields; it is therefore thought to provide the trigger mechanism for the inclination compass. Depending upon its strength and angle, the earth’s weak magnetic field can influence the delicate superposition, acting as a measurement and causing a brief moment of decoherence. This observation flips the spin states from one direction to the other, influencing the neurochemicals produced, and this flipping of spin states can occur millions of times per second. These neurochemicals, in turn, influence the neurons in the brain, resulting in a conscious experience of “seeing” the magnetic field and guiding the bird or insect in the right direction.

Accelerated Biological Mutations

 

  The Central Dogma of biology centers on two main principles: 1) adaptation, the notion that evolution occurs only as a result of a species’ ability to survive and pass on its genes to the next generation, and 2) random mutation, the idea that favorable traits are expressed in an organism only as a result of random mutations, with information flowing in one direction from DNA to RNA to proteins. Eminent geneticist John Cairns’s controversial experiments challenged several aspects of the Central Dogma by demonstrating that information does not just flow in one direction, randomly, from DNA to proteins and then to the organism, which in turn can influence the environment. Cairns’s experiments showed that information can also flow in the opposite direction, from the environment to the organism and then to the proteins expressed by DNA.

 

 His method was to grow a mutated strain of E. coli that was unable to digest lactose in a petri dish containing only lactose. Initially the bacteria began to die because they could not digest the sugar, but as time passed, the colony began to stabilize and then rapidly multiply. Cairns found that somehow the bacteria had corrected the mutation and passed the correction on to succeeding generations, reestablishing their ability to digest lactose. He hypothesized that the rapid growth of the colony was a direct result of environmental stress causing an acceleration of genetic mutations until, by chance, a mutation occurred that corrected the errant gene. The recovery of the colony in these trials could not be explained by mutations already existing in the population, nor could it be explained by the extremely slow process of classical random mutation. According to Cairns, the only conceivable explanation was that information from the environment was feeding back into the genetic machinery of the organism. The mechanism for this change, however, was a bit of a mystery. Molecular biologist Johnjoe McFadden, one of the authors of this book, hypothesized that quantum effects must be involved.

 

  McFadden realized that mutations can occur during a process called transcription, in which DNA is read to make RNA. The bases of the two strands of DNA are held together by weak hydrogen (proton) bonds. The position of these bonds depends upon which base letters are joined together: the hydrogen bonds sit in one position if they join the A-T bases and in a different position if they join the C-G bases. During transcription the genetic code is read by an enzyme called RNA polymerase. To transcribe the code, the enzyme does not directly read the base code but rather the position of the hydrogen bonds. Occasionally, because of quantum tunneling, a hydrogen bond moves from its correct position in the base pairing to what is called a tautomeric position. When this happens it fools the RNA polymerase into thinking it is a different base pairing, resulting in a copy error and a mutation. McFadden also knew that the more often a specific gene is read, the higher the probability of mutation.

 

  McFadden’s revelation came when he realized that genes are quantum information systems and that the presence of lactose would act as a measurement of the single proton bonds. (300) These measurements would occur many thousands of times, each one testing whether the correct proton position was available to be read and transcribed by the RNA polymerase. Occasionally, when the proton is measured, it will be found in the tautomeric position as a result of quantum tunneling. If the errant tautomeric gene is then read and copied, a protein is manufactured that corrects for the original mutation and once again allows the bacteria to digest lactose. The sheer number of measurements by the surrounding environment would increase the odds of finding the proton bond in the tautomeric position, causing a higher rate of mutation. This continual observation by the environment, McFadden surmised, was the mechanism behind the accelerated mutation rate of the bacteria. Once the errant gene is replaced with the correct lactose-digesting gene, the bacteria survive and pass the successful gene to their offspring, increasing the population to a stable size once again.
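Whatever the underlying mechanism, the arithmetic of repeated testing is straightforward. The sketch below is a purely classical back-of-the-envelope illustration of why many independent reads drive up the odds of catching a rare state at least once; the per-read probability is a made-up number, and nothing here models the quantum measurement process McFadden actually invokes.

```python
def chance_of_tautomeric_read(p_per_read, n_reads):
    """Probability that at least one of n independent reads catches
    the proton in the rare (tautomeric) position."""
    return 1 - (1 - p_per_read) ** n_reads

# Hypothetical per-read probability; the book gives no figure, so this
# number is purely illustrative
p = 1e-4
for n in (1, 1_000, 10_000, 100_000):
    print(f"{n:>7} reads -> {chance_of_tautomeric_read(p, n):.4f}")
```

Even a very rare state becomes nearly certain to be caught once the number of reads grows large enough, which is the intuition behind an environment-driven acceleration of mutation.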

 

   As astronomer Fred Hoyle once proclaimed, a huge disservice was done to popular thought by the notion that a horde of monkeys thumping away on keyboards could eventually compose the entire works of Shakespeare. This idea is so wrong, he said, that one wonders how it became such a cliché. Given all the time since the big bang to complete their task, that horde of simians would be lucky to compose even the first line of Hamlet. The truth is that classical random mutation, one of the pillars of Darwinian evolution, cannot explain the diversity we see in the natural world, and this is one reason evolution has come under attack from creationists. The authors state:

 “We tend to take for granted the ability of living organisms to replicate their genomes accurately, but it is in fact one of the most remarkable and essential aspects of life. The rate of copying errors in DNA replication, what we call mutations, is usually less than one in a billion.”

 

 But this fidelity of replication in organisms is no reason to invoke intelligent design to shore up our lack of understanding as has so often been done throughout history. If accelerated mutation through quantum observation and tunneling is taken into account it is quite possible to explain the diversity we see in the biosphere.

The Quantum Mind

 

 The Mind is an emergent property of life and since life is dependent upon quantum properties it is reasonable to assume that mind itself might be understood by the same processes.

 

  The mystery of mind involves two main lines of reasoning. First, how is it that the three pounds of matter inside our skulls, made of the same organic ingredients as the rest of the tissue in our bodies, is able to produce feelings and sensations? How is it able to cause actions and volition? This is called the “mind-body problem” or, as it is sometimes referred to, the “hard problem.” The second question posed by the authors is the “binding problem”: how does all of the information that passes along individual pathways and through individual synapses, involving 100 billion neurons and trillions of neural connections, coalesce to produce a single conscious experience?

 

  One place that quantum physics might be involved is the brain's ion channels, located in the membranes of neurons. The ion channels are voltage-gated pores that open and close to allow sodium and potassium ions to flow. In its resting state, the membrane has more positive ions on the outside than on the inside. (329) When an action potential occurs, the channels open and positively charged sodium ions enter, reversing the voltage and creating a pulse that travels down the nerve. The ion channels are extremely narrow, allowing only one ion at a time to pass through, yet the ions pass through at a phenomenal rate of about one hundred million per second.

 

  Physicist Henry Stapp predicted in the 1980s that the narrow ion passageway creates a great amount of uncertainty. Stapp reasoned that since an ion’s position is extremely constrained as it passes through the channel, then according to Heisenberg’s uncertainty principle the spread in its velocity must be large. This results in the ions becoming delocalized and more wave-like, allowing for their fast transit through the channels. Stapp’s idea was borne out when a team of researchers performed a quantum mechanical simulation of ions passing through voltage-gated ion channels. They found that, indeed, the ions become delocalized, or wave-like. (347) In addition, they discovered that the waves oscillate at very high frequencies, transferring energy to the surrounding proteins and effectively staving off decoherence. The team concluded that quantum coherence plays an ‘indispensable’ role in the conduction of ions through nerve ion channels and is thereby an essential part of the thinking process. (348)
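Stapp's back-of-the-envelope reasoning can be reproduced in a few lines. The confinement width and the choice of a potassium ion below are my own illustrative values, not the figures used in the studies the authors cite.

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
AMU = 1.66053906660e-27  # atomic mass unit, kg

def min_velocity_spread(mass_kg, position_spread_m):
    """Heisenberg bound: delta_x * delta_p >= hbar / 2,
    so delta_v >= hbar / (2 * m * delta_x)."""
    return HBAR / (2 * mass_kg * position_spread_m)

# A potassium ion (~39 u) confined to roughly the width of an ion
# channel's narrowest point (~0.3 nm) -- illustrative numbers
m_k = 39 * AMU
dv = min_velocity_spread(m_k, 0.3e-9)
print(f"minimum velocity uncertainty: {dv:.2f} m/s")
```

The tighter the spatial confinement, the larger the mandatory velocity spread, which is the sense in which the ion's wave packet delocalizes inside the channel.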

 

 The question of the binding problem still remained: how do individual nerve pulses combine to form an overall experience of unified consciousness? The authors concluded that entanglement must somehow be involved, but they ruled out the possibility that the ions in individual nerves are coupled together through quantum entanglement. McFadden knew that neural firings create an electromagnetic field in the brain in the form of the alpha, beta, theta, and delta waves with which we are all familiar. He hypothesized that the opening and closing of the quantum coherent voltage-gated ion channels would necessarily couple to the brain's electromagnetic field, which would encode the same information as the neural firings themselves. This, he believed, would solve the binding problem. One test of the hypothesis is whether the reverse holds, and laboratory experiments have shown that exposing a subject to electromagnetic fields similar to those produced naturally by the brain sets off coordinated neural firings that produce conscious experiences as if they were “real experiences.” McFadden believes that this likely solves the binding problem of consciousness.

 

  Erwin Schrodinger asked the question, what is life? This book goes as far as any in answering that question. The authors state that life straddles the classical and quantum worlds taking advantage of both. At the classical molecular level of thermodynamics, life exploits thermal noise orchestrating and entraining the various frequencies to achieve coherence. At the quantum level life takes advantage of the phenomena of quantum tunneling, coherence, superposition, and entanglement to carry out functions that would take too much time and be too inefficient in the classical world. Life straddles the classical and quantum worlds.





The Vital Question: Energy, Evolution, and the Origins of Complex Life

Nick Lane

Nick Lane is a biochemist at University College London, and was awarded the 2015 Biochemical Society Award for his outstanding contribution to the molecular life sciences.  He is the author of three other books, including the prize-winning Life Ascending.

  Biochemistry is in the midst of a golden age of discovery, and Nick Lane is at the forefront, having won numerous awards for his contributions to the life sciences. In this work, he identifies the vital unsolved questions in the field of biology and provides plausible solutions to these mysteries, including the enigma of why life emerged only once on this planet, why no evolutionary intermediaries exist between simple and complex life, and the most vital question of all, how life began. 

  During the earth’s four-billion-year history, it appears that life emerged only once, just 500 million years after the earth’s formation. Early life consisted of prokaryotes (cells without a nucleus) in the form of bacteria and archaea, a third domain of life discovered by Carl Woese in the 1970s. Over billions of years, through extreme environmental and ecological changes, these organisms have filled every conceivable niche on our planet. Photosynthetic bacteria have bioengineered our planet on a colossal scale: creating the oxygen we breathe, changing the chemistry of the atmosphere and oceans, and building up continents with sedimentary rock and minerals as their bodies fall to the ocean floor; in short, creating Gaia, our living planet. Yet, after all this time, they have shown little change in form or complexity. Then, seemingly without any intermediate steps, the eukaryotes (cells with a nucleus) sprang into existence, giving rise to all plants, animals, and fungi found today. 

  According to the cherished standard model of evolution, evolutionary changes are incremental, occurring over generations as a result of adaptation (the notion that changes accumulate through a species’ ability to survive and pass on its genes to succeeding generations) and random mutation (the idea that favorable traits are expressed in an organism only as a result of random copying errors in replication and coding). With these principles in mind, it is hard to understand how complex eukaryotic cells appeared virtually overnight. 

  In 1967, biologist Lynn Margulis proposed a modification to the standard model of evolution. Her astute analysis of paleontological history revealed that evolution rarely occurs in a Darwinian or Malthusian way, with species battling for limited resources. Instead, she discovered that most evolutionary advances occur as a result of cooperation and symbiotic relationships. During the Precambrian, for example, simple prokaryotic bacteria banded together to form mats, or layers of identical cells protected by a biofilm membrane. By banding together, these symbiotic stromatolites were able to enhance their survivability by sharing genetic information, specializing in task functions, and increasing their collective awareness. Margulis went further when she proposed the radical idea that cells cooperated so closely that they merged by getting inside one another. It is now widely accepted that the mitochondria in animals and the chloroplasts in plants are the result of endosymbiosis between bacteria and archaea.

   Author Nick Lane believes that early on in the history of life on earth complex eukaryotic cells arose on just one occasion through a singular endosymbiosis between an archaeon host cell and a bacterial invader creating the precursor of eukaryotic cells.  Lane says that this endosymbiotic event might have occurred more than once but those experiments never survived.  Over time, all of the complex features of modern eukaryotes including straight chromosomes, a membrane-bound nucleus, mitochondria, specialized organelles, a dynamic cytoskeleton, and total organism replication and reproduction arose by standard Darwinian evolution. 

  One simple reason that bacteria and archaea have not evolved greater complexity comes down to energy: cells spend about 80% of their total energy on protein synthesis. Protein synthesis is both expensive and time-consuming, draining energy from transcription and translation, the means by which cells replicate. Organisms such as bacteria and archaea have survived over eons because of their ability to replicate quickly, thereby adapting to the changing ecological conditions on earth. There is no advantage for bacteria devoid of mitochondria and other specialized organelles to become larger or more complex, because that only increases energy expenditure. Instead, bacteria and archaea have survived through reductive evolution: reducing overhead, losing unnecessary, expensive genes, and streamlining their ability to reproduce quickly and efficiently. 

 Eukaryotes have solved this energy catastrophe through specialization. Mitochondria, which are ancestral bacteria, have over time lost most of their DNA to the nucleus of their host eukaryotic cells. This adaptation has freed up the mitochondria in each cell to specialize in energy production, turning ADP into ATP, the energy currency of respiration. This is an enormous task, since a single cell consumes around two million molecules of ATP every second. Genes that migrated to the nucleus from the ancestral bacterial mitochondria have evolved to perform other useful tasks, such as producing ribosomal DNA and attending to other structural needs of the cell.

  All cells, both eukaryotic and prokaryotic, have one essential commonality involving the method of energy production by burning food in the process of respiration.  All living cells power themselves through a process of pumping protons across a membrane creating a reservoir of electrical imbalance.  The back-flow of these protons is used by cells to produce physical work such as turning the rotors of nanomachines, just as water through a dam turns a turbine.  This process provided Lane a clue in his attempt to find geochemical processes that would mimic biological energy production.  If he could discover this mechanism in the natural world, it would go a long way in solving the mystery as to how life emerged from geochemical processes.  But before we look at Lane’s idea, let’s take a look at some other proposals for how life originated on this planet.
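The dam-and-turbine analogy can be given rough numbers. The figures below (a proton-motive force of about 180 mV, and roughly 50 kJ/mol to make ATP under cellular conditions) are standard textbook values I am supplying for illustration; they are not taken from Lane's book.

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C (so J per volt per proton)
AVOGADRO = 6.02214076e23    # particles per mole

# Typical textbook figures -- illustrative assumptions, not Lane's numbers
PROTON_MOTIVE_FORCE = 0.18  # volts (~180 mV across the membrane)
ATP_SYNTHESIS_COST = 50e3   # J/mol under cellular conditions

# Energy a single proton releases flowing back across the membrane
energy_per_proton = E_CHARGE * PROTON_MOTIVE_FORCE  # joules
# Energy needed to make one molecule of ATP
energy_per_atp = ATP_SYNTHESIS_COST / AVOGADRO      # joules
# Thermodynamic minimum number of protons per ATP
protons_per_atp = energy_per_atp / energy_per_proton

print(f"energy released per proton: {energy_per_proton:.2e} J")
print(f"energy needed per ATP:      {energy_per_atp:.2e} J")
print(f"minimum protons per ATP:    {protons_per_atp:.1f}")
```

With these assumed values the back-flow of roughly three protons pays for one ATP, which is the right order of magnitude for the rotary ATP synthase "turbine" the analogy describes.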

   One idea, called panspermia and championed by physicist Fred Hoyle and Chandra Wickramasinghe, suggests that life could have been seeded on earth from interstellar space. This might have happened in several ways. Spectral analysis of interstellar dust in deep space has shown that much of it is made up of carbon and possibly carbon compounds. Hoyle and Wickramasinghe proposed that this interstellar matter would, on occasion, come into contact with the earth as it travels through space, depositing organic material in the atmosphere and on land and thereby jump-starting life. A similar hypothesis contends that organic compounds might have arrived on earth inside meteors or comets. We know that spores can withstand the extremes of space for years, so this hypothesis is within reason. In fact, scientists have discovered precursors of life inside meteorites that have arrived on earth from Mars and deep space. 

 A variation of Hoyle’s idea called directed panspermia has recently gained acceptance among a few academics. In this scenario, an advanced alien civilization might have purposely seeded our planet with life to create a living laboratory. Both types of panspermia are plausible, but this kind of thinking simply kicks the can down the road. Lane believes that life in the cosmos is rare, but that if life did emerge on other planets, it probably happened by the same geological and chemical processes that occurred on earth, processes that are only now being investigated by scientists such as Lane. Some of the earliest ideas concerning the terrestrial emergence of life were conceived in the 1950s.

   In 1953 Stanley Miller and his professor Harold Urey conducted a famous experiment in which they ran an electric current, simulating lightning, through gases they believed were abundant in the earth's early atmosphere, such as hydrogen, methane, and ammonia.  Their experiment, which they considered a success, suggested how organic chemistry could have arisen from these gases billions of years ago in a watery environment such as a stagnant pond.  Their hypothesized organic brew has famously become known as the primordial soup.  But Lane says that, except for the lack of oxygen, our planet's atmosphere 3.5 billion years ago was not that much different from today's, made up of carbon dioxide, water vapor, nitrogen gas, and sulphur dioxide.  What's more, life would have needed a steady, dynamic supply of nutrients, not the stagnant soup one would find in a pond. Though Miller and Urey's experiment was intriguing at the time, it belongs on a shelf next to the lore of Dr. Frankenstein.

  In order to explain how life arose, it is necessary to define life itself.  This has not been an easy task, and countless articles and books have attempted a definition.  Is a virus a living organism? A virus is extremely small and can only be seen under an electron microscope.  It has no internal cell structure, no cell wall, and no cell membrane, only a protein coat.  It does contain nucleic acid, either RNA or DNA; however, viruses cannot reproduce without hijacking another living cell's reproductive machinery.  A virus is therefore a parasite that will eventually die without a host, and because of these limitations most scientists would say that a virus is not alive.  But Lane counters that we ourselves are parasites of our environment, and the difference between us and a virus is merely the largesse of that environment.  Lane says it is fruitless to attempt a definition of life, since life is a seamless continuum between the non-living and the living.

  In this vein, Lane formulated his own recipe for the emergence of biological chemistry from geochemistry: rock, water, and carbon dioxide.  These simple ingredients are abundant not only on our planet but throughout the known universe. 

 We all know the importance of the first ingredient for life, water. Water is the universal solvent, promoting chemical interaction.  Water supplies both the hydrogen and the released oxygen in photosynthesis, the process that combines carbon dioxide from the air with light from the sun to produce carbohydrates and oxygen.  Most biochemical reactions either release a water molecule (a condensation reaction) or consume one (a hydrolysis reaction).  Fatty acids, with their hydrophobic tails, spontaneously assemble into spherical bilayer structures called vesicles, the precursors of cell membranes, thanks to the random thermal buffeting they receive while floating in water. Hydrogen provided by water is a component of all the macromolecules of life: carbohydrates, fatty acids (lipids), amino acids, and nucleic acids.  The condensation reactions that join amino acids into the long chains of proteins release a water molecule each time the carboxyl (COOH) group of one amino acid bonds to the amino (NH2) group of the next.  And hydrogen bonds join together the double strands of DNA. 

 Carbon dioxide, the second ingredient in Lane's recipe, is the source of carbon for photosynthesis, which makes the sugars our bodies burn in respiration, turning food into ATP molecules. Carbon is unusual among the elements for its small size and its ability to form four covalent bonds, a result of having four vacancies in its outer electron shell.   All of the large organic molecules essential for life contain carbon atoms. 

 And the third ingredient is rock, specifically olivine, a mineral made up of ferrous iron and magnesium and one of the most abundant minerals in the universe, found in the accretion discs that formed the planets. Water oxidizes the ferrous iron in olivine, releasing heat and generating large amounts of hydrogen gas, the product essential for life, and leaving behind the mineral serpentinite as waste.  
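The oxidation described here, known as serpentinization, can be sketched with a standard simplified reaction for the iron-rich end-member of olivine (fayalite); this equation is textbook geochemistry, not taken from the review or from Lane's book:

```latex
% Simplified serpentinization of fayalite (iron-rich olivine):
% ferrous iron is oxidized by water, releasing hydrogen gas.
3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}
```

In real vents the magnesium-rich fraction of the rock also hydrates into serpentine minerals, the source of the serpentinite waste mentioned above.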

   These are the ingredients for life, but one can't just put them in a bowl and stir.  To begin the chain of chemical reactions leading to life, hydrogen gas (H2) and carbon dioxide (CO2) must react with one another to produce one of the simplest organic molecules, methane (CH4). This reaction does not occur under normal conditions.  In fact, it is very difficult to get hydrogen to react with carbon dioxide, and this was one of the problems that confronted Lane. 
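Written out in full (a standard chemical equation, not spelled out in the review), the reaction Lane needed is the reduction of carbon dioxide by hydrogen to methane and water:

```latex
\mathrm{CO_2} + 4\,\mathrm{H_2} \;\rightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O}
```

The overall reaction releases energy, yet it almost never happens spontaneously in a well-mixed solution, for the reasons discussed below.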

  All cells derive their energy from reduction/oxidation (redox) reactions, in which electrons are transferred from a donor to an acceptor molecule. Typically the acceptor is oxygen, but many pairs of molecules can perform redox reactions.  The molecule that receives electrons is said to be reduced, and the molecule that gives up electrons is said to be oxidized.  In respiration, or in a fire where carbon is burned, oxygen is reduced to water: each oxygen atom picks up two electrons (along with two protons, the nuclei of hydrogen atoms), and the final products are water and carbon dioxide.

 In the case of hydrogen gas (H2) and carbon dioxide (CO2, an acid when dissolved in water), it is hydrogen gas that wants to give up its electrons and become oxidized, while carbon dioxide wants to accept electrons and be reduced. Each has a reduction potential, a measure of its tendency to gain or lose electrons.   A molecule that wants to give up electrons, in this case hydrogen gas, has a negative reduction potential (-414 millivolts at a neutral pH), and a molecule that wants to accept electrons, in this case carbon dioxide, has a positive one. The reduction potential depends on the acidity of the solution.  High acidity raises the reduction potential of carbon dioxide, making it more positive and more eager to accept electrons, whereas an alkaline solution lowers the reduction potential of hydrogen gas, making it more negative and more likely to give up its electrons.   One would think, then, that changing the acidity of a solution would make it easier for hydrogen gas and carbon dioxide to react, but changing the acidity affects all the molecules in the solution in the same way, so hydrogen gas simply passes its electrons to the abundant protons (H+), re-forming hydrogen gas.  Nothing is gained, and we're right back where we started.  Simply changing the acidity of a solution won't make carbon dioxide and hydrogen gas any more likely to react to produce methane.
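The bookkeeping behind reduction potentials follows from a standard electrochemistry relation (my addition, not from the book or review): for a transfer of n electrons,

```latex
\Delta G = -\,n F \,\bigl(E_{\text{acceptor}} - E_{\text{donor}}\bigr)
```

where F is the Faraday constant. The reaction is favorable (negative free-energy change) only when the acceptor's potential is more positive than the donor's; shifting the pH of a single well-mixed solution shifts both potentials together, which is why acidity alone cannot coax hydrogen gas into reducing carbon dioxide.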

  The reaction of carbon dioxide and hydrogen gas is no problem for methane-producing archaea known as methanogens.  They get all the energy and carbon they need by reacting carbon dioxide with hydrogen gas, but they have evolved the internal apparatus needed to pump protons across a membrane; they are living cells, after all.  Many chemists working in the field have concluded that it is simply not possible for this reaction to occur under any non-biological, geochemical circumstances and so produce the precursors of life.

 Lane was not deterred, believing that if there really is a continuum between geochemical and biological processes, there should be a way to react CO2 with H2 naturally. He turned his thoughts to the ocean depths, the habitat of the impressive volcanic black smoker vents.  Here, where the earth's crust is thin, water percolates down through the sea floor to magma chambers, where it is heated and mixed with dissolved metals and sulphides.  Under extreme pressure this super-heated acidic brew blasts its way back up through the ocean floor, precipitating out iron sulphides and pyrite.  Lane concluded that black smoker vents are not conducive to reducing CO2 because the temperatures are too extreme; under those conditions the most stable carbon compound is CO2 itself. In addition, black smoker vents are too violent and short-lived to incubate life.  What is needed is a gentler, more stable setting.

  Alkaline hydrothermal vents seemed to Lane a much better candidate for the continuum between geochemical and biochemical processes.  Unlike black smokers, alkaline vents are not volcanic; they arise on the sea floor from a chemical reaction between water and rocks rich in olivine.  When olivine's ferrous iron meets water, it is oxidized to ferric oxide, releasing heat and generating hydrogen gas dissolved in warm alkaline fluids rich in magnesium hydroxides.

  According to Lane, alkaline hydrothermal vents have the perfect physical and chemical environment to kick-start life.  Unlike black smokers, with their central chimneys, alkaline vents have a microporous structure like a sponge, with thin, electrically conductive walls separating interconnected pores.  Warm currents passing through these micropores concentrate organic molecules such as amino acids, fatty acids, and nucleotides.  The interactions between these molecules often precipitate fatty acids into vesicles, the precursors of cell membranes, and occasionally polymerize amino acids and nucleotides into proteins and RNA.  These porous vent structures mimic the biological structures in mitochondria that pump protons across a membrane. But before organic molecules can be concentrated they must be created, and that was only one of the problems facing Lane. Another: if alkaline hydrothermal vents can create life, why aren't they incubating life today?

  It occurred to Lane that conditions four billion years ago, in Hadean times, were far different from conditions now. Under today's conditions there is not enough carbon to incubate life; however, estimates suggest that CO2 levels were anywhere from one hundred to one thousand times higher in the Hadean, making the oceans more acidic.  The combination of high carbon dioxide levels, mildly acidic oceans (pH 5-7), and warm alkaline fluids flowing past thin, electrically conductive iron-sulfide vent walls would have been ideally suited to reacting carbon dioxide with hydrogen gas to form methane (CH4), as long as oxygen was not present.  Under these conditions, at temperatures between 25 and 125 degrees centigrade, all four of the building blocks essential for life (amino acids, fatty acids, carbohydrates, and nucleotides) should form spontaneously from the reaction between hydrogen gas and carbon dioxide, releasing energy in the process. 

  We've seen that the oceans of the past were more acidic (acids have an excess of protons, or H+ ions, and bases an excess of hydroxyl, or OH-, ions). Because more hydrogen ions were available in the ancient acidic oceans, it would have been easier to transfer electrons, with their negative charges, onto carbon dioxide: a reduced carbon dioxide molecule would have had plenty of protons available to balance the charge of the electrons it accepted, stabilizing it in the process.  But having more protons floating around would not by itself solve the problem, because, as discussed earlier, it would also be easier to pass those electrons onto other molecules.  Remember, though, that we now have the structure of the alkaline vents to consider: a labyrinth of micropores through which both mildly acidic seawater and warm alkaline fluids would have been flowing.  On one side of a thin, electrically conductive micropore wall would be mildly acidic ocean water with available protons (H+); on the other side would be warm alkaline fluids rich in hydroxyl ions (OH-), creating an electrical imbalance very similar to the one across the membranes of the energy-producing mitochondria in our cells.  Under the alkaline conditions in the vent, the reduction potential of hydrogen gas, already negative at a neutral pH of 7, is pushed further negative, making it more likely to give up its electrons; conversely, under the acidic conditions on the other side, the reduction potential of CO2 becomes more positive, making it easier to accept them.  The end result is that hydrogen gas and carbon dioxide can react to form organic molecules!  Lane had found his geologic "mitochondria" in the alkaline vents of the ocean floor.  His hypothesis of a seamless transition between inorganic and organic processes was realized. 

  Lane goes into much more detail in this book, describing the evolutionary success of eukaryotes as owing, in large part, to their ability to recombine genetic material through sex.  Nick Lane's The Vital Question is dense but accessible to the patient lay reader. I think this is one of those landmark books that offer very plausible hypotheses for the vital questions concerning evolution and the origins of life.








The 37th Parallel

Ben Mezrich

Ben Mezrich, a graduate of Harvard, has published 18 books, including Bringing Down the House and The Accidental Billionaires, both of which were New York Times bestsellers and have been adapted into award-winning films. His latest book, The 37th Parallel, is in production for a future big-screen release.

  Ben Mezrich's book The 37th Parallel is a fictionalized account based on the true-life adventures of Chuck Zukowski, a microchip engineer, sheriff's deputy, and independent UFO investigator, and his sister Debbie Zukowski, a Missouri MUFON field investigator and STAR team member.  The book details the very strange and often sobering events that have occurred along and on either side of latitude 37, a strip of territory running through our nation's midsection: Missouri, Kansas, Nebraska, Colorado, Utah, Nevada, and California to the west, and Illinois, Indiana, Ohio, West Virginia, and Virginia to the east.  An inordinate number of military bases and a statistically unlikely amount of paranormal anomaly and UFO activity have occurred, and still occur, along this UFO highway.  Most disturbing of all, over 10,000 surgically precise, bloodless animal mutilations have plagued ranchers and landowners along this corridor since the early 1970s. 

  The cattle mutilations became so bad, with one rancher near Dulce, New Mexico losing 50 head of cattle to the bizarre phenomenon between 1975 and 1983, that Senator Floyd Haskell of Colorado and Senator Harrison Schmitt of New Mexico got involved, writing letters to the FBI office in Denver and to the attorney general of the United States, respectively.  Their actions initiated a formal FBI investigation in 1980.  

  The 37th Parallel is full of intrigue, and author Mezrich makes the reader feel part of the brother and sister investigative team as they trek down the UFO highway, probing the mysterious UFO phenomenon, all the while being threatened and followed by mysterious unknown agencies apparently unhappy with their involvement.

  The 37th Parallel is written in a nonlinear sequence, jumping forward and backward in time with each new chapter, confusing at times but adding to the mystery and intrigue.  The book is a page-turner with an inexplicable surprise at the end.  I recommend it for those who enjoy a good real-life mystery and a writer adept at telling a good story.  I can't wait for the movie!







Origins of Consciousness:

How the Search to Understand the Nature of Consciousness is Leading to a New View of Reality

Adrian David Nelson

Adrian Nelson is a philosopher and psychologist specializing in consciousness and transpersonal psychology.

  Panpsychism, first championed by Baruch Spinoza and Gottfried Wilhelm Leibniz in the 17th century and later re-examined by William James in the 19th century, is now experiencing a resurgence through the work of contemporary scientists and philosophers.  Panpsychism is the philosophy that consciousness is fundamental to the universe.  Author Adrian Nelson calls this rising consensus establishing the primacy of consciousness the Intrinsic Consciousness Movement.    

  Panpsychism has gained acceptance because it addresses a long-standing mystery concerning consciousness called the hard problem, a term coined by philosopher David Chalmers.  Stated simply, the hard problem asks: how is it possible that billions of interconnected neurons, operating by classical biochemical and electrical processes, give rise to subjective experience?  Chalmers believes the answer might be associated with our new understanding of information, which since the mid-twentieth century has been taken by some to be more basic to reality than matter or energy.  All information has a physical representation; therefore, by definition, it abides by the laws of thermodynamics and relativity.  Like matter and energy, information is conserved; but unlike matter and energy, Nelson argues, information can be created from nothing.  For example, a quantum entangled system can be represented by one qubit of information.  A qubit, unlike a classical bit, does not represent a one or a zero, a yes or a no; rather, a qubit represents one and zero, yes and no, simultaneously.   As a result of this property, a single unobserved qubit can yield two bits of information after an observation, creating something from nothing.   Because of this interplay between information and a conscious observer, Nelson believes that information and consciousness must be fundamentally related.
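The qubit idea can be illustrated with a toy numerical sketch (my own illustration, not from the book): a qubit in equal superposition has definite measurement probabilities, yet each individual observation collapses it to a plain 0 or 1.

```python
import numpy as np

# A qubit in an equal superposition: amplitudes for the |0> and |1> states.
qubit = np.array([1, 1]) / np.sqrt(2)

# Born rule: measurement probabilities are the squared amplitude magnitudes,
# here roughly [0.5, 0.5].
probs = np.abs(qubit) ** 2

# Simulate repeated measurements: each one collapses to a definite 0 or 1.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=1000, p=probs)
```

Before measurement the state carries both possibilities at once; after measurement, only a single classical bit remains in each trial.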

    In the famous two-hole, or double-slit, experiment, for example, a subatomic particle such as a photon of light or an electron will behave one way when unobserved and another way when observed, despite the lack of physical interaction.  In fact, experiments have demonstrated that just the possibility of gaining information about a system will change the outcome of the experiment, and this "effect" can be retroactive in time and space.  In other words, an observation in the present can influence and change the previous behavior of a particle. Somehow, information and consciousness are related and independent of the constraints of space and time.

 Physicist John Wheeler, who spent most of his career at Princeton, once proclaimed that the most important lesson of quantum theory and experiment is that it has destroyed the concept of the universe 'sitting out there' independent of observation.  We are participants in the universe, and that alone appears to make consciousness fundamental.  Despite this link, there have been few practical experiments that directly address the effects of consciousness on material substances.  According to Nelson, this lack of experimentation is a result of two factors.  First, anything dealing with psi, defined as extrasensory perception and psychokinesis experiences not explained by known physical or biological mechanisms, has been taboo among mainstream scientists because of past shabby scientific methods and even trickery.  Second, it is often difficult to design experiments with humans that rule out subjective factors that could skew the results.  But this is beginning to change.

    In the 1970s Helmut Schmidt, a German-born physicist working for Boeing, became intrigued by the possibility of mind-matter interactions.  Previous psi experimental results were often suspected of hidden bias, so, drawing on his physics background, he created the first truly random event generator, or REG.  Because quantum events are by nature random, Schmidt's device takes advantage of these quantum properties: a small piece of weakly radioactive material randomly emits electrons, which trigger a circle of electronic lights to advance either clockwise or counterclockwise.

  Using this device, he asked participants to attempt to influence the direction of movement of the lights using only their thoughts.  The results, recorded by automatic printouts, were subtle but intriguing, showing a small though significant departure from randomness.  Distance and time seemed to have no effect on the output data: the results were very similar whether participants were near the device or many miles away.  Even stranger, participants could achieve significant results regardless of whether sessions ran simultaneously with the REG or whether they attempted to influence data that had been produced previously. (77)  This finding was confirmed by researchers at the PEAR lab at Princeton University.

  Princeton Engineering Anomalies Research (PEAR) was established in 1979 by physicist Robert Jahn while he was serving as Dean of Princeton University's School of Engineering and Applied Science.  An engineering student had approached him in his office and asked whether she could set up some psi experiments.  Jahn was skeptical but agreed to allow her to run some pilot programs.  To his surprise, the early experiments showed promise, so he established a full-time program, helping to lend respectability to psi research. (72)

  While working at PEAR, psychologist Roger Nelson, a long-time member, was intrigued by a particular phenomenon discovered at the lab: moments of high emotion, suspense, or shared experience were often found to correlate with statistically significant deviations from chance in REG output.  In 1997 Nelson teamed up with psychologist Dean Radin to explore the possibility that emotional states could be amplified and act coherently.  Specifically, they wondered whether significant global events would trigger coherent waves of consciousness measurable as deviations in REG output.

  In 1998, with the help of colleagues, Nelson and Radin began setting up the Global Consciousness Project (GCP), a worldwide network of REGs whose results are fed to a central location through the internet.  Mass meditations, deaths of public figures, natural disasters, major international sporting events, the coming of the new millennium, and the World Trade Center attacks were some of the significant events recorded. 

  Three seconds before midnight, as billions anticipated the coming of the year 2000 and the possibility of apocalyptic occurrences, the network of REGs deviated from randomness with odds of 1,300 to 1 against chance, a very large discrepancy.  Even this result paled in comparison with that of September 11, 2001, which produced the largest deviation from chance since the project began.  Oddly, the spike began in the REGs more than four hours before the first plane hit the towers. The largest spike occurred on the east coast, possibly because local events usually trigger stronger emotional responses.

  Another of Radin's projects, conducted at the Institute of Noetic Sciences (IONS) in Northern California, was, like Schmidt's REG device, designed to remove all external variables other than the possible influence of an observer's attending mind. 

  In these experiments Radin used an instrument called a Michelson interferometer, in which photons can travel two possible routes.  Instead of slits, the interferometer uses a half-silvered mirror that sends half of the photons to one fully-silvered mirror and the other half to a second.  Both mirrors redirect the photons to a detection screen.  As in the classical photon experiments, the photons create an interference pattern on the detection screen when the two paths are equidistant.  When one of the two paths is blocked, or if an observation or measurement occurs between the fully-silvered mirrors and the detection screen, the photon retroactively takes only one path and no interference pattern occurs.
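The two-path logic can be sketched numerically (a generic textbook toy model, not Radin's apparatus): when the two path amplitudes are added before squaring, the intensity shows fringes; when which-path information forces the probabilities to add instead, the fringes vanish.

```python
import numpy as np

# Phase difference between the two arms as the path length varies.
phase = np.linspace(0, 2 * np.pi, 100)

amp1 = 1 / np.sqrt(2)                   # fixed reference arm
amp2 = np.exp(1j * phase) / np.sqrt(2)  # arm with variable phase

# No which-path information: amplitudes add, then square -> fringes.
coherent = np.abs(amp1 + amp2) ** 2     # equals 1 + cos(phase)

# Which-path information available: probabilities add -> flat, no fringes.
incoherent = np.abs(amp1) ** 2 + np.abs(amp2) ** 2
```

The coherent intensity swings between bright and dark as the phase changes, while the incoherent case is a featureless constant, which is what "gaining information collapses the interference pattern" means in these experiments.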

   Participants sat in electromagnetically shielded chambers isolated from the outside environment.  Two groups were used: one consisted of meditators, the other of people untrained in that mental technique. Their task was simply to quiet their minds and attempt to gain information about the photons, which in similar physics experiments "causes" a collapse of the wave function, the photon taking only one path and the interference pattern disappearing.  Several control trials were also run in which participants did not attend to the task.  The results were compelling: a statistically significant deviation from chance with odds of 500 to 1. When the data were separated by group, the differences were stark.  The untrained group's effect on the interferometer was about that of chance, no effect at all; the meditators' results, however, were astonishing, with odds of 107,000 to 1 against chance!

  In 2012, Radin and five fellow researchers submitted a paper to the journal Physics Essays detailing a further series of experiments using a classical double-slit arrangement in which photons were directed toward two tiny holes in a barrier.  A sensitive digital camera captured and recorded the intensity of the interference pattern.   A decrease in illumination reflects a weakening of the interference pattern; an increase in illumination reflects a strengthening of it.  In this experiment, a decrease in illumination indicates that participants have gained information about the route of the photons, "causing" them to take individual paths, reducing the interference pattern and demonstrating a mind-matter effect. The results confirmed the previous findings: meditators were significantly better at reducing interference than untrained participants, with odds against chance of 13,800 to 1.

  Radin then designed an experiment demonstrating one of the most bizarre aspects of quantum theory, dubbed the delayed-choice effect by physicist John Wheeler.  Quantum particle experiments suggest that an observation in the present can retroactively determine the path of subatomic particles.  In the double-slit experiment, for example, individual photons travel from a source to a detection screen past a barrier with two closely spaced holes; the photon's probability wave takes both routes through the two holes, creating an interference pattern on the detection screen.  If an observation takes place between the barrier and the screen, after the photons should already have passed through the holes, each photon retroactively chooses one hole or the other, creating a single strike on the screen and no interference pattern.

 Radin's experiment graphically demonstrated just such a retroactive "effect."  As in the previous experiment, participants were asked to attempt to gain information about the photons in order to reduce the interference pattern and thereby the illumination.  This time, however, the trials with the interference device had been run and recorded three months before Radin's experiment, with the results left unanalyzed.  This is logically equivalent to the famous Schrödinger's cat thought experiment, in which a radioactive particle with a short half-life, a vial of poison, and a cat are placed in a sealed box. If the particle decays within a given amount of time, the vial is broken and the cat dies; if it does not, the vial remains intact and the cat lives.  After a prescribed interval in which the particle has a 50/50 chance of decay, the experiment is concluded, but the results are not known.  The cat and its coupled radioactive element enter a superposition of states: decay/no decay, dead cat/live cat.  The results cannot be known until the box is opened and the situation observed.

  The results of Radin's retroactive experiments were similar to those done in real time, with odds against chance of 13,800 to 1, demonstrating Wheeler's delayed-choice effect.

   If consciousness is fundamental, a basic constituent of the universe, we have to wonder how consciousness evolved with the universe.  As Albert Einstein once said, the most incomprehensible thing about the universe is that it is comprehensible.  How is it that the universe created organisms that can reflect on the universe that created them?  Is the universe comprehensible because it produced life in its own image?  

  As we begin to understand more about the cosmos, it has become apparent that the universe is finely tuned to produce life.  If any of the approximately 20 cosmological constants deviated slightly from their current values, stars would not form and we would not be here to ponder the universe.  How is it that the universe is perfectly fine-tuned to produce life and mind?

  One idea is similar to biology's evolutionary theory, in which only those species that are adaptable survive; the millions of species that could not adapt and went extinct are not around to witness nature.  Similarly, some cosmologists believe that trillions of universes are popping up with different laws of physics that are not conducive to producing life.  Only our universe happened to have just the right formula to generate life, and as a result we are around to ponder it.  Physicist Paul Davies is one of many scientists who do not buy into this idea, for multiple reasons. Chief among them is the question of how the universe-generating mechanism itself came into being.  Like religious claims about an intelligent creator, the multiverse hypothesis leads to questions that quickly spiral toward infinite regression.  Rather than providing a more elegant explanation, the multiverse is a cop-out, avoiding the question by appealing to something vastly more complex.  Davies says we should first exhaust theories that do not appeal to anything beyond the universe. (134)

  Physicist John Wheeler came up with a more elegant idea, consistent with the laws of quantum theory.  Wheeler believed that the laws of physics are neither fundamental nor static, because information is constantly being created with the expansion of space-time.  Since the laws of physics are described by mathematics, and mathematics is a product of informational statements, the laws of physics and the mathematics that describe them must reflect a situation in which information evolves with the expanding universe.  The information created in the evolving universe resides in a superposition of states, and observation collapses the superposition into a single reality, a reality consistent with our beliefs.  Together, we and the universe evolve, creating our past and future.  If we accept Wheeler's delayed-choice experiments, which have been demonstrated experimentally, then we have to entertain the idea that we, the observers, have brought the universe into being through retroactive observation.

  The idea of retroactive processes and observation has only recently come to light in biology.  The process of blood clotting in mammals has long confounded microbiologists: a cascade of hundreds of chemical reactions must occur at just the right time and place for clotting to occur, and any error in the process would be catastrophic. Surprisingly, later chemical events in the cascade appear to influence earlier ones without regard to linear time.  Retroactivity is also apparent in cellular respiration, in which energy "borrowing" takes place, borrowing from a future energy source to sustain a current process. Space and time could well turn out to be illusory, a product of consciousness itself. 

   If we are participants in the creation of the universe, then the universe is not entirely random and our choices, a product of our free will, determine what happens on a grand scale. But if consciousness is fundamental, then it is not only part of our being but part of the universe as well. Adrian Nelson suggests that it is more accurate to say that free will has us than to say that we have free will.

  This book raises interesting questions about consciousness and the role of observation. Experiments involving psi are given some credence, but still, one has to wonder why the effect is so marginal. Certainly time will tell. Nelson does an excellent job of presenting his case that information and consciousness are fundamentally related.

 

The Epigenetics Revolution:

How Modern Biology is Rewriting Our Understanding of Genetics, Disease, and Inheritance

Nessa Carey

Epigenetics can be defined from the level of the organism and its environment or from the fundamental level of biochemistry. On the large scale, we can say that environmental influences produce changes in an organism that, in some instances, can have genetic consequences in future generations in a Lamarckian way. Food shortages in a population, for example, can create stress in individuals, producing neurochemicals that alter an individual’s epigenetic program, and these changes can be passed on to future generations in the same way that the genome is passed on.
At the level of biochemistry, epigenetics is the reason that identical twins are not identical in every way. Identical twins have exactly the same genome, but differences in both physique and personality result from the environment at the molecular level of DNA and proteins within each cell. The DNA molecule is not the sleek, idealized representation we often see in textbooks but rather a bumpy molecule wrapped with additional molecules along its length. Small protein molecular machines tag the DNA with methyl and acetyl groups, small molecules that act like on-off switches making a gene either easier or more difficult to read.
These epigenetic switches are not all-or-nothing switches. They increase or decrease the likelihood that the gene will be read. I like to think of the difference between genes and the epigenome in terms of digital and analog. The DNA can be thought of as a digital system in which the gene is replicated by an all-or-nothing process, while the epigenetic program can be thought of as an analog system that creates a certain level of constraints on the digital code by “tuning” it up or down and changing the gene’s expression. Methyl groups squeeze the DNA strands tighter against the histone proteins making them harder to read while acetyl groups relax the strands making them easier to read. Although each cell in the body contains the same genetic code, the expression of the code is controlled by the epigenetic program making a skin cell different from a brain cell.
John Gurdon of Cambridge, England is one of the early pioneers of epigenetics. Gurdon’s contribution resulted from experiments with the African clawed frog. He wanted to know whether or not specialized cells all contained the same molecular code. To find out, he took nuclei from the cells of adult frogs and inserted them into unfertilized eggs that had had their own nuclei removed, a procedure called somatic cell nuclear transfer (SCNT). He was able to create healthy young frogs, thereby proving beyond doubt that when cells differentiate, their genetic material is not irreversibly lost or changed.
The reason for his success is that the cytoplasm of the egg can convert an adult nucleus into one that acts like the nucleus of a normal embryonic stem (ES) cell. In nature, when sperm and egg fuse to create a zygote, the cytoplasm of the egg very quickly reprograms the nuclei, making the zygote pluripotent. The cytoplasm of the egg is very efficient at erasing the epigenetic memory on our genes and establishing a blank slate. Most, but not all, of the adult epigenetic code is lost, and the zygote becomes pluripotent, able to make every cell in the body except the placenta. The zygote then divides a few times to create a bundle of cells, or blastocyst, with an outer layer called the trophectoderm, which eventually forms the placenta and the other extra-embryonic tissues, surrounding the inner cell mass. Just a small change in environment can turn stem cells into skin cells, or into heart cells that beat in time with one another.
Two Japanese biologists, Professor Shinya Yamanaka and his research associate Kazutoshi Takahashi, made a contribution to stem cell research that earned Yamanaka a Nobel Prize. They wondered whether it was possible to create pluripotent cells from differentiated cells, just as happens naturally in the egg’s cytoplasm. They started with 24 genes which, when switched on, keep embryonic stem cells pluripotent. If the pluripotency genes are switched off, the stem cells begin to differentiate and, under normal circumstances, do not revert to ES cells again.
But the two experimenters found that it takes only 4 of the 24 genes they started with to turn differentiated cells back into ES-like cells. Using various experimental techniques, the researchers were able to turn these reprogrammed cells into the three major differentiated tissue types from which all organs of the mammalian body are formed—ectoderm, mesoderm, and endoderm. Yamanaka called the cells he created “induced pluripotent stem cells,” or iPS cells. Surprisingly, of the roughly 20,000 genes in our genome, it takes only four to turn fully differentiated cells into pluripotent cells.
The ability to create pluripotent stem cells from differentiated cells has important implications for certain diseases such as diabetes. In type 1 diabetes, the insulin-producing beta cells located in the islets of Langerhans in the pancreas are destroyed, leaving the patient unable to control blood sugar. It might soon be possible to take differentiated skin cells from the patient, grow them in culture, and introduce the four pluripotency genes, turning them into stem cells. These can then be turned into beta cells in the lab and injected back into the patient without the chance of rejection by the patient’s own immune system.
To understand how the epigenome operates, it is necessary to understand the process of DNA replication and protein synthesis. Four bases in the DNA molecule, adenine (A), cytosine (C), guanine (G), and thymine (T), join the two strands of DNA together with hydrogen bonds. T always bonds with A, and C always bonds with G. The sequence of the four bases forms the coded language of the DNA strand. When a cell replicates, replication proteins move along each strand of DNA and form two identical double strands, using each original strand as a template. When the cell divides, each daughter cell ends up with a copy of the original DNA. Any copying error is recognized by other sets of proteins and repaired by replacing the mistaken base with the correct one. The 50 trillion cells in the body are all the result of perfect replication of DNA. This is incredible, since each cell contains six billion base pairs of DNA, half from our father and half from our mother.
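The pairing rule described above (T with A, C with G) means that each strand fully determines its partner, which is what makes template-based replication possible. A minimal sketch in Python (the sequences here are invented for illustration, not real genomic data):

```python
# Base-pairing rule: each base determines its partner, so one
# strand can serve as the template for building the other.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the complementary strand under A-T / C-G pairing."""
    return "".join(PAIR[base] for base in strand)

template = "ATCGGCTA"          # hypothetical fragment
copy = complement(template)
print(copy)                    # TAGCCGAT
# Complementing the copy regenerates the original template:
print(complement(copy) == template)  # True
```

Applying the rule twice returns the original sequence, which is the sense in which each daughter cell "ends up with a copy" of the parent strand.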
To make new proteins, the DNA molecule is unzipped and read to make complementary RNA copies. These RNA molecules travel out of the cell’s nucleus to the ribosomes, where the message is read three bases, one “codon,” at a time and long chains of amino acids are strung together to make proteins that might be 1,000 amino acids in length. The RNA transcript is divided into sequences of exons and introns, expressed sequences and intervening, unexpressed sequences respectively. All of these exons and introns are copied from the DNA to the RNA, but a second editing protein removes some, or all, of the intron sequences and then splices the remaining segments together in one or more ways, enabling one gene to code for more than one protein. (52) Only 2 percent of the total genome codes for proteins. The other 98 percent, once dismissed as “junk,” has now been found to regulate the expression of DNA as part of the epigenetic code.
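The splicing step described above, cutting out introns and joining the exons in more than one way so that a single gene yields several proteins, can be sketched with toy sequences (these segments are made up, not real genomic data):

```python
# Toy model of RNA splicing: introns are discarded and the chosen
# exons are joined in order. Selecting different exon subsets
# ("alternative splicing") lets one gene encode several proteins.
def splice(exons, keep):
    """Join the selected exons, in order, into one mature transcript."""
    return "".join(exons[i] for i in keep)

exons = ["AUG", "GCU", "UUA", "CGA"]   # hypothetical exon segments
print(splice(exons, [0, 1, 2, 3]))     # all exons kept
print(splice(exons, [0, 2, 3]))        # exon 1 skipped: a different product
```

The same exon list produces two different transcripts depending on which exons survive editing, which is the mechanism by which one gene sequence can code for more than one protein.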
Regulation of the DNA code is the work of epigenetic “writers” such as the enzymes DNMT1, DNMT3A, and DNMT3B. These writers place methyl groups (–CH3) on the base cytosine when it is immediately followed by a guanine, a site known as a CpG. Methylating a CpG is an epigenetic marker that changes how a gene is expressed: the more methylated CpGs on a stretch of DNA, the less likely it is to be expressed, and high levels of methylation switch genes off. It seems that methylation causes the DNA to coil more tightly around the histones, making it difficult for the gene-transcription machinery to get access. This is what keeps specialized cells such as neurons from producing, say, skin proteins or digestive enzymes. The methylation pattern is transferred from parent cell to daughter cell, making sure that specialized cells, such as skin cells, continue to produce skin cells.
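Since the methylation targets described here are simply a C immediately followed by a G, locating a strand’s potential methylation sites is a short scan. A sketch, using an invented sequence:

```python
# Locate CpG sites: every position where cytosine (C) is immediately
# followed by guanine (G) -- the sites DNMT enzymes can methylate.
def cpg_sites(strand):
    """Return the index of each CpG dinucleotide in the strand."""
    return [i for i in range(len(strand) - 1)
            if strand[i:i + 2] == "CG"]

seq = "ACGTCGGGCGA"        # hypothetical DNA fragment
print(cpg_sites(seq))      # [1, 4, 8] -- three potential methylation sites
```

Counting these sites along a gene is, in the book’s terms, counting the dimmer switches available to the epigenetic writers: more methylated CpGs, less expression.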
Alongside methylation, which keeps genes from being expressed, there exists another epigenetic modifier, acetylation, which attaches an acetyl group to a lysine on the floppy tail of one of the histones in the octamer, a globular structure of eight closely packed proteins around which the DNA is tightly wrapped. Acetylation increases the likelihood that a gene will be expressed. But methylation and acetylation are not on-off switches; these modifiers simply increase or decrease the likelihood of genes being expressed.
As it turns out, over 50 epigenetic modifiers of the histone proteins have been identified, some increasing and some decreasing gene expression. Collectively these modifications are called the histone code. Methylation is a more permanent mark than acetylation: methylation, for example, continues to make sure that skin cells remain skin cells, while acetylation is more fluid and can change gene expression in response to various hormones, making it more responsive to signals from the organism and from the outside environment.
The Human Genome Project, completed in 2003, was an extraordinary achievement, but the end result was one of disappointment. Researchers originally believed they would discover about 120,000 genes, but only about 20,000 were found, about the same number as in a roundworm. Obviously, the number of genes does not determine the complexity of an organism, as had been assumed. In effect, the Human Genome Project marked the end of the deterministic model of evolution. The project’s most important accomplishment was to peel away another layer of reality, paving the way for the next great leap forward: the epigenetic revolution. The Human Genome Project may be finished, but the epigenetic revolution will take decades, and it promises to fulfill the expectations the genome project failed to deliver: a cure for diseases and the manipulation of life itself.

 

 

The Biology of Belief:

Unleashing the Power of Consciousness, Matter & Miracles

Bruce H. Lipton, Ph.D.

 

Life began on Earth a mere half-billion years after the planet’s formation, in the form of single-celled prokaryotic organisms, the bacteria and archaea.  These organisms did not battle for supremacy but shared information about the environment through lateral gene transfer and chemical hormones.
  The first major symbiosis probably happened two billion years ago, when a single archaeon got inside a host bacterium, giving rise to the first single-celled organism with a nucleus.  The archaeon inside the bacterium took over the energy-producing process of respiration in the form of mitochondria, while shifting some of its genes to form the nucleus of its host.  This new eukaryotic cell evolved internal organelles permitting specialization within the individual cell, saving energy, and over time it evolved a cell membrane with a surface area one thousand times larger than that of the prokaryotes, vastly increasing the cell’s awareness of its environment.
Task specialization, chemical signaling, and gene programming took another major leap forward about 700 million years ago, when single-celled entities formed colonies, establishing the first multicellular organisms. We are the product of 50 trillion cells working in cooperation with 500 trillion microbes, mostly in our gut, that influence and regulate our cells and are, in turn, regulated by them. Rather than battles for limited resources, cooperation has been the primary driving force of evolution.
Biology’s Central Dogma, that information flows in one direction from an organism’s DNA, to RNA, and then to proteins, reigned supreme until the end of the Human Genome Project in 2003. The results of that project marked the end of the deterministic model and the Central Dogma. Before the project began, researchers set out to catalogue the roughly 120,000 genes they believed were necessary to code for the approximately 120,000 proteins known to exist in the human body. They were stunned to find a mere 20,000 genes, about the same number as in the common roundworm. Apparently, our complexity does not depend on the quantity of our genes. Research since that time has shown that the epigenetic code, the “program” controlling regulatory proteins located in the genome and cytoplasm of the cell, not only determines which genes get read but also influences the shape of the resulting protein, such that a single gene can yield some 2,000 protein variations from the same blueprint.  The epigenetic code, in turn, receives information from environmental signals inside and outside the cell. These findings substantiated the view that information in the cell flows in both directions: from proteins, to RNA, to DNA, to regulatory proteins, to environmental signals, and back again from environmental signals, to regulatory proteins, to DNA, to RNA, and to proteins.
 As extraordinary as the DNA molecule is, it is only the tool box, the gonads of the cell. The brain of the cell is the cell membrane, made up of two layers of phospholipid molecules with the electrically polar phosphate heads facing outward and the nonpolar lipid tails sandwiched in between, creating a functional liquid-crystal semiconductor, equivalent to a silicon chip. Piercing the membrane are antenna-like proteins, Integral Membrane Proteins (IMPs), which can be divided into two main categories: receptor proteins and effector proteins.
 The receptor IMPs are tiny, electrically charged antennas tuned to environmental signals, the equivalent of our sense organs. Some extend into the cytoplasm of the cell to monitor internal conditions, while others extend beyond the membrane’s outer surface, each tuned to receive a unique environmental signal. Some IMPs are tuned to accept chemical signals such as estrogen, histamine, or insulin, while others detect electromagnetic energy fields such as light and sound. Once a chemical locks onto a receptor or an electromagnetic signal is received, the IMP’s electrical charge changes and it shifts into its active form. The receptor proteins then pass the information on to the effector proteins, which travel through the cytoplasm to affect the DNA-replication and protein-synthesis machinery.
Some effector proteins act like revolving doors in the membrane itself. Every cell membrane has thousands of sodium-potassium ATPase proteins, whose activity uses almost half of our body’s energy every day. Their task is to shuttle ions across the membrane barrier. As the protein revolves, it moves three positively charged sodium ions out of the cytoplasm and simultaneously admits two positively charged potassium ions, creating an electrical potential across the membrane such that the interior of the cell remains negatively charged and the exterior positively charged.
Other effector proteins control genes by making them more or less easily accessed. These effector proteins control the binding of the regulatory proteins that form a sleeve around the DNA molecule and, in turn, determine which genes get read. The receptor/effector protein system is the mechanism by which the liquid-crystal membrane, the cell’s CPU, can be programmed by the environment outside the cell.
Multicellular systems like us have evolved two primary mechanisms that mimic the receptor/effector protein system of single cells. One, designed to perceive external threats, is the hypothalamus-pituitary-adrenal (HPA) axis, the equivalent of the receptor/effector system of the IMPs in individual cells. When the hypothalamus perceives an external threat, it signals the pituitary, the “master gland,” which is homologous to the individual cell’s effector proteins. The pituitary then activates organs such as the adrenal glands, initiating the fight-or-flight response. The stress hormones released into the circulatory system constrict the blood vessels of the digestive tract to save energy while increasing blood flow to the arms and legs. Short-term activation of the HPA axis is good for short bursts of energy, but long-term stress is very harmful to the body because it keeps us in a perpetual fight-or-flight state.
 The second mechanism is the immune system, which protects us from internal threats. Each of these systems operates at the expense of the other in terms of energy use. When the immune system is fighting a virus, for example, we feel tired because it is using much of the body’s energy, and when we are in crisis mode involving the HPA axis, our immune system is weakened. Stress hormones actually shrink the hippocampus, the site of neural stem cells, and the prefrontal cortex, the center of higher reasoning, often resulting in depression. Letting go of fear and stress is the first step toward creating a fuller, more satisfying life, according to Lipton.
  The brain of a multicellular organism is homologous to the cell membrane. Our conscious mind can generate emotions through thought, producing the controlled release of regulatory chemicals or signals. Thinking about a sexual encounter or a terrifying situation, for example, will create neuropeptides that course through the blood. These neurochemicals are received by the IMPs of the cell, influencing our genetic machinery. When we change our beliefs, we change the neurochemicals, initiating a complementary change in the body’s cells. According to Lipton, “The function of the mind is to create coherence between our beliefs and the reality we experience.” Positive perceptions enhance health by engaging immune functions; negative thoughts and perceptions inhibit the immune system and can precipitate disease.
Lipton emphasizes, though, that thinking positive or negative thoughts rarely, if ever, leads to physical cures. In order to effect real change, it is necessary to change our deep-seated beliefs, which is difficult and often takes much time and effort. Our conscious mind processes about forty environmental signals per second, while the subconscious mind, the seat of belief, processes about twenty million environmental stimuli per second. Our conscious mind gives us the illusion that we are in control, but neuroscientists contend that only about five percent of our decisions are conscious. Most are generated by the subconscious mind, which carries out automatic programs that have been in place since about the age of six. This is where our deep-seated beliefs reside, and it is no wonder they are difficult to change. Beliefs are very powerful, and we need look no further than the voluminous medical history of the placebo response to see that beliefs can effect cures for disease.
Lipton’s aha experience came when he realized that we are inseparable from the universe. Every functional protein in our body is a physical/electromagnetic complement to something in the environment. This implies that if we change the environment too much, we will no longer be the environment’s complement. Our identity, our idea of “self,” is not located in our cells. What makes us unique is that the IMPs on our cell membranes are attuned to specific environmental signals. If our particular identity receptors were removed from our cells, they would become generic cells. The cell’s receptors are not the source of its identity but the vehicle by which the self is downloaded from the environment. Evidence for the supposition that cell receptors create our identity by downloading information from the field comes from transplant patients who report behavioral and psychological changes matching those of the donor after receiving an organ.  When we die, the self is still out there in the information field, according to Lipton.
The Biology of Belief convincingly argues that we are inseparable from our environment at all levels of complexity, and that our beliefs can effect physical change.



The Enigma of Cranial Deformation:

Elongated Skulls of the Ancients

David Hatcher Childress
Brien Foerster

The enigma of cranial deformation, still practiced today in some parts of the world, dates back at least 47,000 years to the time of the Neanderthals, and its cultural influence has touched every part of the planet: Central and South America, Africa and the Far East, the Middle East and Europe, and even remote islands of the Pacific Ocean.
The technique of skull deformation often begins about one month after a child is born, before the hard bones of the skull have formed and while the skull is still malleable. Bindings of rope, wood, or cloth are wrapped around the child’s skull and tightened over time until the desired shape is achieved. By two years of age the restrictors are no longer needed, and the head continues to grow in the prescribed shape. These techniques produce the two basic types of artificial cranial deformation evidenced in the fossil record: dolichocephalic skulls, which are long and narrow, and brachycephalic skulls, which are broad and round.
Despite the widespread, centuries-old practice of cranial deformation, few mainstream scientists have investigated the phenomenon. The reasons are varied, but in general they come down to the difficulty of changing established belief systems. This intransigence is true of all scientific disciplines, but archeology is especially susceptible because of the slow and meticulous means of procuring and interpreting artifacts. Archeological digs can take decades, and the recovered artifacts are often sparse and open to interpretation. But once a consensus has taken hold, a scientific archetype established, and the textbooks written, an enormous amount of incontrovertible data is needed to change established dogma.
Advocates of the philosophy of Isolationism, which has been the primary working hypothesis of nearly all archeologists, maintain that ancient cultures separated by oceans had no contact with each other. This outdated paradigm persists among academics despite decades of accumulating evidence to the contrary, providing a glaring example of archeology’s resistance to change. Pyramid building, prolific on all but one continent on earth, is an example of isolationists’ attempts to fit a square peg into a round hole. Although the similarities in the techniques and mathematical precision with which these structures were built are well documented, isolationists still cling to the idea that similarities in structure and design found on opposite sides of the oceans are just a matter of coincidence. Pyramids, they claim, are simply the easiest structures to build; yet to this day we have no idea how the ancients built some of these complex structures. Because of the disparity between evidence and theory, a small but growing number of academics now support the philosophy of Diffusion, the proposition that ancient man could span the oceans of the world.
A third, more controversial idea, which I refer to as Visitation, has recently emerged among advocates of the “ancient alien” hypothesis. While not negating either isolationism or diffusion, this philosophy contends that an extraterrestrial race has, and possibly still is, influencing humanity on a world-wide scale.
The few scholars who have had enough curiosity and courage to investigate cranial deformations have not reached consensus as to the reasons that societal groups participated in artificial head binding, but the majority opinion, preferred by isolationists, is that it was performed to delineate persons of high status and royalty. This idea is difficult to defend, according to Childress, because it doesn’t explain why the practice was worldwide, nor does it offer any explanation as to why cranial deformation would have been discovered in the first place or why it would have been considered elitist. The author states:
The standard explanation [for cranial deformation] is that it was a style of the day and the ‘technology’ of deforming crania was relatively simple…But once again we have to ask ourselves—why would this be the style? It seems pretty odd.
Also, given the currently reigning isolationist doctrine of mainstream archeology, this strange style adopted by widely separated cultures would have to have developed independently in each place. This seems almost impossible.

In addition to Childress’ objection, archeologists have unearthed evidence that, in some cultures, elongated skulls have been the norm rather than the exception. This would seem to indicate that cranial deformation was not just for the elite.
Other researchers believe that cranial deformation was practiced to imitate a separate species of humans of higher intelligence, or possibly an extraterrestrial race of “watchers” who, according to the Book of Enoch, ruled the earth in the past. This hypothesis could theoretically fit within any of the three philosophical paradigms, with one caveat: if the practice of cranial deformation was a form of imitation, then isolationism would be viable if, and only if, visitation occurred, since it would be extremely unlikely that a genetically distinct species of humans would have proliferated in both hemispheres of the earth and yet become extinct fairly recently. The same criticism applies to diffusionism.
Alternatively, the world-wide practice of cranial deformation fits nicely into the visitationalist hypothesis. By definition the visitation hypothesis would allow “teachers” to roam the earth dispersing knowledge and technology regardless of whether cultures intermingled.
The latter two premises, that cranial deformation was practiced to imitate either a genetically distinct species of highly intelligent humans or an extraterrestrial species, are bolstered by researchers in the field. Dr. Johann Jakob von Tschudi studied two pre-Incan cultures displaying dolichocephalism, the Aymaras and the Huancas of Peru. He noted that the crania of two children less than a year old had the same deformations as the adults. More impressive still was the finding of a seven-month-old fetus, displaying dolichocephalism, in the womb of a mummified pregnant woman, giving credence to the idea that elongated skulls have a genetic component, either terrestrial or extraterrestrial.
The elongation of a skull does not necessarily increase its interior capacity; most artificially deformed skulls have the same capacity as normal skulls. However, dolichocephalic skulls found among some of the earliest cultures in the world, in Malta and Iraq, have a capacity of between 2,200 and 2,500 cubic centimeters, compared with around 1,450 cubic centimeters for modern humans, indicating once again that the elongation is genetic in nature.
On the Aleutian island of Shemya, engineers bulldozing an airstrip found skeletons that included a large cranium measuring 23 inches from base to crown, about twice the size of a normal human skull. These skulls had been trepanned, a surgical process of drilling or sawing square or round holes in the skull. Patients usually survived this surgery, as most excavated skulls show that the wounds had healed. Childress states:
Trepanation is perhaps the oldest surgical procedure for which there is evidence, and in some areas, may have been quite widespread. Out of 120 prehistoric skulls found at one burial site in France dated to 6500 BC, 40 had trepanation holes.
Other examples bolster the hypothesis that cranial deformation was a genetic trait, either terrestrial or extraterrestrial. Skulls displayed in the Ica Regional Museum in Peru are not just deformed and elongated, but seem to be much larger in volume. Some have speculated that the cranial capacity is between 2 and 2.4 liters compared to a modern human skull that averages about 1.3 liters. But the sheer enormity of these skulls is not the only indicator that these skulls belong to a separate race. Several other genetic markers are present.
Many of the skulls displayed in the Paracas History Museum have unusual morphologies, including a lack of molars, most likely a genetic trait rather than the result of a poor diet; one-centimeter holes in the back of the head (a trait also found in elongated skulls in Iraq) that might be a genetic necessity, allowing arteries to supply oxygen to the elongated part of the skull; and wavy auburn hair not characteristic of the natives, who have black hair.
Most startling, several of the specimens displayed in the Ica Regional Museum in Paracas have only two cranial plates, one frontal and one parietal, instead of the three found in normal human skulls, leaving medical professionals at a loss for an explanation. If imitation of a genetically distinct human population was the reason for cranial deformation, one wonders what happened to that race. Whether one is an isolationist or a diffusionist, it would be difficult to explain how a worldwide race of presumably more intelligent humans suddenly died out at about the same time.
More likely, in my opinion, is that the technique of cranial deformation was practiced as a means of imitating an esteemed extraterrestrial species that visited our planet, guided humanity at various times throughout history, and then returned to its place of origin. This hypothesis would be congruent with the ancient advanced architectural achievements that accompany skulls of similar antiquity. The enormous pyramidal structures of Egypt, Mexico, and China; the giant blocks of quarried stone weighing one hundred tons at Baalbek, Lebanon; the complex interior cuts in hard igneous stone at Puma Punku in Bolivia (ironically attributed to the primitive Aymara Indians); the enormous carved elongated heads, some weighing 19 tons, in Central America; and the enigmatic thirty-foot-tall, ninety-ton Moai statues with elongated heads on Easter Island are all suggestive of a highly advanced worldwide culture, probably of extraterrestrial origin.
Is it any wonder that academic scholars are reluctant, maybe even fearful, of what investigation into the phenomenon of cranial deformation might reveal? Why, for instance, have displays of elongated skulls been removed from exhibits all around the world? Curators have given all kinds of reasons why these skulls have been returned to storage, yet none are convincing. What unsavory truths might the real reason for elongated heads reveal? Our species’ brain size has reached a limit set by the diameter of a woman’s birth canal, which must allow a baby’s head to pass through. Could it be that species from other worlds have evolved a large but elongated head that permits passage through the birth canal while still increasing brain capacity?
The study of cranial deformation could turn out to be a revelation. David Hatcher Childress and Brien Foerster have asked all the right questions; now we just need answers.



Our Mathematical Universe

My Quest for the Ultimate Nature of Reality

Max Tegmark

Max Tegmark holds a Ph.D. from the University of California, Berkeley, is a physics professor at MIT, and has authored or coauthored more than two hundred technical papers.

Max Tegmark says that the question of "what is reality?" represents the ultimate detective story, and he considers himself fortunate to be able to spend so much time pursuing it. Tegmark explains that when we gaze into the sky with our most powerful telescopes we are looking back in time. Nearby galaxies appear much as they are now, while far-away galaxies appear as they were billions of years ago, when the universe was young, because their light is only now reaching us. We cannot look back to the very beginning of time because, according to physicist Alan Guth, the universe underwent a period of faster-than-light expansion. This is not a violation of relativity because matter did not travel through space faster than light; rather, space itself expanded by periodically doubling in size.

Astronomers have several methods to determine the distances of galaxies. These measuring sticks are referred to as standard candles. It would be easy to measure distance if all stars were the same size and had the same luminosity: one could simply measure the brightness of a nearby star of known luminosity and then extrapolate the distances of far-away stars using the inverse-square law. The problem in applying this method is that a star's individual luminosity is extremely variable. In 1912, Harvard astronomer Henrietta Swan Leavitt discovered a type of star called a Cepheid, which pulsates, its luminosity changing over time. Observation demonstrates that the longer the period between successive pulses, the greater the intensity of the light released by the star. So, by counting the number of days between pulses, the luminosity can be calculated, and, using that information, the distance can be determined.
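The inverse-square-law step can be sketched in a few lines of Python. The luminosity and flux figures below are illustrative numbers chosen for the example, not measurements from the book:

```python
import math

def distance_from_flux(luminosity_watts, flux_w_per_m2):
    """Invert the inverse-square law, F = L / (4 * pi * d**2), for d."""
    return math.sqrt(luminosity_watts / (4 * math.pi * flux_w_per_m2))

# Illustrative, made-up numbers: a standard candle with the Sun's
# luminosity (3.828e26 W) observed at a flux of 1e-10 W/m^2.
d_meters = distance_from_flux(3.828e26, 1e-10)
# Roughly 5.5e17 meters, i.e. a few dozen light-years.
```

Once a Cepheid's pulse period reveals its true luminosity, this one formula turns its observed brightness into a distance.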

Another distance indicator involves the wavelength of light emitted by hydrogen gas. The rest wavelength of this hydrogen emission is 21 centimeters. So if hydrogen light reaching Earth from a great distance arrives at 210 centimeters, for example, we know that it was emitted when the universe was 10 times smaller than it is now: as space expands, wavelengths get stretched. Since the rate of expansion of the universe is known, the distance can be calculated.
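The arithmetic of the stretched hydrogen line is simple enough to sketch; the 21 cm and 210 cm figures are the ones used in the text:

```python
def stretch_factor(observed_cm, rest_cm=21.0):
    """Ratio of the universe's size now to its size when the light
    was emitted: size_now / size_then = observed wavelength / rest wavelength."""
    return observed_cm / rest_cm

# The example from the text: hydrogen's 21 cm line received at 210 cm
# means the universe was 10 times smaller when the light set out.
factor = stretch_factor(210.0)
```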

Another standard candle used by astronomers is the supernova explosion. One particular category of supernova begins life as a white dwarf star that sucks up mass from the companion star it orbits. Once it gains 1.4 times the mass of our sun, the force of gravity overtakes the outward pressure of the star, which collapses and then rebounds as a supernova explosion, releasing more energy in a few seconds than a hundred million billion suns, an explosion visible billions of light-years away. Since their mass and light output are known, these supernovae, like Cepheid stars, serve as a standard candle for measuring distance.

Alan Guth's revelation that the universe underwent rapid expansion resolved two enigmas: the horizon problem and the flatness problem.

The horizon problem has to do with the fact that the cosmic background radiation has the same temperature in all directions. Guth explains, “If our Big Bang explosion had happened significantly earlier in some regions than in others, different regions would have had different amounts of time to expand and cool, and the temperature in our observed cosmic microwave-background maps would vary from place to place…” Guth realized that one solution to Einstein's theory of gravity involving dark energy could make the universe double in size at very small intervals in an exponential expansion. In effect, the doubling from beginning to end would, by our standards, happen almost instantaneously. This means that distant regions of the universe were once extremely close together in the early stages of inflation, so they had time to interact and equalize their temperatures before the rapid expansion moved them far apart, beyond any possibility of contact. Exponential inflation, which Tegmark identifies with the Big Bang itself, took our small universe and expanded it into a homogeneous, massive universe in a microsecond, after which all regions cooled at the same rate. Only now is their light reaching our region of the universe, bringing them back into contact.

Guth also solved the flatness problem with his theory of inflation. The flatness problem has to do with the fact that our universe is flat and has expanded without curving. If the universe had had only slightly more density, it would have curved inward, resulting ultimately in a Big Crunch. If it had had slightly less density, it would have curved outward into an eventual Big Chill. The flatness of our universe is the result of this rapid expansion. Tegmark says that while it is easy to detect the curved surface of a small sphere like a basketball, it is not so easy to perceive the curved surface of a large sphere like our Earth. A square centimeter of the surface of a basketball is noticeably curved, whereas a square centimeter on the surface of Earth is almost perfectly flat. Similarly, when inflation dramatically expanded our own 3-D space, the space within any cubic centimeter became nearly perfectly flat. This expansion, according to Guth, is what allowed our universe to last until now without a Big Crunch or a Big Chill.

Familiar matter and energy make up only a small portion of our observable universe. The remainder, about 70 percent of the mass of the universe, is some kind of exotic material that is invisible to us. We know of its existence because theory predicts it and because it has an effect on ordinary matter, such as electrons, protons, neutrons, and quarks. To obey the conservation of matter and energy, dark matter must have positive pressure in the form of attractive gravity, and dark energy must have negative pressure in the form of repulsive gravity. Inflation was caused by dark energy expanding space and, in the process, creating matter to balance the scale. Guth calculated that the repulsive pressure of dark energy is three times stronger than the attractive force of gravity. In order to keep the density of the universe the same and to obey the laws of conservation of energy, the repulsive force required to double the size of the universe is exactly enough to double its mass. The energy that antigravity expends to expand space is thus self-sustaining. The violent doubling of the universe occurs over and over again, creating enough new mass to retain constant density—the ultimate free lunch.
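The "free lunch" bookkeeping can be illustrated with a toy calculation; the starting volume and mass below are arbitrary placeholder values, not figures from the book:

```python
def inflate(volume, mass, doublings):
    """Repeatedly double both the volume of a region and the mass it
    contains; the density (mass / volume) never changes."""
    densities = []
    for _ in range(doublings):
        volume *= 2
        mass *= 2
        densities.append(mass / volume)
    return volume, mass, densities

# Arbitrary placeholder starting values.
volume, mass, densities = inflate(1.0, 5.0, 100)
# After 100 doublings the region is 2**100 (about 1.3e30) times bigger,
# yet every recorded density equals the starting density of 5.0.
```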

Inflation is thought to have a half-life during which half of the inflating substance decays, causing a tug-of-war between the doubling caused by inflation and the halving caused by decay. Observation shows that the universe is undergoing accelerated expansion after 7 billion years of slowing down. This means that the doubling time of the inflating substance is shorter than its half-life.
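The tug-of-war between doubling and decay can be sketched as a toy formula; the time units and rates below are made up purely for illustration:

```python
def net_amount(start, t, doubling_time, half_life):
    """Inflating material that doubles every `doubling_time` while half
    of it decays every `half_life`.  Net growth wins exactly when the
    doubling time is shorter than the half-life."""
    return start * 2 ** (t / doubling_time) * 0.5 ** (t / half_life)

# Hypothetical time units chosen only for illustration.
grows = net_amount(1.0, t=10, doubling_time=1.0, half_life=2.0)
shrinks = net_amount(1.0, t=10, doubling_time=2.0, half_life=1.0)
# grows ends up at 32 times the start; shrinks ends up at 1/32.
```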

Cosmic density used to be dominated by dark matter, whose gravitational attraction helped assemble galaxies; however, because cosmic expansion has diluted the dark matter but not the dark energy, the gravitational repulsion of dark energy is gaining the upper hand, sabotaging further galaxy formation.

If inflation is never-ending, how is it possible that our universe is finite? Tegmark says that inflation can create an infinite volume inside a finite volume!

The laws of physics are fine-tuned for our own existence and this includes the density of dark energy and dark matter in the universe. For example, if dark energy had a slightly larger density, galaxies would have never formed. If dark energy had slightly less density, then our universe would have collapsed early on. Tegmark explains that there are far too many coincidences to assume this all happened by accident. The most likely explanation for fine-tuning, according to Tegmark, is that if the number of physical universes, the multiverse, is infinite or very close to infinite, we shouldn’t be surprised if we live in one that is habitable.

Not to be confused with the multiverse is the idea of parallel universes that exist in other dimensions parallel to our own world. This concept, called the Many Worlds Interpretation of quantum theory, was first proposed by Hugh Everett in 1957. Everett explained that the quantum wave doesn’t collapse as described by the Copenhagen Interpretation of reality; rather, all possibilities are realized. For example, in the famous two-hole experiment, in which a subatomic particle is ejected from a source toward a detection screen through a barrier having two closely spaced holes or slits, the particle will go through one hole or the other with a 50/50 probability when observed, according to the well-established Copenhagen Interpretation. Everett said that, on the contrary, the observed particle goes through both holes, with each outcome splitting off into a separate reality along with the observer. This suggests that the randomness of the universe is illusory. Tegmark believes that this is a viable alternative to the Copenhagen Interpretation. It would mean that every interaction of particles and fields yields a new universe. If we live in a mathematical universe, as Tegmark claims, one wonders why we need a new universe for every particle interaction. In the two-hole experiment, for example, the unobserved particle has no physical reality before it is observed; it is in a superposition of states, sharing one qubit of information. An observation, as described by the Copenhagen interpretation, yields two bits of information and a burst of entropy. This process of creating information from nothing is what drives our universe toward more complexity at the expense of entropy. Invoking a parallel universe would, in the opinion of some, rob this universe of its creativity.

Elementary quantum particles are described by their own unique sets of quantum numbers, such as spin and charge, but these numbers appear to have no intrinsic properties, leading most physicists to conclude that the universe itself is mathematical, as Pythagoras proclaimed over 2,000 years ago. Such is the nature of string theory. Tegmark says that we shouldn’t think of strings as tiny entities; they are purely mathematical constructs. Physicists call them strings to emphasize that, like mathematical points, they live in only one dimension.

Tegmark says that the idea of a mathematical universe offers a solution to a famous philosophical dilemma of infinite regress. He states: “…If we say that the properties of a diamond can be explained by the properties and arrangements of its carbon atoms, that the properties of a proton can be explained by the properties and arrangements of its quarks, and so on, then it seems that we’re doomed to go on forever trying to explain the properties of the constituent parts. The Mathematical Universe Hypothesis offers a radical solution to this problem: at the bottom level, reality is a mathematical structure, so its parts have no intrinsic properties at all.”

The notation used to denote entities and relations is irrelevant; the only properties of the integers are those embodied by the relations between them. What matters in physics are the mathematical relationships among quantities. For example, a proton is about 1836.15267 times more massive than an electron. This is a pure number, just like pi or the square root of 2; such numbers require no human units. In principle, the hundreds of thousands of pure numbers that have been measured across all areas of physics could all be calculated from 32 basic number relationships. The Mathematical Universe Hypothesis implies that we live in a relational reality, in the sense that the properties of the world around us stem not from the properties of its ultimate building blocks but from the relations between those building blocks. This implies, according to Tegmark, that external physical reality is more than the sum of its parts, in the sense that it can have many interesting properties while its parts have no intrinsic properties at all. While our physical world changes over time, mathematical structures don’t change—they just exist—suggesting that our external physical reality is a mathematical structure: by definition an abstract, immutable entity existing outside of space and time. This, incidentally, coincides with the most recent speculative thoughts of our greatest physicists and philosophers, that we live in a holographic universe projected from outside of space and time. Max Tegmark said that he worked a long time on this book and that finishing it was a relief. This book of over 400 pages is a testament to the time and effort he put into it.
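The unit-independence of the proton-to-electron mass ratio is easy to check numerically; the mass values below are standard reference figures, not numbers taken from the book:

```python
# Proton and electron masses expressed in two different unit systems
# (standard reference values).
m_p_kg, m_e_kg = 1.67262192e-27, 9.1093837e-31    # kilograms
m_p_mev, m_e_mev = 938.2720813, 0.5109989461      # MeV/c^2

ratio_si = m_p_kg / m_e_kg
ratio_mev = m_p_mev / m_e_mev
# Both ratios come out near 1836.15: the number is the same no matter
# which human-invented units the masses are measured in.
```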



The Spontaneous Healing of Belief: Shattering the Paradigm of False Limits

Gregg Braden

Gregg Braden says that science is catching up with our most cherished spiritual and indigenous traditions, which have always told us that our world is nothing more than a reflection of what we accept in our beliefs. We are not just observers, we are participants. Physicist John Wheeler says that it is impossible for us to simply observe the universe as if it were independent of us; rather, our observations create our reality. He goes on to say, “We could not even imagine a universe that did not somewhere for some stretch of time contain observers because the very building materials of the universe are these acts of observer-participancy.”

We are able to affect the world because ultimately everything in the universe is made of the same stuff—bits of information. Wheeler says, “Every it—every particle, every field of force, even the space-time continuum itself—derives its function, its meaning, its very existence entirely from binary choices, bits. What we call reality arises…from the posing of yes/no questions.” Seth Lloyd, the designer of the first feasible quantum computer, agrees. He says that everything that exists in our universe is a result of the fact that the universe computes.

Computational processing and output is simply made up of repeating patterns of binary digits. For example, the beautiful pictures you see on your computer screen are made up of a matrix of individual pixels. In the 1970s Benoit Mandelbrot discovered that nature itself is made up of these repeating patterns called fractals. Mandelbrot says that the mathematics that describes nature is not invented, it is discovered—it is just there in the structures that make up the universe. All of this evidence is now suggesting that the universe is mathematical, that it computes, and that the odds are better than chance that we are part of a simulated reality.

Braden asks the question: if Wheeler is correct that the particles of the universe are like computer bits of information, and Lloyd is correct when he says that the universe is a quantum computer and everything is based on code, then is it possible that we have access to the code? Braden thinks we do. He says that our belief system, which is analogous to the programs of a computer, receives incoming information in the form of wave patterns; processes it in the form of our aware consciousness, which is analogous to the operating system; and projects it out into the universe as reality, which is analogous to the output of a computer.

Our programs, or beliefs, are very powerful, as the well-documented phenomenon of the placebo response has shown. Placebos have become so established in medicine that pharmaceutical companies must demonstrate that their newly manufactured drugs perform better than a placebo before the drugs go on the market. What is uncertain is just how a placebo works. The mainstream viewpoint is that when a patient believes he has been given a beneficial drug but has instead received a placebo, or believes he has undergone a surgery that was in fact only a sham, the belief triggers a cascade effect that sends signals from the brain in the form of neuropeptides, which, in turn, trigger the body’s own immune system. This process might occur in some circumstances, but it does not explain all of them, especially when cures are almost instantaneous. A growing body of evidence now suggests that something much more basic is happening: that belief alone changes reality itself. Given our new scientific understanding that we are participants in the universe, this idea is no longer unrealistic.

In our analogy of belief as a computer program, a change in our belief or “program” can change the probabilistic outcome of the “0s and 1s,” the “yes and no” quantum superposition of states. Such belief-driven changes in probabilistic outcomes manifest in cases of the placebo response in medicine. According to studies, about one-third of test subjects respond to a placebo in any particular experiment, but not having a response in one experiment has no statistical effect on the likelihood of having a response in subsequent experiments. With this in mind, it might be prudent to keep changing the program, to roll the dice, until the desired outcome, the desired collapse of the superposition of states, is achieved by chance. To unleash the power of belief, Braden contends, we must believe in belief. Braden defines belief as the certainty that comes from accepting what we think is true in our minds, coupled with what we feel is true in our hearts. A thought that is imbued with the power of emotion produces the feeling that brings it to life. If we wish for something to occur, we have to act as if it has already happened. Through gratitude for what has already occurred, we create the changes in life that mirror our feelings.
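Under the independence assumption the text describes, the odds of eventually getting a placebo response follow simple probability arithmetic. This is a sketch of that arithmetic, not a claim from the book:

```python
def chance_of_response(p_single, n_trials):
    """Probability of at least one response in n independent trials,
    each succeeding with probability p_single."""
    return 1 - (1 - p_single) ** n_trials

# Using the roughly one-in-three per-trial response rate cited above.
p_one = chance_of_response(1 / 3, 1)    # about 0.33
p_five = chance_of_response(1 / 3, 5)   # about 0.87
```

After five independent "rolls of the dice," the chance of at least one response climbs from one-third to nearly nine in ten.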

Another way to effect a change in beliefs is through logic. Just as Copernicus’s discovery changed our belief systems and our world view, considering the universe as a computer and belief as a program can change reality. Over time, our knowledge and logic can become part of our subconscious intuitive self and harden into deep-seated belief.

Braden says that if we combine the scientific evidence that we live in a virtual reality with indigenous traditions which tell us that the universe mirrors our beliefs, and we also accept that we live in a fractal universe in which our beliefs are an integral part of the universal program, then we must consider the possibility that we are the programmers of the simulated reality!

Braden makes a good case that we live in a holographic, simulated, fractal reality and that our belief systems are similar to the programs of the universal computer. As a result of this revelation, it should be possible to change our experience of reality by changing our beliefs.



You Are the Universe: Discovering Your Cosmic Self and Why it Matters

Menas Kafatos
Deepak Chopra

According to Menas Kafatos and Deepak Chopra, we live in a participatory universe that depends upon the existence of human beings. No boundary can exist between what is objective and what is subjective, because all of what we normally consider to be external objective reality: stars, galaxies, space, time, matter, and energy, has no reality beyond human observation. The bright sunlight we see, the stars, and the galaxies have no brightness in themselves because photons are invisible. The sensation of light is created by consciousness. Our sense organs are merely an evolutionary apparatus that allows consciousness to project our familiar three-dimensional reality.

The idea of a participatory universe was first championed by physicist John Wheeler to help explain the “fine-tuning” problem, also known as the anthropic principle that asks: How is it possible that the constants of nature happen to be so perfectly tuned to allow for the evolution of life?

For most physicists such as Stephen Hawking and Max Tegmark, there is no mystery concerning the anthropic principle. They believe in an infinite number of universes called the multiverse and, by chance, we happen to live in one of the few where conditions are exactly right to produce life and the awareness that allows us to appreciate this fact. The probability for this fine-tuning has been estimated to be one chance in 10 to the 500th power—a number far greater than the sum of all the particles in the universe. Other physicists say this number is far too low. Physicists who advocate chaotic inflationary theory say the chances are even more remote, claiming that all of the laws of the universe unfold in “infinite ways, infinite times over.” However, according to Alan Guth of inflationary theory fame, probabilities break down when the odds for or against something become infinite.

Adding to this shortcoming, no observational evidence exists for the multiverse, so a growing number of physicists think that a seamlessly whole, consciousness-driven universe has produced all of the laws of physics that guarantee our existence. In short, we mirror the cosmos. In this self-organizing universe, reality unfolds as a result of the interplay between potentia and consciousness as one layer of complexity gives rise to the next in a complementary fashion. In this recursive system there is no before or after, no linear causality, only potential states that evolve and change states when perceived. Our participatory universe is made to order!

Properties of quantum entanglement and complementarity are not properties of matter; they are properties of consciousness pervading every facet of the cosmos, including our biology. DNA produces the protein enzymes that are needed to unzip the DNA helix, which, in turn, holds the code to synthesize these very proteins. DNA would not exist without the proteins needed to unzip and replicate the DNA strand, and the proteins would not exist without the DNA code. It is impossible to say which came first because these are complementary systems. The same recursive processes hold for the phenomenon of blood clotting in mammals. Hundreds of linear chemical events must occur at the right time and in the right place for blood clotting to take place, yet some of the later events in the chain recursively activate earlier ones. According to the authors, self-organizing complementary processes are embedded in the fabric of the cosmos, acting like an invisible offstage choreographer to drive evolution. The whole process seems to be nonlinear and nonlocal.

Two competing theories of ontological existence have emerged, the “matter first” camp and the “mind first” camp. Those who believe in the primacy of matter cannot explain how matter behaves before an observation, nor can they explain how matter becomes conscious, and “mind first” advocates cannot explain how an independent observation can take place since an observer cannot be separate in a holistic universe. This leaves us with an alternative view of reality which the authors call “reality first.”

The “reality first” hypothesis contends that qualia are the true building blocks of nature. While quanta are defined as packets of energy, qualia are everyday qualities of existence—light, sound, color, shape, and texture. Qualia are indeed subjective, but a case can be made that quanta are just as subjective, as science has shown experimentally. These experiments can best be summed up by Niels Bohr, who proclaimed, “No subatomic event is a real event until it is an observed event.” If reality exists outside of our experience, we will never know of it. Once you subtract everything you can sense, imagine, feel, or think about, there is nothing left, according to the authors.

For those who normally would not read a book authored by Deepak Chopra, rest assured that the scientific background in this book has been provided by cosmologist Menas Kafatos.



Mind to Matter
The Astonishing Science of How Your Brain Creates Material Reality

Dawson Church

Dawson Church says that thoughts create reality. This is not a metaphysical proposition, according to Church, but a biological one: thoughts not only create new neurons and synapses but also interact with chemicals in the body, such as the stress hormones cortisol and adrenaline and the feel-good hormones dopamine and oxytocin. These neuropeptides affect organs in the body and epigenetic programs in individual cells, which, in turn, affect your genetic expression. But thoughts and beliefs go beyond the human body. Quantum experiments clearly show that consciousness creates information and physical reality.

Over 175 papers in the scientific literature show that cells and molecules respond to a narrow band of frequencies that trigger cell regeneration and repair. Some of these beneficial effects are the stimulation of nerve cells and synapses, regeneration of spinal cord tissue, rapid wound healing, increase in bone regeneration, memory improvement, increase in growth of connective tissue such as ligaments and tendons, and stimulation of stem cells that differentiate into muscle, bone, and skin cells to name a few.

Church was very impressed by these findings but was most interested in the effects of various frequencies produced by our own brains. These natural frequencies of the brain include beta, alpha, theta, delta, and gamma waves. His interest stems from the fact that scientific measurements of brain waves produced by meditation practitioners and by subjects who have reaped the benefits of “energy” work provided by “healers” show unique patterns of brain waves, particularly the slow alpha, theta, and delta waves as well as very fast, high-amplitude gamma waves.

Delta waves, the slowest of the brain waves, propagating between 0 and 4 Hz, are produced naturally during sleep. Human growth hormone, or GH, is responsible for the repair and regeneration of our cells, and studies indicate that high levels of GH are secreted during peak production of delta waves. In addition, delta is associated with increased activity in the synaptic connections between neurons in the hippocampus, where neurons are produced from stem cells and where memory and learning take place. Experienced meditators who produce delta waves testify that they often have mystical experiences and feel at one with nature and the infinite oneness.

Theta waves, the next slowest brain waves, with oscillations of 4 to 8 Hz, have been identified in healing practitioners. A research group at the Toho University School of Medicine in Japan measured higher levels of the neurotransmitter serotonin in the blood as theta waves increased in the brain during deep abdominal breathing sessions. Other research groups found that frequencies in this range stimulated cartilage cell regeneration and that DNA molecules in water solution were highly stimulated by frequencies in the range of 8 to 9 hertz.

Alpha waves, oscillating at 8 to 13 hertz, right in the middle of the brain wave spectrum, can be generated at will by individuals who practice biofeedback training as well as people who are experienced meditators. Maxwell Cade, a brain research pioneer, believes that alpha is a bridge between the higher and lower frequencies connecting the conscious mind with the unconscious mind and the universal field.

Those in an alpha state experience higher levels of serotonin, improving their mood. Alpha frequencies of around 10 hertz result in a significant increase in the synthesis of DNA as well as enhanced learning and memory in the brain’s hippocampus. Alpha-range firing is also key to the brain’s ability to communicate among disparate neural groups oscillating at the same frequency.

Beta waves, oscillating between 13 and 25 hertz, are often broken down into two groups, high and low beta. Low beta is associated with our everyday tasks, while high beta is associated with fear and anxiety, producing the stress hormones cortisol and adrenaline.

Gamma waves are the most recently discovered brain frequencies, ranging from around 25 hertz up to 100 hertz. They occur during “aha” experiences, flashes of insight, and moments of high coherence across brain regions. Gamma triggers genes that produce anti-inflammatory proteins in the body as well as an increase in the production of stem cells.
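The frequency bands described above can be summarized as a small classifier. The cutoff values follow the numbers given in this review; boundary conventions vary in the literature:

```python
def brainwave_band(hz):
    """Classify a frequency (in Hz) into the bands described in the review.
    Boundary conventions vary; these cutoffs follow the text's numbers."""
    if hz < 4:
        return "delta"   # 0-4 Hz: deep sleep, cell repair
    if hz < 8:
        return "theta"   # 4-8 Hz: healing practitioners, deep breathing
    if hz < 13:
        return "alpha"   # 8-13 Hz: meditation, biofeedback
    if hz < 25:
        return "beta"    # 13-25 Hz: everyday tasks (low) to anxiety (high)
    return "gamma"       # 25-100 Hz: insight, high coherence
```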

Electroencephalograms (EEG) scans show that when the brain is functioning at peak performance, especially when alpha and beta waves are in coherent synchronicity with the rhythms of the heart, we tend to be calm, creative, and intuitive, a state in which all possible realities exist in a superposition of states. During these states of bliss, local coherent mind can synchronize with the nonlocal field to influence material reality.

In one set of experiments, individuals in a coherent state of mind were asked to change the degree of molecular twist in DNA extracted from human placenta. The degree of twist in a sample of DNA can be measured by the amount of ultraviolet light it absorbs. Participants in the study who were trained in HeartMath techniques were able to either wind or unwind the DNA by 25 percent. When the trained subjects entered the coherent state but had no intention of changing the DNA, there was no more change than in a control group. Likewise, when the trained individuals were not in a coherent state but held the intention of changing the DNA, they had no more influence than a control group. Both coherence and intention seem to be required to change reality.

In a similar experiment, a highly trained volunteer was asked to change the degree of winding in two of three prepared vials of DNA. Only the two vials of DNA toward which the volunteer had directed his intention were changed. To rule out the possibility that the proximity of the heart’s electromagnetic field might be responsible for the change in the DNA, the experimenters performed the same experiment nonlocally, at a distance of half a mile, and attained the same results.

In another experiment, two individuals were separated, with one sealed in a room shielded from all electromagnetic frequencies. When the other participant’s brain was briefly stimulated, the brain waves of the individual in the shielded room responded in sync.

In an experiment at the Institute of High Energy Physics in China, Dr. Yan Xin, a qigong master, was asked to change the decay rate of americium-241. Nuclear decay, governed by one of the four fundamental forces of nature, is very stable over time, immune to temperature variations, acids, pressure, and electromagnetic fields; yet Dr. Yan Xin was able to either slow down or speed up the rate of decay when he projected qi energy at the radioactive substance in a controlled test.

In an experiment with healing intentions, mice had been injected with a substance that caused cancer. A healer in the same room with the mice intentionally focused healing “energy” toward the mice for 30 minutes. The magnetic field around the mice’s cages changed significantly, and this effect was duplicated nonlocally when the healer performed this experiment from a great distance.

Interestingly, Church acknowledges that because the nonlocal effect matched the local effect in many of these experiments, the action was not likely the result of energy fields but of the intender’s non-physical intention. This idea coincides with my own opinion that no energy or field is involved; rather, the results of these experiments involve an instantaneous change in reality itself.

Church states that, “What we find when we examine the way science is conducted is that for better or for worse, it is heavily influenced by belief. The ideal of the scientist as an objective assessor of facts is at odds with reality. Scientists are believers in their own work. They cannot separate mind from matter…Change mind, and matter changes right along with it.”

If mind affects matter, as experiments at both the quantum and macro levels of reality show, can we really rely on the notion that the “hard” sciences such as physics, chemistry, biology, and astronomy are objective? Is it possible, as Church suggests, that experimental scientists are also subject to the observer effect, in that they tend to find what they are looking for?

We know that in the soft sciences such as psychology, sociology, and even biology in some circumstances, many corroborative studies are needed before findings become accepted and published in journals. The Food and Drug Administration, for example, requires at least two studies to demonstrate efficacy for new, untested experimental drugs. Belief is so potent that pharmaceutical companies must test the drug they hope to bring to market against a placebo in double-blind studies. In almost all cases, about one-third of the participants respond to the placebo as if it were the drug being tested. In many studies the experimental drug never outperforms the placebo, and when it does, it is often only marginally more effective. Despite all of these required studies, the effectiveness of drugs seems to vary over time.

To illustrate this, Church cites Amgen, a huge biotech company that set out to replicate some important studies regarding advances in anti-cancer research. The company, which was pouring millions of dollars into cancer research based on previous studies, wanted to make sure its investment rested on solid experimental evidence. It consulted with its researchers to find out which studies were most important to scientists doing work in the field. Fifty-three studies were cited as very important. Amgen set out to replicate them, but after ten years of rigorous experimentation only six of the fifty-three proved replicable. This is not an anomaly; the same problem seems prevalent in many fields of science.

Belief and consciousness, it appears, construct our material reality. We live in a fluid reality in which mind and nature do a dance. Church says that when we release the fixation our local minds have on local reality and instead align our local consciousness with the nonlocal consciousness of the universe, we bring the local mind into coherence with the nonlocal mind, allowing us to change the very structure of reality itself.



The Deeper Genome:
Why there is more to the human genome than meets the eye. (Kindle)

John Parrington

The Encyclopedia of DNA Elements, the ‘ENCODE’ project, began in the same year that the Human Genome Project concluded. ENCODE’s mission was to find the relationship between genes and their direct and indirect effects on an organism: what they actually do.

The expectations for the Human Genome Project, launched in 1990, were astronomical, as biotech companies lined up to reap the project’s rewards by patenting gene sequences that would lead to drugs and therapies for curing diseases and increasing life span. Disappointingly, of the 100,000 genes researchers expected to discover, only about 23,000 were found, fewer than in a grape plant or a lowly roundworm. Obviously, it is more than the number of genes that makes us human.

The Human Genome Project marked the end of three ideas: 1) Francis Crick and Richard Dawkins’s central dogma of biology, the notion that genes control our destiny on a one-to-one basis and that information flows in only one direction, from DNA to RNA to proteins to the organism at large; 2) Sydney Brenner’s idea that biology can be explained as a digital code and that, like a machine language in a computer, it would soon be possible to program biological entities to grow a hand or an eye; and 3) the comparison of DNA to a blueprint, since even a blueprint needs an architect or engineer to comprehend and orchestrate it.

The ENCODE project discovered that there is a continual flow of information coming from both within and outside the cellular environment. Contrary to Brenner’s hypothesis, genes are not like a machine language; activation and repression of gene expression are more a matter of analog fine-tuning sensitive to environmental signals. ENCODE showed that biology must be understood in a more holistic manner, in which the whole is present in every part of a biological system and each part is connected to the whole.

Researchers have found that this fine-tuning is accomplished by negative and positive regulation. Negative regulation works through repressor proteins that block gene expression; expression resumes when the repressor is lifted. Positive regulation works through activator proteins, such as the cAMP-activated protein that switches on groups of genes with a common function, called “operons.” Both repression and activation are accomplished by a mechanism called allostery, which changes the three-dimensional shape of an enzyme. In allosteric inhibition, an inhibitor protein distorts the enzyme’s shape so that the substrate cannot fit into the active site; in allosteric activation, an activator protein restores the enzyme’s shape so that the substrate can couple with it. The regulatory proteins that control genes are known as transcription factors. Bound in the grooves of the DNA double helix, transcription factors recognize specific gene sequences by differences in the shapes of the bases, and they are in turn regulated by intracellular signals. This explains why differentiated cell types such as heart cells, liver cells, and skin cells, all having the same genome, produce very different proteins.

Experiments with frogs have shown that when the nucleus of a differentiated frog cell is transplanted into an egg whose own nucleus has been removed, a complete, fully functioning frog is produced. This occurs because the transplanted nucleus is exposed to all of the proteins in the cytoplasm of the egg that, in normal circumstances, would produce a healthy frog from a sperm and egg. The complete genome resides in the differentiated cell just as in an embryonic cell, but in differentiated cells only the needed portions of the genome are expressed. Differentiated cells not only have different transcription factors but also receive different incoming signals. This is what allows cloning to be successful.

Unlike the DNA of lower organisms, human DNA is packaged with proteins into a complex called chromatin, which controls its accessibility to transcription factors. The primary components of chromatin are histones, around which the DNA wraps just over two times, forming what looks like beads on a string. Histones are alkaline and positively charged, creating an attraction to the negatively charged DNA. This bond is loosened by cellular machinery that adds an acetyl group, removing a positive charge from the histone and making it less attractive to the DNA. The enzymes that perform acetylation or deacetylation thereby activate or repress gene expression by making genes more or less accessible to the transcription factors that bind at their start sites.

But genes can be controlled by transcription factors bound thousands of base pairs away. This feat is accomplished when the DNA chain loops around, bringing the distant transcription factors into contact with the gene.

What makes multicellular organisms different from single-celled bacteria is the amount of so-called “junk DNA”; humans have by far the most non-coding DNA. In bacteria there is a one-to-one correspondence between DNA and RNA molecules. But in multicellular organisms, as well as in some viruses, genes are first matched with RNA and then trimmed down in a process called splicing. The discarded regions are called introns, and those that remain in the messenger RNA are called exons. A discarded intron often contains 2,000 to 11,000 base pairs, while a remaining exon is usually fewer than 200 base pairs.

Splicing’s evolutionary advantage is that it allows a single gene to code for many different proteins. For example, at various stages of embryonic development, different exons can be selected for inclusion in the final mRNA as needed. Splicing is also important in the formation of antibodies, which can be generated in a very short time to fight antigens.

One of the biggest mysteries in cellular biology has been the chicken-and-egg problem of RNA and DNA. DNA produces RNA, but RNA is needed in the process of replicating DNA; one depends on the other, and neither would occur without its complement. In 1981 a clue to the resolution of this problem emerged when it was discovered that some RNA molecules can splice themselves without the aid of protein enzymes, showing that RNA can act as both an information carrier and a catalyst. RNA’s flexibility of action might have played a role in the origin of life long ago.



Human by Design:
From Evolution by Chance to Transformation by Choice

Gregg Braden

Gregg Braden says that Darwin’s theory of evolution is not a “done deal.” He says he wrote this book to give voice to esteemed scientists whose objections to evolution are not reflected in the mainstream media. Because of the stigma attached to those who believe in intelligent design, scientists are unwilling to risk their careers by raising the heretical notion that evolution is not scientifically based. Yet many prominent scientists of the last two centuries have written about their objections. Three of the nine prominent scientists Braden quotes are as follows:

“There are …absolutely no facts either in the records of geology, or in the history of the past, or in the experience of the present, that can be referred to as proving evolution, or the development of one species from another by selection of any kind whatever.”

Louis Agassiz (1807-1873), Harvard University, American Geologist

“Ultimately the Darwinian theory of evolution is no more or less than the great cosmogenic myth of the twentieth century. The truth is that despite the prestige of evolutionary theory and the tremendous intellectual effort directed towards reducing living systems to the confines of Darwinian thought, nature refuses to be imprisoned…”

Michael Denton (1943-), British Biochemist, Senior fellow, Center for Science and Culture

“The point, however, is that the doctrine of evolution has swept the world, not on the strength of its scientific merits, but precisely in its capacity as a Gnostic myth…”

Wolfgang Smith (1930-) American Mathematician and Physicist

Braden also quotes Thomas H. Morgan, winner of the 1933 Nobel Prize in Physiology or Medicine, who said in his book Evolution and Adaptation, “Within the period of human history we do not know of a single instance of the transformation of one species into another.” Braden says that, to his knowledge, that statement remains true to the present. One exception to this rule, which became well known to the public, was thought to be the case of the lungfish, touted as an intermediate species between a fish and a mammal. Unfortunately for the advocates of strict Darwinism, DNA testing found that it was much more closely related to other species and did not give rise to mammals.

Braden says that as a geologist, researcher, and author he has tremendous respect for Charles Darwin. Much of the controversy regarding Darwin’s theory, both in his time and today, according to Braden, stems from a misunderstanding of what Darwin actually said and from the scientific community’s desire to regard his theory as infallible. The tendency of scientists to hold Darwin’s theory in such dogmatic reverence, however, is little different from the Catholic Church’s dogmatic declaration of heresy and condemnation of Darwin’s theory on religious grounds in his day.

The fossil record shows that anatomically modern humans appeared about 200,000 years ago, but it’s not at all clear where we came from. While it is now known that mating occurred between Neanderthals and Homo sapiens sapiens, a peer-reviewed report in the journal Nature concluded that mitochondrial DNA clearly shows we did not evolve from Neanderthals.

The Human Genome Project, which led to the mapping of our genes and eventually those of other species, shows that we share 98 percent of our DNA with chimpanzees, 60 percent with a fruit fly, 80 percent with a cow, and 90 percent with a house cat. Obviously, something other than the genetic code itself must be responsible for determining the phenotype of various species. Research in the field of epigenetics has found that the variety of characteristics within individual species, and across the landscape of species, has more to do with how genes are selected and expressed than with the quantity of genes in a genome. For example, muscle cells in humans have the same genetic code as liver cells, yet they are very different in function. A complex system of protein interactions, responding to communication from both within and outside the cell, determines which genes are activated. Gene activation is not like a digital program turning genes on or off; it acts more like an analog dial that turns gene expression up or down.

Braden asks the question, “If we have so much in common with other creatures genetically, then why are we so different from them?” Epigenetics explains much, but the proteins that carry out cellular functions are themselves coded from DNA. According to Braden, two extraordinary changes took place in our genome that cannot be explained by Darwinian evolution.

At the time modern humans appeared 200,000 years ago, a rapid mutation occurred in a gene on chromosome 7 called FOXP2. The mutation of this gene, which is also present in chimpanzees, gave rise to humans’ ability to create language. Braden writes that “the speed and precision of the mutations in FOXP2, occurring in just the right two places in the DNA code, are further examples of the kind of change that does not lend itself to the theory of evolution—at least not as we understand the theory today.”

A second extraordinary change took place in ancient times. Chimpanzees have 48 chromosomes, while humans have only 46. This missing DNA in our genome was a mystery until recently, when researchers discovered that two chromosomes had merged into one: human chromosome 2 (HC2). The genes in this chromosome are to a large extent what make us human, contributing to the development of the neural cortex associated with emotion and empathy, of the forebrain and midbrain, of fetal organs including the heart, brain, eye, kidney, liver, lungs, skeleton, and spleen, and much more. Just how and why this fusion took place is unknown.

After the merger of these two chromosomes, redundant functions were either “adjusted, turned off, or removed altogether” to make one more efficient chromosome, and this implies intentionality, according to Braden. The question remains: why did this fusion happen, and how did the redundant functions get turned off or eliminated entirely 200,000 years ago? Braden writes, “The intelligence that carried out the genetic modifications giving us our humanness must have had advanced technology that we are now only beginning to understand.”

Braden says that all of this doesn’t mean that evolution hasn’t occurred. He says that as a geologist he has seen firsthand evidence of evolutionary changes in other species, but when it comes to humans the facts just don’t support the theory.

Here are several reasons for Braden’s position:

The theory of living cells evolving over long periods of time does not explain our origins or the complexity of our bodies.

The evolutionary family tree for humans is not supported by physical evidence.

DNA studies prove that we did not descend from Neanderthals.

We have not changed since the first of our kind appeared in the fossil record 200,000 years ago.

The precise events that produced the DNA that gives us our uniqueness are not commonplace in nature.

It isn’t a matter of either creationism or evolution, according to Braden; there is a third possibility, which he calls “directed mutation.” We are one part of a living, holistic, and evolving universe that shares information nonlocally, so it shouldn’t be surprising that we are evolving along with the creative universe.

If we are a part of the whole, then we should listen to what the universe is telling us, and that means being more intuitive and listening to our hearts. Each time we access our heart’s wisdom, we strengthen the neural connections of the heart-brain system. Emotions of empathy, compassion, and gratitude activate neuropeptides that bathe the body in beneficial chemicals such as oxytocin. This is not just some new age idea; it is based in science.

Recent discoveries have found that our hearts are much more than a pump. A team of scientists led by Andrew Armour, M.D., of the University of Montreal found that about 40,000 specialized neurons form a communication network within the heart, and these neurons perform many of the same functions found in the brain. A key role of the heart’s neural network is to detect changes in hormones and other chemicals within the body and communicate those changes to the brain. By paying attention to the heart-brain system, we create coherence, allowing us to tap into the universal field of information and wisdom.

Go to the next section of my reviews