How do chemists calculate the existence of billions of organisms?


So the packaging of Bio-Cultures Complex says that it contains a minimum of a billion viable organisms per capsule.

How has such an estimate been drawn?

Have they counted a smaller amount and then multiplied it?

It says "a billion viable cells". The word viable is important because it tells you that they employ a method that determines whether the cells are alive or dead. Most likely they did a simple colony count.

The approach is simple: you take bacteria in suspension, serially dilute them to reduce the concentration to countable numbers, spread the dilution onto a solid medium plate (usually containing agar), incubate for 1-3 days (depending on the organism), and count the colonies produced. You can then multiply this count back up by the dilution factor to get the number of viable cells present in the original sample.
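The dilution arithmetic described above can be sketched in a few lines; the colony count, dilution factor, and plated volume below are made-up illustration values, not figures from any product:

```python
# Estimate viable cells in the original suspension from a plate count.
# All numbers here are hypothetical illustration values.

def cfu_per_ml(colonies_counted, dilution_factor, volume_plated_ml):
    """CFU per mL of the original suspension, multiplied back up by the dilution."""
    return colonies_counted * dilution_factor / volume_plated_ml

# e.g. 150 colonies grown from 0.1 mL of a 1:1,000,000 dilution
original = cfu_per_ml(colonies_counted=150, dilution_factor=1_000_000, volume_plated_ml=0.1)
print(f"{original:.2e} CFU/mL")  # 1.50e+09 CFU/mL, i.e. about 1.5 billion per mL
```

Plates with roughly 30-300 colonies are usually chosen for counting, which is why several dilutions are plated in parallel.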

A billion is not a lot of bacteria: a dense overnight culture of these bacteria typically reaches on the order of 1-10 billion cells per mL.

Looking at the product description, it specifically mentions CFUs (colony forming units) so I think we can be pretty sure that this is the approach they are using.

(Note that this is standard microbiological technique rather than something chemists would be doing)

This is more a question of biology than chemistry, but we have a similar problem when determining particle size distributions.

The principle is always the same: the suspended particles or cells pass through a capillary tube and some measurement is made, using light, an electric field, etc. When a particle passes, the measured variable changes and a counter increments. The instrument can also be designed to determine particle size.

The most common instrument for counting organisms is a Coulter counter. Knowing the volume that has passed through the counter and the number of organisms counted, you know the concentration, and from that you can calculate the content of any other volume.
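As a sketch, the counting arithmetic here reduces to one division and one multiplication (the event count and volumes below are hypothetical):

```python
# Concentration from a volumetric particle count, then scaled to another volume.
# Event count and volumes below are hypothetical illustration values.

def concentration_per_ml(particles_counted, volume_passed_ml):
    return particles_counted / volume_passed_ml

def count_in_volume(conc_per_ml, volume_ml):
    return conc_per_ml * volume_ml

conc = concentration_per_ml(5_000, 0.05)  # 5,000 counting events in 0.05 mL
print(conc)                               # cells per mL
print(count_in_volume(conc, 250.0))       # expected cells in a 250 mL batch
```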

Edit

To distinguish between living and dead cells, stains are used. There are several types, but they rely on cell membrane integrity. If the membrane is broken, the organism is not viable, and the stains can enter the cell. They then bind to intracellular molecules (DNA, proteins, etc.), giving an intense color or fluorescence. Some of these molecules may also be present on the membrane, but there the color is much less intense.

How Life Began: New Research Suggests Simple Approach

Somewhere on Earth, close to 4 billion years ago, a set of molecular reactions flipped a switch and became life. Scientists try to imagine this animating event by simplifying the processes that characterize living things.

New research suggests the simplification needs to go further.

All currently known organisms rely on DNA to replicate and proteins to run cellular machinery, but these large molecules—intricate weaves of thousands of atoms—are not likely to have been around for the first organisms to use.

"Life could have started up from the small molecules that nature provided," says Robert Shapiro, a chemist at New York University.

Shapiro and others insist that the first life forms were self-contained chemistry experiments that grew, reproduced and even evolved without needing the complicated molecules that define biology as we now know it.

Primordial soup

An often-told origin-of-life story is that complex biological compounds assembled by chance out of an organic broth on the early Earth's surface. This pre-biotic synthesis culminated in one of these bio-molecules being able to make copies of itself.

The first support for this idea of life arising out of the primordial soup came from the famous 1953 experiment by Stanley Miller and Harold Urey, in which they made amino acids—the building blocks of proteins—by applying sparks to a test tube of hydrogen, methane, ammonia, and water.

If amino acids could come together out of raw ingredients, then bigger, more complex molecules could presumably form given enough time. Biologists have devised various scenarios in which this assemblage takes place in tidal pools, near underwater volcanic vents, on the surface of clay sediments, or even in outer space.

But were the first complex molecules proteins or DNA or something else? Biologists face a chicken-and-egg problem in that proteins are needed to replicate DNA, but DNA is necessary to instruct the building of proteins.

Many researchers, therefore, think that RNA — a cousin of DNA — may have been the first complex molecule on which life was based. RNA carries genetic information like DNA, but it can also direct chemical reactions as proteins do.

Metabolism first

Shapiro, however, thinks this so-called "RNA world" is still too complex to be the origin of life. Information-carrying molecules like RNA are sequences of molecular "bits." The primordial soup would be full of things that would terminate these sequences before they grew long enough to be useful, Shapiro says.

"In the very beginning, you couldn't have genetic material that could copy itself unless you had chemists back then doing it for you," Shapiro told LiveScience.

Instead of complex molecules, life started with small molecules interacting through a closed cycle of reactions, Shapiro argues in the June issue of the Quarterly Review of Biology. These reactions would produce compounds that would feed back into the cycle, creating an ever-growing reaction network.

All the interrelated chemistry might be contained in simple membranes, or what physicist Freeman Dyson calls "garbage bags." These might divide just like cells do, with each new bag carrying the chemicals to restart — or replicate — the original cycle. In this way, "genetic" information could be passed down.

Moreover, the system could evolve by creating more complicated molecules that would perform the reactions better than the small molecules. "The system would learn to make slightly larger molecules," Shapiro says.

This origin of life based on small molecules is sometimes called "metabolism first" (to contrast it with the "genes first" RNA world). To answer critics who say that small-molecule chemistry is not organized enough to produce life, Shapiro introduces the concept of an energetically favorable "driver reaction" that would act as a constant engine to run the various cycles.

Driving the first step in evolution

A possible candidate for Shapiro's driver reaction might have been recently discovered in an undersea microbe, Methanosarcina acetivorans, which eats carbon monoxide and expels methane and acetate (related to vinegar).

Biologist James Ferry and geochemist Christopher House from Penn State University found that this primitive organism can get energy from a reaction between acetate and the mineral iron sulfide. Compared to other energy-harnessing processes that require dozens of proteins, this acetate-based reaction runs with the help of just two very simple proteins.

The researchers propose in this month's issue of Molecular Biology and Evolution that this stripped-down geochemical cycle was what the first organisms used to power their growth. "This cycle is where all evolution emanated from," Ferry says. "It is the father of all life."

Shapiro is skeptical: Something had to form the two proteins. But he thinks this discovery might point in the right direction. "We have to let nature instruct us," he says.

Scientists identify vast underground ecosystem containing billions of micro-organisms

The Earth is far more alive than previously thought, according to “deep life” studies that reveal a rich ecosystem beneath our feet that is almost twice the size of all the world’s oceans.

Despite extreme heat, no light, minuscule nutrition and intense pressure, scientists estimate this subterranean biosphere is teeming with between 15bn and 23bn tonnes of micro-organisms, hundreds of times the combined weight of every human on the planet.
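As a rough sanity check of the "hundreds of times" comparison, assuming total human biomass is on the order of 0.06 billion tonnes of carbon (an outside assumption, not a figure from this article):

```python
# Ratio of the estimated subsurface microbial biomass to human biomass.
subsurface_low, subsurface_high = 15, 23  # billion tonnes (from the article)
human_biomass = 0.06                      # billion tonnes of carbon (assumption)

print(round(subsurface_low / human_biomass))   # low-end ratio
print(round(subsurface_high / human_biomass))  # high-end ratio
# Both land in the hundreds, consistent with the article's comparison.
```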

Researchers at the Deep Carbon Observatory say the diversity of underworld species bears comparison to the Amazon or the Galápagos Islands, but unlike those places the environment is still largely pristine because people have yet to probe most of the subsurface.

“It’s like finding a whole new reservoir of life on Earth,” said Karen Lloyd, an associate professor at the University of Tennessee in Knoxville. “We are discovering new types of life all the time. So much of life is within the Earth rather than on top of it.”

The team brings together 1,200 scientists from 52 countries, in disciplines ranging from geology and microbiology to chemistry and physics. A year before the conclusion of their 10-year study, they will present an amalgamation of their findings to date as the American Geophysical Union's annual meeting opens this week.

Samples were taken from boreholes more than 5km deep and undersea drilling sites to construct models of the ecosystem and estimate how much living carbon it might contain.

The results suggest 70% of Earth’s bacteria and archaea exist in the subsurface, including barbed Altiarchaeales that live in sulphuric springs and Geogemma barossii, a single-celled organism found at 121C hydrothermal vents at the bottom of the sea.

One organism found 2.5km below the surface has been buried for millions of years and may not rely at all on energy from the sun. Instead, the methanogen has found a way to create methane in this low-energy environment, and it may use that energy not to reproduce or divide but to replace or repair broken parts.

Lloyd said: “The strangest thing for me is that some organisms can exist for millennia. They are metabolically active but in stasis, with less energy than we thought possible of supporting life.”

Rick Colwell, a microbial ecologist at Oregon State University, said the timescales of subterranean life were completely different. Some microorganisms have been alive for thousands of years, barely moving except with shifts in the tectonic plates, earthquakes or eruptions.

“We humans orientate towards relatively rapid processes – diurnal cycles based on the sun, or lunar cycles based on the moon – but these organisms are part of slow, persistent cycles on geological timescales.”

Underworld biospheres vary depending on geology and geography. Their combined size is estimated to be more than 2bn cubic kilometres, but this could be expanded further in the future.

The researchers said their discoveries were made possible by two technical advances: drills that can penetrate far deeper below the Earth’s crust, and improvements in microscopes that allow life to be detected at increasingly minute levels.

The scientists have been trying to find a lower limit beyond which life cannot exist, but the deeper they dig the more life they find. There is a temperature maximum – currently 122C – but the researchers believe this record will be broken if they keep exploring and developing more sophisticated instruments.

Mysteries remain, including whether life colonises up from the depths or down from the surface, how the microbes interact with chemical processes, and what this might reveal about how life and the Earth co-evolved.

The scientists say some findings enter the realm of philosophy and exobiology – the study of extraterrestrial life.

Robert Hazen, a mineralogist at the Carnegie Institution for Science, said: “We must ask ourselves: if life on Earth can be this different from what experience has led us to expect, then what strangeness might await as we probe for life on other worlds?”

How “Great” Was the Great Oxygenation Event? Enzyme “Family Tree” Reveals When Organisms First Started Using Oxygen

Around 2.5 billion years ago, our planet experienced what was possibly the greatest change in its history: According to the geological record, molecular oxygen suddenly went from nonexistent to becoming freely available everywhere. Evidence for the “great oxygenation event” (GOE) is clearly visible, for example, in banded iron formations containing oxidized iron. The GOE, of course, is what allowed oxygen-using organisms – respirators – and ultimately ourselves, to evolve. But was it indeed a “great event” in the sense that the change was radical and sudden, or were the organisms alive at the time already using free oxygen, just at lower levels?

Prof. Dan Tawfik of the Weizmann Institute of Science’s Biomolecular Sciences Department explains that the dating of the GOE is indisputable, as is the fact that the molecular oxygen was produced by photosynthetic microorganisms. Chemically speaking, energy taken from light split water into protons (hydrogen ions) and oxygen. The electrons produced in this process were used to form energy-storing compounds (sugars), and the oxygen, a by-product, was initially released into the surroundings.

Prof. Dan Tawfik and Jagoda Jabłońska. Credit: Weizmann Institute of Science

The question that has not been resolved, however, is: Did the production of oxygen coincide with the GOE, or did living organisms have access to oxygen even before that event? One side of this debate states that molecular oxygen would not have been available before the GOE, as the chemistry of the atmosphere and oceans prior to that time would have ensured that any oxygen released by photosynthesis would have immediately reacted chemically. A second side of the debate, however, suggests that some of the oxygen produced by the photosynthetic microorganisms may have remained free long enough for non-photosynthetic organisms to snap it up for their own use, even before the GOE. Several conjectures in between these two have proposed “oases,” or short-lived “waves,” of atmospheric oxygenation.

Research student Jagoda Jabłońska in Tawfik’s group thought that the group’s focus – protein evolution – could help resolve the issue. That is, using methods of tracing how and when various proteins have evolved, she and Tawfik might find out when living organisms began to process oxygen. Such phylogenetic trees are widely used to unravel the history of species, or human families, but also of protein families, and Jabłońska decided to use a similar approach to unearth the evolution of oxygen-based enzymes.

An enzyme “family tree” revealed when organisms first started using oxygen. Credit: Weizmann Institute of Science

To begin the study, Jabłońska sorted through around 130 known families of enzymes that either make or use oxygen in bacteria and archaea – the sorts of life forms that would have been around in the Archean Eon (the period between the emergence of life, ca. 4 billion years ago, and the GOE). From these she selected around half, in which oxygen-using or -emitting activity was found in most or all of the family members and seemed to be the founding function. That is, the very first family member would have emerged as an oxygen enzyme. From these, she selected 36 whose evolutionary history could be traced conclusively. “Of course, it was far from simple,” says Tawfik. “Genes can be lost in some organisms, giving the impression they evolved later in members in which they held on. And microorganisms share genes horizontally, messing up the phylogenetic trees and leading to an overestimation of the enzyme’s age. We had to correct for the latter, especially.”

The phylogenetic trees the researchers ultimately obtained showed a burst of oxygen-based enzyme evolution about 3 billion years ago – something like half a billion years before the GOE. Examining this time frame further, the scientists found that rather than coinciding with the takeover of atmospheric oxygen, this burst dated to the time that bacteria left the oceans and began to colonize the land. A few oxygen-using enzymes could be traced back even farther. If oxygen use had coincided with the GOE, the enzymes that use it would have evolved later, so the findings supported the scenario in which oxygen was already known to many life forms by the time the GOE took place.

One microorganism’s waste is another’s potential source of life.

The scenario that Jabłońska and Tawfik propose looks something like this: Oxygen is one of the most chemically reactive elements around. Like one end of a battery, it readily accepts electrons, thus providing extra metabolic power. That makes it extremely useful to many life forms, but also potentially damaging. So photosynthetic organisms as well as other organisms living in their vicinity had to quickly develop ways to efficiently dispose of oxygen. This would account for the emergence of oxygen-utilizing enzymes that would remove molecular oxygen from cells. One microorganism’s waste, however, is another’s potential source of life. Oxygen’s unique reactivity enabled organisms to break down and use “resilient” molecules such as aromatics and lipids, so enzymes that take up and use oxygen likely began evolving soon after.

Tawfik: “This confirms the hypothesis that oxygen appeared and persisted in the biosphere well before the GOE. It took time to achieve the higher GOE level, but by then oxygen was widely known in the biosphere.”

Jabłońska: “Our research presents a completely new means of dating oxygen emergence, and one that helps us understand how life as we know it now evolved.”

Reference: “The evolution of oxygen-utilizing enzymes suggests early biosphere oxygenation” by Jagoda Jabłońska and Dan S. Tawfik, 25 February 2021, Nature Ecology & Evolution.
DOI: 10.1038/s41559-020-01386-9

Prof. Dan Tawfik’s research is supported by the Zuckerman STEM Leadership Program. Prof. Tawfik is the incumbent of the Nella and Leon Benoziyo Professorial Chair.


2.1 Atoms, Isotopes, Ions, and Molecules: The Building Blocks

By the end of this section, you will be able to do the following:

• Define matter and elements
• Describe the interrelationship between protons, neutrons, and electrons
• Compare the ways in which electrons can be donated or shared between atoms
• Explain the ways in which naturally occurring elements combine to create molecules, cells, tissues, organ systems, and organisms

At its most fundamental level, life is made up of matter. Matter is any substance that occupies space and has mass. Elements are unique forms of matter with specific chemical and physical properties that cannot be broken down into smaller substances by ordinary chemical reactions. There are 118 elements, but only 98 occur naturally; the remaining elements are unstable and must be synthesized by scientists in laboratories.

Each element is designated by its chemical symbol, which is a single capital letter or, when the first letter is already "taken" by another element, a combination of two letters. Some symbols follow the English term for the element, such as C for carbon and Ca for calcium. Other elements' chemical symbols derive from their Latin names. For example, the symbol for sodium is Na, referring to natrium, the Latin word for sodium.

The four elements common to all living organisms are oxygen (O), carbon (C), hydrogen (H), and nitrogen (N). In the nonliving world, elements are found in different proportions, and some elements common to living organisms are relatively rare on the earth as a whole, as Table 2.1 shows. For example, the atmosphere is rich in nitrogen and oxygen but contains little carbon and hydrogen, while the earth’s crust, although it contains oxygen and a small amount of hydrogen, has little nitrogen and carbon. In spite of their differences in abundance, all elements and the chemical reactions between them obey the same chemical and physical laws regardless of whether they are a part of the living or nonliving world.

Table 2.1
Element       Life (Humans)   Atmosphere   Earth's Crust
Oxygen (O)         65%            21%           46%
Carbon (C)         18%           trace         trace
Hydrogen (H)       10%           trace          0.1%
Nitrogen (N)        3%            78%          trace

The Structure of the Atom

To understand how elements come together, we must first discuss the element's smallest component or building block, the atom. An atom is the smallest unit of matter that retains all of the element's chemical properties. For example, one gold atom has all of the properties of gold, like its chemical reactivity. A gold coin is simply a very large number of gold atoms molded into the shape of a coin and contains small amounts of other elements known as impurities. We cannot break down gold atoms into anything smaller while still retaining the properties of gold.

An atom is composed of two regions: the nucleus, at the atom's center, which contains protons and neutrons, and the outermost region, which holds the electrons in orbit around the nucleus, as Figure 2.2 illustrates. Atoms contain protons, electrons, and neutrons, among other subatomic particles. The most common isotope of hydrogen (H) is the only exception: it is made of one proton and one electron, with no neutrons.

Protons and neutrons have approximately the same mass, about 1.67 × 10⁻²⁴ grams. Scientists arbitrarily define this amount of mass as one atomic mass unit (amu) or one Dalton, as Table 2.2 shows. Although similar in mass, protons and neutrons differ in their electric charge. A proton is positively charged, whereas a neutron is uncharged. Therefore, the number of neutrons in an atom contributes significantly to its mass, but not to its charge. Electrons are much smaller in mass than protons, weighing only 9.11 × 10⁻²⁸ grams, or about 1/1800 of an atomic mass unit. Hence, they do not contribute much to an element's overall atomic mass. When considering atomic mass, it is therefore customary to ignore the mass of any electrons and calculate the atom's mass based on the number of protons and neutrons alone. Although not significant contributors to mass, electrons do contribute greatly to the atom's charge, as each electron has a negative charge equal to the proton's positive charge. In uncharged, neutral atoms, the number of electrons orbiting the nucleus is equal to the number of protons inside the nucleus. In these atoms, the positive and negative charges cancel each other out, leading to an atom with no net charge.

Accounting for the sizes of protons, neutrons, and electrons, most of the atom's volume—greater than 99 percent—is empty space. With all this empty space, one might ask why so-called solid objects do not just pass through one another. The reason they do not is that the electrons that surround all atoms are negatively charged and negative charges repel each other.

Atomic Number and Mass

Atoms of each element contain a characteristic number of protons and electrons. The number of protons determines an element's atomic number, which scientists use to distinguish one element from another. The number of neutrons is variable, resulting in isotopes, which are different forms of the same atom that vary only in the number of neutrons they possess. Together, the number of protons and neutrons determine an element's mass number, as Figure 2.3 illustrates. Note that we disregard the small contribution of mass from electrons in calculating the mass number. We can use this approximation of mass to easily calculate how many neutrons an element has by simply subtracting the number of protons from the mass number. Since an element's isotopes will have slightly different mass numbers, scientists also determine the atomic mass, which is the calculated mean of the mass number for its naturally occurring isotopes. Often, the resulting number contains a fraction. For example, the atomic mass of chlorine (Cl) is 35.45 because chlorine is composed of several isotopes, some (the majority) with mass number 35 (17 protons and 18 neutrons) and some with mass number 37 (17 protons and 20 neutrons).
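The neutron and atomic-mass arithmetic in this passage can be checked with a short sketch. The chlorine abundances below are approximate literature values, and using whole mass numbers instead of exact isotopic masses makes the weighted mean come out near, not exactly at, 35.45:

```python
# Neutrons from mass number, and atomic mass as an abundance-weighted mean.

def neutron_count(mass_number, atomic_number):
    return mass_number - atomic_number

# Chlorine has atomic number 17
print(neutron_count(35, 17))  # 18 neutrons in Cl-35
print(neutron_count(37, 17))  # 20 neutrons in Cl-37

# (mass number, approximate natural abundance)
chlorine_isotopes = [(35, 0.758), (37, 0.242)]
atomic_mass = sum(m * ab for m, ab in chlorine_isotopes)
print(round(atomic_mass, 2))  # ~35.48, close to the tabulated 35.45
```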

Visual Connection

How many neutrons do carbon-12 and carbon-13 have, respectively?

Isotopes

Isotopes are different forms of an element that have the same number of protons but a different number of neutrons. Some elements, such as carbon, potassium, and uranium, have naturally occurring isotopes. Carbon-12 contains six protons, six neutrons, and six electrons; therefore, it has a mass number of 12 (six protons and six neutrons). Carbon-14 contains six protons, eight neutrons, and six electrons; its mass number is 14 (six protons and eight neutrons). These two alternate forms of carbon are isotopes. Some isotopes may emit neutrons, protons, and electrons and attain a more stable atomic configuration (lower level of potential energy); these are radioactive isotopes, or radioisotopes. Radioactive decay (carbon-14 decaying to eventually become nitrogen-14) describes the energy loss that occurs when an unstable atom's nucleus releases radiation.

Evolution Connection

Carbon Dating

Carbon is normally present in the atmosphere in the form of gaseous compounds like carbon dioxide and methane. Carbon-14 (¹⁴C) is a naturally occurring radioisotope that is created in the atmosphere from atmospheric ¹⁴N (nitrogen) by the addition of a neutron and the loss of a proton, caused by cosmic rays. This is a continuous process, so more ¹⁴C is always being created. As a living organism incorporates ¹⁴C, initially as carbon dioxide fixed in the process of photosynthesis, the relative amount of ¹⁴C in its body is equal to the concentration of ¹⁴C in the atmosphere. When an organism dies, it is no longer ingesting ¹⁴C, so the ratio between ¹⁴C and ¹²C will decline as ¹⁴C gradually decays to ¹⁴N by a process called beta decay, the emission of an electron. This decay releases energy in a slow process.

After approximately 5,730 years, half of the starting concentration of ¹⁴C will have converted back to ¹⁴N. We call the time it takes for half of the original concentration of an isotope to decay back to its more stable form its half-life. Because the half-life of ¹⁴C is long, scientists use it to date formerly living objects such as old bones or wood. By comparing the ratio of the ¹⁴C concentration in an object to the amount of ¹⁴C in the atmosphere, scientists can determine the amount of the isotope that has not yet decayed. On the basis of this amount, Figure 2.4 shows that we can calculate the age of the material, such as the pygmy mammoth, with accuracy if it is not much older than about 50,000 years. Other elements have isotopes with different half-lives. For example, ⁴⁰K (potassium-40) has a half-life of 1.25 billion years, and ²³⁵U (uranium-235) has a half-life of about 700 million years. Through the use of radiometric dating, scientists can study the age of fossils or other remains of extinct organisms to understand how organisms have evolved from earlier species.
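The half-life arithmetic above can be sketched directly:

```python
import math

HALF_LIFE_C14 = 5730  # years, approximate half-life of carbon-14

def fraction_remaining(years):
    """Fraction of the original 14C left after a given time."""
    return 0.5 ** (years / HALF_LIFE_C14)

def age_from_fraction(fraction):
    """Age implied by a measured remaining-14C fraction (inverse of the above)."""
    return HALF_LIFE_C14 * math.log(fraction, 0.5)

print(fraction_remaining(5730))        # 0.5 after one half-life
print(round(age_from_fraction(0.25)))  # 11460, i.e. two half-lives
```

After roughly nine half-lives (about 50,000 years) less than 0.2% of the original ¹⁴C remains, which is why the method loses accuracy beyond that age.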


The Periodic Table

The periodic table organizes and displays different elements. Devised by Russian chemist Dmitri Mendeleev (1834–1907) in 1869, the table groups elements that, although unique, share certain chemical properties with other elements. The properties of elements are responsible for their physical state at room temperature: they may be gases, solids, or liquids. Elements also have specific chemical reactivity , the ability to combine and to chemically bond with each other.

In the periodic table in Figure 2.5, the elements are organized and displayed according to their atomic number and are arranged in a series of rows and columns based on shared chemical and physical properties. In addition to providing the atomic number for each element, the periodic table also displays the element’s atomic mass. Looking at carbon, for example, its symbol (C) and name appear, as well as its atomic number of six (in the upper left-hand corner) and its atomic mass of 12.01.

The periodic table groups elements according to chemical properties. Scientists base the differences in chemical reactivity between the elements on the number and spatial distribution of an atom's electrons. Atoms that chemically react and bond to each other form molecules. Molecules are simply two or more atoms chemically bonded together. Logically, when two atoms chemically bond to form a molecule, it is their electrons, which form the outermost region of each atom, that come together first to make the bond.

Electron Shells and the Bohr Model

Note that there is a connection between the number of protons in an element, the atomic number that distinguishes one element from another, and the number of electrons it has. In all electrically neutral atoms, the number of electrons is the same as the number of protons. Thus, each element, at least when electrically neutral, has a characteristic number of electrons equal to its atomic number.

In 1913, Danish scientist Niels Bohr (1885–1962) developed an early model of the atom. The Bohr model shows the atom as a central nucleus containing protons and neutrons, with the electrons in circular orbitals at specific distances from the nucleus, as Figure 2.6 illustrates. These orbits form electron shells or energy levels, which are a way of visualizing the number of electrons in the outermost shells. These energy levels are designated by a number and the symbol “n.” For example, 1n represents the first energy level located closest to the nucleus.

Electrons fill orbitals in a consistent order: they first fill the orbitals closest to the nucleus, then continue to fill orbitals of increasing energy farther from the nucleus. If there are multiple orbitals of equal energy, each receives one electron before any receives a second. The electrons of the outermost energy level determine the atom's energetic stability and its tendency to form chemical bonds with other atoms to form molecules.

Under standard conditions, atoms fill the inner shells first, often resulting in a variable number of electrons in the outermost shell. The innermost shell has a maximum of two electrons, but the next two electron shells can each hold a maximum of eight. This is known as the octet rule, which states that, with the exception of the innermost shell, atoms are more stable energetically when they have eight electrons in their valence shell, the outermost electron shell. Figure 2.7 shows examples of some neutral atoms and their electron configurations. Notice that in Figure 2.7, helium has a complete outer electron shell, with two electrons filling its first and only shell. Similarly, neon has a complete outer 2n shell containing eight electrons. In contrast, chlorine and sodium have seven and one electrons in their outer shells, respectively; theoretically, they would be more energetically stable if they followed the octet rule and had eight.
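The shell-filling rule described above can be sketched for the first 18 elements, using the simplified capacities 2, 8, 8:

```python
# Bohr-model shell filling for elements up to argon (Z = 18).

def shell_configuration(atomic_number):
    capacities = [2, 8, 8]  # simplified capacities of the first three shells
    shells, remaining = [], atomic_number
    for cap in capacities:
        if remaining <= 0:
            break
        shells.append(min(cap, remaining))
        remaining -= cap
    return shells

print(shell_configuration(2))   # helium: [2], full outer shell
print(shell_configuration(10))  # neon: [2, 8], full outer shell
print(shell_configuration(11))  # sodium: [2, 8, 1], one valence electron
print(shell_configuration(17))  # chlorine: [2, 8, 7], seven valence electrons
```

The last entry of each list is the valence-electron count, which is what the octet rule and the ion-forming behavior discussed below depend on.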

Visual Connection

An atom may give, take, or share electrons with another atom to achieve a full valence shell, the most stable electron configuration. Looking at this figure, how many electrons do elements in group 1 need to lose in order to achieve a stable electron configuration? How many electrons do elements in groups 14 and 17 need to gain to achieve a stable configuration?

Understanding that the periodic table's organization is based on the total number of protons (and electrons) helps us know how electrons distribute themselves among the shells. The periodic table is arranged in columns and rows based on the number of electrons and their location. Examine more closely some of the elements in the table’s far-right column in Figure 2.5. The group 18 atoms helium (He), neon (Ne), and argon (Ar) all have filled outer electron shells, making it unnecessary for them to share electrons with other atoms to attain stability. They are highly stable as single atoms. Because they are nonreactive, scientists term them inert (or noble) gases. Compare this to the group 1 elements in the left-hand column. These elements, including hydrogen (H), lithium (Li), and sodium (Na), all have one electron in their outermost shells. That means that they can achieve a stable configuration and a filled outer shell by donating or sharing one electron with another atom or a molecule such as water. Hydrogen will donate or share its electron to achieve this configuration, while lithium and sodium will donate their electron to become stable. As a result of losing a negatively charged electron, they become positively charged ions. Group 17 elements, including fluorine and chlorine, have seven electrons in their outermost shells, so they tend to fill this shell with an electron from other atoms or molecules, making them negatively charged ions. Group 14 elements, of which carbon is the most important to living systems, have four electrons in their outer shell, allowing them to make several covalent bonds (discussed below) with other atoms. Thus, the periodic table's columns represent the potential shared state of these elements’ outer electron shells that is responsible for their similar chemical characteristics.

Electron Orbitals

Although useful to explain the reactivity and chemical bonding of certain elements, the Bohr model does not accurately reflect how electrons spatially distribute themselves around the nucleus. They do not circle the nucleus like the earth orbits the sun, but we find them in electron orbitals. These relatively complex shapes result from the fact that electrons behave not just like particles, but also like waves. Mathematical equations from quantum mechanics, which scientists call wave functions, can predict within a certain level of probability where an electron might be at any given time. Scientists call the area where an electron is most likely to be found its orbital.

Recall that the Bohr model depicts an atom’s electron shell configuration. Within each electron shell are subshells, and each subshell has a specified number of orbitals containing electrons. While it is impossible to calculate exactly an electron's location, scientists know that it is most probably located within its orbital. The letters s, p, d, and f designate the subshells. The s subshell is spherical in shape and has one orbital. Principal shell 1n has only a single s orbital, which can hold two electrons. Principal shell 2n has one s and one p subshell, and can hold a total of eight electrons. The p subshell has three dumbbell-shaped orbitals, as Figure 2.8 illustrates. Subshells d and f have more complex shapes and contain five and seven orbitals, respectively. We do not show these in the illustration. Principal shell 3n has s, p, and d subshells and can hold 18 electrons. Principal shell 4n has s, p, d, and f subshells and can hold 32 electrons. Moving away from the nucleus, the number of electrons and orbitals in the energy levels increases. Progressing from one atom to the next in the periodic table, we can determine the electron structure by fitting an extra electron into the next available orbital.

The closest orbital to the nucleus, the 1s orbital, can hold up to two electrons. This orbital is equivalent to the Bohr model's innermost electron shell. Scientists call it the 1s orbital because it is spherical around the nucleus. The 1s orbital is the closest orbital to the nucleus, and it is always filled first, before any other orbital fills. Hydrogen has one electron; therefore, it occupies only one spot within the 1s orbital. We designate this as 1s¹, where the superscripted 1 refers to the one electron within the 1s orbital. Helium has two electrons; therefore, it can completely fill the 1s orbital with its two electrons. We designate this as 1s², referring to the two electrons of helium in the 1s orbital. On the periodic table Figure 2.5, hydrogen and helium are the only two elements in the first row (period). This is because they only have electrons in their first shell, the 1s orbital. Hydrogen and helium are the only two elements that have the 1s and no other electron orbitals in the electrically neutral state.

The second electron shell may contain eight electrons. This shell contains another spherical s orbital and three “dumbbell” shaped p orbitals, each of which can hold two electrons, as Figure 2.8 shows. After the 1s orbital fills, the second electron shell fills, first filling its 2s orbital and then its three p orbitals. When filling the p orbitals, each takes a single electron. Once each p orbital has an electron, it may add a second. Lithium (Li) contains three electrons that occupy the first and second shells. Two electrons fill the 1s orbital, and the third electron then fills the 2s orbital. Its electron configuration is 1s²2s¹. Neon (Ne), alternatively, has a total of ten electrons: two are in its innermost 1s orbital and eight fill its second shell (two each in the 2s and three p orbitals). Thus it is an inert gas and energetically stable as a single atom that will rarely form a chemical bond with other atoms. Larger elements have additional orbitals, comprising the third electron shell. While the concepts of electron shells and orbitals are closely related, orbitals provide a more accurate depiction of an atom's electron configuration because the orbital model specifies the different shapes and spatial orientations of all the places that electrons may occupy.
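The filling order just described is mechanical enough to sketch in code. The following minimal Python sketch (an illustrative aid, not part of the text; the function name and data tables are my own, and the filling order listed covers only the light elements through argon) reproduces configurations such as lithium's 1s2 2s1:

```python
# A minimal sketch (illustrative, not from the text) of the subshell filling
# order described above. Valid only through argon (Z = 18); heavier elements
# need a longer, interleaved filling order.

CAPACITY = {"s": 2, "p": 6}                   # max electrons per subshell type
FILL_ORDER = ["1s", "2s", "2p", "3s", "3p"]   # lowest energy first

def electron_configuration(n_electrons):
    """Return a configuration string such as '1s2 2s1' (lithium)."""
    parts = []
    remaining = n_electrons
    for subshell in FILL_ORDER:
        if remaining == 0:
            break
        filled = min(remaining, CAPACITY[subshell[-1]])
        parts.append(f"{subshell}{filled}")
        remaining -= filled
    return " ".join(parts)

print(electron_configuration(3))   # lithium -> 1s2 2s1
print(electron_configuration(10))  # neon    -> 1s2 2s2 2p6
```

Beyond argon the simple "inner shells first" picture breaks down (for example, the 4s subshell fills before 3d), which is why this sketch deliberately stops at 3p.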

Watch this visual animation to see the spatial arrangement of the p and s orbitals.

Chemical Reactions and Molecules

All elements are most stable when their outermost shell is filled with electrons according to the octet rule, because it is energetically favorable for atoms to be in that configuration. However, since not all elements have enough electrons to fill their outermost shells, atoms form chemical bonds with other atoms, thereby obtaining the electrons they need to attain a stable electron configuration. When two or more atoms chemically bond with each other, the resultant chemical structure is a molecule. The familiar water molecule, H2O, consists of two hydrogen atoms and one oxygen atom. These bond together to form water, as Figure 2.9 illustrates. Atoms can form molecules by donating, accepting, or sharing electrons to fill their outer shells.

Chemical reactions occur when two or more atoms bond together to form molecules or when bonded atoms break apart. Scientists call the substances used at the beginning of a chemical reaction reactants (usually on the left side of a chemical equation), and we call the substances at the end of the reaction products (usually on the right side of a chemical equation). We typically draw an arrow between the reactants and products to indicate the chemical reaction's direction. This direction is not always a “one-way street.” To create the water molecule above, the chemical equation would be:

2H + O → H2O

An example of a simple chemical reaction is breaking down hydrogen peroxide molecules, each of which consists of two hydrogen atoms bonded to two oxygen atoms (H2O2). The reactant hydrogen peroxide breaks down into water, containing one oxygen atom bound to two hydrogen atoms (H2O), and oxygen, which consists of two bonded oxygen atoms (O2). In the equation below, the reaction includes two hydrogen peroxide molecules and two water molecules. This is an example of a balanced chemical equation, wherein each element's number of atoms is the same on each side of the equation. According to the law of conservation of matter, the number of atoms before and after a chemical reaction should be equal, such that no atoms are, under normal circumstances, created or destroyed.

2H2O2 → 2H2O + O2
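Conservation of matter can be checked mechanically by counting the atoms of each element on both sides of the equation. A small Python sketch (illustrative; the `count_atoms` helper is my own and handles only simple formulas without parentheses):

```python
# A small sketch (not from the text) verifying that the decomposition
# 2 H2O2 -> 2 H2O + O2 conserves atoms, per the law of conservation of matter.
from collections import Counter
import re

def count_atoms(formula, coefficient=1):
    """Count atoms in a formula like 'H2O2', scaled by its coefficient."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += coefficient * (int(number) if number else 1)
    return counts

reactants = count_atoms("H2O2", 2)                      # 2 H2O2
products  = count_atoms("H2O", 2) + count_atoms("O2")   # 2 H2O + O2

print(dict(reactants))  # {'H': 4, 'O': 4}
print(dict(products))   # {'H': 4, 'O': 4}
assert reactants == products  # balanced: same atoms on each side
```

The same check flags an unbalanced equation: dropping a coefficient leaves the two `Counter` objects unequal.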

Even though all of the reactants and products of this reaction are molecules (each atom remains bonded to at least one other atom), in this reaction only hydrogen peroxide and water are representatives of compounds: they contain atoms of more than one type of element. Molecular oxygen, alternatively, as Figure 2.10 shows, consists of two doubly bonded oxygen atoms and is not classified as a compound but as a homonuclear molecule.

Some chemical reactions, such as the one above, can proceed in one direction until they expend all the reactants. The equations that describe these reactions contain a unidirectional arrow and are irreversible. Reversible reactions are those that can go in either direction. In reversible reactions, reactants turn into products, but when the product's concentration goes beyond a certain threshold (characteristic of the particular reaction), some of these products convert back into reactants. At this point, product and reactant designations reverse. This back and forth continues until a certain relative balance between reactants and products occurs—a state called equilibrium. A chemical equation with a double-headed arrow pointing toward both the reactants and products often denotes these reversible reactions.

For example, in human blood, excess hydrogen ions (H+) bind to bicarbonate ions (HCO3−), forming an equilibrium state with carbonic acid (H2CO3): H+ + HCO3− ⇌ H2CO3. If we added carbonic acid to this system, some of it would convert to bicarbonate and hydrogen ions.

However, biological reactions rarely reach equilibrium because the concentrations of the reactants, the products, or both are constantly changing, often with one reaction's product serving as a reactant for another. To return to the example of excess hydrogen ions in the blood, forming carbonic acid will be the reaction's major direction. However, the carbonic acid can also leave the body as carbon dioxide gas (via exhalation) instead of converting back to bicarbonate ion, thus driving the reaction to the right by the law of mass action. These reactions are important for maintaining homeostasis in our blood.
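The approach to equilibrium described above can be illustrated with a toy simulation of a reversible reaction A ⇌ B under mass-action kinetics. The rate constants and time step below are arbitrary assumptions chosen for illustration, not values from the text:

```python
# A toy sketch of a reversible reaction A <=> B approaching equilibrium.
# All numerical values here are assumed for illustration.
kf, kr = 0.30, 0.10   # hypothetical forward and reverse rate constants
a, b = 1.0, 0.0       # start with reactant only
dt = 0.01             # integration time step

for _ in range(10_000):
    forward = kf * a * dt    # A -> B
    backward = kr * b * dt   # B -> A
    a += backward - forward
    b += forward - backward

# At equilibrium the forward and reverse rates balance, so b/a -> kf/kr = 3.
print(round(b / a, 2))  # -> 3.0
```

Note that "equilibrium" here does not mean the reaction stops: both conversions keep running, but at equal rates, so the concentrations hold steady, which is exactly the back-and-forth balance the text describes.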

Ions and Ionic Bonds

Some atoms are more stable when they gain or lose an electron (or possibly two) and form ions. This fills their outermost electron shell and makes them energetically more stable. Because the number of electrons no longer equals the number of protons, each ion has a net charge. Cations are positive ions formed by losing electrons. Negative ions, which we call anions, form by gaining electrons. We designate anions by their elemental name with the ending changed to “-ide”: thus the anion of chlorine is chloride, and the anion of sulfur is sulfide.

Scientists refer to this movement of electrons from one element to another as electron transfer. As Figure 2.11 illustrates, sodium (Na) only has one electron in its outer electron shell. It takes less energy for sodium to donate that one electron than it does to accept seven more electrons to fill the outer shell. If sodium loses an electron, it now has 11 protons, 12 neutrons, and only 10 electrons, leaving it with an overall charge of +1. We now refer to it as a sodium ion. Chlorine (Cl) in its lowest energy state (called the ground state) has seven electrons in its outer shell. Again, it is more energy-efficient for chlorine to gain one electron than to lose seven. Therefore, it tends to gain an electron to create an ion with 17 protons, 18 neutrons, and 18 electrons, giving it a net negative (–1) charge. We now refer to it as a chloride ion. In this example, sodium will donate its one electron to empty its shell, and chlorine will accept that electron to fill its shell. Both ions now satisfy the octet rule and have complete outermost shells. Because the number of electrons is no longer equal to the number of protons, each is now an ion and has a +1 (sodium cation) or –1 (chloride anion) charge. Note that these transactions can normally only take place simultaneously: in order for a sodium atom to lose an electron, it must be in the presence of a suitable recipient like a chlorine atom.

Ionic bonds form between ions with opposite charges. For instance, positively charged sodium ions and negatively charged chloride ions bond together to make crystals of sodium chloride, or table salt, creating a crystalline lattice with zero net charge.

Physiologists refer to certain salts as electrolytes (including sodium, potassium, and calcium), ions necessary for nerve impulse conduction, muscle contractions, and water balance. Many sports drinks and dietary supplements provide these ions to replace those lost from the body via sweating during exercise.

Covalent Bonds and Other Bonds and Interactions

Another way to satisfy the octet rule is by sharing electrons between atoms to form covalent bonds. These bonds are stronger and much more common than ionic bonds in the molecules of living organisms. We commonly find covalent bonds in carbon-based organic molecules, such as our DNA and proteins. We also find covalent bonds in inorganic molecules like H2O, CO2, and O2. Two atoms may share one, two, or three pairs of electrons, making single, double, and triple bonds, respectively. The more covalent bonds between two atoms, the stronger their connection. Thus, triple bonds are the strongest.

The strength of different levels of covalent bonding is one of the main reasons living organisms have a difficult time in acquiring nitrogen for use in constructing their molecules, even though molecular nitrogen, N2, is the most abundant gas in the atmosphere. Molecular nitrogen consists of two nitrogen atoms triple bonded to each other and, as with all molecules, sharing these three pairs of electrons between the two nitrogen atoms allows for filling their outer electron shells, making the molecule more stable than the individual nitrogen atoms. This strong triple bond makes it difficult for living systems to break apart this nitrogen in order to use it as constituents of proteins and DNA.

Forming water molecules provides an example of covalent bonding. Covalent bonds bind the hydrogen and oxygen atoms that combine to form water molecules, as Figure 2.9 shows. Each hydrogen atom's electron splits its time between its own incomplete outer shell and the oxygen atom's incomplete outer shell. To completely fill the oxygen's outer shell, which has six electrons but which would be more stable with eight, two electrons (one from each hydrogen atom) are needed: hence, the well-known formula H2O. The two elements share the electrons to fill the outer shell of each, making both elements more stable.

View this short video to see an animation of ionic and covalent bonding.

Polar Covalent Bonds

There are two types of covalent bonds: polar and nonpolar. In a polar covalent bond, which Figure 2.12 illustrates, the atoms share the electrons unequally, and the electrons are attracted more to one nucleus than the other. Because of the unequal electron distribution between the atoms of different elements, a slightly positive (δ+) or slightly negative (δ–) charge develops. This partial charge is an important property of water and accounts for many of its characteristics.

Water is a polar molecule, with the hydrogen atoms acquiring a partial positive charge and the oxygen a partial negative charge. This occurs because the oxygen atom's nucleus is more attractive to the hydrogen atoms' electrons than the hydrogen nucleus is to the oxygen’s electrons. Thus, oxygen has a higher electronegativity than hydrogen, and the shared electrons spend more time near the oxygen nucleus than near the hydrogen nuclei, giving the oxygen and hydrogen atoms slightly negative and positive charges, respectively. Another way of stating this is that the probability of finding a shared electron near an oxygen nucleus is higher than that of finding it near a hydrogen nucleus. Either way, the atoms' relative electronegativity contributes to developing partial charges whenever one element is significantly more electronegative than the other, and the charges that these polar bonds generate may then be used to form hydrogen bonds based on the attraction of opposite partial charges. (Hydrogen bonds, which we discuss in detail below, are weak bonds between slightly positively charged hydrogen atoms and slightly negatively charged atoms in other molecules.) Since macromolecules often have atoms within them that differ in electronegativity, polar bonds are often present in organic molecules.

Nonpolar Covalent Bonds

Nonpolar covalent bonds form between two atoms of the same element or between different elements that share electrons equally. For example, molecular oxygen (O2) is nonpolar because the electrons distribute equally between the two oxygen atoms.

Figure 2.12 also shows another example of a nonpolar covalent bond—methane (CH4). Carbon has four electrons in its outermost shell and needs four more to fill it. It obtains these four from four hydrogen atoms, each atom providing one, making a stable outer shell of eight electrons. Carbon and hydrogen do not have the same electronegativity, but they are similar; thus, nonpolar bonds form. The hydrogen atoms each need one electron for their outermost shell, which is filled when it contains two electrons. These elements share the electrons equally between the carbon and the hydrogen atoms, creating a nonpolar covalent molecule.
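One common way to make the polar/nonpolar/ionic distinction quantitative is the difference in Pauling electronegativities between the bonded atoms. The following sketch uses standard Pauling values and the conventional textbook cutoffs of 0.4 and 1.7—all assumptions of this illustration rather than figures from the text:

```python
# A rough sketch classifying bonds by the difference in Pauling
# electronegativities. Element values and the 0.4/1.7 cutoffs are
# conventional textbook figures, assumed here for illustration.
ELECTRONEGATIVITY = {"H": 2.20, "C": 2.55, "O": 3.44, "Na": 0.93, "Cl": 3.16}

def bond_type(a, b):
    """Classify an a-b bond as nonpolar covalent, polar covalent, or ionic."""
    delta = abs(ELECTRONEGATIVITY[a] - ELECTRONEGATIVITY[b])
    if delta < 0.4:
        return "nonpolar covalent"
    if delta < 1.7:
        return "polar covalent"
    return "ionic"

print(bond_type("C", "H"))    # -> nonpolar covalent (C-H in methane)
print(bond_type("O", "H"))    # -> polar covalent    (O-H in water)
print(bond_type("Na", "Cl"))  # -> ionic             (table salt)
```

The cutoffs are rules of thumb, not sharp physical boundaries: bond character varies continuously from equal sharing to full electron transfer.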

Hydrogen Bonds and Van Der Waals Interactions

Ionic and covalent bonds between elements require energy to break. Ionic bonds are not as strong as covalent bonds, which determines their behavior in biological systems. However, not all bonds are ionic or covalent bonds. Weaker bonds can also form between molecules. Two weak bonds that occur frequently are hydrogen bonds and van der Waals interactions. Without these two types of bonds, life as we know it would not exist. Hydrogen bonds provide many of the critical, life-sustaining properties of water and also stabilize the structures of proteins and DNA, the building blocks of cells.

When polar covalent bonds containing hydrogen form, the hydrogen in that bond has a slightly positive charge because hydrogen’s electron is pulled more strongly toward the other element and away from the hydrogen. Because the hydrogen is slightly positive, it will be attracted to neighboring negative charges. When this happens, a weak interaction occurs between the δ+ charge of the hydrogen on one molecule and the δ– charge of a more electronegative atom, usually oxygen, on another molecule. Scientists call this interaction a hydrogen bond. This type of bond is common and occurs regularly between water molecules. Individual hydrogen bonds are weak and easily broken; however, they occur in very large numbers in water and in organic polymers, creating a major force in combination. Hydrogen bonds are also responsible for zipping together the DNA double helix.

Like hydrogen bonds, van der Waals interactions are weak attractions or interactions between molecules. Van der Waals attractions can occur between any two or more molecules and are dependent on slight fluctuations of the electron densities, which are not always symmetrical around an atom. For these attractions to happen, the molecules need to be very close to one another. These bonds—along with ionic, covalent, and hydrogen bonds—contribute to the proteins' three-dimensional structure in our cells that is necessary for their proper function.

Career Connection

Pharmaceutical Chemist

Pharmaceutical chemists are responsible for developing new drugs and trying to determine the mode of action of both old and new drugs. They are involved in every step of the drug development process. We can find drugs in the natural environment or we can synthesize them in the laboratory. In many cases, chemists chemically modify potential drugs from nature in the laboratory to make them safer and more effective, and sometimes synthetic versions of drugs substitute for the versions we find in nature.

After a drug's initial discovery or synthesis, the chemist then develops the drug, perhaps chemically altering it, testing it to see if it is toxic, and then designing methods for efficient large-scale production. Then, the process of approving the drug for human use begins. In the United States, the Food and Drug Administration (FDA) handles drug approval. This involves a series of large-scale experiments using human subjects to ensure the drug is not harmful and effectively treats the condition for which it is intended. This process often takes several years and requires the participation of physicians and scientists, in addition to chemists, to complete testing and gain approval.

An example of a drug that was originally discovered in a living organism is Paclitaxel (Taxol), an anti-cancer drug used to treat breast cancer. This drug was discovered in the bark of the Pacific yew tree. Another example is aspirin, originally isolated from willow tree bark. Finding drugs often means testing hundreds of samples of plants, fungi, and other forms of life to see if they contain any biologically active compounds. Sometimes, traditional medicine can give modern medicine clues as to where to find an active compound. For example, mankind has used willow bark to make medicine for thousands of years, dating back to ancient Egypt. However, it was not until the late 1800s that scientists and pharmaceutical companies purified and marketed the aspirin molecule, acetylsalicylic acid, for human use.

Occasionally, drugs developed for one use have unforeseen effects that allow usage in other, unrelated ways. For example, scientists originally developed the drug minoxidil (Rogaine) to treat high blood pressure. When tested on humans, researchers noticed that individuals taking the drug would grow new hair. Eventually the pharmaceutical company marketed the drug to men and women with baldness to restore lost hair.

A pharmaceutical chemist's career may involve detective work, experimentation, and drug development, all with the goal of making human beings healthier.

The Curious Wavefunction

In the Wall Street Journal, the physics writer Jeremy Bernstein has a fine review of "Ordinary Geniuses," Gino Segre's new joint biography of George Gamow and Max Delbruck, which I just started reading.

Just a note: The "open dots" used as "attachment points" look an awful lot like O's. I had several seconds of confusion as to why a beta amino acid contained a peroxide ("well, *that's* not stable!")

If you ever have an opportunity to redo the figure, I might recommend using filled dots instead.

Done, thanks! (I wonder if a beta amino acid with the central carbons substituted by oxygens can be even fleetingly synthesized!)

These types of questions/scenarios are especially important with origin-of-life science.

I use the term "reductionist" differently from you, I think. I call using physics to predict biology "constructionism", while "reductionism" is looking at modern biology and figuring out what was there earlier in the process of evolution.

Why can't Gamow and Delbruck's superintelligent freak being predict giraffes? If He/She knew all the laws of physics, why couldn't He/She have predicted the seemingly "random" events -- "chance" point mutations in proto-giraffe genes -- that led to the existence of giraffes? After all, those were simply caused by radiation damage to DNA, or mis-catalysis by a DNA-replicating enzyme, or the like -- in other words, something physical. It seems to me Gamow and Delbruck abandon the reductionist logic prematurely. Their assumption appears to be that truly random events do exist, but is that the case, or do they only appear random to us mere mortals?

Yes, a superfreak could have predicted the set of all possible mutations. But there was still no way to decide which ones among those would prove beneficial and help the species evolve and propagate.

However, your point about the perceived randomness of events is an interesting one. "Random" does not necessarily mean non-deterministic. In my head it has more to do with probabilities. Random events have probabilities that cannot be predetermined and therefore cannot be predicted.

Given limitless computational power, it could have predicted giraffes as a possibility, among billions of other possibilities. It could not have said with certainty that giraffes, as we know them, would occur. Prediction through billions of bifurcation points is not possible. This is a key observation of chaos theory, in which small initial differences lead to widely divergent outcomes, rendering long-term prediction impossible.
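This bifurcation argument can be made concrete with the logistic map, a standard toy model of deterministic chaos (a sketch of my own; the parameter r = 4 and the starting values are arbitrary choices):

```python
# Sensitive dependence on initial conditions: the logistic map
# x -> r*x*(1-x) with r = 4 is a standard toy model of deterministic chaos.
# Two trajectories whose starting points differ by one part in a billion
# become completely decorrelated within a few dozen iterations.
r = 4.0
x1 = 0.400000000
x2 = 0.400000001   # initial difference of 1e-9

max_gap = 0.0
for step in range(50):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)
    max_gap = max(max_gap, abs(x1 - x2))

# The gap grows roughly exponentially until it saturates at order 1,
# even though every step of the dynamics is perfectly deterministic.
print(max_gap)
```

Every step here is exactly computable, yet any finite error in specifying the initial condition eventually swamps the prediction, which is the practical content of "prediction through billions of bifurcation points is not possible."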

I have to say that I like Paul's usage of the phrase "constructionism." It suggests - at least to me - that while one should consider it necessary to be able to bridge physics to chemistry and chemistry to biology (and I suppose physics to biology as well), that it is not going to be sufficient to provide a complete understanding.

Of course, I have to wonder if we're sometimes being overly demanding in expecting physics to lead into chemistry and/or biology - for example, the entire topic of protein folding seems to be a fairly popular one in the (bio)chemical blogosphere. There are classic physical systems that still invoke equally lively discussions in the literature (especially thinking of glass-forming systems here), despite having been available on the scientific research buffet for longer. And people are expecting equally or even more detailed physical pictures of protein folding?

Having said that, I suspect it's a function of where one sits - I have the impression that the physicists (and physical chemists) who are interested in biological problems aren't going to be inclined towards explaining the chemistry of amino acids. They're going to be more interested in understanding in, say, signal transduction where they can control the strength of a signal (ligand concentration) and measure the output (some sort of enzymatic activity being up or down-regulated). If the receptor clusters, then they're off to comparing the Ising model vs MWC vs whatever else they can devise via simulations and subsequent comparison to experimental data.

My two cents, change likely is warranted.

In response to Wavefunction: Thanks for your reply. You say that "a superfreak could have predicted the set of all possible mutations. But there was still no way to decide which ones among those would prove beneficial and help the species evolve and propagate."

But the theoretical superfreak can also predict the set of all possible mutations for *other* organisms in the system, not just the giraffe. Why can't He/She enumerate those possibilities simultaneously? This information would be the basis for decisions about which giraffe mutations would prove beneficial.

This obviously implies an astronomical number of theoretical evolutionary pathways, but this is just a thought experiment anyway, so let's pretend He/She can evaluate each step of each pathway based on just physics. What information is lacking for this being to predict evolutionary history, unless we invoke truly random events?

I don't quite follow your point on randomness vs. deterministic events with probabilities, so I'm not sure how that fits in. By the way, I don't want to undermine your article, because I think it's extremely interesting and provocative! I just wanna poke you about it a bit.

FullyReduced: I appreciate your poking, it's a very interesting issue. The problem as I see it is that a lot of evolution has been governed by the propagation of events which might have appeared to be low probability events beforehand. Thus, even if the superfreak could calculate every single mutation in every organism along with every single environmental condition that could lead to these mutations being preserved, how could he/she/it know which one of those countless combinations will actually be the one that finally exists?

You are right that among the countless scenarios predicted by the superfreak will be the universe and earth that we inhabit. But there is still no way to decide beforehand that this particular universe would be the one that actually materializes, part of the reason being that the a priori probability of such a universe arising might be very low, and there is no reason why the superfreak would pick a low-probability event as the preferred one.

MJ: I find your mention of other (poorly understood) classical systems interesting. As one example of why we may perhaps be overly demanding, consider that we cannot even accurately calculate the solvation energy for simple organic molecules (except for cases where you parametrize the system to death and use a test set that's very similar to the training set). With our knowledge at such a primitive level, it might indeed be overly demanding to try to predict protein folding which is orders of magnitude more complex.

By the way, reductionism is supposed to imply the kind of constructionism (sometimes called "upward causation") that Paul mentions. The fact that it does not speaks volumes.

Interesting argument. I agree with you for the most part, though I feel that all of your chemistry examples are actually classified as "biochemistry."

I suppose when one gets down to it, it's not just chemistry and biology - any system where one is looking at the behavior of many (interacting) entities is going to be complicated, and - I will be overly generous here - deriving its properties from first principles is going to be an extremely opaque process at best. People are still getting headaches from the entire "strong correlations in condensed matter" problem. Not being able to break out the "non-interacting, independent particles" approximation really irritates people. Especially when it fails to properly account for the properties that don't naturally fall out of said approximation. Heh.

I do think, though, that the conceptual tools and formalisms that one develops in the physics can find fruitful new applications in biology and chemistry - although how much of that is just the unreasonable effectiveness of mathematics is always up for debate, I suppose.

As a related followup to my previous comment in this thread, someone fortuitously sparked my memory today - there are those chemists incorporating parity violation into their calculations to explain chirality. I remember hearing that the expected spectral differences might be too small to reasonably observe for the lighter elements spectroscopically, so they were starting to look at heavy-element compounds.

@Anonymous: Interesting point: "Prediction through billions of bifurcation points is not possible. This is a key observation of chaos theory."

But you use the term _prediction_, i.e. a guess from a human perspective with (by our very nature) limited data. In other words, it's not clear the findings of chaos theory negate determinism; rather, they seem to refute our ability to predict deterministic systems given our imperfect ability to gather information about the natural world.

I guess what I'm trying to do here is separate out what's theoretically possible from what _we're_ capable of. (Of course, "theoretically" implies theories _we_ came up with, so maybe this is ultimately a dead end.) Thoughts?

@WaveFunction: I think we may have different assumptions about what kind of predictive calculations this theoretical superfreak is capable of. You appear to assume mutations are truly random events and can't be predicted using the laws of physics. On the other hand, I assume mutations are determined by the physics of intermolecular interactions, incoming environmental radiation, etc. and that history proceeds in a stepwise fashion in which each step can be predicted from (1) the last step and (2) the laws of physics. (Of course, for this to be true, I'm assuming the superfreak has _perfect_ knowledge of the _true_ laws of physics, which of course we as a species do not currently have.) Thus the superfreak has knowledge of each mutation, and -- coupled with His/Her _complete_ knowledge of the environment -- can predict whether it will be retained.

But that's a purely theoretical point, and I freely admit that large swaths of evolution were determined by (what _we_ see as) random events. From _our_ perspective, chaos theory comes into play here, as @Anonymous mentioned.

Thanks again for the article -- good stuff here.

Reduced: You are right that the findings of chaos theory don't preempt determinism; there is a reason why the field is termed "deterministic chaos". I have always found the line between the lack of prediction "in principle" and "in practice" somewhat fuzzy in the case of chaotic systems. These systems are definitely (mostly) unpredictable in practice. But being predictable in principle would mean being able to specify the initial conditions of the system to an infinite degree of accuracy. I don't know if this is possible even in principle.
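To make the "unpredictable in practice" point concrete, here is a minimal sketch using the logistic map as a stand-in chaotic system. The map, the parameter r = 4, and the one-part-in-a-billion perturbation are all illustrative choices, not anything from the thread:

```python
# Deterministic chaos demo: the logistic map x -> r*x*(1-x) at r = 4.0 is
# fully deterministic, yet two trajectories whose initial conditions differ
# by one part in a billion end up disagreeing wildly.

def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # "imperfect knowledge" of the initial state

# First step at which the two trajectories disagree substantially.
first_divergence = next(i for i, (x, y) in enumerate(zip(a, b))
                        if abs(x - y) > 0.1)
print(f"trajectories differ by more than 0.1 after {first_divergence} steps")
```

Despite the dynamics being perfectly deterministic, the two runs decorrelate after a few dozen iterations, because the tiny initial error roughly doubles at every step. That is the sense in which "deterministic chaos" is predictable in principle but not in practice.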

MJ: Do you know of any parity-themed papers on chirality for the intelligent layman?

@Wavefunction: Great point on infinite precision of initial condition definitions -- I feel that's finally the bridge between practical and theoretical limitations I was looking for.

By the way: for posterity's sake, do you know what happened to my previous @Wavefunction post? It seems to be AWOL, and the conversation is kind of disjointed without it.

The old adage: "the more you know about physics, the simpler it gets; the more you know about biology, the more complicated it becomes."

However, fundamentally it is all physics. The fact that we cannot grasp the connection is our epistemological shortcoming. Moreover, it is clear that Nature is not perfect, so all it has to do is work. Maybe 25 amino acids will work better than 20; maybe a different protein fold along the way would not lead to cancer; but it does not matter. Eventually evolution will sort things out, given the right environment.

I unfortunately don't know of any good review papers off the top of my head, but I would imagine if you search for Peter Schwerdtfeger (the big-name theorist down in NZ), you'd eventually find something suitable. The entire "parity violation and chirality" topic was something that momentarily caught my eye when I was puzzling over a sideline topic a while back. From what I know, there hasn't yet been any experimental verification, although various metrology/precision spectroscopy groups are going after it.

Also, something to think about in relation to chaos and being able to specify initial conditions - in classical systems, you describe your system in terms of its particles' position and momenta. Given that, one can specify said position and momenta exactly. When one moves into quantum mechanics, you suddenly now have a distribution in position & momenta that is a small "patch" in phase space that is proportional to Planck's constant, as one can only jointly localize position and momentum so far. I suppose this is why I find the mere notion of quantum chaos to give me headaches thinking about the evolution of little hyperblobs in six-dimensional phase space. Heh.
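The minimum "patch" in phase space can be made quantitative with a quick numerical check of the Heisenberg bound. A sketch in pure Python, under the assumption of a Gaussian wavepacket (which saturates the bound) and natural units with hbar = 1; the grid size and width sigma = 1 are arbitrary choices:

```python
import math

# For a Gaussian wavepacket psi(x) ~ exp(-x^2 / (4 sigma^2)), the spreads
# saturate the Heisenberg bound: delta_x * delta_p = hbar / 2. We verify
# this numerically on a grid, with hbar = 1 and sigma = 1.
HBAR = 1.0
N, L = 4001, 40.0              # grid points and box size
dx = L / (N - 1)
xs = [-L / 2 + i * dx for i in range(N)]
psi = [math.exp(-x * x / 4.0) for x in xs]

# Normalize so that integral |psi|^2 dx = 1.
norm = math.sqrt(sum(p * p for p in psi) * dx)
psi = [p / norm for p in psi]

# <x^2> directly; <p^2> = hbar^2 * integral |dpsi/dx|^2 dx (psi is real).
x2 = sum(x * x * p * p for x, p in zip(xs, psi)) * dx
dpsi = [(psi[i + 1] - psi[i - 1]) / (2 * dx) for i in range(1, N - 1)]
p2 = HBAR ** 2 * sum(d * d for d in dpsi) * dx

delta_x, delta_p = math.sqrt(x2), math.sqrt(p2)
print(f"delta_x * delta_p = {delta_x * delta_p:.4f}  (hbar/2 = {HBAR / 2})")
```

Squeezing delta_x smaller forces delta_p larger, so the product never drops below hbar/2 -- the irreducible phase-space patch the comment above is gesturing at.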

I think a better analogy for physics compared to the biology and chemistry examples given would be whether the supersmart freak could predict the number and location of stars in the galaxy/universe and how many planets are around each star.

Chemists, knowing the fundamental laws of chemistry, can predict the properties and reactivity of as-yet-unknown compounds in much the same way physicists can with atoms.

Biologists cannot be compared, because biology deals with the specific system of life that has already arisen. To ask why this freak couldn't use biology to predict a giraffe, when a giraffe isn't part of its biological system, is not an apt comparison in any way.

Adding my voice to some of the others:

If physics is deterministic (so let's suppose that a nonlocal hidden variable interpretation of quantum mechanics is correct), and if our superfreak (or Laplacian demon, as the more traditional account has it) knows the complete physical state in addition to all the laws (and has an unlimited computational capacity), then the superfreak will be able to predict the existence of giraffes and every other biological detail.

Leaving out relevant physical details (i.e., the initial conditions) does not show that biology is non-physical. Of course, it is true that physical dynamics alone will never tell us what sort of creatures evolve and which don't, but why would we ever think it might? Mere physical laws can't even tell us that there will be protons and neutrons.

Physicalist, Bryan and Andre: Just want to make sure we carefully define what we mean by the reduction of biology to physics. It does not mean proving that all of biological matter is composed of basic subatomic constituents, which is an obvious fact. It really means the "constructivism" that Paul was talking about. If we can truly reduce biology to physics, it must mean that we should be able, at least in principle, to do the opposite: construct the present biological world starting from the basic laws of physics.

However, it is not clear how we could go about doing this even in principle. The question is not just one of epistemology but of ontology. Again, I think Kauffman's example is very cogent. Even if the structure of the mammalian heart could be predicted in principle, it would be impossible to predict beforehand that the most important function of the heart among myriad others is to pump blood. The problem is not just epistemological, in that we lack knowledge of all the conditions that could ultimately lead to a heart, but ontological, namely that even if we had the knowledge we would be unable to assign probabilities to various scenarios. To me this seems to be the basic issue.

I am not sure the existence of protons and neutrons was as subject to chance and circumstance as the evolution of the giraffe, since it can be predicted based on very basic principles of energetic stability and knowledge of the fundamental forces. So can the synthesis of the elements. But everything from then onwards seems much more subject to chance and accident.

But you cannot necessarily construct the present physical world from the basic laws of physics, let alone the chemical or biological worlds. This is why the idea of predicting a giraffe doesn't seem to fit with the idea of predicting the elements.

Could the planet earth be predicted from physical laws (not the life on earth, but the specific planetary make-up)? That is more akin to the giraffe example.

More interesting (and IMO more appropriate) questions would be the following: Could, starting from the basic universal physical laws, complex or sentient life be predicted? Can the idea of biology itself be predicted?

Question. Does the difference between biology (and chemistry) and physics boil down to the difference between inductive logic and deductive logic? Inductive logic (biology) reasons from a specific case to a general pattern, whereas deductive logic (physics) reasons from or applies general axioms and principles to a specific case. JR, Greenville, SC

So did the laws of physics evolve or have they always "existed?"

Anon 2: I am one of those people who think of a law as a compressed description of a set of regularities in nature. In this sense something that represents the law that we use must have existed since the beginning.

Andre: I think that's a much better and more challenging question to answer and it takes us into all kinds of philosophical territory including the distinction between living and non-living. I am not sure physics could have predicted the existence of biology as we know it. But given enough time it probably could have led to aggregates of matter that demonstrate at least some features of life (at least growth). However from an ontological viewpoint I don't think physics could have predicted life since according to physics, life is nothing but a special but still uninteresting arrangement of quarks (or strings or whatever the physicists are calling it these days). There is no way a physicist could predict the various functions that the arrangement corresponding to, say a human being, could perform.

Anon 1: To some extent yes and that has been the main problem with reductionism, although it has also been responsible for reductionism's phenomenal triumphs.

Here is a longer response to anon 2, if it is not off topic. The answer to anon 2 is neither one of the alternatives. The laws of physics were created. When there has never, ever been a car, this is like someone starting to build a car with no rules or ideas about what a car is, what it looks like, how it works, what it does, how it is different from a carrot, etc. On the other hand, if no one starts to build something and something just appears by chance, the order and regularity that we see in the universe is astonishing. It is one thing if rules exist from the beginning for organizing creation according to statistics and chance. It is quite another if the rules themselves are the product of chance and appear out of "blind, thoughtless, mindless nothing". How can a plan for the universe appear out of thoughtless, mindless nothing? This is like waiting around for a rock that does not exist to have an idea. J. R. Greenville

Interesting article, and very instructive as to the deep complexity of biochemistry. Thank you. But I do think the premise really just traffics in the different semantic usages of "deterministic," the level of certainty you ascribe to the laws of physics as currently understood, and the capabilities of your hypothetical observer.

Let's say your "superfreak" has, in the religio-philosophical sense, complete omniscience but is still time-bound (i.e. does not simultaneously exist in the future as well as the past). If the "superfreak" has complete information about the laws of physics, AND has existed since the beginning of the universe, AND has the capacity to store and process information about every particle and wave function in the universe, then why couldn't he predict everything as it will actually turn out? The "random" mutation resulting from a "random" particle hitting a "random" atom in a "random" protein is only random if you assume that the "superfreak" hasn't followed each of those atoms, particles, etc. since the beginning of time.

Of course, if one assumes that the "superfreak" is bound by the laws of quantum mechanics as currently understood -- so that uncertainty and probability are built in as part of the laws he "knows" -- then it's true that he couldn't predict everything. But such a definition in the hypothetical makes physics, as well as biology, non-deterministic. At that point, all you're saying is that biology is "less deterministic" because it involves larger sets of particles, but each of those particles is itself fundamentally unpredictable outside of probability.

Curious Wavefunction says:
"If we can truly reduce biology to physics, it must mean that we should be able to . . . construct the present biological world starting from the basic laws of physics."

Again, this form of "reduction" is just a non-starter. If you insist that we only have reduction when the laws (and only the laws) specify some feature of the world, then nothing can be reduced (except the laws themselves).

The physical laws are compatible with the complete absence of matter, so the laws are never going to tell you whether there's matter or a total vacuum.

The criterion for reduction that you're using is unhelpful, because on this account nothing can be reduced to physics.

A much more useful account of reduction is one which asks whether we can predict some feature if we are given both the laws and the complete physical state (and unlimited computational power, since we're interested in ontology not epistemology).

In this case, it seems clear that the Laplacian demon (superfreak) would predict the existence of giraffes (though the demon might not call them "giraffes").

"Even if the structure of the mammalian heart could be predicted in principle, it would be impossible to predict beforehand that the most important function of the heart among myriad others is to pump blood."

I know famous people like Fodor and Searle make this claim (I didn’t realize Kauffman did; I’ll have to look at his book at some point), but it’s just wrong:

(a) Even if we insist that one needs to know the evolutionary history of a trait to know its “real function,” the Laplacian demon would have all of that information available. It knows the complete history of the total physical state of the universe.

(b) If our demon (“superfreak”) is smart enough to care about which functions are “the most important,” then it should have little difficulty recognizing that the function of the heart is to pump blood (even without peeking at the past). It would be able to recognize certain self-regulating processes that maintain themselves against the flows of entropy, and it would be able to recognize that the heart’s circulating blood is an important component of this self-sustaining process (whereas, for example, the sound the heart makes is not).

Now, if we stipulate that our demon is not allowed to care about any structure or order above the level of particles, then you’re right that the demon will be ignorant of biological facts. But with this stipulation, the demon would also be ignorant of the shape of planets, the temperatures of stars, the rigidity of ice, and so on and so on. But this just shows that we shouldn’t make such a stipulation if we’re trying to figure out the ontology of the world.

Re: most recent posts from @Anonymous and @Physicalist:

I completely agree! I think I was stumbling to express similar thoughts earlier -- glad to see some backup. Thinking back now, the argument that initial conditions cannot in principle be precisely defined is really a limit to the superfreak's ability to predict *any* physical, chemical, or biological feature -- not just biological features like giraffes. So there's still no genuine distinction between the fields in that sense.

Also, function is a much fuzzier concept than physical existence, which makes it hard to blame the superfreak for any inability to predict function ab initio.

That said, to the extent that function can be defined, perhaps on the grounds of persistence of features/structures despite high entropy as Physicalist suggests (essentially a historical definition), the superfreak would have all the necessary information to make such a judgment, because He/She would know the entire physical history of the universe.

Again, thanks for the fascinating thought experiment, Wavefunction, but I've ended up entirely unconvinced!

I agree with FullyReduced's comment. Sadly the article is yet another example of a scientist misunderstanding the two different meanings of "reductionism". The first, simpler one, is theoretical and has to do with composition: if I take apart a person or a cell or even my alarm clock, I won't find any fundamental particles or forces unknown to physics. The second meaning is practical and has to do with explanation and prediction, and suggests that all phenomena are best explained at the level of physics. The first is a bedrock of modern science and should be more widely promoted. The second is endorsed by essentially no scientists but is often confused by the public with the first. The first is the reason the superintelligent freak could (in principle) predict the entire history of life on earth. The second fails because we mere mortals can't. I wish scientists like this blog author and Kauffman would stop doing a disservice to the public and be more clear about the two different meanings.

I find this debate fascinating and want to thank everyone for contributing. Firstly, with reference to FullyReduced’s point, I am pretty sure that I (and presumably Kauffman) are not confusing the two types of reductionism. In fact the first statement -- that everything is ultimately composed of quarks or strings or whatever -- is not even reductionism; it’s an obvious fact that nonetheless tells us nothing about complexity, since there’s no context-specific dependence built into it. For instance, it cannot even tell us why two molecules with exactly the same atomic composition will have wildly different properties (again as a function of their environments).

Now that we have gotten the first kind of non-reductionism out of the way, let’s focus on the second kind which matters. I don’t know why it’s so hard for the reductionists here to understand the difference between enumeration of all possibilities and the assignment of probabilities to each of these possibilities. I have already agreed that a superintelligent freak could list all of the countless events that would encompass the random mutations and effects of chance that we are talking about. But it would be impossible to assign a priori probabilities to all these events and predict that the net probability of our current universe existing is 1. This would be possible only if the superintelligent freak knows the entire future of the cosmos, in which case the discussion becomes meaningless and unscientific.

Now let’s talk about function. I find FullyReduced’s statement about function being fuzzy very interesting (and it probably means we agree more than you think!), since that’s precisely why reductionism fails when it comes to function. It is precisely because ‘function’ is a result of the laws of physics compounded with chance that it’s difficult to predict on the basis of the laws alone. This leads into Physicalist’s objection that the Laplacian demon would be able to predict the function of the heart based on the environment in which it is embedded. But this environment itself is a result of countless chance events and encounters. So even if the demon could enumerate the many possible functions of the heart in advance, it would not be possible to say which function would turn out to be most important in our current environment. We are again facing the distinction between the a priori enumeration of possibilities and the assignment of weights to those possibilities. With reference to Physicalist’s last statement, it’s not so much that the demon is not allowed to care about structure, form and function, but rather that she does not even know which form and function she should care about.

This discussion also leads into Physicalist’s very interesting point about nothing possibly being reducible to physics since the laws of physics support a universe without matter. That is absolutely true. In fact that’s precisely why I find the idea of multiple universes, each compatible with the laws of physics, so alluring. Multiple universes will allow us to make a perfectly good case for non-reductionism without destroying the utility and value of the laws of physics. Extending the distinction between enumeration and valuation, it would mean that the laws of physics could indeed list every possible universe that can exist but are agnostic with reference to our own universe.

Exploring Vents: Vent Biology

Since the discovery of animal communities thriving around seafloor hydrothermal vents in 1977, scientists have found that distinct vent animal species reside in different regions along the volcanic 40,000-mile Mid-Ocean Ridge mountain chain that encircles the globe. Scientists are investigating clues to explain how populations are connected and how they diverged and evolved separately.

To date, more than 590 new animal species have been discovered living at vents, but fewer than 50 active vent sites have been investigated in any detail. Scientists currently recognize six major seafloor regions -- called biogeographic provinces -- with distinct assemblages of animal species.

In the eastern Pacific, tubeworms, clams and mussels dominate vent sites. In contrast, tubeworms are notably absent at vents in the Atlantic. Instead, billions of shrimp swarm at vents along the Mid-Atlantic Ridge, which bisects the Atlantic Ocean floor. There are two biogeographic provinces in the North Atlantic. Different species of shrimp and mussels predominate at vent sites that are at different depths. The deeper ones are south of about 30°N and shallower vent sites occur to the north. Both Pacific and Atlantic vents have mussels, but not the same species.

The fourth province is in the northeast Pacific, off the U.S. Northwest coast, which shares similar types of animals (clams, limpets, and tubeworms) with the eastern Pacific province, but markedly different species of each. In the western Pacific Ocean, at spreading ridges west of the Mariana Islands, vents in the fifth province are populated by barnacles, mussels, and snails that are not seen in either the eastern Pacific or the Atlantic.

VIDEO: Hydrothermal vents host life forms that exist nowhere else on earth, and form in places where there is volcanic activity, such as along the Mid-Ocean Ridge.

Scientists got their first chance to search for vents in the Central Indian Ocean in 2001 and found the sixth province. These vents are dominated by Atlantic-type shrimp, but also had snails and barnacles resembling those in the western Pacific.

Until 2005, all known Atlantic vent sites were north of the equator. Preliminary results from recent discoveries in the Atlantic south of the equator (5°-9°S) suggest these sites host similar but distinct species from known Indian Ocean and East Pacific Rise vents. Thus, the vents in the South Atlantic may represent a seventh biogeographic province.

The southernmost known chemosynthetic community in the Pacific is a vent site near 37°S on the Pacific-Antarctic Ridge. It includes Pacific-“like” fauna (bathymodiolid mussels, vesicomyid clams, and lepetodrillid snails).


Meet Luca, the Ancestor of All Living Things

A surprisingly specific genetic portrait of the ancestor of all living things has been generated by scientists who say that the likeness sheds considerable light on the mystery of how life first emerged on Earth.

This venerable ancestor was a single-cell, bacterium-like organism. But it has a grand name, or at least an acronym. It is known as Luca, the Last Universal Common Ancestor, and is estimated to have lived some four billion years ago, when Earth was a mere 560 million years old.

The new finding sharpens the debate between those who believe life began in some extreme environment, such as in deep sea vents or the flanks of volcanoes, and others who favor more normal settings, such as the “warm little pond” proposed by Darwin.

The nature of the earliest ancestor of all living things has long been uncertain because the three great domains of life seemed to have no common point of origin. The domains are those of the bacteria, the archaea and the eukaryotes. Archaea are bacteria-like organisms but with a different metabolism, and the eukaryotes include all plants and animals.

Specialists have recently come to believe that the bacteria and archaea were the two earliest domains, with the eukaryotes emerging later. That opened the way for a group of evolutionary biologists, led by William F. Martin of Heinrich Heine University in Düsseldorf, Germany, to try to discern the nature of the organism from which the bacterial and archaeal domains emerged.

Their starting point was the known protein-coding genes of bacteria and archaea. Some six million such genes have accumulated over the last 20 years in DNA databanks as scientists with the new decoding machines have deposited gene sequences from thousands of microbes.

Genes that do the same thing in a human and a mouse are generally related by common descent from an ancestral gene in the first mammal. So by comparing their sequence of DNA letters, genes can be arranged in evolutionary family trees, a property that enabled Dr. Martin and his colleagues to assign the six million genes to a much smaller number of gene families. Of these, only 355 met their criteria for having probably originated in Luca, the joint ancestor of bacteria and archaea.

Genes are adapted to an organism’s environment. So Dr. Martin hoped that by pinpointing the genes likely to have been present in Luca, he would also get a glimpse of where and how Luca lived. “I was flabbergasted at the result, I couldn’t believe it,” he said.

The 355 genes pointed quite precisely to an organism that lived in the conditions found in deep sea vents, the gassy, metal-laden, intensely hot plumes caused by seawater interacting with magma erupting through the ocean floor.

Deep sea vents are surrounded by exotic life-forms and, with their extreme chemistry, have long seemed places where life might have originated. The 355 genes ascribable to Luca include some that metabolize hydrogen as a source of energy as well as a gene for an enzyme called reverse gyrase, found only in microbes that live at extremely high temperatures, Dr. Martin and colleagues reported Monday in Nature Microbiology.

The finding has “significantly advanced our understanding of what Luca did for a living,” James O. McInerney of the University of Manchester wrote in a commentary, and provides “a very intriguing insight into life four billion years ago.”

Dr. Martin’s portrait of Luca seems likely to be widely admired. But he has taken a further step that has provoked considerable controversy. He argues that Luca is very close to the origin of life itself. The organism is missing so many genes necessary for life that it must still have been relying on chemical components from its environment. Hence it was only “half alive,” he writes.

The fact that Luca depended on hydrogen and metals favors a deep sea vent environment for the origin of life, Dr. Martin concludes, rather than the land environment posited in a leading rival theory proposed by the chemist John Sutherland of the University of Cambridge in England.

Others believe that the Luca that Dr. Martin describes was already a highly sophisticated organism that had evolved far beyond the origin of life, meaning the formation of living systems from the chemicals present on the early Earth.

Luca and the origin of life are “events separated by a vast distance of evolutionary innovation,” said Jack Szostak of Massachusetts General Hospital, who has studied how the first cell membranes might have evolved.

From Dr. Martin’s data, it is clear that Luca could manage the complicated task of synthesizing proteins. So it seems unlikely that it could not also synthesize simpler components, even though the genes for doing so have not yet been detected, said Steven A. Benner of the Foundation for Applied Molecular Evolution. “It’s like saying you can build a 747 but can’t refine iron.”

Dr. Sutherland too gave little credence to the argument that Luca might lie in some gray transition zone between nonlife and life just because it depended on its environment for some essential components. “It’s like saying I’m half alive because I depend on my local supermarket.”

Dr. Sutherland and others have no quarrel with Luca’s being traced back to deep sea vents. But that does not mean life originated there, they say. Life could have originated anywhere and later been confined to a deep sea environment because of some catastrophic event like the Late Heavy Bombardment, which occurred 4 billion to 3.8 billion years ago. This was a rain of meteorites that crashed into Earth with such force that the oceans were boiled off into an incandescent mist.

Life is so complex it seems to need many millions of years to evolve. Yet evidence for the earliest life dates to 3.8 billion years ago, as if it emerged almost the minute the bombardment ceased. A refuge in the deep ocean during the bombardment would allow a longer period in which life could have evolved. But chemists like Dr. Sutherland say they are uneasy about getting prebiotic chemistry to work in an ocean, which powerfully dilutes chemical components before they can assemble into the complex molecules of life.

Dr. Sutherland, working from basic principles of chemistry, has found that ultraviolet light from the sun is an essential energy source to get the right reactions underway, and therefore that land-based pools, not the ocean, are the most likely environment in which life began.

“We didn’t set out with a preferred scenario; we deduced the scenario from the chemistry,” he said, chiding Dr. Martin for not having done any chemical simulations to support the deep sea vent scenario.

Dr. Martin’s portrait of Luca “is all very interesting, but it has nothing to do with the actual origin of life,” Dr. Sutherland said.