Solar Energy

It is tempting to think of solar energy as a panacea for the climate change problem, providing limitless carbon-free electricity. The sun has been radiating the power of fusion for 4.6 billion years. While only a small fraction is directed toward the Earth, it is enough to have sparked the evolution of life that it has since sustained. It is the source of the energy in coal, natural gas, and oil and the font of the photosynthesis on which plants, fungi, and animals all ultimately depend. The Panglossian fix to the global warming problem is to stop using the sunlight energy stored as fossil fuel and start collecting sunlight energy directly. Back-of-the-envelope calculations suggest that the sun’s nominal one kilowatt per square meter, delivered globally, is more than adequate. Take any large tract of sun-drenched desert, fill it with solar panels―voila, case closed. The Sahara is usually the desert of choice due to its size and sunlight extremes. The power falling within its torrid borders is some three million billion watts, two hundred times current global energy demand. Similarly, the western deserts of North America could be empaneled to produce fifty times the energy needs of the United States. [1]
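The arithmetic behind the Sahara claim can be checked in a few lines. The area, average insolation, and global demand figures below are assumed round numbers for illustration, not taken from the text:

```python
# Back-of-the-envelope check of the Sahara claim.
# Assumed figures: Sahara area ~9.2e12 m^2, round-the-clock average
# insolation ~250 W/m^2, global primary power demand ~18 TW.
SAHARA_AREA_M2 = 9.2e12
AVG_INSOLATION_W_M2 = 250.0      # day/night and seasonal average
GLOBAL_DEMAND_W = 18e12          # ~18 terawatts

sahara_power_w = SAHARA_AREA_M2 * AVG_INSOLATION_W_M2
ratio = sahara_power_w / GLOBAL_DEMAND_W

print(f"Sahara insolation: {sahara_power_w:.1e} W")   # ~2.3e15 W
print(f"Multiple of global demand: {ratio:.0f}x")     # ~128x
```

With these round inputs the multiple comes out near one hundred rather than two hundred, but the order of magnitude of the claim holds.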

Why this is not really the case is a matter of chemistry, physics, engineering, and economics. Solar panels are called photovoltaic (PV for short) because they collect sunlight energy (photo) and convert it to electrical current driven by a voltage gradient (voltaic). The individual solar cell is the sine qua non of any solar energy system that supplies electricity to the grid. Solar cells have their origin in research into the properties of semiconductors that also led to the development of the transistor in the 1950s. A semiconductor is any material with a conductivity between that of metals such as copper and insulators such as glass. The former conduct electrons readily and the latter impede their movement. Resistance is the inverse of conductance; metals have low resistance and insulators have high resistance. Semiconductors are elements, notably silicon and germanium, whose number and arrangement of electrons favor the generation and transport of a relatively small electrical current that can be controlled with high precision.

The chemistry of semiconductors is established by electrons. A fundamental principle of science is that the components of any system will gravitate to a condition of greater stability, which is generally the lowest energy level. This propensity is manifest in the chemical bond, as the electrons in the outermost or valence subshell of an atom seek to establish a stable state. The idea that stability at the ground, lowest energy state was the basis for all chemical bonding was suggested by the noble or inert gases (helium, neon, argon, krypton, xenon, and radon), which don’t combine with anything else. Argon, the first inert gas to be discovered, was identified in 1894 by Lord Rayleigh and Sir William Ramsay as a mysterious trace component of air, which is otherwise nitrogen and oxygen; it was named for the Greek word argos, which means “lazy.” In 1923, the American chemist Gilbert Lewis proffered the eponymous Lewis theory of chemical bonding, which has four fundamental tenets: (1) elements enter into compounds so as to share or exchange electrons; (2) in some cases, the electrons are transferred from one atom to another (an ionic bond); (3) in some cases, the electrons are shared between the two atoms (a covalent bond); and (4) each of the constituent atoms ends up with an “inert gas” outermost, or valence, electron shell.

The periodic table is arranged according to the progressive filling of electron shells, with elements exhibiting similar characteristics in vertical columns called Groups, numbered left to right from I to VIII (1 to 8). The inert gases are located on the far right. The elements that range across the middle are called metals and those near the inert gases on the right are called non-metals. In between is a smaller group of elements called the metalloids that exhibit both metal and non-metal properties. [2] The semiconductors are metalloids in the same group as carbon (Group IV) with the same bonding characteristics. Carbon is perhaps the most versatile of all elements due to its need for four electrons to complete its outer shell to the inert and stable configuration. It must therefore combine with four other atoms by sharing electrons in covalent bonds. The entire field of organic chemistry concerns carbon compounds, the basis for life. If the four combining atoms are also carbon, the result is diamond, the hardest natural material known. The versatility of carbon bonding is shared by the semiconductors silicon and germanium that lie just below it in the periodic table―they also form four covalent bonds. Since the shells that contain the valence electrons in these elements are farther from the nucleus (higher energy states) than in carbon, the electrons can more readily be moved into a conducting state. The propensity of semiconductors to release an electron for use in an electrical circuit is enhanced by adding elements from an adjacent group (Group III or V) into the bonding arrangement, a process called doping. [3] Solar cells are made from doped semiconductors.

The physics of solar cell semiconductors is based on the observed phenomenon that radiant energy in the form of photons impinging on some surfaces will result in a flow of electrons. The German physicist Heinrich Hertz first observed what is now called the photoelectric effect in 1887, noting that ultraviolet light changed the voltage at which sparking occurred between a pair of metallic electrodes. By the early 1900s, it was determined through further experimentation that the number of electrons released was proportional to light intensity (measured in candlepower, now the candela) and that the energy of the electrons depended on the incident light frequency f (or wavelength λ, as they are related by the equation f = c/λ, where c is the speed of light). That this could not be explained by classical physics was the impetus for Albert Einstein to propose what is now the fundamental theory of light. He posited that light could be considered as particles (now called photons) instead of waves and that these particles could penetrate an atom, collide with its electrons, and impart enough energy for them to escape their orbit around the nucleus. The paper he wrote in 1905, entitled “On a Heuristic Viewpoint Concerning the Production and Transformation of Light,” was the basis for his 1921 Nobel Prize in Physics, awarded in 1922. His work stimulated the then nascent field of quantum theory promoted by the Danish physicist Niels Bohr, who conceived the atomic model of electrons orbiting the nucleus in discrete energy levels called quanta. [4]
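The frequency-wavelength relation and the photon picture can be put to work in a short sketch; the 550 nm green-light wavelength below is an illustrative choice, not a value from the text:

```python
# Photon frequency and energy from wavelength, using f = c/lambda and
# the Planck-Einstein relation E = h*f.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon of the given wavelength, in eV."""
    f = C / wavelength_m          # frequency from f = c/lambda
    return H * f / EV

print(f"{photon_energy_ev(550e-9):.2f} eV")  # green light, ~2.25 eV
```

A couple of electronvolts per photon is the right scale for dislodging a valence electron, which is why visible sunlight can drive a solar cell at all.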

Physics also establishes the inherent limitations of solar panels because the photoelectric effect only occurs according to inviolate rules. Incoming photons must arrive at a high enough frequency, and in sufficient numbers (intensity), to remove outer shell or valence electrons from atoms and add them to the electrical current output of the solar panel. Electrons occur around the nucleus in discrete orbits that are separated into discrete quantum energy levels. The photoelectric effect in semiconductors can only be understood according to the rules of quantum mechanics. An incoming photon of sufficiently high frequency strikes an electron, knocking it from the valence energy band into the conduction energy band; literally a quantum leap. However, the electron must then make its way through the rest of the atoms in the panel to reach the surface, expending energy with every encounter. Einstein accounted for this energy cost with the work function, conventionally given the symbol phi (φ). The work function varies with many factors, notably the surface condition of the material, its purity, and what is called the packing arrangement of its atoms in crystalline form. The optimization of the amount of electricity that can be extracted from sunlight must take these factors into account. [5]
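Einstein's accounting can be sketched as a short calculation: the maximum kinetic energy of an ejected electron is the photon energy minus the work function, and below the threshold frequency nothing is ejected. The 2.1 eV work function used here is an assumed illustrative value, roughly that of cesium, not a figure from the text:

```python
# Einstein's photoelectric relation: KE_max = h*f - phi, where phi
# is the work function. Below threshold, no electron is ejected.
H_EV = 4.136e-15   # Planck constant in eV*s
C = 2.998e8        # speed of light, m/s

def max_kinetic_energy_ev(wavelength_m, work_function_ev):
    photon_ev = H_EV * C / wavelength_m   # photon energy E = h*c/lambda
    ke = photon_ev - work_function_ev
    return max(ke, 0.0)  # below threshold, nothing comes out

print(max_kinetic_energy_ev(400e-9, 2.1))   # violet light: ~1.0 eV
print(max_kinetic_energy_ev(700e-9, 2.1))   # red light: 0.0, below threshold
```

Note the asymmetry the text describes: brighter red light ejects nothing, while even dim violet light ejects electrons, because frequency, not intensity, sets the threshold.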

The use of the chemistry of semiconductors and the physics of the photoelectric effect to produce electricity requires engineering, the practice of putting scientific knowledge to practical use. Engineering is the bridge from the laboratory photovoltaic cell to a practical solar cell that can be used as part of a fielded electrical power supply system. The era of solid state electronics started at Bell Laboratories in the 1950s, the epitome of electrical research and development, rivaled only by Thomas Edison’s Menlo Park for its relevance to modernity. William Shockley was hired just after World War Two to lead the effort to expand on prewar research that had led to the discovery of what were called P type for “positive” and N type for “negative” silicon semiconductive materials. Serendipity played a key scientific role here. Shipments of silicon received at Bell Labs from various manufacturers were found to have different properties, leading to the hypothesis that the differences were caused by impurities. Further experimentation revealed that the P type silicon was contaminated with boron, a Group III element with one fewer valence electron than silicon, and that the N type silicon contained phosphorus, a Group V element with one more. On Friday the 13th of April 1945, Shockley drew a diagram in his lab notebook for a P-N junction which he called “a solid state valve drawing small control current” that could be used for “controlling the flow of electricity in a conducting path.” [6] The solid state transistor to control current in an electrical circuit that he imagined was the harbinger of the information age.

Solar cells followed using the same P-N junction principle. Here the object is not to amplify or otherwise control electrical current but to make electricity out of sunlight. The key to doing this was a matter of materials engineering, using different combinations of semiconductor materials with different additives called dopants to improve efficiency―the amount of electrical energy out relative to the amount of sunlight energy in. For single-junction silicon cells, the maximum theoretical efficiency imposed by physics is 33.7 percent, called the Shockley-Queisser Limit, with more than 50 percent of the sun’s energy lost as heat. The importance of doping is straightforward. Antimony from Group V, with five valence electrons, added to silicon, with four, yields one extra electron that can readily be removed as current while both atoms retain their “inert” configuration of covalent bonds. Bell Labs produced the first operating silicon solar cell in 1954 with an efficiency of 6 percent. [7]
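The valence-electron bookkeeping behind doping can be sketched directly; the element symbols below are the boron, phosphorus, and antimony dopants named in the text:

```python
# Valence-electron bookkeeping for doping: a dopant atom substituted
# into the silicon lattice contributes its valence electrons to the
# four covalent bonds silicon wants. A surplus electron carries
# current (N type); a deficit is a hole (P type).
VALENCE = {"Si": 4, "B": 3, "P": 5, "Sb": 5}

def free_carriers(dopant):
    """+1 means a donor electron (N type); -1 means a hole (P type)."""
    return VALENCE[dopant] - VALENCE["Si"]

print(free_carriers("Sb"))  # +1: antimony donates an electron, N type
print(free_carriers("P"))   # +1: phosphorus likewise, N type
print(free_carriers("B"))   # -1: boron leaves a hole, P type
```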

Solar cells for spacecraft became the first practical application of photovoltaic technology. The International Geophysical Year of 1957 to 1958 was initiated in 1950 by scientists from across the globe to promote scientific cooperation. The US and the USSR announced plans to launch earth satellites in 1955. The US program consisted of two publicly announced projects: Vanguard, a three-stage rocket designed by the Naval Research Laboratory, and Explorer, to be launched on a missile designed by the US Army Ballistic Missile Agency. The Soviets were mum until the surprise launch of Sputnik, the world’s first artificial satellite, on 4 October 1957, followed by Sputnik 2 one month later carrying a dog named Laika. The first Vanguard launch attempt collapsed in a huge fireball that December, which the press dubbed “Flopnik.” Explorer 1 was launched successfully in January 1958. The solar cell powered Vanguard I was launched on 17 March 1958; it is still in orbit. [8] The Vanguard solar cells had a total power of one tenth of a watt over an array area of one tenth of a square meter, the equivalent of 1 watt/m2, with a cell efficiency of about 10 percent. Solar cells only work out to about the orbit of Jupiter, where the sun’s radiant energy fades. Beyond that, nuclear power sources that generate electricity from the heat of radioactive decay become necessary.
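Why solar cells give out near Jupiter follows from the inverse square law: sunlight intensity falls with the square of the distance from the sun. The solar constant and orbital radii below are standard round figures:

```python
# Sunlight fades with the inverse square of distance from the sun,
# which is why solar cells stop being practical near Jupiter.
SOLAR_CONSTANT_EARTH = 1361.0  # W/m^2 at 1 AU (Earth's orbit)

def irradiance(distance_au):
    """Solar irradiance at the given distance from the sun, in W/m^2."""
    return SOLAR_CONSTANT_EARTH / distance_au**2

print(f"Mars    (1.5 AU): {irradiance(1.5):6.0f} W/m^2")
print(f"Jupiter (5.2 AU): {irradiance(5.2):6.0f} W/m^2")  # ~50 W/m^2, under 4% of Earth's
```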

Solar cell technology advanced as an integral part of the space race between the US and the USSR in the second half of the twentieth century. With design constraints that necessitated minimum weight and surface area due to payload launch limits, aerospace applications favored higher power density cells without regard to unit cost. The key parameter is specific power, measured in watts per kilogram. By using multiple layers of solar cells made of different materials to take advantage of different wavelengths of incident solar radiation, efficiencies of over 45 percent have been achieved. The subsequent worldwide rollout of solar cell technology was precipitated by the need for stand-alone powering capabilities where transmission lines would not reach or where batteries were too expensive to install and maintain. Ironically, the oil and gas behemoth Exxon provided part of the funding to develop affordable solar cells using lower grade silicon and cheaper materials to drive the cost from $100 to $20 per watt. The motivation was to provide power for remote pumping stations and offshore rigs, primarily for signal and alarm systems. The cheaper cells made it cost effective for the US Coast Guard to replace batteries with solar cells on ocean buoys and for railroads to upgrade to wireless solar cell signaling systems. The closing decades of the twentieth century raised the ante for solar cells with the advent of roof-top panels for buildings and solar powered pumps for irrigating far flung fields. [9]

The twenty-first century opened with the inconvenient truth that the Industrial Revolution had an unintended consequence. The United Nations Environment Programme established the Intergovernmental Panel on Climate Change (IPCC) in 1988 to provide “an assessment of the understanding of all aspects of climate change, including how human activities can cause such changes and can be impacted by them.” The Third IPCC Assessment Report at the turn of the century was a clarion call to action, confirming that over the course of the twentieth century, temperature had risen 0.6°C, snow and ice cover had fallen by 10 percent leading to an average sea level rise of 15 centimeters, and precipitation had increased by 5 percent. [10] The search for carbon free energy on a global scale was on, and photovoltaics was in the crosshairs of innovative engineering. Cost would be the determining figure of merit. To manufacture and install solar panels in acres of arrays that generate electricity at a cost per watt comparable to fossil fuel became the “over the rainbow” goal. After a decade of distraction by the post-9/11 global war on terror and the ensuing financial meltdown, the US government was finally able to focus on climate.

The crux of the economics issue is that high efficiency solar cells are expensive and cheap solar cells are inefficient. To be affordable as an integral part of an energy grid of the future, solar cells must be both cheap and efficient. Silicon is the semiconductor of choice because it is abundant and therefore cheap; it is second only to oxygen as the most common element in the earth’s crust (28.2 percent). However, raw silica must be chemically treated to convert it into a crystalline form that will conduct electricity. Silicon PV cells are made by cutting crystalline silicon into thin slices that are doped to produce the P-N junction of a diode, with metallic contacts to conduct the photon-generated current flow. The crystal structure determines the efficiency of the cell. Single crystal cells are the most efficient, but they are more expensive to manufacture than cells with multiple crystals. The efficiency of the best commercial single crystal solar cells is about 20 percent. This can be improved by adding additional cells designed to capture photons at different frequencies. When these are combined in a single panel, known as a multijunction cell, efficiencies approaching 50 percent can be achieved. These PV cells are at the efficient but expensive end of the spectrum. At the opposite end are thin film solar cells applied to a substrate of metal, glass, or plastic that can be flexible to allow for contoured surface installations. Thin film solar cells trade efficiency for economy. The ultimate goal of any successful solar cell is to produce electricity at the lowest dollar per watt after all factors, including installation, maintenance, replacement, and materials cost, are included in the calculation. [11]
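The dollar-per-watt trade-off can be made concrete with a toy comparison. All the costs and efficiencies below are assumed for illustration; only the 1,000 W/m2 peak-sun rating convention is standard:

```python
# Illustrative dollars-per-watt comparison of a cheap low-efficiency
# panel vs. an expensive high-efficiency one, both rated at the
# standard 1000 W/m^2 of peak sunlight. Costs are invented figures.
PEAK_SUN_W_M2 = 1000.0

def dollars_per_watt(cost_usd, area_m2, efficiency):
    watts = PEAK_SUN_W_M2 * area_m2 * efficiency   # rated output
    return cost_usd / watts

thin_film = dollars_per_watt(cost_usd=100.0, area_m2=1.0, efficiency=0.10)
mono_si   = dollars_per_watt(cost_usd=250.0, area_m2=1.0, efficiency=0.20)

print(f"thin film:       ${thin_film:.2f}/W")    # $1.00/W
print(f"monocrystalline: ${mono_si:.2f}/W")      # $1.25/W
```

With these invented numbers the half-as-efficient panel still wins on dollars per watt, which is the whole thin-film bet: efficiency matters only insofar as it lowers the cost of each delivered watt.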

That achieving the right balance between cost and efficiency would be difficult was evident early on. The Energy Policy Act of 2005 empowered the Department of Energy (DOE) to “spur commercial investments in clean energy policies that use innovative technologies” through the use of federal loan guarantees to private companies. Solyndra, a California company that had developed copper indium gallium diselenide thin-film solar cell technology, seemed a sure bet and was richly endowed with federal funding. The technology worked to reduce power cost but the economics didn’t―the company was unable to compete with conventional, flat silicon solar panels. When it went bankrupt two years later, it was considered the “first serious financial scandal of the Obama Administration.” When the dust settled, it was generally concluded that the government’s ability to pick technology winners was inherently flawed; federal funding should be directed at research and development, with the marketplace promoting viable technologies. Bloomberg News concluded that “If the Solyndra debacle gets U.S. policy pointed in the right direction, the loan-guarantee losses won’t have been totally in vain.” [12]

The Advanced Research Projects Agency for Energy (ARPA-E) was established and funded in 2009 to advance “high-potential, high-impact energy technologies that are too early for private-sector investment” using the Defense Department DARPA model that pioneered the Internet. Of the 46 ARPA-E energy research centers funded in 2010, 24 were working on solar energy issues. These initiatives are rightly in the areas of basic research, trying to develop a solar cell that is easy to manufacture from cheap materials with sufficient efficiency to be cost competitive. Basic research is long term by its nature, with failures outnumbering the rare success by an order of magnitude. The programs in the works range from Solar Agile Delivery of Electrical Power Technology (ADEPT), to improve PV performance, to Full-Spectrum Optimized Conversion and Utilization of Sunlight (FOCUS), to expand solar cells to encompass a broader bandwidth of solar radiation frequencies. It goes without saying that a mnemonic acronym is nearly a prerequisite for government funded programs. While there have been no eureka breakthroughs to date, there is every reason to hope that there will be. [13]

While a super solar cell may be in the offing at some point, there is something to be said for Adam Smith’s tried and true economies of scale. Spaceship Earth is not payload limited like Vanguard rockets. Manufacturing myriad large, cheap solar panels in an assembly line manner to cover large swaths of surface area is sure to drive the cost per unit down, just as it did for pins according to Smith’s dictum. In 2006, one of the world’s largest semiconductor manufacturers embarked on a program to manufacture garage door sized glass panels coated with thin films of amorphous silicon, a focused attempt to sacrifice efficiency for size to lower the dollar per watt cost. The assembly line process started with 60 ft2 glass panels precoated with a thin metal oxide film, run on a conveyor belt through an automatic laser scribe to define the boundaries of 216 individual cell panels. Three layers of amorphous silicon, each absorbing light from a different part of the spectrum, were added sequentially by robotic vapor deposition. With the addition of metal contacts and a junction box, the panels were ready for shipment. With a cost of $3.50 per watt that was projected to decrease to $1.00 per watt as production ramped up, the prognosis for large scale arrays was sanguine. [14] The company shut down the assembly line in 2010 due to lack of demand. [15]

The difficulty of manufacturing solar cells in the United States to satisfy market demand, at both the high efficiency, high cost and the low efficiency, low cost ends of the spectrum, is indicative of a global economic megatrend. The solar panel supply and demand imbalance is a microcosm of the effects of China’s manufacturing juggernaut. The Chinese produced 85 percent of all solar panels sold across the world in 2022, with almost the entire balance from other Asia-Pacific (APAC) nations, mostly Vietnam. The United States and Europe produced less than one percent each. This contrasts with the roughly 1,000 terawatt-hours (TWh) of PV electricity generated globally in 2022, with 50 percent in China and APAC and about 17 percent each in Europe and North America. This sounds like a lot of power, but it is only about 15 percent of the total renewable generation of roughly 7,500 TWh, which is itself only about 10 percent of the global energy supply. Solar energy thus still comprises only a few percent of world electrical generation. The 150 gigawatts of solar capacity added in 2021 was a record amount; it is one third of the average annual addition in PV power needed over the coming decades to stay on the path to carbon neutrality by 2050. [16]

Return now to the original thesis that the sun produces ample energy to empower human enterprise many times over. Even if PV cell chemistry and physics could be engineered imaginatively into cheap and efficient solar panels, two intractable problems remain: the diurnal and seasonal variability of sunlight as an energy source, and the lack of a repository to store electricity generated when supply exceeds immediate demand. Most of the industrial world is geographically situated between 30 and 50 degrees north of the equator. This means that the 1,000 watts per square meter that falls on the equator at midday is reduced to about 600 watts per square meter in the industrial zone. It is only midday at noon, so the overall energy delivered must also be discounted by half, to 300 watts per square meter, to account for mornings, afternoons, and nights. Cloud cover, which in some locations like the UK obscures the sun for more than half the day on average, diminishes solar cell production by a further factor of two to three. The net effect is that the actual amount of solar energy that impinges on panels ranges from about 100 watts per square meter in Germany and New York to 200 watts per square meter in Spain and Texas. With commercial solar cell efficiency at 10 percent and unlikely ever to exceed 20 percent, the output electricity is only about 10 to 20 watts per square meter. [17] This means that gargantuan solar panel “farms” are needed to provide for a city sized load in the gigawatt (GW, billions of watts) range. These are only likely to be economical in areas that are closer to the equator and relatively near the cities they supply. The largest solar farm in the world, in the desert state of Rajasthan in northwestern India, covers 14,000 acres and produces just over two gigawatts. The largest facility in the United States is in California, with 579 MW (0.6 GW) covering 3,000 acres. The largest solar farm in the state of Delaware produces 15 MW on 80 acres.
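The derating chain in this paragraph multiplies out as follows; the cloud factors are assumptions chosen to reproduce the text's Germany and Texas endpoints, and the acreage figure at the end follows from them:

```python
# Derating chain: 1000 W/m^2 at the equator at noon, ~0.6x for
# mid-latitude, ~0.5x for day/night averaging, a cloud factor,
# and cell efficiency. Cloud factors here are illustrative.
PEAK_W_M2 = 1000.0

def delivered_w_per_m2(latitude_factor=0.6, daynight_factor=0.5,
                       cloud_factor=1.0, efficiency=0.10):
    return PEAK_W_M2 * latitude_factor * daynight_factor * cloud_factor * efficiency

germany = delivered_w_per_m2(cloud_factor=0.33)   # ~10 W/m^2 of electricity
texas   = delivered_w_per_m2(cloud_factor=0.67)   # ~20 W/m^2 of electricity

# Land needed for a 1 GW average supply at a middling 15 W/m^2:
acres = 1e9 / 15.0 / 4047.0   # 4047 m^2 per acre
print(f"Germany: {germany:.0f} W/m^2, Texas: {texas:.0f} W/m^2")
print(f"~{acres:,.0f} acres per average gigawatt")  # roughly 16,000 acres
```

The acreage figure is the same order as the 14,000-acre Rajasthan farm, which is the point: gigawatt-scale solar is a land-use proposition.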

It is conceivable that enough solar panel mega farms could be built in some places to make enough electricity to meet demand when the sun shines. But what do you do at night and during winter? And what do you do when PV power supply exceeds grid demand? The answer to both questions is energy storage. Saving the excess current of PV cells during cloudless, sunny days in summer to be used at night and over the winter is the Achilles’ heel of renewable energy. Long-duration energy storage (LDES) is the collective name for methods, both real and imagined, that seek to alleviate the renewable storage problem. Rechargeable batteries cannot store energy on a large enough scale because they have low energy density, a short life cycle, and, ultimately, cost too much. The most well-established LDES technology is pumped-storage hydropower, the name a literal description of its modus operandi. Excess renewable electricity is used to pump water from a low elevation catch basin to an elevated reservoir. The stored potential energy is converted back to electricity by water turbines when demand exceeds supply. There are also proposals to use the excess solar electricity to make hydrogen gas by electrolysis. One may conclude that, while solar energy may be one of many technologies that will need to be employed to reduce fossil fuel demand, it is hardly a panacea.
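Pumped-storage capacity is a straightforward potential-energy calculation, E = ρVgh, discounted by a round-trip efficiency. The reservoir volume, head, and 80 percent efficiency below are assumed illustrative figures, not from the text:

```python
# Pumped-storage energy sketch: recoverable energy is the stored
# gravitational potential energy E = rho * V * g * h times a
# round-trip efficiency. All sizing figures here are assumed.
RHO = 1000.0   # kg/m^3, density of water
G = 9.81       # m/s^2, gravitational acceleration

def storage_gwh(volume_m3, head_m, round_trip_eff=0.8):
    joules = RHO * volume_m3 * G * head_m * round_trip_eff
    return joules / 3.6e12   # joules -> gigawatt-hours

print(f"{storage_gwh(1e7, 300):.1f} GWh")  # ~6.5 GWh for 10 million m^3 at 300 m head
```

A few gigawatt-hours covers a city overnight, which is why pumped hydro dominates existing storage; covering a whole winter is another matter entirely.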

References:

1. Laughlin, R. Powering the Future, Basic Books, New York, 2011, pp 91-93.

2. Petrucci, R. General Chemistry, Principles and Modern Applications, Macmillan Publishing Company, New York, 1985, pp 198-203, 364-401.

3. “Semiconductors and Insulators, Theory of,” Encyclopedia Britannica, Macropedia, 15th Edition, William Benton, Chicago, Illinois, 1974, Volume 16, pp 522-529.

4. Marton, L. “Photoelectric Effect,” Encyclopedia Britannica, Macropedia, 15th Edition, William Benton, Chicago, Illinois, 1974, Volume 14, pp 296-300.

5. Neamen, D. Semiconductor Physics and Devices, McGraw Hill, Boston, MA, 2003, pp 104-106. http://www.fulviofrisone.com/attachments/article/403/Semiconductor%20Physics%20And%20Devices%20-%20Donald%20Neamen.pdf

6. Riordan, M. Crystal Fire: The Invention of the Transistor and the Birth of the Information Age, W. W. Norton & Company, New York, 1997, pp 97-113.

7. Smil, V. Energy in Nature and Society, MIT Press, Cambridge, MA, 2008, pp 255-257.

8. https://www.nasa.gov/feature/65-years-ago-the-international-geophysical-year-begins

9. Perlin, J. “Late 1950s – Saved by the Space Race”. SOLAR EVOLUTION – The History of Solar Energy. The Rahus Institute. http://californiasolarcenter.org/old-pages-with-inbound-links/history-pv/

10. Climate Change 2001 Synthesis Report, Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, UK. 2001.

11. https://www.energy.gov/eere/solar/solar-photovoltaic-technology-basics     

12. Lott, M. “Solyndra — Illuminating Energy Funding Flaws?” Scientific American.  September 27, 2011.

13. https://arpa-e.energy.gov/technologies/programs

14. Bourzac, K. “Scaling up Solar Power” MIT Technology Review, March/April 2010, pp 84-86. 

15. Kanellos, M. “Applied Materials Kills its SunFab Solar Business”. Greentech Media 21 July 2010.

16. https://origin.iea.org/data-and-statistics/charts/solar-pv-manufacturing-capacity-by-country-and-region-2021

17. MacKay, D. Sustainable Energy – Without the Hot Air, UIT, Cambridge, UK, 2009, pp 38-49.

Greenhouse Effect and Global Warming Gases

The greenhouse effect is the warming of the Earth due to its atmosphere. Solar radiation passes through the atmosphere as it does through the panes of glass forming the roof and walls of a greenhouse. Radiant energy impinging on the Earth’s surface and the floor of the greenhouse causes them to heat up. Since heat flows from hot to cold as a matter of basic physics, both of the now warmer surfaces heat the surrounding air by radiating upward. The greenhouse effect results because the solar radiation passes through the atmosphere and the glass with little absorption, but the surface heat radiation is partially absorbed as it seeks to escape. The reason for the difference is that the wavelengths of electromagnetic energy of the two are different. Solar radiation that reaches the Earth is shorter wave ultraviolet and visible light. The heat radiation emanating outward from the surface is longer wave infrared. The terms ultraviolet and infrared refer to the wavelengths that are shorter than, or “beyond,” the violet end of the visible spectrum and those that are longer than, or “below,” the red end (keeping ROY G BIV in mind). The significance of different wavelengths should come as no surprise. The microwaves used to heat up lunch while listening to the radio waves of music broadcast remotely are part of the same electromagnetic spectrum.

As diagrammed above, incoming solar radiation reaching the top of the atmosphere is 342 watts per square meter (Wm-2 is shorthand for W/m2). The watt is the eponymous unit of power, familiar from light bulbs, honoring James Watt, the inventor of the condensing steam engine. He coined the term horsepower so that people would understand what a steam engine could do; one horsepower is about 746 watts. Only 168 Wm-2 is absorbed by and heats the surface of the earth: 77 Wm-2 is reflected by clouds, aerosols, and atmospheric gases, 30 Wm-2 is reflected by the earth’s surface, and 67 Wm-2 is absorbed by the atmosphere. Thus, the sun’s primarily ultraviolet and visible, short wavelength incoming radiation mostly passes through the atmosphere, heating up the surface of the earth as it does a greenhouse. The outgoing surface radiation of 390 Wm-2 is shown on the bottom right. This is the longer wavelength infrared radiation of the Earth’s surface rising into the atmosphere. The change in wavelength between incoming and outgoing radiation occurs because the sun is much hotter than the earth. [1]

The radiation spread, or spectrum, between high energy ultraviolet and low energy infrared is based on the temperature of the radiating body. Some of the infrared radiation (40 Wm-2) escapes directly to space, but over 80 percent (324 Wm-2) is absorbed and radiated back to the surface by gases in the atmosphere, which are called greenhouse gases for this reason. The other heat energy components in the diagram are those associated with the hydrologic cycle; the evaporation and condensation of water are also functions of heat and temperature. The climate equation is that incoming shortwave solar radiation must be either reflected back into space or balanced by outgoing longwave radiation: mathematically, 342 = 107 + 235. It is clear that the greenhouse gases play a key role in this balance. If more gas is added, more heat is radiated back from the atmosphere and surface temperature must go up to compensate. Global warming results. [2]
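The bookkeeping in the radiation budget can be verified directly from the figures quoted in the text:

```python
# Bookkeeping for the radiation budget figures in the text (all W/m^2).
incoming = 342.0
reflected_by_atmosphere = 77.0   # clouds, aerosols, atmospheric gases
reflected_by_surface = 30.0
absorbed_by_atmosphere = 67.0
absorbed_by_surface = 168.0

# Incoming energy is fully accounted for:
assert incoming == (reflected_by_atmosphere + reflected_by_surface
                    + absorbed_by_atmosphere + absorbed_by_surface)

# Balance: reflected shortwave plus outgoing longwave equals incoming.
reflected_total = reflected_by_atmosphere + reflected_by_surface   # 107
outgoing_longwave = incoming - reflected_total                     # 235
print(f"{incoming:.0f} = {reflected_total:.0f} + {outgoing_longwave:.0f}")
```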

Trapping the heat of the sun under a warming blanket of atmosphere makes life on Earth possible. Without the greenhouse effect of its atmosphere, Earth would be like our planetary neighbor in the next orbit outward: Mars has an average temperature about 75°F below zero. If no action is taken to stem the tide of rising temperature, Earth will become more like Venus, where the mostly carbon dioxide atmosphere creates a super greenhouse effect with an average temperature of over 800°F. Planet hunting astronomers call the region near a star where liquid water can exist the Goldilocks Zone, indicating that life as we know it could be possible there―the circumstellar habitable zone. It is necessary but not sufficient that Earth is in one. It must also have a sufficiently moderating atmosphere with enough (but not too many) greenhouse gas molecules.

The French mathematician Jean-Baptiste Joseph Fourier is credited with the first observation that the earth must be warmed by solar radiation due to atmospheric containment: “Tous les effets terrestres de la chaleur du soleil sont modifiés par l’interposition de l’atmosphère” (all of the sun’s heat effects on earth are modified by the interposition of the atmosphere). [3] This philosophical observation was rooted in science by the Swedish physicist Svante Arrhenius, who first quantified the effect of carbon dioxide (then called carbonic acid) on temperature, which he called the “hothouse” effect (which, ironically, is probably the better term). His conclusion was “… if the quantity of carbonic acid increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression.” [4] The theoretical musings about the greenhouse effect became more factual over the first four decades of the 20th century. The British engineer Guy Callendar reviewed historical data in 1938 and concluded that “by fuel combustion man has added about 150,000 million tons of carbon dioxide to the air during the past half century,” resulting in a measurable global temperature increase “at an average rate of 0.005°C per year.” [5] While convincing, the correlation of carbon dioxide with temperature does not prove causation―that the accumulation of atmospheric carbon dioxide is the sine qua non for the measurable rise in global temperature.
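Arrhenius's geometric-to-arithmetic rule survives in modern form as a logarithmic relation, ΔT = S log2(C/C0), where S is the warming per doubling of CO2. The 3.0°C sensitivity and 280 ppm baseline used below are assumed round values, not figures from the text:

```python
# Arrhenius's rule in modern form: warming grows with the logarithm
# of CO2 concentration, delta_T = S * log2(C / C0). S = 3.0 C is an
# assumed mid-range climate sensitivity per doubling.
import math

def warming_c(c_ppm, c0_ppm=280.0, sensitivity_per_doubling=3.0):
    return sensitivity_per_doubling * math.log2(c_ppm / c0_ppm)

for c in (280, 560, 1120):   # CO2 in geometric progression...
    print(f"{c:5d} ppm -> +{warming_c(c):.1f} C")   # ...warming in arithmetic steps
```

Each doubling adds the same temperature increment, which is exactly the geometric-in, arithmetic-out behavior Arrhenius described.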

The first scientific experiments were carried out by the Irish physicist John Tyndall, who realized in 1859 that the absorption of radiation by gases was “a perfectly unexplored field of inquiry.” He constructed the world’s first spectrophotometer, a tube that could be filled with different gases and subjected to radiation. It was instrumented with a recently invented device called a differential thermopile that could measure minuscule changes in temperature. Six months after he began his experiments, he presented his eureka results to Britain’s Royal Society: different gases varied markedly in their ability to absorb and retransmit radiant heat. Nitrogen and oxygen, which make up over 99 percent of the atmosphere, were found to be essentially transparent to radiant heat, but other more complicated molecules, including water vapor, carbon dioxide, ozone, and (quixotically) perfume, absorbed heat much more readily, even in small concentrations. Tyndall stressed the importance of water vapor, because “comparing a single atom of oxygen or nitrogen with a single atom of aqueous vapor, we may infer that the action of the latter is 16,000 times the action of the former.” [6] He concluded that water vapor was the most important gas controlling the surface temperature of the earth. This, then, became Royal Society gospel and accepted science for over a century.

The emergence of carbon dioxide as the true climate chimera was only a matter of time and science. Tyndall’s primitive experiment demonstrated only that humid air absorbed heat energy. Why it did so was another matter. The physics is complex, relating to the quantum energy levels of the atoms of greenhouse gas molecules. Spectroscopy, the study of the absorption and emission of light and other radiation as related to its wavelength, evolved rapidly in the early decades of the twentieth century. The emission or absorption of light within a narrow frequency and energy band is called a spectral line. Carbon dioxide has thousands of spectral lines that are responsible for the absorption of the infrared radiation of heat energy. These lines vary in intensity and width with temperature and pressure and therefore with altitude, so a detailed understanding only became possible with accurate measurements at different heights in the atmosphere … a multivariable problem in three dimensions presenting a tangle of interrelated calculations. High-speed computation was needed to run the iterative sequences of differential equations. By the 1950s, the measurements were available and the computers were programmed. The absorption of heat energy by molecules of carbon dioxide became settled cause-and-effect science. As early as 1956, there was convincing evidence that “…if the carbon dioxide content of the atmosphere should double, the surface temperature would rise by 3.6 degrees Celsius.” [7]

There remains the vexing problem of water vapor. There is a rational reason why water and its vapor loom large in debates about climate change causation. Weather, the fluctuating state of the atmosphere whose elements of wind, rain, and sunshine define climate only when averaged over decades, is dominated by water. Rain in summer and snow in winter come from clouds that are condensed water vapor evaporated from liquid oceans, lakes, and rivers. Water is the most variable component of the atmosphere and is central to climate variability and change. Oceans cover 70 percent of the Earth’s surface, contain over 96 percent of its water, produce 86 percent of all evaporation, and receive 78 percent of all rain. Spinning this sloshing volume at speeds of up to 1000 miles per hour between and around the embedded land mass continents of a tilted, heated globe produces weather.

Water vapor is a natural greenhouse gas. It is also the most heat absorbing of all greenhouse gases. The hydrologic cycle of evaporation, rain, and runoff has been going on for billions of years ― the planetary plumbing system. The storing of the sun’s heat energy as the latent heat of evaporation of water into the atmosphere (note figure above) and its release when the vapor condenses to fall as rainwater provides the energy for weather. To complicate matters, water vapor produces positive feedback. Warmer weather means more evaporation, which increases the water vapor in the atmosphere, which traps more heat, which causes warmer weather. Positive water vapor feedback is considered the most important factor in amplifying the increase in surface temperature. Further, water vapor condenses into clouds, which are not gases but contribute nonetheless to the greenhouse effect by absorbing and emitting infrared heat radiation. But clouds also act as a shield, cooling the climate by reflecting solar radiation. The variability of cloud formation and movement is one of the most profound conundrums of climate science. The only plausible way to address the chaotic interplay of sun, wind, and water was to develop increasingly sophisticated models that require high-speed supercomputers. That effort has now evolved to many different models that can be compared and contrasted to narrow the uncertainty.

The Coupled Model Intercomparison Project (CMIP) was started in 1995 as a collaboration among modelers to compare results. First-generation Atmosphere-Ocean General Circulation Models (AOGCM) used the physical dynamics of atmosphere, ocean, land, and sea ice as impacted by greenhouse gases and particulates called aerosols. State-of-the-art Earth System Models (ESM) were more recently added to include the effects of biochemical carbon, sulfur, and ozone cycles. Model validation consists in part of inputting historical data to compare model output with the known result. The latest CMIP round was based on data collections that ended in 2013 to evaluate the relative efficacy of 56 different models from twelve countries including the United States, China, Russia, and Norway (where weather forecasting started). The conclusion was that doubling the amount of carbon dioxide in the atmosphere would result in a temperature increase of 2.1 to 4.7 degrees Celsius. [8] It is worth noting that the 3.6 degree rise estimated in 1956 is consistent with this result. Modeling continues as carbon dioxide emissions and temperature keep rising.
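Arrhenius’s “geometric progression / arithmetic progression” statement and the modern model results both express the same logarithmic relationship. In present-day shorthand (the symbols S, C, and C₀ are standard climate-science notation, not drawn from the sources above):

```latex
\Delta T \;\approx\; S \,\log_2\!\left(\frac{C}{C_0}\right)
```

where C₀ is the pre-industrial carbon dioxide concentration, C the current one, and S the equilibrium climate sensitivity – the warming per doubling that was put at 3.6 degrees in 1956 and that the CMIP models bracket at 2.1 to 4.7 degrees.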

Even though water vapor is the dominant greenhouse gas, it is essentially irrelevant to climate change just as it is paramount to weather. Its variability in the short term of weather is offset by its consistency over the long haul of climate. The rising concentrations of other atmospheric gases do not immediately impact weather ― but they are at the epicenter of the climate change problem because they have been and are being added to the atmosphere continuously. The conclusion made by the United Nations was that the only way to arrest climate change was to reduce the atmospheric emission of greenhouse gases over time. The Kyoto Protocol was a United Nations (UN) treaty initiated in 1997 that went into effect in 2005 after ratification by Russia completed the stipulated entry-into-force requirements, which included a quorum of fifty-five nations. It specified limits on the six greenhouse gases that were found to be the most damaging due to their heat absorption characteristics and concentration in the atmosphere: carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), sulfur hexafluoride (SF6), hydrofluorocarbons (HFCs), and perfluorocarbons (PFCs). The Global Warming Potential or GWP was established as a parameter, with a molecule of carbon dioxide having a value of 1, in order to quantify the effects of the other gases. The UN Conference of Parties (COP) that constitute the signatories to the treaty agreed to proceed “with a view to reducing their overall emissions of such gases by at least 5 per cent below 1990 levels in the commitment period 2008 to 2012.” [9,10]

The last three “minor” greenhouse gases are frequently grouped together as the “F-gases” to indicate that they contain the element fluorine; taken together, they constitute less than 1% of total greenhouse gas emissions. Sulfur hexafluoride (SF6) gas is used primarily in high-voltage electrical distribution systems due to its insulation properties. It was a replacement for oil-filled electrical components that contained polychlorinated biphenyls (PCBs), which were banned in 1979 by the Toxic Substances Control Act (TSCA). Each SF6 molecule is the equivalent (GWP) of 23,900 molecules of CO2. HFCs and PFCs consist of a number of different compounds that were formulated to replace chlorofluorocarbons (abbreviated as CFCs) that were banned by the Montreal Protocol of 1987 due to their ozone-depleting effect (ozone filters damaging UV radiation). Their GWP values range between 140 and 11,700 for HFCs and between 6,500 and 9,200 for PFCs. In the 1950s, Barry Commoner, a prescient scientist at the forefront of the environmental movement, devised four laws of ecology. [11] The irony of introducing greenhouse gases (HFC and PFC) to replace an ozone-depleting substance (CFC) is direct evidence of his fourth law – “There is no free lunch” – every environmental solution (ozone depletion) has a cost (greenhouse gases). This applies equally to SF6 and PCBs.
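The GWP parameter makes the bookkeeping straightforward: emissions of each gas are multiplied by its GWP to express them in carbon dioxide equivalents (CO2e). A minimal sketch of that arithmetic, using the SF6 value quoted above and rounded GWP figures for methane and nitrous oxide; the emission tonnages are invented purely for illustration:

```python
# Convert emissions of several greenhouse gases to CO2-equivalents
# using Global Warming Potential (GWP) multipliers.
# SF6's GWP of 23,900 is quoted in the text; CH4 and N2O use rounded
# values; the tonnage figures are hypothetical examples.
GWP = {"CO2": 1, "CH4": 30, "N2O": 300, "SF6": 23_900}

emissions_tonnes = {"CO2": 1_000_000, "CH4": 10_000, "N2O": 1_000, "SF6": 10}

# Multiply each gas's tonnage by its GWP to get CO2-equivalent tonnes.
co2_equivalent = {gas: tonnes * GWP[gas] for gas, tonnes in emissions_tonnes.items()}
total = sum(co2_equivalent.values())

print(co2_equivalent)  # SF6's mere 10 tonnes become 239,000 tonnes CO2e
print(f"Total: {total:,} tonnes CO2e")
```

The exercise shows why even trace F-gases matter: ten tonnes of SF6 outweigh, in warming terms, hundreds of thousands of tonnes of carbon dioxide.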

Nitrous oxide (N2O) is the least known of the three “major” greenhouse gases, its provenance usually listed as “agricultural soil management.” With a GWP of about 300, it constitutes about 8% of the total greenhouse gas composition. The main culprit is fertilizer, which is about 10 percent nitrogen that must be added to the soil to compensate for the nitrogen removed with the harvest of the crop – about 100 pounds of nitrogen are removed with the harvest of every acre of corn. Fertilizer is necessary and sufficient to “manage” agricultural soil productivity. This added nitrogen is acted upon by the bacteria in the soil as a source of energy for their own growth and reproduction – a process called nitrification, basically the conversion of ammonium (NH4+) into nitrate (NO3-). Nitrous oxide is a naturally occurring by-product of bacterial nitrification of the added nitrogen-based fertilizer. Not to get too technical but to be complete, there is also a process called denitrification in anaerobic (lacking oxygen) soils where bacteria reduce nitrate to gaseous nitrogen; denitrification, like nitrification, releases nitrous oxide as a by-product. Thus, as more crops are grown for the ever-expanding global population for food, fodder, or fuel (ethanol), more nitrogen-enriched fertilizer must be used to reconstitute the depleted soil – and therefore more nitrous oxide results. The Anthropocene nitrogen cycle has been called the Wibbly-Wobbly Circle of Life. [12] Commoner’s first law of ecology is “Everything is connected to everything else.” The earth is such a complex and balanced ecosystem that every disturbance (added fertilizer) has far-reaching effects (greenhouse gases and global warming).

The three primary sources of methane (CH4), with a GWP of around 30, are enteric fermentation, natural gas systems, and landfills. Taken together, they contribute more than three fourths of total methane emissions. Enteric fermentation methane comes from the normal digestion of food by ruminant animals, particularly cattle. Ruminants are named for the rumen, the first of their four stomachs – the repository for the fibrous material that they consume. Microbes in the rumen break down the tough cellulose as part of the digestive process; methane is a byproduct of that process that is expelled by the animal as exhalation. Over 95% of enteric fermentation methane is from beef and dairy cows. Other animals, including humans, produce the remainder of the enteric (intestinal) fermentation methane as flatulence. Methane is the primary constituent of natural gas that is widely used for heating and to generate electricity – some of this natural gas escapes into the atmosphere. Landfills are the largest of the three major sources of methane, comprising almost 40% of the total – the source is anaerobic bacterial decomposition of human trash. Commoner’s second law of ecology applies to methane – “Everything must go somewhere” – there is no way to simply throw things (trash) away, because it will still be there and you have to live with the results (greenhouse gases).

And last but certainly not least is carbon dioxide, the scion of the industrial age and perhaps the harbinger of its demise; it makes up more than 80% of all greenhouse gases – by definition it has a GWP of 1. The carbon cycle is the essence of life; carbon dioxide is the input to plant photosynthesis and the output of organisms like humans oxidizing food for energy. The majority of excess carbon dioxide in the atmosphere comes from the combustion of fossil fuels – oil, gas, and coal. It is the energy released by the oxidation of hydrocarbons that is both the boon and the bane of the modern world. For example, the natural gas reaction is:

                          CH4  +  2O2  ―>  CO2  +  2H2O  +  energy
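The mass bookkeeping behind this reaction can be checked with rounded atomic masses (C = 12, H = 1, O = 16 are standard values, not taken from the text): one molecule of methane at 16 atomic mass units yields one molecule of carbon dioxide at 44, so every kilogram of natural gas burned produces about 2.75 kilograms of CO2. A quick sketch of the check:

```python
# Mass balance for methane combustion: CH4 + 2 O2 -> CO2 + 2 H2O.
# Atomic masses rounded to integers: C=12, H=1, O=16.
C, H, O = 12, 1, 16

mass_ch4 = C + 4 * H   # 16
mass_o2 = 2 * O        # 32
mass_co2 = C + 2 * O   # 44
mass_h2o = 2 * H + O   # 18

# Conservation of mass: reactants (16 + 64) equal products (44 + 36).
assert mass_ch4 + 2 * mass_o2 == mass_co2 + 2 * mass_h2o

# Each kilogram of methane burned yields 44/16 kilograms of CO2.
print(mass_co2 / mass_ch4)  # 2.75
```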

The level of CO2 in the atmosphere has historically been about 280 parts per million (ppm). It is now over 420 ppm. The energy we use to make electricity and to operate vehicles is increasing greenhouse gas concentrations, which are causing the earth to heat up. “Nature knows best” is Commoner’s third law of ecology – every human-made change is likely to be detrimental to the balance of nature. Anthropogenic greenhouse gases are the most obvious and potentially existential example. Our mother is nature.

References:

1. IPCC Third Assessment Report – https://www.ipcc.ch/report/ar3/wg1

2. Dessler, A. and Parson, E. The Science and Politics of Global Climate Change, Cambridge University Press, New York, 2006, pp 6-11.

3. Fourier, J. “Remarques générales sur les températures du globe terrestre et des espaces planétaires”. Annales de Chimie et de Physique. 1824 Volume 27 p 165.

4. Arrhenius, S. “On the influence of carbonic acid in the air upon the temperature of the ground”  The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. April 1896, Volume 41 No. 251: pp 237–276.

5. Callendar, G. “The artificial production of carbon dioxide and its influence on temperature” Quarterly Journal of the Royal Meteorological Society April 1938 Vol. 64 Issue 275 pp 223-240.

6. Fleming, J. Historical Perspectives on Climate Change, Oxford University Press, New York 2005. pp 66-74.

7. Plass G. “Carbon Dioxide and the Climate.” American Scientist, 1956, Volume 44 pp 302-316.

8. Flato, G. et al Evaluation of Climate Models. Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change Cambridge University Press, Cambridge, UK. pp 741-827.

9. Kyoto Protocol to the United Nations Framework Convention on Climate Change. Conference of the Parties. FCCC/CP/L.7/ADD.1, Kyoto, Japan, 10 December 1997.

10. https://www.epa.gov/enviro/greenhouse-gas-overview   

11. Miller, Stephen. “Early Voice for Environment Warned About Radiation, Pollution”. The Wall Street Journal. Retrieved June 2018. In his 1971 best seller The Closing Circle, Commoner posited four laws of ecology: Everything is connected; Everything must go somewhere; Nature knows best; and There is no such thing as a free lunch.

12. Essay in The Economist, 24 December 2022.

Moose

Moose frequent ponds and streams to eat aquatic plants.

Common Name: Moose, elk – The Algonquian word moos-u means “he shaves or trims.” Native Americans applied the sobriquet to describe the animal’s characteristic stripping of bark and lower branches from trees. It became moose as colonists migrated north into their habitat. Moose are called elk in Europe absent any prior vernacular names in native languages. Elk is derived from elaphos, the Greek word for deer.

Scientific Name: Alces alces – Alces is Latin for elk. It is the only species in the genus, so one name serves for both. The American elk is a completely different species from the European “moose elk.” Classified in the deer genus as Cervus canadensis, elk are also called wapiti, from the Shawnee word meaning “one with a white rump,” a prominent visual characteristic.

Potpourri:  Moose are solitary sentinels in the northern, boreal forests spanning the globe in both North America and Eurasia, where they are known as elk. Moose is metaphor for rugged individualism, surviving the extremes of ice and deep snow with diminished sunlight and plunging temperatures without the restful hiatus of hibernation. They are the giants of the deer family, their bulk sustained by an herbivorous diet. Moose are capable of consuming almost anything that they can find. During winter, they consume up to fifty pounds of twigs and shrubs a day. Summer is a relative smorgasbord with closer to sixty pounds of birch, willow, aspen, and maple leaves supplemented with a wide range of aquatic plants. [1] With towering columns for legs, they can pass through snowdrifts in pursuit of nature’s scant winter provender. Unlike their cervid cousins that form herds for some protection in numbers, moose keep to themselves. Their sheer bulk wards off all but the most determined of predators, primarily wolves. From Teddy Roosevelt’s Progressive “Bull Moose” Party to the multitudes associated with Moose International, moose is metaphor … an indomitable animal astride the frozen tundra symbolizing strength and salubrity.

Moose have all of the characteristic features of the deer or cervid family to which they belong. Cervidae is derived from the Latin cervus, meaning hart or stag (applied now only to males), which in turn comes from keras, the Greek word for horn. Deer are hoofed mammals that subsist wholly on plants and have horns in the form of antlers. While the class Mammalia generally means warm-blooded, hairy animals that feed offspring with milk from mammary glands, Linnaeus, when he first introduced the taxonomic grouping in the tenth edition of Systema Naturae of 1758, also included four-chambered hearts, lungs, a covered jaw, five sense organs, mostly four feet, and a tail as key traits. [2] All of these features are modified through evolution to suit the particular mammalian environment in which survival is sought. In the case of moose, large body size, palmate antlers, dense hair, long legs, and an extended, snouted jaw are necessary and sufficient to eke out a living in the extremities of northern latitude.

Moose Habitat. Cascade Canyon, Grand Teton National Park

Large size and cold climate are related according to Bergmann’s Rule. The eponymous correlation was first established by the German biologist Carl Bergmann in 1847 with the hypothesis that heat loss is proportional to the ratio of an animal’s surface area to its volume. The logic is that thermogenesis (“heat production”) is a matter of body mass while heat loss emanates mostly from the surface. In essence, larger moose, bear, and lynx are more likely to survive the cold and reproduce than those at the lower end of the size spectrum. The rule holds fairly well for warm-blooded animals like mammals (71 percent) and birds (76 percent). There are other biological traits that vary according to latitude. Gloger’s Rule is that lighter colors prevail in northern areas as a matter of survival due to cryptic coloring for both predators and prey; the arctic fox is hard to spot and the snowshoe rabbit is hard to find. As moose are not predators and only rarely prey (except as calves), there is no environmental stressor to select for whiter fur. Similarly, Allen’s Rule is that northern animals have smaller appendages like ears, tails, and limbs relative to southern cousins ― a corollary also based on heat loss. [3] Field studies have shown that moose do indeed get larger from the southern end of their range to the north, following Bergmann’s Rule, but their ears and antlers get larger and wider at the same time, violating Allen’s Rule. [4]
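Bergmann’s surface-to-volume logic can be made concrete with a toy calculation: model an animal as a sphere, so that heat production scales with volume while heat loss scales with surface area. A minimal sketch (the sphere is, of course, a caricature of any real animal):

```python
import math

def surface_to_volume(radius):
    """Surface-area-to-volume ratio of a sphere:
    (4*pi*r^2) / ((4/3)*pi*r^3), which simplifies to 3/r."""
    return (4 * math.pi * radius**2) / ((4 / 3) * math.pi * radius**3)

# Doubling the radius halves the relative surface through which heat
# escapes -- Bergmann's argument for larger bodies in colder climates.
small = surface_to_volume(0.5)
large = surface_to_volume(1.0)
print(round(small, 6), round(large, 6))  # 6.0 3.0
```

The ratio falls as 3/r, so the bigger the body, the smaller the fraction of its heat-producing mass exposed to the cold.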

Moose antlers are employed in head-butting contests to establish male pecking-order dominance during the annual rut. This is as dangerous as it sounds. Weighing as much as a ton each, two bull moose jousting with bony, multi-tined weapons frequently break ribs and scapulae while tearing through flesh. About five percent of moose die in combat every year and one third will die of wounds inflicted over the course of their short, brutish lives. The winners do most of the mating. All of this is nature’s pathway for the winning bull moose to perpetuate genes for bulk, brawn, and big antlers without regard to latitude. Once this is accomplished, the antlers fall off in the winter, only to be regrown from the skull up in time for the next mating season. In the wild, where survival depends on serendipity, the handiwork of evolution is here evident. The energy needed to grow a set of bull moose antlers made from the same skeletal bone tissue that forms the framework of the body is up to five times that needed for sustainment metabolism. Bull moose can lose up to twenty percent of their body weight in the run-up to the rut. This is about the same energy differential as that needed for a cow moose to give birth to a calf (sometimes two and rarely three). Getting enough protein from an herbivorous diet is hard enough, but getting enough calcium and phosphorous to make antler bone tissue is the real challenge. These minerals must be sequestered from extant bone, resulting in osteoporosis and weakness just in time for rut trial by contact combat. [5] While this must have been a good evolutionary result for moose in their current environment, it may not be sustaining in the long term. Evolution is a record of the past with no plans for the future.

The violence of antler assaults is testimony to the importance of sexuality in the evolutionary cycle of life in the caldron of survival selection. It is just one of numerous characteristic traits that emerged as successful in promoting the moose brand, with cows attracting bulls and vice versa. Reproduction is a biological mandate, not an option. Meeting and mating for moose that live alone out of sight in remote wilderness habitats must rely primarily on sound and smell. For enhanced audio, bull moose ears and antlers operate in synchrony with four key features to detect even the faintest cow moose bleat. The moose ear or pinna is about 65 square inches, more than fifty times larger than ours. Stereophony, the ability to determine sound directionality, is enhanced with a wide separation of a foot, twice that of humans. Moose ears operate independently, each rotational for a complete 360 degrees and tiltable by 90 degrees from vertical. And lastly, like the hearing horn of yore, moose antlers concentrate sound to amplify the signal and enhance detection over background noise. Measurements using a taxidermic antlered moose head (there are regrettably many to choose from) revealed a fifty percent decibel increase when measured at the base of the antler. It is probable that the wide, palmate shape of moose antlers, unlike the tubular shape of those of most other deer, evolved as a more effective sound receiver. It is also probably not a coincidence that female moose have a better sound repertoire than their male counterparts, which is unique among cervids. [6]

Pheromones as aphrodisiacs operate over time to establish a geographical datum where a sound signal may only have provided a vector direction. Smells are considered to be the most enduring of animal senses. For moose, they are sine qua non. Mammalian olfactory systems consist of two separate “chemosensory” signals to different parts of the brain that end in the hypothalamus, the region controlling behavior and endocrine/hormonal response. The main olfactory system (MOS) samples the air for volatile chemicals across a broad spectrum for general situational awareness, such as food emanations. The accessory olfactory system (AOS) triggers a response in a specialized sensor called the vomeronasal organ (Jacobson’s Organ) that is thought to be exclusively for reproduction-related smells. There are four kinds of pheromones: Modulators influence general psychological state; Releasers have specific, immediate responses; Signalers are less specific and gradual; and Primers change behaviors over the longer term. For moose, bull rut urine is Cupid’s aromatic arrow. As a pheromone, it is a Releaser, causing “overt displays of attraction and copulation.” It is a complex compound, not yet fully characterized, with over 100 chemical constituents. Courting consists of a rutting bull digging and urinating in a dirt pit, then wallowing in the resultant muck to obtain a whole-body bridal bouquet. Cows attracted to the smell follow suit until the nuptial party is fully aroused and sex ensues. [7] Life goes on according to the laws of nature, a new calf conceived.

Moose have a distinctive rounded, downward-drooping snout from which a fleshy outgrowth called a dewlap is suspended. The bulbous snout houses an elaborate snorkel system comprised of two fatty nose plugs that are held over the nostrils by powerful muscles. [8] The moose snorkel seals air lines against water intrusion in like manner to submarines and reef divers. The delicate, conical deer-family muzzle was transformed to moose snorkel-snout to facilitate consumption of aquatic plants, not infrequently with full-immersion dives to lake bottoms. In one observed forage, a moose dove for almost an hour, covering 100 square yards, swimming at speeds comparable to a paddled canoe. Moose, then, are semi-aquatic deer. Their effect on riparian ecosystems is substantial, depositing the equivalent of one hundred pounds of commercial fertilizer in a year. [9] The moose dewlap projection is similar to the swollen necks of lizards and to bird wattles. There are numerous hypotheses about the function of dewlaps that range from sexual attraction to predator avoidance. The former is based on the “peacock’s tail argument,” in that having a huge encumbrance with no function must mean good genes, and the latter is based on increasing apparent size to scare off would-be attackers. [10] It is hard to see how this might apply to moose, where sexuality is a matter of olfaction and whose huge bulk hardly needs accentuation. Since bull and cow moose both have dewlaps, albeit with a substantial amount of sexual dimorphism (the male dewlap is much larger), it is more likely that the dewlap is vestigial, like the tailbone coccyx in humans. The comical appearance of droopy-snouted moose is epitomized by Bullwinkle, the sidekick of Rocky the Flying Squirrel, who lacks a dewlap altogether.

Moose have followed the same population fluctuations as the white-tailed deer, from the bust of the nineteenth century to the boom in the twenty-first. The burgeoning human enterprise moving inexorably west and north through the 1800s depleted moose directly by hunting for both food and sport (moose would hardly call it that) and indirectly through the habitat destruction of tree removal. As the human diaspora reversed in the urbanization of the 20th century, newly fallowed fields progressed to forests and moose moved southward to their original range across the northern tier of states. With an estimated fifty thousand moose in the northeast alone (and more across the upper Midwest), moose crossing signs now proliferate, warning motorists to be wary ― due to their hood-high height, a direct collision with a moose sends a one-ton weight through the windshield, usually with fatal results. This is especially a problem in winter, when moose seeking salt learn that treated roads are covered with it. [11] Increasing numbers of hungry moose wandering near homesteads also increase the likelihood of human encounters, which are not always benign. Unlike deer, moose can be quite aggressive, particularly during mating season and when accompanied by calves. Man’s best friend is equally sworn moose enemy due to their penchant for barking and chasing. Dogs are accordingly subject to targeted moose attacks even without specific provocation. [12]

Moose population dynamics have long been a matter of scientific interest. Questions about environmental sustainability and the role of predators can only be properly answered with field observations, which are impractical to conduct in open ranges. Michigan’s Isle Royale National Park, situated fifteen miles offshore in Lake Superior, has served as an isolated experimental enclave for well over a century. In the early 1900s, several moose crossed over to the 200-square-mile island and, absent wolf predation, proliferated. The moose population surpassed 3,000 in 1930, consuming most of the food supply and resulting in a period of starvation from which only one out of every fifteen moose survived. In about 1950, a population of wolves also immigrated to the island across a frozen channel, allowing for a comprehensive study of predator/prey behavior in the wild. Over time, a pack of about twenty wolves preyed almost exclusively on the young, old, and infirm, which stabilized the moose population at 500 with a self-sustaining food supply validated by measuring tree ring growth. [13] The now balanced ecosystem became one of the foundational bases for the reintroduction of apex predators to several western states. By 2017, the Isle Royale wolf population had dwindled to just a single mating pair. This was attributed to inbreeding, one of the unintended genetic consequences of the mammalian dominant male model. As a result, the moose population had tripled to 1,500 with commensurate overbrowsing damage. To remedy the otherwise inevitable die-off, six wolves have been released on the island as part of a new 20-year study by the National Park Service. [14]

In recent years, moose populations have plummeted, most notably in Vermont and Minnesota. While speculative, the effects of climate change are thought to play a key role. Three factors are germane: rising temperature, changes to forest species composition, and changes in the species and numbers of parasites. The dense, double-layer pelt that protects moose from the rigors of winter becomes a heat blanket when temperatures rise. Heat stress in moose occurs when summer temperatures exceed 57 degrees F or winter temperatures exceed 23 degrees F. The only cooling remedies available are to seek shade, get wet, or move north, and many do. Tree species migrate north for the same reason ― they evolved to operate within a temperature band that balances evaporation with uptake. The maple and birch trees that are staples of moose cuisine are being driven out by the less palatable tough-barked oaks and hickories. [15] But the primary culprit for the struggling moose population is parasitic. White-tailed deer are carriers of the black-legged tick that causes Lyme Disease in humans. They are also carriers of the winter tick (Dermacentor albipictus) that infests moose. Some researchers have concluded that the winter tick is the primary cause of moose mortality in New England. An individual moose can host up to 50,000 ticks, causing lesions that result in the loss of almost all of the protective fur. In some areas, more than fifty percent of juvenile moose succumb. [16] The plight of polar bears has been the focus of climate change Cassandras. Moose may be next.

References:   

1. http://www.env.gov.nl.ca/snp/Animals/moose.htm    

2. Drew, L. I, Mammal, Bloomsbury Publishing, London, 2017, pp 9-25.

3. Millien, V. “Ecotypic variation in the context of global climate change: Revisiting the rules”. Ecology Letters. Volume  9  Issue 7, 23 May 2006  pp 853–869.

4. Nygrén, T. et al “Moose Antler Type Polymorphism: Age and Weight Dependent Phenotypes and Phenotype Frequencies in Space and Time.” Annales Zoologici Fennici 19 December 2007 Volume 44, Number 6,  pp 445-61.

5. Emlen, D. Animal Weapons, Henry Holt and Company, New York, 2014, pp 117-122.

6. Bubenik, George A.; Bubenik, Peter G. “Palmated antlers of moose may serve as a parabolic reflector of sounds”. European Journal of Wildlife Research. August 1, 2008, Volume 54 Number 3 pp 533–535.

7. Whittle, C. “Identification and Function of Male Moose Urinary Pheromones” PhD Thesis, University of Alaska, 2005.

8. Sharp, D. “Researchers take a look at the moose’s enigmatic nose”. USA Today. May 5, 2004.

9. Pennisi, E. “This diving, pooping moose is saving the ecosystem – for now” Science, 21 October 2018.

10. Bro-Jorgensen, J. “Evolution of the ungulate dewlap: thermoregulation rather than sexual selection or predator deterrence?” Frontiers in Zoology. 18 July 2016 Volume 13 Number 1 p 33.

11. Schueller, G. “Moose in a Mess” Defenders of Wildlife Magazine, Winter 2007.

12. Alaska Department of Fish and Game “What to Do About Aggressive Moose” at http://www.wildlife.alaska.gov/index.cfm?adfg=aawildlife.agmoose     

13. Lack D. “Population, Biological” Encyclopedia Britannica Macropedia W. Benton Publisher, University of Oxford, Volume 14 p 839.

14. Mlot, C. “Classic Wolf-Moose Study to be recreated on Isle Royale” Science Volume 361 Issue 6409, 28 September 2018. Pp 1298-1299.

15. Rines, K. “New Hampshire’s moose population vs climate change”. New Hampshire Fish and Game Department Report 5484.

16. Debow, J. et al “Effects of Winter Ticks and Internal Parasites on Moose Survival in Vermont, USA”. The Journal of Wildlife Management. 2 August 2021 Volume 85 Number 7 pp 1423–1439.

Wintergreen

Common Name: Wintergreen, Teaberry, Checkerberry, Boxberry, Mountain tea, Deer berry, Ground holly, Spiceberry. Often confused with Partridgeberry due to similarities in ground-hugging habit, berry size, and color. In a forest of deciduous trees that is otherwise nearly denuded in winter, the clusters of bright green shiny leaves that cover the ground in large swaths are eye-catching, a reminder that even in winter there is green ― wintergreen.

Scientific Name: Gaultheria procumbens – Jean François Gauthier was the royal physician and botanist for King Louis XV in the North American colony of New France. The Swedish/Finnish naturalist Peter Kalm, an apostle of Carl Linnaeus, honored Gauthier with the eponymous genus name in recognition of the support he had provided during Kalm’s expedition to North America in 1748. The species name is  from the Latin verb procumbere which means to fall, bend, or lean forward. Procumbent is a botanical term for plants that have stems that trail along the ground without putting down roots.  

Potpourri: Wintergreen is a contradiction in terms. Winter is white snow and occasionally black ice. In the waning light of autumn, leaves of deciduous trees turn from green to yellow and/or red and eventually brown as they die and fall to become a part of the earthworm-churned humus below. Trees with leaves or needles that don’t fall in fall are called evergreen as they always are (ever green); winter has nothing to do with it. The seasonal oxymoronic distinction for the diminutive ground cover is likely a matter of perspective. The expanse of shiny bright green leaves trailing through the woods is in stark contrast to the browns and grays of the wintering forest floor. Wintergreen is most notable for the aroma and flavor of its leaves and berries. The name wintergreen accordingly evokes the freshness of the mountain air in winter and is a metaphor for natural purity. Like all floral emanations, however, wintergreen is produced by the plant for its own purpose absent any human influence.

Wintergreen is a member of the heath family, Ericaceae, derived from ereike which is the Greek name for heather. The ericoids are predominantly perennial, woody shrubs and herbs that occupy acidic uplands with low soil fertility ― they necessarily evolved survival strategies suited to these distressed, niche areas. Among the roughly 2,000 heather-type plants are some of the most noteworthy montane species including mountain laurel, pink azalea or Pinxter flower, mountain rosebay or Catawba rhododendron, and high-bush blueberry. In many cases, heath plants dominate their habitat, crowding out the competition to create a virtual monoculture in the understory. This is evidenced by the dense stands of mountain laurel and rhododendron in the northern and southern Appalachians respectively.  Wintergreen is their diminutive cousin, consisting of leathery, alternate leaves with a distinctive sheen and almost imperceptible teeth along the margin. Bell-shaped flowers scented to attract pollinating bumblebees become the red berries of autumn that persist into winter. Red attracts foraging birds to consume the pulp, depositing the indigestible seeds remotely in a dollop of nutritious excrement. [1]

Wintergreen flowers attract pollinators

Marginal habitats are especially challenging for all living things. Animals have the option to rove in and out, seasonally in search of food and surreptitiously to avoid predators. Plants and fungi are sessile, growing only upward and outward from a set datum. Once they establish underground interconnected networks of roots and mycelia (the “wood wide web”), the die is cast. Any and all interaction with the outside world to attract benefactors and repel invaders becomes a matter of plant physiology. Metabolism is the general name for the chemistry of growth and decay, consisting of both the new tissue growth of anabolism and the energy creation and waste disposal of catabolism. Plants produce hundreds of thousands of primary and secondary metabolic chemicals, called metabolites, that handle both attraction and repulsion. Metabolites with low molecular weight and an affinity for fat (lipophilic) are often volatile, becoming airborne due to evaporation when exposed to air at ambient temperature. The most effective way to communicate at a distance is to take advantage of atmospheric motion and dispersal. More than 7,000 volatile plant metabolites have been identified from foods and beverages. [2] The volatile oil produced by the wintergreen plant is methyl salicylate.

Methyl salicylate is a colorless liquid at room temperature and consists of 8 carbon, 8 hydrogen, and 3 oxygen atoms with the formula C8H8O3. [3] Fresh wintergreen leaves contain less than 1 percent methyl salicylate by dry weight (technically called weight percent, abbreviated wt%). The oil is extracted by bulk fermentation of harvested leaves; an enzyme breaks the chemical bond to release almost pure (96-99 wt%) methyl salicylate. [4] Volatile oils like wintergreen probably originated through the random mutations of evolution as a way to deter herbivores. While herbivore is generally applied to distinguish vegetation eating from meat eating among animals, here it refers to the leaf-eating insects that were a primary threat to primordial plants. Methyl salicylate evolved independently of wintergreen in other plant species, as similar threats to survival yield similar reactions, a well-documented phenomenon called convergent evolution. Over the eons, the original volatile plant oils evolved further to promote survival, taking on a wide variety of functions such as attracting some animals and repelling others. The complex nature of plant chemical interactions with their environment remains largely opaque to science, except for the few chemicals that have been subject to field study ― like methyl salicylate. [5]
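The formula can be sanity-checked with a few lines of arithmetic. The following sketch computes the molar mass of C8H8O3; the atomic weights are standard reference values, not figures from this article:

```python
# Molar mass of methyl salicylate (C8H8O3) from standard atomic weights.
# The atomic weight values below are conventional reference figures in
# g/mol, not taken from the article itself.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula):
    """Sum atomic weights over a dict of element -> atom count."""
    return sum(ATOMIC_WEIGHTS[element] * count for element, count in formula.items())

methyl_salicylate = {"C": 8, "H": 8, "O": 3}
print(round(molar_mass(methyl_salicylate), 2))  # 152.15 g/mol
```

At roughly 152 g/mol, methyl salicylate is indeed a low-molecular-weight metabolite of the volatile kind described above.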

Botanists have suspected for over half a century that there was one phytochemical (phyton means plant in Greek) responsible for triggering plant defenses in response to alien invaders. Defensive behavior had been observed both within an individual plant and, remarkably, from one plant to another ― presumably via volatile chemical communication. Salicylic acid was a suspect for a time, but testing of specific defensive responses failed to correlate with concentrations of the chemical. Continued research involving SABP2, the enzyme that converts methyl salicylate to salicylic acid, revealed that methyl salicylate is the probable intraplant signal. The observed phenomenon is attributed to a plant producing methyl salicylate at the damaged site and transmitting it through its vascular system, with SABP2 converting it to salicylic acid to trigger resistance remotely. [6] And that is not all. Methyl salicylate also attracts predatory insects. Experiments with hops, an important crop for the brewing industry, revealed that four times as many species of predatory insects were attracted when controlled-release dispensers of methyl salicylate were placed in the field. This resulted in an equally dramatic drop in the number of spider mites, the primary arthropod pest of hops. [7] The predatory insects evidently developed methyl salicylate sensors as a means of locating an easy meal of mites. Everything is connected to everything else in ecology.

Oil of wintergreen is toxic and therefore potentially deadly at high dosage. One 5 milliliter teaspoon of oil contains about 6 grams of methyl salicylate, the methyl ester of salicylic acid. Aspirin, the first commercial analgesic, owes its effects to the release of salicylic acid, originally extracted from willow trees (genus Salix). One teaspoon of wintergreen oil is the equivalent of swallowing twenty aspirin tablets (the normal dose is two tablets every 6 hours). Since ingested chemicals are spread throughout the body once absorbed through the walls of the small intestine, doses are normalized to body weight using the ratio of milligrams per kilogram, the equivalent of parts per million (ppm). A dose of 100 mg/kg can be fatal. As an example, only 3 grams or half a teaspoon of oil of wintergreen would be a potentially fatal dose for a child weighing 30 kilograms (about 65 pounds). [8] A popular medicinal field guide includes the caveat that oil of wintergreen is “highly toxic; absorbed through skin, harms liver and kidneys” (emphasis in the original). [9]
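The dose arithmetic here is simple enough to check directly. A minimal sketch, assuming the figures quoted in the text (a fatal threshold of 100 mg/kg and roughly 6 grams of methyl salicylate per teaspoon of oil) plus a standard 325 mg adult aspirin tablet, the last of which is an assumption not stated in the article:

```python
# Salicylate dose arithmetic using the figures quoted in the text.
FATAL_DOSE_MG_PER_KG = 100   # potentially fatal threshold, per the article
TEASPOON_MG = 6000           # methyl salicylate in one 5 mL teaspoon of oil
ASPIRIN_TABLET_MG = 325      # standard adult aspirin tablet (an assumption)

def fatal_dose_grams(body_weight_kg):
    """Potentially fatal methyl salicylate dose, in grams, for a body weight."""
    return FATAL_DOSE_MG_PER_KG * body_weight_kg / 1000

print(fatal_dose_grams(30))              # 3.0 g: half a teaspoon, for a 30 kg child
print(TEASPOON_MG / ASPIRIN_TABLET_MG)   # ~18.5 tablets per teaspoon
```

With 325 mg tablets the ratio works out to about 18 rather than 20; the round figure of twenty tablets implies the slightly smaller ~300 mg tablets once common.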

In spite of its toxicity, wintergreen is edible. The minute amount of methyl salicylate in the leaves and berries is well below the threshold for harmful effects in mammals and birds. It is the volatility of methyl salicylate that imparts the pleasant smell and taste of wintergreen that attracts them. The taste is perceived from the aroma since there are only five taste receptors: sweet, sour, salty, bitter, and savory; there is no wintergreen. When the leaves or berries are chewed, the released volatiles ascend through passages from the mouth to the olfactory epithelium in the nasal cavity that connects to the brain where scent information is processed. There are essentially an unlimited number of different aroma combinations and wintergreen is one of them. From the human perspective, wintergreen tea has been a North American staple ever since the colonists adopted the practice from Native Americans. Wintergreen berries consumed directly or made into pies and jellies have the same provenance. The wintergreen flavor, now mostly a food industry additive made from either laboratory produced methyl salicylate or black birch trees (which also contain methyl salicylate), is widely used in chewing gum and other consumer concoctions. Wintergreen is an important food source for birds, particularly ground dwelling species like the ruffed grouse, comprising over 2 percent of their food intake year-round. Among mammals, wintergreen browse is estimated to comprise about 5 percent of total food input for white-tailed deer, particularly further north. Wintergreen berries constitute up to 2 percent of the diet of black bears. [10,11]

Medicinal is the middle ground between toxic and edible. From the dose perspective, size matters. An amount of oil that is harmless when diluted throughout a large body can still prevent smaller organisms like microbes from proliferating. Since methyl salicylate/oil of wintergreen probably arose as a deterrent to insects, that faculty persists. Most if not all antiseptic mouthwashes contain methyl salicylate, listed on the label as anti-gingivitis and anti-plaque, killing the causative bacteria. Wintergreen leaves, while they may be eaten whole by deer, show no evidence of insect chewing; apparently, the taste or smell suffices to deter them. Some of the “family friendly plant based” insect repellents that eschew DEET contain wintergreen oil, taking advantage of this effect. Based on personal experience, these natural chemical sprays work better than their industrial counterparts in deterring the confounding cloud of gnats that dive-bomb into eyes and ears. This makes sense because plant chemical shielding is based on millennia of trial-and-error mutations, and those chemicals that persist in living plants must have been effective. Two comments about gnats ― a loose, descriptive term for small flying insects. First, there is a reason that they home in on eyes and ears. This must also be scent-based chemical attraction, in all likelihood to an ingredient necessary for gnat sex, judging by their kamikaze persistence. Second, gnats comprise multiple species, each with its own evolutionary history, and there is therefore no single chemical that will deter them en masse. This is why the so-called natural sprays, which can contain geranium oil, soybean oil, castor oil, cedarwood oil, citronella oil, peppermint oil, and lemongrass oil in addition to wintergreen oil, work better than chemical sprays, which mostly contain just DEET, most effective against ticks and mosquitoes.

Oil of wintergreen is an effective and potent pain medication. Methyl salicylate and its derivative salicylic acid are demonstrably among the best treatments for everything from aching joints to migraines. Given the rudimentary understanding of the nervous system and pain propagation, the actual mechanism remains a mystery, but surely it has something to do with neurotransmitters and receptors. Aspirin was the only commercial pain killer until the advent of ibuprofen (Advil), acetaminophen (Tylenol), and naproxen (Aleve) starting in the mid-1970s. Native Americans used wintergreen broadly for a wide variety of ailments. Cherokees chewed the leaves for sore gums and to alleviate the symptoms of dysentery, in addition to using them as a substitute for chewing tobacco (which also contains methyl salicylate). The more northerly Iroquois Confederation tribes used wintergreen as a topical treatment for arthritis and rheumatism and internally as a blood-purifying tea. In many cases, the specific treatment employed a concoction of several different herbs including wintergreen; its individual contribution to salubrity is moot. [12] There is some science here, however. A randomized double-blind trial with 182 participants with acute pain, conducted using “topically applied rubefacients containing salicylates” for one group and a placebo for the other, resulted in a 50 percent pain reduction. Similarly, a trial with 429 participants with chronic musculoskeletal and arthritic pain yielded a moderate but lower pain reduction. [13] One must conclude that oil of wintergreen is one of the few validated herbal remedies; it actually works.

References: 

1. Niering, W and Olmstead, N, National Audubon Society Field Guide to North American Wildflowers, Alfred A. Knopf, New York, 1998, pp 496-510.

2. Goff, S and Klee, H “Plant Volatile Compounds: Sensory Cues for Health and Nutritional Value?” Science Volume 311 Issue 5762, 10 February 2006, pp 815-819.

3. http://chemister.ru/Database/properties-en.php?dbid=1&id=2994

4. https://hort.purdue.edu/newcrop/med-aro/factsheets/WINTERGREEN.html     

5. Pichersky, E. “Plant Scents” American Scientist Volume 92 Number 6, November – December 2004, p 514.

6. Leslie, M. “At Long Last, Pathologists Hear Plants’ Cry For Help” Science, Volume 318 Issue 5847, 5 October 2007, pp 31-32.

7. James, D. and Price, T.  “Field-testing of methyl salicylate for recruitment and retention of beneficial insects in grapes and hops” Journal of Chemical Ecology 30 August 2004, Volume 30 Number 8 pp 1613–1628.

8. Tidy, C “Salicylate Poisoning” Patient Professional Articles 2014 at https://patient.info/doctor/salicylate-poisoning  

9. Foster, S. and Duke, J. Peterson Field Guide Medicinal Plants and Herbs, Houghton Mifflin Company, Boston, 2000, p 31.

10. https://wildadirondacks.org/adirondack-wildflowers-wintergreen-gaultheria-procumbens.html

11. Angier, B. Edible Wild Plants, Stackpole Books, Mechanicsburg, Pennsylvania, 2008, p 262.

12. The Native American ethnobotany database lists all documented uses of drugs by different tribes. http://naeb.brit.org/uses/search/?string=gaultheria+procumbens

13. Mason, L.et al “Systematic review of efficacy of topical rubefacients containing salicylates for the treatment of acute and chronic pain” British Medical Journal 24 April 2004 Volume 328 Issue 7446 p 995.

Parasol or Lepiotoid Mushrooms

Of all the fungi that have the umbrella shape, the Parasol Mushroom is the epitome

Common Name: Parasol Mushroom – The umbrella analogy is applicable to all mushrooms that have a stem or stipe holding up a cap or pileus. Since the umbrella (from the Latin umbra meaning shade) is equally a protection against rain or sun, parasol (Latin parare to shield and sol, the sun) is equally apt. Parasol is applied only to this mushroom out of the thousands of possible candidates due to its exceptionally broad cap held aloft by a relatively narrow handle-like stem.

Scientific Name: Lepiota procera – The generic name is from the Greek lepos, meaning rind, husk, or scale in reference to the scurfy surface of the cap. Procerus is Latin for tall. It is equally known as Macrolepiota procera to reflect the breakup of the original Lepiota genus into many new genera according to genetic DNA-based associations.

Potpourri: The lepiotoid mushrooms occupy an uncertain niche between the agarics and the amanitas. The agarics are exemplified by the “supermarket” White Button Mushroom (Agaricus bisporus), a cultivar of the Meadow Mushroom (Agaricus campestris) originating in the caves of Paris in the seventeenth century. They are characterized by brown spores, free gills, and a partial veil. The amanitas are among the most notable of all mushrooms, including the deadly, pearly-white Destroying Angel (Amanita bisporigera) and the iconic red, white-dotted Fly Agaric (Amanita muscaria). Amanitas also have free gills and a partial veil, but with white spores in lieu of brown, and a full veil as well. Lepiotas have white spores, free gills, and a partial veil, combining the traits of agarics and amanitas. [1]

Key mushroom features

Partial and full veils, as the names imply, are thin membranes that protect (veil) the gilled spore-bearing surfaces of some mushrooms until just before spore release, minimizing any damage that could accrue during their emergence from the subterranean domain of the fungal mycelium. The partial or inner veil of Lepiotas, Agarics, and Amanitas is attached from the edge of the cap to a ring or annulus on the stem. The full or universal veil of most Amanitas covers the entire mushroom (like an egg), leaving a bowl-shaped remnant called a volva at the base of the stem, and frequently “veil fragments” on the cap. Free gills are attached only to the underside of the cap and do not touch the stem. Gill attachment is one of the primary features used by mushroom keys to distinguish one species from another. Notched and decurrent (descending the stem) gill attachments are the two primary alternatives to free gills. Another mushroom key distinction is the presence of scales on the cap of many lepiotoid mushrooms, especially the larger, “parasol-like” species. These structures are outgrowths from the cap and are not fragments of a gill-enclosing veil. In general, if a patch on the cap of a mushroom is flattened and light-colored, it may be a fragment of the universal veil. If it is angular and darker, it is a scale.

The Agaric – Lepiota Family (Agaricaceae) and the Amanita Family (Amanitaceae) are both in the order Agaricales. The fact that Amanita muscaria is also called Fly Agaric is indicative of the still-unravelling origin story of gilled mushrooms. Carolus Linnaeus established the current system of biological classification, or taxonomy, with the publication of Species Plantarum in 1753. Fungi were placed in Cryptogamia, “hidden life” in Latin, one of the twenty-four classes of the Plant Kingdom. This designation was for those plants whose reproductive systems had not yet been determined (and were therefore hidden), as spores not visible to the naked eye had yet to be rationalized as a means of sexual transmission. The four orders of Cryptogamia were ferns (Filices), bryophytes like mosses and liverworts (Musci), algae (which included all lichens), and fungi. There were ten genera of fungi, including Boletus for all mushrooms with pores instead of gills, Phallus for stinkhorns, Clavaria for coral fungi, Lycoperdon for puffballs, and Agaricus for all gilled mushrooms. [2] The bare-bones Linnaean system persisted for about a century, becoming the baseline for those inclined toward generalizations adequate for comprehending the basic organization of life, a group that has since become known as the “lumpers.” The “splitters” are their antithesis, carving out increasingly narrow speciation in search of the biological holy grail of monophylogeny, having a single common ancestor.

The genus Lepiota was one of the first to be stricken from the ranks of the agarics. This occurred in the late nineteenth century, as spore color became one of the characteristics used to further distinguish mushroom genera. Thus the original lepiotoids were defined as all white-spored mushrooms with free gills that were not in the fully veiled Amanita genus. By 1888, those mushrooms with radiating ridges on the cap that look like, and are called, pleats were placed in the new genus Leucocoprinus (leuco means white in Greek). Ten years later, the one green-spored Lepiota was moved to the genus Chlorophyllum. In 1948, Lepiotas with a different growth mechanism involving what are called clamp connections were moved to Leucoagaricus, and the largest Lepiotas were moved to Macrolepiota (macro, from the Greek makros, means big), in which L. procera is currently placed. The splitting continued as DNA became the final arbiter of species. Of the approximately 1,000 species of white-spored mushrooms with free gills and no universal veil (the Amanita trait), the only ones that remain in the original Lepiota genus are the smaller species, mostly with scales on the cap and banding on the stem. [3] But more recent phylogenetic evaluations of the 22 extant genera have shown that “taxonomic circumscription and segregation of the genus Lepiota has been problematic.” [4] Which is why, for the sake of consistency in field identification, treating the Parasol mushroom as an archetype, as is done here, should suffice.

Since this article is about Parasol Mushrooms, it is apropos to address the mushroom umbrella analogy. The logic of syllogism would suggest that if it looks like an umbrella, then it must be a rain shield. In reality, the cap has the opposite function – to retain water. The umbrella shape ensures that there is enough humidity as water vapor on the underside of the cap for water droplets to condense in the vicinity of the spore-bearing gills. To explain why this is so, a few points about mushroom physiology must be noted. A mushroom is a fruiting body produced by a fungus, the tangled mass of thread-like strands called a mycelium that is wholly underground or inside dead wood. The only function of the fruiting body is to spread the reproductive spores into the environment to propagate the species (like apples on an apple tree). When a fungal mycelium is ready to reproduce, it forms a self-contained and out-of-sight proto-mushroom called a primordium. Once fully formed, the fungus waits for promising weather, which is quite frequently after substantial rain has fallen. This is why mushrooms mysteriously appear overnight after rain; they are already there, ready and waiting. Once the mushroom erupts from its hypogeal lair, the cap opens, separating the partial and/or universal veil if it has one, exposing the gills to the surrounding air for the first time. [5]

Spore shooting force F

So why does the air under the mushroom cap need to have plenty of water vapor? Because the spores need to be literally shot away from the gills so that they can freefall into the wind for dispersal. The motive force that ejects each spore outward is the result of the condensation of water vapor into a tiny droplet. Gills are like vertical, side-by-side slats suspended from the underside of the cap. The reason for this arrangement is to maximize the surface area available for producing as many spores as possible; a flat surface would provide only a small fraction of the area afforded by gills. Because the probability of any one spore successfully germinating to produce a new fungus is vanishingly small, mushrooms need to produce millions of spores to succeed. Since the spores are mounted on the vertical faces of the gills rather than on a horizontal surface, a spore, if simply released, would remain stuck to the surface. Nature’s evolutionary solution is to literally shoot the spore horizontally, away from the side of the gill and into the air gap between gills, so that it can then fall due to gravity. Each spore is held at the tip of a stalk called a sterigma at a point called the punctum lacrymans (Latin for the “point that cries”), depicted in the figure at A. It is here that the water vapor condenses, shown at B. The water droplet extends onto the spore surface at C due to surface tension, causing the center of gravity of the spore/water mass to shift rapidly and create what is called a surface tension catapult force (marked with an F in the figure). The minuscule (about 10 microns in diameter) spore is ejected outward at a speed of about 10 miles per hour with an acceleration of 25,000 times the force of gravity (G-force). First hypothesized at the beginning of the 20th century, the catapult force was captured by high-speed camera about twenty years ago. The mechanics were demonstrated conclusively by modelling the spore/sterigma interface using polystyrene hemispheres just five years ago. [6] Fungi have been described as fantastic with good reason.
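The quoted figures imply just how tiny the catapult event is. A back-of-the-envelope kinematic sketch, assuming constant acceleration over the launch and using only the speed (~10 miles per hour) and acceleration (~25,000 G) given above:

```python
# Kinematics of the surface tension catapult, assuming constant
# acceleration during launch. Inputs are the figures quoted in the text.
MPH_TO_MS = 0.44704   # miles per hour -> meters per second
G = 9.81              # standard gravity, m/s^2

v = 10 * MPH_TO_MS    # launch speed, ~4.5 m/s
a = 25_000 * G        # launch acceleration, m/s^2

# v^2 = 2*a*d gives the launch distance d; v = a*t gives the launch time t
distance_um = v**2 / (2 * a) * 1e6
time_us = v / a * 1e6

print(round(distance_um, 1))  # ~40.7 microns: roughly four spore diameters
print(round(time_us, 1))      # ~18.2 microseconds
```

The whole launch plays out over a few spore diameters in a few tens of microseconds, which is why a high-speed camera was needed to capture it.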

Green-spored Lepiota

Parasol mushrooms are extolled as one of the commendable edible species, with commentary that ranges from “choice, with caution” [7] to “tender caps … edible and highly regarded by many mycophagists.” [8] An abundance of caution is warranted. There are a number of species quite similar in appearance to L. procera, with edibility caveats that range from unknown (and therefore not recommended) to demonstrably poisonous. There is even one species sometimes known as the Deadly Lepiota (L. josserandii), since it contains the same amatoxin chemicals found in the most notable of all deadly mushrooms, the Destroying Angel (Amanita bisporigera) and the Death Cap (A. phalloides). The simple fact that the Parasol-type mushrooms share characteristics with the problematic amanitas (white spores and a partial veil) should raise a red flag for potential misidentification. Absent a complete and thorough assessment of a parasol-like mushroom by a competent expert, to include spore color, veil attachments, and scale configuration, consumption is unwise. There is an old saying: there are bold mushroom hunters and old mushroom hunters, but no old, bold mushroom hunters. The other aphorism of note is that you can eat any mushroom – once. The alleged ubiquity of deadly mushrooms in Anglo-Saxon culture and literature is a matter of phobia and not fact. The North American Mycological Association (NAMA) has maintained a national mushroom poisoning database since 1982. It is not comprehensive, since it relies on proffered reports; there is no requirement for medical and veterinary establishments to report mushroom poisonings. However, it provides some instructive baseline data. There were a total of 1,700 reports of mushroom poisoning over thirty years, with the vast majority involving ingestion by young children and dogs. Almost all resulted in various degrees of temporary gastrointestinal distress and full recovery with no lingering long-term effects.
Contrary to the perception of the general public, only about 10 percent of poisonous mushrooms ― i.e., those which cause nausea or diarrhea (and sometimes both) ― are potentially deadly. Deadly plant toxins like those of hellebore and white snakeroot are much more common than deadly fungal toxins. There is one lepiotoid mushroom that deserves special attention. According to NAMA, “Of the mushrooms generally considered poisonous, the one far most often consumed is Chlorophyllum molybdites. It is large and meaty; it resembles a generally choice edible, it tastes good, and it grows in lawns and parks. Chlorophyllum molybdites quickly rewards the unwary with gastric distress, vomiting, and diarrhea lasting several hours.” [9]

The white gills of a young Green-spored Lepiota

The Green-spored Lepiota (Chlorophyllum molybdites) is variously known as “the vomiter” and “the gut-wrencher” for its notable stomach- and bowel-emptying effects. [10] There are a number of characteristics that can be used to distinguish the edible lepiotoid mushrooms from their poisonous doppelgänger. Habitat, distribution, and season are the most notable: Green-spored Lepiotas appear in clusters in grassy areas in the heat of summer, while their edible cousins are found singly in mulch and open woods in the fall. What about the spore color? While it is true that C. molybdites has dingy greenish spores when fully open and mature, the gills are white and only turn slightly dingy with age. The green, although unique among mushrooms, is more a nuance than a convincing traffic-light color. Most edible fungi are better when collected young and fresh; just about everything (and everyone) gets tough and sinewy with age. This, then, is the bane of the mushroom hunter. For Green-spored Lepiotas gathered while still immature, the gills would be white with scarcely a hint of the tell-tale green. The scenario: a flush of succulent-looking white mushrooms, just like the ones you buy at the store, pops up in the courtyard of your apartment complex, and you rush out to gather, cook, and eat them. A beautiful summer day turns into a medical emergency in a matter of hours. [11]

References: 

1. Arora, D. Mushrooms Demystified, 2nd edition, Ten Speed Press, Berkeley, California, 1986, pp 293-310.

2. Linnaeus C.  Species Plantarum. Stockholm, Sweden,1753 : pp 1061-1186.

3. Vellinga, E. “An Overlooked California Lepiota- Old or New?” Fungi Magazine, Volume 2 Number 4, Fall, 2009, pp 7-9.

4. Johnson, J. and Vilgalys J. “Phylogenetic systematics of Lepiota sensu lato based on nuclear large subunit rDNA evidence”. Mycologia. 10 June 1998 Volume 90 Number 6,  pp 971–979.

5. Kendrick, B. The Fifth Kingdom, Focus Publishing, Newburyport, Massachusetts, 2000, pp 80-98. This is the single best desk reference for the Kingdom Fungi.

6. Chang, K. “Fungi Physics: How Those Spores Launch Just Right” New York Times, 27 July 2017. https://uphyl.pratt.duke.edu/NYTimes_Fungi_2017.pdf    

7. Lincoff, G. The National Audubon Society Field Guide to North American Mushrooms, Alfred A. Knopf, New York, 1981, p 520.

8. Roody, W. Mushrooms of West Virginia and the Central Appalachians, The University Press of Kentucky, Lexington, Kentucky, 2003, p 72-73

9. Beug, M. “An overview of Mushroom Poisonings in North America”. Mycophile Volume 45 Number 2, March/April 2004.

10. Salzman, J. “ Your Yard Might Be Home to the “Vomiter” Mushroom” Huffington Post 29 April 2011.

11. Hedgpeth, D. “Virginia family hospitalized after eating wild mushrooms found at apartment complex”, Washington Post, 22 August 2018.

Woolly Bear Caterpillar

The width of the brown segment allegedly correlates to the duration of the upcoming winter.

Common Name: Woolly Bear Caterpillar, Woolly Worm, Fuzzy Bear, Hedgehog Caterpillar – The dense tufted bristles are black at both ends and brown in the middle with a texture that is  similar to wool. The short, rounded and blunt shape suggests an ursine association ― ergo, woolly bear. The caterpillar is the larval stage of the Isabella moth.

Scientific Name: Pyrrharctia isabella – Pyrrh is from the Greek word meaning red or tawny. Arctia is derived from the Greek word arktos, which means bear. The Isabella moth has red markings on the wings and is a northern species ranging into the arctic regions. The association of bear with arctic is due to the astronomical importance of Ursa Major, the Great Bear that contains the Big Dipper. The pole star Polaris, at one end of Ursa Minor and pointed to by two of Ursa Major’s stars, is used in celestial navigation to locate true north. Isabella is a color that ranges from yellowish brown to olive brown, the basic wing color. Former scientific name: Isia isabella.

Potpourri: Caterpillars are the larval stage of insects of the order Lepidoptera, which comprises butterflies and moths. The distinction between these two groupings is as arbitrary as the common names given to animals and plants, which are mostly descriptive with occasional mythic etymologies. Butterfly, as a case in point, is thought to derive from the belief that witches in the shape of flying insects stole milk and butter. Butterflies are most notable for their brightly colored wings and their habit of zigzagging drunkenly across meadows in daylight. Moths are everything else that flies like a butterfly (and doesn’t sting like a bee). They are united in one order due to the physiology for which they are named, as butterflies and moths all have scaly (lepis in Greek) wings (pteron in Greek). The other almost universal difference is that butterflies are diurnal and moths are nocturnal. Their larvae, for the most part, stay hidden in the foliage as a survival matter. The trundling woolly bears are an exception to this rule. [1]

The woolly bear caterpillar is best known for its headlong scramble across trails in late fall and early spring. They have been observed by wayfarers since the colonial era, gaining a measure of notoriety. The reason for their haste at the larval sprint pace of four feet per minute is unknown, but there is a continuity of direction that suggests a specific goal. Conjecture is based in part on the well-established fact that woolly bears overwinter as caterpillars, freezing nearly solid as temperatures plummet according to season, latitude, and elevation. In the fall months, this would imply that there is some necessary location favoring cryogenic hibernation. However, since shelter from cold cannot be a factor, seclusion could only be to prevent predation ― but any nook or cranny would do. In the spring, the path is reversed for pupation, also a matter of finding an out-of-the-way place to wait in helpless suspended metamorphic animation. Regardless of the destination, physiological actions occur in preparation for migration. A peak in the level of ecdysteroids (hormones that promote molting) triggers a cessation of feeding and gut evacuation followed by the quixotic quest. Since they are and have been a successful species, relocation has evidently promoted propagation, in spite of the fact that caterpillars squashed while crossing busy roads and trails do nothing to promote longevity. The evolved ability to endure winters as larvae is certainly a relevant factor. [2]

Woolly bear is a contradiction on two counts: Caterpillars are not bears and their “wool” is not a winter coat. The stiff hairs called setae that extend outward in all directions from the larval body are the most notable features of the caterpillar, blocking out all other detail. The function of the hair is to some extent protective, as it blocks wasps like yellow jackets from direct access to administer their all too lethal sting. [3] Woolly bears instinctively roll up into a protective ball of spines to augment this defensive measure when frightened; this is the origin of the alternative common name hedgehog caterpillar. The dense hair also allows for controlled whole-body freezing, a rather surprising capability that is shared by only a few other animals, notably wood frogs.

As cold temperatures set in, a natural antifreeze, the sugar alcohol glycerol, is produced and distributed through the body by hemolymph, the blood-like body fluid of insects. The change in cryoprotectants with temperature has been verified in laboratory conditions. [4] As the cold slowly seeps in, the entire body except the very centers of the cells freezes solid in anticipation of the spring thaw. This capability has extended the range of woolly (polar) bears to the Arctic, where they have survived winters with temperatures as low as 90 degrees F below zero. Their life sequence is slowed to match the metabolic reduction to the extent that the normal one-month metamorphosis from larva to adult moth can take up to 14 years. [5]

Woolly Bears curl up for protection.

The distinctive black-brown-black banding of the setae of the Isabella moth larva is the basis for the mythology of woolly worm weather prediction. It is not altogether unreasonable to believe that animals might be able to sense the mood swings of climate. The traditional folk wisdom of caterpillar color bands for winter forecasting may have been a direct assimilation of Native American lore. In any agrarian society, crop cycles are crucial to survival. Intelligence about the beginning, duration, or end of winter would be of abiding interest. For example, the counterpoint to woolly bears and the harshness of winter is the groundhog’s shadow that allegedly determines its duration. In its most general form, the definitive metric of woolly bear winter is the width of the middle brown band between the two black end bands. A narrow brown band is indicative of a severe winter, whereas a broad brown band indicates mild weather.

The quaint custom gained national credence in 1948 when two entomologists from the American Museum of Natural History in New York City collected 15 woolly bears and predicted a mild winter based on band width averages. Rather than publish a scientific paper, the two insect experts provided their results to a reporter whose article made the front page of the New York Herald Tribune, a respected newspaper. When their prediction proved correct, woolly bear wisdom gained a national audience. For the next seven years, the paper’s readers demanded annual weather predictions. The custom eventually fell out of fashion as its randomness inevitably became obvious. [6]

There is no correlation between woolly bear coloring and winter. The width of the brown center band relative to the two black end bands is a matter of nutrition and age. The more fruitful the summer feeding season, the larger the caterpillar will become, its growth narrowing the central brown section. The age factor involves molting. Woolly bears grow and mature in six intermediate steps called instars, shedding their skin each time and becoming sequentially less black and more brownish. It might be feasible to correlate the bounty of a summer season with woolly bear ring sizes, but that is after the summer fact and has nothing to do with the winter future. The established biology of caterpillar growth and molting has had little effect on the public embrace of the original myth, which has even added new variants. For example, the woollier the coat, the worse the winter. A more creative version concerns the direction of transit: a woolly bear moving south is escaping a coming harsh winter, while one moving north anticipates a milder one.

In an attempt to coopt the success of Punxsutawney Phil, several small towns have inaugurated fall celebrations featuring nature’s herald as star attraction. The best known is the Woolly Worm Festival held annually in October to promote tourism to the small town of Banner Elk, North Carolina. The event is promoted as a race between contestant caterpillars in heats to determine the champion woolly worm. The festival has drawn as many as 20,000 attendees witnessing racing heats involving 1,000 participant larvae. The winner is the official weather prophet, the color of each of its 13 segments correlated to the 13 weeks of winter. Black indicates below average temperature, light brown above average temperature, dark brown average temperature, and something called fleck is low temperature with light snow. This is not without financial remuneration as incentive: the winning worm receives a $1,000 prize. [7]

The adult moth to which the woolly bear larva pupates is one of the tiger moths of the family Arctiidae, which are mostly moths in butterfly clothing … many are brightly colored with spots and stripes in contrast to the whites, browns, and grays of the majority of moths. The Isabella moth is an outlier with muted yellow-brown wings, as its name implies. Tiger moths in general are also capable of producing audible sounds, a trait not normally associated with moths, which are mostly seen but not heard. Noisemaking is to deter bats, the nemeses of “mothdom” in their shared nocturnal air space. Bats are small-bodied and warm-blooded, consummately voracious to maintain the necessary energy input. Tiger moths, and the closely related owlet moths (family Noctuidae – the ones most frequently seen fluttering around lights at night, as their nocturnal name implies; their larvae are cutworms), have large eardrum-like structures called tympanic organs to detect the echolocation sounds made by bats. Located on either side of the head with a sensor for intensity that correlates to distance, these organs allow tiger and owlet moths to determine the direction and distance of an approaching bat and take appropriate evasive action. The tiger moths take the listening and evading strategy a step further, emitting a clicking noise that is thought to disrupt bat sonar altogether. [8] Evolution is powered by predators.

Bright colors are counterintuitive for defenseless insects that would likely have better survival chances by being neither seen nor heard. This is as true for the larval, caterpillar stage as it is for the adult moth stage. The black brown banding of woolly bears is eye-catching, particularly when it is moving across open areas. Tiger moths, at all life stages, widely employ distastefulness as a defensive mechanism. The use of vivid colors by animals to indicate that they are not palatable is called aposematism. Predators learn to avoid them after the first encounter, recognizing the color and pattern that is intentionally obvious for that reason. Monarch butterflies and red efts are good examples.

It has been established in laboratory testing that woolly bear caterpillars consume plants containing pyrrolizidine alkaloids for chemical resistance to lethal parasitic tachinid flies. In fact, experiments showed that they do this as a matter of self-medication, one of the first demonstrated instances of cause-and-effect invertebrate cognition. [9] Pyrrolizidine alkaloids, found in some asters and legumes, are among the plant substances most toxic to both domestic animals and humans, causing severe metabolic disruption. Wild animals learn to scrupulously avoid them. Woolly bear caterpillars, along with many other tiger moth caterpillars that live exposed lives, are able to live rashly and openly only because they are chemically protected. Tiger moth larvae are also among the most polyphagous of all the lepidopterans, eating as many as 88 different plant species. [10] The chemistry of the foods they eat likely includes pyrrolizidine alkaloids.

It may be concluded that woolly bears are exceptional caterpillars. Taking survival of the fittest seriously, they have evolved a formidable suite of adaptations. Extending poleward to the frozen northern latitudes to escape bug and bird infested trailways, they eschew the warmth of woolens for woolly-ness. Seeking nutrition across a broad range of foliage choice, they ensure that there will always be a dessert of protective poisons. But why the annual diaspora? Maybe they are the tramps of the lepidopterans with Bruce Springsteen’s mandate … they are born to run. But they don’t predict the weather.

References:

1. Milne, L. and Milne, M. National Audubon Society Field Guide to North American Insects and Spiders, Alfred A. Knopf, New York, 1980, pp 697-698, 790.

2. Wagner, D. “The Immature Stages: Structure, Function, Behavior, and Ecology”. In Conner, William E. (ed.). Tiger Moths and Woolly Bears: Behavior, Ecology, and Evolution of the Arctiidae. 2009 Oxford University Press  pp. 31–53.

3. Rich, G. “How woolly bear uses clever tricks to survive” Washington Post, 23 November 2021.

4. Layne, J. and Kuharsky, D. “Triggering of cryoprotectant synthesis in the woolly bear caterpillar (Pyrrharctia isabella Lepidoptera: Arctiidae)”. Journal of Experimental Zoology. 1 March 2000 Volume 286 Number 4 pp 367–371.

5. https://www.weather.gov/arx/woollybear – The U.S. National Weather Service website.

6. https://bygl.osu.edu/index.php/node/1713              

7. http://www.woollyworm.com/     

8. Marshall, S. Insects, Their Natural History and Diversity, Firefly Books Ltd., Buffalo, New York, 2006, pp 174-177, 212-213.

9. Singer, M. et al “Self-Medication as Adaptive Plasticity: Increased Ingestion of Plant Toxins by Parasitized Caterpillars”. PLOS ONE. 10 March 2009 Volume 4 Number 3.

10. Wagner, op cit.

Yarrow

Yarrow is quite common, growing along roads and open fields. Notable for its lacy leaf structure.

Common Name: Yarrow, Woundwort, Milfoil, Staunch weed, Thousand-leaf, Old Man’s Pepper, Bloodwort, Devil’s nettle – Yarrow is the English variant of the Anglo-Saxon word gearwe originally from Old High German garwa. There are many colloquial names in different languages and localities. The German schafgarbe, French herbe à dinde, Swedish rölleka, and Italian achillea are all yarrow by another name.

Scientific Name: Achillea millefolium – The name of Achilles, the Greek hero of Homer’s Iliad, is recognizable as the genus. The species name means “thousand-leaf” in Latin ―  one of the common names. The distinctive appearance is characterized by small fern-like leaves extending outward from the upright stem holding up a white bouquet.

Potpourri: Yarrow is one of the most common and well-known medicinal herbs in the world. With a circumboreal reach, it encompasses the temperate regions of the northern hemisphere, spanning both Eurasia and North America across a broad swath of latitudes. It grows in open areas in copious clusters, marking its location with flat-topped clusters of white flowers, one of three species in the mid-Atlantic region with a similar appearance. The other two are white snakeroot, the poisonous weed that killed Abraham Lincoln’s mother, and Queen Anne’s lace, the wild carrot, a member of the Carrot family. Yarrow, like white snakeroot, is a composite flower of the Aster or Daisy family, the flower-like “head” consisting of many small flowers. Familiarity and ease of identification favored experimentation by native peoples globally, who used it in the treatment of a variety of medical conditions that vary according to tribe and custom. [1] Both Native Americans and the Europeans who ventured westward across the Atlantic independently exploited yarrow’s unusual chemistry according to their widely divergent traditions. Yarrow is one of the few (and possibly the only) plants with that distinction. Other herbals used in the colonial era were either imported from Europe, like heal-all, or adapted from local Indian usage, like black cohosh.

The ancestry of yarrow extends to the origins of Western civilization. Achillea is an eponym for Achilles, the main Greek protagonist in Homer’s Iliad, which chronicles the Trojan War. He was killed by an arrow that struck him in the heel left unprotected when he was dipped into the magical river Styx ― Achilles’ heel is the consequent metaphor for anything vulnerable. Greek mythology attributes the education of Achilles to the centaur Chiron, an expert in herbal medicine from whom he learned of the healing powers of yarrow. Used to staunch the wounds of fallen Greek warriors at Troy, yarrow thereafter became known as herba militaris, Latin for the military herb, a use retained in the common name woundwort. While the Homeric tale is certainly apocryphal, knowledge of the healing properties of yarrow was well established from the earliest days throughout Europe. This is evident from its continued use as traditional medicine in many countries. [2] Achillea was the obvious choice for the genus when Linnaean taxonomy was first established in the eighteenth century.

Yarrow is a composite flower – the “head” is composed of many small flowers

The extent to which yarrow is accepted in Europe as an effective medicinal herb is evidenced by the publication of an assessment report by the European Medicines Agency in 2020. The report cites the “whole or cut, dried flowering tops” as the most effective part of the plant and prescribes a minimum of 2 milliliters per kilogram of “essential oils.” Based in part on the inclusion of Achillea millefolium in the Pharmacopoeias (approved drug lists) of Great Britain, France, Hungary, Austria, Romania, and the Czech Republic and also in the German Commission E Monograph, the assessment report lists its accepted medical uses in the various jurisdictions. Not surprisingly, the original Achillean wound use is the most prevalent. Topical application of the essential oil staunches bleeding from the nose or anywhere else, improves wound healing, and reduces skin inflammation. Taken internally, the most common treatment is for gastrointestinal difficulties such as loss of appetite and upset stomach. However, it is also used to treat colds in Great Britain, “cramp-like conditions of psychosomatic origin” in Germany, and spasmodic colitis in France. [3] It is likely that Europeans also had many other local customary uses with their own historical traditions, which gradually coalesced into the few found more effective and reliable across the continent as cities and commerce expanded. One example is its use as snuff, from which the name Old Man’s Pepper derives.

On the other side of the North Atlantic, Native Americans employed yarrow more extensively. Applications ranged from specific symptoms to panacea. The Cherokee of the Southeast used yarrow mostly for hemorrhages, both internal and external, taking advantage of its astringency. They also smoked the leaves to treat catarrh, a condition resulting from excessive mucous production. Further west, the Blackfoot used it as a cure-all, rubbing whatever body part was affected by sickness. In one of the few documented veterinary applications, they also made an eyewash for their horses. On the Great Plains, the Cheyenne used an infusion of dried leaves and flowers to treat just about anything, including chest pains, nausea, colds, coughs, fevers, and respiratory diseases. [4] There are many other references to the use of yarrow by Native Americans. According to the USDA database, “Native Americans used tea made from common yarrow to relieve ear-, tooth-, and headaches; as an eyewash; to reduce swelling; and as a tonic or stimulant.” [5] Since there was minimal intertribal coordination and communication, with a fair amount of rivalry and some conflict, there were few opportunities for tribal healers to learn of and employ common remedies for similar ailments; the convergent uses suggest that each discovered yarrow’s virtues independently.

There is a good reason for the diverse and sometimes contradictory uses of yarrow. It has a complex chemistry with more than 100 biologically active chemicals. [6] According to the European yarrow report, it contains “3-4% condensed and hydrolysable tannins; 0.3-1.4% volatile oils, mostly linalool, borneol, camphor, β-caryophyllene, 1,8-cineole, and sesquiterpene lactones composed of guaianolides, mainly achillicin … and flavonoids (apigenin, luteolin, isorhamnetin, rutin).” This is in addition to an impressive array of amino acids, fatty acids, vitamins, alkaloids, bases, alkanes, saponins, sterols, sugars, and at least one poison (thujone). Knowing yarrow chemistry is one thing. Knowing what the chemical compounds do is quite another. Limited research attributes the salubrious effects of yarrow to the essential volatile oils and sesquiterpene lactones. However, every one of the constituents is naturally produced by yarrow for some reason, and that reason must in some cases be to deter browsing animals and sucking insects. The ASPCA lists yarrow as toxic to dogs, cats, and horses, causing vomiting, diarrhea, and dermatitis. [7] Some birds use yarrow to build their nests, which has been shown to reduce the number of fleas by fifty percent. [8] Conversely, the USDA estimates that 20 percent of cattle and horses and 40 percent of sheep and goats graze on yarrow with no ill effects and evident nutrition.

The contradictory effects of advertent yarrow consumption by humans and domesticated animals can be attributed to chemical differences according to geography, habitat, and hybridization. Many plants form hybrids due to variations in the numbers of chromosomes. Most living things are diploid, having two sets of chromosomes, excepting the sex cells, or gametes, which are haploid (23 chromosomes in humans) and join to form the diploid zygote. Having more than two sets is called polyploidy. Yarrow is in something of a class by itself, with diploid, tetraploid, pentaploid, hexaploid, septaploid, and octoploid variants. [9] Since there is no commercial motivation to fund a detailed study of yarrow’s variability according to genetics (it is a weed, after all), there is little scientific data. One of the few studies consisted of testing up to forty yarrow plants from each of sixty-six sites to correlate polyploidy with chemistry. Hexaploid yarrows found in dry and nutrient-poor habitats had low levels of achillicin, one of the sesquiterpene lactones. Tetraploid yarrows had high levels of achillicin that correlated to the presence of phosphate, magnesium, and manganese in the soil. Noting that “the concentration varies widely in a population of a species,” the study concluded that “This makes the use of herbal medicine difficult.” [10] It is reasonable to conclude that the diverse and contradictory effects of consuming yarrow as either food or medicine are due to local variations in the quantity and quality of its many chemical compounds.

The fact that yarrow from one field may be different from yarrow in another field has not stopped the herbalists from extolling its virtues indiscriminately. In one herbal characterized as a Gaia original (Gaia is the Greek personification of Earth), yarrow is noted for its “actions,” which include “diaphoretic, febrifuge, peripheral vasodilator, hypotensive, antithrombotic, vulnerary, styptic, emmenagogue, anti-inflammatory, astringent, diuretic, digestive, and antiseptic.” That seems to cover about everything except cancer and athlete’s foot. A “hot infusion” of yarrow lowers fever by causing sweating to eliminate toxins and lowers blood pressure while simultaneously getting rid of blood clots. Good for the stomach to improve digestion while getting rid of ulcers, yarrow also works on arthritis and rheumatism. In the “stops bleeding” category, reducing excessive menstruation and treating bleeding piles are included. [11] And this is one of the more rational prescriptions, based at least in part on traditional use of some type of yarrow somewhere. For those who adhere to the 17th century Doctrine of Signatures as a basis for establishing medicinal purpose, one gets “the umbel-like umbrella of yarrow betrays its properties to reinforce the protective auric shield.” What this golden (Au is the symbol for the element gold, aurum in Latin) shield is supposed to do is not clear, but a further explanation provides “lacy leaves and umbel flowers represent aeration of the lungs and blood stream.” [12] I wonder why?

Those who favor herbal remedies over the pills and potions dispensed by the pharmaceutical industry believe they are on moral high ground. It is certainly true that the only drugs were herbs up until the dawn of the 20th century. There was no aspirin for headache and no erythromycin for strep throat. The apothecary shop contained dried herbs and tinctures of various mixtures … but there were also ingredients like bat wings and tiger pee. Some of the traditional herbal remedies actually worked and have since taken their place on the drug store shelf. Some natural substances, like opium, have been synthesized; heroin was originally marketed as a pain reliever until addiction emerged as a serious problem. The subsequent opioid epidemic has forever tainted the reputation of big pharma. However, absent a scientific trial with an untreated control group to use as a baseline for measuring different outcomes, there can be no confirmation of the benefits of any herbal product beyond anecdote. Trials are expensive, and drug companies are only willing (and able) to foot the bill if subsequent profits on sales can pay for the research and development. The placebo effect and different reactions to the same drug by different individuals all add to the confusion. The bottom line is that herbal supplements taken for general health and well-being are generally benign, and, if you believe they work, they probably do. However, most people who are really sick see a doctor who prescribes medications from the pharmacy and not the woods. This would include yarrow, in spite of its historical herbal heritage.

References

1. Niering, W. and Olmstead, N. National Audubon Society Field Guide to North American Wildflowers, Alfred A. Knopf, New York, 1998, pp 354-355.

2. https://www.botanical.com/botanical/mgmh/y/yarrow02.html      

3. Assessment report on Achillea millefolium L., herba European Medicines Agency, 23 September 2020. https://www.ema.europa.eu/en/documents/herbal-report/final-assessment-report-achillea-millefolium-l-herba-revision-1_en.pdf    

4.  Ethnobotany database for Native American plant medicinal usage. http://naeb.brit.org/uses/search/?string=Achillea+millefolium+   

5. https://www.fs.fed.us/database/feis/plants/forb/achmil/all.html  

6. Foster, S. and Duke, J. Eastern Central Medicinal Plants and Herbs, Houghton Mifflin, Boston, 2000, p 74.        

7.  https://www.aspca.org/pet-care/animal-poison-control/toxic-and-non-toxic-plants/yarrow    

8. Shutler D, Campbell A. “Experimental addition of greenery reduces flea loads in nests of a non-greenery using species, the tree swallow Tachycineta bicolor“. Journal of Avian Biology. 8 September 2007 Volume 38 Number 1 pp 7–12.  

9. http://www.efloras.org/florataxon.aspx?flora_id=1&taxon_id=200023010   

10. Michler, B. and Arnold, C. “Predicting Presence of Proazulenes in the Achillea millefolium Group”. Folia Geobotanica. 1999 Volume 34 Number 1 pp 143–161.

11. McIntyre, A. Herbs for Common Ailments, Simon and Schuster, New York, 1992, p 60.

12. Graves, J. The Language of Plants, Lindisfarne Books, Great Barrington, Massachusetts, 2012, p 122.

Bald Eagle

The Bald Eagle in flight is one of the most iconic symbols of natural beauty

Common Name: Bald Eagle, American eagle – The white head feathers convey baldness in contrast to the darker body plumage. Eagle is anglicized from the Latin Aquila.

Scientific Name: Haliaeetus leucocephalus – The generic name is from the Greek hali meaning ‘sea’ and aietos, meaning ‘eagle’ to characterize the riparian habitat of the fish-eating bald eagle. The species name means ‘white head’ in Greek. A white-headed sea-eagle is the intended description.

Potpourri: The resurgence of the bald eagle population from a low of about 500 nesting pairs in the contiguous United States in the second half of the twentieth century to ten times that many today is a promising harbinger for reining in the excesses of our own overpopulation. The seemingly inexorable slide toward extinction of the empyreal symbol of the American experiment was a metaphor for the end of the freedoms that the New World once offered; its restitution offers hope. There is perhaps nothing more inspiring to those who seek nature on the trails than to be able to appreciate the majesty of the bald eagle, as much a preeminent symbol to Native Americans as it is to those of us who immigrated later. As the only eagle that is unique to North America, it is entirely fitting that it was chosen as the symbol for what the newly independent Americans aspired to be: strong and free. The bald eagles have returned to the rivers and lakes of their original, native home, like a phoenix rising from the ashes. [1]

Accipiters (Latin for hawk) are generally considered to be birds of prey, synonymous with the more loosely defined term raptors. They are assigned to their tree-of-life positions due to their characteristic hooked beaks, curved and grasping talons, and keen vision (eagle-eyed). They include both the bald and golden eagles in addition to hawks, kites, and the osprey. The sea eagles are distinguished in a separate genus Haliaeetus in the family Accipitridae due to their preference for fish as a dietary staple; their habitat along streams and the shores of lakes is a consequence. The closest relative of the bald eagle is the Eurasian white-tailed eagle (H. albicilla); the two parted ways about 15 million years ago as the Atlantic Ocean broadened the separation of the North American from the Eurasian plate. An international team of over 200 researchers recently completed a revision to the avian family tree based on the full genomes of 48 species. In addition to the finding that the ancestor of all birds (the so-called teeth-to-beak transition) lived about 116 million years ago, the study found that the closest relatives to the eagles are the New World vultures. [2] The New World (turkey and black) vultures and the California condor are also sometimes considered raptors even though they eat carrion. Perhaps this better explains why the bald eagle eats carrion like the vultures and also hunts live prey like the hawks and kites; its refection is mostly fish (56%) but sometimes rodents (14%) and other birds (28%). [3]

The carrion-eating behavior of the bald eagle is typically considered pejorative, a blot on America’s escutcheon. However, it should be noted that the bald eagle is a very large raptor, second only to the California condor in size; it is accordingly at the top of the food chain with no predators other than humans. The consequence of having a large body of about 12 pounds (including 1 pound of feathers and a half pound of bone) is the need both to consume a substantial quantity of food and to have a wingspan of about 6 feet to provide the aerodynamic lift that gets its bulk off the ground. The maneuverability of a drone-sized raptor operating within the physically confining restraints of the eastern forests limits it to mostly open areas; arboreal habitats are suited only for roosts. While the bald eagle can reach speeds of 40 miles per hour in open air and almost 100 miles per hour in a dive, this does not necessarily help in the successful prosecution of predation. However, the Gadarene plunge of a bald eagle directed at any other predator that has successfully concluded a hunt is almost guaranteed to chase it away from its prey. This is undoubtedly a matter of evolutionary survival and logic; why bother to expend all that energy in the likely fruitless enterprise of chasing rabbits when the same result can be achieved vicariously? Rather than condemn the bald eagle for the “cowardly” behavior of carrion eating, we should laud it for its intelligent choice. It is also worth noting that, with the exception of hunters, fishers, and some small livestock farmers, all humans who are not vegetarians are also carrion eaters.

The family life of the bald eagle is equally worthy of human emulation and is of particular appeal to those who trend to the conservative side of social perspectives: they are monogamous and normally have two eaglets nurtured to viability in a home that is built to last. Once an eaglet reaches sexual maturity at about five years of age, it advertises its puberty with a change of plumage. Immature eagles have mottled feathers that cover the entire body and head uniformly until molting triggered by hormonal factors engenders the contrasting white-feathered crown and nape of the pubescent adult. With eyes almost as large as a human’s but with vision four times more acute (that would be 20:5), the location of a mate is facilitated by the chiaroscuro effect of the white head in contrast to the black body. Mate selection is permanent for the approximately 20 years remaining should both attain average eagle longevity. [4] The couple’s honeymoon enterprise is to build a nest at or near the top of a tall tree that is in close proximity to open water. As a permanent home, the nest is embellished and expanded on a yearly basis to the extent that it can reach Brobdingnagian (Gulliver found giants in addition to Lilliputians) dimensions; the record is 10 feet wide and 20 feet tall weighing about 5,000 pounds. [5] Two to three young eaglets hatch after about a month of incubation in their “McNest” and grow rapidly, fledging at 2 months, reaching full size in 3 months, and departing for a five-year odyssey that culminates in baldness. There is something almost human in the life and times of bald eagles.

The definitive Migratory Bird Treaty Act (16 U.S.C. §§ 703–712) was insufficient to protect important non-migratory species. Due to the transcendent importance of the national emblem and the awareness that their populations were continuing to plummet, the Bald and Golden Eagle Protection Act (16 U.S.C. §§ 668–668c) was enacted in 1940 as a supplemental act to prohibit the “taking” of bald eagles, including their parts, nests, or eggs. [6] While restrictive legislation was necessary to save the bald eagle, it was not sufficient; by the 1960s it was clear that extinction was not only possible but likely. Rachel Carson’s seminal 1962 book Silent Spring provided a possible cause in chapter 8, entitled “And No Birds Sing.” Based on a study by Dr. Roy Barker of the Illinois Natural History Survey, which attributed robin fatalities to eating worms that lived under elm trees sprayed with DDT, she suggested a possible link between eagle decline and insecticides: “Like the robin, another American bird seems to be on the verge of extinction. This is the national symbol, the eagle. Its populations have dwindled alarmingly within the past decade. The facts suggest that something is at work in the eagle’s environment which has virtually destroyed its ability to reproduce. What this may be is not yet definitely known, but there is some evidence that insecticides are responsible.” Her polemic arguments against the prevailing belief that “nature exists for the convenience of man” were profoundly influential; the Zeitgeist of the environmental movement was the ultimate result and DDT became the prime target, the Environmental Defense Fund its protagonist. [7]

The debate concerning the assignation of DDT as the cause of bird eggshell thinning has been loud, long and sometimes angry. DDT was originally considered to be a great boon to mankind, so much so that the Swiss scientist Paul Müller was awarded the Nobel Prize in Physiology or Medicine in 1948 for discovering its insecticidal properties. It was used extensively in war-ravaged Europe and Asia to prevent typhus, malaria and dengue fever epidemics. The widespread use of DDT was preceded by a great deal of research on its effects by the United States Government and other institutions including universities and the agrochemical industry – it is a toxin after all. Initial tests, such as those by the Sanitary Corps of the U. S. Army which focused on the efficacy of chlorinated hydrocarbons in the eradication of insect pests such as head lice, concluded that DDT was among the best treatments due primarily to the fact that it persisted for long periods of time, which would provide better pest management outcomes. This became the argument of the economic entomologists. It is a very effective insecticide; however, it is also toxic to many marine animals, notably fish. DDT is an organochlorine that readily breaks down into DDE, a fat-soluble compound that builds up in body fat; it has a half-life of about ten years in human body tissue. It is especially problematic to raptors like bald eagles that eat copious quantities of fish contaminated with DDT/DDE. [8] This was Rachel Carson’s thesis, which eventually led to the precipitous ban (with some public health exceptions) of DDT in the United States by the EPA in 1972, and by most other countries by the end of the 20th Century.
In 2004, the Stockholm Convention (now ratified by 170 countries) established what would seem to be a reasonable middle ground: DDT is “restricted for disease vector control.” [9] Whether or not DDT elimination played a significant role in the resurgence of bald eagles will likely never be fully resolved; regardless, it was a part of the change in emphasis toward environmental sustainability that clearly did have that desired effect. The need for environmental protection that was first evinced by Teddy Roosevelt at the beginning of the 20th Century was made manifest by Rachel Carson at its middle; the century’s end marked a new beginning in the resurrection of the bald eagle; on 28 June 2007, it was removed from the federal list of threatened and endangered species. [10]

The bald eagle was chosen as the cynosure of the Great Seal of the United States by the Continental Congress on 20 June 1782. The symbolic importance of the seal to the idea of national sovereignty is manifest in its history; a committee composed of Benjamin Franklin, Thomas Jefferson and John Adams was appointed to submit a design on the 4th of July 1776, the same day that the Declaration of Independence was adopted. The careful scrutiny that the seal design received resulted in six years of deliberation that culminated in the final selection: a bald eagle with a 13-stripe escutcheon clutching 13 arrows to symbolize war and an olive branch to symbolize peace beneath a 13-star constellation in the firmament to symbolize the rise of a new sovereign nation. [11] Paradoxically, the bald eagle, though central to the design, was not included in any preliminary drawings made over the six-year interval. It was only added at the very end by Charles Thomson, the Secretary of the Congress, who was given all of the previous designs to propose a final version. While the reasons for his choice will never be known, his background as a Latin instructor at Philadelphia Academy may provide a clue: the eagle has a long, and Latin, history. [12]

The eagle figures prominently in the history of western civilization as a symbol of power and authority. In Greek Mythology, an eagle named Aetos Dios was a companion and messenger to Zeus, the chief deity of their pantheon, immortalized in the constellation Aquila. That the eagle became the companion of Jupiter in the Roman version of theogony is quite likely the reason that the Roman legions chose the eagle, which won out over the boar, the Minotaur, the wolf and the horse, as the symbol borne atop their battle standard. After Rome divided and the Western Empire succumbed to the onslaughts of the Gothic tribes in the 5th Century, the Eastern Empire became Byzantium, the bastion of Christianity. The Byzantine emperor adopted the double-headed eagle to symbolically represent the power of the state in matters both secular and religious. The double-headed eagle became the symbol of both the Holy Roman Empire (which Voltaire quipped was neither holy, nor Roman, nor an empire) and the Russian Empire, which considered itself “the third Rome” after Ivan the Great married Zoe Paleologa, a Byzantine princess and niece of the last Byzantine emperor, Constantine XI. [13] And last but not least, the eagle is the symbol of John the Evangelist and figures prominently as one of the beasts in the Book of Revelation. It is no wonder, really, why Thomson selected the eagle and that Congress, after six years and numerous attempts, voted in favor of its centrality as a symbol to the newly united states that now became the United States of America. He had empires of eagles to draw on.

It is widely alleged that Ben Franklin favored the turkey as the national symbol. The calumny against the eagle as symbol is based on the documented opinion written by Franklin in a letter to his daughter Sarah Bache from Paris, France in 1784. He castigated the louche character of the eagle, a “bird of bad moral character” that is “too lazy to fish for himself,” citing that when a “diligent bird has at length taken a fish” then the “Bald Eagle pursues him and takes it from him.” However, this letter had nothing to do with the eagle of the Great Seal, but rather with the symbol of the Society of the Cincinnati, a newly formed group of former Revolutionary War officers. Franklin generally abhorred pomp and circumstance, and his trenchant wit, as evidenced in his Poor Richard’s Almanac aphorisms, was apparent here. Noting that the Cincinnati eagle looked more like a turkey, he went on, in the same letter, to extol its virtues. In noting that the turkey is a “much more respectable bird,” his sarcasm becomes evident with the notion that this “bird of courage … would not hesitate to attack a Grenadier of the British Guards who should presume to invade his farmyard with a red coat on.” [14]

References:

  1. https://defenders.org/bald-eagle/basic-facts
  2. Lewin, S. “A Genetic Guide to Birds” Scientific American, April 2015, Vol 312, Issue 4.
  3. Stalmaster, M.  The Bald Eagle. Universe Books, New York, 1987.
  4. http://www.baldeagleinfo.com/
  5. https://journeynorth.org/tm/eagle/NestAbout1.html
  6. https://www.energy.gov/nepa/downloads/bald-and-golden-eagle-protection-act-16-usc-668-668c-and-related-regulations-50-cfr
  7. Carson, R. Silent Spring, Houghton Mifflin Co. Boston, 1962, pp 103-127.
  8. Davis, F. Banned: A History of Pesticides and the Science of Toxicology, Yale University Press, New Haven, Connecticut, 2014. A source book for the myriad studies of the effects of DDT on the environment.
  9. https://www.epa.gov/ingredients-used-pesticide-products/ddt-brief-history-and-status
  10. https://ecos.fws.gov/ecp0/profile/speciesProfile?spcode=B008
  11. U. S. Department of State Bureau of Public Affairs. The Great Seal of the United States. Available at https://www.state.gov/documents/organization/27807.pdf
  12. Patterson, R. and Dougall R. The Eagle and the Shield, A History of the Great Seal of the United States, U. S. Department of State Publication 8900 Released 1978, pp 92-102.
  13. Dmytryshyn, B. A History of Russia, Prentice-Hall, New Jersey, 1977. pp 148 – 149.
  14. Brands, H. The First American. The Life and Times of Benjamin Franklin. Doubleday, New York, 2000 pp 668 – 670.

Ring-necked Snake

There is no mistaking a Ring-necked Snake.

Common Name: Ring-necked snake, ring snake, baby king snake, red-belly snake, yellow-belly ring snake – Even though the ring around the neck may be interrupted, obscured, or absent altogether, it is the most distinctive feature.

Scientific Name: Diadophis punctatus – The genus name is recognizable as a combination of the Latin word diadema, meaning “royal headband” (a diadem is a crown in English) and ophis, the Greek word for snake. The species name is from the Latin punctum, meaning point. In scientific names, it is used to indicate having small points or dots of color (punctate means “marked with dots or tiny spots” in English). A series of black dots extends along the underbelly.

Potpourri:  There are at least twelve subspecies of ring-necked snake, some of which don’t even have the characteristic ring around the collar. The designation subspecies is assigned when there is a difference in morphology, frequently only in coloration but inclusive of other variations in form or structure, usually resulting from geographic separation. Since subspecies are the same species, they can successfully interbreed but do so only if collocated in captivity. Geographic hybridization of snakes is not unusual, although twelve subspecies is outside the norm. The ring-necked snake’s neck ring can be yellow, cream, or orange, the underbelly can be red, yellow or orange, and the back can be gray, olive, brown, or black. Since none of these variants likely contribute to enhanced survival, random genetic variation amplified by inbreeding of isolated populations must be the main factor. The many subspecies result from a diaspora of the shared ancestral ring-necked snake that ranged across North America from Nova Scotia to the Florida Keys, west to the Pacific Coast, and south to Mexico. [1]

Ring-necked snakes are members of Colubridae, by far the largest family of the suborder Serpentes to which all snakes belong. Latin includes several words for snake, a likely result of the long-standing animus toward snakes only enhanced by the biblical account of Satan tempting Eve in the Garden of Eden. In addition to coluber, serpens names the suborder, vipera names the pit viper family Viperidae, and anguis is a genus of lizards called slow worms that have lost all vestiges of legs. The range and diversity of colubrids, which comprise three quarters of all snakes in North America and the majority across the globe, was long thought to be an artifact of a dearth of research on snakes: those that did not fall into another, more obvious category, like constrictors or vipers, were placed in Colubridae by default. However, more recent research using DNA associations has found that colubrids are monophyletic, evolving from a single ancestor. [2] This means that the evolution of the legless body plan that separated the snakes from the lizards must have been enormously successful ― that they took over a new ecological niche. The emergence of colubrids in the Oligocene Epoch about 30 million years ago, just after the mammals expanded in range and numbers in the Eocene, provides a logical hypothesis. Rodents living in holes breeding large populations of edible protein provided a resource nonpareil. Slithering, hole-diving lizards that lost their legs to become snakes were perfectly suited to exploit the resource. The Cenozoic Era is often called the Age of Mammals. “If the criterion were to be the most rapid adaptive radiation, the latter half of the Cenozoic would have to be called the Age of Snakes.” [3]

Ring-necked snakes are not rodent eaters. At an average length of twenty inches, they have maws too undersized for prey as large as mice. Like all snakes, they are obligate carnivores, ingesting their prey whole, usually headfirst, down the gullet to the stomach for digestion ― a writhing esophagus. The brutal efficiency of the down-the-hatch method is impressive: a two-kilogram snake can ingest and digest prey weighing one kilogram. Black rat snakes range from five to eight feet in length … their name and girth reflect a penchant for the large rodents that they consume. Just as larger snakes evolved to perfect mammal predation, smaller snake variants broke away genetically to exploit alternative food resource niches. Ring-necked snakes expanded across North America by consuming whatever they could find, including insects, small lizards, earthworms, and amphibians. Five ring-necked snakes from George Washington National Forest were dissected in 1939 to reveal a diet that was 80 percent salamanders, 15 percent ants, and 5 percent other insects. This may explain why ring-necked snakes are considered the most common snake in Shenandoah National Park … it is well known as an epicenter of salamander diversity. [4] A preference for salamanders extends northward to Pennsylvania, where a more recent study of 58 northern ring-necked snakes (D. p. edwardsii) found that their primary food was plethodontids, lungless salamanders. [5] Salamanders are masters of concealment with cryptic colors and concealed hideaways under rocks … finding them is challenging. Ring-necked snakes employ chemical sensors as vectors to seek them out. They don’t need to be successful too often, since each meal results in a gain of one gram for every three grams consumed. One or two salamanders a month is plenty for the cold-blooded. [6]

Since snakes lack bodily appendages for tearing and clawing, the mouth became essential as a weapon, with only constricting body coils as backup. Trying to pin down a struggling if hapless victim to position it for swallowing without the restraining benefit of clasping limbs is surely daunting if even possible. It is also not without some danger to the predator, as rats are vicious when cornered. Some snakes evolved muscular bodies, using brute force to literally choke the life out of their prey. Others randomly mutated to produce chemicals in glands surrounding the oral cavity that assisted to some degree in prey immobilization. In extreme cases, these concoctions are deadly, injected with the fangs that project from the front of the mouths of vipers. Snake venom is a complex of up to 100 proteins that is stored in venom glands that can take several days to replenish once the supply is exhausted. Since the means to kill is vital to viper survival, venom is meted out with care, apropos to prey size, and injected through ducts in piercing front teeth at high pressure. This allows for the real possibility that multiple strikes may be needed to land a coup de grâce bite. [7] The ring-necked snake is one of many colubrid snakes that are called rear-fanged. Rather than delivering a thrusting, two-tooth attack like their viperous cousins, they deliver a smaller dose of less potent venom. Limited research has been done on the composition of colubrid venoms, but it has been demonstrated that some affect only birds and lizards with little to no effect on rodents. This supports the general hypothesis that the mutation that led to snakes producing the proteins from which venom is concocted occurred only once and that protein synthesis over time based on the types of prey encountered resulted in specialization. However, the complexity of venom glands and the great diversity of venom composition suggest multiple introductions by different clades.
Speculation is the handmaid of scientific study.

Ring-necked snakes produce venom in two glands named for the French zoologist who first noted them during a snake dissection. The full biological function of Duvernoy’s gland is not yet known, although it is certainly for some type of trophic (nutrition related) purpose. There are multiple glands located around a snake’s oral cavity that are necessary to carry out the incongruous process of swallowing oddly shaped objects that can be twice the size of the hole they go into. Lubrication is necessary to slide the jaws slowly forward, and digestive enzymes need to start immediately in breaking down skin and muscle tissue. Of course, it helps to kill the prey first. Duvernoy’s glands are located directly behind the eye near the top of the skull and drain through ducts into grooves in posterior maxillary teeth, the so-called rear fangs, homologous to the venom glands of vipers. However, rather than the lightning-strike stab of vipers, ring-necked snakes use partial constriction to hold their prey while they bite down, injecting venom through multiple puncture wounds. [8] There is ample evidence that ring-necked snake venom is effective. In one experiment, garter snakes were injected with the oral secretions extracted from ring-necked snakes. They all died within three hours. [9] Humans are not immune. A researcher was handling a ring-necked snake to take a picture when it bit him on the finger. The sharp, sting-like pain was immediate, followed within minutes by swelling of the finger that spread to the entire hand. Over the next 24 hours, redness spread down the finger from the puncture wound and persisted for the next three days. [10] There are about 700 rear-fanged colubrid snakes that produce some kind of venom, which means that these so-called “harmless snakes” are anything but.

Ring-necked snakes occupy an elevated position in the food chain, but they do have predators. In addition to a variety of other snakes, raccoons, opossums, skunks, owls, and black bears all routinely prey on ring-necked snakes. If juveniles are included, the predator list extends to toads, shrews, and even large spiders and centipedes. Since females lay no more than ten eggs a year with no parental care thereafter, an attrition rate of about ninety percent is nature’s expectation. However, ring-necked snakes are relatively successful in spite of their small size, as evidenced by their widespread radiation across the continent and their relative density in wooded habitats. They are found frequently in communal groups with up to nine individuals. [11] The higher-than-expected survival rate can be attributed to several behaviors employed to ward off predators. The most notable is turning upside down in a writhing corkscrew movement, an eye-catching display of brightly colored red or orange belly scales. The use of bright colors as a deterrent is called aposematism, the opposite of camouflage in that it intentionally draws attention rather than conceals. It is typically employed by otherwise defenseless animals that have poisonous secretions, like monarch butterflies and red eft juvenile newts. The aposematic use of reds and oranges is intended to deter birds, as they have full color vision whereas mammals see only blues and greens (primates are the only exception). Since ring-necked snakes lack poisonous secretions and since most of their predators are mammals that can’t see red anyway, the “colorful corkscrew” defense cannot be aposematic. It is more likely a surprise maneuver meant to throw an assailant off balance so that it retreats in the face of an uncertain threat. Some of the less colorful ringneck subspecies also play dead and emit mephitic odors as deterrents, relying on the near universal (vultures excepted) aversion to a rotting and possibly toxic meal.

So what about the namesake neck ring? With all of the aforementioned machinations to ward off predation, why would a conspicuous and contrasting bright yellow ring adorn the otherwise cryptic gray-brown of the dorsal surface? It is undeniably an adaptive mutation that must have had a purpose that led to its spread and retention in the diverse populations. Group identification and sexual selection are both implicated by behavior. Ring-necked snakes are very sociable, sometimes living in colonies with up to one hundred individuals; the neck ring could serve as a clear “friend or foe” visual indication to promote group cohesion, since a number of larger snakes that are similarly colored but without the ring prey on ring-necked snakes. The benefit that they gain by living in communes is speculative, but it must be related to enhanced survival comparable to birds in flocks and fish in schools. Larger groups also provide more opportunities for opposite sexes to meet and mate, enhancing sexual selection. The neck ring could also function as a colorful beacon of sexual fitness. Male ring-necked snakes are attracted to females releasing fertility pheromones. While only a small number of sightings of ring-necked snakes mating have been recorded, males have been observed rubbing their closed mouths over the female body followed by biting around the neck ring just before copulation. [12] Foreplay is what some sociable animals do. Sociable snakes?

Lastly, we return to the subject of subspecies and zoology. What is the purpose of having over twelve ring-necked snake subspecies? They differ in morphology attributable to the geographical radiation of the species. Many are distinguished by the number and distribution of black dots that adorn the underbelly. The black dot configuration may have some physiological function, but it is unclear what that might be. [13] The relative rarity of an individual subspecies would not correlate to endangerment because the overarching concern of species extinction is the loss to the biological gene pool. Subspecies can mate with each other to reproduce the baseline DNA codon protein programming of the species. All domestic dogs are Canis familiaris. They range in size and shape from Great Dane to Chihuahua, and aside from the acrobatics that may be required, they could mate and produce offspring that would be some combination of the two. Using the rules applied to snakes, all dog breeds would be separate subspecies. And what about Homo sapiens? When Carolus Linnaeus introduced the binomial name for humans in the tenth edition of Systema Naturae in 1758, he identified four varieties: Europaeus, Asiaticus, Africanus, and Americanus. These were the original subspecies that became the foundation for the current racial distinctions. [14] The idea of subspecies is not an altogether helpful concept sociologically. It is not unreasonable to question its usefulness to biology.

References

1. Behler, J. and King, F. National Audubon Society Field Guide to North American Reptiles and Amphibians, Alfred A. Knopf, New York, 1979, pp 589-679.

2. Zheng, Y., Wiens, J.  “Combining phylogenomic and super matrix approaches, and a time-calibrated phylogeny for squamate reptiles (lizards and snakes) based on 52 genes and 4162 species” Molecular Phylogenetics and Evolution 8 October 2015. http://www.wienslab.com/Publications_files/Zheng_Wiens_2015b_MPE.pdf     

3. Starr, C. and Taggart, R. Biology, Wadsworth Publishing, Belmont, California, 1989, p 585.

4. Linzey, D. and Clifford, M. Snakes of Virginia, University Press of Virginia, Charlottesville, Virginia, 1981, pp 73-77.

5. Cathro, Andrew and Lindquist, Erik 2016. Diadophis punctatus edwardsii (Northern Ring-necked Snake) Diet. Herpetological Review 47 (4): 681

6. Henderson, R. “Feeding Behavior, Digestion, and Water Requirements of Diadophis punctatus arnyi Kennicott”. Herpetologica. 1970, Volume 26 Number 4 pp 520–526.

7. Kardong, K. Colubrid snakes and Duvernoy’s “Venom” Glands. Journal of Toxicology: Toxin Reviews. 6 December 2002, Vol. 21 No. 1 pp 1–15.

8. Mackessy, S. and  Saviola, A. “Understanding Biological Roles of Venoms Among the Caenophidia: The Importance of Rear-Fanged Snakes” Integrative and Comparative Biology. 1 November  2016 Volume 56 Number 5 pp 1004–1021.

9. O’Donnell, R. et al. “Experimental evidence that oral secretions of northwestern ring-necked snakes (Diadophis punctatus occidentalis) are toxic to their prey”. Toxicon. November 2007, Volume 50 No. 6 pp 810–815.

10. Brock, T. & Camp, C.  Diadophis punctatus edwardsii (Northern Ring-necked Snake) Envenomation. Herpetological Review 2018 Volume 49 Number 2 pp 340-341.

11. Blanchard, F. et al “The Eastern Ring-Neck Snake (Diadophis punctatus edwardsii) in Northern Michigan”. Journal of Herpetology. 15 November 1979 Volume 13 No. 4 p 377.

12. Yung, J. Diadophis punctatus University of Michigan Museum of Zoology  https://animaldiversity.org/accounts/Diadophis_punctatus/   

13.  http://reptile-database.reptarium.cz/species?genus=Diadophis&species=punctatus   

14. Graves, J. and Goodman, A. Racism, Not Race, Columbia University Press, New York, 2022, pp 3-4.