Japanese Beetle

There is no mistaking the brown and green wing covers of the Japanese beetle.

Common Name: Japanese Beetle – Unlike many animals and plants broadly referred to as Asian in origin, there is no doubt about this beetle's provenance: it was inadvertently introduced from Japan to the United States, where it spread to become an agricultural juggernaut.

Scientific Name: Popillia japonica – The genus is based on a well-established Roman surname. Marcus Popillius Laenas, consul in 359 BCE, one of the Roman Republic’s two top magistrates, was noted for his defeat of the Gauls. He was the first of a long line of distinguished Roman leaders named Popillius. There is no known connection between any of these descendants and beetles. The species name establishes the geographic origin in Japan.

Potpourri: The Japanese Beetle is a case study in the invasive behavior of an alien species in the life and times of the twentieth century. Its clandestine point of entry in August 1916 was New Jersey, in the form of beetle larvae ensconced in iris rhizomes imported from Japan as horticultural garden center offerings. [1] Spreading at a rate of about 10 miles per year, the shiny green and brown scourge has ravaged planthood in the eastern half of North America for over a century. The root-munching grubs eat voraciously through turf all summer long, despoiling large swaths of lawns and golf courses, before overwintering some six inches underground. What follows in spring, after pupation, is a two-month feeding and mating frenzy culminating in the turf deposition of some 50 eggs per female, sowing the seeds for Malthusian beetle populations. With annual agricultural losses estimated at half a billion dollars, they have spawned a whole industry of eradication and control.

Beetles are by some measures the most successful of earth’s inhabitants. With more than 300,000 species worldwide, they comprise about one fourth of all described animals―a thousand beetles for every primate. This is in part due to an “intelligent” design. The Order Coleoptera to which they are assigned is literally Greek for ‘sheath wings,’ describing their key taxonomic anatomical similarity. The hardened, chitinous front wings encase the more delicate rear wings with an armored barrier similar in form and function to a box turtle’s carapace, protecting the beetle from many an unwelcome intruder. These encapsulating forewings, called elytra (the singular elytron also means ‘sheath’ in Greek), unfold with an elaborate linkage of struts and elbows to release the diaphanous rear wings for flight. The beetle, a six-legged biological version of the bipedal transformer toy, thus converts from a stolid, tank-like ground vehicle into a clumsy but functional airfoil to find food, to find a mate, to escape emergent threats, or simply to gad about on summer days. [2] The aphorism that the Creator must have had an inordinate fondness for beetles because he made so many of them is frequently attributed to Charles Darwin. The more likely source is the British biologist J.B.S. Haldane, who wrote that “the Creator would appear as endowed with a passion for stars, on the one hand, and for beetles on the other for the simple reason that there are nearly 300,000 species of beetle known, and perhaps more …” [3] The versatility and resilience of beetles are notable, divine or otherwise.

Japanese beetles are in the family Scarabaeidae, usually referred to simply as scarabs, which comprise one tenth of all beetle species (a mere one hundred per primate). The historical importance of the scarabs is evident in nomenclature. Scarabaeus is Latin for beetle, which probably came from the Greek karabos, meaning horned beetle, with good reason. According to the dated but enduring Linnaean taxonomy, scarabs are distinguished in having the last 3 to 7 sections of their 10-segmented antennae formed into a lamellate or plate-like club; lamellicorn beetle is an alternative name. The notoriety of horn-beetle scarabs is due in part to their relatively large size and, in many cases, “outgrowths on head and thorax” that “produce bizarre forms.” [4] But the more surprising scarab origin story is central to Egyptian mythology. Khepri, one of the names for the Egyptian Sun god (along with Ra, Atum, and Horus), is a cognomen taken directly from kheprer, the Egyptian name for the dung beetle. Many scarabs feed on animal feces and other decaying matter as a nutritional niche. The dung beetle carries this one step further, molding the semi-solid stool into balls that can be rolled along the ground and deposited into a purpose-built hole. Here the eggs are laid so that hatched larvae will be provisioned with their first feast. The Egyptian holy men interpreted the dung ball as representing the sun being pushed into the “Other World” at dusk and back over the horizon at dawn. Thus the scarab amulet, a signature Egyptian embellishment and adornment, symbolized “the renewal of life and the idea of eternal existence.” [5] The transubstantiation of the bread and wine of communion into the body and blood of the Christian deity, which are then consumed by the faithful, is no less outré.

While dung is not on the menu for the Japanese Beetle, just about everything else is, earning it the distinction of being considered polyphytophagous, Greek for “many plant eating.”  While roses and fruit trees are its most notorious targets, the beetle smorgasbord includes at least 435 identified species from 95 families including garden and field crops, ornamental shrubs, and shade trees. The choice of one plant over another is related at least in part to scent.  Research has demonstrated that the phytochemicals eugenol and geraniol are particularly attractive―the fact that roses contain both provides some empirical validation. Exacerbating the beetle invasion problem (beetlemania?) is their tendency to congregate on one plant, creating a writhing mass of coruscating green and brown. Field testing has revealed that twice as many beetles alight to join a party in progress, eschewing adjacent plants of the same species for no apparent reason. Both the quality and quantity of the meal must surely suffer as communality prevails. With a preference for plants in direct sunlight, the banquet starts at the top, stripping the foliage downward by eating between the leaf veins, leaving characteristic lacelike skeletons as remnants. In many cases, the plant is left totally defoliated and dies as a result. In one field test 2,745,600 beetles were collected from 156 peach trees … an average of 17,600 per tree. As half of that population would be female, the ensuing egg deposition in nearby fields would result in a veritable contagion of larval grubs, eating away at the roots of the ecosystem to the detriment of both field and forest. [6] The scourge of the Japanese beetle to an environment unprotected by native predators can be apocalyptic.
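The orchard arithmetic above can be checked with a few lines of Python. The collection figures are those quoted in the text; the tally of eggs per tree is an illustrative extrapolation from the text's fifty-eggs-per-female figure, not a number from the cited source:

```python
# Figures quoted in the text: one field test on 156 peach trees.
beetles_collected = 2_745_600
trees = 156
per_tree = beetles_collected // trees
print(per_tree)  # 17600 beetles per tree, matching the reported average

# Illustrative extrapolation (not from the source): roughly half are
# females, each depositing about 50 eggs in nearby turf.
females_per_tree = per_tree // 2
eggs_per_tree = females_per_tree * 50
print(eggs_per_tree)  # 440000 potential grubs per tree's population
```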

Beetle mating mania

Evolutionary success for any animal species requires a minimum of two surviving adults to replace each gravid female. In insects, this is achieved predominantly by depositing large caches of fertilized eggs that hatch to larvae and pupate to adults mating in sufficient numbers to establish perpetuity. Japanese beetles evolved to survive predation and attrition in their native habitat, primarily the grasslands of northern Honshu and the whole of Hokkaido. In the United States they are unchecked, and their sexual drive to survive has produced exorbitant dividends. Male beetles are equipped with a penis-like aedeagus to inject cyst-encapsulated spermatozoa into the female vagina. The instinctual male mating mandate is triggered by the pheromones of emergent virgin females; they descend en masse, forming large clusters called “beetle balls.” One experiment using females in a trap collected almost three thousand males in one hour. Mating attempts persist throughout month-long adult lives. Coitus occurs primarily on leafy foliage that doubles as dining room and can last for several hours. Speaking of balls, one male was observed mating with seven different females in a single day and another was observed mating with at least two different females over five consecutive days.  Females take periodic breaks from the action to dig about three inches into the soil to lay several eggs only to return to remate and repeat, ultimately laying about fifty. [7] A population bomb nonpareil.

The exploding growth of Japanese beetles was noted within two years of their initial introduction in a nurseryman’s refuse pile in Burlington County, New Jersey, in 1916. By 1920, 1,000 quarts of beetles were collected in one half square mile; two years later, the infested area had expanded to six square miles. In 1923, when the range had surpassed 700 square miles and extended into Pennsylvania, the clarion call was sounded at the national level. The USDA dispatched scientists to Japan to search for predators and began evaluating pesticides for control and remediation. [8] But it was too little too late, and by 1970 the range had reached at least 150,000 square miles extending over 14 states. Despite extensive efforts to stem the tide, the beetle is now established in 30 states. While it was long thought that the Rocky Mountains and the Great Basin would present an impenetrable barrier to westward migration, Japanese beetles have recently made landfall in the Pacific Northwest. It is postulated that adult beetles hitched a ride on an airplane or that larvae arrived surreptitiously in the root soil of imported plants. [9] The economic costs have grown accordingly. The Japanese beetle larva is the worst turf-grass pest in the United States; control costs are estimated at $460 million annually. This estimate does not include crop damage and the devastation of ornamental shrubs like rose bushes. While this is hardly chump change, it pales in comparison to the annual cost of all invasive species, which is on the order of $20 billion. The highest invasive species costs are attributed to mammals, primarily due to rodent crop damage. Plants are next, due to aggressive invasives like Amur honeysuckle. Insects place third, led by the red imported fire ant (RIFA) of the southeast, with an annual cost of $1.5 billion. [10] The Japanese beetle has the distinction of being one of the first invaders and one of the most visible, if not the most costly. It can only get worse with a warmer climate.

In an attempt to mitigate some of the economic and aesthetic damage, farmers and homeowners usually start with chemical warfare, primarily pesticides based on permethrin and carbaryl. The former is known as one of the best tick deterrents when applied to the clothing of hikers and soldiers, but the latter is more widely used because it is cheaper. The word pesticide is something of a euphemism that masks the insidious effects of these chemicals when widely applied to farm fields and home gardens: the extermination of “pest” species like Japanese beetles also eliminates beneficial insects like butterflies and bees. The insect Armageddon of the last several decades is an unsettling result, due in no small part to its food chain effect; many birds rely on bugs for protein. There are certainly eco-friendly alternatives based on botanicals, but they are for the most part deterrents that last only several days. Their main effect is to shunt the beetles temporarily to another location, like your neighbor’s garden. A second line of defense utilizes Japanese beetle traps that emit vapors made from a combination of virgin female pheromones and a treacly blend of fruits. The problem is that the traps are much more effective at attracting beetles (especially males) than they are at capturing them. The end result follows the law of unintended consequences: more traps, more beetles. [11]

The obvious but complicated alternative is biological control. Difficulties arise not only in the identification of the appropriate control organism but also in ensuring that the cure does not become a curse. Invasive species necessarily come from somewhere where they are not invasive, held in check by their native evolved ecology. While the first step is to scour their home turf for potential predator imports, an assessment of each candidate's viability in the new environment is equally mandatory. Among the notable failed biological control attempts was the introduction of mongooses to Hawaii to kill crop-eating rats; the diurnal mongooses never hunted the nocturnal rats, decimating the bird population instead. In the case of beetles, the task is not as onerous, since many wasps are masters of insect parasitism and, not infrequently, one species of wasp specializes in one species of beetle. The Spring Tiphiid Wasp (Tiphia vernalis) was introduced to North America in the 1920s for its known parasitism of Japanese beetles. As one of nature's more insidious predators, the female wasp burrows into the soil to locate a beetle grub, paralyzes it with a sting, and lays an egg that hatches to a larva that feeds on the now immobilized carcass. While effective, the tiphiid wasps alone have failed to check the Japanese beetle onslaught, and other controls have been identified. The Winsome fly (Istocheta aldrichi) was also imported from Japan as a control vector. It deposits eggs on the thorax of adult female beetles that hatch to maggots that burrow under the outer wing covers to consume the softer body parts. There are also insect-eating nematodes and several types of bacteria employed in the never-ending battle to thwart the Japanese beetle invasion. But so far, it is at best a standoff. [12]

The impracticality of eradicating an invasive species like the Japanese beetle renders damage control the only feasible alternative. The ounce-of-prevention approach is to establish protocols that halt the human-assisted migration of beetles from an infested part of the country to new territory. Nine western states have signed on to the USDA Animal and Plant Health Inspection Service (APHIS) Plant Protection and Quarantine (PPQ) program to monitor Japanese beetle populations and stop migration. Airports are assessed for local beetle populations, and aircraft are treated to minimize the chances of spread from infested areas to the protected states. [13] While this will lower the risk, it will not eliminate it. With some irony, it has been pointed out that, for all the human chemical, biological, and programmatic efforts, the Japanese beetle has outsmarted us. Therefore, the first rule of Japanese beetle control is that you can’t control Japanese beetles. It is possible to reduce the damage by spraying their favorite plants, like roses, selectively, killing enough beetles to prevent their spread to other plants, a process called trap cropping. Another possibility is to encourage limited growth of invasive plants such as multiflora rose and Japanese knotweed that Japanese beetles demonstrably prefer. But be ever mindful of who is in charge. The final rule of Japanese beetle control is that they will “seek revenge for their dead relatives.” [14]


1. Milne, L. and Milne, M. National Audubon Field Guide to North American Insects and Spiders, Alfred A. Knopf, New York, 1980, pp 561-562

2. Marshall, S. Insects, Their Natural History and Diversity, Firefly Books, Buffalo, New York, 2006, pp 257-258.

3. Haldane, J.B.S. What is life?  The Layman’s View of Nature,  L. Drummond, London,1949,  p 258.

4. Gressitt, J. “Coleoptera”, Encyclopedia Britannica, 15th edition, Macropaedia Volume 4, William Benton Publisher, Chicago, Illinois, pp 828-837.

5. Viaud, J. “Egyptian Mythology” New Larousse Encyclopedia of Mythology, Hamilton Publishing Group, Ltd. London, 1973, pp 9-43.

6. Fleming, W. “Biology of the Japanese Beetle” USDA Technical Bulletin Number 1449, July 1972. https://naldc.nal.usda.gov/download/CAT87201410/pdf     

7. Gyeltshen, J. et al “Japanese Beetle” University of Florida. https://entnemdept.ufl.edu/creatures/orn/beetles/japanese_beetle.htm     

8. “Japanese Beetle Ravages”, Reading Eagle Newspaper Article 22 July 1923 extracted from New York Herald.

9. Betts, A. “Japanese beetle count passes 20,000” Washington State Department of Agriculture Ag Briefs. 3 September 2021. https://wastatedeptag.blogspot.com/2021/09/japanese-beetle-count-passes-20000.html     

10. Fantle-Lepczyk, J. et al “Economic costs of biological invasions in the United States” Science of the Total Environment, Volume 806, Part 3, 1 February 2022. https://www.sciencedirect.com/science/article/pii/S0048969721063968?via%3Dihub   

11. Potter, D. et al “Japanese Beetles in the Urban Landscape” University of Kentucky College of Agriculture, Food, and Environment Entomology Department. https://entomology.ca.uky.edu/ef451

12. “Managing the Japanese Beetle: A Homeowner’s Handbook”. USDA, Washington, DC. https://www.aphis.usda.gov/plant_health/plant_pest_info/jb/downloads/JBhandbook.pdf

13. USDA Animal and Plant Health Inspection Service (APHIS) Japanese Beetle Handbook https://www.aphis.usda.gov/import_export/plants/manuals/domestic/downloads/japanese_beetle.pdf

14. Gillman, J.  “Disney and Japanese Beetles”. Washington State University, 18 March 2010

Rosebay Rhododendron

Common Name: Rosebay Rhododendron, Rosebay, Great-laurel – Rosebay is used as a descriptive name for several plants with characteristic rose-like blossoms. Rhododendron is one of the few plants whose genus name doubles as its common name.

Scientific Name: Rhododendron maximum – The genus is a combination of the Greek words for rose (rhodon) and tree (dendron), an indication of its well-established association with civilizations in antiquity, “rose tree” being an apt description. Maximum, derived from the Latin magnus, meaning great or big, conveys the largest in size or quantity, which, for a rhododendron, it is.

Potpourri: The lush, dense thickets of rhododendron that dominate the understory of upland elevations are testimony to an evolutionary path that produced a competitive combination of successful traits. It is one of the relatively few broad-leaved flowering plants of the angiosperm (enclosed seed) clade that is evergreen, retaining foliage year-round like the largely needle-leaved gymnosperm (naked seed) clade. The prominent rose-like blossoms that extend from the end of nearly every branch are a bouquet to attract pollinators, mostly bees, that flit from one to the other collecting nectar and pollen. With successful fertilization, an elongated, egg-shaped fruit with five cells splits open to release thousands of seeds that scatter to extend the grove ever outward. The lack of any evidence of insect damage or animal browse is a matter of chemistry. Rhododendrons, like many other members of the Ericaceae or Heath family, evolved strong chemicals to deter predation. Most animals give it a wide berth; deer supposedly are able to browse without harm but there is little evidence that they do so regularly. Rhododendron leaves can be fatally toxic to cattle and sheep. [1]

There are over one thousand species of the Rhododendron genus that extend globally across the temperate climates of the northern hemisphere. Based on fossilized pollen found in strata dating from near the end of the Cretaceous Period and fossil leaves from the beginning of the Tertiary Period, it is postulated that rhododendrons first appeared in southeastern Asia about 50 million years ago, long after the breakup of Pangaea. Speciation spread globally across a wide band of latitude during the pre-glacial epochs when a warmer climate prevailed. It is probable that the subsequent glacial cooling cycles of the current Quaternary Period resulted in the isolation of rhododendron populations in remote mountainous regions, just as Balsam Firs are isolated in elevated areas of the Appalachians. This explains the rich diversity found on the slopes of deep valleys in southeast Asia, in a band extending from just east of the Himalayas through the Malaysian archipelago, and the sparser single-species rhododendron diaspora to Japan, the Appalachian Mountains, and the Caucasus region of eastern Europe. [2]

While “mad honey” may also evoke marital discord, it has historical rhododendron relevance. Mesopotamia, the land between the Tigris and Euphrates rivers, is where western civilization arose from Neolithic farm villages that planted the first tentative crops. Rhododendrons had spread across the Anatolian peninsula that is now Türkiye from their epicenter in the Caucasus, attracting swarms of honeybees. In 400 BCE, the soldier of fortune Xenophon led his mercenary Greek army, eulogized as the “ten thousand,” on a forced march of 1,500 kilometers westward through the hostile territory of present-day Kurdistan and Armenia to the Black Sea. Lacking adequate provisions, they lived off the land, raiding bee hives for honey. As Xenophon later recorded in his chronicle Anabasis, “the soldiers who ate the honey went off their heads, and suffered from vomiting and diarrhea … so they lay there in great numbers as though the army had suffered a defeat, and great despondency prevailed.” [3] While no one died, the debilitating effects of rhododendron honey were put to nefarious use near this same location three centuries later.

Mithridates Eupator became the ruler of Pontus in 115 BCE when his mother, who had tried to kill him as a youth, was deposed in a coup d’état. To protect himself against the conspiracies inherent to governance of that era, he followed a regimen of microdoses of poison to acquire immunity over time, becoming an expert on toxins and their antidotes. Uniting the diverse population of Greeks, Persians, and Thracians along the northern tier of Asia Minor, he became a serious rival to the Romans encroaching ever eastward. In the First Mithridatic War (88-84 BCE), his navy of 400 ships and army of 290,000 took over the Black Sea and the Greek cities on its banks, putting an end to the flow of tribute money to Rome and nearly bankrupting the Roman economy. [4] The Romans rallied in two ensuing wars that eventually drove Mithridates from power as the last major eastern threat to their burgeoning empire, but not before he had tricked them on at least one occasion with the mad honey of the rhododendron. In 67 BCE, the Roman general Pompey was advancing eastward along the Black Sea coast near Trabzon to engage the Pontic forces. Mithridates, employing his mastery of poisons, placed bee hives in clay pots along their route. Three squadrons of Roman soldiers succumbed and were slaughtered in their stupor. In spite of this tactical success, the forces of Rome eventually prevailed, and Mithridates was deposed and exiled to Crimea, where he was stabbed to death by the agents of his son, since poisoning him was not an option. [5] The genus Eupatorium, which includes the poisonous white snakeroot and the medicinal boneset, is named for Mithridates Eupator in recognition of his contribution to toxicology.

Honey from rosebay rhododendron in North America is not nearly as common nor as virulent as the legendary Caucasian rhododendron honey of Xenophon and Mithridates. Nonetheless, the mad honey trope persists. In 1801, an account of rhododendron honey inducing nausea, muscle spasms, and blurred vision was published in the Transactions of the American Philosophical Society. [6] A report in the most venerable scholarly publication in the Americas (established in 1771 when the states yet to be united were still colonies) affords some credence to this assertion. However, there is little evidence of any significant incidence of what is sometimes euphemistically called “honey intoxication” in North America. There are several reasons for this. R. maximum is neither as toxic as R. ponticum, the plant named for Mithridates’ Pontus homeland, nor as widely dispersed. There is also the fact that honey bees are indigenous to Europe as native pollinators for many wild plants. They were introduced to the Americas for crop pollination and are largely relegated to that role, even as some have become naturalized. The few reports of mad honey illness in the US are at least in part attributable to an alternative medicine herbal treatment that links “sexual performance enhancement” to the consumption of bespoke beekeeper-induced rhododendron mad honey. Of twenty-one honey-related emergency room visits due to symptoms that included dizziness, nausea, vomiting, and syncope (loss of consciousness due to low blood pressure), most were men of middle age who sought to regain virility [7]―another good reason to call it mad honey.

Rhododendron maximum produces a poison named grayanotoxin, sometimes still referred to as andromedotoxin, acetylandromedol, or rhodotoxin (from the genus). While concentrated in honey, it also permeates the leaves and flowers. The toxin was first extracted and analyzed from Leucothoe grayana, an Asian Heath Family species whose name honors the American botanist Asa Gray, who supported Darwin's work with the observation that many plants in eastern North America were similar to those of east Asia (like rhododendron), indicating similar evolutionary progressions. Grayanotoxin interferes with the operation of neurons by disrupting “voltage-gated sodium channels.” The effect is that the neurons that carry the signals from one part of the body to another that make everything happen … from the beating of the heart to the thinking of the brain … can no longer do so in the prescribed order with proper timing. [8] The mechanism employed by neurons to carry out their quintessential task is electrochemical. Electrical impulses travel along the neuron from the dendrites at one end to the axon at the other, propagated by sodium ions flowing through channels in the cell membrane; at the synapse, the gap between one neuron and the next, the signal is handed off to continue down the sequence path. This is one reason that electrolytes (ionic fluids) are so important and that hyponatremia (low sodium) can be fatal. It has long been established as most likely that this ionic neural mechanism was a random (Darwinian) mutation that evolved only once and, owing to its sensory and mobility efficacy, was replicated in every animal ever since. However, it may be much more complex than that, as sea sponges, which have no neurons, and comb jellies, which do, have DNA similarities. [9] The details of evolution are still evolving.

The effects of rhododendron grayanotoxin poisoning are what one might expect given that they stem from the disruption of nerve function. Dizziness, confusion, and blurred vision are sure to follow a diminution of neuron signaling in the brain. Likewise, insidious side effects on autonomic systems take a toll; the heart beats more slowly, and blood pressure can drop enough to induce a loss of consciousness. Since nerves do everything, a panoply of effects has been reported, ranging from numbness around the mouth and excessive salivation to vomiting and diarrhea. Because humans don’t as a rule eat leaves and flowers, most reported human health effects concern the consumption of toxic honey, which is brown and bitter rather than golden and sweet. Since bitterness is a taste sensation that evolved to protect against inadvertently consuming poisons, it is unclear why anyone would eat tainted honey in the first place (excepting virility, which trumps reason). However, cattle, sheep, goats, and donkeys do eat rhododendron leaves and consequently fall victim to its poison. The toxic dose for cows is 0.2 percent of body weight (about one kilogram), with symptoms appearing about three hours later that last for several days. Fatalities are not uncommon, in part due to the ruminating mastication of cows; chewing toxic cud can only release more poison. Domestic cats and dogs will on occasion consume the azalea type of rhododendron that is widely planted in gardens; the characteristic symptoms of gastrointestinal distress result. [10]

Plants create toxic chemicals for a reason – usually to deter animal predation. Heath Family plants are no exception; grayanotoxin was likely an evolutionary mutation that kept herbivorous animals at bay, which it does. In some cases, a priori plant chemical defenses can be co-opted by humans to take advantage of their toxicity. This is especially true when a plant (or fungus) has evolved to ward off microbes or bacteria that are equally threats to the health of humans, becoming an antibiotic. Because grayanotoxin acts to disrupt neural activity, it would seem an unlikely candidate for medicinal use owing to its profound, disturbing effects. However, there is ample evidence that it was used by Native Americans for a variety of applications. [11] The Cherokee used it both as an external poultice for rheumatic pain and as a treatment for skin abrasions. This may merely have been a placebo that was thought to work, and so it did. The rhododendron was apparently also used for various purposes having nothing to do with health, such as to “throw clumps of leaves into a fire and dance around it to bring cold weather.” [12] It is also reported that Native Americans made a tea from the leaves that was “taken internally in controlled doses for heart ailments.” The same guide notes “leaves toxic, ingestion may cause convulsions and coma.” [13] There has been some recent research concerning the use of rhododendron compounds for specific ailments. For example, diabetic rats treated with grayanotoxin produced more insulin, presumably due to some form of nerve stimulation. All things considered, it is probably best to avoid it altogether, in spite of any number of herbal remedies containing rhododendron extract that supposedly produce salubrious effects. [14]

Heath Family shrubs (Ericaceae) are masters of their chosen environments, which include the understory of trees at higher elevations and craggy berry bogs. They have help in the form of specialized fungal partners that envelop their roots, providing soil nutrients like phosphorus and nitrogen in exchange for the sugars generated by photosynthesis. This relationship is called mycorrhizal, derived from the Greek words for fungus (mykes) and root (rhiza), literally “fungus root.” While almost all (~90 percent of) plants have mycorrhizal fungal partners, most are either in the form of fungal sheathes surrounding the outside/ecto of the root (ectomycorrhizal – mostly trees) or fungal branches that penetrate into/endo the root cells (endomycorrhizal – other plants) to form little tree-like structures called arbuscules. Ericoid mycorrhizas combine the two forms, both surrounding the roots and penetrating the cells, to even greater effect. It is now well established that trees and shrubs (like rhododendron) share and balance nutrients to maintain a healthy ecosystem through their interconnecting fungal-root networks, facetiously the “wood wide web.” [15] The effectiveness of the outer and inner “ectendomycorrhizas” of heaths in promoting interconnected communities is such that they can and do completely take over a habitat. This can be a problem when rhododendrons are introduced to non-native environments. For example, Rhododendron ponticum was introduced to the UK from Iberia in 1763 and has spread to crowd out native trees, covering over three percent of all woodlands. Once established, it is almost impossible to extirpate. [16]

Rhododendrons, in spite of invasive tendencies in some regions, are among the most popular horticultural plants. Rhododendron is the most diverse genus of the Heath Family, with more than a thousand identified species, and there is a Global Conservation Consortium for Rhododendron that seeks to promote and protect all species from extinction. Their ecological relevance is of particular importance where they “underpin livelihoods in regions where they protect watersheds and stabilize steep mountain slopes in the areas where some of the most significant river systems in Asia begin.” [17] The rhododendron collection at the renowned Royal Botanic Gardens at Kew is among its most cherished, with over 3,000 species of which 300 are threatened with extinction. They were in many cases discovered, named, bred, and donated by the generation of British plant hunters who plied the globe during the nineteenth century. [18] So far as is known, none of them were affected by mad honey, their virility apparently well established.

A near impenetrable stand of rhododendron crowds out all other vegetation


1. Brown, R. and Brown, M. Woody Plants of Maryland, Port City Press, Baltimore, Maryland, 1999, pp 247-254.

2. Irving, E. and Hebda, R.  “Concerning the Origin and Distribution of Rhododendrons”. Journal of the American Rhododendron Society. 1993 Volume 47 Number 3.

3. Xenophon. “4.8.19–21”. In Brownson CL (ed.). Anabasis. Perseus Hopper. Department of Classics, Tufts University. https://www.perseus.tufts.edu/hopper/text?doc=Xen.%20Anab.%204.8&lang=original

4. Durant, W. Caesar and Christ, The Story of Civilization Volume 3, Simon and Schuster, New York, 1944, pp 516-519.

5. Lane R. and Borzelleca J. “Harming and Helping Through Time: The History of Toxicology”. In Hayes AW (ed.). Principles and methods of toxicology (5th ed.). 2007, Boca Raton: Taylor & Francis.

6. Harris, M. Botanica North America, Harper-Collins, New York, 2003, pp 60-61

7. Demircan A. et al. “Mad honey sex: therapeutic misadventures from an ancient biological weapon”. Annals of Emergency Medicine. 15 August 2009 Volume 54 Number 6 pp 824–829

8. “Grayanotoxins”  Bad Bug Book: Handbook of foodborne pathogenic microorganisms and natural toxins (2nd ed.). Food and Drug Administration. 2012. https://www.fda.gov/media/83271/download   

9. Dunn, C. “Neurons that connect without synapses”. Science 21 April 2023 Volume 380 Issue 6642 pp 241, 293.

10. Jansen S et al . “Grayanotoxin poisoning: ‘mad honey disease’ and beyond”. Cardiovascular Toxicology. 19 April 2012 Volume 12 Number 3 pp 208–215.

11. Popescu, R. and Kopp, B. “The genus Rhododendron: an ethnopharmacological and toxicological review”. Journal of Ethnopharmacology 2 May 2013 Volume 147 Number 1 pp 42–62.

12. Ethnobotany database at http://naeb.brit.org/uses/search/?string=rhododendron

13. Duke, J. and Foster, S. Medicinal Plants and Herbs, Houghton-Mifflin, Boston 2000, p. 260.  

14. Jansen, op. cit.

15. Kendrick, B. The Fifth Kingdom, Third Edition, Focus Publishing, Newburyport, Massachusetts, 2000, pp 257-278.

16. Simons, P. “A spectacular thug is out of control”. The Guardian. 16 April 2017

17. https://www.globalconservationconsortia.org/gcc/rhododendron/  

18. https://www.kew.org/


Common Name: Starling, European Starling, Common Starling – The vocal, gregarious songbird extended across broad swaths of Eurasia even as the Indo-European language groups were differentiating. The Old English stærlinc was probably derived from stearn, a type of tern. The similarity to the Old German stara and the Prussian starnite is indicative of a pan-European origin without any meaning beyond that of the well known bird.

Scientific Name: Sturnus vulgaris – The Latin name for the starling is sturnus, with similar Indo-European origins. Vulgaris means “common” in Latin, as the epithet vulgar suggests.

Potpourri: The European or common starling was intentionally introduced to North America in the nineteenth century as part of a cultural movement that sought to improve habitats from both an aesthetic and a practical perspective. This practice extended to medicinal plants and herbs like coltsfoot and plantain but was expressly focused on birds. The starling, noted for its ravenous consumption of insects, was considered a boon to farmers in the extirpation of crop pests prior to the adoption of chemical pesticides in the middle of the last century. It was also considered a cultural icon in Europe for its prodigious and varied song, frequently mimicking other birds and, as a pet, human speech. What’s not to like? The starling has thrived to the extent that it has become a problem on a scale comparable to pigeons in the park and Canada geese on the golf course. Bird as pest is a contradiction in terms. While society bemoans the loss of birds to glass buildings and wind farms, urban jurisdictions must manage huge starling flocks with acres of droppings and rural agronomists must account for purloined produce. It is a complicated story that begins in New York City’s Central Park.

The hackneyed version of the starling invasion blames Eugene Schieffelin, a wealthy Manhattan patrician who had made his money in drugs, presumably legal. As an amateur ornithologist, he became a member of the American Acclimatization Society with the purported goal of introducing every one of the 600 avian species included in the copious works of William Shakespeare. To that end, Schieffelin released approximately 100 starlings in Central Park between 1890 and 1891. This initial introduction incontrovertibly resulted in the 200 million starlings now flocking from coast to coast, wreaking havoc on harvests and despoiling city streets. Accounts typically include a passage from Shakespeare’s Henry IV in which a bothersome rebel named Hotspur proposes to disturb the king’s sleep by teaching a starling to say the name “Mortimer,” an earl Henry distrusted (Henry IV, Part I, act 1, scene 3). [1] The account of Schieffelin’s starlings is usually trundled out to lambast the arrogance and ignorance of the powerful elite of the past for instigating environmental disasters of the present.

Histories that fail to account for the culture and knowledge of their own time and place are sophistry. The Schieffelin account is true as far as the act of starling release goes but widely misses the mark as to motivation and expectation. The exchange of flora and fauna between Eurasia and the Americas had been going on for over four hundred years by 1890, sometimes intentional and beneficial but frequently happenstance and harmful. Horses, wheat, and cattle were introduced by colonists for work, transport, and food. Influenza, smallpox, and diphtheria stealthily disembarked, decimating native populations. In return, turkeys, potatoes, and tobacco offered new and exotic tastes and temptations to the Old World. Syphilis was purportedly carried back to Spain by Columbus’s sailors and spread throughout Europe as the “French Disease.” [2] By the nineteenth century, global integration had seemingly run its course with largely benign results.

The acclimatization movement arose in France in the 1850s as an idea proposed by the naturalist Isidore Geoffroy Saint-Hilaire. The introduction of species from one continent to another in order to better understand their adaptation to new environments was one of its primary enterprises. The American Acclimatization Society was organized in New York in the 1860s with a more nuanced goal of improving beauty and diversity, with an emphasis on birds. In 1877, a Mr. Conklin of the Central Park Museum reported at a meeting of the society that the commissioners of Central Park had released 50 pairs of English sparrows and that they had “multiplied amazingly.” They also freed some starlings because these birds were “useful to the farmer and contributed to the beauty of the groves and fields.” [3] This was just one of numerous attempts on both coasts to acclimatize the starling to the New World.

Problems with species introduced to a new region absent the checks and balances of native predation and other environmental limits first became manifest in the late nineteenth century. In 1886, Clinton Merriam, the first Chief of the USDA Division of Ornithology and Mammalogy, warned of the damage to grain, seed, and vegetable crops caused by the importation of harmful birds (notably English sparrows) and mammals (notably European rabbits). Ten years later, Theodore Palmer, the Assistant Chief of the USDA Biological Survey, advocated for federal legislation because “the animals and birds which have thus far become most troublesome when introduced into foreign lands are nearly all natives of the Old World,” specifically calling out the European starling for crowding out benign insectivorous native birds in addition to eating farmed crops. The Lacey Act of 1900 was the first major federal legislation concerning wildlife management, named for its originator, a representative of the farmers of Iowa. Introducing the term “injurious” as a category of animal, its intent was to “regulate the introduction of American or foreign birds or animals in localities where they have not heretofore existed.” [4] It is still in force to this day; invasive has supplanted injurious as the pejorative of choice.

What about Shakespeare? Schieffelin’s contribution to starling scatology would have escaped notice altogether had he not been named as perpetrator by Frank Chapman, a preeminent American ornithologist who initiated Audubon Magazine and the annual Christmas bird count. During his long career at New York’s American Museum of Natural History, he came to know Schieffelin, who would periodically stop by to check on the status of starlings. In the seminal 1895 Handbook of Birds of Eastern North America, Chapman credited Schieffelin with responsibility for their introduction. Fifty years later, the nature writer Edwin Way Teale published an account stating unequivocally that Schieffelin’s “… curious hobby was the introduction into America of all the birds mentioned in the works of William Shakespeare.” This assertion was apparently an extrapolation from the development of a garden in Central Park where plants associated with the bard were planted … starting in 1916, ten years after Schieffelin’s death. [5] The attribution of starling introduction to Henry IV is surely poppycock.

There is an aesthetic aspect of starlings that has been overshadowed by the cacophony of their massive flocks―they are mimics nonpareil. According to the diary of Wolfgang Amadeus Mozart, he purchased a pet starling on 27 May 1784, annotating the entry with a musical transcription of its whistled song. Three years later, he led a funeral procession of dirge-singing mourners and eulogized his avian companion at its graveside with poesy: “A little fool lies here whom I held dear, a starling in the prime of his brief time, not naughty quite, but gay and bright, and under all his brag, a foolish wag.” The starling’s tune as recorded for posterity by Mozart was nearly identical to the final movement of his Piano Concerto in G Major, K. 453, which he composed at about the same time as he adopted the starling. This factual yet eerie coincidence can only have occurred if the starling had learned the tune from Mozart, who truly admired its musicality and therefore mourned its death. In all probability, Mozart strolled about Vienna whistling his compositions and wandered into a pet shop, perhaps more than once; the starling therein learned the tune from him, earning the eternal sobriquet of “Mozart’s Starling.” Circumstantial evidence of Mozart’s reputation for whistling tunes as they came to his head and his fondness for birdsong … he had a canary as a youth … supports this thesis.

The vocalization skills of the starling were well known to the Romans and certainly also to the Greeks, whose culture they absorbed. The naturalist Gaius Plinius Secundus, known as Pliny the Elder, wrote that starlings “practiced diligently and spoke new phrases every day, in still longer sentences” in both Latin and Greek. Certainly Shakespeare and his sixteenth century audience were well aware of the tonal dexterity of the mimicking starling that could be taught to invoke the name “Mortimer”―the jest would otherwise fall flat. In a recent quasi-scientific experiment with a group of starlings sharing a house with a small group of bird researchers, their innate audio habits were manifest: various birds repeated phrases including “we’ll see you soon,” “give me a kiss,” and fragments of the Star-Spangled Banner. Mozart composed a piece called A Musical Joke (K. 522) shortly after the death of his pet. It is described as “awkward, unproportioned, and illogical,” going on interminably to end in “a comical deep pizzicato (plucking) note.” This would also be a good description of the starling’s repertoire of screeches, clicks, and whistles from which it concocts a verisimilitude of human speech. Was this Mozart’s epitaph for his pet starling? It is more than a possibility, as he is otherwise known for melodic virtuosity. [6]

The starling of Mozart’s affection and Schieffelin’s obsession morphed into the scurrilous scavenger of the twenty-first century by being too successful a species. In 1915 the USDA launched a comprehensive survey of the effects of the starling in North America that included surveys of farmers and the examination of the stomach contents of thousands of birds. Based on the findings that starlings ate more pests and consumed fewer crops than native birds, the researchers concluded that “the starling possesses an almost unlimited capacity for good.” After over a century of profligacy, the limits of starling goodness have become manifest. According to an updated USDA study, starlings consume or otherwise despoil $800 million worth of agricultural crops every year, spread infectious diseases to both humans and farm animals that cost an additional $800 million, and crowd native birds out of nesting sites. A database of starling migration paths was recommended to track nuisance concentrations and allow for targeting them with “improved baits and baiting strategies,” clearly a euphemism for poisoning. Starlicide is a USDA-approved product to control starlings and blackbirds even though it is “toxic to other types of birds in differing amounts.” But this is supposedly all right because the birds experience a “slow, nonviolent death.” [7] This policy begs for a research project to assess its efficacy. Adding poisons to the environment to control highly adaptable birds that will evolve to avoid or tolerate them cannot be good public policy.

A flock of starlings is called a murmuration, not so unusual as bird collectives go―convocations of eagles and parliaments of owls among them. The name is an onomatopoeia for the sound made by careening masses of starlings maneuvering in giant formations, their flapping wings and muted calls creating low, indistinct noises. These individual starling murmurs combine into a murmuration that can comprise well over half a million birds. Rising in the late afternoon, murmurations pulsate in amorphous blobs of organized chaos that have long intrigued ornithologists. The prevalent theory is that the behavior is instinctive, motivated by safety in numbers, attracting outliers to join so that all can more safely settle on a place to roost for the night. Using multiple cameras at different angles to track individual birds and combining the images in 3D computer models, researchers have found that there is no leader; each bird synchronizes with its seven nearest neighbors. The undulating bulges of birds correlate to perturbations attributed to the “selfish herd effect” as birds on the edges move inward to the safety of the center. After about an hour, they descend en masse. [8]

According to the International Union for Conservation of Nature (IUCN), the starling (along with the myna in the starling family Sturnidae) is among the world’s 100 worst invasive species based on its “serious impacts on biological diversity and/or human activities.” [9] Rome, Italy, is the epicenter of roosting starlings, which have been coming south from all over Europe to overwinter in its balmy Mediterranean climate since the 1920s. Spending days feasting in groves of olive trees and the farmland of the surrounding countryside, starlings congregate in the late afternoon to meet up for the nightly roost. Once situated, they relieve themselves of excrement that coats whatever lies below with a slick mass of olive-oily slime. Street closures must be invoked to prevent motor bike crashes. Parked cars are encased in an implacable sarcophagus of starling scat. Attempts to stem the avian tide, ranging from outright poisoning to the introduction of predatory raptors like hawks, have failed due to the adaptability of starlings, the reason for their ubiquity. The only effective strategy has been relocation. Rome’s environmental department devised a technique employing a recording of a starling screeching in distress (induced in a laboratory) that is broadcast with amplified bullhorns to disrupt the roost. Generally, after the third day of being chased away, starlings opt for a less contested and congested roost as a bird-man compromise. [10]

The starling’s overwhelming success as an individual species is a serendipitous result of natural selection. Other than proliferation and vocalization, they are undistinguished, just one of about 6,500 species of the order Passeriformes that make up about half of all species of the class Aves. Usually called songbirds, they are classified taxonomically according to the configuration of their feet. Three claws forward and one back promote grasping and perching on tree branches―they may best be thought of as perching birds that sing. [11] Like almost all other birds, starlings are monogamous, sharing parental duties in nest building, egg incubating, and chick feeding (up to 20 times per hour). Whether the male and female actively coordinate these activities so as to share them equally is a subject of ongoing research. [12] Depending on latitude, they produce up to two clutches of six eggs every year with a success rate of up to 80 percent. While this would nominally result in a Malthusian progression of an additional ten birds per couple every year, only about 20 percent of the chicks survive to reproductive age. Two chicks per couple annually is still enough for a population explosion. Starlings are omnivores, with a daily consumption of about 15 grams of mostly insect animal food and 30 grams of plant food. Foraging in locations that range from orchards and feed lots to urban landfills, they can readily provision their nests, typically tucked away in nooks of man-made structures. [13]

Starlings have figured out how to make a living in a world otherwise overrun by humans, taking advantage of the terraforming that defines our habitats. While they have become invasive, one might offer the same assessment of Homo sapiens. It should come as no surprise that the class Aves produces individual species that manage to overcome the most challenging environments with unsurpassed survival skills, the penguins of Antarctica and the gooney birds of Midway Island among many others. Avian survival of the meteor impact darkness of the Cretaceous–Paleogene Extinction 66 million years ago as the only representatives of the dinosaurs established a genetic heritage of resilience. According to recent DNA evidence, the starling family emerged about 6 million years ago during a less dramatic but equally challenging global climate transition. Originating in Asia, starlings spread at about the same time as C3 plants were being replaced by C4 plants, a hallmark of a drying climate and a shifting global carbon cycle. These plants, like corn or maize, sedges, and sugar cane, are more efficient than C3 plants in hot, dry conditions with low levels of atmospheric carbon dioxide. It is likely that the peculiar starling jaw muscles first evolved to meet the C4 food challenge. Unlike most birds, which have strong muscles to close the bill, starlings have the opposite: protractor muscles to open the bill. This provides the ability and propensity to penetrate narrow slits and pry them open, exposing plant or animal food otherwise protected. The clever and adaptable starlings radiated westward, becoming the European starling. [14]

Cities are the anthropogenic monuments of civilization. The natural world is buried beneath megatons of concrete interwoven with tunnels for trains, sewers, water, and electricity. The plants and animals that were displaced are banished to waste areas if they survive at all. In the grim and gray concrete canyons, there is no life other than planted trees, manicured lawns, and an occasional park to remind the humans who abide therein that nature really does exist. The few animals, like birds and squirrels, that have learned to live with the hubris of human occupation are, if anything, a blessing. Aside from providing a reminder that we are not really alone, they offer the beneficent function of clearing the streets of the uneaten bread crumbs sourced from food trucks and tossed aside as a measure of disdain for the earth we live on. The stolid starlings do not let it go to waste, true to their exceptional avian survival skills.

Starlings scramble after breadcrumbs on Pennsylvania Avenue in Washington DC


1. Mirsky, S. “Antigravity: Call of the Reviled.” Scientific American, June 2008

2. Smithsonian History of the World Map by Map, Random House, London, 2018, pp 158-159

3. “American Acclimatization Society” New York Times, 15 November 1877.

4. Jewell, S. “A century of injurious wildlife listing under the Lacey Act: a history”. Management of Biological Invasions Volume 11 Issue 3 pp 356–371. https://www.reabic.net/journals/mbi/2020/3/MBI_2020_Jewell.pdf

5. Miller, J. “Shakespeare’s Starlings: Literary History and the Fictions of Invasiveness.” Environmental Humanities 1 November 2021 Volume 13 Number 2 pp 301–322.

6. West, M. and King, A. “Mozart’s Starling”  American Scientist. March–April 1990.  Volume 78 Number 2 pp 106–114.

7. Linz, G. et al. “European starlings: a review of an invasive species with far-reaching impacts”. Managing Vertebrate Invasive Species. USDA Paper 24 pp 378–386.

8. Langen, T. “Why do flocks of birds swirl in the sky?” Washington Post, 12 April 2022.

9. http://www.iucngisd.org/gisd/search.php  

10. Harlan, C. and Pitrelli, S. “A stunning spectacle – and a huge mess.” Washington Post, 15 January 2023

11. Alderfer, J. ed  Complete Birds of North America, National Geographic Society, Washington, DC, 2006, pp 502-504.

12. Enns, J. “Paying attention but not coordinating: parental care in European starlings, Sturnus vulgaris” Animal Behavior 2022. USDA Agricultural Publication.

13. Linz, op. cit.

14. Zuccon, D. et al. “Phylogenetic relationships among Palearctic – Oriental starlings and mynas”  Zoologica Scripta 10 April 2008 Volume 37 No. 5 pp 469–481.

Narcissus (aka Daffodil)

The Harbinger of Spring

Common Name: Daffodil – The origin of the word daffodil is obscure. The prevalent theory is that it is a corruption of asphodel (from the Greek asphodelos, which has no etymology beyond the name of the flower), a wild flower native to Eurasia noted for its association with the underworld in Greek mythology. The addition of the letter “d” is attributed to the French use of “de,” as in Charles de Gaulle, to indicate origin. When this precedes a vowel, the apostrophized version is used, as in D’Artagnan, the fourth of Dumas’ Three Musketeers. Presumably daffodil started as “d’asphodel” and was gradually Anglicized.

Scientific Name: Narcissus spp – The genus name is the common name for the flower outside the lingua franca influence of the nineteenth century British Empire where daffodil prevailed. Spp is an abbreviation for species when the subject at hand is all of the species within a genus. Narcissus is derived from the Greek narkissos which is a variant of narke, meaning numbness. This is attributed to the use of the plant for its narcotic properties, a word with the same narke etymology.

Potpourri: The daffodil or narcissus is one of the most well known, storied, and beloved flowers of the Mediterranean Basin. It is not considered a wild flower in North America because it was introduced from Europe, becoming naturalized over the ensuing centuries. It is just as much a native of the Americas as are European, Asian, and African Americans whose ancestors also came from abroad. It is nonetheless wrongfully shunned by wild flower aficionados as a horticultural imposter, meant for gardens but not nature. As if in spite, the flowers have spread far and wide from their initial introduction by settlers during the diaspora inland from the coastal colonies. Daffodils are frequently found in isolated forest tracts as vestiges of antebellum homesteads long since abandoned, marking their location in perpetuity, an echo of the comely Narcissus of Greek mythology.

The Oreads of Greek mythology were nymphal deities of forests and mountains, noted for their charm and beauty in contrast to their bestial counterparts, the goat-bearded satyrs and horse-tailed centaurs. An Oread named Echo was an attendant of Hera, chattering incessantly to distract her from curtailing the sexual exploits of her husband Zeus. In punishment, Hera rendered Echo mute except to repeat the last syllable of a word spoken to her. Echo fell in love with a young Thespian (a native of the town of Thespiae, not an actor; the latter sense derives from Thespis, a Greek poet and allegedly the first actor) named Narcissus, who haughtily spurned her affections. In grief, she fled to a lonely cavern, where she perished, leaving only her voice as an echo. Narcissus was punished by the gods for his hubris with an ironically appropriate curse … to fall hopelessly in love with his own image. While leaning over the reflecting surface of a mountain spring, he was so stricken that he could not tear himself away and expired, the flower sprouting there as his namesake. [1] In the words of Ovid:

Narcissus on the grassy verdure lies

But whilst within the crystal font he tries

To quench his thirst, he feels new thirst arise

For as his own bright image he surveyed

He fell in love with the fantastic shade

And o’er the fair resemblance hung unmoved

Nor knew, fair youth! It was himself he loved. [2]

It is generally thought that the myth of Narcissus gave rise to the flower named narcissus; in all probability it was the other way around. The word narcissus has the same etymology as narcotic and was almost certainly first applied to the flower for its use as an herbal remedy. The name Narcissus was not all that unusual. The Roman emperor Claudius was an able administrator who ruled with distinction, winning the admiration and affection of the citizens of Rome. Following the practices of Caesar and Augustus, he appointed ex-slave freedmen to administrative positions. Narcissus was the most prominent as ab epistulis (meaning “for communications”), essentially secretary of state. He became the richest man in Rome with a net worth of 400 million sesterces ($60B) ill-gained through extortion and coercion. When Claudius’s fourth and final wife Agrippina gained control from her aging husband and convinced him to adopt Nero, her son from a previous marriage, as his heir apparent, the days of Narcissus were numbered. As a codicil to the sordid tale, Agrippina did Claudius in with a poisonous mushroom―Nero as subsequent emperor concluded that “mushrooms must be the food of the gods, since by eating them Claudius had become divine.” [3] Narcissus was stripped of wealth and power and ended up in a (flowerless) dungeon.

Narcissus of reflecting pool fame has retained name recognition in the modern era as a term rooted in psychiatry. The classification of mental illnesses is a greater challenge than that of physical illnesses because there are essentially no quantitative measures. In almost every case, diagnosis must primarily be inferred qualitatively from what an afflicted patient says with some correlation to observed behaviors. The American Psychiatric Association (APA) first established “a statistical classification of institutionalized mental patients” in 1844 to “improve communications about the types of patients.” The Diagnostic and Statistical Manual of Mental Disorders, generally abbreviated as DSM, was started after the Second World War and is now in its fifth edition, covering everything from ADHD to Voyeurism Disorder (12% in males and 4% in females). Narcissistic Personality Disorder is defined as a “pervasive pattern of grandiosity, need for admiration, and lack of empathy.” Among its indications are exaggeration of achievements (inaugural crowd), fantasies of brilliance (stable genius), and a sense of entitlement (6 January 2021). [4] No flowers there either.

The beauty and tantalizing attraction of the floral narcissus was well established in Ancient Greece. The poet Theocritus wrote of the fair Europa, who entered with her nymphs into a meadow to gather the sweet-smelling narcissus. There she spotted a gentle and majestic bull. As he graciously offered his back, she climbed on, festooning his horns with flowers. She was unwittingly abducted and carried across a vast sea. The rape of Europa by the taurine Zeus on this far-flung shore is the unlikely source of the continent’s name; the flowers must surely have been narcissi. A more telling mythological account provides a direct association between narcissus and its narcotic origins. Persephone was the daughter of (the undisguised) Zeus and Demeter, the goddess of the soil and the original “Earth Mother.” According to one version of the “abduction of Persephone,” she was lured to a field by the presence of striking yellow flowers created for that purpose by Hades, god of the underworld. Taking advantage of her floral distraction, Hades pounced, abducting her to his realm deep within the bowels of the earth to become his wife. The narcissus thus became both the flower of deceit and the flower of imminent death. The choice of narcissus as the flower of the goddess of the underworld was indicative of its widely known and potentially deadly toxic properties. [5] It is of equal note that asphodel, the flower that gave rise to the name of the daffodil, is also associated with the mythological underworld.

The medicinal properties of the bulbs of plants in the Amaryllis family, to which the genus Narcissus belongs, were well established in antiquity. The choice of medicinal as opposed to toxic is intentional, as many herbals used in treatments against disease are effective because they are toxic to some organisms or cells. Dosage for a specific application is critical, overdose often being associated with demise. Hippocrates (460-370 BCE), the father of medicine and the alleged originator of the “first do no harm” oath as physician’s touchstone, used narcissus in his practice as a treatment for tumors. Narcissus as chemotherapy to eradicate cancerous cells was still in use four centuries later by Pedanius Dioscorides and included in De Materia Medica, the first pharmacopeia. [6] The physicians of the subsequent Roman Empire spread the use of narcissus throughout the Mediterranean Basin north to Gaul and Britain. Gaius Plinius Secundus, known as Pliny the Elder, extended the use of narcissus to the treatment of sixteen different conditions ranging from the original tumors to burns and the “cure of contusions and blows inflicted by stone.” He further pointed out that narcissus is “injurious to the stomach and hence it is that it acts both as an emetic and as a purgative. It is prejudicial also to the sinews and produces dull, heavy pains in the head.” Because of this, Pliny asserted that “it has received its name from ‘narce’ and not from the youth Narcissus, mentioned in fable.” [7]

The Dark Ages that followed the Pax Romana were noted for religiosity absent the humanism of Greece. Petrarch’s perusal of Cicero’s letters launched the Italian Renaissance and the eventual rediscovery of medicine as science supplanting superstition. By the late sixteenth century, Greco-Roman treatments were dutifully transcribed into various publications and made widely available. John Gerard’s Herball attributes his information on narcissus to Galen, physician to several Roman emperors, as “having such wonderful qualities in drying that they consound and glew (sic) together very great wounds.” [8] It took another three centuries for the maturation of the scientific method to rescue the suffering population from bloodletting and quacks with magic potions. In the late nineteenth century, a smelly, yellow substance named, appropriately enough, narcissine was extracted from the flowers, and an alkaloid named pseudo-narcissine was isolated from the bulbs. While narcissus extracts were noted for their use as emetics and narcotics in the treatment of a range of conditions including fever, diarrhea, and worms, they were considered, in large doses, “an active and even dangerous article.” Several grains of the powder were enough to induce vomiting. [9]

Modern chemical and laboratory methods have revealed that plants of the Amaryllis family produce over 300 alkaloids, most of which are unique. One third of these compounds are found in the genus Narcissus. Alkaloids are amine (nitrogen-containing) bases produced by many plants; many are toxic. Due in part to the extensive history of the use of narcissus in the treatment of various diseases, which in some cases must certainly have been effective, there has been some academic and even pharmaceutical interest in characterizing them. About forty species of wild narcissus have been assayed, revealing that each species has a predominantly different group of related alkaloids. [10] Some clinical research has been conducted using modern methods and protocols to demonstrate that, in fact, Hippocrates was right. Lycorine, the very first compound extracted from narcissus in 1877, has been shown to be effective in the treatment of cancers, notably leukemia and melanoma. The largely surgical and chemotherapy cancer treatments of the past are increasingly being supplanted by plant-derived drugs. In the last three decades, a full 80 percent of all new cancer drugs have been derived from natural products. [11] This goes well beyond cancer. Narcissus extracts have been shown to act as antiviral, antibacterial, antifungal, antimalarial, insecticidal, emetic, and antifertility agents, as pheromones, and, last but not least, as plant growth inhibitors (to stifle competition). [12]

The alkaloid diversity of the genus Narcissus suggests that each species independently generates compounds that must in some way be related to habitat and physiology. The phenological growth of narcissus in early spring, when there are few other food sources for the hungry animals that survived winter, can only have been possible by being unpalatable. It certainly makes sense that plants that have struggled to survive for millennia must have done so through the trial and error of random mutation and natural selection. The proliferation of amaryllids in general and narcissus in particular is indicative of a successful evolutionary path that has expanded their numbers in kind and in quantity. Daffodils are everywhere. There is a very good reason. They readily hybridize and reproduce, using both seeds and bulb offsets to expand radially from the epicenter of a single bulb. [13] The Royal Horticultural Society of Great Britain lists 162 cultivars, ranging from 'Abigail Collette', named for the registrant's granddaughter, to 'Zara's Delight', named for the registrant's daughter, and including 'Grumpy Penguin', named for the characterization of the registrant in a video made by his grandson Jake. [14]

Narcissi cum daffodils are long-term survivors in nature's combative arena. This is evident not only in their geographic reach outward from Iberia, but also in their persistence once established―they not only flourish but expand radially over time. This is in part due to the alkaloids that are repellent to bulb-digging mammals and gnawing insects. It is also due to extraordinary reproductive diversity. The color, shape, and scent of flowers have nothing to do with human perception. Flowers rather function to attract mobile pollinators to transport male pollen from the stamens of one flower to the female pistil of another. The cross-pollination that ensues supplies the variation on which Darwin's natural selection acts and is why flowering angiosperm plants have been so successful. Narcissus is a master of floral diversity, having styles, the connecting tubes of the pistil that lead to the ovary, that vary both in length and in number, a condition technically termed heterostylous polymorphism. There can be no doubt that these mutations were the result of different pollinators in different habitats having different behaviors. The overall design of narcissi is to attract long-tongued solitary bees. [15] And should the bees never arrive, there is a workaround. The narcissus is self-pollinating, an adaptation that virtually guarantees fertilization from its own pollen, sacrificing diversity for survival.

Spread of Narcissus in Shenandoah National Park marking a homestead long abandoned.

The narcissus is a bulbous perennial that can also reproduce asexually. Starting in spring from a germinating seed, roots extend downward to form a small bulb where food reserves from the photosynthetic leaves are stored. At the end of the first year, the roots and stem detach, leaving only the bulb to overwinter. Bulb growth continues in the second year, and the plant initiates production of calcium oxalate crystals called raphides, which render it unpalatable and therefore protected from ground-dwelling animals. Bulb growth continues for the next several years as the narcissus has only leaves and no inflorescence. At full maturity, which occurs between five and seven years, the bulb has enough stored energy to create the stalk and blossom for sexual reproduction. Full maturity also results in the formation of a lateral shoot that extends horizontally, eventually developing its own roots and breaking away as a separate, cloned bulb. This is the mechanism whereby one bulb and one flower become many bulbs and a garden of flowers over the years. To make sure that the bulb is at the correct depth in the soil for optimal growth potential, the roots that extend from the bulb are contractile, pulling it downward as needed. [15] So what could be better for a flower to festoon human habitations? The narcissus is almost indestructible, its golden flutes the harbingers of the renaissance of spring.

Solar Energy

It is tempting to think of solar energy as a panacea for the climate change problem, providing limitless carbon-free electricity. The sun has been radiating the power of fusion for 4.6 billion years. While only a small percentage is directed toward the Earth, it is enough to have sparked the evolution of life that it has since sustained. It is the source of the energy of coal, natural gas, and oil, and the font of the photosynthesis on which plants, fungi, and animals all ultimately depend. The Panglossian fix to the global warming problem is to stop using the sunlight energy stored as fossil fuel and start collecting sunlight energy directly. Back of the envelope calculations show that the sun's nominal one kilowatt per square meter of power provided globally is more than adequate. Take any large tract of sun-drenched desert, fill it with solar panels―voilà, case closed. The Sahara is usually the desert of choice due to its size and sunlight extremes. The energy falling within its torrid borders is three million billion watts, two hundred times the current global energy demand. Similarly, the western deserts of North America could be empaneled to produce fifty times the energy needs of the United States. [1]
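The back-of-the-envelope arithmetic above can be checked in a few lines. The area, insolation, and demand figures below are illustrative round numbers chosen for the sketch, not values taken from the text:

```python
# Rough check of the Sahara claim. Inputs are assumed round numbers:
# Sahara area ~9.2 million square kilometers, round-the-clock average
# insolation ~325 W/m^2, global primary power demand ~18 TW.
SAHARA_AREA_M2 = 9.2e6 * 1e6        # km^2 converted to m^2
AVG_INSOLATION_W_M2 = 325           # day/night and weather averaged
GLOBAL_DEMAND_W = 18e12             # ~18 terawatts

sahara_power = SAHARA_AREA_M2 * AVG_INSOLATION_W_M2
print(f"Sahara intercepts ~{sahara_power:.1e} W")   # ~3e15 W
print(f"Multiple of global demand: ~{sahara_power / GLOBAL_DEMAND_W:.0f}x")
```

The result, about three million billion watts, matches the figure quoted above; the "two hundred times" multiple depends heavily on the assumed average insolation.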

Why this is not really the case is a matter of chemistry, physics, engineering, and economics. Solar panels are called photovoltaic (PV for short) because they collect sunlight energy (photo) and convert it to electrical current generated by a voltage gradient (voltaic). The individual solar cell is the sine qua non of any solar energy system supplying electricity to the grid. Solar cells have their origin in research into the properties of semiconductors that also led to the development of transistors in the 1950s. A semiconductor is any material that has a conductivity between that of metals such as copper and insulators such as glass. The former conduct electrons readily and the latter impede their movement. Resistance is the inverse of conductance; metals have low resistance and insulators have high resistance. Semiconductors are elements, notably silicon and germanium, that have the number and arrangement of electrons favorable to the generation and transport of a relatively small electrical current that can be controlled with high precision.

The chemistry of semiconductors is established by electrons. A fundamental property of science is that the components of any system will gravitate to a condition of greater stability, which is generally at the lowest energy level. This propensity is manifest in the chemical bond, as the electrons in the outermost or valence subshell of an atom seek to establish a stable state. The idea that stability at the ground, lowest energy state was the basis for all chemical bonding was suggested by the noble or inert gases (helium, neon, argon, krypton, xenon, and radon) that don’t combine with anything else. Argon, the first inert gas to be discovered in 1894 by Lord Rayleigh and Sir William Ramsay as a mysterious trace element in air which is otherwise nitrogen and oxygen, was named for the Greek word argos, which means “lazy.”  In 1923, the American chemist Gilbert Lewis proffered the eponymous Lewis theory of chemical bonding that has four fundamental tenets: (1) elements enter into compounds so as to share or exchange electrons; (2) in some cases, the electrons are transferred from one atom to another (an ionic bond); (3) in some cases, the electrons are shared between the two atoms (a covalent bond); and (4) each of the constituent atoms ends up with an “inert gas” outermost, or valence, electron shell.

The periodic table is arranged according to the progressive filling of electron shells, with elements exhibiting similar characteristics in vertical columns called Groups, numbered left to right from I to VIII (1 to 8). The inert gases are located on the far right. The elements that range across the middle are called metals and those that are near the inert gases on the right are called non-metals. In between metals and nonmetals is a smaller group of elements called the metalloids that exhibit both metallic and non-metallic properties. [2] The semiconductors are metalloids in the same group as carbon (Group IV) with the same bonding characteristics. Carbon is perhaps the most versatile of all elements due to its need for four electrons to complete its outer shell to the inert and stable configuration. It must therefore combine with four other atoms by sharing electrons in covalent bonds. The entire field of organic chemistry concerns carbon compounds, the basis for life. If the four combining atoms are also carbon atoms, the resultant combination is diamond, the hardest natural material known. The versatility of carbon bonding is shared by the semiconductors silicon and germanium that lie just below it in the periodic table―they also form four covalent bonds. Since the shells that contain the valence electrons in these elements are further away from the nucleus (higher energy states) than in carbon, they can more readily be moved into a conducting state. The propensity of semiconductors to release an electron for use in an electrical circuit is enhanced by the addition of elements from either side (Group III or V) into a bonding arrangement, a process called doping. [3] Solar cells are made from doped semiconductors.
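The Group III/V doping rule described above can be sketched as a toy lookup. The dopant list is a standard textbook set, included here as an assumption rather than anything drawn from the text:

```python
# Toy classification of common silicon dopants by periodic-table group.
# Group V donors contribute a surplus electron (N type); Group III
# acceptors leave an electron vacancy, or "hole" (P type).
DOPANT_GROUPS = {
    "boron": 3, "aluminum": 3, "gallium": 3,       # Group III acceptors
    "phosphorus": 5, "arsenic": 5, "antimony": 5,  # Group V donors
}

def doped_silicon_type(dopant):
    group = DOPANT_GROUPS[dopant]
    return "N type" if group == 5 else "P type"

print(doped_silicon_type("phosphorus"))   # N type
print(doped_silicon_type("boron"))        # P type
```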

The physics of solar cell semiconductors is based on the observed phenomenon that radiant energy in the form of photons impinging on some surfaces will result in a flow of electrons. The German physicist Heinrich Hertz named it the photoelectric effect in 1887 after observing that ultraviolet light changed the voltage at which sparking occurred between a pair of metallic electrodes. By the early 1900s, it was determined through further experimentation that the number of electrons released was proportional to light intensity (measured in candlepower, now the candela) and that the energy of the electrons was dependent on the incident light frequency f (or wavelength λ, as they are related by the equation f = c/λ where c is the speed of light). That this could not be explained by classical physics was the impetus for Albert Einstein to propose what is now the fundamental theory of light. He posited that light could be considered as particles (now called photons) instead of waves and that these particles could penetrate an atom, collide with its electrons, and impart enough energy for them to escape from their orbit around the nucleus. The paper he wrote in 1905, entitled "On a Heuristic Viewpoint Concerning the Production and Transformation of Light," was the basis for his Nobel Prize in Physics, awarded in 1922. His work stimulated the then nascent field of quantum theory promoted by the Danish physicist Niels Bohr, who conceived the atomic model of electrons orbiting the nucleus in discrete energy levels called quanta. [4]
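The relation f = c/λ, combined with Einstein's E = h·f, gives the energy carried by a single photon. A minimal sketch using standard physical constants; the example wavelengths are illustrative choices:

```python
# Photon energy from wavelength: E = h*f = h*c/lambda.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron-volt

def photon_energy_eV(wavelength_m):
    """Energy of one photon, in electron-volts."""
    return h * c / wavelength_m / eV

print(photon_energy_eV(550e-9))   # green light: ~2.25 eV
print(photon_energy_eV(1.1e-6))   # near infrared: ~1.13 eV
```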

Physics also establishes the inherent limitations of solar panels because the photoelectric effect only occurs according to inviolate rules. Incoming solar energy must have sufficient intensity at the appropriate frequency to remove one of the outer shell or valence electrons of an atom so that it can become part of the electrical current output of the solar panel. Electrons occur around the nucleus in discrete orbits that are separated into discrete quantum energy levels. The photoelectric effect in semiconductors can only be understood according to the rules of quantum mechanics. An incoming photon of light of sufficient frequency strikes an electron, knocking it from the valence energy band into the conduction energy band; literally a quantum leap. However, the electron must then make its way through the rest of the atoms in the panel to reach the surface, expending energy with every encounter. Einstein accounted for this energy cost with the work function, conventionally written φ (phi). The work function varies with many factors, notably the surface condition of the material, its purity, and what is called the packing arrangement of its atoms in crystalline form. The optimization of the amount of electricity that can be extracted from sunlight must take these factors into account. [5]
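Einstein's accounting can be written as the relation E_k = h·f − φ: a photon ejects an electron only when its energy exceeds the work function. A sketch using the textbook work function of cesium (~2.1 eV) as an assumed input:

```python
# Photoelectric effect: kinetic energy of an ejected electron is the
# photon energy minus the work function; below threshold, no emission.
h_eV = 4.135667696e-15   # Planck constant, eV*s
c = 2.99792458e8         # speed of light, m/s

def ejected_energy_eV(wavelength_m, work_function_eV):
    photon_eV = h_eV * c / wavelength_m
    surplus = photon_eV - work_function_eV
    return surplus if surplus > 0 else None   # None: no electron ejected

print(ejected_energy_eV(400e-9, 2.1))   # violet photon: ~1.0 eV surplus
print(ejected_energy_eV(700e-9, 2.1))   # red photon: None, below threshold
```

Note that intensity never appears: a brighter red lamp ejects no electrons at all, which is exactly the observation classical physics could not explain.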

The use of the chemistry of semiconductors and the physics of the photoelectric effect to produce electricity requires engineering, the practice of putting scientific knowledge to practical use. Engineering is the bridge from the laboratory solar or photovoltaic cell to a practical solar cell that can be used as part of a fielded electrical power supply system. The era of solid state electronics started at Bell Laboratories in the late 1940s, the epitome of electrical research and development rivaled only by Thomas Edison's Menlo Park for its relevance to modernity. William Shockley was tapped just after World War Two to lead the efforts to expand on prewar research that had led to the discovery of what were called P type for "positive" and N type for "negative" silicon semiconductive materials. Serendipity played a key scientific role here. Shipments of silicon that had been received at Bell Labs from various manufacturers were found to have different properties, leading to the hypothesis that the differences were caused by impurities. Further experimentation revealed that P type silicon was contaminated with boron from Group III, one column to the left of silicon in the periodic table, and that N type silicon contained phosphorus from Group V, one column to the right. On Friday the 13th of April 1945, Shockley drew a diagram in his lab notebook for a P-N junction which he called "a solid state valve drawing small control current" that could be used for "controlling the flow of electricity in a conducting path." [6] The solid state transistor to control current in an electrical circuit that he imagined was the harbinger of the information age.

Solar cells followed using the same P-N junction principle. Here the object is not to amplify and otherwise control electrical current but to make electricity out of sunlight. The key to doing this was a matter of materials engineering, using different combinations of semiconductor materials with different additives called dopants to improve efficiency―the amount of electrical energy out relative to the amount of sunlight energy in. For single silicon cells, the maximum theoretical efficiency of 33.7 percent imposed by physics is called the Shockley-Queisser Limit, with more than 50 percent of the sun's energy lost as heat. The importance of doping is straightforward. Antimony from Group V with five valence electrons added to silicon with four valence electrons yields one extra electron that can then be readily removed as current, with both atoms retaining their "inert" configuration of covalent bonds. Bell Labs produced the first operating silicon solar cell in 1954 with an efficiency of 6 percent. [7]

Solar cells for spacecraft became the first practical application of photovoltaic technology. The International Geophysical Year of 1957 to 1958 was initiated in 1950 by scientists from across the globe to promote scientific cooperation. The US and the USSR announced plans to launch earth satellites in 1955. The US program consisted of two publicly announced projects: Vanguard, a three-stage rocket designed by the Naval Research Laboratory, and Explorer, to be launched on a missile designed by the US Army Ballistic Missile Agency. The Soviets were mum until the surprise launch of Sputnik, the world's first artificial satellite, on 4 October 1957, followed by Sputnik 2 one month later carrying a dog named Laika. The first Vanguard launch attempt collapsed in a huge fireball in December, promptly dubbed "Flopnik" by the press. Explorer 1 was launched successfully in January. The solar cell powered Vanguard I was launched on 17 March 1958; it is still in orbit. [8] The Vanguard solar cells had a total power of one tenth of a watt in an array of one tenth of a square meter, the equivalent of 1 watt/m2, with an efficiency of 10 percent. Solar cells only work out as far as the orbit of Jupiter, where the sun's radiant energy fades. Beyond that, nuclear power sources that generate electricity from radioactive decay become necessary.

Solar cell technology advanced as an integral part of the space race between the US and the USSR in the second half of the twentieth century. With design constraints that necessitated minimum weight and surface area due to payload launch limits, aerospace applications favored higher power density cells without regard to unit cost. The key parameter is specific power, measured in watts per kilogram. By using multiple layers of solar cells with different materials to take advantage of different wavelengths of incident solar radiation, efficiencies of over 45 percent have been achieved. The subsequent world-wide rollout of solar cell technology was precipitated by the need for stand-alone powering capabilities where transmission lines would not reach or where batteries were too expensive to install and maintain. Ironically, the oil and gas behemoth Exxon (now ExxonMobil) provided part of the funding to develop affordable solar cells using lower grade silicon and cheaper materials to drive the cost from $100 to $20 per watt. The motivation was to provide power for remote pumping stations and offshore rigs, primarily for signal and alarm systems. The cheaper cells made it cost effective for the US Coast Guard to implement solar cells to replace batteries on ocean buoys and for railroads to upgrade to wireless solar cell signaling systems. The closing decades of the twentieth century raised the ante for solar cells with the advent of roof-top panels for buildings and solar powered pumps for irrigating far-flung fields. [9]

The twenty-first century opened with the inconvenient truth that the Industrial Revolution had an unintended consequence. The United Nations Environment Programme established the Intergovernmental Panel on Climate Change (IPCC) in 1988 to provide "an assessment of the understanding of all aspects of climate change, including how human activities can cause such changes and can be impacted by them." The Third IPCC Assessment Report at the turn of the century was a clarion call to action, confirming that over the course of the twentieth century, temperature had risen 0.6°C, snow and ice cover had fallen by 10 percent, average sea level had risen by 15 centimeters, and precipitation had increased by 5 percent. [10] The search for carbon free energy on a global scale was on, and photovoltaics was in the crosshairs of innovative engineering. Cost would be the determining figure of merit. To manufacture and install solar panels in acres of arrays to generate electricity at a cost per watt comparable to fossil fuel became the "over the rainbow" goal. After a decade of delay occasioned by the post-9/11 global war on terror and the financial meltdown that followed, the US government was finally able to focus on climate.

The crux of the economics issue is that high efficiency solar cells are expensive and cheap solar cells are inefficient. To be affordable as an integral part of an energy grid of the future requires solar cells to be both cheap and efficient. Silicon is the semiconductor of choice because it is abundant and therefore cheap; it is second only to oxygen as the most common element in the earth's crust (28.2 percent). However, raw silica must be chemically treated to convert it into a crystalline form that will conduct electricity. Silicon PV cells are made by cutting crystalline silicon into thin slices that are doped to produce the PN junction of a diode, with metallic contacts to conduct the photon generated current flow. The crystal structure determines the efficiency of the cell. Single crystal cells are the most efficient, but they are more expensive to manufacture than cells with multiple crystals. The efficiency of the best commercial solar cells using single crystals is about 20 percent. This can be improved by adding additional cells that are designed to capture photons at different frequencies. When these are combined in a single device, known as a multijunction cell, efficiencies of nearly 50 percent can be achieved. These PV cells are at the efficient but expensive end of the spectrum. At the opposite end are thin film solar cells, applied to a substrate of metal, glass, or plastic that can be flexible to allow for contoured surface installations. Thin film solar cells trade efficiency for lower cost. The ultimate goal of any successful solar cell is to produce electricity at the lowest dollar per watt value after all factors, including installation, maintenance, replacement, and materials cost, are included in the calculation. [11]
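The dollar-per-watt figure of merit described above is simple to compute: rated power is panel area times test irradiance times efficiency. The prices, area, and efficiencies below are hypothetical placeholders for the two ends of the spectrum, not figures from the text:

```python
# Dollars per watt for a panel rated at standard test conditions.
PANEL_AREA_M2 = 1.6          # assumed panel size
TEST_IRRADIANCE_W_M2 = 1000  # standard test-condition irradiance

def dollars_per_watt(panel_cost_usd, efficiency):
    rated_watts = PANEL_AREA_M2 * TEST_IRRADIANCE_W_M2 * efficiency
    return panel_cost_usd / rated_watts

# Hypothetical single-crystal panel: efficient but costly.
print(round(dollars_per_watt(250, 0.20), 2))   # 0.78
# Hypothetical thin-film panel: half the efficiency, cheaper overall.
print(round(dollars_per_watt(100, 0.10), 2))   # 0.62
```

With these made-up inputs the cheap, inefficient panel wins on dollars per watt, which is the trade-off the paragraph describes; a full comparison would add installation, land, and maintenance to the numerator.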

That achieving the right balance between cost and efficiency would be difficult was evident early on. The Energy Policy Act of 2005 empowered the Department of Energy (DOE) to "spur commercial investments in clean energy policies that use innovative technologies" through the use of federal loan guarantees to private companies. Solyndra, a California company that had developed copper indium gallium diselenide thin-film solar cell technology, seemed a sure bet and was richly endowed with federal funding. The technology worked to reduce power cost but the economics didn't―the company was unable to compete with conventional, flat silicon solar panels. When the company went bankrupt two years later, it was considered the "first serious financial scandal of the Obama Administration." When the dust settled, it was generally concluded that the government's ability to pick technology winners was inherently flawed; federal funding should be directed at research and development, with the marketplace promoting viable technologies. Bloomberg News concluded that "If the Solyndra debacle gets U.S. policy pointed in the right direction, the loan-guarantee losses won't have been totally in vain." [12]

The Advanced Research Projects Agency for Energy (ARPA-E) was established and funded in 2009 to advance "high-potential, high-impact energy technologies that are too early for private-sector investment" using the Defense Department DARPA model that pioneered the Internet. Of the 46 ARPA-E energy research centers funded in 2010, 24 were working on solar energy issues. These initiatives are rightly in the areas of basic research, trying to develop a solar cell that is easy to manufacture from cheap materials with sufficient efficiency to be cost competitive. Basic research is long term by its nature, with failures outnumbering the rare success by an order of magnitude. The programs in the works range from Solar Agile Delivery of Electrical Power Technology (ADEPT), to improve PV performance, to Full-Spectrum Optimized Conversion and Utilization of Sunlight (FOCUS), to expand the range of solar cells to encompass a broader bandwidth of solar radiation frequencies. It goes without saying that a mnemonic acronym is nearly a prerequisite for government funded programs. While there have been no eureka breakthroughs to date, there is every reason to hope that there will be. [13]

While a super solar cell may be in the offing at some point, there is something to be said for Adam Smith's tried and true economies of scale. Spaceship Earth is not payload limited like Vanguard rockets. Manufacturing myriad large, cheap solar panels in an assembly line manner to cover large swaths of surface area is sure to drive the cost per unit down, just as it did for pins according to Smith's dictum. In 2006, one of the world's largest semiconductor manufacturers embarked on a program to manufacture garage-door-sized glass panels coated with thin films of amorphous silicon, a focused attempt to sacrifice efficiency for size to lower the dollar per watt cost. The assembly line process started with 60 ft2 glass panels precoated with a thin metal oxide film, run on a conveyor belt through an automatic laser scribe to define the boundaries of 216 individual cell panels. Three layers of amorphous silicon that each absorb light from different parts of the spectrum were added sequentially by robotic vapor deposition. With the addition of metal contacts and a junction box, the panels were ready for shipment. With a cost of $3.50 per watt that was projected to decrease to $1.00 per watt as production ramped up, the prognosis for large scale arrays was sanguine. [14] The company shut down the assembly line in 2010 due to lack of demand. [15]

The difficulty of manufacturing solar cells in the United States to satisfy market demand at both the high efficiency, high cost and the low efficiency, low cost ends of the spectrum is indicative of a global economic megatrend. The solar panel supply and demand imbalance is a microcosm of the effects of China's manufacturing juggernaut. The Chinese produced 85 percent of all solar panels sold across the world in 2022, with almost the entire balance from other Asia-Pacific (APAC) nations, mostly Vietnam. The United States and Europe produced less than one percent each. This contrasts with the total of 1,000 gigawatts (GW) of PV panels installed globally in 2022, about 50 percent in China and APAC and about 17 percent each in Europe and North America. This sounds like a lot of power, but it is only about 15 percent of the total renewable capacity of 7,500 GW, which is itself only about 10 percent of the global energy supply. This means that solar energy comprises only about one percent of world total electrical generation. The 150 gigawatts of solar capacity added in 2021 was a record amount; it is one third of the average annual addition in PV power needed over the next decade to keep the goal of carbon neutrality within reach. [16]

Return now to the original thesis that the sun produces ample energy to empower human enterprise many times over. Even if PV cell chemistry and physics could be engineered imaginatively into cheap and efficient solar panels, two intractable problems remain: the diurnal and seasonal variability of sunlight as an energy source, and the lack of a repository to store electricity generated when supply exceeds immediate demand. Most of the industrial world is geographically situated between 30 and 50 degrees north of the equator. This means that the 1,000 watts per square meter that falls on the equator at midday is reduced to about 600 watts per square meter in the industrial zone. The sun is only at full strength at midday, so the overall energy delivered must also be discounted by half, to 300 watts per square meter, to account for mornings and afternoons. Cloud cover, which in some locations like the UK obscures the sun for more than half the day on average, diminishes solar cell production still further. The net effect is that the actual amount of solar energy that impinges on panels ranges from about 100 watts per square meter in Germany and New York to 200 watts per square meter in Spain and Texas. With commercial solar cell efficiency at 10 percent and unlikely to ever exceed 20 percent, the output electricity is only about 10 to 20 watts per square meter. [17] This means that gargantuan solar panel "farms" are needed to provide for a city-sized load in the gigawatt (GW, billions of watts) range. These are only likely to be economical in areas that are closer to the equator and relatively near the cities they supply. The largest solar farm in the world is in the desert state of Rajasthan west of Delhi, India, covering 14,000 acres and producing just over two gigawatts. The largest facility in the United States is in California with 579 MW (0.6 GW) covering 3,000 acres. The largest solar farm in the state of Delaware produces 15 MW on 80 acres.
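The chain of discounts above (latitude, day/night averaging, clouds, cell efficiency) can be written out directly. The cloud factor and efficiency below are illustrative assumptions; real values vary widely by site:

```python
# Average electrical output per square meter of panel after the
# latitude, day/night, cloud, and efficiency discounts.
EQUATOR_MIDDAY_W_M2 = 1000

def panel_output_w_m2(latitude_factor, cloud_factor, efficiency):
    midday = EQUATOR_MIDDAY_W_M2 * latitude_factor  # e.g. 0.6 at 40 N
    daily_avg = midday * 0.5                        # mornings/afternoons
    on_panel = daily_avg * cloud_factor             # weather losses
    return on_panel * efficiency

# Hypothetical mid-latitude site: 40 N, moderate clouds, 15% cells.
output = panel_output_w_m2(0.6, 0.6, 0.15)
print(output)                       # ~27 W per square meter
area_km2_per_gw = 1e9 / output / 1e6
print(round(area_km2_per_gw))       # ~37 km^2 of panels per gigawatt
```

Dividing a gigawatt-scale city load by tens of watts per square meter is what produces the gargantuan farm acreages quoted above.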

It is conceivable that enough solar panel mega farms could be built in some places to make enough electricity to meet demand when the sun shines. But what do you do at night and during winter? And what do you do when PV power supply exceeds grid demand? The answer to both questions is energy storage. Saving the excess current of PV cells during cloudless, sunny days in summer to be used at night and over the winter is the Achilles' heel of renewable energy. Long-duration energy storage (LDES) is the collective name for methods, both real and imagined, that seek to alleviate the renewable storage problem. Rechargeable batteries cannot store energy on a large enough scale because they have low energy density, a short cycle life, and, ultimately, too high a cost. The most well-established LDES technology is pumped-storage hydropower, the name a literal description of its modus operandi. Excess renewable electricity is used to pump water from a low elevation catch basin to an elevated reservoir. The stored potential energy is converted back to electricity at night and on cloudy days by water turbines. There are also proposals to use the excess solar power electricity to make hydrogen gas by electrolysis. One may conclude that, while solar energy may be one of many technologies that will need to be employed to reduce fossil fuel demand, it is hardly a panacea.
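The pumped-storage scheme stores gravitational potential energy, E = m·g·h. A sketch with assumed reservoir dimensions and a typical round-trip efficiency; none of these numbers come from the text:

```python
# Recoverable energy from a pumped-storage reservoir: E = m*g*h,
# discounted by round-trip (pump plus turbine) efficiency.
G = 9.81               # gravitational acceleration, m/s^2
WATER_DENSITY = 1000   # kg per cubic meter
RESERVOIR_M3 = 10e6    # assumed 10 million cubic meters of water
HEAD_M = 350           # assumed height between basins
ROUND_TRIP_EFF = 0.75  # typical round-trip figure

mass_kg = WATER_DENSITY * RESERVOIR_M3
energy_gwh = mass_kg * G * HEAD_M * ROUND_TRIP_EFF / 3.6e12
print(f"~{energy_gwh:.1f} GWh recoverable")   # ~7.2 GWh
```

Seven gigawatt-hours would run a gigawatt-scale city for most of one night, which illustrates why seasonal storage, as opposed to overnight storage, remains the hard problem.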


1. Laughlin, R. Powering the Future, Basic Books, New York, 2011, pp 91-93.

2. Petrucci, R. General Chemistry, Principles and Modern Applications, Macmillan Publishing Company, New York, 1985. pp 198-203, 364-401.

3. "Semiconductors and Insulators, Theory of," Encyclopedia Britannica, Macropedia 15th Edition, William Benton, Chicago, Illinois, 1974, Volume 16, pp 522-529.

4. Marton, L. "Photoelectric Effect," Encyclopedia Britannica, Macropedia 15th Edition, William Benton, Chicago, Illinois, 1974, Volume 14, pp 296-300.

5. Neamen, D. Semiconductor Physics and Devices, McGraw Hill, Boston, MA, 2003. pp 104-106. http://www.fulviofrisone.com/attachments/article/403/Semiconductor%20Physics%20And%20Devices%20-%20Donald%20Neamen.pdf

6. Riordan, M. Crystal Fire: The Invention of the Transistor and the Birth of the Information Age, W.W. Norton Company, New York, 1997. pp 97-113.

7. Smil, V. Energy in Nature and Society, MIT Press, Cambridge, MA, 2008, pp 255-257.

8. https://www.nasa.gov/feature/65-years-ago-the-international-geophysical-year-begins

9. Perlin, J. “Late 1950s – Saved by the Space Race”. SOLAR EVOLUTION – The History of Solar Energy. The Rahus Institute. http://californiasolarcenter.org/old-pages-with-inbound-links/history-pv/

10. Climate Change 2001 Synthesis Report, Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, UK. 2001.

11. https://www.energy.gov/eere/solar/solar-photovoltaic-technology-basics     

12. Lott, M. “Solyndra — Illuminating Energy Funding Flaws?” Scientific American.  September 27, 2011.

13. https://arpa-e.energy.gov/technologies/programs

14. Bourzac, K. “Scaling up Solar Power” MIT Technology Review, March/April 2010, pp 84-86. 

15. Kanellos, M. “Applied Materials Kills its SunFab Solar Business”. Greentech Media 21 July 2010.

16. https://origin.iea.org/data-and-statistics/charts/solar-pv-manufacturing-capacity-by-country-and-region-2021

17. MacKay, D. Sustainable Energy – without the hot air UIT, Cambridge, UK, 2009 pp 38-49.

Greenhouse Effect and Global Warming Gases

The greenhouse effect is the warming of the Earth by its atmosphere. Solar radiation passes through the atmosphere much as it passes through the panes of glass forming the roof and walls of a greenhouse. Radiant energy impinging on the Earth’s surface and on the floor of the greenhouse causes them to heat up. Since heat flows from hot to cold as a matter of basic physics, both of the now warmer surfaces heat the air above them by radiating upward. The greenhouse effect results because the solar radiation passes through the atmosphere and the glass with little absorption, but the surface heat radiation is partially absorbed as it seeks to escape. The reason for the difference is that the two have different electromagnetic wavelengths. Solar radiation that reaches the Earth is shorter-wave ultraviolet and visible light; the heat radiation emanating outward from the surface is longer-wave infrared. The terms ultraviolet and infrared refer to wavelengths that are shorter than, or “beyond,” the violet end of the visible spectrum and those that are longer than, or “below,” the red end (keeping ROY G BIV in mind). The significance of different wavelengths should come as no surprise: the microwaves used to heat up lunch while listening to the radio waves of music broadcast remotely are part of the same electromagnetic spectrum.
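The incoming/outgoing wavelength difference follows directly from the temperatures of the two radiators. As a minimal sketch, Wien’s displacement law (peak wavelength = b/T) puts the sun’s emission peak in visible light and the Earth’s deep in the infrared; the temperatures below are round figures, not measured values.

```python
# Wien's displacement law: the peak emission wavelength of a radiating
# body is inversely proportional to its temperature (lambda = b / T).
WIEN_B = 2.898e-3  # Wien's displacement constant, meter-kelvins

def peak_wavelength_um(temp_kelvin):
    """Peak blackbody emission wavelength in micrometers."""
    return WIEN_B / temp_kelvin * 1e6

print(round(peak_wavelength_um(5800), 2))  # sun (~5800 K): ~0.5 um, visible
print(round(peak_wavelength_um(288), 1))   # earth (~288 K): ~10 um, infrared
```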

As diagrammed above, incoming solar radiation at the top of the atmosphere is 342 watts per square meter (Wm-2 is shorthand for W/m2). The watt, familiar from light bulbs, is the eponymous unit of power honoring James Watt, the inventor of the condensing steam engine; he coined the term horsepower so that people would understand what a steam engine could do, and one horsepower is about 746 watts. Only 168 Wm-2 is absorbed by and heats the surface of the earth: 77 Wm-2 is reflected by clouds, aerosols, and atmospheric gases, 30 Wm-2 is reflected by the earth’s surface, and 67 Wm-2 is absorbed by the atmosphere. Thus, the sun’s incoming radiation, short-wavelength and primarily ultraviolet and visible light, mostly passes through the atmosphere, heating up the surface of the earth as it does a greenhouse. The outgoing surface radiation of 390 Wm-2 is shown on the bottom right. This is the longer-wavelength infrared radiation of the Earth’s surface rising into the atmosphere. The change in wavelength between incoming and outgoing occurs because the sun is much hotter than the earth. [1]
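The budget balances exactly, which is easy to confirm; the snippet below simply restates the figures quoted above and checks the arithmetic.

```python
# The incoming solar budget quoted above, in watts per square meter.
incoming = 342
reflected_by_clouds_aerosols_gases = 77
reflected_by_surface = 30
absorbed_by_atmosphere = 67
absorbed_by_surface = 168

# The four pathways account for every incoming watt: 77 + 30 + 67 + 168 = 342.
assert incoming == (reflected_by_clouds_aerosols_gases + reflected_by_surface
                    + absorbed_by_atmosphere + absorbed_by_surface)

# James Watt's own unit: one horsepower is about 746 watts.
print(746 * 5)  # a five-horsepower engine delivers about 3730 watts
```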

The radiation spread, or spectrum, between high-energy ultraviolet and low-energy infrared depends on the temperature of the radiating body. Some of the infrared radiation (40 Wm-2) escapes, but over 80 percent (324 Wm-2) is absorbed and radiated back to the surface by the gases in the atmosphere, which are called greenhouse gases for this reason. The other heat energy components in the diagram are those associated with the hydrologic cycle; the evaporation and condensation of water is also a function of heat and temperature. The climate-changing equation is that incoming shortwave solar radiation must be either reflected back into space or balanced by outgoing longwave radiation, mathematically 342 = 107 + 235. It is clear that the greenhouse gases play a key role in this balance. If more gas is added, more heat is radiated back from the atmosphere, and surface temperature must go up to compensate. Global warming results. [2]
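The 390 Wm-2 surface emission figure is itself consistent with Earth’s average surface temperature. A quick check with the Stefan-Boltzmann law, which says a blackbody emits σT⁴ watts per square meter, recovers a familiar number:

```python
# Stefan-Boltzmann law: emitted flux = sigma * T^4. Inverting it gives
# the temperature implied by the 390 W/m^2 surface emission figure.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_temp_kelvin(flux_wm2):
    """Temperature of a blackbody emitting the given flux."""
    return (flux_wm2 / SIGMA) ** 0.25

t = blackbody_temp_kelvin(390)
print(round(t))           # about 288 K ...
print(round(t - 273.15))  # ... i.e., about 15 degrees Celsius
```

That 288 K (15°C) is the canonical global average surface temperature, so the diagram’s numbers hang together.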

Trapping the heat of the sun under a warming blanket of atmosphere makes life on Earth possible. Without the greenhouse effect of its atmosphere, Earth would be like our planetary neighbor in the next orbit out: Mars has an average temperature about 75°F below zero. If no action is taken to stem the tide of rising temperature, Earth will become more like Venus, where the mostly carbon dioxide atmosphere creates a super greenhouse effect with an average temperature of over 800°F. Planet-hunting astronomers call the region near a star where liquid water can exist the Goldilocks Zone, indicating that life as we know it could be possible there ― the circumstellar habitable zone. It is necessary but not sufficient that Earth is in one. It must also have a moderating atmosphere with enough (but not too many) greenhouse gas molecules.

The French mathematician Jean-Baptiste Joseph Fourier is credited with making the first observation that the earth must be warmed by solar radiation due to atmospheric containment: “Tous les effets terrestres de la chaleur du soleil sont modifiés par l’interposition de l’atmosphère” (all of the sun’s heat effects on earth are modified by the interposition of the atmosphere). [3] This philosophical observation was rooted in science by the Swedish physical chemist Svante Arrhenius, who first quantified the effect of carbon dioxide (then called carbonic acid) on temperature, which he called the “hothouse” effect (which, ironically, is probably the better term). His conclusion was that “… if the quantity of carbonic acid increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression.” [4] The theoretical musings about the greenhouse effect became more empirical over the first four decades of the 20th century. British engineer Guy Callendar reviewed historical data in 1938 to conclude that “by fuel combustion man has added about 150,000 million tons of carbon dioxide to the air during the past half century,” resulting in a measurable global temperature increase “at an average rate of 0.005°C per year.” [5] While convincing, the correlation of carbon dioxide to temperature does not prove causation ― that the accumulation of atmospheric carbon dioxide is the sine qua non of the measurable rise in global temperature.
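Arrhenius’s “geometric progression … arithmetic progression” is, in modern terms, a logarithmic law: each doubling of CO2 adds roughly the same temperature increment. The sketch below assumes an illustrative sensitivity of 3°C per doubling; the number itself is a round modern assumption, not Arrhenius’s figure.

```python
import math

# Arrhenius in modern form: warming grows with the logarithm of CO2,
# so a geometric progression of concentration yields an arithmetic
# progression of temperature. The 3-degree sensitivity is illustrative.
SENSITIVITY_C = 3.0  # assumed warming per doubling of CO2

def warming_c(co2_ppm, baseline_ppm=280):
    """Temperature rise implied by a logarithmic CO2 response."""
    return SENSITIVITY_C * math.log2(co2_ppm / baseline_ppm)

for ppm in (280, 560, 1120, 2240):        # CO2 doubling at each step ...
    print(ppm, round(warming_c(ppm), 1))  # ... warming climbs in equal steps
```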

The first scientific experiments were carried out by the Irish physicist John Tyndall, who realized in 1859 that the absorption of radiation by gases was “a perfectly unexplored field of inquiry.” He constructed the world’s first spectrophotometer, a tube that could be filled with different gases and subjected to radiation. It was instrumented with a recently invented device called a differential thermopile that could measure minuscule changes in temperature. Six months after he began his experiments, he presented his eureka results to Britain’s Royal Society: different gases varied markedly in their ability to absorb and retransmit radiant heat. Nitrogen and oxygen, which make up over 99 percent of the atmosphere, were found to be essentially transparent to radiant heat, but other, more complicated molecules, including water vapor, carbon dioxide, ozone, and (curiously) perfume, absorbed heat much more readily, even in small concentrations. Tyndall stressed the importance of water vapor because, “comparing a single atom of oxygen or nitrogen with a single atom of aqueous vapor, we may infer that the action of the latter is 16,000 times the action of the former.” [6] He concluded that water vapor was the most important gas controlling the surface temperature of the earth. This then became Royal Society gospel and accepted science for over a century.

The emergence of carbon dioxide as the true climate culprit was only a matter of time and science. Tyndall’s primitive experiment demonstrated only that humid air absorbed heat energy. Why it did so was another matter. The physics is complex, relating to the quantum energy levels of the atoms in greenhouse gas molecules. Spectroscopy, the study of the absorption and emission of light and other radiation as related to its wavelength, evolved rapidly in the early decades of the twentieth century. The emission or absorption of light within a narrow frequency and energy band is called a spectral line. Carbon dioxide has thousands of spectral lines that are responsible for the absorption of the infrared radiation of heat energy. A detailed understanding only became possible with accurate measurements at different heights in the atmosphere. Spectral lines vary in intensity and width with temperature and pressure and therefore with altitude ― a multivariable problem in three dimensions presenting a tangle of interrelated calculations. High-speed computation was needed to run the iterative sequences of differential equations. By the 1950s, the measurements were available and the computers were programmed. The absorption of heat energy by molecules of carbon dioxide became settled cause-and-effect science. As early as 1956, there was convincing evidence that “…if the carbon dioxide content of the atmosphere should double, the surface temperature would rise by 3.6 degrees Celsius.” [7]

There remains the vexing problem of water vapor. There is a rational reason why water and its vapor loom large in debates about climate change causation. Weather ― the fluctuating state of the atmosphere, with its elements of wind, rain, and sunshine, that determines climate only when averaged over decades ― is dominated by water. Rain in summer and snow in winter come from clouds of condensed water vapor evaporated from liquid oceans, lakes, and rivers. Water is the most variable component of the atmosphere and is central to climate variability and change. Oceans cover 70 percent of the Earth’s surface, contain over 96 percent of its water, produce 86 percent of all evaporation, and receive 78 percent of all rain. Spinning this sloshing volume at speeds of up to 1,000 miles per hour between and around the embedded continental land masses of a tilted, heated globe produces weather.

Water vapor is a natural greenhouse gas. It is also the most heat absorbing of all greenhouse gases. The hydrologic cycle of evaporation, rain, and runoff has been going on for billions of years ― the planetary plumbing system. The storing of the sun’s heat energy as the latent heat of evaporation of water into the atmosphere (note figure above) and its release when the vapor condenses to fall as rainwater provides the energy for weather. To complicate matters, water vapor produces positive feedback. Warmer weather means more evaporation, which increases the water vapor in the atmosphere, which traps more heat, which causes warmer weather. Positive water vapor feedback is considered the most important factor in amplifying the increase in surface temperature. Further, water vapor condenses into clouds, which are not gases but contribute nonetheless to the greenhouse effect by absorbing and emitting infrared heat radiation. But clouds also act as a shield, cooling the climate by reflecting solar radiation. The variability of cloud formation and movement is one of the most profound conundrums of climate science. The only plausible way to address the chaotic interplay of sun, wind, and water was to develop increasingly sophisticated models that require high-speed supercomputers. That effort has now evolved into many different models that can be compared and contrasted to narrow the uncertainty.

The Coupled Model Intercomparison Project (CMIP) was started in 1995 as a collaboration among modeling groups to compare results. First-generation Atmosphere–Ocean General Circulation Models (AOGCM) used the physical dynamics of atmosphere, ocean, land, and sea ice as impacted by greenhouse gases and particulates called aerosols. State-of-the-art Earth System Models (ESM) were more recently added to include the effects of biochemical carbon, sulfur, and ozone cycles. Model validation consists in part of inputting historical data to compare model output with the known result. The latest CMIP round was based on data collections that ended in 2013 to evaluate the relative efficacy of 56 different models from twelve countries including the United States, China, Russia, and Norway (where weather forecasting started). The conclusion was that doubling the amount of carbon dioxide in the atmosphere would result in a temperature increase of 2.1 to 4.7 degrees Celsius. [8] It is worth noting that the 3.6 degree rise estimated in 1956 is consistent with this result. Modeling continues as carbon dioxide emissions and temperature keep rising.

Even though water vapor is the dominant greenhouse gas, it is essentially irrelevant to climate change, just as it is paramount to weather. Its variability in the short term of weather is offset by its consistency over the long haul of climate. The rising concentrations of other atmospheric gases do not immediately impact weather ― but they are at the epicenter of the climate change problem because they have been and are being added to the atmosphere continuously. The conclusion of the United Nations was that the only way to arrest climate change was to reduce the atmospheric emission of greenhouse gases over time. The Kyoto Protocol was a United Nations (UN) treaty initiated in 1997 that went into effect in 2005, after Russia’s ratification completed the stipulated fifty-five-nation quorum. It specified limits on the six greenhouse gases found to be the most damaging due to their heat absorption characteristics and concentration in the atmosphere: carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), sulfur hexafluoride (SF6), hydrofluorocarbons (HFCs), and perfluorocarbons (PFCs). The Global Warming Potential (GWP) was established as a parameter to quantify the effects of the other gases, with a molecule of carbon dioxide having a value of 1. The UN Conference of the Parties (COP) that constitute the signatories to the treaty agreed to proceed “with a view to reducing their overall emissions of such gases by at least 5 per cent below 1990 levels in the commitment period 2008 to 2012.” [9,10]
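The GWP parameter turns a mixed bag of gas emissions into a single carbon-dioxide-equivalent number by simple weighted addition. The sketch below uses rounded GWP values of the kind quoted in the text; the inventory quantities are made up purely for illustration.

```python
# CO2-equivalent accounting: each gas's tonnage is weighted by its
# Global Warming Potential (CO2 = 1 by definition). The GWP values are
# rounded figures from the text; the inventory is a made-up illustration.
GWP = {"CO2": 1, "CH4": 30, "N2O": 300, "SF6": 23900}

def co2_equivalent(emissions_tons):
    """Collapse per-gas emissions into one CO2-equivalent tonnage."""
    return sum(tons * GWP[gas] for gas, tons in emissions_tons.items())

# A hypothetical inventory: mostly CO2, some methane, a little N2O.
inventory = {"CO2": 1000.0, "CH4": 10.0, "N2O": 1.0}
print(co2_equivalent(inventory))  # 1000 + 300 + 300 = 1600.0
```

The weighting explains why trace gases matter: a single ton of SF6 counts the same as 23,900 tons of CO2.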

The last three “minor” greenhouse gases are frequently grouped together as the “F-gases” to indicate that they contain the element fluorine; taken together, they constitute less than 1% of total greenhouse gas emissions. Sulfur hexafluoride (SF6) gas is used primarily in high-voltage electrical distribution systems due to its insulating properties. It was a replacement for oil-filled electrical components containing polychlorinated biphenyls (PCBs), which were banned in 1979 under the Toxic Substances Control Act (TSCA). Each SF6 molecule is the equivalent (GWP) of 23,900 molecules of CO2. HFCs and PFCs comprise a number of different compounds formulated to replace chlorofluorocarbons (CFCs), which were banned by the Montreal Protocol of 1987 due to their ozone-depleting effect (ozone filters damaging UV radiation). Their GWP values range between 140 and 11,700 for HFCs and between 6,500 and 9,200 for PFCs. In the 1950s, Barry Commoner, a prescient scientist at the forefront of the environmental movement, devised four laws of ecology. [11] The irony of introducing greenhouse gases (HFCs and PFCs) to replace an ozone-depleting substance (CFCs) is direct evidence of his fourth law ― “There is no free lunch” ― every environmental solution (ozone depletion) has a cost (greenhouse gases). This applies equally to SF6 and PCBs.

Nitrous oxide (N2O) is the least known of the three “major” greenhouse gases, its provenance usually listed as “agricultural soil management.” With a GWP of about 300, it constitutes about 8% of total greenhouse gas emissions. The main culprit is fertilizer, which is about 10 percent nitrogen, added to the soil to compensate for the nitrogen removed with the harvest of the crop ― about 100 pounds of nitrogen are removed with the harvest of every acre of corn. Fertilizer is necessary and sufficient to “manage” soil agricultural productivity. This added nitrogen is acted upon by bacteria in the soil as a source of energy for their own growth and reproduction ― a process called nitrification, basically the conversion of ammonia (NH3) into nitrate (NO3). Nitrous oxide is a naturally occurring by-product of bacterial nitrification of the added nitrogen-based fertilizer. Not to get too technical but to be complete, there is also a process called denitrification in anaerobic (oxygen-lacking) soils, where bacteria reduce nitrate to gaseous nitrogen; denitrification, like nitrification, releases nitrous oxide as a by-product. Thus, as more crops are grown for the ever-expanding global population for food, fodder, or fuel (ethanol), more nitrogen-enriched fertilizer must be used to reconstitute the depleted soil ― and therefore more nitrous oxide results. The Anthropocene nitrogen cycle has been called the Wibbly-Wobbly Circle of Life. [12] Commoner’s first law of ecology is “Everything is connected to everything else.” The earth is such a complex and balanced ecosystem that every disturbance (added fertilizer) has far-reaching effects (greenhouse gases and global warming).

The three primary sources of methane (CH4), which has a GWP of around 30, are enteric fermentation, natural gas systems, and landfills. Taken together, they contribute more than three fourths of total methane emissions. Enteric fermentation methane comes from the normal digestion of food by ruminant animals, particularly cattle. Ruminants are named for the rumen, the first of their four stomachs ― the repository for the fibrous material that they consume. Microbes in the rumen break down the tough cellulose as part of the digestive process; methane is a byproduct of that process that the animal expels by exhalation. Over 95% of enteric fermentation methane is from beef and dairy cows. Other animals, including humans, produce the remainder of the enteric (intestinal) fermentation methane as flatulence. Methane is the primary constituent of natural gas, widely used for heating and to generate electricity ― some of this natural gas escapes into the atmosphere. Landfills are the largest of the three major sources of methane, comprising almost 40% of the total ― the source is anaerobic bacterial decomposition of human trash. Commoner’s second law of ecology applies to methane ― “Everything must go somewhere” ― there is no way to simply throw things (trash) away, because it will still be there and you have to live with the results (greenhouse gases).

And last but certainly not least is carbon dioxide, the scion of the industrial age and perhaps the harbinger of its demise; it makes up more than 80% of all greenhouse gases and by definition has a GWP of 1. The carbon cycle is the essence of life; carbon dioxide is the input to plant photosynthesis and the output of organisms like humans oxidizing food for energy. The majority of excess carbon dioxide in the atmosphere comes from the combustion of fossil fuel ― oil, gas, and coal. It is the energy released by the oxidation of hydrocarbons that is both the boon and the bane of the modern world. For example, the natural gas reaction is:

                          CH4    +   2O2     ―>   CO2     +   2H2O   +    energy
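The mass accounting of this reaction can be sketched with molar masses: each 16 grams of methane burned produces 44 grams of carbon dioxide, so a kilogram of natural gas yields nearly three kilograms of CO2.

```python
# Stoichiometry of CH4 + 2 O2 -> CO2 + 2 H2O: one mole of methane
# (about 16 g) produces one mole of carbon dioxide (about 44 g).
M_CH4 = 16.04  # g/mol, methane
M_CO2 = 44.01  # g/mol, carbon dioxide

def co2_per_kg_methane():
    """Kilograms of CO2 produced per kilogram of methane burned."""
    return M_CO2 / M_CH4

print(round(co2_per_kg_methane(), 2))  # about 2.74 kg of CO2 per kg of CH4
```

The extra mass comes from the oxygen drawn out of the air, which is why the CO2 produced outweighs the fuel burned.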

The level of CO2 in the atmosphere has historically been about 280 parts per million (ppm). It is now over 420 ppm. The energy we use to make electricity and to operate vehicles is increasing greenhouse gas concentrations, which are causing the earth to heat up. “Nature knows best” is Commoner’s third law of ecology ― every human-made change is likely to be detrimental to the balance of nature. Anthropogenic greenhouse gases are the most obvious and potentially existential example. Our mother is nature.


1. Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) – https://www.ipcc.ch/report/ar3/wg1

2. Dessler, A. and Parson, E. The Science and Politics of Global Climate Change, Cambridge University Press, New York, 2006, pp 6-11.

3. Fourier, J. “Remarques Generales sur les Temperatures Du Globe Terrestre et des Espaces Planetaires”. Annales de Chimie et de Physique. 1824 Volume 27 p 165.

4. Arrhenius, S. “On the influence of carbonic acid in the air upon the temperature of the ground”  The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. April 1896, Volume 41 No. 251: pp 237–276.

5. Callendar, G. “The artificial production of carbon dioxide and its influence on temperature” Quarterly Journal of the Royal Meteorological Society April 1938 Vol. 64 Issue 275 pp 223-240.

6. Fleming, J. Historical Perspectives on Climate Change, Oxford University Press, New York 2005. pp 66-74.

7. Plass G. “Carbon Dioxide and the Climate.” American Scientist, 1956, Volume 44 pp 302-316.

8. Flato, G. et al. “Evaluation of Climate Models”. Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, UK. pp 741-827.

9. Kyoto Protocol to the United Nations Framework Convention on Climate Change. Conference of the Parties. FCCC/CP/L.7/ADD.1, Kyoto, Japan, 10 December 1997.

10. https://www.epa.gov/enviro/greenhouse-gas-overview   

11. Miller, Stephen. “Early Voice for Environment Warned About Radiation, Pollution”. The Wall Street Journal. Retrieved June 2018. In his 1971 best seller The Closing Circle, Commoner posited four laws of ecology: Everything is connected; Everything must go somewhere; Nature knows best; and There is no such thing as a free lunch.

12. Essay in The Economist, 24 December 2022.


Moose frequent ponds and streams to eat aquatic plants.

Common Name: Moose, elk – The Algonquian word moos-u means “he shaves or trims.” Native Americans applied the sobriquet to describe the animal’s characteristic stripping of bark and lower branches from trees. It became moose as colonists migrated north into its habitat. Moose are called elk in Europe, there being no prior vernacular name in native languages. Elk is of Germanic origin, cognate with alke, the Greek word for the animal.

Scientific Name: Alces alces – Alces is Latin for elk. It is the only species in the genus, so one name serves for both. The American elk is a completely different species from the European “moose elk.” Placed in the deer genus as Cervus canadensis, American elk are also called wapiti, from the Shawnee word meaning “one with a white rump,” a prominent visual characteristic.

Potpourri:  Moose are solitary sentinels of the northern, boreal forests spanning the globe in both North America and Eurasia, where they are known as elk. Moose is metaphor for rugged individualism, surviving the extremes of ice and deep snow with diminished sunlight and plunging temperatures without the restful hiatus of hibernation. They are the giants of the deer family, their bulk sustained by an herbivorous diet; moose are capable of consuming almost anything that they can find. During winter, they consume up to fifty pounds of twigs and shrubs a day. Summer is a relative smorgasbord, with closer to sixty pounds of birch, willow, aspen, and maple leaves supplemented by a wide range of aquatic plants. [1] With towering columns for legs, they can pass through snowdrifts in pursuit of nature’s scant winter provender. Unlike their cervid cousins that form herds for some protection in numbers, moose keep to themselves. Their sheer bulk wards off all but the most determined of predators, primarily wolves. From Teddy Roosevelt’s Progressive “Bull Moose” Party to the multitudes associated with Moose International, moose is metaphor … an indomitable animal astride the frozen tundra symbolizing strength and salubrity.

Moose have all of the characteristic features of the deer or cervid family to which they belong. Cervidae is derived from the Latin cervus, meaning hart or stag (applied now only to males), which in turn comes from keras, the Greek word for horn. Deer are hoofed mammals that subsist wholly on plants and bear horns in the form of antlers. While the class Mammalia generally means warm-blooded, hairy animals that feed offspring with milk from mammary glands, Linnaeus, when he first introduced the taxonomic grouping in the tenth edition of Systema Naturae in 1758, also included four-chambered hearts, lungs, a covered jaw, five sense organs, and mostly four feet and a tail as key traits. [2] All of these features are modified by evolution to suit the particular environment in which survival is sought. In the case of moose, large body size, palmate antlers, dense hair, long legs, and an extended, snouted jaw are necessary and sufficient to eke out a living in the extremities of northern latitude.

Moose Habitat. Cascade Canyon, Grand Teton National Park

Large size and cold climate are related according to Bergmann’s Rule. The eponymous correlation was first established by the German biologist Carl Bergmann in 1847 with the hypothesis that heat loss was proportional to the ratio of an animal’s surface area to its volume. The logic is that thermogenesis (“heat production”) is a matter of body mass, while heat loss emanates mostly from the surface. In essence, larger moose, bear, and lynx are more likely to survive the cold and reproduce than those at the lower end of the size spectrum. The rule holds fairly well for warm-blooded animals like mammals (71 percent) and birds (76 percent). There are other biological traits that vary according to latitude. Gloger’s Rule is that lighter colors prevail in northern areas as a matter of survival due to cryptic coloring for both predators and prey; the arctic fox is hard to spot and the snowshoe rabbit is hard to find. As moose are never predators and rarely prey (except as calves), there is no environmental stressor to select for whiter fur. Similarly, Allen’s Rule is that northern animals have smaller appendages like ears, tails, and limbs relative to southern cousins ― a corollary also based on heat loss. [3] Field studies have shown that moose do indeed get larger as you go from the southern end of their range to the north, following Bergmann’s Rule, but their ears and antlers get larger and wider at the same time, violating Allen’s Rule. [4]
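Bergmann’s surface-to-volume logic can be illustrated with the crudest possible model, an animal approximated as a sphere; the radii below are arbitrary stand-ins for small, medium, and large bodies.

```python
import math

# Toy model behind Bergmann's Rule: heat production scales with volume
# (r^3) while heat loss scales with surface area (r^2), so the ratio of
# loss to production falls as the body gets bigger.
def surface_to_volume(radius_m):
    """Surface-area-to-volume ratio of a sphere (simplifies to 3/r)."""
    area = 4.0 * math.pi * radius_m ** 2
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return area / volume

for r in (0.25, 0.5, 1.0):  # arbitrary "body radii," small to large
    print(r, round(surface_to_volume(r), 1))
```

Doubling the radius halves the ratio, which is the geometric kernel of the rule: a moose-sized body sheds proportionally far less heat than a fox-sized one.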

Moose antlers are employed in head-butting contests to establish male pecking order dominance during the annual rut. This is as dangerous as it sounds. Weighing as much as a ton each, two bull moose jousting with bony, multi-tined weapons frequently break ribs and scapulae and tear through flesh. About five percent of moose die in combat every year, and one third will die of wounds inflicted over the course of their short, brutish lives. The winners do most of the mating. All of this is nature’s pathway for the winning bull moose to perpetuate genes for bulk, brawn, and big antlers without regard to latitude. Once this is accomplished, the antlers fall off in the winter only to be regrown from the skull up in time for the next mating season. In the wild, where survival depends on serendipity, the handiwork of evolution is here evident. The energy needed to grow a set of bull moose antlers made from the same skeletal bone tissue that forms the framework of the body is up to five times that needed for sustainment metabolism. Bull moose can lose up to twenty percent of their body weight in the run-up to the rut. This is about the same energy differential as that needed for a cow moose to give birth to a calf (sometimes two and rarely three). Getting enough protein from an herbivorous diet is hard enough, but getting enough calcium and phosphorus to make antler bone tissue is the real challenge. These minerals must be sequestered from extant bone, resulting in osteoporosis and weakness just in time for rut trial by contact combat. [5] While this must have been a good evolutionary result for moose in their current environment, it may not be sustaining in the long term. Evolution is a record of the past with no plans for the future.

The violence of antler assaults is testimony to the importance of sexuality in the evolutionary cycle of life in the cauldron of survival selection. It is just one of numerous characteristic traits that emerged as successful in promoting the moose brand, with cows attracting bulls and vice versa. Reproduction is a biological mandate, not an option. Meeting and mating for moose that live alone, out of sight in remote wilderness habitats, must rely primarily on sound and smell. For enhanced audio, bull moose ears and antlers operate in synchrony, with four key features to detect even the faintest cow moose bleat. The moose ear, or pinna, is about 65 square inches, more than fifty times larger than ours. Stereophony, the ability to determine sound directionality, is enhanced by a wide ear separation of a foot, twice that of humans. Moose ears operate independently, each able to rotate a complete 360 degrees and tilt 90 degrees away from vertical. And lastly, like the ear trumpet of yore, moose antlers concentrate sound to amplify the signal and enhance detection over background noise. Measurements using a taxidermic antlered moose head (there are regrettably many to choose from) revealed a fifty percent decibel increase when sound was measured at the base of the antler. It is probable that the wide, palmate shape of moose antlers, unlike the tubular shape of most other deer antlers, evolved as a more effective sound receiver. It is also probably not a coincidence that female moose have a better sound repertoire than their male counterparts, which is unique among cervids. [6]

Pheromones as aphrodisiacs operate over time to establish a geographical datum where a sound signal may only have provided a direction vector. Smells are considered the most enduring of animal senses. For moose, they are sine qua non. Mammalian olfactory systems consist of two separate “chemosensory” signals to different parts of the brain that end in the hypothalamus, the region controlling behavior and endocrine/hormonal response. The main olfactory system (MOS) samples the air for volatile chemicals across a broad spectrum for general situational awareness, such as food emanations. The accessory olfactory system (AOS) triggers a response in a specialized sensor called the vomeronasal organ (Jacobson’s Organ) that is thought to be exclusively for reproduction-related smells. There are four kinds of pheromones: Modulators influence general psychological state; Releasers have specific, immediate responses; Signalers are less specific and gradual; and Primers change behaviors over the longer term. For moose, bull rut urine is Cupid’s aromatic arrow. As a pheromone, it is a Releaser, causing “overt displays of attraction and copulation.” It is a complex compound of over 100 chemical constituents that has not been fully characterized. Courting consists of a rutting bull digging and urinating in a dirt pit, then wallowing in the resultant muck to obtain a whole-body bridal bouquet. Cows attracted to the smell follow suit until the nuptial party is fully aroused and sex ensues. [7] Life goes on according to the laws of nature, a new calf conceived.

Moose have a distinctive rounded, downward-drooping snout from which a fleshy outgrowth called a dewlap is suspended. The bulbous snout houses an elaborate snorkel system comprised of two fatty nose plugs that are held over the nostrils by powerful muscles. [8] The moose snorkel seals air lines against water intrusion in like manner to submarines and reef divers. The delicate, conical deer family muzzle was transformed into the moose snorkel-snout to facilitate consumption of aquatic plants, not infrequently with full-immersion dives to lake bottoms. In one observed forage, a moose dove for almost an hour, covering 100 square yards and swimming at speeds comparable to a paddled canoe. Moose, then, are semi-aquatic deer. Their effect on riparian ecosystems is substantial, as each deposits the equivalent of one hundred pounds of commercial fertilizer in a year. The moose dewlap projection is similar to the swollen necks of lizards and to bird wattles. There are numerous hypotheses about the function of dewlaps, ranging from sexual attraction to predator avoidance. The former is based on the “peacock’s tail argument,” in that having a huge encumbrance with no function must mean good genes, and the latter is based on increasing apparent size to scare off would-be attackers. [10] It is hard to see how either might apply to moose, whose sexuality is a matter of olfaction and whose huge bulk hardly needs accentuation. Since bull and cow moose both have dewlaps, albeit with a substantial amount of sexual dimorphism (the male dewlap is much larger), it is more likely that the dewlap is vestigial, like the tailbone coccyx in humans. The comical appearance of the droopy-snouted moose is epitomized by Bullwinkle, the sidekick of Rocky the Flying Squirrel, who lacks a dewlap altogether.

Moose have followed the same population fluctuations as the white-tailed deer from the bust of the nineteenth century to the boom in the twenty-first. The burgeoning human enterprise moving inexorably west and north through the 1800s depleted moose directly by hunting for both food and sport (moose would hardly call it that) and indirectly through habitat destruction from tree removal. As the human diaspora reversed in the urbanization of the 20th century, newly fallowed fields progressed to forests and moose moved southward to their original range across the northern tier of states. With an estimated fifty thousand moose in the northeast alone (and more across the upper Midwest), moose crossing signs now proliferate, warning motorists to be wary ― due to their hood-high height, a direct collision with a moose sends a one-ton weight through the windshield, usually with fatal results. This is especially a problem in winter, when moose seeking salt learn that treated roads are covered with it. [11] Increasing numbers of hungry moose wandering near homesteads also increase the likelihood of human encounters, which are not always benign. Unlike deer, moose can be quite aggressive, particularly during mating season and when accompanied by calves. Man’s best friend is equally a sworn moose enemy due to dogs’ penchant for barking and chasing. Dogs are accordingly subject to targeted moose attacks even without specific provocation. [12]

Moose population dynamics have long been a matter of scientific interest. Questions about environmental sustainability and the role of predators can only be properly answered with field observations, which are impractical to conduct in open ranges. Michigan’s Isle Royale National Park, situated fifteen miles off shore in Lake Superior, has served as an isolated experimental enclave for well over a century. In the early 1900s, several moose crossed over to the 200 square mile island and, absent wolf predation, proliferated. The moose population surpassed 3,000 in 1930, consuming most of the food supply, resulting in a period of starvation from which only one out of every fifteen moose survived. In about 1950, a population of wolves also immigrated to the island across a frozen channel, allowing for a comprehensive study of predator/prey behavior in the wild. Over time, a pack of about twenty wolves preyed almost exclusively on the young, old, and infirm, which stabilized the moose population at 500 with a self-sustaining food supply validated by measuring tree ring growth. [13] The now balanced ecosystem became one of the foundational bases for the reintroduction of apex predators to several western states. By 2017, the Isle Royale wolf population had dwindled to just a single mating pair. This was attributed to inbreeding, one of the unintended genetic consequences of the mammalian dominant male model. As a result, the moose population had tripled to 1,500 with commensurate overbrowsing damage. To remedy the otherwise inevitable die-off, six wolves have been released on the island as part of a new 20-year study by the National Park Service. [14]
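The stabilizing effect of predation on the Isle Royale moose herd can be illustrated with a classic Lotka–Volterra model using logistic prey growth. This is a sketch only: the rate parameters below are assumptions chosen so that the model’s equilibrium roughly echoes the historical figures (a carrying capacity near 3,000 moose, stabilizing around 500 moose and 20 wolves); they are not fitted to field data.

```python
# Illustrative Lotka-Volterra predator-prey model with logistic prey growth.
# All parameter values are assumptions chosen to echo the Isle Royale
# numbers quoted in the text; they are not fitted to observations.
r, K = 0.3, 3000.0   # moose growth rate (1/yr) and island carrying capacity
a = 0.0125           # predation rate per wolf per year (assumed)
b = 0.032            # conversion of predation into wolf growth (assumed)
m = 0.2              # wolf mortality rate (1/yr, assumed)

def simulate(moose, wolves, years, dt=0.005):
    """Integrate the model forward with simple Euler steps."""
    for _ in range(int(years / dt)):
        dM = r * moose * (1 - moose / K) - a * moose * wolves
        dW = b * a * moose * wolves - m * wolves
        moose += dM * dt
        wolves += dW * dt
    return moose, wolves

# Start from the 1930s situation: moose at carrying capacity, a few wolves.
M, W = simulate(3000.0, 2.0, years=600)
# The populations oscillate and damp toward the equilibrium
# M* = m/(a*b) = 500 moose and W* = r*(1 - M*/K)/a = 20 wolves.
```

With a linear predation term and logistic prey growth, the interior equilibrium is stable, so the simulated herd settles near 500 moose rather than cycling indefinitely ― the same balance the field study observed.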

In recent years, moose populations have plummeted, most notably in Vermont and Minnesota. While speculative, the effects of climate change are thought to play a key role. Three factors are germane: rising temperatures; changes to forest species composition; and changes in the species and numbers of parasites. The dense, double-layer pelt that protects moose from the rigors of winter becomes a heat blanket when temperatures rise. Heat stress in moose occurs when summer temperatures exceed 57 degrees F or winter temperatures exceed 23 degrees F. The only cooling remedies available are to seek shade, get wet, or move north, and many do. Tree species migrate north for the same reason ― they evolved to operate within a temperature band that balances evaporation with uptake. The maple and birch trees that are staples of moose cuisine are being driven out by the less palatable tough-barked oaks and hickories. [15] But the primary culprit for the struggling moose population is parasitic. White-tailed deer are carriers of the black-legged tick that causes Lyme Disease in humans. They are also carriers of the winter tick (Dermacentor albipictus) that infests moose. Some researchers have concluded that the winter tick is the primary cause of moose mortality in New England. An individual moose can host up to 50,000 ticks, causing lesions that result in the loss of almost all of the protective fur. In some areas, more than fifty percent of juvenile moose succumb. [16] The plight of polar bears has been the focus of climate change Cassandras. Moose may be next.


1. http://www.env.gov.nl.ca/snp/Animals/moose.htm    

2. Drew, L. I, Mammal, Bloomsbury Publishing, London, 2017, pp 9-25.

3. Millien, V. “Ecotypic variation in the context of global climate change: Revisiting the rules”. Ecology Letters. Volume  9  Issue 7, 23 May 2006  pp 853–869.

4. Nygrén, T. et al “Moose Antler Type Polymorphism: Age and Weight Dependent Phenotypes and Phenotype Frequencies in Space and Time.” Annales Zoologici Fennici 19 December 2007 Volume 44, Number 6,  pp 445-61.

5. Emlen, D. Animal Weapons, Henry Holt and Company, New York, 2014, pp 117-122.

6. Bubenik, George A.; Bubenik, Peter G. “Palmated antlers of moose may serve as a parabolic reflector of sounds”. European Journal of Wildlife Research. August 1, 2008, Volume 54 Number 3 pp 533–535.

7. Whittle, C. “Identification and Function of Male Moose Urinary Pheromones” PhD Thesis, University of Alaska, 2005.

8. Sharp, D. “Researchers take a look at the moose’s enigmatic nose”. USA Today. May 5, 2004.

9. Pennesi, E. “This diving, pooping moose is saving the ecosystem – for now” Science, 21 October 2018.

10. Bro-Jorgensen, J. “Evolution of the ungulate dewlap: thermoregulation rather than sexual selection or predator deterrence?” Frontiers in Zoology. 18 July 2016 Volume 13 Number 1 p 33.

11. Schueller, G. “Moose in a Mess” Defenders of Wildlife Magazine, Winter 2007.

12. Alaska Department of Fish and Game “What to Do About Aggressive Moose” at http://www.wildlife.alaska.gov/index.cfm?adfg=aawildlife.agmoose     

13. Lack D. “Population, Biological” Encyclopedia Britannica Macropedia W. Benton Publisher, University of Oxford, Volume 14 p 839.

14. Mlot, C. “Classic Wolf-Moose Study to be recreated on Isle Royale” Science Volume 361 Issue 6409, 28 September 2018. Pp 1298-1299.

15. Rines, K. “New Hampshire’s moose population vs climate change”. New Hampshire Fish and Game Department Report 5484.

16. Debow, J. et al “Effects of Winter Ticks and Internal Parasites on Moose Survival in Vermont, USA”. The Journal of Wildlife Management. 2 August 2021 Volume 85 Number 7 pp 1423–1439.


Common Name: Wintergreen, Teaberry, Checkerberry, Boxberry, Mountain tea, Deer berry, Ground holly, Spiceberry. Often confused with Partridgeberry due to similarities in ground-hugging habit, berry size, and color – In a forest of deciduous trees that is otherwise nearly denuded in winter, the clusters of bright green shiny leaves that cover the ground in large swaths are eye catching, a reminder that even in winter there is green ― wintergreen.

Scientific Name: Gaultheria procumbens – Jean François Gauthier was the royal physician and botanist for King Louis XV in the North American colony of New France. The Swedish/Finnish naturalist Peter Kalm, an apostle of Carl Linnaeus, honored Gauthier with the eponymous genus name in recognition of the support he had provided during Kalm’s expedition to North America in 1748. The species name is from the Latin verb procumbere, which means to fall, bend, or lean forward. Procumbent is a botanical term for plants that have stems that trail along the ground without putting down roots.

Potpourri: Wintergreen is a contradiction in terms. Winter is white snow and occasionally black ice. In the waning light of autumn, leaves of deciduous trees turn from green to yellow and/or red and eventually brown as they die and fall to become a part of the earthworm-churned humus below. Trees with leaves or needles that don’t fall in fall are called evergreen as they always are (ever green); winter has nothing to do with it. The seasonal oxymoronic distinction for the diminutive ground cover is likely a matter of perspective. The expanse of shiny bright green leaves trailing through the woods is in stark contrast to the browns and grays of the wintering forest floor. Wintergreen is most notable for the aroma and flavor of its leaves and berries. The name wintergreen accordingly evokes the freshness of the mountain air in winter and is a metaphor for natural purity. Like all floral emanations, however, wintergreen is produced by the plant for its own purpose absent any human influence.

Wintergreen is a member of the heath family, Ericaceae, derived from ereike which is the Greek name for heather. The ericoids are predominantly perennial, woody shrubs and herbs that occupy acidic uplands with low soil fertility ― they necessarily evolved survival strategies suited to these distressed, niche areas. Among the roughly 2,000 heather-type plants are some of the most noteworthy montane species including mountain laurel, pink azalea or Pinxter flower, mountain rosebay or Catawba rhododendron, and high-bush blueberry. In many cases, heath plants dominate their habitat, crowding out the competition to create a virtual monoculture in the understory. This is evidenced by the dense stands of mountain laurel and rhododendron in the northern and southern Appalachians respectively.  Wintergreen is their diminutive cousin, consisting of leathery, alternate leaves with a distinctive sheen and almost imperceptible teeth along the margin. Bell-shaped flowers scented to attract pollinating bumblebees become the red berries of autumn that persist into winter. Red attracts foraging birds to consume the pulp, depositing the indigestible seeds remotely in a dollop of nutritious excrement. [1]

Wintergreen flowers attract pollinators

Marginal habitats are especially challenging for all living things. Animals have the option to rove in and out seasonally in search of food and surreptitiously to avoid predators. Plants and fungi are sessile, growing only upward and outward from a set datum. Once they establish underground interconnected networks of roots and mycelia (the “wood wide web”), the die is cast. Any and all interaction with the outside world to attract benefactors and repel invaders becomes a matter of plant physiology. Metabolism is the general name for the chemistry of growth and decay, consisting of both the new tissue growth of anabolism and the energy creation and waste disposal of catabolism. Plants produce hundreds of thousands of primary and secondary metabolic chemicals, called metabolites, that serve for both attraction and repulsion. Metabolites with low molecular weight and an affinity for fat (lipophilic) are often volatile, becoming airborne due to evaporation when exposed to air at ambient temperature. The most effective way to communicate at a distance is to take advantage of atmospheric motion and dispersal. More than 7,000 volatile plant metabolites have been identified from foods and beverages. [2] The volatile oil produced by the wintergreen plant is methyl salicylate.

Methyl salicylate is a colorless liquid at room temperature and consists of 8 carbon, 8 hydrogen, and 3 oxygen atoms with the formula C8H8O3. [3] Fresh wintergreen leaves contain less than 1 percent methyl salicylate by dry weight (technically called weight percent, abbreviated wt%). The oil is extracted by bulk fermentation of harvested leaves; an enzyme breaks the chemical bond to release almost pure (96-99 wt%) methyl salicylate. [4] Volatile oils like wintergreen probably originated through the random mutations of evolution as a way to deter herbivores. While herbivore is generally applied to distinguish vegetation eating from meat eating among animals, here it refers to leaf-eating insects that were a primary threat to primordial plants. Methyl salicylate evolved independently of wintergreen in other plant species, as similar threats to survival yield similar reactions ― a well-documented phenomenon called convergent evolution. Over the eons, the original volatile plant oils evolved further to promote survival, taking on a wide variety of functions such as attracting some animals and repelling others. The complex nature of plant chemical interactions with the environment remains largely opaque to science, except for the few compounds subject to field study ― like methyl salicylate. [5]
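As a check on the formula above, the molar mass of C8H8O3 can be worked out from standard atomic weights. This small sketch is illustrative only; the helper function and dictionary names are invented for the example.

```python
# Molar mass of methyl salicylate (C8H8O3) from standard atomic weights.
# ATOMIC_WEIGHTS and molar_mass are illustrative names for this sketch.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol

def molar_mass(formula_counts):
    """Sum atomic weight times atom count for a simple formula."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in formula_counts.items())

methyl_salicylate = {"C": 8, "H": 8, "O": 3}
mass = molar_mass(methyl_salicylate)
# 8(12.011) + 8(1.008) + 3(15.999) = about 152.15 g/mol
```

The relatively low molecular weight of roughly 152 g/mol is consistent with the text’s point that small, lipophilic metabolites evaporate readily at ambient temperature.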

Botanists have suspected for over half a century that there was one phytochemical (phyton means plant in Greek) responsible for triggering plant defenses in response to alien invaders. Defensive behavior had been observed both within an individual plant and, remarkably, from one plant to another ― presumably via volatile chemical communication. Salicylic acid was a suspect for a time, but specific defensive responses failed to correlate with concentrations of the chemical. Continued research revealed that SABP2, the enzyme that converts methyl salicylate to salicylic acid, is key to the probable intraspecies signal. The observed phenomenon is attributed to a plant producing methyl salicylate at the damaged site and transmitting it through its vascular system, with SABP2 converting it to salicylic acid to trigger resistance remotely. [6] And that is not all. Methyl salicylate attracts predatory insects. Experiments with hops, an important crop for the brewing industry, revealed that four times as many species of predatory insects were attracted when controlled-release dispensers of methyl salicylate were placed in the field. This resulted in an equally dramatic drop in the number of spider mites, the primary arthropod pest of hops. [7] The predatory insects evidently developed methyl salicylate sensors as a means of locating an easy meal of mites. Everything is connected to everything else in ecology.

Oil of wintergreen is toxic and therefore potentially deadly at high dosage. One 5 milliliter teaspoon of oil contains about 6 grams of methyl salicylate, a derivative of salicylic acid. Aspirin, the first commercial analgesic, owes its effects to the release of salicylic acid, originally extracted from willow trees (genus Salix). One teaspoon of wintergreen oil is the equivalent of swallowing twenty aspirin pills (the normal dose is two tablets every 6 hours). Since ingested chemicals are spread throughout the body once absorbed through the walls of the small intestine, doses are normalized to body weight using the ratio of milligrams per kilogram, the equivalent of parts per million (ppm). A dose of 100 mg/kg can be fatal. As an example, only 3 grams or half a teaspoon of oil of wintergreen would be a potentially fatal dose for a child weighing 30 kilograms (about 65 pounds). [8] A popular medicinal field guide includes the caveat that oil of wintergreen is “highly toxic; absorbed through skin, harms liver and kidneys” (emphasis in the original). [9]
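The dose arithmetic above can be made explicit. This sketch simply restates the article’s figures (roughly 6 grams of methyl salicylate per 5 mL teaspoon of oil, and about 100 mg per kilogram of body weight as a potentially fatal dose); the function name is invented for illustration.

```python
# Restating the article's dose arithmetic for oil of wintergreen.
# Figures from the text; potentially_fatal_grams is an illustrative helper.
GRAMS_PER_TEASPOON = 6.0       # methyl salicylate in one 5 mL teaspoon of oil
FATAL_DOSE_MG_PER_KG = 100.0   # approximate potentially fatal dose

def potentially_fatal_grams(body_weight_kg):
    """Smallest potentially fatal amount of methyl salicylate, in grams."""
    return FATAL_DOSE_MG_PER_KG * body_weight_kg / 1000.0

child_dose = potentially_fatal_grams(30.0)      # 3.0 g for a 30 kg child
teaspoons = child_dose / GRAMS_PER_TEASPOON     # 0.5 teaspoon
```

The calculation confirms the text’s example: half a teaspoon of oil carries a potentially fatal dose for a 30 kilogram child.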

In spite of its toxicity, wintergreen is edible. The minute amount of methyl salicylate in the leaves and berries is well below the threshold for harmful effects in mammals and birds. It is the volatility of methyl salicylate that imparts the pleasant smell and taste of wintergreen that attracts them. The taste is perceived from the aroma, since there are only five basic taste receptors ― sweet, sour, salty, bitter, and savory ― and wintergreen is not among them. When the leaves or berries are chewed, the released volatiles ascend through passages from the mouth to the olfactory epithelium in the nasal cavity that connects to the brain where scent information is processed. There are essentially an unlimited number of different aroma combinations, and wintergreen is one of them. From the human perspective, wintergreen tea has been a North American staple ever since the colonists adopted the practice from Native Americans. Wintergreen berries consumed directly or made into pies and jellies have the same provenance. The wintergreen flavor, now mostly a food industry additive made from either laboratory-produced methyl salicylate or black birch trees (which also contain methyl salicylate), is widely used in chewing gum and other consumer concoctions. Wintergreen is an important food source for birds, particularly ground dwelling species like the ruffed grouse, comprising over 2 percent of their food intake year-round. Among mammals, wintergreen browse is estimated to comprise about 5 percent of total food input for white-tailed deer, particularly further north. Wintergreen berries constitute up to 2 percent of the diet of black bears. [10,11]

Medicinal is the middle ground between toxic and edible. From the dose perspective, size matters. An amount of oil consumed and distributed in a large body can, in its diluted state, still prevent smaller organisms like microbes from proliferating. Since methyl salicylate/oil of wintergreen probably arose as a deterrent to insects, that faculty persists. Most if not all antiseptic mouthwashes contain methyl salicylate, listed on the label as anti-gingivitis and anti-plaque, killing the causative bacteria. Wintergreen leaves, while they may be eaten whole by deer, show no evidence of insect chewing. Apparently, the taste or smell suffices to deter them. Some of the “family friendly plant based” insect repellents that eschew DEET contain wintergreen oil, taking advantage of this effect. Based on personal experience, these natural chemical sprays work better than their industrial counterparts in deterring the confounding cloud of gnats that dive-bomb into eyes and ears. This makes sense because plant chemical shielding is based on millennia of trial-and-error mutations, and those chemicals that persist in living plants must have been effective. Two comments about gnats ― a loose, descriptive term for small flying insects. First, there is a reason that they home in on eyes and ears. This must also be scent-based chemical attraction, in all likelihood signaling an ingredient necessary for gnat sex, judging by their kamikaze persistence. Second, gnats comprise multiple species, each with its own evolutionary history, and there is therefore no single chemical that will deter them en masse. This is why the so-called natural sprays, which can contain geranium oil, soybean oil, castor oil, cedarwood oil, citronella oil, peppermint oil, and lemongrass in addition to wintergreen oil, work better than chemical sprays, which mostly contain just DEET, most effective against ticks and mosquitoes.

Oil of wintergreen is an effective and potent pain medication. Methyl salicylate and its derivative salicylic acid are demonstrably one of the best treatments for everything from aching joints to migraines. Given the rudimentary understanding of the nervous system and pain propagation, the actual mechanism remains something of a mystery, but it surely has something to do with neurotransmitters and receptors. Aspirin was the only commercial pain killer until the advent of ibuprofen (Advil), acetaminophen (Tylenol), and naproxen (Aleve) starting in the mid-1970s. Native Americans used wintergreen broadly for a wide variety of ailments. Cherokees chewed the leaves for sore gums and to alleviate the symptoms of dysentery, in addition to using them as a substitute for chewing tobacco (which also contains methyl salicylate). The more northerly Iroquois Confederation tribes used wintergreen as a topical treatment for arthritis and rheumatism and internally as a blood-purifying tea. In many cases, the specific treatment employed a concoction of several different herbs including wintergreen; its individual contribution to salubrity is moot. [12] There is some science here, however. A randomized double-blind trial with 182 participants with acute pain, conducted using “topically applied rubefacients containing salicylates” in one group and a placebo in the other, resulted in a 50 percent pain reduction. Similarly, a trial with 429 participants with chronic musculoskeletal and arthritic pain yielded a moderate but lower pain reduction. [13] One must conclude that oil of wintergreen is one of the few validated herbal remedies; it actually works.


1. Niering, W and Olmstead, N, National Audubon Society Field Guide to North American Wildflowers, Alfred A. Knopf, New York, 1998, pp 496-510.

2. Goff, S and Klee, H “Plant Volatile Compounds: Sensory Cues for Health and Nutritional Value?” Science Volume 311 Issue 5762, 10 February 2006, pp 815-819.

3. http://chemister.ru/Database/properties-en.php?dbid=1&id=2994

4. https://hort.purdue.edu/newcrop/med-aro/factsheets/WINTERGREEN.html     

5. Pichersky, E. “Plant Scents” American Scientist Volume 92 Number 6, November – December 2004, p 514.

6. Leslie, M. “At Long Last, Pathologists Hear Plants’ Cry For Help” Science, Volume 318 Issue 5847, 5 October 2007, pp 31-32.

7. James, D. and Price, T.  “Field-testing of methyl salicylate for recruitment and retention of beneficial insects in grapes and hops” Journal of Chemical Ecology 30 August 2004, Volume 30 Number 8 pp 1613–1628.

8. Tidy, C “Salicylate Poisoning” Patient Professional Articles 2014 at https://patient.info/doctor/salicylate-poisoning  

9. Foster, S. and Duke, J. Peterson Field Guide Medicinal Plants and Herbs, Houghton Mifflin Company, Boston, 2000, p 31.

10. https://wildadirondacks.org/adirondack-wildflowers-wintergreen-gaultheria-procumbens.html

11. Angier, B. Edible Wild Plants, Stackpole Books, Mechanicsburg, Pennsylvania, 2008, p 262.

12. The Native American ethnobotany database lists all documented uses of drugs by different tribes. http://naeb.brit.org/uses/search/?string=gaultheria+procumbens

13. Mason, L.et al “Systematic review of efficacy of topical rubefacients containing salicylates for the treatment of acute and chronic pain” British Medical Journal 24 April 2004 Volume 328 Issue 7446 p 995.

Parasol or Lepiotoid Mushrooms

Of all the fungi that have the umbrella shape, the Parasol Mushroom is the epitome

Common Name: Parasol Mushroom – The umbrella analogy is applicable to all mushrooms that have a stem or stipe holding up a cap or pileus. Since the umbrella (from the Latin umbra meaning shade) is equally a protection against rain or sun, parasol (Latin parare to shield and sol, the sun) is equally apt. Parasol is applied only to this mushroom out of the thousands of possible candidates due to its exceptionally broad cap held aloft by a relatively narrow handle-like stem.

Scientific Name: Lepiota procera – The generic name is from the Greek lepos, meaning rind, husk, or scale in reference to the scurfy surface of the cap. Procerus is Latin for tall. It is equally known as Macrolepiota procera to reflect the breakup of the original Lepiota genus into many new genera according to genetic DNA-based associations.

Potpourri: The lepiotoid mushrooms occupy an uncertain niche between the agarics and the amanitas. The agarics are exemplified by the “supermarket” White Button Mushroom (Agaricus bisporus), a cultivar of the Meadow Mushroom (Agaricus campestris) originating in the caves of Paris in the seventeenth century. They are characterized by brown spores, free gills, and a partial veil. The amanitas are among the most notable of all mushrooms, including the deadly, pearly-white Destroying Angel (Amanita bisporigera) and the iconic red, white-dotted Fly Agaric (Amanita muscaria). Amanitas also have free gills and a partial veil, but with white spores in lieu of brown and, in addition, a full veil. Lepiotas have white spores, free gills, and a partial veil, combining the traits of agarics and amanitas. [1]

Key mushroom features

Partial and full veils, as the names imply, are thin membranes that protect (veil) the gilled spore-bearing surfaces of some mushrooms until just before spore release to minimize any damage that could accrue during their emergence from the subterranean domain of the fungal mycelium. The partial or inner veil of Lepiotas, Agarics, and Amanitas is attached from the edge of the cap to a ring or annulus on the stem. The full or universal veil of most Amanitas covers the entire mushroom (like an egg), leaving a bowl-shaped remnant called a volva at the base of the stem, and frequently “veil fragments” on the cap. Free gills are attached to the underside of the cap but not to the stem. Gill attachment is one of the primary features used by mushroom keys to distinguish one species from another. Notched and decurrent (descending the stem) gill attachments are the two primary alternatives to free gills. Another mushroom key distinction is the presence of scales on the cap of many lepiotoid mushrooms, especially the larger, “parasol-like” species. These structures are outgrowths from the cap and are not fragments of a gill-enclosing veil. In general, if a patch on the cap of a mushroom is flattened and light-colored, it may be a fragment of the universal veil. If it is angular and darker, it is a scale.

The Agaric – Lepiota Family (Agaricaceae) and the Amanita Family (Amanitaceae) are both in the order Agaricales. The fact that Amanita muscaria is also called Fly Agaric is indicative of the still unravelling origin story of gilled mushrooms. Carolus Linnaeus established the current system of biological classification or taxonomy with the publication of Species Plantarum in 1753. Fungi were placed in Cryptogamia, “hidden life” in Latin, one of the twenty-four classes of the Plant Kingdom. This designation was for those plants that had reproductive systems that had not yet been determined (and were therefore hidden), as spores not visible to the naked eye had yet to be rationalized as a means of sexual transmission. The four orders of Cryptogamia were ferns (Filices), bryophytes like mosses and liverworts (Musci), algae (which included all lichens), and fungi. There were ten genera of fungi, including Boletus for all mushrooms with pores instead of gills, Phallus for stinkhorns, Clavaria for coral fungi, Lycoperdon for puffballs, and Agaricus for all gilled mushrooms. [2] The bare-bones Linnaean system persisted for about a century, becoming the baseline for those inclined toward generalizations that were adequate for comprehending the basic organization of life, a group that has since become known as the “lumpers.” The “splitters” are their antithesis, carving out increasingly narrow speciation in search of the biological holy grail of monophylogeny ― having a single common ancestor.

The genus Lepiota was one of the first to be stricken from the ranks of Agarics. This occurred in the late nineteenth century, as spore color became one of the characteristics that served to further distinguish mushroom genera. Thus the original lepiotoids were defined as all white-spored mushrooms with free gills that were not in the fully veiled genus Amanita. By 1888, those mushrooms with radiating ridges on the cap that look like and are called pleats were placed in the new genus Leucocoprinus (leuco means white in Greek). Ten years later, the one green-spored Lepiota was moved to the genus Chlorophyllum. In 1948, Lepiotas with a different mechanism for growth involving what are called clamp connections were moved to Leucoagaricus, and the largest Lepiotas were moved to Macrolepiota (macro is Latin for big), in which L. procera is currently placed. The splitting continued as DNA became the final arbiter of species. Of the approximately 1,000 species of white-spored, free-gilled mushrooms lacking a universal veil, the only ones remaining in the original Lepiota genus are mostly smaller, with scales on the cap and banding on the stem. [3] But more recent phylogenic evaluations of the 22 extant genera have shown that “taxonomic circumscription and segregation of the genus Lepiota has been problematic.” [4] Which is why, for the sake of consistency in field identification, the Parasol mushroom is used here as an archetype.

Since this article is about Parasol Mushrooms, it is apropos to address the mushroom umbrella analogy. The logic of syllogism would suggest that if it looks like an umbrella, then it must be a rain shield. In reality, the cap has the opposite function ― to retain water. The umbrella shape is to ensure that there is enough humidity as water vapor on the underside of the cap for water droplets to condense in the vicinity of the spore-bearing gills. To explain why this is so, a few points about mushroom physiology must be noted. A mushroom is a fruiting body that is produced by a fungus, the tangled mass of thread-like strands called a mycelium that is wholly underground or inside dead wood. The only function of the fruiting body mushroom is to spread the reproductive spores into the environment to propagate the species (like apples on an apple tree). When a fungal mycelium is ready to reproduce, it forms a self-contained and out of sight proto-mushroom called a primordium. Once fully formed, the fungus waits for promising weather, which is quite frequently after substantial rain has fallen. This is why mushrooms mysteriously appear overnight after rain; they are already there, ready, and waiting. Once the mushroom erupts from its hypogeal lair, the cap opens, separating the partial and/or universal veil if it has one, exposing the gills to the surrounding air for the first time. [5]

Spore shooting force F

So why does the air under the mushroom cap need to have plenty of water vapor? Because the spores must literally be shot away from the gills so that they can freefall into the wind for dispersal. The motive force that ejects each spore outward is the result of the condensation of water vapor into a tiny droplet. Gills are like vertical, side-by-side slats suspended from the underside of the cap. The reason for this arrangement is to maximize the surface area available to produce as many spores as possible; a flat surface would provide only a small fraction of the area that gills do. Because the probability of any one spore successfully germinating to produce a new fungus is vanishingly small, mushrooms need to produce millions of spores to succeed. Since the spores are mounted on the vertical faces of the gills rather than on a horizontal surface, a spore, if simply released, would remain stuck to the surface. Nature’s evolutionary solution is to literally shoot the spore horizontally, away from the side of the gill into the air gap between gills, so that it can then fall due to gravity. Each spore is held at the tip of a stalk called a sterigma at a point called the punctum lacryman (Latin for the “point that cries”) depicted in the figure at A. It is here that the water vapor condenses, shown in B. The water droplet extends onto the spore surface in C due to surface tension, causing the center of gravity of the spore/water mass to shift rapidly and create what is called a surface tension catapult force (marked with an F in the figure). The minuscule (about 10 microns in diameter) spore is ejected outward at a speed of about 10 miles per hour with an acceleration of 25,000 times the force of gravity (G-force). First hypothesized at the beginning of the 20th century, the catapult force was captured by high-speed camera about twenty years ago. The mechanics were demonstrated conclusively by modelling the spore/sterigma interface using polystyrene hemispheres just five years ago. [6] Fungi have been described as fantastic with good reason.
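The launch figures quoted above (roughly 10 miles per hour and 25,000 G) imply remarkably small times and distances, which this back-of-the-envelope kinematics sketch works out. The uniform-acceleration assumption is a simplification for illustration.

```python
# Back-of-the-envelope kinematics for the spore catapult, using the
# figures quoted in the text: launch speed ~10 mph, acceleration ~25,000 g.
# Assumes constant acceleration, which is a simplification.
MPH_TO_MS = 0.44704
G = 9.80665                    # standard gravity, m/s^2

v = 10.0 * MPH_TO_MS           # launch speed, about 4.47 m/s
a = 25000.0 * G                # acceleration, about 2.45e5 m/s^2

t = v / a                      # time to reach launch speed: ~18 microseconds
d = v**2 / (2.0 * a)           # distance covered while accelerating: ~41 micrometers

spore_diameter = 10e-6         # ~10 micron spore, from the text
launch_lengths = d / spore_diameter   # the whole launch spans only ~4 spore diameters
```

In other words, the entire catapult event is over in tens of microseconds and spans just a few spore diameters ― exactly why it eluded observation until high-speed photography.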

Green-spored Lepiota

Parasol mushrooms are extolled as one of the commendable edible species, with commentary that ranges from “choice, with caution” [7] to “tender caps … edible and highly regarded by many mycophagists.” [8] An abundance of caution is warranted. There are a number of species quite similar in appearance to L. procera with edibility caveats that range from unknown, and therefore not recommended, to demonstrably poisonous. There is even one species sometimes known as the Deadly Lepiota (L. josserandii), since it has the same amatoxin chemicals found in the most notable of all deadly mushrooms, the Destroying Angel (Amanita bisporigera) and the Death Cap (A. phalloides). The simple fact that Parasol-type mushrooms share characteristics with the problematic amanitas (white spores and partial veil) should raise a red flag for potential misidentification. Absent a complete and thorough assessment of a parasol-like mushroom by a competent expert, to include spore color, veil attachments, and scale configuration, consumption is unwise. There is an old saying: there are bold mushroom hunters and old mushroom hunters, but no old, bold mushroom hunters. The other aphorism of note is that you can eat any mushroom – once. The alleged ubiquity of deadly mushrooms in Anglo-Saxon culture and literature is a matter of phobia and not fact. The North American Mycological Association (NAMA) has maintained a national mushroom poisoning database since 1982. It is not comprehensive since it relies on proffered reports; there is no requirement for medical and veterinary establishments to report mushroom poisonings. However, it provides some baseline data that is instructive. There were a total of about 1,700 mushroom poisoning reports over thirty years, with the vast majority involving ingestions by young children and dogs. Almost all resulted in various degrees of temporary gastrointestinal distress and full recovery with no lingering long-term effects.
Contrary to the perception of the general public, only about 10 percent of poisonous mushrooms―i.e. those which cause nausea or diarrhea (and sometimes both)―are potentially deadly. Deadly plant toxins like those of hellebore and white snakeroot are much more common than deadly fungal toxins. There is one lepiotoid mushroom that deserves special attention. According to NAMA, “Of the mushrooms generally considered poisonous, the one far most often consumed is Chlorophyllum molybdites. It is large and meaty; it resembles a generally choice edible, it tastes good, and it grows in lawns and parks. Chlorophyllum molybdites quickly rewards the unwary with gastric distress, vomiting, and diarrhea lasting several hours.” [9]

The white gills of a young Green-spored Lepiota

The Green-spored Lepiota (Chlorophyllum molybdites) is variously known as “the vomiter” and “the gut-wrencher” for its notable stomach and bowel emptying effects. [10] There are a number of characteristics that can be used to distinguish the edible lepiotoid mushrooms from their poisonous doppelgänger. Habitat, distribution, and season are the most notable: Green-spored Lepiotas appear in clusters in grassy areas in the heat of summer, while their edible cousins are found singly in mulch and open woods in the fall. What about the spore color? While it is true that C. molybdites has dingy greenish spores when fully mature, the gills are white and only turn slightly dingy with age. The green, although unique among mushrooms, is more a nuance than a convincing traffic-light color. Most edible fungi are better when collected young and fresh; just about everything (and everyone) gets tough and sinewy with age. This, then, is the bane of the mushroom hunter: Green-spored Lepiotas gathered while still immature have white gills with scarcely a hint of the tell-tale green. The scenario: a flush of succulent-looking white mushrooms, just like the ones you buy at the store, pops up in the courtyard of your apartment complex, and you rush out to gather, cook, and eat them. A beautiful summer day turns into a medical emergency in a matter of hours. [11]


1. Arora, D. Mushrooms Demystified, 2nd edition, Ten Speed Press, Berkeley, California, 1986, pp 293-310.

2. Linnaeus, C. Species Plantarum, Stockholm, Sweden, 1753, pp 1061-1186.

3. Vellinga, E. “An Overlooked California Lepiota- Old or New?” Fungi Magazine, Volume 2 Number 4, Fall, 2009, pp 7-9.

4. Johnson, J. and Vilgalys, R. “Phylogenetic systematics of Lepiota sensu lato based on nuclear large subunit rDNA evidence”. Mycologia, Volume 90 Number 6, June 1998, pp 971–979.

5. Kendrick, B. The Fifth Kingdom, Focus Publishing, Newburyport, Massachusetts, 2000, pp 80-98. This is the single best desk reference for the Kingdom Fungi.

6. Chang, K. “Fungi Physics: How Those Spores Launch Just Right” New York Times, 27 July 2017. https://uphyl.pratt.duke.edu/NYTimes_Fungi_2017.pdf    

7. Lincoff, G. The National Audubon Society Field Guide to North American Mushrooms, Alfred A. Knopf, New York, 1981, p 520.

8. Roody, W. Mushrooms of West Virginia and the Central Appalachians, The University Press of Kentucky, Lexington, Kentucky, 2003, pp 72-73.

9. Beug, M. “An Overview of Mushroom Poisonings in North America”. The Mycophile, Volume 45 Number 2, March/April 2004.

10. Salzman, J. “Your Yard Might Be Home to the ‘Vomiter’ Mushroom”, Huffington Post, 29 April 2011.

11. Hedgpeth, D. “Virginia family hospitalized after eating wild mushrooms found at apartment complex”, Washington Post, 22 August 2018.