Stinkhorns

Common Name: Stinkhorn, Carrion fungus – Stink can mean either emitting a strong, offensive odor or, figuratively, to be offensive to morality or good taste. Both interpretations apply according to the context herein. Horn is a foundational word of the Indo-European languages that refers to the bony protuberances that adorn the heads of many ungulates, such as cattle and goats. Because the horn is also associated with supernatural beings like the devil, that connotation may have been the original intent for its use here. Devil’s dipstick, an idiomatic name for some species of stinkhorn, suggests this interpretation.

Scientific Name: Phallaceae – Phallus is the Greek word for penis. There can be no doubt that the family name was selected for verisimilitude, the remarkable resemblance of the stinkhorn to male mammalian anatomy, notably that of humans.

Potpourri: Stinkhorns are a contradiction in terms. For some they are the most execrable of all fungi and for others they are elegant, one species even being so named (see Mutinus elegans). They range in size and shape from the very embodiment of an erect canine penis (M. caninus, named for its resemblance to that of a dog) or human penis (like Phallus ravenelii in above photograph) to colorful raylike extensions ranging outward and upward like a beckoning, stinking squid (picture at right). In every case they are testimony to the creativity of the natural forces of evolution, seeking new ways to survive the rigors of competition. Like the orchids that extend in intricate folds and colors of “intelligent design” to attract one particular insect to carry out pollinator duties, stinkhorns have become “endless forms most beautiful and most wonderful” whose improbability defies chance and points to evolution as the explanation. [1] The priapic and tentacled extensions can only have been the result of successful propagation for the survival of the species, just like Homo erectus.

The phallic appearance of some stinkhorns is not as outré as it seems at first blush. The priapic shaft elevates spores to promote dissemination. Like a fungal Occam’s razor, stinkhorns evolved the simplest solution―growth straight upward with no side branches, placing the spore-laden gleba at the bulbous apex. The fungus accomplishes this in a manner similar to humans, using water pressure to hold the shaft erect in lieu of blood pressure; hydrostatic rather than hemodynamic. The phenomenon is part of the fungal life cycle that starts in the mycelium, the underground tangled mass of threadlike hyphae that is the “real fungus.” The stinkhorn begins in the mycelium as an egg-shaped structure called a primordium containing the erectable shaft surrounded by spore-laden gleba held firmly in place with jellied filler cloaked with a cuticle. It is the fruiting body of the fungus. When environmental conditions dictate, the “egg hatches,” and the water-pressurized shaft grows outward and upward, lubricated by the jelly, at a rate of about five inches an hour until it reaches fly-over country. Here the biochemistry of smells, including hydrogen sulfide (rotten eggs), mercaptan (rotting cabbage), and some unique compounds aptly named phallic acids, draws flies from near and far. In ideal conditions, the slime and spores will all be gone in a few hours, and the bare-headed implement of reproduction will soon become flaccid.

Stinkhorns belong to a diverse and now obsolescent group of fungi called Gasteromycetes, from gaster, Greek for “belly,” and mykes, Greek for fungus. Known in translation as stomach fungi, they are characterized by the enclosure of their spores inside an egg-shaped mass called a gleba (Latin for “clod”). Hymenomycetes alternatively have their spores arrayed along a surface called a hymenium (Greek for “membrane”) and are by far the larger grouping. The hymenium surface can take the form of gills or pores on the underside of mushroom caps or any of a wide range of other shapes, from the fingers of coral fungi to the cups of tree ear fungi. The Gasteromycetes include puffballs and bird’s nest fungi. [2] In the former, the ball of the puffball is the gleba. On aging, a hole called an operculum forms at the top so that the spores can be ejected (puffed) by the action of falling raindrops for wind dispersal. Each “egg” in the bird’s nest is a gleba and is also forced out by the action of falling rain. The projectile gleba affixes to an adjacent surface from which spores are then also air dispersed. Stinkhorns evolved to distribute the spores from the gleba along a completely different evolutionary path: they attract insects to the stink at the top of the horn.

Flowering plants, the angiosperms, are ubiquitous, successful in their partnership with many insects to carry out the crucial task of pollination. While this is primarily a matter of attracting bees and bugs with colorful floral displays and tantalizing scents promising nectar rewards, there are odoriferous variants. Skunk cabbage earned its descriptive name from the fetid aroma that attracts pollinating flies to jumpstart spring with winter’s snow still on the ground. Another member of the Arum Family, the cuckoopint, attracts flies with its smell and then entraps them with a slippery, cup-shaped structure embedded with downward pointing spines, releasing them only at night after they are coated with pollen to then transport. Stinkhorns produce a malodorous, gelatinous slime containing their reproductive spores to which some insects, mostly flies, are drawn. It is not clearly established whether the flies eat the goo and later defecate the spores with their frass [3] or whether they are only attracted by the smell, perform a cursory inspection, and then fly off with spores that “adhere to the bodies of the insects and are dispersed by them.” [4] Some insight can be gained from entomology, the study of insects. Do the flies eat the slime or do they merely wallow in it?

The primary insects attracted to stinkhorn fungi are the blow flies of the Calliphoridae Family and the flesh flies of the Sarcophagidae Family. The term blow fly has an interesting etymology that originates with their characteristic trait of laying eggs on meat that hatch into maggots, the common name for fly larvae. Any piece of meat left uncovered in the open long enough for the fly eggs to hatch was once called flyblown, which gradually took on the general meaning of anything tainted. Reversing the order of the festering-meat term gave rise to blow fly as the name for its cause. As a purposeful digression, wounded soldiers in the First World War left unattended for hours on the battlefield were sometimes found to be free of the infections that plagued those treated immediately because the blow fly maggots consumed their necrotic tissue. It is now established that the maggots also secrete a wound-healing chemical called allantoin (probably to ward off competing bacteria), and they are sometimes intentionally used to treat difficult infections. Flesh flies, as the family name suggests (Sarcophagidae means flesh eating in Greek), are also drawn to carrion, where they lay eggs for their larvae to eat. [5] If blow flies and flesh flies are attracted to stinkhorns by the smell of rotting meat, they would presumably lay eggs there. So the conundrum is: what happened to the maggots? Eggs take a few days to hatch and larvae feed for a week or two, but stinkhorns last only several days, their slime stripped away in half that time.

Field experiments have verified that stinkhorn fungal spores are indeed ingested by flies. Specimens of Drosophila, the celebrated fruit fly of early genetic studies, were found on dissection to hold over 200,000 stinkhorn spores in their intestines. Given the volume available in a fruit fly gut, this quantity adds some perspective to the vanishingly small size of spores. The larger blow flies were found to contain more than a million and a half spores in a similar field evaluation. It was further demonstrated that spores passing through insects and defecated in their frass were fully functional. [6] This is not too surprising, as spores evolved for survival under hot, cold, or desiccated environmental extremes; the fly gut is relatively benign by comparison. It is true, then, that flies eat spore-bearing material. It is equally evident that there are no maggots in stinkhorn slime, even though laying eggs is what the average blow fly does when offered smelly meat. Diversity provides a reasonable basis for this contradiction. There are over 1,000 species of blow fly, each to some extent seeking survival within a narrowed niche. Flies of the order Diptera are noted for their propensity to mutate and adapt. Some species of blow fly and flesh fly deviated from the norm to consume stinkhorn slime for nutritional energy and lay eggs elsewhere. The stinkhorn and the flies it attracts are an example of mutualism. Flies are attracted to and gain nutrition from what is essentially a fungal meat substitute, and the fungus gains spore dispersion. Many fungi are excellent sources of protein, containing all eight essential amino acids needed by humans. Flies need protein too.

The startling, trompe l’oeil appearance of a penis in the middle of a forest no doubt attracted humans as soon as there were humans to attract. The first written account of stinkhorns is in Pliny the Elder’s Natural History, written in the first century CE based on observations made on his military travels throughout the Mediterranean basin. John Gerard’s sixteenth-century Herball identifies the stinkhorn as Fungus virilis penis arecti forma, “which wee English call Pricke Mushrum, taken from his forme.” [7] The bawdiness of Shakespeare’s rude mechanicals gave way to Victorian Age corsets and high collars, where there was no place for a “prick mushroom.” Charles Darwin’s daughter is credited with the ultimate act of puritan righteousness. Ranging about the local woods “armed with a basket and a pointed stick,” she sought the stinkhorn, “her nostrils twitching.” On sighting one she would “fall upon her victim, and then poke his putrid carcass into her basket.” The day ended ceremoniously with the day’s catch “brought back and burnt in the deepest secrecy on the drawing room fire with the door locked because of the morals of the maids.” [8] As the modern era loomed and sexuality came out of the bedroom onto the dance floors of the roaring twenties, stinkhorns regained respectability.

The Doctrine of Signatures was the widely held belief that God intentionally marked/signed all living things to help humans determine how best to exploit them. To those who subscribed to this philosophy, a penis shape could only mean good for sexuality, which in the rarefied view of the pious could refer only to procreation. Eating stinkhorns undoubtedly arose as a way to enhance virility, as an aphrodisiac, or probably both. Dr. Krokowski, in Thomas Mann’s The Magic Mountain, lectures about a mushroom “which in its form is suggestive of love, in its odour (sic) of death.” [9] The dichotomy of budding love and the stench of death leaves a lot of room for speculation across the middle ground. Stinkhorn potions have been proffered as a cure for everything from gout to ulcers and proposed as both a cure for cancer and the cause of it. [10] There is insufficient research to conclude that any of this is true.

Stinkhorns as food, from both a nutritional and a gustatory standpoint, are at the fringes of the brave new world of mycophagy, fungus eating. Food is a matter of culture that extends from the consumption of frog legs in France to the mephitic surströmming of Sweden. Mushrooms have been on the menu for centuries, from the shiitake logs of Japan to the champignons of Parisian caverns, but almost everything else was considered a toadstool. From the strictly aesthetic standpoint, the consumption of the stinkhorn “egg” dug up before it has a chance to become a smelly phallus has some appeal. Charles McIlvaine, the doyen of mycophagists whose goal at the dawn of the last century was to make the public aware of the “lusciousness and food value” of fungi, describes stinkhorn eggs as “bubbles of some thick substance … that are very good when fried.” His conclusion is that “they demand to be eaten at this time, if at any.” [11] Of more recent note, Dr. Elio Schaechter wrote that sautéing stinkhorn eggs in oil resulted in “a flavorful dish with a subtle, radish-like flavor. The part of the egg destined to become the stem was particularly crunchy, resembling pulled rice cakes.” [12] I am reminded of a Monty Python episode in which Terry Jones is upbraided for selling chocolate-covered frogs made from real frogs, the bones necessary to give the confection a proper crunch.

Netted Stinkhorn

Not all members of the Stinkhorn family look like a penis. Some have lacey shrouds that extend downward from the tip like a hoop skirt with a hint of femininity. These scaffolds are not for decoration but for scaling. Since each stinkhorn species is in partnership with some form of gleba-eating insect, the rope ladder can only be to allow crawling bugs like carrion beetles to climb to the top to access the sporulated slime. The local species is Dictyophora duplicata (net-bearing, growing in pairs), commonly known as netted stinkhorn or wood witch. After the bugs have finished with their slime meal, the result reminds some of a bleached morel. While netted stinkhorns are relatively rare in North America, they are abundant in Asia.

Bamboo Fungus

The netted stinkhorn called Zhu Sun, meaning bamboo fungus after its native habitat, is one of the most sought-after delicacies of Chinese cuisine. It featured prominently in banquets of historical importance, including Henry Kissinger’s visit to China in 1971 to reestablish diplomatic relations during the Nixon administration. Kissinger reputedly praised the meal for its quality, but it was never clear whether this was a matter of diplomacy or taste. Part of the bamboo stinkhorn’s esteem stems from its health benefits according to ancient Chinese medicine. Recent research has confirmed that consumption correlates with lower blood pressure, decreased blood cholesterol, and reduced body fat. In the 1970s the price of bamboo fungus was over $700 per kilogram, but commercial cultivation methods were developed that drove the price down to less than $20 per kilogram. [13] It can be found in many Asian markets. The back of the package depicted above offers that “bamboo fungus is a magical fungus. It grows in a special environment, free from pollution. Once mature, it emits a notable light fragrance. Its shape is light weight. Its flavor is delicious. Its texture is silky. It is very nutritious. It is an ideal natural food.” Kissinger may or may not agree.

References

1. Darwin, C. On the Origin of Species, Easton Press, Norwalk, Connecticut, 1976 (original London 24 November 1859). P. 45.

2. Kendrick, B. The Fifth Kingdom, 3rd Edition, Focus Publishing, Newburyport, Massachusetts, 2000. pp 98-101.

3. Wickler, W. “Mimicry”, Encyclopedia Britannica, 15th Edition, Macropedia Volume 12, William Benton Publisher, Chicago, 1974, p 218.

4. Lincoff, G. National Audubon Society Field Guide to North American Mushrooms, Alfred A. Knopf, New York, 1981, pp 831-835.

5. Marshall, S. Insects, Their Natural History and Diversity, Firefly Books, Buffalo, New York, 2006, pp 407-408, 481-484.

6. O’Kennon, B. et al, “Observations of the stinkhorn Lysurus mokusin and other fungi found on the BRIT campus in Texas” Fungi, Volume 13, Number 3, pp 41-48.

7. Money, N. Mr. Bloomfield’s Orchard, Oxford University Press, New York, 2002, pp 1-8.

8. Raverat, G. Period Piece: A Cambridge Childhood, Faber and Faber, London, 1960, p 136.

9. Mann, T. The Magic Mountain, translated by John E. Woods, Alfred A. Knopf, New York, 1927, p 364.

10. Arora, D. Mushrooms Demystified, 2nd Edition, Ten Speed Press, Berkeley, California, 1986, pp 766-778.

11. McIlvaine, C. and Macadam, K. One Thousand American Fungi, Dover Publications, New York, 1973 (originally published in 1900 by Bowen-Merrill Company), pp xiii, 568-576.

12. Schaechter, E. In the Company of Mushrooms, Harvard University Press, Cambridge, Massachusetts, 1997, pp 168-173

13. Chang, S. and Miles, P. “Dictyophora, formerly for the few”, Mushrooms: Cultivation, Nutritional Value, Medicinal Effect, and Environmental Impact, 2nd Edition, CRC Press, Boca Raton, Florida, 2004, pp 343-355.

Wineberry

Common Name: Wineberry, Wine raspberry, Japanese wineberry, Purple-leaved blackberry, Hairy bramble – Wine is the color of the tiny hairs that cover the stem and carpels, a dark red similar to that attributed to red/burgundy grapes. Berry is a general term applied to any small fruit. It originally derived from the Gothic word weinabasi, a type of grape, evolving to the Old English berie. Berry is one of only two native words for fruit, referring to anything that was like a grape. The other is apple, given to larger, pome-like fruits. Weinabasi → Wineberry.

Scientific Name: Rubus phoenicolasius – Rubus is Latin for “bramble-bush,” of which blackberry was the best known of the many types of prickly shrubs that comprise the genus. The species name means purple-haired, from the Greek phoinix (purple) and lasios (hairy). [1] Phoinix was also the origin of Phoenicia, the ancient land on the eastern Mediterranean Sea coast, present day Lebanon. This littoral area was the source of sea snails from which a very valuable purple dye was extracted. Clothing dyed purple was thus a symbol of wealth and prestige, the term “royal purple” a vestige of its importance. Before the advent of synthetic dyes in the mid-19th century, color could only be naturally sourced, like blue from indigo.

Potpourri: Wineberry would not make a very good wine and it isn’t really a berry. The first wines were naturally fermented thousands of years ago absent any knowledge of the pivotal role of yeast. The sugars in fruit were the food source for natural local yeasts that gave off alcohol as a byproduct of their metabolism. Grapes are the only common and prolific fruits that have enough natural sugar to produce the “weinabasi” libation discovered by fortuitous accident. Wineberries, like all of the other fruits from which wines might be made, must be supplemented with extra sugar (a process called chaptalization) to feed the yeast fungus. Wineberry wine, albeit with a tart berry-like taste, would be a far cry from the rich flavor that the best French terroir can impart. A berry is a fruit with seeds embedded in the pulpy flesh, like grapes, watermelons, and tomatoes. Wineberry, like all brambles that comprise the genus Rubus, notably blackberry and raspberry, is an aggregate fruit with a multitude of tiny, clumped “berries.” One could presumably refer to one wineberry fruit as wineberries. Regardless of its unlikely name, wineberry has spread far and wide, a nuisance to the point of being declared an invasive species in the Appalachian Mountain and coastal regions of the Mid-Atlantic states including Maryland and Virginia. [2]

Wineberries are native to central Asia, extending eastward to the Japanese archipelago. They were intentionally introduced into North America by horticulturalists in the 1890s to hybridize with native Rubus plants, the goal being to improve on nature’s accomplishment with new cultivars offering a greater yield of bigger berries and/or resistance to plant diseases and pests. [3] The compelling rationale for new edible crops at this point in time was that world population had surpassed one billion, eliciting the global food shortage concerns first raised by Thomas Malthus one hundred years earlier. The eponymous Malthusian principle, that population rises geometrically (1, 2, 4, 8 …) while agriculture rises only arithmetically (1, 2, 3, 4 …), leading to inevitable famine, was the impetus for improvements in agricultural products and methods. The first Agricultural Experiment Station in the United States was inaugurated in New York in 1880 with the express purpose of addressing this challenge. Its director E. Lewis Sturtevant established the precept of conducting experimental agriculture to develop new plant foods. By 1887, with 1,113 cultivated plants and another 4,447 plants with edible parts, research focus shifted to developing fruit varieties. The bramble fruits of the genus Rubus, with about 60 known species and a well-established penchant for hybridization, were considered good candidates for experimentation. Wineberries from Asia became part of the mix. [4] As it turned out, the first green revolution of manufactured fertilizer using the Haber-Bosch process (see Nitrogen Article) and the second green revolution internationalizing Norman Borlaug’s high yield wheat put off the impending Malthusian famine, at least so far. There is every reason for Rubus breeding to continue. [5]
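
Malthus’s arithmetic can be made concrete with a short sketch. The Python snippet below is purely illustrative; the starting values and rates are arbitrary assumptions, not Malthus’s own figures. It doubles a population each generation while adding a fixed increment to the food supply:

    def malthus(generations=8, population=1.0, food=1.0, increment=1.0):
        # Population doubles each generation (geometric: 1, 2, 4, 8 ...)
        # while food gains a fixed increment (arithmetic: 1, 2, 3, 4 ...).
        for g in range(generations + 1):
            state = "famine" if population > food else "fed"
            print(f"generation {g}: population={population:g}, food={food:g} ({state})")
            population *= 2
            food += increment

    malthus()

Within a few generations the doubling population overtakes the incremental food supply, and the gap thereafter widens without bound, which is the crux of the Malthusian argument.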

Wineberries nearly ripe beneath sepal husks

Bramble plants of the genus Rubus are so successful at dominating disturbed habitats that bramble has become a byword for any dense tangle of prickliness. Wineberry is only a problem because it is better at “brambling” than many other species, even though the stalks are covered with wine-colored hairs and have no prickles. It spreads both vegetatively with underground roots and by seeds dispersed in the feces of frugivores, animals that eat fruit. The wineberry plant consists of a rigid stem called a cane that extends upward, unbranching at first, reaching lengths of up to 9 feet. Vegetative spreading is enhanced by tip-rooting, which occurs when the longer canes (> 3 feet) arch over and reach the ground, where adventitious roots form to establish an extension. In dense clusters, tip-rooting predominates. It takes two years to make a wineberry: the first-year primocanes devote all growth to cane extension and leaf formation for photosynthesis, while the second-year floricanes become woody and produce flowers that become fruits if fertilized. Wineberry flowers are hermaphroditic and are therefore less dependent on pollinators, since there is no need to transport male pollen from the stamen of one flower to the female pistils of another. [6] Each wineberry fruit is protected by husks densely covered with the signature wine-colored hairs that are remnants of the sepals that comprise the calyx at the base of the flower. [7]

Wineberry is just one of many invasive species that have come to dominate large swaths of the forest understory in the twenty-first century. Like kudzu planted for soil remediation of the Dust Bowl and plantain imported as a vital European medicinal, wineberry was introduced with good intention―the improvement of native berry stocks through hybridization. But, as has become increasingly obvious, the complexities of local ecology can make mountains from molehills as “Frankenplants” take advantage of their reproductive strengths over the competition. There are a number of reasons for the success of wineberry in its unwitting but instinctual quest to become the one and only species wherever it can. It is an aggressive pioneer plant in any disturbed area. One study in Japan found that wineberry covered almost two percent of an extensive ski area after clearcutting, showing high phenotypic plasticity in its adaptations. Its tolerance of the shade cast by trees as old fields succeed to forest promotes the dense wineberry thickets that are the hallmark of its aggression. [9] On the other hand, all Rubus brambles are apt to dominate disturbed areas like roadside cuts, where one typically finds both raspberries and blackberries in addition to wineberries. There is some irony in that recent DNA analysis of the genus indicates that the first Rubus brambles evolved in North America and subsequently invaded Eurasia without any human intervention. They are brambles, after all. [10]

A bramble of wineberry canes

On the positive side, wineberries are tasty and nutritious, providing a snack for the passing hiker and food for the birds and the bees. A popular field guide to edible plants includes wineberries with raspberries and blackberries as uniformly edible, notably “good with cream and sugar, in pancakes, on cereal, and in jams, jellies, or pies.” [11] The consumption of Rubus fruits by humans precedes the historical record. Given that Homo erectus evolved from the fruit-eating great apes, the impetus would be a matter of wired instinct. It is hypothesized that primates are the only mammals with red color vision because of evolutionary pressure to find the usually reddish fruit needed for sustenance and survival in the jungle forest. Historical documentation of the consumption of aggregate fruits was established by Pliny the Elder in the first century CE. He noted in describing raspberries that the people of Asia Minor gathered what he called “Ida fruits” (from Turkey’s Mount Ida). The subgenus of raspberries which includes wineberries is appropriately named Idaeobatus. It is probable that the Romans began to cultivate some form of raspberry as early as 400 CE. [12] Rubus aggregates were also important medicines in addition to the more obvious nutritional attributes. They contain secondary metabolites such as anthocyanins and phenolics, which are strong antioxidants contributing to general good health. Native Americans used them for a variety of ailments ranging from diarrhea to headache, although there is no indication that the effects were anything beyond placebo. [13]

All things considered, it is hard to get worked up over wineberries as pernicious pests. Granted, they tend to spread out and take over, but then again, so do all of the other brambles. In most cases, the area in question falls into the category of a “disturbed” habitat. While this could be due to storm damage, it is almost universally due to human activities. Road cuts through the forest may be necessary for any number of reasons, but they are initially unsightly tracts of rutted mud unsuited for hiking. Once nature takes over, the edges, now in direct sunlight, become festooned with whatever happens to get there first and grows fast. And what could be more appropriate than a bunch of canes covered with wine-colored fuzz bearing sweet fruits?

References: 

1. https://npgsweb.ars-grin.gov/gringlobal/taxon/taxonomydetail?id=32416

2. https://plants.sc.egov.usda.gov/home/plantProfile?symbol=RUPH

3. “Wineberries”, Plant Conservation Alliance, Alien Plant Working Group, 20 May 2005.

4. Hedrick, U. “Multiplicity of Crops as a Means of Increasing the Future Food Supply” Science, Volume 40 Number 1035, 30 October 1914, pp 611-620.

5. Foster, T. et al “Genetic and genomic resources for Rubus breeding: a roadmap for the future” Horticulture Research, Volume 116, 15 October 2019 https://www.nature.com/articles/s41438-019-0199-2   

6. Innes, R.  Rubus phoenicolasius. In: Fire Effects Information System, [Online]. U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station, Fire Sciences Laboratory (Producer) 2009. https://www.fs.usda.gov/database/feis/plants/shrub/rubpho/all.html   

7. Swearingen, J., K. Reshetiloff, B. Slattery, and S. Zwicker  “Plant Invaders of Mid-Atlantic Natural Areas”. Invasive Plants of the Eastern United States. 2002 https://www.invasive.org/eastern/midatlantic/ruph.html   

8. Wilson, C. and Loomis, W. Botany, 4th Edition, Holt, Rinehart and Winston, New York, 1967, pp 285-304.

9. Innes op cit.

10. Carter, K. et al. “Target Capture Sequencing Unravels Rubus Evolution”, Frontiers in Plant Science, Volume 10, 20 December 2019, page 1615.

11. Elias, T. and Dykeman, P. Edible Wild Plants, A North American Field Guide, Sterling Publishing Company, New York, 1990, pp 178-185.

12. Bushway, L. et al, Raspberry and Blackberry Production Guide for the Northeast, Midwest, and Eastern Canada, Natural Resource, Agriculture, and Engineering Service (NRAES) Cooperative Extension, Ithaca, NY, May 2008. https://www.canr.msu.edu/foodsystems/uploads/files/Raspberry-and-Blackberry-Production-Guide.pdf

13. Native American Ethnobotany http://naeb.brit.org/uses/search/?string=rubus%20&page=1

Jimsonweed

Common Name: Jimsonweed, Jamestown weed, Thorn apple, Devil’s trumpet, Mad-apple, Stinkwort, Locoweed – The plant was named by the early settlers of the first permanent English colony in North America, established in 1607 and eponymously named for their sovereign. Jamestown weed became jimsonweed as an elision.

Scientific Name: Datura stramonium – The generic name is Hindi for a similar plant that grows on the Indian subcontinent and derives from the ancient Sanskrit word dhattura. The species name is a combination of the Greek strychnos and manikos, which translates roughly to “nightshade-mad.” [1] Nightshade is the common name of the plant family more formally called Solanaceae, and mad attests to the psychoactive effects that ingesting the plant induces.

Potpourri: Jimsonweed is more American than apple pie. It is named for the first port of entry established by the Virginia Company, inspired by (the virgin) Queen Elizabeth I and named for her successor King James VI of Scotland, who became England’s James I in 1603. The former was the last of the Tudors and the latter was the first of the Stuarts. The striking, stinking flower could hardly be ignored and quickly became one of the cynosures of the colony. Its medicinal properties were of major import in the steaming, swampy caldron of the tidewater coastal area ― it was valued for its “cooling” effect. It was surely one of the first of the New World plants coopted by the Europeans from their Indian neighbors as native herbals. Along with tobacco, which was promoted to “purgeth superfluous fleame (phlegm) and other gross humors” [2], it joined the other members of the Nightshade Family, inclusive of tomatoes and potatoes, in the reverse migration of plants back to Europe. Jimsonweed expanded with the population westward, becoming an agricultural nuisance plant. Its medicinal properties are now better understood, its unrestricted use tempered with caution. It is hallucinogenic in moderate doses and deadly in excess.

The trials and tribulations of Jamestown and the Virginia Colony in its early years are interwoven with its namesake weed. As tobacco-growing English settlers moved inland in the early 17th century, displaced Native Americans fought back with justifiable ferocity. Following a series of deadly raids along the Potomac River by the Susquehannocks in 1676, a young planter named Nathaniel Bacon led a group of settlers demanding that the royal governor, Sir William Berkeley, take action to prevent further bloodshed. Bacon’s Rebellion, seen by some as a herald of American independence a century later in prescribing universal suffrage, forced the royal governor to flee as his capital city of Jamestown was torched. The rebellion was eventually quashed by British troops sent by King Charles II, who had succeeded his father Charles I a decade after the latter was beheaded in the English Civil War, during which the Virginia colony had remained royalist. [3] Since every army marches on its stomach, the campaigning soldiers were no exception, and jimsonweed was on the menu. According to the historical record, jimsonweed was gathered for a boiled salad “and some of them ate plentifully of it.” The ensuing reverie in which “one would blow up a feather in the air; another would dart straws at it with much fury; and another stark naked was sitting up in a corner” required their confinement “lest they should in their folly destroy themselves.” After eleven days of “a thousand such simple tricks,” they regained their composure, “not remembering anything that had passed.” [4] Jimsonweed was a weed to be reckoned with.

Native Americans were masters of herbal medicines as a matter of survival in the eons preceding knowledge of or access to analgesics and antibiotics. The Rappahannock, one of the numerous tribes of Virginia, were well acquainted with jimsonweed, though certainly by another name. Decoctions of leaves were made into salves for the treatment of wounds and their incident inflammation and formed into poultices for fevers and pneumonias. The seeds and leaves were known to be poisonous, a knowledge shared with other major east coast tribal communities like the Iroquois. This was undoubtedly the result of trial and error by more than one individual in the distant past, the learned lore remembered. While local Indians could have warned the English soldiers in advance of their folly, it is more likely that they would have encouraged it, as relations were strained at best. The Cherokee, inland toward the Appalachians, smoked dried jimsonweed leaves as a treatment for asthma, which would seem to be at odds with reason. [5] This latter use, however, became one of the most popular treatments in Europe in the 19th century. Smoke from what was called stramonium (from the species name) was recommended by physicians all over the world. The noted French novelist Marcel Proust wrote to his mother that “I had an attack of asthma and incessant running at the nose, which obliged me to walk all doubled up and light anti-asthma cigarettes at every tobacconist’s I passed.” [6] There is every reason to believe this to have been effective, preceding modern inhalers that alleviate asthmatic symptoms.

Jimsonweed, once just a weed from Jamestown, became in time one of the global apothecary’s standard prescriptions as stramonium. The 16th-century English herbalist John Gerard wrote of the thornapple, a name that refers to the large, spiny fruits of jimsonweed, noting that its blossom was “offending to the head when it is smelled unto.” Juice from thornapples “boiled with hogs grease to the form of an unguent or salve, cures all inflammations whatsoever” and “doth most speedily cure new and fresh wounds.” [7] That this is similar in form and function to its uses among Amerindians lends credence to at least its vulnerary qualities. By the early 20th century, stramonium was included in most national Pharmacopoeias … the process to extract the medicinal compounds from jimsonweed leaves was specified in the United States Pharmacopeia. A yield of 0.35 percent of its alkaloids, noted for their “unpleasant narcotic odor and a bitter, nauseous taste,” was expected, with a collector able to sell the leaves for 2 to 5 cents a pound. In addition to dilation of the eye, a feature common to many Nightshade Family plants (especially belladonna, or beautiful woman, as eyes thus darkened were considered alluring), narcotic, diuretic, and anodyne uses were prescribed. As a validation of the common practice, “in asthma, they are frequently employed in the form of cigarettes which are smoked or the fumes are inhaled.” [8]

What to make of this unusual plant that cooled Jamestown’s summer heat, evoked outbursts of exuberance from staid British soldiers, and healed the bloody wounds of war? Datura stramonium almost surely evolved in the tropics of the Americas and made its way north and south into more temperate regions. [9] It achieved this feat with an egg-shaped seed capsule that is covered with prickles―a botanical oddity. [10] Since plants are sessile, they must employ external agents to disperse seed. One of the more successful ways of doing this is to grow a fruit that is both colorful and sweet, like an apple, to attract animals. Consumption of the fruit results in the deposition of the seeds, hardened against gastric acids, in a mound of excrement, an ideal fertilizer, at some distance from the parent. Spines or thorns have the opposite purpose. The sharp points that stick into sensitive mouth tissues are there to prevent animal ingestion. This is why they are normally found along stems in plants like roses and barberries. Jimsonweed evolved a novel solution. The seed pods burst open (a process technically called dehiscence) with enough force to eject the seeds up to 10 feet from the parent plant. If a stream happens to be within range, the seeds are buoyant and can stay afloat for over a week. Each plant can bear as many as 50 seed pods, ejecting over 30,000 seeds that are both consistent and persistent. In one field trial, over 90 percent of the seeds germinated almost 40 years after pod ejection. Once established, it can wreak havoc on crops, notably reducing crop yields of soybeans and tomatoes by up to 50 percent. [11] The plant from Jamestown is a serious weed.
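
The seed arithmetic alone explains the persistence. Here is a minimal back-of-the-envelope sketch in Python using the figures quoted above; the per-pod count is inferred rather than measured, and applying the 90 percent germination rate as a flat viability figure is an illustrative assumption:

    # Jimsonweed seed budget from the figures above.
    pods_per_plant = 50
    seeds_per_plant = 30_000
    seeds_per_pod = seeds_per_plant / pods_per_plant  # inferred: ~600 per pod
    viability = 0.90  # fraction still germinating decades later (field trial)

    viable_seeds = seeds_per_plant * viability
    print(f"{seeds_per_pod:.0f} seeds per pod, {viable_seeds:,.0f} viable seeds per plant")

One overlooked plant can thus stock a field’s seed bank with tens of thousands of viable seeds that outlast decades of otherwise diligent weeding.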

But that is only half of the story. Jimsonweed produces powerful chemicals that deter almost all herbivores from eating it, killing many that try … but most don’t, since the alkaloids that act as deterrents are distinctly bitter in taste. Bitter is one of the five nominal tastes sensed by most animals and indicates poison, just as sweet indicates nutrition, salty minerals, sour unripeness, and savory protein. The most common form of animal jimsonweed poisoning is contaminated hay as silage for cows and horses and contaminated grain fed to chickens. This occurs when harvesters fail to carefully inspect fields for infestation prior to threshing. [12] The most serious jimsonweed poisoning problem is humans, particularly juveniles for whom the lure of intoxication is unconstrained by the wisdom of years. The main culprit is atropine, mnemonically described by clinicians as having symptoms of being “blind as a bat, mad as a hatter, red as a beet, hot as a hare, dry as a bone, the bowel and bladder lose their tone, and the heart runs alone.” Since over 80 percent of all cases also involve hallucinations, the “tune in … turn on” experience is considered by some to be worth the risk; there were over 300 emergency room visits in 1993 alone. [13] A popular medicinal plant field guide provides the following dire warning concerning jimsonweed: “Violently toxic. Causes severe hallucinations. Many fatalities recorded.” The bold typeface is in the original.

The effects of jimsonweed are a matter of neuroscience. The circuitry that sends signals to trigger heartbeats and climb stairs relies on neurons that convey action impulse to muscle momentum. Neurons operate in sequences, with the axon of one neuron signaling to the dendrite of the next in line across a gap called a synapse. The signal is carried across the synapse by molecules called neurotransmitters. There are about twenty, including the well-known serotonin, which is linked to the management of anger, and dopamine, which signals pleasure. Acetylcholine (ACh) is not so well known, but it is perhaps the most important. It is defined as “the neurotransmitter at many synapses in the peripheral and central nervous systems, including the neuromuscular junction.” Atropine, the alkaloid produced by jimsonweed, disrupts the proper operation of acetylcholine. In the lexicon of pharmacology, it is an antagonist, blocking the receptors that ACh would otherwise activate. ACh is the primary neurotransmitter of the Autonomic Nervous System (ANS), which carries out most if not all of the unconscious signals that operate organs like pulsing hearts and breathing lungs. This includes the sympathetic nervous system, which is essentially crisis control central with its functions known suggestively as the four F’s (fight, flight, fright, and sex). ACh also operates to trigger the conscious operation of muscles, the essence of all bodily movement from walking to chewing. [15] Any disruption to the proper operation of acetylcholine is bound to have consequences, and jimsonweed delivers them. Mad-apple is an apt common name.

References:

1. Center for Agriculture and Bioscience Compendium https://www.cabidigitallibrary.org/doi/10.1079/cabicompendium.18006

2. Boorstin, D. The Americans, The Colonial Experience, The Easton Press, Norwalk, Connecticut, 1958. pp 209-210.

3. Mapp, A. Virginia Experiment, The Old Dominion’s Role in the Making of America 1607-1781, Lanham, Maryland, 1985, pp 119-172.

4. Beverly, R. The History of Virginia, in Four Parts, printed for F. Fayram and J. Clarke, London, 1722. https://www.gutenberg.org/cache/epub/32721/pg32721-images.html#Page_109

5. The Native American Ethnobotanical Database http://naeb.brit.org/uses/search/?string=datura        

6. Jackson, M. “‘Divine Stramonium’: The Rise and Fall of Smoking for Asthma”, Medical History, April 2010, pp 171–194. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2844275/

7. Gerard, J. Generall Historie of Plantes, London, 1597, pp 191-193 https://archive.org/details/herballorgeneral00gera/page/n5/mode/2up        

8. Henkel, A. “Jimson weed”. American Medicinal Leaves and Herbs. U.S. Government Printing Office. 1911.  p. 30.

9. “Datura stramonium”. Germplasm Resources Information Network (GRIN). Agricultural Research Service (ARS), USDA   https://npgsweb.ars-grin.gov/gringlobal/taxon/taxonomydetail?id=13323    

10. Niering, W.  and Olmstead, N. National Audubon Society Field Guide to North American Wildflowers. Alfred A. Knopf, New York, 1998, pp 802-803.

11. Michigan State University Department of Plant, Soil, and Microbial Sciences https://www.canr.msu.edu/weeds/extension/jimsonweed    

12. Cornell University Weed Identification for New York State https://blogs.cornell.edu/weedid/jimsonweed/

13. Arnett A. “Jimson Weed (Datura stramonium) poisoning”. Clinical Toxicology Review. December 1995. Volume 18 Number 3. https://www.erowid.org/plants/datura/datura_info5.shtml  

14. Duke, J. and Foster, S. A Field Guide to Medicinal Plants and Herbs, Houghton Mifflin Company, Boston, 2000, p 205.

15. Bear, M., Connors, B., and Paradiso, M. Neuroscience, Exploring the Brain, 4th Edition, Wolters Kluwer, Philadelphia, 2016.

Japanese Beetle

There is no mistaking the brown and green wing covers of the Japanese beetle.

Common Name: Japanese Beetle – Unlike many animals and plants broadly referred to as Asian in origin, there is no doubt that this beetle was inadvertently introduced from Japan to the United States, where it spread to become an agricultural juggernaut.

Scientific Name: Popillia japonica – The genus is based on a well-established Roman surname. Marcus Popillius Laenas was consul, one of the Roman Republic’s two top magistrates, noted for his defeat of the Gauls in 359 BCE. He was the first of a long line of distinguished Roman leaders named Popillius. There is no known connection between any of these descendants and beetles. The species name establishes geographic origin in Japan.

Potpourri: The Japanese Beetle is a case study in the invasive behavior of an alien species in the life and times of the twentieth century. Its clandestine point of entry in August 1916 was New Jersey, in the form of beetle larvae ensconced in iris rhizomes imported from Japan as horticultural garden center offerings. [1] Spreading at a rate of about 10 miles per year, the shiny green and brown scourge has ravaged planthood in the eastern half of North America for over a century. The root-munching grubs eat voraciously through turf all summer long, despoiling large swaths of lawns, or, if used to hit balls and then go find them, golf courses, only to become adults after wintering over six inches deep. What follows in spring after pupation is a two-month feeding and mating frenzy culminating in the turf deposition of some 50 eggs per female to sow the seeds for Malthusian beetle populations. With annual agricultural losses estimated at half a billion dollars, they have spawned a whole industry of eradication and control.

Beetles are by some measures the most successful of earth’s inhabitants. With more than 300,000 species worldwide, they comprise about one fourth of all described animals―a thousand beetles for every primate. This is in part due to an “intelligent” design. The Order Coleoptera to which they are assigned is literally Greek for ‘sheath wings,’ describing their key taxonomic anatomical similarity. The hardened, chitinous front wings encase the more delicate rear wings with an armored barrier similar in form and function to a box turtle’s carapace, protecting the beetle from many an unwelcome intruder. These encapsulating forewings, called elytra (the singular elytron also means ‘sheath’ in Greek), unfold with an elaborate linkage of struts and elbows to release the diaphanous rear wings for flight. The beetle, a six-legged biological version of the bipedal transformer toy, thus converts from a stolid, tank-like ground vehicle into a clumsy but functional airfoil to find food, to find a mate, to escape emergent threats, or simply to gad about on summer days. [2] The aphorism that the Creator must have had an inordinate fondness for beetles because he made so many of them is frequently attributed to Charles Darwin. The more likely source is the British biologist J.B.S. Haldane, who wrote that “the Creator would appear as endowed with a passion for stars, on the one hand, and for beetles on the other for the simple reason that there are nearly 300,000 species of beetle known, and perhaps more …” [3] The versatility and resilience of beetles is notable, divine or otherwise.

Japanese beetles are in the family Scarabaeidae, usually referred to simply as scarabs, which comprise one tenth of all beetle species (a mere one hundred per primate). The historical importance of the scarabs is evident in nomenclature. Scarabaeus is Latin for beetle, which probably came from the Greek karabos, meaning horned beetle, with good reason. According to the dated but enduring Linnaean taxonomy, scarabs are distinguished in having the last 3 to 7 sections of their 10-segmented antennae formed into a lamellate or plate-like club; lamellicorn beetle is an alternative name. The notoriety of horn-beetle scarabs is due in part to their relatively large size and, in many cases, “outgrowths on head and thorax” that “produce bizarre forms.” [4] But the more surprising scarab origin story is central to Egyptian mythology. Khepri was one of the names for their Sun god (along with Ra, Atum and Horus), a cognomen taken directly from kheprer, the Egyptian name for the dung-beetle. Many scarabs feed on animal feces and other decaying matter as a nutritional niche. The dung beetle carries this one step further, molding the semi-solid stool into balls that can be rolled along the ground and deposited into a purpose-built hole. Here the eggs are laid so that hatched larvae will be provisioned with their first feast. The Egyptian holy men interpreted the dung ball as representing the sun being pushed into the “Other World” at dusk and back over the horizon at dawn. Thus the scarab amulet, a signature Egyptian embellishment and adornment, symbolized “the renewal of life and the idea of eternal existence.” [5] The transubstantiation of the bread and wine of communion into the body and blood of the Christian deity, then consumed as a sacrament, is no less outré.

While dung is not on the menu for the Japanese Beetle, just about everything else is, earning it the distinction of being considered polyphytophagous, Greek for “many plant eating.”  While roses and fruit trees are its most notorious targets, the beetle smorgasbord includes at least 435 identified species from 95 families including garden and field crops, ornamental shrubs, and shade trees. The choice of one plant over another is related at least in part to scent.  Research has demonstrated that the phytochemicals eugenol and geraniol are particularly attractive―the fact that roses contain both provides some empirical validation. Exacerbating the beetle invasion problem (beetlemania?) is their tendency to congregate on one plant, creating a writhing mass of coruscating green and brown. Field testing has revealed that twice as many beetles alight to join a party in progress, eschewing adjacent plants of the same species for no apparent reason. Both the quality and quantity of the meal must surely suffer as communality prevails. With a preference for plants in direct sunlight, the banquet starts at the top, stripping the foliage downward by eating between the leaf veins, leaving characteristic lacelike skeletons as remnants. In many cases, the plant is left totally defoliated and dies as a result. In one field test 2,745,600 beetles were collected from 156 peach trees … an average of 17,600 per tree. As half of that population would be female, the ensuing egg deposition in nearby fields would result in a veritable contagion of larval grubs, eating away at the roots of the ecosystem to the detriment of both field and forest. [6] The scourge of the Japanese beetle to an environment unprotected by native predators can be apocalyptic.

Beetle mating mania

Evolutionary success for any animal species requires a minimum of two surviving adults to replace each gravid female. In insects, this is achieved predominantly by depositing large caches of fertilized eggs that hatch to larvae and pupate to adults mating in sufficient numbers to establish perpetuity. Japanese beetles evolved to survive predation and attrition in their native habitat, primarily the grasslands of northern Honshu and the whole of Hokkaido. In the United States they are unchecked, and their sexual drive to survive has produced exorbitant dividends. Male beetles are equipped with a penis-like aedeagus to inject cyst-encapsulated spermatozoa into the female vagina. The instinctual male mating mandate is triggered by the pheromones of emergent virgin females; the males descend en masse, forming large clusters called “beetle balls.” One experiment using females in a trap collected almost three thousand males in one hour. Mating attempts persist throughout month-long adult lives. Coitus occurs primarily on leafy foliage that doubles as dining room and can last for several hours. Speaking of balls, one male was observed mating with seven different females in a single day and another was observed mating with at least two different females over five consecutive days. Females take periodic breaks from the action to dig about three inches into the soil to lay several eggs, only to return to remate and repeat, ultimately laying about fifty. [7] A population bomb nonpareil.
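
The arithmetic of that bomb can be sketched in a few lines of Python. The inputs come from the paragraph above (about 50 eggs per female, roughly half of them daughters); one generation per year and, for illustration only, zero mortality are assumptions:

    # Idealized Japanese beetle cohort growth: ~50 eggs per female,
    # roughly half hatching as daughters, one generation per year.
    # Mortality is ignored purely to show the reproductive ceiling.
    daughters_per_female = 50 / 2

    females = 1
    for year in range(1, 6):
        females *= daughters_per_female
        print(f"year {year}: {females:,.0f} descendant females")

A single founder female could in principle account for nearly ten million descendant females in five years; real populations are held below that ceiling only by the predation and attrition that the beetle left behind in Japan.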

The exploding growth of Japanese beetles was noted within two years of their initial introduction in a nurseryman’s refuse pile in Burlington County, New Jersey, in 1916. By 1920, 1,000 quarts of beetles were collected in one half square mile, and two years later the area had expanded to six square miles. In 1923, when the range had surpassed 700 square miles and extended into Pennsylvania, the clarion call was sounded at the national level. The USDA dispatched scientists to Japan to search for predators and began evaluating pesticides for control and remediation. [8] But it was too little, too late, and by 1970 the range had reached at least 150,000 square miles and extended over 14 states. Despite extensive efforts to stem the tide, it is now established in 30 states. While it was long thought that the Rocky Mountains and the Great Basin would present an impenetrable barrier to their westward migration, Japanese beetles have recently made landfall in the Pacific Northwest. It is postulated that adult beetles hitched a ride on an airplane or that larvae arrived surreptitiously in the root soil of imported plants. [9] The economic costs have grown accordingly. The Japanese beetle larva is the worst turf-grass pest in the United States; control costs are estimated at $460 million annually. This estimate is not inclusive of crop damage and the devastation of ornamental shrubs like rose bushes. While this is hardly chump change, it pales in comparison to the annual costs of invasive species, which are on the order of $20 billion. The highest invasive species costs are attributed to mammals, primarily due to rodent crop damage. Plants are next, due to aggressive invaders like Amur honeysuckle. Insects place third, led by the red imported fire ant (or RIFA) of the Southwest, with an annual cost of $1.5 billion. [10] The Japanese beetle has the distinction of being one of the first invaders and one of the most visible if not the most costly. It can only get worse with a warmer climate.

In an attempt to mitigate some of the economic and aesthetic damage, farmers and homeowners usually start with chemical warfare, primarily with pesticides based on permethrin and carbaryl. The former is known as one of the best deterrents to ticks when applied to clothing for hikers and soldiers, but the latter is more widely used because it is cheaper. The term pesticide is a euphemism that masks the insidious effects of these chemicals when widely applied to farm fields and home gardens: the extermination of “pest” species like Japanese beetles also eliminates beneficial insects like butterflies and bees. The insect Armageddon of the last several decades is an unsettling result, due in no small part to its food chain effect; many birds rely on bugs for protein. There are certainly eco-friendly alternatives based on botanicals, but they are for the most part deterrents that only last for several days. The main effect is to shunt the beetles temporarily to another location, like your neighbor’s garden. A second line of defense utilizes Japanese beetle traps that emit vapors made from a combination of virgin female pheromones and a treacly blend of fruits. The problem is that the traps are much more effective at attracting beetles (especially males) than they are at capturing them. The end result follows the law of unintended consequences: more traps, more beetles. [11]

The obvious but complicated alternative is biological control. Difficulties arise not only in the identification of the appropriate control organism but also in ensuring that the cure does not become a curse. It is a factual matter that invasive species come from somewhere where they are not invasive … held in check by their native evolved ecology. While the first step is to scour home turf for potential predator imports, an assessment of viability in the new environment is equally mandatory. Among the notable failed biological control attempts was the introduction of mongooses to Hawaii to kill crop-eating rats. The diurnal mongooses never hunted the nocturnal rats, decimating the bird population instead. In the case of beetles, the task is not as onerous, since many wasps are masters of insect parasitism and, not infrequently, one species of wasp specializes in one species of beetle. The Spring Tiphiid Wasp (Tiphia vernalis) was introduced to North America in the 1920s for its known parasitism of Japanese beetles. As one of nature’s more insidious predators, the female wasp burrows into the soil to locate a beetle grub, paralyzes it with a sting, and lays an egg that hatches to a larva that feeds on the now-immobilized host. While effective, the tiphiid wasps alone have failed to check the Japanese beetle onslaught, and other controls have been identified. The Winsome fly (Istocheta aldrichi) was also imported from Japan as a control vector. It deposits eggs on the thorax of adult female beetles which hatch to maggots that burrow under the outer wing covers to consume the softer body parts. There are also insect-eating nematodes and several types of bacteria that are employed in the never-ending battle to thwart the Japanese beetle invasion. But so far, it is at best a standoff. [12]

The impracticality of eradicating an invasive species like the Japanese beetle renders damage control the only feasible alternative. The ounce-of-prevention approach is to establish protocols that halt the human-assisted migration of beetles from an infested part of the country to new territory. Nine western states have signed on to the USDA Animal and Plant Health Inspection Service (APHIS) Plant Protection and Quarantine (PPQ) program to monitor Japanese beetle populations and stop migration. Airports are assessed for local beetle populations and aircraft are treated to minimize the chances for the spread from infested areas to the protected states. [13] While this will lower the risk, it will not eliminate it. With some irony, it has been pointed out that, for all the human chemical, control, and programmatic efforts, the Japanese beetle has outsmarted us. Therefore, the first rule of Japanese beetle control is that you can’t control Japanese beetles. It is possible to reduce the damage by using chemical sprays selectively on their favorite plants, like roses, killing enough to prevent their spread to other plants, a process called trap cropping. Another possibility is to encourage limited growth of plant invasives such as multiflora rose and Japanese knotweed that Japanese beetles demonstrably prefer. But be ever mindful of who is in charge. The final rule of Japanese beetle control is that they will “seek revenge for their dead relatives.” [14]

References:

1. Milne, L. and Milne, M. National Audubon Field Guide to North American Insects and Spiders, Alfred A. Knopf, New York, 1980, pp 561-562

2. Marshall, S. Insects, Their Natural History and Diversity, Firefly Books, Buffalo, New York, 2006, pp 257-258.

3. Haldane, J.B.S. What is life? The Layman's View of Nature, L. Drummond, London, 1949, p 258.

4. Gressitt, J. "Coleoptera", Encyclopedia Britannica, 15th edition, Macropedia Volume 4, William Benton Publisher, Chicago, Illinois, pp 828-837.

5. Viaud, J. “Egyptian Mythology” New Larousse Encyclopedia of Mythology, Hamilton Publishing Group, Ltd. London, 1973, pp 9-43.

6. Fleming, W. “Biology of the Japanese Beetle” USDA Technical Bulletin Number 1449, July 1972. https://naldc.nal.usda.gov/download/CAT87201410/pdf     

7. Gyeltshen, J. et al “Japanese Beetle” University of Florida. https://entnemdept.ufl.edu/creatures/orn/beetles/japanese_beetle.htm     

8. “Japanese Beetle Ravages”, Reading Eagle Newspaper Article 22 July 1923 extracted from New York Herald.

9. Betts, A. “Japanese beetle count passes 20,000” Washington State Department of Agriculture Ag Briefs. 3 September 2021. https://wastatedeptag.blogspot.com/2021/09/japanese-beetle-count-passes-20000.html     

10. Fantle-Lepczyk, J. et al “Economic costs of biological invasions in the United States” Science of the Total Environment, Volume 806, Part 3, 1 February 2022. https://www.sciencedirect.com/science/article/pii/S0048969721063968?via%3Dihub   

11. Potter, D. et al “Japanese Beetles in the Urban Landscape” University of Kentucky College of Agriculture, Food, and Environment Entomology Department. https://entomology.ca.uky.edu/ef451

12. "Managing the Japanese Beetle: A Homeowner's Handbook". USDA, Washington, DC. https://www.aphis.usda.gov/plant_health/plant_pest_info/jb/downloads/JBhandbook.pdf

13. USDA Animal and Plant Health Inspection Service (APHIS) Japanese Beetle Handbook https://www.aphis.usda.gov/import_export/plants/manuals/domestic/downloads/japanese_beetle.pdf

14. Gillman, J.  “Disney and Japanese Beetles”. Washington State University, 18 March 2010

Rosebay Rhododendron

Common Name: Rosebay Rhododendron, Rosebay, Great-laurel – Rosebay is used as a descriptive name for several plants with characteristic rose-like blossoms. Rhododendron is one of the few plants whose genus name also serves as its common name.

Scientific Name: Rhododendron maximum – The genus is a combination of the Greek words rose (rhodon) and tree (dendron), an indication of its well-established association with civilizations in antiquity, "rose tree" being an apt description. Maximum is a widely used term, especially in mathematics, as an adjective to convey the largest in size or quantity. It is derived from the Latin magnus, meaning great or big, which, for a rhododendron, it is.

Potpourri: The lush, dense thickets of rhododendron that dominate the understory of upland elevations are testimony to an evolutionary path that produced a competitive combination of successful traits. It is one of the relatively few broad-leaved flowering plants of the angiosperm (enclosed seed) clade that are evergreen, retaining foliage year-round like the largely needle-leaved gymnosperm (naked seed) clade. The prominent rose-like blossoms that extend from the end of nearly every branch are a bouquet to attract pollinators, mostly bees, that flit from one to the other collecting nectar and pollen. With successful fertilization, an elongated, egg-shaped fruit with five cells splits open to release thousands of seeds that scatter to extend the grove ever outward. The lack of any evidence of insect damage or animal browse is a matter of chemistry. Rhododendrons, like many other members of the Ericaceae or Heath family, evolved strong chemicals to deter predation. Most animals give it a wide berth; deer supposedly are able to browse without harm, but there is little evidence that they do so regularly. Rhododendron leaves can be fatally toxic to cattle and sheep. [1]

There are over one thousand species of the Rhododendron genus that extend globally across the temperate climates of the northern hemisphere. Based on fossilized pollen found in strata dating from near the end of the Cretaceous Period and fossil leaves from the beginning of the Tertiary Period, it is postulated that rhododendrons first appeared in southeastern Asia about 50 million years ago, after the breakup of Pangaea. Speciation spread globally across a wide band of latitude during the pre-glacial epochs when a warmer climate prevailed. It is probable that the subsequent glacial cooling cycles of the current Quaternary Period resulted in the isolation of rhododendron populations in remote mountainous regions, just as Balsam Firs are isolated in elevated areas of the Appalachians. This explains both the rich diversity found on the slopes of deep valleys in southeast Asia, in a band extending from just east of the Himalayas through the Malaysian archipelago, and the isolated single-species rhododendron outposts in Japan, the Appalachian Mountains, and the Caucasus region of eastern Europe. [2]

While "mad honey" may also evoke marital discord, it has historical rhododendron relevance. Mesopotamia, the land between the Tigris and Euphrates rivers, is where western civilization arose from Neolithic farm villages that planted the first tentative crops. Rhododendrons had spread across the Anatolian peninsula that is now Türkiye from their epicenter in the Caucasus, attracting swarms of honeybees. In 400 BCE, the soldier of fortune Xenophon led his mercenary Greek army, eulogized as the "ten thousand," on a forced march of 1500 kilometers westward through the hostile territory of present-day Kurdistan and Armenia to the Black Sea. Lacking adequate provisions, they lived off the land, raiding bee hives for honey. As Xenophon later recorded in his chronicle Anabasis, "the soldiers who ate the honey went off their heads, and suffered from vomiting and diarrhea … so they lay there in great numbers as though the army had suffered a defeat, and great despondency prevailed." [3] While no one died, the debilitating effects of rhododendron honey were put to nefarious use near this same location three centuries later.

Mithridates Eupator became the ruler of Pontus in 115 BCE when his mother, who had tried to kill him as a youth, was deposed in a coup d'état. To protect himself against the conspiracies inherent to governance of that era, he followed a regimen of microdoses of poison to acquire immunity over time, becoming an expert on toxins and their antidotes. Uniting the diverse population of Greeks, Persians, and Thracians along the northern tier of Asia Minor, he became a serious rival to the Romans encroaching ever eastward. In the First Mithridatic War (88-84 BCE), his navy of 400 ships and army of 290,000 took over the Black Sea and the Greek cities on its banks, putting an end to the flow of tribute money to Rome and nearly bankrupting its economy. [4] The Romans rallied in two ensuing wars that eventually drove Mithridates from power as the last major eastern threat to their burgeoning empire, but not before he had tricked them on at least one occasion with the mad honey of the rhododendron. In 67 BCE, the Roman general Pompey was advancing eastward along the Black Sea coast near Trabzon to engage the Pontic forces. Mithridates, employing his mastery of poisons, placed bee hives in clay pots along their route. Three squadrons of Roman soldiers succumbed and were slaughtered in their stupor. In spite of this tactical success, the forces of Rome eventually prevailed, and Mithridates was deposed and exiled to Crimea, where he was stabbed to death by the agents of his son since poisoning was not an option. [5] The genus Eupatorium, which includes the poisonous white snakeroot and the medicinal boneset, is named for Mithridates Eupator in recognition of his contribution to toxicology.

Honey from rosebay rhododendron in North America is neither as common nor as virulent as that of the legendary Caucasian rhododendron of Xenophon and Mithridates. Nonetheless, the mad honey trope persists. In 1801, an account of rhododendron honey inducing nausea, muscle spasms, and blurred vision was published in the Transactions of the American Philosophical Society. [6] A report in the most venerable scholarly publication in the Americas (established in 1771 when the states yet to be united were still colonies) affords some credence to this assertion. However, there is little evidence of any significant incidence of what is sometimes euphemistically called "honey intoxication" in North America. There are several reasons for this. R. maximum is neither as toxic as R. ponticum, the plant eponymously named for Mithridates' Pontus homeland, nor as widely dispersed. There is also the fact that honey bees are indigenous to Europe as native pollinators for many wild plants. They were introduced to the Americas for crop pollination and are largely relegated to that role, even as some have become naturalized. The few reports of mad honey illnesses in the US are at least in part attributable to an alternative medicine herbal treatment that links "sexual performance enhancement" to the consumption of bespoke, beekeeper-induced, rhododendron-mad honey. Of twenty-one honey-related emergency room visits due to symptoms that included dizziness, nausea, vomiting, and syncope (loss of consciousness due to low blood pressure), most were men of middle age who sought to regain virility [7]―another good reason to call it mad honey.

Rhododendron maximum produces a poison named grayanotoxin, which has been and sometimes still is referred to as andromedotoxin, acetylandromedol, or rhodotoxin (from the genus). While concentrated in honey, it also permeates the leaves and flowers. The name comes from grayana, the epithet of the Asian Heath Family species from which the toxin was first extracted and analyzed; the epithet in turn honors the American botanist Asa Gray, who supported Darwin's work with the observation that many plants in eastern North America were similar to those of east Asia (like rhododendron), indicating similar evolutionary progressions. Grayanotoxin interferes with the operation of neurons by disrupting "voltage-gated sodium channels." The effect is that the neurons that carry the signals from one part of the body to another that make everything happen … from the beating of the heart to the thinking of the brain … can no longer do so in the prescribed order with proper timing. [8] The mechanism employed by neurons to carry out their quintessential task is electrochemical. Electrical signals travel along the neuron from the dendrites at one end to the axon at the other, where they pass over a gap called the synapse to the next neuron in the sequence, with sodium ions providing the transport along the way. This is the main reason that electrolytes (ionic fluids) are so important and that hyponatremia (low sodium) can be fatal. It has long been held most likely that this ionic neural mechanism was a random (Darwinian) mutation that evolved only once and, owing to its sensory and mobility efficacy, was replicated in every animal ever since. However, it may be much more complex than that, as sea sponges, which have no neurons, and comb jellies, which do, have DNA similarities. [9] The details of evolution are still evolving.

The effects of rhododendron grayanotoxin poisoning are what one might expect, considering the disruption of nerve function as its cause. Dizziness, confusion, and blurred vision are sure to follow a diminution of neuron signaling in the brain. Likewise, insidious side effects take their toll on autonomic systems; the heart beats more slowly and blood pressure can drop enough to induce a loss of consciousness. Since nerves do everything, a panoply of effects has been reported, ranging from numbness around the mouth and excessive salivation to vomiting and diarrhea. Since humans don't as a rule eat leaves and flowers, most reported human health effects concern the consumption of toxic honey, which is brown and bitter rather than golden and sweet. Since bitterness is a taste sensation that evolved to protect against inadvertently consuming poisons, it is unclear why anyone would eat tainted honey in the first place (excepting virility, which trumps reason). However, cattle, sheep, goats, and donkeys do eat rhododendron leaves and consequently fall victim to its poison. The toxic dose for cows is 0.2 percent of body weight (about one kilogram), with symptoms appearing about three hours later that last for several days. Fatalities are not uncommon, in part due to the ruminating mastication of cows; chewing toxic cud can only release more poison. Domestic cats and dogs will on occasion consume the azalea type of rhododendron that is widely planted in gardens; the characteristic symptoms of gastrointestinal distress result. [10]
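
To make the dose arithmetic concrete, here is a minimal sketch in Python; the 500-kilogram cow is an illustrative assumption, not a figure from the sources cited:

```python
# Toxic dose of rhododendron leaves for cattle per the figure above:
# 0.2 percent of body weight. The 500 kg body weight is an assumed
# round number for a typical cow.
body_weight_kg = 500
toxic_fraction = 0.002                    # 0.2 percent

toxic_dose_kg = body_weight_kg * toxic_fraction
print(f"toxic dose: {toxic_dose_kg} kg")  # 1.0 kg, the "about one kilogram"
```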

Plants create toxic chemicals for a reason – usually to deter animal predation. Heath Family plants are no exception. Grayanotoxin likely was an evolutionary mutation that kept herbivorous animals at bay, which it does. In some cases, a priori plant chemical defenses can be co-opted by humans to take advantage of their toxicity. This is especially true when a plant (or fungus) has evolved to ward off microbes or bacteria that are equally threats to the health of humans, becoming an antibiotic. In that grayanotoxin acts to disrupt neural activity, it would seem an unlikely candidate for medicinal use owing to its profound, disturbing effects. However, there is ample evidence that it was used by Native Americans for a variety of applications. [11] The Cherokee used it both as an external poultice for rheumatic pain and as a treatment for skin abrasions. This may merely have been a placebo that was thought to work, so it did. The rhododendron was apparently also used for various purposes having nothing to do with health, such as to "throw clumps of leaves into a fire and dance around it to bring cold weather." [12] It is also reported that Native Americans made a tea from the leaves that was "taken internally in controlled doses for heart ailments." The same guide notes "leaves toxic, ingestion may cause convulsions and coma." [13] There has been some recent research concerning the use of rhododendron compounds for specific ailments. For example, diabetic rats treated with grayanotoxin produced more insulin, presumably due to some form of nerve stimulation. All things considered, it is probably best to avoid it altogether, in spite of any number of herbal remedies containing rhododendron extract that supposedly produce salubrious effects. [14]

Heath Family shrubs (Ericaceae) are masters of their chosen environments, which include the understory of trees at higher elevations and craggy berry bogs. They have help in the form of specialized fungal partners that envelop their roots, providing soil nutrients like phosphorus and nitrogen in exchange for the sugars generated by photosynthesis. This relationship is called mycorrhizal, derived from the Greek words for fungus (mykes) and root (rhiza), literally "fungus root." While almost all (~90 percent of) plants have mycorrhizal fungal partners, most are either in the form of fungal sheaths surrounding the outside/ecto of the root (ectomycorrhizal – mostly trees) or fungal branches that penetrate into/endo root cells (endomycorrhizal – other plants) to form little tree-like structures called arbuscules. Ericoid mycorrhizas combine the two forms in that they both surround the roots and penetrate the cells, making the exchange even more efficacious. It is now well established that trees and shrubs (like rhododendron) share and balance nutrients to maintain a healthy ecosystem through their interconnecting fungal-root networks; facetiously, the "wood wide web." [15] The effectiveness of the outer and inner "ectendomycorrhizas" of heaths in promoting interconnected communities is such that they can and do completely take over a habitat. This can be a problem when rhododendrons are introduced to non-native environments. For example, Rhododendron ponticum was introduced to the UK from Iberia in 1763 and has spread to crowd out native trees, covering over three percent of all woodlands. Once established, it is almost impossible to extirpate. [16]

Rhododendrons, in spite of invasive tendencies in some regions, are among the most popular horticultural plants. Rhododendron is the most diverse genus of the Heath Family, with more than a thousand identified species. There is a Global Conservation Consortium for Rhododendron that seeks to promote and protect all species from extinction. Their relevance to ecosystems is of particular importance where they "underpin livelihoods in regions where they protect watersheds and stabilize steep mountain slopes in the areas where some of the most significant river systems in Asia begin." [17] The rhododendron collection at the renowned Royal Botanic Gardens at Kew is among its most cherished, with over 3,000 specimens of which 300 species are threatened with extinction. They were in many cases discovered, named, bred, and donated by the generation of British plant hunters that plied the globe during the nineteenth century. [18] So far as is known, none of them were affected by mad honey, their virility apparently well established.

A near-impenetrable stand of rhododendron crowds out all other vegetation

References:

1. Brown, R. and Brown, M. Woody Plants of Maryland, Port City Press, Baltimore, Maryland, 1999, pp 247-254.

2. Irving, E. and Hebda, R.  “Concerning the Origin and Distribution of Rhododendrons”. Journal of the American Rhododendron Society. 1993 Volume 47 Number 3.

3. Xenophon. “4.8.19–21”. In Brownson CL (ed.). Anabasis. Perseus Hopper. Department of Classics, Tufts University. https://www.perseus.tufts.edu/hopper/text?doc=Xen.%20Anab.%204.8&lang=original

4. Durant, W. Caesar and Christ, The Story of Civilization Volume 3, Simon and Schuster, New York, 1944, pp 516-519.

5. Lane R. and Borzelleca J. “Harming and Helping Through Time: The History of Toxicology”. In Hayes AW (ed.). Principles and methods of toxicology (5th ed.). 2007, Boca Raton: Taylor & Francis.

6. Harris, M. Botanica North America, Harper-Collins, New York, 2003, pp 60-61

7. Demircan A. et al. “Mad honey sex: therapeutic misadventures from an ancient biological weapon”. Annals of Emergency Medicine. 15 August 2009 Volume 54 Number 6 pp 824–829

8. “Grayanotoxins”  Bad Bug Book: Handbook of foodborne pathogenic microorganisms and natural toxins (2nd ed.). Food and Drug Administration. 2012. https://www.fda.gov/media/83271/download   

9. Dunn, C. "Neurons that connect without synapses". Science, 21 April 2023, Volume 380, Issue 6642, pp 241, 293.

10. Jansen, S. et al. "Grayanotoxin poisoning: 'mad honey disease' and beyond". Cardiovascular Toxicology, 19 April 2012, Volume 12 Number 3, pp 208–215.

11. Popescu, R. and Kopp, B. "The genus Rhododendron: an ethnopharmacological and toxicological review". Journal of Ethnopharmacology, 2 May 2013, Volume 147 Number 1, pp 42–62.

12. Ethnobotany database at http://naeb.brit.org/uses/search/?string=rhododendron

13. Duke, J. and Foster, S. Medicinal Plants and Herbs, Houghton-Mifflin, Boston 2000, p. 260.  

14. Jansen, op. cit.

15. Kendrick, B. The Fifth Kingdom, Third Edition, Focus Publishing, Newburyport, Massachusetts, 2000, pp 257-278.

16. Simons, P. “A spectacular thug is out of control”. The Guardian. 16 April 2017

17. https://www.globalconservationconsortia.org/gcc/rhododendron/  

18. https://www.kew.org/

Starling

Common Name: Starling, European Starling, Common Starling – The vocal, gregarious songbird extended across broad swaths of Eurasia even as the Indo-European language groups were differentiating. The Old English stærlinc was probably derived from stearn, a type of tern. The similarity to the Old German stara and the Prussian starnite is indicative of a pan-European origin without any meaning beyond that of the well-known bird.

Scientific Name: Sturnus vulgaris – The Latin name for the starling is sturnus, with similar Indo-European origins. Vulgaris means "common" in Latin, as the related epithet vulgar suggests.

Potpourri: The European or common starling was intentionally introduced to North America in the nineteenth century as part of a cultural movement that sought to ameliorate habitats from both an aesthetic and a practical perspective. This practice extended to medicinal plants and herbs like coltsfoot and plantain but was expressly focused on birds. The starling, noted for its ravenous consumption of insects, was considered a boon to farmers in the extirpation of crop pests prior to the adoption of chemical pesticides in the middle of the last century. It was also considered a cultural icon in Europe for its prodigious and varied song, frequently mimicking other birds and, as a pet, human speech. What's not to like? The starling has thrived to the extent that it has become a problem on a scale comparable to pigeons in the park and Canada geese on the golf course. Bird as pest is a contradiction in terms. While society bemoans the loss of birds to glass buildings and wind farms, urban jurisdictions must manage huge starling flocks with acres of droppings and rural agronomists must account for purloined produce. It is a complicated story that begins in New York City's Central Park.

The hackneyed version of the starling invasion blames a wealthy patrician from Manhattan named Eugene Schieffelin, who had made his money in drugs, presumably legal. As an amateur ornithologist, he became a member of the American Acclimatization Society with the stated goal of introducing every one of the 600 avian species included in the copious works of William Shakespeare. To that end, Schieffelin released approximately 100 starlings in Central Park between 1890 and 1891. This initial introduction incontrovertibly resulted in the 200 million starlings now flocking from coast to coast, wreaking havoc on harvests and despoiling city streets. Accounts typically include a passage from Shakespeare's Henry IV in which a bothersome rebel named Hotspur proposes to disturb the king's sleep by teaching a starling to say the name "Mortimer," an earl Henry distrusted (Henry IV, Part I, act 1, scene 3). [1] The account of Schieffelin's starlings is usually trundled out to lambast the arrogance and ignorance of the powerful elite of the past in instigating the environmental disasters of the present.

Histories that fail to account for the culture and knowledge of their time and place are sophistry. The Schieffelin account is true so far as the act of starling release but widely misses the mark as to motivation and expectation. The exchange of flora and fauna between Eurasia and the Americas had been going on for over four hundred years by 1890, sometimes intentional and beneficial but frequently happenstance and harmful. Horses, wheat, and cattle were introduced by colonists for work, transport, and food. Influenza, smallpox, and diphtheria stealthily disembarked, decimating native populations. In return, turkeys, potatoes, and tobacco offered new and exotic tastes and temptations to the Old World. Syphilis was purportedly carried back to Spain by Columbus's sailors and spread throughout Europe as the "French Disease." [2] By the nineteenth century, global integration had run its course with largely benign results.

The acclimatization movement arose in France in the 1850s as an idea proposed by the naturalist Isidore Geoffroy Saint-Hilaire. The introduction of new species from one continent to another in order to better understand their adaptation to new environments was one of its primary enterprises. The American Acclimatization Society was organized in New York in the 1860s with a more nuanced goal of improving beauty and diversity with an emphasis on birds. In 1877, a Mr. Conklin of the Central Park Museum reported at a meeting of the society that the commissioners of Central Park had released 50 pairs of English sparrows and that they had "multiplied amazingly." They also freed some starlings because these birds were "useful to the farmer and contributed to the beauty of the groves and fields." [3] This was just one of numerous attempts on both coasts to acclimatize the starling to the New World.

Problems with species introduced to a new region absent the checks and balances of native predation and other environmental limits first became manifest in the late nineteenth century. In 1886, Clinton Merriam, the first Chief of the USDA Division of Ornithology and Mammalogy, warned of the damage to grain, seed, and vegetable crops caused by the importation of harmful birds (notably English sparrows) and mammals (notably European rabbits). Ten years later, Theodore Palmer, the Assistant Chief of the USDA Biological Survey, advocated for federal legislation because "the animals and birds which have thus far become most troublesome when introduced into foreign lands are nearly all natives of the Old World," specifically calling out the European starling for crowding out benign insectivorous native birds in addition to eating farmed crops as food. The Lacey Act of 1900 was the first major Federal legislation concerning wildlife management, named for its originator, a representative of the farmers of Iowa. Introducing the term "injurious" as a type of animal behavior, its intent was to "regulate the introduction of American or foreign birds or animals in localities where they have not heretofore existed." [4] It is still in force to this day; invasive has supplanted injurious as the pejorative of choice.

What about Shakespeare? Schieffelin's contribution to starling scatology would have escaped notice altogether had he not been named as perpetrator by Frank Chapman, a preeminent American ornithologist who initiated Audubon Magazine and the annual Christmas bird count. During his long career at New York's American Museum of Natural History, he came to know Schieffelin, who would periodically stop by to check on the status of starlings. In the seminal 1895 Handbook of Birds of Eastern North America, Chapman recorded Schieffelin's responsibility for their introduction. Fifty years later, the nature writer Edwin Way Teale published an account stating unequivocally that Schieffelin's "… curious hobby was the introduction into America of all the birds mentioned in the works of William Shakespeare." This assertion was apparently an extrapolation from the development of a garden in Central Park where plants associated with the bard were planted … starting in 1916, ten years after Schieffelin's death. [5] The attribution of the starling introduction to Henry IV is surely poppycock.

There is an aesthetic aspect of starlings that has been overshadowed by the cacophony of their massive flocks―they are mimics nonpareil. According to the diary of Wolfgang Amadeus Mozart, he purchased a pet starling on 27 May 1784, annotating the entry with a musical transcription of its whistled song. Three years later he led a funeral procession of dirge-singing mourners and eulogized his avian companion's death at its graveside with poesy: "A little fool lies here whom I held dear, a starling in the prime of his brief time, not naughty quite, but gay and bright, and under all his brag, a foolish wag." The starling's tune as recorded for posterity by Mozart was nearly identical to the final movement of the Piano Concerto in G Major, K. 453, that he composed at about the same time as he adopted the starling. This eerie coincidence can only have occurred if the starling had learned the tune from Mozart. In all probability, Mozart strolled about Vienna whistling his compositions as they came to his head and wandered into a pet shop, perhaps more than once, where the resident starling picked up the melody, earning its eternal sobriquet as "Mozart's Starling." Circumstantial evidence of Mozart's reputation for whistling and his fondness for birdsong … he had a canary as a youth … supports this thesis.

The vocalization skills of the starling were well known to the Romans and certainly also to the Greeks, whose culture they absorbed. The naturalist Gaius Plinius Secundus, known as Pliny the Elder, wrote that starlings "practiced diligently and spoke new phrases every day, in still longer sentences" in both Latin and Greek. Certainly Shakespeare and his sixteenth-century audience were well aware of the tonal dexterity of the mimicking starling that could be taught to invoke the name "Mortimer"―the jest would otherwise fall flat. In a recent quasi-scientific experiment in which a group of starlings shared a house with a small group of bird researchers, their innate audio habits were manifest: various birds repeated phrases including "we'll see you soon," "give me a kiss," and fragments of the Star-Spangled Banner. Mozart composed a piece called A Musical Joke (K 522) shortly after the death of his pet. It is described as "awkward, unproportioned, and illogical," going on interminably to end in "a comical deep pizzicato (plucking) note." This would also be a good description of the starling's repertoire of screeches, clicks, and whistles from which it concocts a verisimilitude of human speech. Was this Mozart's epitaph for his pet starling? It is more than a possibility, as he is otherwise known for melodic virtuosity. [6]

The starling of Mozart's affection and Schieffelin's obsession morphed into the scurrilous scavenger of the twenty-first century by being too successful a species. In 1915 the USDA launched a comprehensive survey of the effects of the starling in North America that included surveys of farmers and the examination of the stomach contents of thousands of birds. Based on the findings that starlings ate more pests and consumed fewer crops than native birds, the researchers concluded that "the starling possesses an almost unlimited capacity for good." After over a century of profligacy, the limits of starling goodness have become manifest. According to an updated USDA study, starlings consume or otherwise despoil $800 million worth of agricultural crops every year, spread infectious diseases to both humans and farm animals that cost an additional $800 million, and crowd native birds out of nesting sites. A database of starling migration paths was recommended to track nuisance concentrations and allow for targeting them with "improved baits and baiting strategies," clearly a euphemism for poisoning. Starlicide is a USDA-approved product to control starlings and blackbirds even though it is "toxic to other types of birds in differing amounts." But this is supposedly all right because the birds experience a "slow, nonviolent death." [7] This policy calls for a research project to assess its efficacy. Adding poisons to the environment to control highly adaptable birds that will evolve to avoid or tolerate them cannot be good public policy.

A flock of starlings is called a murmuration, not so unusual as bird collectives go―convocations of eagles and parliaments of owls among them. The name is an onomatopoeia for the sound made by careening masses of starlings maneuvering in giant formations, wings flapping and muted calls creating low, indistinct noises. These individual starling murmurs combine to create a murmuration that can comprise well over half a million birds. Rising in the late afternoon, murmurations pulsate in amorphous blobs of organized chaos that have long intrigued ornithologists. The prevalent theory is that the display is driven by instinctive group behavior motivated by safety in numbers, attracting outliers to join so that all can more safely settle on a place to roost for the night. Using multiple cameras from different angles to track individual birds and combining the images in 3D computer models, it emerges that there is no leader, each bird synchronizing with its seven nearest neighbors. The undulating bulges of birds correlate to perturbations attributed to the "selfish herd effect" as birds on the edges move inward to the safety of the center. After about an hour, they descend en masse. [8]
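
The leaderless seven-neighbor rule lends itself to simulation. What follows is a minimal boids-style sketch in Python, assuming simple alignment and cohesion steering; the flock size, speed, and weighting factors are illustrative guesses rather than measured starling parameters:

```python
import numpy as np

# A minimal sketch of the leaderless, seven-nearest-neighbor rule described
# above. Flock size, speed, and steering weights are assumed values.
N, K = 500, 7                              # birds in the flock, neighbors per bird
rng = np.random.default_rng(0)
pos = rng.random((N, 2)) * 100.0           # starting positions (arbitrary units)
vel = rng.standard_normal((N, 2))
vel /= np.linalg.norm(vel, axis=1, keepdims=True)   # unit-speed headings

def step(pos, vel, dt=0.1, align=0.5, cohere=0.05):
    # Each bird finds its K nearest neighbors from pairwise distances.
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(dist, np.inf)         # a bird is not its own neighbor
    nbrs = np.argsort(dist, axis=1)[:, :K]
    # Alignment: match the neighbors' mean heading.
    # Cohesion: drift toward the neighbors' mean position ("selfish herd").
    vel = vel + align * (vel[nbrs].mean(axis=1) - vel) \
              + cohere * (pos[nbrs].mean(axis=1) - pos)
    vel /= np.linalg.norm(vel, axis=1, keepdims=True)  # constant speed
    return pos + vel * dt, vel

for _ in range(1000):                      # run the murmuration forward in time
    pos, vel = step(pos, vel)
```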

According to the International Union for the Conservation of Nature (IUCN), the starling (along with the myna in the Starling family Sturnidae) is among the world's 100 worst invasive species based on its "serious impacts on biological diversity and/or human activities." [9] Rome, Italy is the epicenter of roosting starlings, which have been coming south from all over Europe to overwinter in its balmy Mediterranean climate since the 1920s. Spending days feasting in groves of olive trees and the farmland of the surrounding countryside, starlings congregate in the late afternoon to meet up for the nightly roost. Once situated, they relieve themselves of excrement that coats whatever lies below with a slick mass of olive-oily slime. Street closures must be invoked to prevent motorbike crashes. Parked cars are encased in an implacable sarcophagus of starling scat. Attempts to stem the avian tide, ranging from outright poisoning to the introduction of predatory raptors like hawks, have failed due to the adaptability of starlings, the reason for their ubiquity. The only effective strategy has been relocation. Rome's environmental department devised a technique employing a recording of a starling screeching in distress (induced in a laboratory) that is broadcast with amplified bullhorns to disrupt the roost. Generally, after the third day of being chased away, the starlings opt for a less contested and congested roost as a bird-human compromise. [10]

The starling's overwhelming success as an individual species is a serendipitous result of natural selection. Other than proliferation and vocalization, they are undistinguished, just one of about 6,500 species of the order Passeriformes that make up about half of all species of the class Aves. Usually called songbirds, they are classified taxonomically according to the configuration of their feet. Three claws forward and one back promote grasping and perching on tree branches―they may best be thought of as perching birds that sing. [11] Like almost all other birds, starlings are monogamous, sharing parental duties in nest building, egg incubating, and chick feeding (up to 20 times per hour). In fact, there is some evidence that the male and female birds coordinate these activities so that they share equally. [12] Depending on latitude, they produce up to two clutches of six eggs every year with a success rate of up to 80 percent. While this would nominally result in a Malthusian progression of an additional ten birds per couple every year, only about 20 percent of the chicks survive to reproductive age. Two chicks per couple annually is still enough for a population explosion, as the calculation below suggests. Starlings are omnivores, with a daily consumption of about 15 grams of mostly insect animal food and 30 grams of plant food. Foraging in locations that range from orchards and feed lots to urban landfills, they can readily provision their nests, typically tucked away in nooks of man-made structures. [13]
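
The reproductive arithmetic is easily checked. A minimal sketch in Python using the figures above, with the simplifying (and admittedly unrealistic) assumption that adult mortality is ignored:

```python
# Back-of-envelope starling demographics from the figures above: up to two
# clutches of six eggs per year, ~80 percent nest success, and ~20 percent
# survival of fledglings to breeding age.
clutches_per_year = 2
eggs_per_clutch = 6
nest_success = 0.8
juvenile_survival = 0.2

fledged = clutches_per_year * eggs_per_clutch * nest_success   # ~9.6 per pair
recruits = fledged * juvenile_survival                         # ~1.9 per pair
print(f"fledged per pair: {fledged:.1f}, surviving recruits: {recruits:.1f}")

# Two recruits form one new breeding pair, so with adult mortality ignored
# (a simplifying assumption) the breeding population roughly doubles yearly.
pairs = 1.0
for year in range(1, 6):
    pairs *= 1 + recruits / 2
    print(f"year {year}: {pairs:.1f} pairs")
```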

Starlings have figured out how to make a living in a world otherwise overrun by humans, taking advantage of the terraforming that defines our habitats. While they have become invasive, one might offer the same assessment of Homo sapiens. It should come as no surprise that the class Aves produces individual species that manage to overcome the most challenging environments with unsurpassed survival skills; the penguins of Antarctica and the gooney birds of Midway Island among many others. Avian survival of the meteor impact darkness of the Cretaceous-Paleogene Extinction 66 million years ago as the only representatives of the dinosaurs established a genetic heritage of resilience. According to recent DNA evidence, the starling family emerged about 6 million years ago during a less dramatic but equally challenging global climate transition. Originating in Asia, they spread at about the same time as C3 plants were being replaced by the C4 plants that characterize a drying climate as a part of the global carbon cycle. These plants, like corn or maize, sedges, and sugar cane, are more efficient in conditions with low levels of carbon dioxide. It is likely that the peculiar starling jaw muscles first evolved to meet the C4 food challenge. Unlike most birds, which have strong muscles to close the bill, starlings have the opposite, with protractor muscles to open the bill. This provides the ability and propensity to penetrate narrow slits and pry them open, exposing the plant or animal food otherwise protected. The clever and adapted starlings radiated westward, becoming the European starling. [14]

Cities are the anthropomorphic monuments of civilization. The natural world is buried beneath megatons of concrete interwoven with tunnels for trains, sewers, water, and electricity. The plants and animals that were displaced are banished to waste areas if they survive at all. In the grim and gray concrete canyons, there is no life other than planted trees, manicured lawns, and an occasional park to remind the humans that abide therein that nature really does exist. The few animals like birds and squirrels that have learned to live with the hubris of human occupation are, if anything, a blessing. Aside from providing a reminder that we are not really alone, they offer the beneficent function of clearing the streets of the uneaten bread crumbs sourced from food trucks and tossed aside as a measure of disdain for the earth we live on. The stolid starlings do not let it go to waste, following their exceptional bird survival skills.

Starlings scramble after breadcrumbs on Pennsylvania Avenue in Washington DC

References:

1. Mirsky, S. “Antigravity: Call of the Reviled.” Scientific American, June 2008

2. Smithsonian History of the World Map by Map, Random House, London, 2018, pp 158-159

3. “American Acclimatization Society” New York Times, 15 November 1877.

4. Jewell S.“A century of injurious wildlife listing under the Lacey Act: a history”. Management of Biological Invasions  Volume 11 Issue 3 pp 356–371. https://www.reabic.net/journals/mbi/2020/3/MBI_2020_Jewell.pdf   

5. Miller, J. “Shakespeare’s Starlings: Literary History and the Fictions of Invasiveness.” Environmental Humanities 1 November 2021.  Volume 13 Number 2 pp 301–322. Shakespeare’s Starlings | Environmental Humanities | Duke University Press (dukeupress.edu)   

6. West, M. and King, A. “Mozart’s Starling”  American Scientist. March–April 1990.  Volume 78 Number 2 pp 106–114.

7. Linz, G. et al "European starlings: a review of an invasive species with far-reaching impacts". Managing Vertebrate Invasive Species, USDA, Paper 24, pp 378–386.

8. Langen, T. “Why do flocks of birds swirl in the sky?” Washington Post, 12 April 2022.

9. http://www.iucngisd.org/gisd/search.php  

10. Harlan, C. and Pitrelli, S. “A stunning spectacle – and a huge mess.” Washington Post, 15 January 2023

11. Alderfer, J. ed  Complete Birds of North America, National Geographic Society, Washington, DC, 2006, pp 502-504.

12. Enns, J. “Paying attention but not coordinating: parental care in European starlings, Sturnus vulgaris” Animal Behavior 2022. USDA Agricultural Publication.

13. Linz, op. cit.

14. Zuccon, D. et al. “Phylogenetic relationships among Palearctic – Oriental starlings and mynas”  Zoologica Scripta 10 April 2008 Volume 37 No. 5 pp 469–481.

Narcissus (aka Daffodil)

The Harbinger of Spring

Common Name: Daffodil – The origin of the word daffodil is obscure. The prevalent theory is that it is a corruption of asphodel (from the Greek asphodelos, which has no etymology beyond the name of the flower), a wild flower native to Eurasia noted for its association with the underworld in Greek mythology. The addition of the letter "d" is attributed to the French use of "de," as in Charles de Gaulle, to indicate origin. When this precedes a vowel, the apostrophized version is used, as in D'Artagnan, the fourth of Dumas' Three Musketeers. Presumably daffodil started as "d'asphodel" and was gradually Anglicized.

Scientific Name: Narcissus spp – The genus name is the common name for the flower outside the lingua franca influence of the nineteenth century British Empire where daffodil prevailed. Spp is an abbreviation for species when the subject at hand is all of the species within a genus. Narcissus is derived from the Greek narkissos which is a variant of narke, meaning numbness. This is attributed to the use of the plant for its narcotic properties, a word with the same narke etymology.

Potpourri: The daffodil or narcissus is one of the most well-known, storied, and beloved flowers of the Mediterranean Basin. It is not considered a wild flower in North America because it was introduced from Europe, becoming naturalized over the ensuing centuries. It is just as much a native of the Americas as are European, Asian, and African Americans whose ancestors also came from abroad. It is nonetheless wrongfully shunned by wild flower aficionados as a horticultural imposter, meant for gardens but not nature. As if in spite, the flowers have spread far and wide from their initial introduction by settlers during the diaspora inland from the coastal colonies. Daffodils frequently are found in isolated forest tracts as vestiges of antebellum homesteads long since abandoned, marking their location in perpetuity, a microcosm of the comely Narcissus of Greek mythology.

The Oreads of Greek mythology were nymphal deities of forests and mountains, noted for their charm and beauty in contrast to their bestial counterparts, the goat-bearded satyrs and horse-tailed centaurs. An Oread named Echo was an attendant of Hera, chattering incessantly to distract her from curtailing the sexual exploits of her husband Zeus. In punishment, Hera rendered Echo mute except to repeat the last syllable of a word spoken to her. Echo fell in love with a young Thespian (from Thespiae, a name recalling Thespis, the Greek poet and allegedly the first actor) named Narcissus who haughtily spurned her affections. In grief, she fled to a lonely cavern, where she perished, leaving only her voice as echo. Narcissus was punished by the gods for his hubris with an ironically appropriate curse … to fall hopelessly in love with his own image. While leaning over the reflecting surface of a mountain spring, he was so stricken that he could not tear himself away and expired, the flower there to sprout as his namesake. [1] In the words of Ovid:

Narcissus on the grassy verdure lies

But whilst within the crystal font he tries

To quench his thirst, he feels new thirst arise

For as his own bright image he surveyed

He fell in love with the fantastic shade

And o’er the fair resemblance hung unmoved

Nor knew, fair youth! It was himself he loved. [2]

It is generally thought that the myth of Narcissus gave rise to the flower named narcissus; in all probability it was the other way around. The word narcissus has the same etymology as narcotic and was almost certainly first applied to the flower for its use as an herbal remedy. The name Narcissus was not all that unusual. The Roman emperor Claudius was an able administrator who ruled with distinction, winning the admiration and affection of the citizens of Rome. Following the practices of Caesar and Augustus, he appointed ex-slave freedmen to administrative positions. Narcissus was the most prominent as ab epistulis (meaning "for communications"), essentially secretary of state. He became the richest man in Rome with a net worth of 400 million sesterces ($60B) ill-gained through extortion and coercion. When Claudius's fifth and final wife Agrippina gained control from her aging husband and convinced him to adopt Nero, her son from a previous marriage, as his heir apparent, the days of Narcissus were numbered. As a codicil to the sordid tale, Agrippina did Claudius in with a poisonous mushroom―Nero as subsequent emperor concluded that "mushrooms must be the food of the gods, since by eating them Claudius had become divine." [3] Narcissus was stripped of wealth and power and ended up in a (flowerless) dungeon.

Narcissus of reflecting pool fame has retained name recognition in the modern era as a term rooted in psychiatry. The classification of mental illnesses is a greater challenge than that of physical illnesses because there are essentially no quantitative measures. In almost every case, diagnosis must primarily be inferred qualitatively from what an afflicted patient says with some correlation to observed behaviors. The American Psychiatric Association (APA) first established "a statistical classification of institutionalized mental patients" in 1844 to "improve communications about the types of patients." The Diagnostic and Statistical Manual of Mental Disorders, generally abbreviated as DSM, was started after the Second World War and is now in its fifth edition, including everything from ADHD to Voyeurism Disorder (12% in males and 4% in females). Narcissistic Personality Disorder is defined as a "pervasive pattern of grandiosity, need for admiration, and lack of empathy." Among its indications are exaggeration of achievements (inaugural crowd), fantasies of brilliance (stable genius), and a sense of entitlement (6 January 2021). [4] No flowers there either.

The beauty and tantalizing attraction of the floral narcissus was well established in Ancient Greece. The poet Theocritus wrote of the fair Europa, who entered with her nymphs into a meadow to gather the sweet-smelling narcissus. There she spotted a gentle and majestic bull. As he graciously offered his back, she climbed on, festooning his horns with flowers. She was unwittingly abducted and carried across a vast sea. The rape of Europa by the taurine Zeus on this far-flung shore is the unlikely source of Europe's name; the flowers must surely have been narcissi. A more telling mythological account of narcissus provides a direct association with its narcotic origins. Persephone was the daughter of (the undisguised) Zeus and Demeter, the goddess of the soil and the original "Earth Mother." According to one version of the "abduction of Persephone," she was lured to a field by the presence of striking yellow flowers created for that purpose by Hades, god of the underworld. Taking advantage of her floral distraction, Hades pounced, abducting her to his realm deep within the bowels of the earth to become his wife. The narcissus thus became both the flower of deceit and the flower of imminent death. The choice of narcissus as the flower of the goddess of the underworld was indicative of its widely known and potentially deadly toxic properties. [5] It is of equal note that asphodel, the flower that gave rise to the name of the daffodil, is also associated with the mythological underworld.

The medicinal properties of the bulbs of plants in the Amaryllis family to which the genus Narcissus belongs were well established in antiquity. The choice of medicinal as opposed to toxic is intentional, as many herbals used in treatments against disease are effective because they are toxic to some organisms or cells. Dosage for a specific application is critical, overdose often associated with demise. Hippocrates (460-370 BCE), the father of medicine and the alleged originator of the "first do no harm" oath as physician's touchstone, used narcissus in his practice as a treatment for tumors. Narcissus as chemotherapy to eradicate cancerous cells was still in use four centuries later by Pedanius Dioscorides and included in De Materia Medica, the first pharmacopeia. [6] The physicians of the subsequent Roman Empire spread the use of narcissus throughout the Mediterranean Basin north to Gaul and Britain. Gaius Plinius Secundus, known as Pliny the Elder, extended the use of narcissus to the treatment of sixteen different conditions ranging from the original tumors to burns and "cure of contusions and blows inflicted by stone." He further points out that narcissus is "injurious to the stomach and hence it is that it acts both as an emetic and as a purgative. It is prejudicial also to the sinews and produces dull, heavy pains in the head." Because of this, Pliny asserts that "it has received its name from 'narce' and not from the youth Narcissus, mentioned in fable." [7]

The Dark Ages that followed Pax Romana were noted for religiosity absent the humanism of Greece. Petrarch's perusal of Cicero's letters launched the Italian Renaissance and the eventual rediscovery of medicine as science supplanting superstition. By the late sixteenth century, Greco-Roman treatments were dutifully transcribed into various publications and made widely available. John Gerard's Herball attributes his information on narcissus to Galen, physician to several Roman emperors, as "having such wonderful qualities in drying that they consound and glew (sic) together very great wounds." [8] It took another three centuries for the maturation of the scientific method to rescue the suffering population from bloodletting and quacks with magic potions. In the late nineteenth century, a smelly, yellow compound named, appropriately enough, narcissine was extracted from the flowers, and an alkaloid named pseudo-narcissine was isolated from the bulbs. While narcissus extracts were noted for their use as emetics and narcotics in the treatment of a range of conditions including fever, diarrhea, and worms, the plant was considered to be, in large doses, "an active and even dangerous article." Several grains of the powder were enough to induce vomiting. [9]

Modern chemical and laboratory methods have revealed that plants of the Amaryllis family contain over 300 alkaloids, most of which are unique. One third of these compounds are found in the genus Narcissus. Alkaloids are amine (nitrogen-containing) bases produced by many plants; many are toxic. Due in part to the extensive history of the use of narcissus in the treatment of various diseases, which in some cases must certainly have been effective, there has been some academic and even pharmaceutical interest in characterizing them. About forty species of wild narcissus have been assayed, revealing that each species has a predominantly different group of related alkaloids. [10] Some clinical research has been conducted using modern methods and protocols to demonstrate that, in fact, Hippocrates was right. Lycorine, the very first compound extracted from narcissus in 1877, has been shown to be effective in the treatment of cancers, notably leukemia and melanoma. The largely surgical and chemotherapy cancer treatments of the past are increasingly being supplanted by plant-derived drugs. In the last three decades, a full 80 percent of all new cancer drugs have been derived from natural products. [11] This goes well beyond cancer. Narcissus extracts have been shown to be antiviral, antibacterial, antifungal, antimalarial, insecticidal, emetic, antifertility, pheromonal, and, last but not least, plant growth inhibiting (to stifle competition). [12]

The alkaloid diversity of the genus Narcissus suggests that each species independently generates compounds that must in some way be related to habitat and physiology. The phenological growth of narcissus in early spring, when there are few other food sources for the hungry animals that survived winter, can only have been possible by being unpalatable. It certainly makes sense that plants that have struggled to survive for millennia must have done so through the trial and error of random mutation and natural selection. The proliferation of amaryllids in general and narcissus in particular is indicative of a successful evolutionary path that has expanded their numbers in kind and in quantity. Daffodils are everywhere, and there is a very good reason: they readily hybridize and reproduce, using both seeds and roots to expand radially from the epicenter of a single bulb. [13] The Royal Horticultural Society of Great Britain lists 162 cultivars, ranging from 'Abigail Collette', named for the registrant's granddaughter, to 'Zara's Delight', named for the registrant's daughter, and including 'Grumpy Penguin', named for the characterization of the registrant in a video made by his grandson Jake. [14]

Narcissi cum daffodils are long-term survivors in nature's combative arena. This is evident not only in their geographic reach outward from Iberia but also in their persistence once established―they not only flourish but expand radially over time. This is in part due to the alkaloids that are repellent to bulb-digging mammals and gnawing insects. It is also due to extraordinary reproductive diversity. The color, shape, and scent of flowers have nothing to do with human perception. Flowers rather function to attract mobile pollinators to transport male pollen from the stamens of one flower to the female pistil of another. The cross-pollination that ensues supplies Darwin's random mutation for the choices of natural selection and is why flowering angiosperm plants have been so successful. Narcissus is a master of floral diversity, having styles, the connecting tubes of the pistil that lead to the ovary, that vary both in length and in number, technically heterostylous polymorphism. There can be no doubt that these mutations were the result of different pollinators in different habitats having different behaviors. The overall design of narcissi is to attract long-tongued solitary bees. [15] And should the bees never arrive, there is a workaround: the narcissus is self-pollinating, an adaptation that virtually guarantees fertilization from its own pollen, sacrificing diversity for survival.

Spread of Narcissus in Shenandoah National Park marking a homestead long abandoned.

The narcissus is a bulbous perennial that can also reproduce asexually. Starting in spring from a germinating seed, roots extend downward to form a small bulb where food reserves from the photosynthetic leaves are stored. At the end of the first year, the roots and stem detach, leaving only the bulb to overwinter. Bulb growth continues in the second year, and the plant initiates production of calcium oxalate crystals called raphides, which render it unpalatable and therefore protected from ground-dwelling animals. Bulb growth continues for the next several years as the narcissus has only leaves and no inflorescence. At full maturity, which occurs between five and seven years, the bulb has enough stored energy to create the stalk and blossom for sexual reproduction. Full maturity also results in the formation of a lateral shoot that extends horizontally, eventually developing its own roots and breaking away as a separate, cloned bulb. This is the mechanism whereby one bulb and one flower become many bulbs and a garden of flowers over the years, a progression sketched below. To make sure that the bulb is at the correct depth in the soil for optimal growth, the roots that extend from the bulb are contractile, pulling it downward as needed. [15] So what could be better for a flower to festoon human habitations? The narcissus is almost indestructible, its golden flutes the harbingers of the renaissance of spring.
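
That multiplication can be illustrated with a toy model. The five-year maturity follows the text; the rate of one lateral offset per mature bulb per year is an assumption chosen for simplicity:

```python
from collections import Counter

# A toy model of the clonal spread described above: a bulb matures in about
# five years, after which it flowers and buds off lateral shoots that become
# independent bulbs. One offset per mature bulb per year is an assumed rate.
MATURITY = 5
ages = Counter({0: 1})                    # age in years -> number of bulbs

for year in range(1, 21):
    offsets = sum(n for age, n in ages.items() if age >= MATURITY)
    ages = Counter({age + 1: n for age, n in ages.items()})   # everyone ages
    ages[0] += offsets                    # lateral shoots break away as new bulbs
    print(f"year {year}: {sum(ages.values())} bulbs")
```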

Solar Energy

It is tempting to think of solar energy as a panacea for the climate change problem, providing limitless carbon-free electricity. The sun has been radiating the power of fusion for 4.6 billion years. While only a small percentage is directed toward the Earth, it is enough to have sparked the evolution of life that it has since sustained. It is the source of the energy of coal, natural gas, and oil and the font of photosynthesis on which plants, fungi, and animals all ultimately depend. The Panglossian fix to the global warming problem is to stop using the sunlight energy stored as fossil fuel and start collecting sunlight energy directly. Back-of-the-envelope calculations suggest that the sun's nominal one kilowatt per square meter of power provided globally is more than adequate. Take any large tract of sun-drenched desert, fill it with solar panels―voila, case closed. The Sahara is usually the desert of choice due to its size and sunlight extremes. The power falling within its torrid borders is three million billion watts, two hundred times the current global energy demand. Similarly, the western deserts of North America could be empaneled to produce fifty times the energy needs of the United States. [1]
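
Those magnitudes can be reproduced in a few lines of Python. The surface flux, the Sahara's area, the crude derating for night and weather, and the global demand figure are all round-number assumptions chosen to match the figures quoted above:

```python
# A back-of-envelope check of the Sahara figures above. All inputs are
# round-number assumptions for illustration.
SOLAR_FLUX = 1_000            # watts per square meter at local noon
SAHARA_AREA = 9.2e6 * 1e6     # square meters (~9.2 million square kilometers)
WORLD_DEMAND = 15e12          # ~15 terawatts of global primary power demand

peak = SOLAR_FLUX * SAHARA_AREA     # ~9e15 W at peak sun
average = peak / 3                  # crude derating for night and weather
print(f"average solar power on the Sahara: {average:.1e} W")        # ~3e15 W
print(f"multiple of world demand: {average / WORLD_DEMAND:.0f}x")   # ~200x
```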

Why this is not really the case is a matter of chemistry, physics, engineering, and economics. Solar panels are called photovoltaic (PV for short) because they collect sunlight energy (photo) and convert it to electrical current generated by a voltage gradient (voltaic). The individual solar cell is the sine qua non of any solar energy system supplying electricity to the grid. Solar cells have their origin in research into the properties of semiconductors that also led to the development of transistors in the 1950s. A semiconductor is any material that has a conductivity between that of metals such as copper and insulators such as glass. The former conducts electrons readily and the latter impedes their movement. Resistance is the inverse of conductance; metals have low resistance and insulators have high resistance. Semiconductors are elements, notably silicon and germanium, that have the number and arrangement of electrons favorable to the generation and transport of a relatively small electrical current that can be controlled with high precision.

The chemistry of semiconductors is established by electrons. A fundamental principle of nature is that the components of any system will gravitate to a condition of greater stability, which is generally the lowest energy level. This propensity is manifest in the chemical bond, as the electrons in the outermost or valence subshell of an atom seek to establish a stable state. The idea that stability at the ground, lowest energy state was the basis for all chemical bonding was suggested by the noble or inert gases (helium, neon, argon, krypton, xenon, and radon) that don't combine with anything else. Argon, the first inert gas to be discovered, identified in 1894 by Lord Rayleigh and Sir William Ramsay as a mysterious trace element in air that is otherwise nitrogen and oxygen, was named for the Greek word argos, which means "lazy." In 1923, the American chemist Gilbert Lewis proffered the eponymous Lewis theory of chemical bonding that has four fundamental tenets: (1) elements enter into compounds so as to share or exchange electrons; (2) in some cases, the electrons are transferred from one atom to another (an ionic bond); (3) in some cases, the electrons are shared between the two atoms (a covalent bond); and (4) each of the constituent atoms ends up with an "inert gas" outermost, or valence, electron shell.

The periodic table is arranged according to the progressive filling of electron shells, with elements exhibiting similar characteristics in vertical columns called Groups numbered left to right from I to VIII (1 to 8). The inert gases are located on the far right. The elements that range across the middle are called metals and those near the inert gases on the right are called non-metals. In between is a smaller group of intermediate elements called the metalloids that exhibit both metal and non-metal properties. [2] The semiconductors are metalloids in the same group as carbon (Group IV) with the same bonding characteristics. Carbon is perhaps the most versatile of all elements due to its need for four electrons to complete its outer shell to the inert and stable configuration. It must therefore combine with four other atoms by sharing electrons in covalent bonds. The entire field of organic chemistry concerns carbon compounds, the basis for life. If the four combining atoms are also carbon, the result is diamond, the hardest natural material known. The versatility of carbon bonding is shared by the semiconductors silicon and germanium that lie just below it in the periodic table―they also form four covalent bonds. Since the shells that contain the valence electrons in these elements are further from the nucleus (higher energy states) than in carbon, they can more readily be moved into a conducting state. The propensity of semiconductors to release an electron for use in an electrical circuit is enhanced by the addition of elements from either side (Group III or V) into the bonding arrangement, a process called doping. [3] Solar cells are made from doped semiconductors.

The physics of solar cell semiconductors is based on the observed phenomenon that radiant energy in the form of photons impinging on some surfaces will result in a flow of electrons. The German physicist Heinrich Hertz first observed what came to be called the photoelectric effect in 1887, noting that ultraviolet light changed the voltage at which sparking occurred between a pair of metallic electrodes. By the early 1900s, it was determined through further experimentation that the number of electrons released was proportional to the light intensity (measured then in candlepower, now the candela) and that the energy of the electrons depended on the incident light frequency f (or wavelength λ, as they are related by the equation f = c/λ where c is the speed of light). That this could not be explained by classical physics was the impetus for Albert Einstein to propose what is now the fundamental theory of light. He posited that light could be considered as particles (now called photons) instead of waves and that these particles could penetrate an atom, collide with its electrons, and impart enough energy for them to escape from their orbit around the nucleus. The paper he wrote in 1905, entitled "On a Heuristic Viewpoint Concerning the Production and Transformation of Light," earned him the 1921 Nobel Prize in Physics, awarded in 1922. His work stimulated the then nascent field of quantum theory promoted by the Danish physicist Niels Bohr, who conceived the atomic model of electrons orbiting the nucleus in discrete energy levels called quanta. [4]
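
The decisive point, that frequency rather than intensity determines whether an electron escapes, can be illustrated in a few lines of Python; the 4.5 eV work function is an assumed, typical metal value, not a figure from the text.

```python
# Einstein's photoelectric relation: kinetic energy = h*f - phi. A photon either
# carries enough energy (high enough frequency) to free an electron, or it does
# not; intensity only changes how many photons arrive. The 4.5 eV work function
# is an assumed, typical metal value for illustration.

H = 6.626e-34    # Planck's constant, J*s
C = 3.0e8        # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt
PHI_EV = 4.5     # assumed work function of the surface, eV

for band, wavelength_nm in [("infrared", 1000), ("visible", 500), ("ultraviolet", 250)]:
    f = C / (wavelength_nm * 1e-9)      # frequency from f = c / lambda
    e_photon_ev = H * f / EV            # photon energy in electron-volts
    print(f"{band:11s} {wavelength_nm:4d} nm: {e_photon_ev:4.2f} eV "
          f"-> electron ejected: {e_photon_ev > PHI_EV}")
```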

Physics also establishes the inherent limitations of solar panels, because the photoelectric effect obeys inviolate rules. Incoming solar energy must arrive at a frequency high enough, and in sufficient quantity, to remove outer shell or valence electrons from atoms so that they become part of the electrical current output of the solar panel. Electrons occupy discrete orbits around the nucleus, separated into quantized energy levels, so the photoelectric effect in semiconductors can only be understood in terms of the rules of quantum mechanics. An incoming photon of sufficient frequency strikes an electron, knocking it from the valence energy band into the conduction energy band; literally a quantum leap. However, the electron must then make its way through the rest of the atoms in the panel to reach the surface, expending energy with every encounter. Einstein accounted for this cost of escape with the work function, now usually denoted by the Greek letter phi (φ). The work function varies with many factors, notably the surface condition of the material, its purity, and what is called the packing arrangement of its atoms in crystalline form. The optimization of the amount of electricity that can be extracted from sunlight must take these factors into account. [5]
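
In a semiconductor the analogous threshold is the band gap: a photon below the gap energy passes through unabsorbed no matter how bright the light. A short sketch using textbook gap values (assumed here; the text gives none) for silicon and gallium arsenide:

```python
# Cutoff wavelength for photoexcitation across a band gap: lambda_max = h*c / E_gap.
# Photons with longer wavelengths (lower frequencies) cannot lift an electron
# into the conduction band. Gap values are textbook assumptions, not from the text.

H = 6.626e-34    # Planck's constant, J*s
C = 3.0e8        # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt

def cutoff_nm(gap_ev: float) -> float:
    """Longest wavelength whose photon can still bridge the gap, in nanometers."""
    return H * C / (gap_ev * EV) * 1e9

for name, gap in [("crystalline silicon", 1.1), ("gallium arsenide", 1.4)]:
    print(f"{name:19s} ({gap} eV): photons beyond {cutoff_nm(gap):.0f} nm pass through")
```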

The use of the chemistry of semiconductors and the physics of the photoelectric effect to produce electricity requires engineering, the practice of putting scientific knowledge to practical use. Engineering is the bridge from the laboratory photovoltaic cell to a practical solar cell that can serve as part of a fielded electrical power supply system. The era of solid state electronics started at Bell Laboratories in the 1950s, the epitome of electrical research and development, rivaled only by Thomas Edison's Menlo Park for its relevance to modernity. William Shockley was hired just after World War Two to lead the effort to expand on prewar research that had led to the discovery of what were called P type for "positive" and N type for "negative" silicon semiconductive materials. Serendipity played a key scientific role here. Shipments of silicon received at Bell Labs from various manufacturers were found to have different properties, leading to the hypothesis that the differences were caused by impurities. Further experimentation revealed that P type silicon was contaminated with boron from Group III and that N type silicon contained phosphorus from Group V. On Friday the 13th of April 1945, Shockley drew a diagram in his lab notebook for a P-N junction which he called "a solid state valve drawing small control current" that could be used for "controlling the flow of electricity in a conducting path." [6] The solid state transistor he imagined to control current in an electrical circuit was the harbinger of the information age.

Solar cells followed using the same P-N junction principle. Here the object is not to amplify or otherwise control electrical current but to make electricity out of sunlight. The key was materials engineering: different combinations of semiconductor materials with different additives called dopants to improve efficiency―the amount of electrical energy out relative to the amount of sunlight energy in. For single junction silicon cells, the maximum theoretical efficiency of 33.7 percent imposed by physics is called the Shockley-Queisser Limit, with more than 50 percent of the sun's energy lost as heat. The importance of doping is straightforward. Antimony from Group V with five valence electrons added to silicon with four valence electrons yields one extra electron that can readily be removed as current, with both atoms retaining their "inert" configuration of covalent bonds. Bell Labs produced the first operating silicon solar cell in 1954 with an efficiency of 6 percent. [7]
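
Where the lost half of the sun's energy goes can be sketched with the "ultimate efficiency" half of Shockley and Queisser's argument: photons below the band gap are not absorbed at all, while photons above it surrender their excess energy as heat. The Python estimate below models the sun as a 5800 K blackbody and ignores recombination and geometry, so it peaks near 44 percent rather than the full detailed-balance 33.7 percent; it is an illustration of the reasoning, not their calculation in full.

```python
# 'Ultimate efficiency' sketch: photons below the band gap are not absorbed;
# photons above it deliver only the gap energy, shedding the rest as heat.
# Sun modeled as a 5800 K blackbody; recombination and geometry ignored.

import numpy as np

K_B_EV = 8.617e-5     # Boltzmann constant, eV per kelvin
T_SUN = 5800.0        # effective solar surface temperature, K

def ultimate_efficiency(gap_ev: float) -> float:
    """Fraction of incident blackbody power delivered at exactly the gap energy."""
    kt = K_B_EV * T_SUN
    x = np.linspace(1e-6, 50.0, 200_000)    # x = photon energy / kT
    dx = x[1] - x[0]
    occupancy = 1.0 / np.expm1(x)           # Planck occupation factor
    total_power = np.sum(x**3 * occupancy) * dx
    xg = gap_ev / kt
    mask = x >= xg
    photons_above = np.sum(x[mask]**2 * occupancy[mask]) * dx
    return xg * photons_above / total_power

for gap in (0.5, 1.1, 1.4, 2.0):
    print(f"E_gap = {gap:3.1f} eV -> ultimate efficiency {ultimate_efficiency(gap):.1%}")
```

Run over a range of band gaps, the sketch peaks near silicon's 1.1 eV, one reason silicon is such a fortunate material.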

Solar cells for spacecraft became the first practical application of photovoltaic technology. The International Geophysical Year of 1957 to 1958 was initiated in 1950 by scientists from across the globe to promote scientific cooperation. The US and the USSR announced plans to launch earth satellites in 1955. The US program consisted of two publicly announced projects: Vanguard, a three-stage rocket designed by the Naval Research Laboratory, and Explorer, to be launched on a missile designed by the US Army Ballistic Missile Agency. The Soviets were mum until the surprise launch of Sputnik, the world's first artificial satellite, on 4 October 1957, followed by Sputnik 2 one month later carrying a dog named Laika. The first Vanguard launch attempt collapsed in a huge fireball that December, which the press dubbed "Flopnik." Explorer 1 was launched successfully in January 1958. The solar cell powered Vanguard 1 satellite was launched on 17 March 1958; it is still in orbit. [8] The Vanguard solar cells had a total power of one tenth of a watt in an array of one tenth of a square meter, the equivalent of 1 watt/m2, with an efficiency of 10 percent. Solar cells only work out to about the orbit of Jupiter, where the sun's radiant energy fades. Beyond that, nuclear power sources that produce energy from radioactive decay become necessary.

Solar cell technology advanced as an integral part of the space race between the US and the USSR in the second half of the twentieth century. With design constraints that necessitated minimum weight and surface area due to payload launch limits, aerospace applications favored higher power density cells without regard to unit cost. The key parameter is specific power, measured in watts per kilogram. By using multiple layers of solar cells with different materials to take advantage of different wavelengths of incident solar radiation, efficiencies of over 45 percent have been achieved. The subsequent worldwide rollout of solar cell technology was precipitated by the need for stand-alone power where transmission lines would not reach or where batteries were too expensive to install and maintain. Ironically, the oil and gas behemoth Exxon (now ExxonMobil) provided part of the funding to develop affordable solar cells using lower grade silicon and cheaper materials, driving the cost from $100 to $20 per watt. The motivation was to provide power for remote pumping stations and offshore rigs, primarily for signal and alarm systems. The cheaper cells made it cost effective for the US Coast Guard to replace batteries on ocean buoys with solar cells and for railroads to upgrade to wireless solar cell signaling systems. The closing decades of the twentieth century raised the ante for solar cells with the advent of roof-top panels for buildings and solar powered pumps for irrigating far flung fields. [9]

The twenty-first century opened with the inconvenient truth that the Industrial Revolution had an unintended consequence. The United Nations Environment Programme established the Intergovernmental Panel on Climate Change (IPCC) in 1988 to provide "an assessment of the understanding of all aspects of climate change, including how human activities can cause such changes and can be impacted by them." The Third IPCC Assessment Report at the turn of the century was a clarion call to action, confirming that over the course of the twentieth century, temperature had risen 0.6°C, snow and ice cover had fallen by 10 percent, average sea level had risen 15 centimeters, and precipitation had increased by 5 percent. [10] The search for carbon free energy on a global scale was on, and photovoltaics was in the crosshairs of innovative engineering. Cost would be the determining figure of merit. To manufacture and install solar panels in acres of arrays to generate electricity at a cost per watt comparable to fossil fuel became the "over the rainbow" goal. After a decade of delay occasioned by the post-9/11 global war on terror and the ensuing financial meltdown, the US government was finally able to focus on climate.

The crux of the economics issue is that high efficiency solar cells are expensive and cheap solar cells are inefficient. To be affordable as an integral part of an energy grid of the future, solar cells must be both cheap and efficient. Silicon is the semiconductor of choice because it is abundant and therefore cheap; it is second only to oxygen as the most common element in the earth's crust (28.2 percent). However, raw silica must be chemically treated to convert it into a crystalline silicon form that will conduct electricity. Silicon PV cells are made by cutting the crystalline silicon into thin slices that are doped to produce the P-N junction of a diode, with metallic contacts to conduct the photon generated current flow. The crystal structure determines the efficiency of the cell. Single crystal cells are the most efficient, but they are more expensive to manufacture than cells with multiple crystals. The efficiency of the best commercial single crystal solar cells is about 20 percent. This can be improved by adding cells designed to capture photons at different frequencies; when these are stacked in a single device, known as a multijunction cell, efficiencies approaching 50 percent can be achieved. These PV cells are at the efficient but expensive end of the spectrum. At the opposite end are thin film solar cells applied to a substrate of metal, glass, or plastic that can be flexible to allow for contoured surface installations. Thin film solar cells trade efficiency for lower cost. The ultimate goal of any successful solar cell is to produce electricity at the lowest dollar per watt after all factors, including installation, maintenance, replacement, and materials cost, are included in the calculation. [11]
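
The figure of merit reduces to simple division. A sketch with invented placeholder costs and efficiencies, not market data:

```python
# Dollar-per-peak-watt arithmetic: module cost per square meter divided by the
# watts a square meter delivers under standard peak sun (1000 W/m^2). All costs
# and efficiencies here are invented placeholders, not market data.

PEAK_SUN_W_M2 = 1000

cells = {  # name: (hypothetical cost in $/m^2, efficiency)
    "single-crystal silicon": (250, 0.20),
    "multi-crystal silicon":  (150, 0.15),
    "thin film":              (60,  0.10),
}

for name, (cost_per_m2, efficiency) in cells.items():
    peak_watts_per_m2 = PEAK_SUN_W_M2 * efficiency
    print(f"{name:24s} {peak_watts_per_m2:4.0f} W/m^2 peak "
          f"-> ${cost_per_m2 / peak_watts_per_m2:.2f}/W")
```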

That achieving the right balance between cost and efficiency would be difficult was evident early on. The Energy Policy Act of 2005 empowered the Department of Energy (DOE) to "spur commercial investments in clean energy policies that use innovative technologies" through the use of federal loan guarantees to private companies. Solyndra, a California company that had developed copper indium gallium diselenide thin-film solar cell technology, seemed a sure bet and was richly endowed with federal funding. The technology worked to reduce power cost but the economics didn't―the cells were unable to compete with conventional, flat silicon solar panels. When the company went bankrupt two years later, it was considered the "first serious financial scandal of the Obama Administration." When the dust settled, it was generally concluded that the government's ability to pick technology winners was inherently flawed; federal funding should be directed at research and development, with the marketplace promoting viable technologies. Bloomberg News concluded that "If the Solyndra debacle gets U.S. policy pointed in the right direction, the loan-guarantee losses won't have been totally in vain." [12]

The Advanced Research Projects Agency for Energy (ARPA-E) was established and funded in 2009 to advance "high-potential, high-impact energy technologies that are too early for private-sector investment" using the Defense Department DARPA model that pioneered the Internet. Of the 46 ARPA-E energy research centers funded in 2010, 24 were working on solar energy issues. These initiatives are rightly in the areas of basic research, trying to develop a solar cell that is easy to manufacture from cheap materials with sufficient efficiency to be cost competitive. Basic research is long term by its nature, with failures outnumbering the rare success by an order of magnitude. Programs in the works range from Solar Agile Delivery of Electrical Power Technology (ADEPT), to improve PV performance, to Full-Spectrum Optimized Conversion and Utilization of Sunlight (FOCUS), to extend solar cells across a broader bandwidth of solar radiation frequencies. It goes without saying that a mnemonic acronym is nearly a prerequisite for government funded programs. While there have been no eureka breakthroughs to date, there is every reason to hope that there will be. [13]

While a super solar cell may be in the offing at some point, there is something to be said for Adam Smith's tried and true economies of scale. Spaceship Earth is not payload limited like Vanguard rockets. Manufacturing myriad large, cheap solar panels in an assembly line manner to cover large swaths of surface area is sure to drive the cost per unit down, just as it did for pins in Smith's famous example. In 2006, one of the world's largest semiconductor manufacturers embarked on a program to make garage door sized glass panels coated with thin films of amorphous silicon, a focused attempt to sacrifice efficiency for size to lower the dollar per watt cost. The assembly line process started with 60 ft2 glass panels precoated with a thin metal oxide film, run on a conveyor belt through an automatic laser scribe to define the boundaries of 216 individual cell panels. Three layers of amorphous silicon, each absorbing light from a different part of the spectrum, were robotically added in sequence using vapor deposition. With the addition of metal contacts and a junction box, the panels were ready for shipment. With a cost of $3.50 per watt projected to decrease to $1.00 per watt as production ramped up, the prognosis for large scale arrays was sanguine. [14] The company nonetheless shut down the assembly line in 2010 due to lack of demand. [15]

The difficulties with manufacturing solar cells in the United States to satisfy market demand at both the high efficiency, high cost and the low efficiency, low cost ends of the spectrum are indicative of a global economic megatrend. The solar panel supply and demand imbalance is a microcosm of the effects of China's manufacturing juggernaut. The Chinese produced 85 percent of all solar panels sold across the world in 2022, with almost the entire balance from other Asian-Pacific (APAC) nations, mostly Vietnam. The United States and Europe produced less than one percent each. This contrasts with the roughly 1,000 terawatt-hours (TWh) of PV electricity generated globally in 2022, about 50 percent in China and APAC and about 17 percent each in Europe and North America. This sounds like a lot of energy, but it is only about 15 percent of total renewable generation of some 7,500 TWh, which in turn is only about 10 percent of the global energy supply. Solar energy thus comprises only about one percent of the total. The 150 gigawatts of solar capacity added in 2021 was a record amount, yet it is only one third of the average annual addition in PV power needed over the next decade to stay on the path to carbon neutrality by mid-century. [16]

Return now to the original thesis that the sun produces ample energy to empower human enterprise many times over. Even if PV cell chemistry and physics could be engineered imaginatively into cheap and efficient solar panels, two intractable problems remain: sunlight is a diurnal and seasonal energy source, and there is no ready repository to store electricity generated when supply exceeds immediate demand. Most of the industrial world is geographically situated between 30 and 50 degrees north of the equator. This means that the 1,000 watts per square meter that falls on the equator at midday is reduced to about 600 watts per square meter in the industrial zone. It is only midday at noon, so the overall energy delivered must also be discounted by half, to 300 watts per square meter, to account for mornings and afternoons. Cloud cover, which in some locations like the UK amounts to more than half the day on average, diminishes solar production by a further factor of two or three. The net effect is that the actual amount of solar energy that impinges on panels ranges from about 100 watts per square meter in Germany and New York to 200 watts per square meter in Spain and Texas. With commercial solar cell efficiency at 10 percent and unlikely to ever exceed 20 percent, the output electricity is only about 10 to 20 watts per square meter. [17] This means that gargantuan solar panel "farms" are needed to provide for a city sized load in the gigawatt (GW – billions of watts) range. These are only likely to be economical in areas that are closer to the equator and relatively near the cities they supply. The largest solar farm in the world, in the Indian desert state of Rajasthan west of Delhi, covers 14,000 acres and produces just over two gigawatts. The largest facility in the United States is in California, with 579 MW (0.6 GW) covering 3,000 acres. The largest solar farm in the state of Delaware produces 15 MW on 80 acres.
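
The discounting chain can be followed step by step; the cloud factor below is an assumed halving chosen to match the quoted 100 to 200 watts per square meter, and the rest of the numbers come from the paragraph above.

```python
# The discounting chain, step by step. The latitude and day/night factors
# follow the text; the cloud factor is an assumed halving consistent with the
# quoted 100-200 W/m^2 range; efficiency uses the quoted 10 percent figure.

PEAK_W_M2 = 1000.0        # equatorial midday sun
latitude_factor = 0.6     # 30 to 50 degrees north
day_night_factor = 0.5    # mornings, evenings, and night
cloud_factor = 0.5        # varies widely by location
efficiency = 0.10         # commercial cells

output_w_m2 = PEAK_W_M2 * latitude_factor * day_night_factor * cloud_factor * efficiency
print(f"Average electrical output: {output_w_m2:.0f} W per square meter")

city_load_w = 1e9         # a gigawatt-scale city
area_m2 = city_load_w / output_w_m2
print(f"Panel area for 1 GW: {area_m2 / 1e6:.0f} km^2 "
      f"({area_m2 / 4046.86:,.0f} acres)")
```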

It is conceivable that enough solar panel mega farms could be built in some places to make enough electricity to meet demand when the sun shines. But what do you do at night and during winter? And what do you do when PV power supply exceeds grid demand? The answer to both questions is energy storage. Saving the excess current of PV cells during cloudless, sunny days in summer to be used at night and over the winter is the Achilles' heel of renewable energy. Long-duration energy storage (LDES) is the collective name for methods, both real and imagined, that seek to alleviate the renewable storage problem. Rechargeable batteries cannot store energy on a large enough scale because they have low energy density, a short life cycle, and, ultimately, cost too much. The most well-established LDES technology is pumped-storage hydropower, the name a literal description of its modus operandi. Excess renewable electricity is used to pump water from a low elevation catch basin to an elevated reservoir. The stored potential energy is converted back to electricity by water turbines when the sun is down or the wind is still. There are also proposals to use the excess solar electricity to make hydrogen gas by electrolysis. One may conclude that, while solar energy may be one of many technologies that will need to be employed to reduce fossil fuel demand, it is hardly a panacea.
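
The scale of the storage problem follows from E = mgh. A sketch with assumed values for the head height, load, and duration, ignoring pumping and turbine losses:

```python
# Sizing a pumped-storage reservoir from E = m*g*h. The head height, load, and
# duration are assumptions chosen for illustration; pumping and turbine losses
# are ignored.

G = 9.81                # gravitational acceleration, m/s^2
HEAD_M = 300.0          # assumed elevation difference between reservoirs
CITY_LOAD_W = 1e9       # a 1 GW city
HOURS = 12              # carry the city through one long night

energy_j = CITY_LOAD_W * HOURS * 3600      # watts * seconds = joules
mass_kg = energy_j / (G * HEAD_M)          # invert E = m*g*h
volume_m3 = mass_kg / 1000.0               # water: 1000 kg per cubic meter

print(f"Energy to store: {energy_j:.2e} J ({CITY_LOAD_W * HOURS / 1e9:.0f} GWh)")
print(f"Water to cycle:  {volume_m3:.2e} m^3 -- a one square kilometer lake "
      f"about {volume_m3 / 1e6:.0f} m deep")
```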

References:

1. Laughlin, R. Powering the Future, Basic Books, New York, 2011, pp 91-93.

2. Petrucci, R. General Chemistry, Principles and Modern Applications, Macmillan Publishing Company, New York, 1985, pp 198-203, 364-401.

3. "Semiconductors and Insulators, Theory of," Encyclopedia Britannica, Macropedia, 15th Edition, William Benton, Chicago, Illinois, 1974, Volume 16, pp 522-529.

4. Marton, L. "Photoelectric Effect," Encyclopedia Britannica, Macropedia, 15th Edition, William Benton, Chicago, Illinois, 1974, Volume 14, pp 296-300.

5. Neamen, D. Semiconductor Physics and Devices, McGraw Hill, Boston, MA, 2003, pp 104-106. http://www.fulviofrisone.com/attachments/article/403/Semiconductor%20Physics%20And%20Devices%20-%20Donald%20Neamen.pdf

6. Riordan, M. and Hoddeson, L. Crystal Fire: The Invention of the Transistor and the Birth of the Information Age, W.W. Norton & Company, New York, 1997, pp 97-113.

7. Smil, V. Energy in Nature and Society, MIT Press, Cambridge, MA, 2008, pp 255-257.

8. https://www.nasa.gov/feature/65-years-ago-the-international-geophysical-year-begins

9. Perlin, J. “Late 1950s – Saved by the Space Race”. SOLAR EVOLUTION – The History of Solar Energy. The Rahus Institute. http://californiasolarcenter.org/old-pages-with-inbound-links/history-pv/

10. Climate Change 2001 Synthesis Report, Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, UK. 2001.

11. https://www.energy.gov/eere/solar/solar-photovoltaic-technology-basics     

12. Lott, M. “Solyndra — Illuminating Energy Funding Flaws?” Scientific American.  September 27, 2011.

13. https://arpa-e.energy.gov/technologies/programs

14. Bourzac, K. “Scaling up Solar Power” MIT Technology Review, March/April 2010, pp 84-86. 

15. Kanellos, M. “Applied Materials Kills its SunFab Solar Business”. Greentech Media 21 July 2010.

16. https://origin.iea.org/data-and-statistics/charts/solar-pv-manufacturing-capacity-by-country-and-region-2021

17. MacKay, D. Sustainable Energy – Without the Hot Air, UIT, Cambridge, UK, 2009, pp 38-49.

Greenhouse Effect and Global Warming Gases

The greenhouse effect is the warming of the Earth due to its atmosphere. Solar radiation passes through the atmosphere much as it passes through the panes of glass forming the roof and walls of a greenhouse. Radiant energy impinging on the Earth's surface and the floor of the greenhouse causes them to heat up. Since heat flows from hot to cold as a matter of basic physics, both of the now warmer surfaces heat the surrounding air by radiating upward. The greenhouse effect results because the solar radiation passes through the atmosphere and the glass with little absorption, but the surface heat radiation is partially absorbed as it seeks to escape. The reason for the difference is that the wavelengths of electromagnetic energy of the two are different. Solar radiation that reaches the Earth is shorter wave ultraviolet and visible light. The heat radiation emanating outward from the surface is longer wave infrared. The terms ultraviolet and infrared refer to wavelengths that are shorter than, or "beyond," the violet end of the visible spectrum and those that are longer than, or "below," the red end (keeping ROY G BIV in mind). The significance of different wavelengths should come as no surprise. The microwaves used to heat up lunch while listening to the radio waves of music broadcast remotely are part of the same electromagnetic spectrum.

As diagrammed above, incoming solar radiation at the top of the atmosphere is 342 watts per square meter (Wm-2 is shorthand for W/m2). The watt is the eponymous unit of power, familiar from light bulb notoriety, honoring James Watt, the inventor of the condensing steam engine; he coined the term horsepower so that people would understand what a steam engine could do, and one horsepower is about 746 watts. Only 168 Wm-2 is absorbed by and heats the surface of the earth, as 77 Wm-2 is reflected by clouds, aerosols, and atmospheric gases, 30 Wm-2 is reflected by the earth's surface, and 67 Wm-2 is absorbed by the atmosphere. Thus the sun's primarily ultraviolet and visible short wavelength incoming radiation mostly passes through the atmosphere, heating up the surface of the earth as it does a greenhouse. The outgoing surface radiation of 390 Wm-2 is shown on the bottom right. This is the longer wavelength infrared radiation of the Earth's surface rising into the atmosphere. The change in wavelength between incoming and outgoing is because the sun is much hotter than the earth. [1]

The radiation spread, or spectrum, between high energy ultraviolet and low energy infrared depends on the temperature of the radiating body. Some of the infrared radiation (40 Wm-2) escapes, but over 80 percent (324 Wm-2) is absorbed and re-radiated back to the surface by the gases in the atmosphere, which are called greenhouse gases for this reason. The other heat energy components in the diagram are those associated with the hydrologic cycle; the evaporation and condensation of water is also a function of heat and temperature. The climate changing equation is that incoming short wavelength solar radiation must be either reflected back into space or balanced by outgoing longwave radiation, mathematically 342 = 107 + 235. It is clear that the greenhouse gases play a key role in this balance. If more gas is added, more heat is radiated back from the atmosphere, and surface temperature must go up to compensate. Global warming results. [2]
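
The budget arithmetic checks out; a few lines of Python using only the figures quoted above:

```python
# The energy budget quoted above, checked arithmetically. All values in W/m^2.

incoming          = 342
reflected_atmos   = 77    # clouds, aerosols, atmospheric gases
reflected_surface = 30
absorbed_atmos    = 67
absorbed_surface  = 168

# The incoming radiation is fully accounted for:
assert reflected_atmos + reflected_surface + absorbed_atmos + absorbed_surface == incoming

reflected = reflected_atmos + reflected_surface        # 107
outgoing_longwave = incoming - reflected               # 235, required for balance
print(f"{incoming} = {reflected} reflected + {outgoing_longwave} outgoing longwave")

surface_radiation = 390   # infrared emitted upward by the surface
back_radiation    = 324   # re-radiated downward by greenhouse gases
print(f"Fraction of surface radiation returned: {back_radiation / surface_radiation:.0%}")
```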

Trapping the heat of the sun under a warming blanket of atmosphere makes life on Earth possible. Without the greenhouse effect of its atmosphere, Earth would be like our planetary neighbor in the next orbit outward: Mars has an average temperature about 75°F below zero. If no action is taken to stem the tide of rising temperature, Earth will become more like Venus, where the mostly carbon dioxide atmosphere creates a super greenhouse effect with an average temperature of over 800°F. Planet hunting astronomers call the region around a star where liquid water can exist the Goldilocks Zone, indicating that life as we know it could be possible there ― the circumstellar habitable zone. It is necessary but not sufficient that Earth is in one. It must also have a sufficiently moderating atmosphere with enough (but not too many) greenhouse gas molecules.

The French mathematician Jean-Baptiste Joseph Fourier is credited with the first observation that the earth must be warmed by solar radiation retained by the atmosphere: "Tous les effects terrestres de la chaleur du soleil sont modifiés par l'imposition de l'atmosphère" (all of the sun's heat effects on earth are modified by the interposition of the atmosphere). [3] This philosophical observation was rooted in science by the Swedish physicist Svante Arrhenius, who first quantified the effect of carbon dioxide (then called carbonic acid) on temperature, which he called the "hothouse" effect (which, ironically, is probably the better term). His conclusion was "… if the quantity of carbonic acid increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression." [4] The theoretical musings about the greenhouse effect became more factual toward the end of the first four decades of the 20th century. The British engineer Guy Callendar reviewed historical data in 1938 to conclude that "by fuel combustion man has added about 150,000 million tons of carbon dioxide to the air during the past half century," resulting in a measurable global temperature increase "at an average rate of 0.005°C per year." [5] While convincing, the correlation of carbon dioxide to temperature does not prove causation – that the accumulation of atmospheric carbon dioxide is the sine qua non of the measurable rise in global temperature.
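
Arrhenius's geometric-to-arithmetic rule survives as the modern logarithmic forcing law, warming = S × log2(C/C0), where S is the climate sensitivity per doubling of CO2. A sketch assuming a mid-range S of 3°C per doubling (the text quotes estimates from 3.6°C in 1956 to the CMIP range of 2.1 to 4.7°C):

```python
# The logarithmic forcing law: warming = S * log2(C / C0). Each doubling of
# CO2 adds the same temperature increment. S = 3.0 C per doubling is an
# assumed mid-range sensitivity; C0 = 280 ppm is the pre-industrial level.

from math import log2

S = 3.0       # degrees C per doubling of CO2 (assumed)
C0 = 280.0    # pre-industrial concentration, ppm

for c in (280, 420, 560, 1120):
    print(f"{c:5d} ppm ({c / C0:.2f}x pre-industrial) -> "
          f"warming {S * log2(c / C0):+.2f} C")
```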

The first scientific experiments were carried out by the Irish physicist John Tyndall, who realized in 1859 that the absorption of radiation by gases was "a perfectly unexplored field of inquiry." He constructed the world's first spectrophotometer, a tube that could be filled with different gases and subjected to radiation, instrumented with a recently invented device called a differential thermopile that could measure minuscule changes in temperature. Six months after he began his experiments, he presented his eureka results to Britain's Royal Society: different gases varied markedly in their ability to absorb and retransmit radiant heat. Nitrogen and oxygen, which make up over 99 percent of the atmosphere, were found to be essentially transparent to radiant heat, but other, more complicated molecules, including water vapor, carbon dioxide, ozone, and (quixotically) perfume, absorbed heat much more readily, even in small concentrations. Tyndall stressed the importance of water vapor because "comparing a single atom of oxygen or nitrogen with a single atom of aqueous vapor, we may infer that the action of the latter is 16,000 times the action of the former." [6] He concluded that water vapor was the most important gas controlling the surface temperature of the earth. This, then, became Royal Society gospel and accepted science for over a century.

The emergence of carbon dioxide as the true climate culprit was only a matter of time and science. Tyndall's primitive experiment demonstrated only that humid air absorbed heat energy; why it did so was another matter. The physics is complex, relating to the quantum energy levels of the atoms of the greenhouse gas molecules. Spectroscopy, the study of the absorption and emission of light and other radiation as related to its wavelength, evolved rapidly in the early decades of the twentieth century. The emission or absorption of light within a narrow frequency and energy band is called a spectral line. Carbon dioxide has thousands of spectral lines that are responsible for the absorption of the infrared radiation of heat energy. A detailed understanding only became possible with accurate measurements at different heights in the atmosphere, since the lines vary in intensity and width with temperature and pressure and therefore with altitude … a multivariable problem in three dimensions presenting a tangle of interrelated calculations. High speed computation was needed to run the iterative sequences of differential equations. By the 1950s, the measurements were available and the computers were programmed. The absorption of heat energy by molecules of carbon dioxide became settled cause and effect science. As early as 1956, there was convincing evidence that "…if the carbon dioxide content of the atmosphere should double, the surface temperature would rise by 3.6 degrees Celsius." [7]

There remains the vexing problem of water vapor. There is a rational reason why water and its vapor loom large in debates about climate change causation. Weather, the fluctuating state of the atmosphere whose elements of wind, rain, and sunshine determine climate only when averaged over decades, is dominated by water. Rain in summer and snow in winter come from clouds of condensed water vapor evaporated from liquid oceans, lakes, and rivers. Water is the most variable component of the atmosphere and is central to climate variability and change. Oceans cover 70 percent of the Earth's surface, contain over 96 percent of its water, produce 86 percent of all evaporation, and receive 78 percent of all rain. Spinning this sloshing volume at speeds of up to 1,000 miles per hour between and around the embedded land mass continents of a tilted, heated globe produces weather.

Water vapor is a natural greenhouse gas. It is also the most heat absorbing of all greenhouse gases. The hydrologic cycle of evaporation, rain, and runoff has been going on for billions of years ― the planetary plumbing system. The storing of the sun's heat energy as the latent heat of evaporation of water into the atmosphere (note figure above) and its release when the vapor condenses to fall as rainwater provides the energy for weather. To complicate matters, water vapor produces positive feedback: warmer weather means more evaporation, which increases the water vapor in the atmosphere, which traps more heat, which causes warmer weather. Positive water vapor feedback is considered the most important factor in amplifying the increase in surface temperature. Further, water vapor condenses into clouds, which are not gases but contribute nonetheless to the greenhouse effect by absorbing and emitting infrared heat radiation. But clouds also act as a shield, cooling the climate by reflecting solar radiation. The variability of cloud formation and movement is one of the most profound conundrums of climate science. The only plausible way to address the chaotic interplay of sun, wind, and water was to develop increasingly sophisticated models that require high speed supercomputers. That has now evolved to many different models that can be compared and contrasted to narrow the uncertainty.

The Coupled Model Intercomparison Project (CMIP) was started in 1995 as a collaboration among modelers to compare results. First generation Atmosphere-Ocean General Circulation Models (AOGCM) used the physical dynamics of atmosphere, ocean, land, and sea ice as impacted by greenhouse gases and particulates called aerosols. State of the art Earth System Models (ESM) were more recently added to include the effects of the biochemical carbon, sulfur, and ozone cycles. Model validation consists in part of inputting historical data to compare model output with the known result. The latest CMIP round was based on data collections that ended in 2013 to evaluate the relative efficacy of 56 different models from twelve countries, including the United States, China, Russia, and Norway (where weather forecasting started). The conclusion was that doubling the amount of carbon dioxide in the atmosphere would result in a temperature increase of 2.1 to 4.7 degrees Celsius. [8] It is worth noting that the 3.6 degree rise estimated in 1956 is consistent with this result. Modelling continues as carbon dioxide emissions and temperature keep rising.

Even though water vapor is the dominant greenhouse gas, it is essentially irrelevant to climate change just as it is paramount to weather. Its variability in the short term of weather is offset by its consistency over the long haul of climate. The rising concentrations of other atmospheric gases do not immediately impact weather ― but they are at the epicenter of the climate change problem because they have been and are being added to the atmosphere continuously. The conclusion reached by the United Nations was that the only way to arrest climate change was to reduce the atmospheric emission of greenhouse gases over time. The Kyoto Protocol was a United Nations (UN) treaty initiated in 1997 that went into effect in 2005, after Russia's ratification completed the stipulated quorum of fifty-five nations accounting for at least 55 percent of emissions. It specified limits on the six greenhouse gases found to be the most damaging due to their heat absorption characteristics and concentration in the atmosphere: carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), sulfur hexafluoride (SF6), hydrofluorocarbons (HFCs), and perfluorocarbons (PFCs). The Global Warming Potential (GWP) was established to quantify the effects of the other gases relative to a molecule of carbon dioxide, which has a value of 1. The UN Conference of Parties (COP) that constitute the signatories to the treaty agreed to proceed "with a view to reducing their overall emissions of such gases by at least 5 per cent below 1990 levels in the commitment period 2008 to 2012." [9,10]

The last three "minor" greenhouse gases are frequently grouped together as the "F-gases" to indicate that they contain the element fluorine; taken together, they constitute less than 1 percent of total greenhouse gas emissions. Sulfur hexafluoride (SF6) gas is used primarily in high voltage electrical distribution systems due to its insulating properties. It was a replacement for oil-filled electrical components that contained polychlorinated biphenyls (PCBs), which were banned in 1979 by the Toxic Substances Control Act (TSCA). Each SF6 molecule is the equivalent (GWP) of 23,900 molecules of CO2. HFCs and PFCs consist of a number of different compounds that were formulated to replace chlorofluorocarbons (CFCs), which were banned by the Montreal Protocol of 1987 due to their ozone depleting effect (ozone filters damaging UV radiation). Their GWP values range between 140 and 11,700 for HFCs and between 6,500 and 9,200 for PFCs. In the 1950s, Barry Commoner, a prescient scientist at the forefront of the environmental movement, devised four laws of ecology. [11] The irony of introducing greenhouse gases (HFCs and PFCs) to replace an ozone depleting substance (CFCs) is direct evidence of his fourth law – "There is no free lunch" – every environmental solution (ozone depletion) has a cost (greenhouse gases). This applies equally to SF6 and PCBs.

Nitrous oxide (N2O) is the least known of the three "major" greenhouse gases, its provenance usually listed as "agricultural soil management." With a GWP of about 300, it constitutes about 8 percent of total greenhouse gas emissions. The main culprit is fertilizer, which is about 10 percent nitrogen, added to the soil to compensate for the nitrogen removed with the harvest of the crop – about 100 pounds of nitrogen are removed with the harvest of every acre of corn. Fertilizer is necessary and sufficient to "manage" agricultural soil productivity. The added nitrogen is acted upon by bacteria in the soil as a source of energy for their own growth and reproduction – a process called nitrification, basically the conversion of ammonium (NH4+) into nitrate (NO3−). Nitrous oxide is a naturally occurring by-product of bacterial nitrification of the added nitrogen-based fertilizer. Not to get too technical but to be complete, there is also a process called denitrification in anaerobic (oxygen lacking) soils whereby bacteria reduce nitrate to gaseous nitrogen; denitrification, like nitrification, releases nitrous oxide as a by-product. Thus, as more crops are grown for the ever-expanding global population for food, fodder, or fuel (ethanol), more nitrogen enriched fertilizer must be used to reconstitute the depleted soil – and therefore more nitrous oxide results. The Anthropocene nitrogen cycle has been called the Wibbly-Wobbly Circle of Life. [12] Commoner's first law of ecology is "Everything is connected to everything else." The earth is such a complex and balanced ecosystem that every disturbance (added fertilizer) has far-reaching effects (greenhouse gases and global warming).

The three primary sources of methane (CH4), which has a GWP of around 30, are enteric fermentation, natural gas systems, and landfills. Taken together, they contribute more than three fourths of total methane emissions. Enteric fermentation methane comes from the normal digestion of food by ruminant animals, particularly cattle. Ruminants are named for the rumen, the first of their four stomachs – the repository for the fibrous material that they consume. Microbes in the rumen break down the tough cellulose as part of the digestive process; methane is a byproduct of that process that is expelled by the animal as exhalation. Over 95 percent of enteric fermentation methane is from beef and dairy cows. Other animals, including humans, produce the remainder of the enteric (intestinal) fermentation methane as flatulence. Methane is also the primary constituent of natural gas, widely used for heating and to generate electricity – some of this natural gas escapes into the atmosphere. Landfills are the largest of the three major sources, comprising almost 40 percent of the total – the source is anaerobic bacterial decomposition of human trash. Commoner's second law of ecology applies to methane – "Everything must go somewhere" – there is no way to simply throw things (trash) away, because it will still be there and you have to live with the results (greenhouse gases).
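
The GWP bookkeeping collapses a mixed inventory of gases into a single carbon dioxide equivalent number. A sketch using the GWP values quoted above and an invented sample inventory:

```python
# Collapsing a mixed emissions inventory to CO2-equivalent tonnes with the GWP
# values quoted in the text. The inventory masses are invented sample figures.

GWP = {"CO2": 1, "CH4": 30, "N2O": 300, "SF6": 23_900}

inventory_tonnes = {"CO2": 10_000, "CH4": 100, "N2O": 10, "SF6": 0.1}

for gas, mass in inventory_tonnes.items():
    print(f"{gas:4s} {mass:>9,.1f} t -> {mass * GWP[gas]:>9,.0f} t CO2e")

total = sum(mass * GWP[gas] for gas, mass in inventory_tonnes.items())
print(f"Total: {total:,.0f} t CO2e")
```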

And last but certainly not least is carbon dioxide, the scion of the industrial age and perhaps the harbinger of its demise; it makes up more than 80 percent of all greenhouse gas emissions and, by definition, has a GWP of 1. The carbon cycle is the essence of life; carbon dioxide is the input to plant photosynthesis and the output of organisms like humans oxidizing food for energy. The majority of the excess carbon dioxide in the atmosphere comes from the combustion of fossil fuel – oil, gas, and coal. The energy released by the oxidation of hydrocarbons is both the boon and the bane of the modern world. For example, the natural gas reaction is:

                          CH4  +  2 O2  ―>  CO2  +  2 H2O  +  energy
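
The stoichiometry fixes the exhaust: one mole of methane, about 16 grams, yields one mole of carbon dioxide, about 44 grams. A quick check of the ratio (the household figure is an invented illustration):

```python
# One mole of CH4 (about 16 g) burns to one mole of CO2 (about 44 g), so each
# kilogram of natural gas yields roughly 2.75 kilograms of carbon dioxide.
# The annual household figure below is an invented illustration.

M_CH4 = 16.04    # molar mass of methane, g/mol
M_CO2 = 44.01    # molar mass of carbon dioxide, g/mol

kg_co2_per_kg_ch4 = M_CO2 / M_CH4
print(f"{kg_co2_per_kg_ch4:.2f} kg of CO2 per kg of CH4 burned")
print(f"A household burning 1,000 kg of gas a year emits "
      f"~{1000 * kg_co2_per_kg_ch4:,.0f} kg of CO2")
```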

The level of CO2 in the atmosphere has historically been about 280 parts per million (ppm). It is now over 420 ppm. The energy we use to make electricity and to operate vehicles is increasing greenhouse gas concentrations, which in turn are causing the earth to heat up. "Nature knows best" is Commoner's third law of ecology – every human made change is likely to be detrimental to the balance of nature. Anthropogenic greenhouse gases are the most obvious and potentially existential example. Our mother is nature.

References:

1. Third Assessment Report of the Intergovernmental Panel on Climate Change – https://www.ipcc.ch/report/ar3/wg1

2. Dessler, A. and Parson, E. The Science and Politics of Global Climate Change, Cambridge University Press, New York, 2006, pp 6-11.

3. Fourier, J. “Remarques Generales sur les Temperatures Du Globe Terrestre et des Espaces Planetaires”. Annales de Chimie et de Physique. 1824 Volume 27 p 165.

4. Arrhenius, S. “On the influence of carbonic acid in the air upon the temperature of the ground”  The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. April 1896, Volume 41 No. 251: pp 237–276.

5. Callendar, G. “The artificial production of carbon dioxide and its influence on temperature” Quarterly Journal of the Royal Meteorological Society April 1938 Vol. 64 Issue 275 pp 223-240.

6. Fleming, J. Historical Perspectives on Climate Change, Oxford University Press, New York 2005. pp 66-74.

7. Plass G. “Carbon Dioxide and the Climate.” American Scientist, 1956, Volume 44 pp 302-316.

8. Flato, G. et al Evaluation of Climate Models. Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change Cambridge University Press, Cambridge, UK. pp 741-827.

9. Kyoto Protocol to the United Nations Framework Convention on Climate Change. Conference of the Parties. FCCC/CP/L.7/ADD.1, Kyoto, Japan, 10 December 1997.

10. https://www.epa.gov/enviro/greenhouse-gas-overview   

11. Miller, Stephen. “Early Voice for Environment Warned About Radiation, Pollution”. The Wall Street Journal. Retrieved June 2018. In his 1971 best seller The Closing Circle, Commoner posited four laws of ecology: Everything is connected; Everything must go somewhere; Nature knows best; and There is no such thing as a free lunch.

12. Essay in The Economist, 24 December 2022.