Hemlock for a Happy New Year

Hemlocks are among the many pine and fir evergreens symbolic of the holiday season. This hemlock is a new generation growing to replace those lost to an invasive species and a devastating hurricane at Limberlost in Shenandoah National Park.

Common Name: Eastern hemlock, Canada hemlock, Hemlock spruce – Hemlock is the name for the hop plant in both the Germanic (homele) and Finno-Ugric (humala) language groups. The hop plant is the source of the “hops” used for centuries across much of northern Europe to impart a bitter flavor to liquors made from malted grain. The small flowers of the hop plant are similar to the flowers of the poison hemlock (Conium maculatum), which shares the same etymology and from which the hemlock tree gets its name by indirect association. In other words, the poison hemlock looks like and was named for the hop plant, and the hemlock tree shares a number of attributes with poison hemlock. The Carolina hemlock is very similar and difficult to distinguish from its co-located cousin.

Scientific Name: Tsuga canadensis – The generic name is from the Japanese word for the larch tree which, like the hemlock, is a member of the pine family. Most of the other trees in the genus Tsuga are indigenous to east Asia, primarily Japan. The species name is a reference to the first classification of the tree in the Linnaean taxonomic system, based on a specimen first sighted and identified in Canada. The Carolina hemlock is Tsuga caroliniana, first distinguished in the Appalachian uplands further south.

Potpourri: Hemlocks are members of the ubiquitous Pinaceae or pine family, which consists of conifer or cone-bearing trees that grow throughout the temperate regions of both the Northern and Southern Hemispheres and in mountainous tropical regions. The pine family includes pines (Pinus), spruces (Picea), firs (Abies), hemlocks (Tsuga), larches (Larix), and Douglas-firs (Pseudotsuga, or false hemlock). [1] Since they are large trees that grow in dense clusters, they are among the most important trees of the timber industry, providing 75 percent of all lumber and 90 percent of paper pulp. There are over 200 species worldwide, of which about 60 are indigenous to North America. Pine family trees are monoecious, bearing male and female cones on the same tree, which permits self-pollination, contributing to their evolutionary success at the expense of genetic diversity. The “naked seeds” that literally define the gymnosperms (gymno is Greek; gymnasiums were places for naked exercise) lie at the base of the female pinecone scales, fertilized by male cone pollen wind-blown from the same tree. The pollen that is deposited on the megasporangium of the female cone in the spring ceases growth through the winter, consummating fertilization the following year. [2] In good time, you get a pine.

Hemlocks can most easily be distinguished by their needles, a term referring to the narrow, pointed leaves that, except for the larch, do not fall off over winter, giving rise to the more general term evergreen. Hemlock needles are short and arrayed in two neat rows, one of nature’s better options for higher mountains and boreal forests. However, needles do have a lifespan. Pine trees lose about one fourth of their needles every year, resulting in trails coated with a soft cushion of decaying needles that suppresses almost all other plant growth, one of the best treads for foot travel. The “evergreen” needle as a leaf form is an evolutionary result of several factors involving both latitude and geology. The primary determinant is the length of the growing season, which can vary from as short as 65 days in New England to an average of 250 days in the southeast. All things being equal, a plant will trend toward greater leaf area to expose as much surface as possible to sunlight. Photosynthesis in the chloroplasts of the leaves converts the sun’s photon energy into the hydrocarbon molecules of biology. Broadleaf trees grow where they can, and evergreen needle trees grow where they can’t.

Hemlock needles (with woolly adelgids)

When the non-growth colder season approaches, broadleaf trees are better off wintering over with bare branches, having adequate time to replenish their foliage the following spring. In northern latitudes, there is simply not enough time to restock the canopy with sun gatherers, so needle trees persist year-round with their narrow leaves. Temperature is a second factor due primarily to physics; when the freezing point is reached, the uptake of water is squelched and growth is curtailed. Since average temperature drops about 3 degrees F every 1,000 feet, mountainous terrain has the same effect as latitude on the growing season, so evergreens also prevail at higher elevations. Needle trees are also favored in northern latitudes and uplands because they are winterized with wax-coated needles and resin-infused wood and roots. The conical shape of many conifers, with their one-dimensional needles, is also better suited to survival in heavy snowpack. It should be noted that the pine barrens of New Jersey and the wide expanses of scrub pines across the south are neither mountainous nor northern. Some species of pine thrive in dry sandy soils where periodic wildfires have historically been the norm. Their cones are serotinous, which means that they evolved to burst open after a fire to spread the seeds of restoration, eventually becoming the dominant species. [3]

That hemlock trees share a name with the poisonous hemlock plant cannot be a matter of chance etymology. They have some things in common, but not the notorious toxins of the latter. The “drinking of the hemlock” was the standard method of execution in Ancient Greece. One of history’s most enduring dramas is the trial of Socrates by the popular court or dikasterion, composed of 500 Athenian citizens, in 399 BCE. He was prosecuted for undermining religious faith in the “gods that the state recognizes” by introducing new “demonical beings” and for “corrupting the youth,” and was found guilty by a slim majority. The hemlock execution of Socrates is considered by many historians to mark the end of the Golden Age of Greece. [4] Poison hemlock was thus well known throughout Europe by the Middle Ages, both for its toxicity and, in small doses, for treatment of a variety of ailments. There is evidence of its use for the treatment of cancer, as a narcotic or analgesic, and even as an anti-aphrodisiac (perhaps by killing the object of desire). [5] Because of this, many Europeans were familiar with its shape when growing and its smell when ground into powder. However, since there were no hemlock trees in Europe, it took the discovery and exploration of the Americas to associate the poison hemlock plant with its namesake tree.

The hemlocks of North America were almost certainly first sighted along riverbanks by French explorers who penetrated the mainland by sailing up the St. Lawrence from the North Atlantic in the 16th century. Their knowledge of the smell and branching pattern of the poison hemlock led them to apply the familiar name to the unfamiliar evergreen tree. This is corroborated by the British Cyclopedia of 1836, which notes that the hemlock tree was “so called from its branches in tenuity and position resembling the foliage of the common hemlock.” Conium, the genus of the poison hemlock, was purposely chosen because the plant looked like a miniature cone-bearing tree. In the New World, where there were so many new and strange plants, any mnemonic for distinguishing one species from another brought some order to the chaos. To differentiate the evergreen version of hemlock from its doppelgänger, the compound name “hemlock spruce” was applied. [6] Spruce trees of the genus Picea prevail in boreal forests across North America and Eurasia. Spruce is an anglicized version of “from Prussia,” owing to the prevalence of native spruce trees along the Baltic Sea near present-day Lithuania. Prussia was the ancestral home of the medieval Teutonic Knights, who grew in prestige and power, uniting the disparate Germanic states to form a unified Germany in the 19th century. The hemlock spruce is called Pruche du Canada in Quebec, further evidence of the Prussian connection. The hemlock was later reclassified from the spruces into its own genus within the pine family.

Eastern hemlock or hemlock spruce is the most shade tolerant of all tree species and can survive with as little as 5 percent full sunlight. Since the conversion of solar energy to produce hydrocarbon energy is the foundation of life, its lack can only be compensated for by slow growth. Like Treebeard, the ent of Tolkien’s mythical Fangorn Forest, hemlock growth is slow but inexorable. A one-inch diameter (usually reported as dbh―diameter at breast height―to account for irregularities) hemlock can be over 100 years old. Since hemlocks can grow to over six feet dbh with a height of over 150 feet, it follows that longevity is another characteristic trait. The record age for a hemlock is 988 years, older than Noah’s 969-year-old grandfather Methuselah, the epitome of lifetime endurance. Once established, a hemlock canopy blocks sunlight from penetrating to the understory, snuffing out most arboreal competition. The subsequent microclimate of dense shade with a deep duff layer retains moisture and sustains uniformly reduced ambient temperatures. Not surprisingly, the relatively exacting moisture and temperature requirements for hemlock germination are met by the conditions that they create. [7] But there is more to forest soil management than trees. There are also fungi.

Hemlock polypore growing on dead hemlock.

Pine family trees like hemlock are connected through their root systems with the fungi that surround them, an arrangement known as ectomycorrhizal, “outside fungus root” in Greek. About 90 percent of all plants form mutualistic partnerships with fungi to gain access to essential soil nutrients like phosphorus and nitrogen, with the plant providing up to ten percent of its hydrocarbon sugar output to root fungi in return. For most plants, the mycorrhizal relationship is an option that results in more robust growth. For trees of the pine family like hemlock, the mycorrhizal relationship is universal. Many different species of fungi are involved with the roots of any given tree. While there have been no studies for hemlocks, the closely related Douglas firs (Pseudotsuga menziesii) are estimated to have over 2,000 different species of associated fungi. [8] The kingdom Fungi is not uniformly benign, however, as all living things must find their niche in the tangled web of life as a matter of survival. The subsurface soils kept moist by the hulking hemlocks are an ideal habitat for mold, another broad category of fungi. Seven species of fungi attack the seeds of hemlock resting on the moist soil awaiting the magic of germination. One mold species, Aureobasidium pullulans, was found growing on almost three fourths of all hemlock seeds, impeding their full function. Hemlocks, when they eventually keel over, sustain yet another group of fungi, the saprophytes that feed on the dead. Were it not for the fungi that consume the cellulose and lignin from which tree trunks are made, the world would be covered with tree trunks and none of their carbon would be returned to the atmosphere. Because hemlocks are so pervasive, one species of fungus, aptly named Ganoderma tsugae or hemlock polypore, subsists exclusively on their deadwood.
Also called varnish shelf, it is one of the most recognizable of all fungi and is closely related to one of the most important fungi in Asian medicine (see full article for further details).

Hemlock growing adjacent to fallen old growth hemlock trunk in foreground.

The hemlock is listed on the International Union for Conservation of Nature Red List as near threatened. [9] This surprising state of affairs is not the result of clear cutting and overharvesting, although human impact has surely had deleterious effects. The high point of hemlock harvest was at the turn of the last century, when the wood was used primarily for home construction roofs and flooring. As the population surged in the decades that followed and the newspapers of the golden age of Hearst and Pulitzer proliferated, hemlocks became one of the primary sources for paper pulp. The effects are exemplified by Michigan’s growing stock decreasing by over 70 percent between 1935 and 1955, a result of the slow growth of hemlock relative to its removal. However, the real culprit that threatens hemlocks is a sap-sucking insect closely related to aphids, the bane of gardeners and food for ladybugs. The woolly adelgid was probably introduced from Japan in the early 1950s somewhere in New England and has now spread to 19 states and two Canadian provinces. [10] The larvae of the adelgid suck the body fluids from hemlock needles at their base, covering themselves with a fluffy white layer (hence woolly) to protect against predation (see full article for further details). A literal death by a thousand cuts ensues that can take decades but is in most cases inevitable. The hemlocks of Limberlost were the only old-growth tract in Shenandoah National Park. They had been so weakened by woolly adelgids that they toppled during Hurricane Fran in 1996. The hemlocks are just starting to recover almost thirty years later (note fallen hemlock trunk in foreground in photo).

Unlike its poisonous namesake, hemlock is not only edible but salubrious. It has been attested that the entire pine family “comprises one of the most vital groups of edibles in the world.” [11] This would mostly apply to northern latitudes, where the paucity of winter food could result in starvation absent the resort to eating pine tree inner bark, a thin layer called the cambium. The nutritious cambium is responsible for the formation of the water-transporting xylem on the inside and the hydrocarbon food-transporting phloem on the outside; in other words, it makes the tree trunk. For softwood pine trees, stripping off the outer bark layer to gain access to the cambium can be readily accomplished with primitive scraping tools. The native peoples of North America collected cambium, which was cut into strips and eaten raw, cooked, or dried and ground into flour to make bread, a practice adopted by early colonists. The name of the Adirondack Mountains of New York derives from the Mohawk word haterỏntaks, which means “they eat trees.” The healthful benefits of hemlocks and other pines are further enhanced by high concentrations of anti-inflammatory tannins and antioxidant ascorbic acid (vitamin C) in all parts of the tree. The various Indian tribes had diverse uses, extending from pine tea to treat colds to thick pinesap paste applied to wounds as a poultice. [12] One diarist wrote in the mid-19th century that “I never caught a cold yet. I recommend, from experience, a hemlock-bed, and hemlock-tea, with a dash of whiskey in it merely to assist the flavor, as the best preventive.” [13]

References: 

1. Little, E. The Audubon Field Guide to North American Trees, Eastern Region, Alfred A. Knopf, 1980, pp 276-301.

2. Wilson, C. and Loomis, W. Botany, Holt, Rinehart and Winston, New York, 1967, pp 549-570.

3. Kricher, J. and Morrison, G. A Field Guide to Eastern Forests of North America, Peterson Field Guide Series, Houghton Mifflin Company, Boston. 1988, pp 9-10.

4. Durant, W. The Life of Greece, Simon and Schuster, New York, 1966, pp 452-456.

5. Foster, S. and Duke, J. Medicinal Plants and Herbs of Eastern and Central North America. Peterson Field Guide Series. Houghton Mifflin Company, Boston, 2000, pp 68-69.

6. Earle, C. Tsuga, The Gymnosperm Database, 2018, at https://www.conifers.org/pi/Tsuga.php      

7. Godman, T. and Lancaster, K. “Pinaceae, Pine Family” U.S. Forest Service Report at https://www.srs.fs.usda.gov/pubs/misc/ag_654/volume_1/tsuga/canadensis.htm   

8. Kendrick, B. The Fifth Kingdom, Focus Publishing, Newburyport, Massachusetts, 2000, pp 257-278.

9. https://www.iucnredlist.org/species/42431/2979676    

10. https://explorer.natureserve.org/Taxon/ELEMENT_GLOBAL.2.131718/Tsuga_canadensis  

11. Angier, B. and Foster, K. Edible Wild Plants, Stackpole Books, Mechanicsburg, Pennsylvania, 2008, pp 168-169.

12. Ethnobotany Database at http://naeb.brit.org/uses/search/?string=tsuga+canadensis

13. Harris, M. Botanica, North America, Harper Collins, New York, 2003, pp 44-46.

Wind Energy

Wind Turbines along Allegheny Ridge south of Mount Storm, West Virginia

Wind energy comes from the sun. Counterintuitive but nonetheless true. The transfer of energy from the sun to the earth is fundamental physics, giving rise to weather in the short term and climate when averaged over decades. The sun’s energy in the form of radiant solar heating increases the temperature of the land surface of the earth by transferring electromagnetic energy to individual molecules, causing them to vibrate. Temperature is the empirical measure of the kinetic energy of this molecular motion. More vibration, higher temperature. Solar radiation similarly warms the ocean, but mixing by water currents mitigates the surface heating effect. The heated land surface warms the air immediately above it. Warmer air is less dense since the mostly nitrogen and oxygen molecules, moving more energetically, spread farther apart. The less dense, warmer air rises to create an area of lower pressure in the heated area relative to surrounding air masses. Similarly, cold air falls to create areas of higher pressure. The energy of the sun generates low and high pressure areas.

Temperature differences give rise to pressure differences. Wind occurs when air from an area of high pressure moves to an area of low pressure. On a global scale, the equatorial tropic regions are heated by the sun’s rays, causing the air to rise and move toward the colder north and south poles. The tilt of the earth on its axis of rotation concentrates heating in the area between the Tropic of Cancer, marking midsummer noon in northern latitudes, and the Tropic of Capricorn, where it is then midwinter. The rotation of the earth causes the global wind movement from equator to pole to shift in the direction of rotation. This gives rise to counterclockwise rotation in the northern hemisphere and clockwise rotation down under, a phenomenon called the Coriolis effect. [1] The relatively simple flow of wind curling away from the tropics is complicated regionally by ocean thermal effects and land height differentials. The resultant winds can range from the calms of the horse latitudes to the fury of a cyclone. Capturing the sun’s wind energy can be a daunting proposition.

The use of wind by humans extends to the dawn of the historical record. Rock carvings of boats with sails have been found in the Nile Valley at a site named Wadi Hammamat dating from about 3300 BCE, the pre-dynastic period before the union of Upper (southern) and Lower (northern) Egypt. Corroborating evidence in the form of Egyptian vases depicts reed-hulled ships with a single mast holding a square sail, probably made from either papyrus or cotton, which were likely limited to excursions along and across the Nile. The Phoenicians, the seafaring people of antiquity, ranged throughout the Mediterranean region and possibly passed through the Strait of Gibraltar to reach the British Isles. A rough-hewn terra cotta ship model from about 1500 BCE found near Byblos on the Lebanese coast provides archaeological evidence. Supplemented with oar-wielding human crews, the sail-powered galleys of the Greeks vanquished the Persian fleet at Salamis in 480 BCE, and the long boats of the Vikings began their centuries-long raids along the coastlines of Europe at Lindisfarne in Northumbria in 793 CE. [2] The sailing ship bereft of oars became the agent of change during the Age of Discovery that began with Columbus and literally established a New World order.

For centuries, wind power was the driving force for merchant ships seeking global trade in spices and silks and for warships seeking global dominance with cannons. The language and units of wind are thus rooted in nautical applications. The knot, or nautical mile per hour, for wind speed is a good example. Ships navigate without landmarks in the open ocean, their horizons uniform in all directions. Starting from home port as a known datum, ships proceeded by dead reckoning, a means of determining current position by using only course and speed. The magnetic compass provided a reasonably reliable course, but the speed was as variable as the wind. The mile predates the kilometer by centuries, having been introduced by the Romans as the distance travelled by their legions in 1,000 double steps (mille in Latin), or about 5,000 feet, which was standardized by Queen Elizabeth I to 5,280 feet, exactly eight furlongs. The nautical mile has a different provenance as one sixtieth of one degree of arc of earth’s circumference at the equator, which works out to 6,080 feet. Since degrees of latitude and longitude along the surface of the earth define geographical position, the nautical mile, providing increments of degree change, is the best measure. An ingenious method was devised to determine ship speed in nautical miles per hour. A weighted sea anchor called a drogue attached to a rope was dropped over the gunwale (the ship’s sides above the deck used for gun support). The rope had knots every 47 feet 3 inches, which were counted as the rope played out for a period of 28 seconds (measured by sand glass) as the ship moved away from the stationary drogue. Every knot counted meant that 47.25 feet had been traveled in 28 seconds, which equates to one nautical mile per hour. Since a ship’s speed was measured in knots, the wind that created it was given the same units. [3]
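The chip-log arithmetic described above checks out; a minimal sketch (the constant names are my own, and 6,080 feet is the nautical mile figure given in the text):

```python
# Chip log: knots tied every 47 feet 3 inches, counted while
# a 28-second sand glass runs out.
KNOT_SPACING_FT = 47.25      # 47 feet 3 inches between knots
GLASS_SECONDS = 28           # duration of the sand glass
NAUTICAL_MILE_FT = 6080      # nautical mile, per the text

# Each knot counted corresponds to this many feet traveled per hour
feet_per_hour = KNOT_SPACING_FT / GLASS_SECONDS * 3600
print(feet_per_hour)                       # 6075.0

# Within a tenth of a percent of one nautical mile per hour
print(feet_per_hour / NAUTICAL_MILE_FT)    # ~0.999
```

So each knot counted over one turn of the glass read out the ship’s speed directly in nautical miles per hour.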

The age of sailing ships ended with the advent of steamboats powered mostly by coal, the first of the fossil fuels. Before Thomas Newcomen invented the steam engine, later improved by James Watt into a practical power source, the only ways to do work were with humans or animals and, where available, flowing water or blowing wind. Manpower, now implausibly mangled as person-power, was paramount, and aggressive dynasties throughout the Old World cast about for humans as slaves to carry out the chores of manufacture. The word slave derives from the capture of Slavs from the southern steppes of Europe by the Tatars, who raided up and down the Dnieper and Don River basins to satisfy the demands of their Ottoman employers, whose religion forbade the enslavement of Muslims. The Cossacks originated as a roving band of nomads that fought against the Tatarian slave trade. [4] The heavy stones of the pyramids of Egypt and Mesoamerica were cut, hauled, and hoisted by humans. The Africans kidnapped from their homeland and sold to colonists of the New World perpetuated forced slave labor into the nineteenth century. Animal power came later.

Dogs were first domesticated from wolves about 10,000 years ago, primarily as human hunting companions. Sheep, goats, and pigs followed over the next two thousand years as ready sources of animal protein to augment and eventually replace unreliable hunting for elusive prey. But it was the domestication of the cow/ox from the aurochs 6,000 years ago and the horse two millennia later that transformed human endeavor by incorporating beasts of burden. It has been argued that the prevalence of large domesticable herbivorous mammals in Eurasia (13 out of a total of 14, with only the llama as an American outlier) led to the historical dominance of this area in world history. [5] Workhorse became an idiom for any durable and dependable device, testimony to the centrality of equine employment for everything from chariots to plows. Watt invented the term horsepower to provide an understandable equivalence for his steam engines and convince skeptical buyers of their efficacy. Both units of power are now in use, one mostly for cars and the other for lightbulbs (1 horsepower = 745.7 watts) … and wind turbines.
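Watt’s equivalence is a simple unit conversion; a minimal sketch (the function names and the 2-megawatt example figure are my own, chosen for illustration):

```python
WATTS_PER_HP = 745.7   # mechanical horsepower in watts, per the text

def hp_to_watts(hp: float) -> float:
    """Convert horsepower to watts."""
    return hp * WATTS_PER_HP

def watts_to_hp(watts: float) -> float:
    """Convert watts to horsepower."""
    return watts / WATTS_PER_HP

# A hypothetical 2-megawatt wind turbine expressed in Watt's own unit
print(round(watts_to_hp(2_000_000)))   # 2682
```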

The first wind machine was the windmill. Now a synonym for rotating, the term windmill originated as a compound word to describe the process of using wind to mill grain. As agriculture supplanted foraging in the Neolithic (New Stone) Age, populations grew as more food became available on a regular, seasonal basis. The need to supply more grain to meet burgeoning demand drove innovation. The small, hand-operated grindstone that sufficed for the individual hearth grew in size and weight to the millstone of mass production. The miller’s trade evolved as innovators employed first human and eventually animal strength to operate centralized flour processing facilities. Windmills first appear in the historical record in 644 CE, operating in the region now called Sistan or Sakastan in eastern Iran near the Afghanistan border, noted for persistent strong winds and a lack of flowing water. The asbads (Persian for windmill) of Sistan, now a UNESCO World Heritage Site, consisted of a vertical axis directly connected to a pair of millstones at the bottom, with wind-catching sails mounted horizontally in a stone structure configured with entrance and exit wind portals. [6] The use of vertical axis mills persisted into the 13th century and spread eastward to China and westward to the Crimean Peninsula that extends into the Black Sea.

The first European windmills repurposed the Persian asbad with the gearing that had been developed independently for the waterwheels of the Roman Empire. The result was the iconic post mill, with two to four elongated sails made from canvas ship sailcloth stretched over a wooden frame and held vertically in the direct path of the wind by a wooden post. The wind-rotated sails turned a horizontal axle which was connected to the grinding millstones with a 90-degree bevel gear. Post-type windmills first appeared in France in 1180 and were introduced to England a decade later, evolving over the next century into the tower windmill, which included a movable roof that could be turned on a track to adjust to changes in wind direction. The windmill as water pump was developed in the Netherlands in the 15th century to drain low-lying areas for cultivation. With the millstone replaced by a bucket wheel, water was lifted over six feet and deposited in purpose-built drainage ditches. The windmill gained symbolic distinction as the epitome of Holland, complementing the tulips that were planted in the now arable land and the wooden sabots worn by peasants to traverse boggy fields. By the 19th century, the windmill as a generic power source contradicted its grain-grinding etymology. In addition to water pumping, windmills were used to saw wood, polish stone, grind paint, press seed oil, make paper, and carry out a variety of other mechanized processes, including the traditional grain milling. The Zaan region just north of Amsterdam had more than 900 windmills in the 19th century. [7]

The first wind machine exclusively for power generation was constructed in Cleveland, Ohio by the electrical pioneer Charles F. Brush. After designing and patenting a dynamo for generating electricity for arc lights in 1876, he formed the Brush Electrical Company, which sold arc lighting systems across the United States from San Francisco to New York, providing the first lights on Broadway. After selling his company to what was to become General Electric, he retired to his mansion on Euclid Avenue in Cleveland, devoting himself to research and invention. In 1888, he designed and built a massive wind turbine with a 56-foot-diameter rotor bearing 144 cedar blades to charge 12 direct current (DC) batteries that powered 350 light bulbs in the mansion. An 1890 article noted that “The reader should not think that electric light from energy obtained in this way is cheap because the wind is free … However, there is great satisfaction in making use of one of nature’s most unruly forces of motion.” [8] Perhaps that was Brush’s motivation. At about the same time, the Danish inventor and physicist Poul la Cour took a different tack, using a small number of rapidly turning blades to generate electricity at the Askov Folk High School, where he taught classes on wind electricity and founded the Society of Wind Electricians. His rather surprising choice for energy storage was hydrogen produced by the electrolysis of water, which was used directly for gas lights in the school. Explosions caused by oxygen contamination blew out the windows on several occasions. In 1957, Johannes Juul, one of la Cour’s students, pioneered the first wind turbines to generate alternating current (AC) electricity using the now standard three-blade wind turbine. [9]

The latter half of the 20th century was dominated by cheap fossil fuels for conventional power plants and, for a time, the promise of nuclear energy. The Arab oil embargo imposed in reaction to the 1973 Yom Kippur War sent shock waves throughout the industrialized world, eliciting a reassessment of dependence on foreign oil. The resultant impetus for alternative power generation sources led to a renaissance in wind energy research and development. The wind-wise and wind-resourced Danes took matters into their own hands. In 1975, a group of teachers from three schools that shared a large campus on the former Tvind farm in western Denmark near Ulfborg placed an ad in a major paper “seeking windmill builders.” The resultant Windmill Team, an eclectic group of 400 idealists with no prior experience and an average age of twenty-one, set out to build the world’s first megawatt (MW) wind turbine from scratch with funding provided by the teachers. Three years later, the Tvindkraft, with three pitched rotating blades made from fiberglass and a computer-controlled frequency converter to account for variable speed, rose above the Jutland plain. At a height of over 150 feet and a power capacity of 2 MW, the first modern wind turbine was the largest in the world for several decades. It is still in operation, providing electrical power to the three schools and the co-located Tvind Climate Center. Denmark subsequently became the world leader in wind energy as copies of the design were built throughout the country. The Windmill Team sought no patents in order to promote the shift to wind power and away from fossil fuels, an act of notable altruism. [10]

Altamont Pass wind farm, California

In the United States, federal wind turbine research and development sparked by the oil embargo followed the more traditional pathway of public funding to private companies. Beginning in 1973, the federal wind energy program, later overseen by the Department of Energy (DOE), was managed at the NASA Lewis Research Center, which selected demonstration projects from submitted proposals. The NASA/DOE MOD-0 was a Lockheed design erected in 1975 in Ohio with two blades producing 100 kW atop a 100-foot tower. Designs progressed over the years to MOD-5B, a Boeing installation on the Hawaiian island of Oahu in 1987 producing 3.2 MW on a 200-foot tower. None of these designs was ever commercialized, and the prototypes were all eventually shut down and dismantled. [11] At the state level, rising oil prices coupled with nascent environmental concerns provoked the California Energy Commission to establish the Altamont Pass Wind Resource Area in 1980. With favorable tax incentives, conditional use permits were awarded to commercial interests to build wind farms in Alameda and Contra Costa counties just east of San Francisco. The result was the world’s first modern large-scale wind farm. With an average wind turbine power of only 94 kW, these relatively small turbines were combined in groups of up to 400 to generate city-scale megawatts of power. [12] Although interest waned when oil prices dropped in the mid-1980s, the Altamont Pass project was never abandoned and served as the nexus for increasing California wind energy capacity to address the rising temperatures of global warming.

The United Nations established the Intergovernmental Panel on Climate Change (IPCC) in 1988 to provide “an assessment of the understanding of all aspects of climate change, including how human activities can cause such changes and can be impacted by them.” The panel consists of an international team of recognized experts in the interrelated scientific fields that play a role in climatology. The First Assessment Report was issued in 1990 after having reviewed the preceding decades of research with two broad findings: (1) The greenhouse effect is a natural feature of the planet and its fundamental physics is well understood; and (2) The atmospheric abundances of greenhouse gases were increasing largely due to human activities. [13] After almost three centuries, the cost of the fossil-fueled Industrial Revolution had become clear. The environmental free lunch was over. By the turn of the century, wind energy was back on the table, and resources poured into the design and construction of ever larger and more efficient turbines to be placed in dense clusters wherever the winds blew best. Because of wind variability, however, harnessing it as a reliable and consistent source of electricity posed an engineering challenge.

That wind force can pack a punch is evident in coastal communities hammered by hurricanes and in trailer parks torn apart by tornadoes. Wind derives power from the force it exerts on any surface that is in the path of its movement to equalize pressure. The basic wind power (P) equation is fairly simple:

                                                          P = ½ρAv³

where ρ is air density, A is the swept area of the turbine blades, and v is wind speed. Since the density of air is relatively uniform at 1.25 kg/m³, the only way to increase the power of a wind turbine is to make it bigger or to locate it in a windier area. Wind speed is the most important factor due to the cubic term: doubling the wind speed increases power by a factor of eight (2×2×2=8). Area is the circle swept by the rotating blades of radius r, the familiar A = πr². It is convenient to use metric units to produce power in watts. For example, a wind turbine with 10-meter blades (a swept area of about 314 square meters) rotating in wind at 10 meters per second (1,000 when cubed) would have a theoretical maximum power of (0.5)(1.25)(314)(1,000) ≈ 196,000 watts, or roughly 200 kilowatts (KW). Note that 1 meter per second is about 2 knots, the nautical wind speed unit. A kilowatt is approximately the amount of power required for a mid-sized single-family home; the megawatt (MW) is more useful for the energy needs of a city, and the terawatt (TW) for the globe.
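The equation and the worked example above can be sketched in a few lines of Python; the function name and the rounding in the printout are illustrative choices, not taken from any wind-engineering library.

```python
import math

def wind_power_watts(radius_m, wind_speed_ms, air_density=1.25):
    """Theoretical maximum wind power P = 1/2 * rho * A * v^3, in watts."""
    swept_area = math.pi * radius_m ** 2   # A = pi * r^2, square meters
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3

# 10-meter blades in a 10 m/s wind: about 196 kW of theoretical power
print(round(wind_power_watts(10, 10) / 1000))  # -> 196

# the cubic term: doubling wind speed multiplies power by eight
print(wind_power_watts(10, 20) / wind_power_watts(10, 10))  # -> 8.0
```

The second printout makes the cubic dependence concrete: a site with twice the average wind is worth eight turbines at the calmer one.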

In the real world, there are both physical and practical limits on the calculated power of a wind turbine that together reduce the usable power by about half. The physical limitation is based on the fact that if all of the wind passing through a turbine generated power, then the wind would have no more energy. In other words, the wind would stop blowing. Since the wind does not stop, it stands to reason that only a portion of its energy can be extracted. The limit imposed by the physics of fluid flow is known as the Betz Limit after the German physicist Albert Betz who first proposed it. The Betz Limit is 59 percent. Therefore, the maximum power that could be generated from the 200 KW wind turbine would be about 120 KW. This maximum would only be achieved when the wind maintained an average speed of 20 knots, or 10 meters per second, and if the blades were effective over the entire area A. That this is not the case is reflected in the use of Cp, the power coefficient or performance coefficient. Cp varies according to the pitch angle of the blades, which are adjustable on all modern wind turbines, and on the rotational speed at the tip of the blades relative to the upstream wind speed, which varies according to blade length. A maximum Cp of 45 percent is the result of a tip speed 7 times faster than the wind speed with a 0 degree pitch angle. [14] Finally, it is necessary to account for wind variability over time in a given geographic area. The term capacity factor (CF) is used to adjust the wind energy that can be extracted relative to the nameplate or nominal KW or MW capability of the wind turbine. An economically viable wind turbine requires a CF of about 30 percent, with a maximum in near perfect conditions approaching 45 percent. [15] The bottom line is that it takes a lot of wind turbines to capture enough wind energy to power a city. Hence the wind farm.
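The successive derating factors can be combined in a short sketch. The Betz limit is expressed exactly as 16/27 (≈0.593); the Cp and CF defaults are the representative figures cited above, not the properties of any particular turbine.

```python
BETZ_LIMIT = 16 / 27  # ~0.593, maximum extractable fraction of wind energy

def average_output_kw(theoretical_kw, cp=0.45, cf=0.30):
    """Average deliverable power: the theoretical power derated by the
    performance coefficient Cp (which cannot exceed the Betz limit) and
    the capacity factor CF accounting for wind variability over time."""
    if cp > BETZ_LIMIT:
        raise ValueError("Cp cannot exceed the Betz limit")
    return theoretical_kw * cp * cf

# the 200 KW example turbine averages 200 x 0.45 x 0.30 = 27 KW
print(round(average_output_kw(200), 2))  # -> 27.0
```

The gap between 200 KW of theoretical power and 27 KW of average output is exactly why "it takes a lot of wind turbines to power a city."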

Wind installations are divided into two broad categories according to placement: onshore and offshore. Onshore wind turbines are cheaper to build and maintain, but are limited by the lower average wind speeds over land and by human nuisance factors like noise and landscape aesthetics. Offshore wind turbines take advantage of the more consistent winds over water but can only be sited in countries with suitable littoral areas and adequate capital to finance the higher construction costs. Because many of the industrialized nations of the world abut the oceans and have limited land area available, offshore wind is sometimes the better option. Growth statistics since 2005 have been impressive. According to the Global Wind Energy Council, offshore wind grew by 21 percent annually over the last ten years, bringing the total installed offshore wind power to 64.3 GW. While the United States has only 42 MW of offshore wind installed, the Inflation Reduction Act incentivized offshore wind installations, with about 50 GW of added capacity now in early planning phases. However, onshore wind is still by far the most prevalent, comprising over 90 percent of global installed wind power. [16]

Onshore wind towers followed the historical pattern of windmills of past centuries that were installed locally where needed and feasible. The benefits that accrued to populations in areas hosting them offset most pushback complaints about land use and landscape clutter. In Holland they are and were the hallmark of Dutch industry. The only technical constraint for onshore wind is adequate wind. In general, this restricts installations to rows along mountain ridges and in phalanxes on windswept plains arranged to prevent wake interference. Financial constraints depend on the cost of electricity offsetting capital intensive construction. Political constraints are largely dependent on the local perception of climate change and on financial incentives for community services and local landowners. Onshore wind, partnered with solar photovoltaics, comprises the lion’s share of the renewable energy that has burgeoned over the last decade. 440 GW of renewable energy, enough electricity for Germany and Spain, was added in 2023, 107 GW more than that added in any previous year. The total amount of renewable power globally by the end of 2024 is expected to reach 4,500 GW (4.5 TW), the amount of electricity consumed annually by the United States and China. [17] These rosy projections are certainly good news as testimony to the oft-stated goal of carbon neutrality by mid-century. However, it is not likely that continued growth at this pace will be sustainable.

World population has increased exponentially for centuries. Exponential means that it follows the progression 1-2-4-8-16 ad infinitum, doubling at every step when the base is two. The supporting world market economy has expanded in proportion to the number of people it serves, as substantiated by annual percentage increases in GNP. However, all growth is limited by inherent constraints. Globally, the earth and its resources are finite and there will eventually be a population maximum (estimated at 10 billion in 2050). Technologies like wind energy are also constrained by both physical and geographic limits. Wind turbine power went from a few kilowatts in the 19th century to 100 KW one hundred years later. The climate change impetus to improve wind turbine performance resulted in taller towers, longer blades, better generators, and lighter materials that together produced a tenfold increase to the megawatt range by the end of the 20th century. The largest wind turbines now being built are in the 5 MW range. There will be no gigawatt wind turbines, because there are constraints on the maximum power that wind turbines can produce due to height, weight, and the physics of both wind and electricity. The resultant S-shaped curve accounts for these systematic shortfalls over time. Unlike the pure mathematical exponential, this pattern is called logistic growth. [18]
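The S-shaped curve can be illustrated with the standard logistic function. The 5 MW ceiling, growth rate, and inflection year below are hypothetical values chosen purely to show the shape of the curve, not fitted data.

```python
import math

def logistic(t, ceiling, rate, midpoint):
    """Logistic growth: nearly exponential at first, then leveling off
    asymptotically as the ceiling (carrying capacity) is approached."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# hypothetical turbine-capacity curve capped at 5 MW, inflection at 2000:
# slow start, steep middle, and flattening approach to the ceiling
for year in (1980, 2000, 2020):
    print(year, round(logistic(year, 5.0, 0.15, 2000), 2))
```

At the midpoint the curve sits at exactly half the ceiling; before it, growth looks exponential, and after it each decade adds less than the last, which is the "decreasing returns to scale" described in the next paragraph.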

Growth can be exponential only in the beginning when “low hanging fruit” is harvested. This hackneyed engineering axiom refers to things that are easy to change since they are within arm’s reach (low off the ground) and are fully developed (ripe fruit). An example would be changing from steel to aluminum to reduce weight in order to build a taller tower. Improvements become harder over time until the carrying capacity is reached. This phenomenon is called “decreasing returns to scale,” characterized by an inflection or turning point where exponential growth transforms to an asymptotic approach to the maximum sustainable size (or power). Though the terms are not synonymous, the logistic S shape evokes logistics, defined as managing the details of an undertaking. The aphorism “an army marches on its stomach” refers to the need to have food supplied to it by a logistics chain, frequently called the supply line; an unfed army cannot continue to march. The phrase is attributed to Napoleon, who ironically lost most of his Grande Armée on the plains of Russia by not following his own logistical dictum. The design, manufacture, and deployment of complex technologies like wind turbines require a steady stream of materials to build them and industrial engineers to install them. From this standpoint, it may be concluded that wind turbines are at or near their megawatt limit for logistical reasons.

Wind turbines are equally, and perhaps more dramatically, limited by geographic and social constraints. Sites with the highest average winds and most favorable demographics were filled with the initial round of wind towers now in operation and generating “current” electrical statistics. The original investors were rewarded with the sustainable profit margins necessary for a market economy. Expansion to less desirable sites with less wind will change the equation. At some point, revenue from the sale of electricity is no longer sufficient to finance the capital investment needed to build the wind turbine in the first place. At the same time, higher wind turbine manufacture and installation costs accrue due to the increase in demand for critical materials with a limited supply. Supply/demand mismatch is a harbinger of inflation, the gradual rise of all costs. Financial strain began to appear in 2023. Orsted, the largest energy company in Denmark and world leader in wind energy, cancelled two major wind projects off the coast of New Jersey called Ocean Wind that would have produced 2.2 GW. According to one of the Orsted executives, “macroeconomic factors have changed dramatically over a short period of time, with high inflation, rising interest rates, and supply chain bottlenecks impacting our long term capital investments.” The decision was further justified by local opposition: residents of New Jersey’s coastal Cape May filed a lawsuit to block a tax break for the wind farm, claiming that “offshore wind development could threaten fisheries and marine mammals.” [19] With short-sightedness approaching myopia, the shoreline loss that the rising sea levels of global warming will ultimately induce, no doubt to the benefit of both fish and whales and the detriment of future generations, was not mentioned.

Onshore wind projects have technical and social challenges that are in many cases even more trenchant than those of the nearly out of sight offshore projects. Land-based wind machines straddle ridgelines and dot windswept plains in remote places―far from the industries and populations they serve. Transmission and connectivity are serious problems in continent-sized countries like the United States and Australia. A good example is the plight of the Southwest Power Pool (SPP), which manages the electricity grid across the Great Plains from the Dakotas to the Rio Grande through over 60,000 miles of transmission lines. New wind and solar generation sites seeking to hook up to the grid must wait in what is called the “interconnection queue” until a computer simulation can be run to ensure that the grid remains stable and effective. A wind energy firm in Virginia named Apex drew up plans in 2013 to install 135 wind turbines in New Mexico generating 300 MW of power and applied for connection to SPP in 2017. By the time SPP got around to running the simulation in 2022, there were dozens of queued projects totaling over 10 GW. The model showed that a new 100-mile-long high voltage transmission line would be necessary to accommodate the disruption, at a cost of over $1B to be paid by the projects seeking grid admission. With a bill of over $250M, the Apex project was no longer financially viable and was cancelled. The Federal Energy Regulatory Commission (FERC) that requires the simulation testing is working to ameliorate the situation, but the inherent variability of wind and solar power on an otherwise continuous and necessarily stable power grid must be taken into account lest blackouts prevail. [20]

The 26th United Nations Conference of Parties, or COP26, held in Glasgow in November of 2021, established a global benchmark of reducing net carbon emissions by 50 percent by 2030. The lion’s share of the emissions (69%-89% depending on the model) are from power generation and transportation. In order to meet the 2030 COP goals, the power sector will need to reduce carbon dioxide emissions by over 50%. Meeting this threshold will require the elimination of all coal power plants and an increase in solar and wind power of about five times the growth levels of the last decade―nothing less than exponential will do. Similarly, the transportation sector will require an increase in electric vehicles (EVs) from 4% in 2021 to an average of 67% in 2030, placing even more strain on the grid. A continuation of current US public policy will produce only 6% to 28% of the needed net carbon emission reduction by 2030. [21] It is not unreasonable to suggest that the goal cannot be reached using only wind and solar power, which must be used when generated or stored in as-yet nonexistent long term energy storage repositories (like batteries). A stable source of electricity is needed to sustain the grid. Fusion will never be ready in time. “Politicians need to tell voters that their desires for an energy transition that eschews both fossil fuels and nuclear power is a dangerous illusion.” [22] The fate of Earth as human habitat is at stake.

References:   

1. Fovell, R. Professor of Atmospheric Science, UCLA, Meteorology: An Introduction to the Wonders of the Weather. The Teaching Company, 2010.  

2. Capper, D. Commander, Royal Navy, “Sails and Sailing Ships” Encyclopedia Britannica Macropedia, William Benton, Chicago, 1972, Volume 16, pp 157-163.

3. Whitelaw, I, A Measure of All Things, St. Martin’s Press, New York, 2007, pp 30, 101.

4. Plokhy, S. The Gates of Europe, A History of Ukraine, Revised Edition, Basic Books, New York, 2021, pp 74-76.

5. Diamond, J. Guns, Germs, and Steel, W. W. Norton and Company, New York, 1997, pp 157-175, 355.

6. https://whc.unesco.org/en/tentativelists/6192  

7. Wailes, R. “Windmills” Encyclopedia Britannica Macropedia, William Benton, Chicago, 1972, Volume 19, pp 861-862.

8. “Mr. Brush’s Windmill Dynamo”, Scientific American, New York Volume 63 Number 26, December 20, 1890.

9. The History of Modern Wind Power (Danish with English translation)    http://xn--drmstrre-64ad.dk/wp-content/wind/miller/windpower%20web/da/pictures/index.htm

10. https://www.tvindkraft.dk/stories/wind-and-the-environmental-crisis-windmill-denmark/#

11.  https://www.windsofchange.dk/WOC-usastat.php

12. Wind Turbine Projects – Current Development Projects – Policies & Plans Under Consideration – Planning – Community Development Agency – Alameda County (acgov.org)

13. Climate Change 2001 Synthesis Report, Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, UK. 2001.

14.  Aliprantis, D. Fundamentals of Wind Energy Conversion for Electrical Engineers,  Purdue University School of Electrical and Computer Engineering, 2014   https://engineering.purdue.edu/~dionysis/EE452/Lab9/Wind_Energy_Conversion.pdf  

15. Kalmikov, A. Wind Power Fundamentals, Department of Earth, Planetary, and Atmospheric Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, 2013. http://web.mit.edu/wepa/WindPowerFundamentals.A.Kalmikov.2017.pdf

16. Global Wind Energy Council, Global Offshore Wind Report 2023 https://gwec.net/global-wind-report-2022/

17. International Energy Agency (IEA) Renewable Energy Market Update, Outlook for 2023 and 2024. https://www.iea.org/energy-system/renewables/wind

18. Smil, V. Growth, The MIT Press, Cambridge, Massachusetts, 2019, pp 20-21, 181-184.

19. Puko, T. “Demise of N.J. wind projects imperils Biden’s offshore agenda” The Washington Post,  2 November 2023.

20. Charles, D. “Off the Grid” Science, Volume 32, Issue 6662, 8 September 2023 pp 1042-1045

21. Bistline, J. et al “Actions for reducing US emissions at least 50% by 2030” Science, Volume 376, Issue 6596, 27 May 2022, p 922.

22. “Power Struggle” The Economist, June 25-July 1, 2022, p 11.

Opossum

Common Name: Opossum, Possum, Common opossum, Virginia opossum – The name is of Native American provenance and is from the Algonquian word apasum, which translates as “white animal.” The coarse fur ranges from white in the northern reaches of its range to almost black in warmer regions.

Scientific Name:  Didelphis virginiana – The generic name is from the Greek di meaning “two” and delphys meaning “womb.”  The opossum is a marsupial with a pouch that is used to nurture the young after gestation; it is metaphorically a “second womb.” An alternative etymology for the genus is the fact that the female opossum has two uteruses. The species name refers to its original identification and description in the colony of Virginia by European naturalists defining the taxonomy of North American fauna.

Potpourri: Opossums are the only marsupial mammals in North America; there are none in Eurasia.  They are in a sense living fossils, transitional forms that link the more primitive egg-laying monotreme mammals to those with a placenta. The global geographic dispersion of marsupials is a biological affirmation of continental drift and its plate tectonic motive force. They predominate in the Australian archipelago with their monotreme brethren and are well established in South America. Opossums arrived in North America as relatively new immigrants across the Panamanian isthmus when the two continents became conjoined about 2.8 million years ago; only yesterday in geological time. That they thrived amid the dominant placental mammals is testimony to their resilience and adaptation. This is especially noteworthy considering that they lack any form of plates or spines to protect their scrawny bodies and possess neither fangs nor claws to repel an attacker. Their most effective defense is to feign death, giving rise to the idiom “playing possum.”

Evolution rarely if ever proceeds in a straight line―it is a record of the past but not a plan for the future.  Random mutations yield periodic variations on the original theme; only a precious few result in an intelligent design that endures. Nature is the judge and every other living thing is the jury as newness strives for an unoccupied niche. The original mammal combined three key attributes that together resulted in permutation and dominance: warm blood; milk glands; and body hair. The last common ancestor of mammals and reptiles lolled about in muddy wetlands about 300 million years ago. It took 100 million years to make a mammal and another 20 million years to make a monotreme, the most primitive of extant mammals. Monotreme means ‘one-hole’ as they have a single cloaca, the external passage for intestinal, reproductive and urinary purposes, a trait shared with birds and reptiles. Cloaca means ‘sewer’ in Latin. The scant fossil record indicates that there was a proto-monotreme named Australosphenida that spread out over the continents that originally formed Gondwana, the southern portion of Pangaea. [1]

Only five monotreme species survived the scourge of extinctions caused by competitors, environmental changes, and cataclysms. The inimitable duck-billed platypus and four echidnas or spiny anteaters are the only surviving representatives. These animals stymied taxonomists for more than a century due to uncertainty about lactation and reproduction; they lack prominent nipples and they produce offspring infrequently and in seclusion. When the first platypus was sent to England in 1799, it was thought to be a hoax due to the improbable combination of a bird-like beak, the four feet of a quadruped, and a fishy aquatic habitat. The lactation issue was resolved by a British Army lieutenant in New South Wales named Maule, who skinned his pet female platypus after she was accidentally killed and noted milk emanating directly from her abdomen. When this report reached London, the testimonial of an army officer was considered unimpeachable and the platypus was declared mammalian in 1831. The reproductive question was not resolved until 1884, by which time a British zoologist named Caldwell had killed over 1,400 monotremes trying to solve the riddle of reproduction. His brutal quest ended when he came across a female in the process of laying eggs. [2] The monotreme was confirmed as mammalian.

The importance of monotremes was noted by Darwin during his brief sojourn in Australia in 1836 during the seminal circumnavigation of HMS Beagle: “I had the good fortune to see several of the famous Platypus or Ornithorhyncus paradoxicus; certainly, it is a most extraordinary animal … A little time before this, I had been lying on a sunny bank and was reflecting on the strange character of the animals of this country as compared to the rest of the world.” [3] Some years later, he wrote “We here and there see a thin straggling branch springing from a fork low down on a tree, and which by some chance has been favored and is still alive on its summit.” [4] The platypus was pivotal to Darwin’s tree branch conceptualization of evolution. He was, in fact, consistent in his perspicacity throughout, as the dearth of fossils indicate that monotremes are, indeed, a straggling branch.  They are the egg-laying mammals that evolved from the reptiles as lactating nurturers, setting the stage for the successor marsupials, the pouched mammals; marsupium is Latin for pocket.

Marsupial mammals are considerably more common than monotremes, with about 330 species almost exclusively in Australia, which has four orders, and South America, which has three. These groupings are sometimes referred to as Australidelphia and Ameridelphia. Marsupials share a single common ancestor with placental mammals after the branching of the monotremes. To emphasize these relationships, the older monotremes are designated Prototheria, the more recent mammals Theria, marsupials Metatheria, and placentals Eutheria. The range and diversity of marsupial mammals offers one of the most compelling arguments for the veracity of Darwin’s epiphany about the origin of speciation. Evolution is only a theory according to scientific rules that require experimental validation; the clock can be set neither backward to precursors nor sped up from geologic to human time to prove it. A great many marsupials independently adapted over time to have the same body forms as placentals, a phenomenon called convergent evolution. The Tasmanian Devil is a pouched carnivore; kangaroos, koalas, and wombats are pouched herbivores. There are marsupial moles and anteaters in addition to Thylacine, an extinct marsupial dog, and Thylacoleo, an extinct marsupial lion. The independent evolution of sophisticated pouched mammals upends the widely held view that placental mammals are superior to marsupials. They are in reality a separate and essentially equal branch of the family tree with all of the necessary attributes to establish them as members in good standing of class Mammalia. [5]

The fossil record of the marsupial Metatheria provides a geographic date stamp for the breakup of Pangea, punctuated by the Cretaceous-Paleogene (KPg) extinction event 66 million years ago. The oldest marsupial fossil found so far is from northeastern Eurasia in 125 million year old sedimentary rocks. The first Eutherian/placental fossil is “only” 35 million years older. All of the other known Metatherian fossils from the Mesozoic Era are from the northern continents of Laurasia; there are none in South America or Australia. In that this is their present habitat, mass migration must have occurred. The oldest marsupial mammal fossil in South America dates to 64 million years ago, indicating that it had crossed from North America when the two were connected for some time by a land bridge; the KPg extinction event must therefore have extirpated the northern non-migratory marsupial contingent while the Eutherians prevailed. The current theory is that marsupials from South America crossed to Australia via Antarctica, when they were contiguous at temperate latitudes from about 55 to 35 million years ago; 45-million-year-old marsupial fossils have been found off the coast of Antarctica. As the only mammals in Australia, the Metatheria flourished as the climate changed from rainforest to open woodlands during the Miocene Epoch, producing marsupial megafauna in parallel with the placental megafauna of North America. Ten-foot kangaroos bounded about with giant koalas and a huge rhinoceros-like marsupial. The larger marsupials became extinct shortly after the arrival of the Aborigines about 40,000 years ago, the same fate that befell their placental brethren coincident with the arrival of the Native Americans 12,000 years ago. While predation by human hunters played a role in these extinctions, it was mostly a matter of environmental, climate-related habitat changes. The smaller marsupials, like the opossum, survived and thrived, and one species moved north to become the Virginia opossum. [6]

Opossum is from the Powhatan dialect of the Algonquian language group, a variation of oposoum meaning “white animal”; its coarse fur ranges from white in the northern reaches of its range to almost black further south. Captain John Smith, one of the founders of the Virginia colony at Jamestown in 1607, described an animal that “hath an head like a swine … tail like a rat … and the bigness of a cat” in a compiled list of Native American words. [7] It was officially classified as Didelphis virginiana, from the Greek di meaning “two” and delphys meaning “womb,” as the females have two uteri, a trait shared with all marsupials, with Virginia as the locale of first sighting. While not as chimerical as the platypus, opossums do not lack distinction. Their pointed white faces and piercing, beady black eyes appear ghostly and ghoulish, particularly after dark when such things are imagined. The reverie is not diminished by the demonic prehensile tail that extends for half its body length. The caudal appendage is simian in form and function, adapted to grasping branches for balance and leverage in establishing tree cavity nests. Arboreal acrobatics are further enhanced by opposable thumbs on their hind feet, an attribute they share only with primates and a very few other species, mostly marsupial mammals. The satanic image is completed by a thoroughly fanged jaw that smiles with reptilian menace, displaying fifty teeth, more than any other North American mammal. For the Europeans who first came to the Americas in the sixteenth century, the opossum was sui generis and of consequent great interest.

Vicente Pinzón, the commander of Christopher Columbus’s ship Niña, brought an opossum back to the Spanish regents Ferdinand and Isabella, describing it as a “monster” with the “hinder of a monkey, the feet like a man’s, with ears like an owl; under whose belly hung a great bag, in which it carried the young.” As the Europeans had never seen a marsupial (there are none in Eurasia or Africa), the opossum came to epitomize the exotic fauna of the Americas. [8] The German cartographer Martin Waldseemüller, renowned for assigning the name America on a 1507 world map drawn from information gathered on the voyages of Amerigo Vespucci, included the opossum in a later woodcut as an evocative symbol of the New World. Sixteenth-century engravings of the opossum, such as Étienne Delaune’s “America,” depicted the peculiar animal with sharp fangs and exaggerated claws. Over the course of the next century, the myriad novel plants and animals of the New World inspired the nascent science of comparative anatomy and, ultimately, the evolutionary ideas of Darwin. The opossum was transmogrified from monster to mammal by Edward Tyson of the Royal Society, who described the anatomy of the female opossum in a treatise in 1698. He correctly surmised that the “feet like a man’s” were for grasping and the “great bag for the young” was a manifestation of maternal care. [9] The opossum was redeemed.

That the Virginia opossum is the only marsupial to thrive in North America is testimony to its synanthropic nature, thriving in and around human habitation. Consequent to their omnivorous adaptability, fecund reproduction, and creative, steadfast defenses against predators, they proliferate. Opossums can and will eat almost anything that is organic, including but not limited to insects, snails, small mammals, fruit, eggs, fledgling birds, and, on occasion, cultivated crops. Those who choose to leave pet food or any other scraps in accessible areas will soon attract the attention of opossums. They make their home nests as a sequestered sanctuary for their brood in the hollows of trees, taking advantage of prehensile tails and grasping rear feet to navigate the arboreal habitat. The female opossum is fertile at the age of six months and can have two litters every year; the gestation period is only about two weeks. The young are not born in the pouch but instinctively crawl there from the uterus; this is no mean task as they are blind, furless, and bee-size, weighing 100 milligrams. Those that make it seek out one of 13 nipples (twelve in a circle around one in the center) where they are nurtured for several months. Although senescence is rapid (the average life span of an opossum is only about 3 years), the population is adequately replenished by the number of joeys and the nurturing nature of the species. [10]

Opossums are peerless masters of pantomime, the idiom “playing possum” a linguistic testimonial. They must be, as they lack innate physical defenses like porcupine spines, armadillo shells, or bobcat claws, are not very fast, and do not burrow in hidden dens. When cornered, an opossum will play dead so realistically as to dissuade even the most determined predators. The mimicry is quite convincing: stiffened in feigned rigor mortis, teeth bared in the throes of death, with the putrid smell of a cadaver from mephitic fluid emitted by glands near the anus, they look, feel, and smell like a dead animal. They stoically endure bashing, scratching, and biting, remaining mute and motionless until the ruse prevails and the assailant retires. Healed scars are the only evidence of a protracted struggle. A feigned-rabies defense is used to deter aggression from lesser threats: the symptoms of rabies are simulated by secreting excessive saliva at the corners of the mouth with the lips drawn back, baring the full suite of sharp teeth. In addition to overt defenses, opossums have the covert protection of an immune system that evolved resistance to the venomous bite of pit vipers, a property first discovered and patented in 1996. Due to the small number of snakebite victims in the United States and the expense of synthesis, there was no incentive at that time to exploit opossum-based antivenom. With the advent of biomedical engineering, E. coli bacteria were modified to produce the necessary opossum peptides at a price affordable enough for potential use in India, where there are over 100,000 snakebite deaths every year. [11] Opossums have succeeded against all odds with a combination of chemistry and comic opera, surviving as the fittest.

Like the bison and the turkey, the opossum is an iconic native North American animal. It embodies the spirit of the continent as a home of immigrants, including those of Native American heritage whose Asian genetic lineage is only some 12,000 years removed, almost yesterday in the grand swath of earth-time. The opossum is an equally itinerant immigrant as the only marsupial amid a menagerie of competitive placental mammals with equal need to eat and reproduce. The ‘possum’ is central to the cultural cuisine of Appalachian and Ozark hill country, where it is hunted as a game animal and consumed as a choice entree according to recipes that have endured for generations. In January of 1909, President-elect William Howard Taft was served an eighteen-pound possum for dinner while visiting Georgia and was quoted in a New York Times article as having remarked “Well, I like possum, I ate very heartily of it last night.” Numerous live opossums were subsequently sent to the White House by his southern constituents in solidarity. A stuffed “Billy Possum” was created as an alternative to Roosevelt’s established “Teddy Bear,” which had also been occasioned by a newspaper article. However, cuddly is a misnomer for the rat-like, sneering possum, and sales failed to meet expectations. [12] Walt Kelly’s Pogo is perhaps the most notable cultural testimony to the opossum; the kindly denizen of the Okefenokee Swamp famously said, “we have met the enemy and he is us” in the poster Kelly made for the first Earth Day in April 1970.

References:

  1. Weisbecker, V. and Beck, R. Marsupial and Monotreme Evolution and Biogeography. Nova Science Publishers, 2015
  2. Drew, L. I, Mammal: The Story of What Makes Us Mammals, Bloomsbury Sigma, London, 2017, pp 41-60.
  3. Keynes, R. D. ed. Charles Darwin’s Beagle Diary. Cambridge University Press. 1988.
  4. Darwin, C. Journal of the Researches into the Natural History and Geology of the Countries Visited during the voyages of HMS Beagle Round the World, John Murray, London 1845.
  5. Weisbecker, V. and Beck, R. Op. cit.
  6. Ibid.
  7. Mithun, M. The Languages of Native North America. Cambridge University Press. 2001 p. 332
  8. https://www.motherearthnews.com/nature-and-environment/opossum-facts-behavior-and-habitat-zmaz03aszgoe
  9. Tyson, E. “Carigueya Seu Marsupiale Americanum, or The Anatomy of an Opossum, Dissected at Gresham-College by Edw. Tyson, M. D., Fellow of the College of Physicians, and of the Royal Society, and Reader of Anatomy at the Chyrurgeons-Hall in London,” Philosophical Transactions of the Royal Society April 1698 no. 239 p 102 
  10. https://opossumsocietyus.org/general-opossum-information/opossum-reproduction-lifecycle/
  11. Davenport, M. “Opossum Compounds Isolated to Help Make Antivenom,” Scientific American, March 2015.
  12. Fuller, J. “Possums and Politicians? It’s Complicated.” Washington Post, 24 Sep 2014.

Stinkhorns

Common Name: Stinkhorn, Carrion fungus – Stink can mean either emitting a strong, offensive odor or, ethically, being offensive to morality or good taste. Both interpretations apply according to the context herein. Horn is a foundational word of the Indo-European languages that refers to the bony protuberances that adorn the heads of many ungulates like deer. Since the horn is also associated with supernatural beings like the devil, that connotation may well have been the original intent here. Devil’s dipstick, an idiomatic name for some species of stinkhorn, suggests this interpretation.

Scientific Name: Phallaceae – Phallus is the Greek word for penis. There can be no doubt that the family name was selected due to verisimilitude, the remarkable resemblance of the stinkhorn to male, mammalian, and notably human anatomy.

Potpourri:  Stinkhorns are a contradiction in terms. For some they are the most execrable of all fungi and for others they are elegant, one species even being so named (see Mutinus elegans). They range in size and shape from the very embodiment of an erect male canine (M. caninus, named for its resemblance to a dog penis) or human penis (like Phallus ravenelii in the above photograph) to colorful raylike extensions reaching outward and upward like a beckoning, stinking squid (picture at right). In every case they are testimony to the creativity of the natural forces of evolution, seeking new ways to survive the rigors of competition. Like the orchids that extend in intricate folds and colors of “intelligent design” to attract one particular insect to carry out pollinator duties, stinkhorns have become “endless forms most beautiful and most wonderful” whose improbability leads to evolution as the only credible explanation. [1] The priapic and tentacled extensions can only have been the result of successful propagation for the survival of the species, just like Homo erectus.

The phallic appearance of some stinkhorns is not as outré as it seems at first blush. The priapic shaft elevates spores to promote dissemination. Like a fungal Occam’s razor, stinkhorns evolved the simplest solution―growth straight upward with no side branches, placing the spore gleba at the bulbous apex. The fungus accomplishes this in a manner similar to humans, using water pressure to hold the shaft erect in lieu of blood pressure; hydrostatic versus hemostatic. The phenomenon is part of the fungal life cycle that starts in the mycelium, the underground tangled mass of threadlike hyphae that is the “real fungus.” The stinkhorn starts in the mycelium as an egg-shaped structure called a primordium containing the erectable shaft surrounded by spore-laden gleba held firmly in place with jellied filler cloaked with a cuticle. It is the fruiting body of the fungus. When environmental conditions dictate, the “egg hatches,” and the water-pressurized shaft grows outward and upward, lubricated by the jelly, at a rate of about five inches an hour until it reaches fly-over country. Here the biochemistry of smells, including hydrogen sulfide (rotten eggs), mercaptan (rotting cabbage), and some unique compounds aptly named phallic acids, draws flies from near and far. In ideal conditions, the slime and spores will all be gone in a few hours, and the bare-headed implement of reproduction will soon become flaccid.

Stinkhorns belong to a diverse and now obsolescent group of fungi called Gasteromycetes, from gaster, Greek for “belly,” and mykes, Greek for “fungus.” With the translated common name stomach fungi, they are characterized by the enclosure of their spores inside an egg-shaped mass called a gleba (Latin for “clod”). Hymenomycetes alternatively have their spores arrayed along a surface called a hymenium (Greek for “membrane”) and are by far the larger grouping. The hymenium surface can take the form of gills or pores on the underside of mushroom caps or any of a wide range of other shapes, from the fingers of coral fungi to the cups of tree ear fungi. The Gasteromycetes include puffballs and bird’s nest fungi. [2] In the former, the ball of the puffball is the gleba. On aging, a hole called an operculum forms at the top so that the spores can be ejected (puffed) by the action of falling raindrops for wind dispersal. Each “egg” in the bird’s nest is a gleba and is also forced out by the action of falling rain. The projectile gleba affixes to an adjacent surface from which spores are then also air dispersed. Stinkhorns followed a completely different evolutionary path to distribute the spores from the gleba: they attract insects to the stink at the top of the horn.

Flowering plants called Angiosperms are ubiquitous, successful in their partnership with many insects to carry out the crucial task of pollination. While this is primarily a matter of attracting bees and bugs with colorful floral displays and tantalizing scents promising nectar rewards, there are odoriferous variants. Skunk cabbage earned its descriptive name from the fetid aroma that attracts pollinating flies to jumpstart spring with winter’s snow still on the ground. Another member of the Arum Family, the cuckoopint, attracts flies with its smell and then entraps them with a slippery, cup-shaped structure embedded with downward pointing spines, releasing them only at night after they are coated with pollen to then transport. Stinkhorns produce a malodorous gelatinous slime containing their reproductive spores to which some insects, mostly flies, are drawn. It is not clearly established whether the flies eat the goo and later defecate the spores with their frass [3] or whether they are only attracted by the smell, perform a cursory inspection, and then fly off with spores that “adhere to the bodies of the insects and are dispersed by them.” [4] Some insight can be gained from entomology, the study of insects. Do they eat the slime or do they merely wallow in it?

The primary insects attracted to stinkhorn fungi are the blow flies of the Calliphoridae Family and the flesh flies of the Sarcophagidae Family. The term blow fly has an interesting etymology that originates with their characteristic trait of laying eggs on meat that hatch into maggots, the common name for fly larvae. Any piece of meat left uncovered in the open long enough for the fly eggs to hatch was once called flyblown, which gradually took on the general meaning of anything tainted. The reversal of the festering meat term gave rise to the term blow fly for its cause. As a purposeful digression, wounded soldiers in the First World War left unattended for hours on the battlefield were sometimes found to be free of the infections that plagued those treated immediately because the blow fly maggots consumed their necrotic tissue. It is now established that the maggots also secrete a wound-healing chemical called allantoin (probably to ward off competing bacteria), and they are sometimes intentionally used to treat difficult infections. Flesh flies, as the family name suggests (Sarcophagidae means flesh eating in Greek), are also drawn to carrion to lay eggs for their larvae to eat. [5] If blow flies and flesh flies are attracted to stinkhorns due to the smell of rotting meat, they would presumably lay eggs. So the conundrum is: what happened to the maggots? While eggs could hatch in a few days and larvae would feed for a week or two, stinkhorns last for only several days, their slime removed in half that time.

Field experiments have verified that stinkhorn fungal spores are indeed ingested by flies. Drosophila, the celebrated fruit fly of early genetic studies, had over 200,000 stinkhorn spores in their intestines when dissected in an experiment. Given the volume available in a fruit fly gut, this quantity adds some perspective to the vanishingly small size of spores. The larger blow flies were found to contain more than a million and a half spores in a similar field evaluation. It was further demonstrated that spores passing through insects and defecated in their frass were fully functional. [6] This is not too surprising, as spores evolved for survival under hot, cold, or desiccated environmental extremes; the fly gut is relatively benign by comparison. It is true, then, that flies eat spore-bearing material. It is equally evident that there are no maggots in stinkhorn slime, even though laying them is what the average blow fly does when offered smelly meat. Diversity provides a reasonable basis for this contradiction. There are over 1,000 species of blow fly, each to some extent seeking survival within a narrow niche. Flies of the order Diptera are noted for their propensity to mutate and adapt. Some species of blow fly and flesh fly deviated from the norm to consume stinkhorn slime for nutritional energy and lay eggs elsewhere. The stinkhorn and the flies it attracts are an example of mutualism. Flies are attracted to and gain nutrition from what is essentially a fungal meat substitute, and the fungus gains spore dispersion. Many fungi are excellent sources of protein, containing all eight essential amino acids needed by humans. Flies need protein too.

The startling, trompe l’oeil appearance of a penis in the middle of a forest no doubt attracted humans as soon as there were humans to attract. The first written account of stinkhorns is Pliny the Elder’s Natural History, written in the first century CE based on observations made on his military travels throughout the Mediterranean basin. John Gerard’s sixteenth century Herball identifies the stinkhorn as Fungus virilis penis arecti forma, “which wee English call Pricke Mushrum, taken from his forme.” [7] The bawdiness of Shakespeare’s rude mechanicals gave way to Victorian Age corsets and high collars where there was no place for a “prick mushroom.” Charles Darwin’s daughter is credited with the ultimate act of puritan righteousness. Ranging about the local woods “armed with a basket and a pointed stick” she sought the stinkhorn “her nostrils twitching.” On sighting one she would “fall upon her victim, and then poke his putrid carcass into her basket.” The day ended ceremoniously with the day’s catch “brought back and burnt in the deepest secrecy on the drawing room fire with the door locked because of the morals of the maids.” [8] As the modern era loomed and sexuality came out of the bedroom onto the dance floors of the roaring twenties, stinkhorns regained respectability.

The Doctrine of Signatures was the widely held belief that God intentionally marked/signed all living things to help humans determine how best to exploit them. To those who subscribed to this philosophy, a penis shape could only mean good for sexuality, which in the rarefied view of the pious could refer only to procreation. Eating stinkhorns undoubtedly arose as either a way to enhance virility or as an aphrodisiac, and probably both. Dr. Krokowski, in Thomas Mann’s The Magic Mountain, lectures about a mushroom “which in its form is suggestive of love, in its odour (sic) of death.” [9] The dichotomy of budding love and the stench of death leaves a lot of room for speculation across the middle ground. Stinkhorn potions have been proffered as a cure for everything from gout to ulcers and proposed as both a cure for cancer and the cause of it. [10] There is insufficient research to conclude that any of this is true.

Stinkhorns as food, from both a nutritional and a gustatory standpoint, are at the fringes of the brave new world of mycophagy, fungus eating. Food is a matter of culture that extends from the consumption of frog legs in France to the mephitic surströmming of Sweden. Mushrooms have been on the menu for centuries, from the shiitake logs of Japan in Asia to the champignons of Parisian caverns in Europe, but almost everything else was considered a toadstool. From the strictly aesthetic standpoint, the consumption of the stinkhorn “egg,” dug up before it has a chance to become a smelly phallus, has some appeal. Charles McIlvaine, the doyen of mycophagists whose goal at the dawn of the last century was to make the public aware of the “lusciousness and food value” of fungi, describes stinkhorn eggs as “bubbles of some thick substance … that are very good when fried.” His conclusion is that “they demand to be eaten at this time, if at any.” [11] Of more recent note, Dr. Elio Schaechter wrote that sautéing stinkhorn eggs in oil resulted in “a flavorful dish with a subtle, radish-like flavor. The part of the egg destined to become the stem was particularly crunchy, resembling pulled rice cakes.” [12] I am reminded of a Monty Python episode in which Terry Jones is upbraided for selling chocolate-covered frogs made from real frogs, the bones being necessary to give the confection a proper crunch.

Netted Stinkhorn

Not all members of the Stinkhorn family look like a penis. Some have lacey shrouds that extend downward from the tip like a hoop skirt with a hint of femininity. These scaffolds are not for decoration but for scaling. Since it has been established that each stinkhorn species is in partnership with some form of gleba-eating insect, the rope ladder can only be there to allow crawling bugs like carrion beetles to climb to the top to access the sporulated slime. The local species is Dictyophora duplicata (net-bearing, growing in pairs), commonly known as netted stinkhorn or wood witch. After the bugs have finished with their slime meal, the result reminds some of a bleached morel. While netted stinkhorns are relatively rare in North America, they are abundant in Asia.

Bamboo Fungus

The netted stinkhorn called Zhu Sun, meaning bamboo fungus for its native habitat, is one of the most sought-after delicacies of Chinese cuisine. It featured prominently in banquets of historical importance, including the visit of U. S. Secretary of State Henry Kissinger to China in 1971 to reinstate diplomatic relations during the Nixon administration. Kissinger reputedly praised the meal for its quality, but it was never clear whether this was a matter of diplomacy or taste. Part of the bamboo stinkhorn’s esteem stems from its health benefits according to ancient Chinese medicine. Recent research has confirmed that consumption correlates to lower blood pressure, decreased blood cholesterol, and reduced body fat. In the 1970s the price of bamboo fungus was over $700 per kilogram, but commercial cultivation methods were subsequently developed, driving the price down to less than $20 per kilogram. [13] It can be found in many Asian markets. The back of the package depicted above offers that “bamboo fungus is a magical fungus. It grows in a special environment, free from pollution. Once mature, it emits a notable light fragrance. Its shape is light weight. Its flavor is delicious. Its texture is silky. It is very nutritious. It is an ideal natural food.” Kissinger may or may not agree.

References

1. Darwin, C. On the Origin of Species, Easton Press, Norwalk, Connecticut, 1976 (original London 24 November 1859). P. 45.

2. Kendrick, B. The Fifth Kingdom, 3rd Edition, Focus Publishing, Newburyport, Massachusetts, 2000. pp 98-101.

3. Wickler, W. “Mimicry,” Encyclopaedia Britannica Macropaedia, 15th Edition, William Benton Publisher, Chicago, 1974, Volume 12, p 218.

4. Lincoff, G. National Audubon Society Field Guide to North American Mushrooms, Alfred A. Knopf, New York, 1981, pp 831-835.

5. Marshall, S. Insects, Their Natural History and Diversity, Firefly Books, Buffalo, New York, 2006, pp 407-408, 481-484.

6. O’Kennon, B. et al, “Observations of the stinkhorn Lysurus mokusin and other fungi found on the BRIT campus in Texas” Fungi, Volume 13, Number 3, pp 41-48.

7. Money, N. Mr. Bloomfield’s Orchard, Oxford University Press, New York, 2002, pp 1-8.

8. Raverat, G. Period Piece: A Cambridge Childhood, Faber and Faber, London, 1960, p 136.

9. Mann, T. The Magic Mountain, translated by John E. Woods, Alfred A. Knopf, New York, 1927, p 364.

10. Arora, D. Mushrooms Demystified, 2nd Edition, Ten Speed Press, Berkeley, California, 1986, pp 766-778.

11. McIlvaine, C. and Macadam, K. One Thousand American Fungi, Dover Publications, New York, 1973 (originally published in 1900 by Bowen-Merrill Company), pp xiii, 568-576.

12. Schaechter, E. In the Company of Mushrooms, Harvard University Press, Cambridge, Massachusetts, 1997, pp 168-173

13. Chang, S. and Miles, P. “Dictyophora, formerly for the few,” in Mushrooms: Cultivation, Nutritional Value, Medicinal Effect, and Environmental Impact, 2nd Edition, CRC Press, Boca Raton, Florida, 2004, pp 343-355.

Wineberry

Common Name: Wineberry, Wine raspberry, Japanese wineberry, Purple-leaved blackberry, Hairy bramble – Wine is the color of the tiny hairs that cover the stem and carpels, a dark red similar to that attributed to red/burgundy grapes. Berry is a general term applied to any small fruit. It originally derived from the Gothic word weinabasi, a type of grape, evolving to the Old English berie. Berry is one of only two native words for fruit, referring to anything that was like a grape. The other is apple, given to larger, pome-like fruits. Weinabasi → Wineberry.

Scientific Name: Rubus phoenicolasius – Rubus is Latin for “bramble-bush,” of which blackberry is the best known of the many types of prickly shrubs that comprise the genus. The species name means purple colored. [1] The Greek word for the color purple is phoinik, which was also the origin of Phoenicia, the ancient land on the eastern Mediterranean Sea coast, present day Lebanon. This littoral area was the source of sea snails from which a very valuable purple dye was extracted. Clothing dyed purple was thus a symbol of wealth and prestige, the term “royal purple” a vestige of its importance. Before the advent of chemical dyes in the early 20th century, colors could only be sourced naturally, like blue from indigo.

Potpourri: Wineberry would not make a very good wine and it isn’t really a berry. The first wines were naturally fermented thousands of years ago, absent any knowledge of the pivotal role of yeast. The sugars in fruit were the food source for local wild yeasts that gave off alcohol as a byproduct of their metabolism. Grapes are the only common and prolific fruits that have enough natural sugar to produce the “weinabasi” libation discovered by fortuitous accident. Wineberries, like all of the other fruits from which wines might be made, must be supplemented with extra sugar (a process called chaptalization) to feed the yeast fungus. Wineberry wine, albeit with a tart berry-like taste, would be a far cry from the rich flavor that the best French terroir can impart. A berry is a fruit with seeds embedded in the pulpy flesh, like grapes, watermelons, and tomatoes. Wineberry, like all brambles that comprise the genus Rubus, notably blackberry and raspberry, is an aggregate fruit with a multitude of tiny, clumped “berries”. One could presumably refer to one wineberry fruit as wineberries. Regardless of its unlikely name, wineberry has spread far and wide, becoming a nuisance to the point of being classified as an invasive species in the Appalachian Mountain and coastal regions of the Mid-Atlantic states, including Maryland and Virginia. [2]

Wineberries are native to central Asia, extending eastward to the Japanese archipelago. They were intentionally introduced into North America by horticulturalists in the 1890s to hybridize with native Rubus plants, the goal being to improve on nature’s accomplishment with new cultivars offering a greater yield of bigger berries and/or resistance to plant diseases and pests. [3] The compelling rationale for new edible crops at this point in time was that world population had surpassed one billion, eliciting the global food shortage concerns first raised by Thomas Malthus one hundred years earlier. The eponymous Malthusian principle that population rises geometrically (1, 2, 4, 8 …) while agriculture rises only arithmetically (1, 2, 3, 4 …), leading to inevitable famine, was the impetus for improvements in agricultural products and methods. The first Agricultural Experimental Station in the United States was inaugurated in New York in 1880 with the express purpose of addressing this challenge. Its director E. Lewis Sturtevant established the precept of conducting experimental agriculture to develop new plant foods. By 1887, with 1,113 cultivated plants and another 4,447 plants with edible parts, research focus shifted to developing fruit varieties. The bramble fruits of the genus Rubus, with about 60 known species and a well-established penchant for hybridization, were considered good candidates for experimentation. Wineberries from Asia became part of the mix. [4] As it turned out, the first green revolution of manufactured fertilizer using the Haber-Bosch process (see Nitrogen Article) and the second green revolution internationalizing Norman Borlaug’s high yield wheat put off the impending Malthusian famine, at least so far. There is every reason for Rubus breeding to continue. [5]
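The Malthusian arithmetic is easy to make concrete. As a purely illustrative sketch (the starting values, doubling rate, and increment below are arbitrary assumptions for demonstration, not Malthus’s own figures), a few lines of code show how quickly a geometrically growing population outruns an arithmetically growing food supply:

```python
# Illustrative only: arbitrary starting values, not historical data.
# Population doubles each generation (geometric: 1, 2, 4, 8, ...);
# food supply grows by a fixed increment (arithmetic: 1, 2, 3, 4, ...).
def generations_until_shortfall(population=1, food=1, doubling=2, increment=1):
    """Count generations until population first exceeds the food supply."""
    generation = 0
    while population <= food:
        population *= doubling   # geometric growth
        food += increment        # arithmetic growth
        generation += 1
    return generation

print(generations_until_shortfall())                    # → 2 with these defaults
print(generations_until_shortfall(population=1, food=100))  # → 7
```

With both series starting at 1, the shortfall arrives in only two generations; even granting food a hundredfold head start delays the crossover by just a handful of doublings, which is the crux of Malthus’s argument.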

Wineberries nearly ripe beneath sepal husks

Bramble plants of the genus Rubus are so successful at dominating disturbed habitats that bramble has become a byword for any dense tangle of prickliness. Wineberry is only a problem because it is better at “brambling” than many other species, even though the stalks are covered with wine-colored hairs and have no prickles. It spreads both vegetatively with underground roots and with seeds spread in the feces of frugivores, animals that eat fruit. The wineberry plant consists of a rigid stem called a cane that extends upward, unbranching at first, reaching lengths of up to 9 feet. Vegetative spreading is enhanced by tip-rooting, which occurs when the longer canes (> 3 feet) arch over and reach the ground, where adventitious roots form to establish an extension. In dense clusters, tip-rooting predominates. It takes two years to make a wineberry, as the first-year primocanes apply all growth to cane extension and leaf formation for photosynthesis. The second-year floricanes become woody and produce flowers that become fruits if fertilized. Wineberry flowers are hermaphroditic and are therefore less dependent on pollinators, since there is no need to transport male pollen from the stamen of one flower to the female pistils of another. [6] Each wineberry fruit is protected by husks densely covered with the signature wine-colored hairs that are remnants of the sepals that comprise the calyx at the base of the flower. [7]

Wineberry is just one of many invasive species that have come to dominate large swaths of the forest understory in the twenty-first century. Like kudzu planted for soil remediation of the Dust Bowl and plantain imported as a vital European medicinal, wineberry was introduced with good intention―the improvement of native berry stocks through hybridization. But, as has become increasingly obvious, the complexities of local ecology can make mountains from molehills as “Frankenplants” take advantage of their reproductive strengths over the competition. There are a number of reasons for the success of wineberry in its unwitting but instinctual quest to become the one and only species wherever it can. It is an aggressive pioneer plant in any disturbed area. One study in Japan found that wineberry covered almost two percent of an extensive ski area after clearcutting, showing high phenotypic plasticity in its adaptations. Its tolerance to the shade of tree growth during old field succession of open areas promotes the dense wineberry thickets that are the hallmark of its aggression. [9] On the other hand, all Rubus brambles are apt to dominate disturbed areas like roadside cuts, where one typically finds both raspberries and blackberries in addition to wineberries. There is some irony in that recent DNA analysis of the genus indicates that the first Rubus brambles evolved in North America and subsequently invaded Eurasia without any human intervention. They are brambles, after all. [10]

A bramble of wineberry canes

On the positive side, wineberries are tasty and nutritious, providing a snack for the passing hiker and food for the birds and the bees. A popular field guide to edible plants includes wineberries with raspberries and blackberries as uniformly edible, notably “good with cream and sugar, in pancakes, on cereal, and in jams, jellies, or pies.” [11] The consumption of Rubus fruits by humans precedes the historical record. Given that Homo erectus evolved from the fruit-eating great apes, the impetus would be a matter of wired instinct. It is hypothesized that the reason that primates are the only mammals with red color vision is evolutionary pressure to find usually reddish fruit for sustenance and survival in the jungle forest. Historical documentation of the consumption of aggregate fruits was established by Pliny the Elder in the first century CE. He noted in describing raspberries that the people of Asia Minor gathered what he called “Ida fruits” (from Turkey’s Mount Ida). The subgenus of raspberries which includes wineberries is appropriately named Idaeobatus. It is probable that the Romans began to cultivate some form of raspberry as early as 400 CE. [12] Rubus aggregates were also important medicines in addition to the more obvious nutritional attributes. They contain secondary metabolites such as anthocyanins and phenolics which are strong antioxidants, contributing to general good health. Native Americans used them for a variety of ailments ranging from diarrhea to headache, although there is no indication that the effects were anything beyond placebo. [13]

All things considered, it is hard to get worked up over wineberries as pernicious pests. Granted, they tend to spread out and take over, but then again, so do all of the other brambles. In most cases, the area in question falls into the category of a “disturbed” habitat. While this could be due to storm damage, it is almost universally due to human activities. Road cuts through the forest may be necessary for any number of reasons, but they are initially unsightly tracts of rutted mud unsuited for hiking. Once nature takes over, the edges, now in direct sunlight, become festooned with whatever happens to get there first and grows fast. And what could be more appropriate than a bunch of canes covered with wine-colored fuzz bearing sweet fruits?

References: 

1. https://npgsweb.ars-grin.gov/gringlobal/taxon/taxonomydetail?id=32416

2. https://plants.sc.egov.usda.gov/home/plantProfile?symbol=RUPH

3. “Wineberries”  Plant Conservation Alliances, Alien Plant Working Group. 20 May 2005.

4. Hedrick, U. “Multiplicity of Crops as a Means of Increasing the Future Food Supply” Science, Volume 40 Number 1035, 30 October 1914, pp 611-620.

5. Foster, T. et al “Genetic and genomic resources for Rubus breeding: a roadmap for the future” Horticulture Research, Volume 116, 15 October 2019 https://www.nature.com/articles/s41438-019-0199-2   

6. Innes, R.  Rubus phoenicolasius. In: Fire Effects Information System, [Online]. U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station, Fire Sciences Laboratory (Producer) 2009. https://www.fs.usda.gov/database/feis/plants/shrub/rubpho/all.html   

7. Swearingen, J., K. Reshetiloff, B. Slattery, and S. Zwicker  “Plant Invaders of Mid-Atlantic Natural Areas”. Invasive Plants of the Eastern United States. 2002 https://www.invasive.org/eastern/midatlantic/ruph.html   

8. Wilson, C. and Loomis, W. Botany, 4th Edition, Holt, Rinehart and Winston, New York, 1967, pp 285-304.

9. Innes op cit.

10. Carter, K. et al. “Target Capture Sequencing Unravels Rubus Evolution,” Frontiers in Plant Science, Volume 10, 20 December 2019, page 1615.

11. Elias, T. and Dykeman, P. Edible Wild Plants, A North American Field Guide, Sterling Publishing Company, New York, 1990, pp 178-185.

12. Bushway, L. et al. Raspberry and Blackberry Production Guide for the Northeast, Midwest, and Eastern Canada, Natural Resource, Agriculture, and Engineering Service (NRAES) Cooperative Extension, Ithaca, NY, May 2008. https://www.canr.msu.edu/foodsystems/uploads/files/Raspberry-and-Blackberry-Production-Guide.pdf

13. Native American Ethnobotany http://naeb.brit.org/uses/search/?string=rubus%20&page=1

Jimsonweed

Common Name: Jimsonweed, Jamestown weed, Thorn apple, Devil’s trumpet, Mad-apple, Stinkwort, Locoweed – The plant was named by the early settlers of the first permanent English colony in North America, established in 1607 and eponymously named for their sovereign. Jamestown weed became Jimsonweed by elision.

Scientific Name: Datura stramonium – The generic name is Hindi for a similar plant that grows on the Indian subcontinent and derives from the ancient Sanskrit word dhattura. The species name is a combination of the Greek strychnos and manikos, which translates roughly to “nightshade-mad.” [1] Nightshade is the common name of the plant family more formally called Solanaceae, and mad attests to the psychoactive effects that ingesting the plant induces.

Potpourri: Jimsonweed is more American than apple pie. It is named for the first port of entry established by the Virginia Company, inspired by (the virgin) Queen Elizabeth I and named for her successor, King James VI of Scotland, who became England’s James I in 1603. The former was the last of the Tudors and the latter was the first of the Stuarts. The striking, stinking flower could hardly be ignored and quickly became one of the cynosures of the colony. Its medicinal properties were of major import in the steaming, swampy caldron of the tidewater coastal area ― it was valued for its “cooling” effect. It was surely one of the first of the New World plants co-opted by the Europeans from their Indian neighbors as native herbals. Along with tobacco, which was promoted to “purgeth superfluous fleame (phlegm) and other gross humors [2],” it joined the other members of the Nightshade Family, inclusive of tomatoes and potatoes, in the reverse migration of plants back to Europe. Jimsonweed expanded with the population westward, becoming an agricultural nuisance plant. Its medicinal properties are now better understood, its unrestricted use tempered with caution. It is hallucinogenic in moderate doses and deadly in excess.

The trials and tribulations of Jamestown and the Virginia Colony in its early years are interwoven with its namesake weed. As tobacco-growing English settlers moved inland in the early 17th century, displaced Native Americans fought back with justifiable ferocity. Following a series of deadly raids along the Potomac River by the Susquehannocks in 1676, a young planter named Nathaniel Bacon led a group of settlers demanding that the royal governor, Sir William Berkeley, take action to prevent further bloodshed. Bacon’s Rebellion, seen by some as a herald of American independence a century later in prescribing universal suffrage, forced the royal governor to flee as his capital city of Jamestown was torched. The rebellion was eventually quashed by British troops sent by King Charles II, restored to the throne a decade after his father Charles I had been beheaded in the English Civil War, during which the Virginia colony remained royalist. [3] Since every army marches on its stomach, the campaigning soldiers were no exception, and jimsonweed was on the menu. According to the historical record, jimsonweed was gathered for a boiled salad “and some of them ate plentifully of it.” The ensuing delirium, in which “one would blow up a feather in the air; another would dart straws at it with much fury; and another stark naked was sitting up in a corner,” required their confinement “lest they should in their folly destroy themselves.” After eleven days of “a thousand such simple tricks,” they regained their composure, “not remembering anything that had passed.” [4] Jimsonweed was a weed to be reckoned with.

Native Americans were masters of herbal medicines as a matter of survival in the eons preceding knowledge of or access to analgesics and antibiotics. The Rappahannock, one of the numerous tribes of Virginia, were well acquainted with jimsonweed, though certainly by another name. Decoctions of leaves were made into salves for the treatment of wounds and the attendant inflammation and formed into poultices for fevers and pneumonia. The seeds and leaves were known to be poisonous, a knowledge shared with other major east coast tribal communities like the Iroquois. This was undoubtedly the result of trial and error by more than one individual in the distant past, the learned lore remembered. While local Indians could have warned the English soldiers in advance of their folly, it is more likely that they would have encouraged it, as relations were strained at best. The Cherokee, inland toward the Appalachians, smoked dried jimsonweed leaves as a treatment for asthma, which would seem to be at odds with reason. [5] This latter use, however, became one of the most popular treatments in Europe in the 19th century. Smoke from what was called stramonium (from the species name) was recommended by physicians all over the world. The noted French novelist Marcel Proust wrote to his mother that “I had an attack of asthma and incessant running at the nose, which obliged me to walk all doubled up and light anti-asthma cigarettes at every tobacconist’s I passed.” [6] There is every reason to believe this to have been effective, preceding the modern inhalers that alleviate asthmatic symptoms.

Jimsonweed, once merely a weed from Jamestown, became in time one of the global apothecary’s standard prescriptions as stramonium. The 16th-century English herbalist John Gerard wrote of the thornapple, a name that refers to the large, spiny fruits of jimsonweed, noting that its blossom was “offending to the head when it is smelled unto.” Juice from thornapples “boiled with hogs grease to the form of an unguent or salve, cures all inflammations whatsoever” and “doth most speedily cure new and fresh wounds.” [7] That this is similar in form and function to its uses among Amerindians lends credence to at least its vulnerary qualities. By the early 20th century, stramonium was included in most national pharmacopoeias … the process to extract the medicinal compounds from jimsonweed leaves was specified in the United States Pharmacopeia. A yield of 0.35 percent of its alkaloids, noted for their “unpleasant narcotic odor and a bitter, nauseous taste,” was expected, with a collector able to sell the leaves for 2 to 5 cents a pound. In addition to dilation of the eye, a feature common to many Nightshade Family plants (especially belladonna, or beautiful woman, as eyes thus darkened were considered alluring), narcotic, diuretic, and anodyne uses were prescribed. As a validation of the common practice, “in asthma, they are frequently employed in the form of cigarettes which are smoked or the fumes are inhaled.” [8]

What to make of this unusual plant that cooled Jamestown’s summer heat, evoked outbursts of exuberance from staid British soldiers, and healed the bloody wounds of war? Datura stramonium almost surely evolved in the tropics of the Americas and made its way north and south into more temperate regions. [9] It achieved this feat with an egg-shaped seed capsule that is covered with prickles―a botanical oddity. [10] Since plants are sessile, they must employ external agents to disperse seed. One of the more successful ways of doing this is to grow a fruit that is both colorful and sweet, like an apple, to attract animals. Consumption of the fruit results in the deposition of the seeds, hardened against gastric acids, in a mound of excrement, an ideal fertilizer, at some distance from the parent. Spines or thorns have the opposite purpose. The sharp points that stick into sensitive mouth tissues are there to prevent animal ingestion, which is why they are normally found along stems in plants like roses and barberries. Jimsonweed evolved a novel solution. The seed pods burst open (technically called dehiscence) with enough force to eject the seeds up to 10 feet from the parent plant. If a stream happens to be within range, the seeds are buoyant and can stay afloat for over a week. Each plant can bear as many as 50 seed pods, ejecting over 30,000 seeds that are both consistent and persistent. In one field trial, over 90 percent of the seeds germinated almost 40 years after pod ejection. Once established, it can wreak havoc on crops, reducing yields of soybeans and tomatoes by up to 50 percent. [11] The plant from Jamestown is a serious weed.

But that is only half of the story. Jimsonweed produces powerful chemicals that deter almost all herbivores from eating it, killing many that try … though most do not try, since the alkaloids that act as deterrents are distinctly bitter in taste. Bitter is the one of the five nominal tastes sensed by most animals that signals poison, just as sweet signals nutrition, salty minerals, sour unripeness, and savory protein. The most common form of animal jimsonweed poisoning is contaminated hay as silage for cows and horses and contaminated grain fed to chickens. This occurs when harvesters fail to carefully inspect fields for infestation prior to threshing. [12] The most serious jimsonweed poisoning problem is humans, particularly juveniles for whom the lure of intoxication is unconstrained by the wisdom of years. The main culprit is atropine, mnemonically described by clinicians as having symptoms of being “blind as a bat, mad as a hatter, red as a beet, hot as a hare, dry as a bone, the bowel and bladder lose their tone, and the heart runs alone.” Since over 80 percent of all cases also involve hallucinations, the “tune in … turn on” experience is considered by some to be worth the risk; there were over 300 emergency room visits in 1993 alone. [13] A popular medicinal plant field guide provides the following dire warning concerning jimsonweed: “Violently toxic. Causes severe hallucinations. Many fatalities recorded.” The bold typeface is in the original. [14]

The effects of jimsonweed are a matter of neuroscience. The circuitry that sends signals to trigger heartbeats and climb stairs relies on neurons that convey action impulse to muscle momentum. Neurons operate in sequences, with the axon at one end signaling to a dendrite of the next in line across a gap called a synapse. The signal is carried across the synapse by molecules called neurotransmitters. There are about twenty, including the well-known serotonin, which is linked to the management of anger, and dopamine, which signals pleasure. Acetylcholine (ACh) is not so well known, but it is perhaps the most important. It is defined as “the neurotransmitter at many synapses in the peripheral and central nervous systems, including the neuromuscular junction.” Atropine, the alkaloid produced by jimsonweed, disrupts the proper operation of acetylcholine. In the lexicon of pharmacology, it is an antagonist, blocking the receptors that acetylcholine would otherwise activate. ACh is the primary neurotransmitter of the Autonomic Nervous System (ANS), which carries out most if not all of the unconscious signals that operate organs like pulsing hearts and breathing lungs. This includes the sympathetic nervous system, which is essentially crisis control central, with its functions known suggestively as the four F’s (fight, flight, fright and sex). ACh also triggers the conscious operation of muscles, the essence of all bodily movement from walking to chewing. [15] Any disruption to the proper operation of acetylcholine is bound to have consequences, and jimsonweed delivers them. Mad-apple is an apt common name.

References:

1. Center for Agriculture and Bioscience Compendium https://www.cabidigitallibrary.org/doi/10.1079/cabicompendium.18006

2. Boorstin, D. The Americans, The Colonial Experience, The Easton Press, Norwalk, Connecticut, 1958. pp 209-210.

3. Mapp, A. Virginia Experiment, The Old Dominion’s Role in the Making of America 1607-1781, Lanham, Maryland, 1985, pp 119-172.

4. Beverly, R. The History of Virginia, in Four Parts, London, Printed for F. Fayram, J. Clarke, 1722. https://www.gutenberg.org/cache/epub/32721/pg32721-images.html#Page_109

5. The Native American Ethnobotanical Database http://naeb.brit.org/uses/search/?string=datura        

6. Jackson, M. “‘Divine Stramonium’: The Rise and Fall of Smoking for Asthma”. Medical History. April 2010, pp 171–194. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2844275/

7. Gerard, J. Generall Historie of Plantes, London, 1597, pp 191-193 https://archive.org/details/herballorgeneral00gera/page/n5/mode/2up        

8. Henkel, A. “Jimson weed”. American Medicinal Leaves and Herbs. U.S. Government Printing Office. 1911.  p. 30.

9. “Datura stramonium”. Germplasm Resources Information Network (GRIN). Agricultural Research Service (ARS), USDA   https://npgsweb.ars-grin.gov/gringlobal/taxon/taxonomydetail?id=13323    

10. Niering, W.  and Olmstead, N. National Audubon Society Field Guide to North American Wildflowers. Alfred A. Knopf, New York, 1998, pp 802-803.

11. Michigan State University Department of Plant, Soil, and Microbial Sciences https://www.canr.msu.edu/weeds/extension/jimsonweed    

12. Cornell University Weed Identification for New York State https://blogs.cornell.edu/weedid/jimsonweed/

13. Arnett A. “Jimson Weed (Datura stramonium) poisoning”. Clinical Toxicology Review. December 1995. Volume 18 Number 3. https://www.erowid.org/plants/datura/datura_info5.shtml  

14. Duke, J. and Foster, S. A Field Guide to Medicinal Plants and Herbs, Houghton Mifflin Company, Boston, 2000, p 205.

15. Bear, M., Connors, B., and Paradiso, M. Neuroscience, Exploring the Brain, 4th edition, Wolters Kluwer, Philadelphia, 2016.

Japanese Beetle

There is no mistaking the brown and green wing covers of the Japanese beetle.

Common Name: Japanese Beetle – Unlike many animals and plants broadly referred to as Asian in origin, there is no doubt that this beetle was inadvertently introduced from Japan to the United States, where it spread to become an agricultural juggernaut.

Scientific Name: Popillia japonica – The genus is based on a well-established Roman surname. Marcus Popillius Laenas was consul, one of the Roman Republic’s two top magistrates, noted for his defeat of the Gauls in 359 BCE. He was the first of a long line of distinguished Roman leaders named Popillius. There is no known connection between any of these descendants and beetles. The species name establishes geographic origin in Japan.

Potpourri: The Japanese beetle is a case study in the invasive behavior of an alien species in the life and times of the twentieth century. Its clandestine point of entry in August 1916 was New Jersey, in the form of beetle larvae ensconced in iris rhizomes imported from Japan as horticultural garden center offerings. [1] Spreading at a rate of about 10 miles per year, the shiny green and brown scourge has ravaged planthood in the eastern half of North America for over a century. The root-munching grubs eat voraciously through turf all summer long, despoiling large swaths of lawns or, if used to hit balls and then go find them, golf courses, only to become adults after wintering over six inches deep. What follows in spring after pupation is a two-month feeding and mating frenzy culminating in the turf deposition of some 50 eggs per female to sow the seeds for Malthusian beetle populations. With annual agricultural losses estimated at half a billion dollars, they have spawned a whole industry of eradication and control.

Beetles are by some measures the most successful of earth’s inhabitants. With more than 300,000 species worldwide, they comprise about one fourth of all described animals―a thousand beetles for every primate. This is in part due to an “intelligent” design. The Order Coleoptera to which they are assigned is literally Greek for ‘sheath wings,’ describing their key taxonomic anatomical similarity. The hardened, chitinous front wings encase the more delicate rear wings with an armored barrier similar in form and function to a box turtle’s carapace, protecting the beetle from many an unwelcome intruder. These encapsulating forewings, called elytra (the plural of elytron, which also means ‘sheath’ in Greek), unfold with an elaborate linkage of struts and elbows to release the diaphanous rear wings for flight. The beetle, a six-legged biological version of the bipedal transformer toy, thus converts from a stolid, tank-like ground vehicle into a clumsy but functional airfoil to find food, to find a mate, to escape emergent threats, or simply to gad about on summer days. [2] The aphorism that the Creator must have had an inordinate fondness for beetles because he made so many of them is frequently attributed to Charles Darwin. The more likely source is the British biologist J.B.S. Haldane, who wrote that “the Creator would appear as endowed with a passion for stars, on the one hand, and for beetles on the other for the simple reason that there are nearly 300,000 species of beetle known, and perhaps more …” [3] The versatility and resilience of beetles is notable, divine or otherwise.

Japanese beetles are in the family Scarabaeidae, usually referred to simply as scarabs, which comprise one tenth of all beetle species (a mere one hundred per primate). The historical importance of the scarabs is evident in nomenclature. Scarabaeus is Latin for beetle, which probably came from the Greek karabos, meaning horned beetle, with good reason. According to the dated but enduring Linnaean taxonomy, scarabs are distinguished in having the last 3 to 7 sections of their 10-segmented antennae formed into a lamellate or plate-like club; lamellicorn beetle is an alternative name. The notoriety of horn-beetle scarabs is due in part to their relatively large size and, in many cases, “outgrowths on head and thorax” that “produce bizarre forms.” [4] But the more surprising scarab origin story is central to Egyptian mythology. Khepri was one of the names for their Sun god (along with Ra, Atum and Horus), a cognomen taken directly from kheprer, the Egyptian name for the dung-beetle. Many scarabs feed on animal feces and other decaying matter as a nutritional niche. The dung beetle carries this one step further, molding the semi-solid stool into balls that can be rolled along the ground and deposited into a purpose-built hole. Here the eggs are laid so that hatched larvae will be provisioned with their first feast. The Egyptian holy men interpreted the dung ball as representing the sun being pushed into the “Other World” at dusk and back over the horizon at dawn. Thus the scarab amulet, a signature Egyptian embellishment and adornment, symbolized “the renewal of life and the idea of eternal existence.” [5] The transubstantiation of the bread and wine of communion into the body and blood of the Christian deity that are then consumed in the sacrament is no less outré.

While dung is not on the menu for the Japanese beetle, just about everything else is, earning it the distinction of being considered polyphytophagous, Greek for “many plant eating.” While roses and fruit trees are its most notorious targets, the beetle smorgasbord includes at least 435 identified species from 95 families including garden and field crops, ornamental shrubs, and shade trees. The choice of one plant over another is related at least in part to scent. Research has demonstrated that the phytochemicals eugenol and geraniol are particularly attractive―the fact that roses contain both provides some empirical validation. Exacerbating the beetle invasion problem (beetlemania?) is their tendency to congregate on one plant, creating a writhing mass of coruscating green and brown. Field testing has revealed that twice as many beetles alight to join a party in progress, eschewing adjacent plants of the same species for no apparent reason. Both the quality and quantity of the meal must surely suffer as communality prevails. Since the beetles prefer plants in direct sunlight, the banquet starts at the top, the foliage stripped downward by eating between the leaf veins, leaving characteristic lacelike skeletons as remnants. In many cases, the plant is left totally defoliated and dies as a result. In one field test, 2,745,600 beetles were collected from 156 peach trees … an average of 17,600 per tree. As half of that population would be female, the ensuing egg deposition in nearby fields would result in a veritable contagion of larval grubs, eating away at the roots of the ecosystem to the detriment of both field and forest. [6] The scourge of the Japanese beetle to an environment unprotected by native predators can be apocalyptic.

Beetle mating mania

Evolutionary success for any animal species requires a minimum of two surviving adults to replace each gravid female. In insects, this is achieved predominantly by depositing large caches of fertilized eggs that hatch to larvae and pupate to adults mating in sufficient numbers to establish perpetuity. Japanese beetles evolved to survive predation and attrition in their native habitat, primarily the grasslands of northern Honshu and the whole of Hokkaido. In the United States they are unchecked, and their sexual drive to survive has produced exorbitant dividends. Male beetles are equipped with a penis-like aedeagus to inject cyst-encapsulated spermatozoa into the female vagina. The instinctual male mating mandate is triggered by the pheromones of emergent virgin females; they descend en masse, forming large clusters called “beetle balls.” One experiment using females in a trap collected almost three thousand males in one hour. Mating attempts persist throughout month-long adult lives. Coitus occurs primarily on leafy foliage that doubles as dining room and can last for several hours. Speaking of balls, one male was observed mating with seven different females in a single day and another was observed mating with at least two different females over five consecutive days.  Females take periodic breaks from the action to dig about three inches into the soil to lay several eggs only to return to remate and repeat, ultimately laying about fifty. [7] A population bomb nonpareil.

The exploding growth of Japanese beetles was noted within two years of their initial introduction in a nurseryman’s refuse pile in Burlington County, New Jersey in 1916. By 1920, 1,000 quarts of beetles were collected in one half square mile, and two years later the area had expanded to six square miles. In 1923, when the range had surpassed 700 square miles and extended into Pennsylvania, the clarion call was sounded at the national level. The USDA dispatched scientists to Japan to search for predators and began evaluating pesticides for control and remediation. [8] But it was too little too late, and by 1970 the range had reached at least 150,000 square miles and extended over 14 states. Despite extensive efforts to stem the tide, it is now established in 30 states. While it was long thought that the Rocky Mountains and the Great Basin would present an impenetrable barrier to their westward migration, Japanese beetles have recently made landfall in the Pacific Northwest. It is postulated that adult beetles hitched a ride on an airplane or that larvae arrived surreptitiously in the root soil of imported plants. [9] The economic costs have grown accordingly. The Japanese beetle larva is the worst turf-grass pest in the United States; control costs are estimated at $460 million annually. This estimate is not inclusive of crop damage and the devastation of ornamental shrubs like rose bushes. While this is hardly chump change, it pales in comparison to the annual costs of invasive species, which are on the order of $20 billion. The highest invasive species costs are attributed to mammals, primarily due to rodent crop damage. Plants are next, due to aggressive invaders like Amur honeysuckle. Insects place third, led by the red imported fire ant (or RIFA) of the southeast with an annual cost of $1.5 billion. [10] The Japanese beetle has the distinction of being one of the first invaders and one of the most visible if not the most costly. It can only get worse with a warmer climate.

In an attempt to mitigate some of the economic and aesthetic damage, farmers and homeowners usually start with chemical warfare, primarily with pesticides based on permethrin and carbaryl. The former is known as one of the best deterrents to ticks when applied to clothing for hikers and soldiers, but the latter is more widely used because it is cheaper. The insect-killing euphemism “pesticide” captures the insidious effects of chemicals when widely applied to farm fields and home gardens. The extermination of “pest” species like Japanese beetles also eliminates beneficial insects like butterflies and bees. The insect Armageddon of the last several decades is an unsettling result, due in no small part to its food chain effect; many birds rely on bugs for protein. There are certainly eco-friendly alternatives based on botanicals, but they are for the most part deterrents that only last for several days. The main effect is to shunt the beetles temporarily to another location, like your neighbor’s garden. A second line of defense utilizes Japanese beetle traps that emit vapors made from a combination of virgin female pheromones and a treacly blend of fruits. The problem is that the traps are much more effective at attracting beetles (especially males) than they are at capturing them. The end result follows the law of unintended consequences: more traps, more beetles. [11]

The obvious but complicated alternative is biological control. Difficulties arise not only in the identification of the appropriate control organism but also in ensuring that the cure does not become a curse. It is a factual matter that invasive species come from somewhere where they are not invasive … held in check by their native evolved ecology. While the first step is to scour home turf for potential predator imports, an assessment of viability in the new environment is equally mandatory. Among the notable failed biological control attempts was the introduction of mongooses to Hawaii to kill crop-eating rats. The diurnal mongooses never hunted the nocturnal rats, decimating the bird population instead. In the case of beetles, the task is not as onerous since many wasps are masters of insect parasitism and, not infrequently, one species of wasp specializes in one species of beetle. The Spring Tiphiid Wasp (Tiphia vernalis) was introduced to North America in the 1920s for its known parasitism of Japanese beetles. As one of nature’s more insidious predators, the female wasp burrows into the soil to locate a beetle grub, paralyzes it with a sting, and lays an egg that hatches into a larva that feeds on the now immobilized carcass. While effective, the tiphiid wasps alone have failed to check the Japanese beetle onslaught, and other controls have been identified. The Winsome fly (Istocheta aldrichi) was also imported from Japan as a control agent. It deposits eggs on the thorax of adult female beetles which hatch to maggots that burrow under the outer wing covers to consume the softer body parts. There are also insect-eating nematodes and several types of bacteria that are employed in the never-ending battle to thwart the Japanese beetle invasion. But so far, it is at best a standoff. [12]

The impracticality of eradicating an invasive species like the Japanese beetle renders damage control the only feasible alternative. The ounce-of-prevention method is to establish protocols to halt the human-assisted migration of beetles from an infested part of the country to new territory. Nine western states have signed on to the USDA Animal and Plant Health Inspection Service (APHIS) Plant Protection and Quarantine (PPQ) program to monitor Japanese beetle populations and stop migration. Airports are assessed for local beetle populations, and aircraft are treated to minimize the chances of spread from infested areas to the protected states. [13] While this will lower the risk, it will not eliminate it. With some irony, it has been pointed out that, for all the human chemical, control, and programmatic efforts, the Japanese beetle has outsmarted us. Therefore, the first rule of Japanese beetle control is that you can’t control Japanese beetles. It is possible to reduce the damage by using chemical sprays selectively on their favorite plants, like roses, killing enough to prevent their spread to other plants, a process called trap cropping. Another possibility is to encourage limited growth of plant invasives such as multiflora rose and Japanese knotweed that Japanese beetles demonstrably prefer. But be ever mindful of who is in charge. The final rule of Japanese beetle control is that they will “seek revenge for their dead relatives.” [14]

References:

1. Milne, L. and Milne, M. National Audubon Field Guide to North American Insects and Spiders, Alfred A. Knopf, New York, 1980, pp 561-562

2. Marshall, S. Insects, Their Natural History and Diversity, Firefly Books, Buffalo, New York, 2006, pp 257-258.

3. Haldane, J.B.S. What is Life? The Layman’s View of Nature, L. Drummond, London, 1949, p 258.

4. Gressitt, J. “Coleoptera”, Encyclopedia Britannica, 15th edition, Macropedia Volume 4, William Benton Publisher, Chicago, Illinois, pp 828-837.

5. Viaud, J. “Egyptian Mythology” New Larousse Encyclopedia of Mythology, Hamilton Publishing Group, Ltd. London, 1973, pp 9-43.

6. Fleming, W. “Biology of the Japanese Beetle” USDA Technical Bulletin Number 1449, July 1972. https://naldc.nal.usda.gov/download/CAT87201410/pdf     

7. Gyeltshen, J. et al “Japanese Beetle” University of Florida. https://entnemdept.ufl.edu/creatures/orn/beetles/japanese_beetle.htm     

8. “Japanese Beetle Ravages”, Reading Eagle Newspaper Article 22 July 1923 extracted from New York Herald.

9. Betts, A. “Japanese beetle count passes 20,000” Washington State Department of Agriculture Ag Briefs. 3 September 2021. https://wastatedeptag.blogspot.com/2021/09/japanese-beetle-count-passes-20000.html     

10. Fantle-Lepczyk, J. et al “Economic costs of biological invasions in the United States” Science of the Total Environment, Volume 806, Part 3, 1 February 2022. https://www.sciencedirect.com/science/article/pii/S0048969721063968?via%3Dihub   

11. Potter, D. et al “Japanese Beetles in the Urban Landscape” University of Kentucky College of Agriculture, Food, and Environment Entomology Department. https://entomology.ca.uky.edu/ef451

12. “Managing the Japanese Beetle: A Homeowner’s Handbook”. USDA, Washington, DC. https://www.aphis.usda.gov/plant_health/plant_pest_info/jb/downloads/JBhandbook.pdf

13. USDA Animal and Plant Health Inspection Service (APHIS) Japanese Beetle Handbook https://www.aphis.usda.gov/import_export/plants/manuals/domestic/downloads/japanese_beetle.pdf

14. Gillman, J. “Disney and Japanese Beetles”. Washington State University, 18 March 2010.

Rosebay Rhododendron

Common Name: Rosebay Rhododendron, Rosebay, Great-laurel – Rosebay is used as a descriptive name for several plants with characteristic rose-like blossoms. Rhododendron is one of the few plants whose genus name doubles as its common name.

Scientific Name: Rhododendron maximum – The genus is a combination of the Greek words for rose (rhodon) and tree (dendron), an indication of its well-established association with civilizations in antiquity, “rose tree” being an apt description. Maximum, derived from the Latin magnus, meaning great or big, conveys the largest in size or quantity, which, for a rhododendron, it is.

Potpourri: The lush, dense thickets of rhododendron that dominate the understory of upland elevations are testimony to an evolutionary path that produced a competitive combination of successful traits. It is one of the relatively few broad-leaved flowering plants of the angiosperm (enclosed seed) clade that is evergreen, retaining foliage year-round like the largely needle-leaved gymnosperm (naked seed) clade. The prominent rose-like blossoms that extend from the end of nearly every branch are a bouquet to attract pollinators, mostly bees, that flit from one to the other collecting nectar and pollen. With successful fertilization, an elongated, egg-shaped fruit with five cells splits open to release thousands of seeds that scatter to extend the grove ever outward. The lack of any evidence of insect damage or animal browse is a matter of chemistry. Rhododendrons, like many other members of the Ericaceae or Heath family, evolved strong chemicals to deter predation. Most animals give it a wide berth; deer supposedly are able to browse without harm but there is little evidence that they do so regularly. Rhododendron leaves can be fatally toxic to cattle and sheep. [1]

There are over one thousand species of the Rhododendron genus that extend globally across the temperate climates of the northern hemisphere. Based on fossilized pollen found in strata dating from near the end of the Cretaceous Period and fossil leaves from the beginning of the Tertiary Period, it is postulated that rhododendrons first appeared in southeastern Asia about 50 million years ago, after the breakup of Pangaea. Speciation spread globally across a wide band of latitude during the pre-glacial epochs when a warmer climate prevailed. It is probable that the subsequent glacial cooling cycles of the current Quaternary Period resulted in the isolation of rhododendron populations in remote mountainous regions, just as Balsam Firs are isolated in elevated areas of the Appalachians. This explains both the rich diversity found on the slopes of deep valleys in southeast Asia, in a band extending from just east of the Himalayas through the Malaysian archipelago, and the sparser rhododendron diaspora to Japan, the Appalachian Mountains, and the Caucasus region of eastern Europe. [2]

While “mad honey” may also evoke marital discord, it has historical rhododendron relevance. Mesopotamia, the land between the Tigris and Euphrates rivers, is where western civilization arose from Neolithic farm villages that planted the first tentative crops. Rhododendrons had spread across the Anatolian peninsula that is now Türkiye from their epicenter in the Caucasus, attracting swarms of honeybees. In 400 BCE, the soldier of fortune Xenophon led his mercenary Greek army, eulogized as the “ten thousand,” on a forced march of 1500 kilometers westward through the hostile territory of present-day Kurdistan and Armenia to the Black Sea. Lacking adequate provisions, they lived off the land, raiding bee hives for honey. As Xenophon later recorded in his chronicle Anabasis, “the soldiers who ate the honey went off their heads, and suffered from vomiting and diarrhea … so they lay there in great numbers as though the army had suffered a defeat, and great despondency prevailed.” [3] While no one died, the debilitating effects of rhododendron honey were put to nefarious use near this same location three centuries later.

Mithridates Eupator became the ruler of Pontus in 115 BCE when his mother, who had tried to kill him as a youth, was deposed in a coup d’état. To protect himself against the conspiracies inherent to governance of that era, he followed a regimen of microdoses of poison to acquire immunity over time, becoming an expert on toxins and their antidotes. Uniting the diverse population of Greeks, Persians, and Thracians along the northern tier of Asia Minor, he became a serious rival to the Romans encroaching ever eastward. In the First Mithridatic War (88-84 BCE), his navy of 400 ships and army of 290,000 took over the Black Sea and the Greek cities on its banks, putting an end to the flow of tribute money to Rome and nearly bankrupting its economy. [4] The Romans rallied in two ensuing wars that eventually drove Mithridates from power as the last major eastern threat to their burgeoning empire, but not before he had tricked them on at least one occasion with the mad honey of the rhododendron. In 67 BCE, the Roman general Pompey was advancing eastward along the Black Sea coast near Trabzon to engage the Pontic forces. Mithridates, employing his mastery of poisons, placed bee hives in clay pots along their route. Three squadrons of Roman soldiers succumbed and were slaughtered in their stupor. In spite of this tactical success, the forces of Rome eventually prevailed, and Mithridates was deposed and exiled to Crimea, where he was stabbed to death by the agents of his son since poisoning was not an option. [5] The genus Eupatorium, which includes the poisonous white snakeroot and the medicinal boneset, is named for Mithridates Eupator in recognition of his contribution to toxicology.

Honey from rosebay rhododendron in North America is neither as common nor as virulent as the legendary Caucasian rhododendron honey of Xenophon and Mithridates. Nonetheless, the mad honey trope persists. In 1801, an account of rhododendron honey inducing nausea, muscle spasms, and blurred vision was published in the Transactions of the American Philosophical Society. [6] A report in the most venerable scholarly publication in the Americas (established in 1771 when the states yet to be united were still colonies) affords some credence to this assertion. However, there is little evidence of any significant incidence of what is sometimes euphemistically called “honey intoxication” in North America. There are several reasons for this. R. maximum is neither as toxic as R. ponticum, the plant eponymously named for Mithridates’ Pontus homeland, nor as widely dispersed. Honey bees, moreover, are indigenous to Europe, where they are native pollinators of many wild plants; they were introduced to the Americas for crop pollination and are largely relegated to that role, even as some have become naturalized. The few reports of mad honey illness in the US are at least in part attributable to an alternative medicine herbal treatment that links “sexual performance enhancement” to the consumption of bespoke, beekeeper-induced rhododendron mad honey. Of twenty-one honey-related emergency room visits due to symptoms that included dizziness, nausea, vomiting, and syncope (loss of consciousness due to low blood pressure), most were men of middle age who sought to regain virility [7]―another good reason to call it mad honey.

Rhododendron maximum produces a poison named grayanotoxin, which has been and is sometimes still referred to as andromedotoxin, acetylandromedol, or rhodotoxin (from the genus). While concentrated in honey, it also permeates the leaves and flowers. The toxin was first extracted and analyzed from Leucothoe grayana, an Asian heath family species whose name honors the American botanist Asa Gray, who supported Darwin’s work with the observation that many plants in eastern North America were similar to those of east Asia (like rhododendron), indicating similar evolutionary progressions. Grayanotoxin interferes with the operation of neurons by disrupting “voltage-gated sodium channels.” The effect is that the neurons that carry the signals from one part of the body to another that make everything happen … from the beating of the heart to the thinking of the brain … can no longer do so in the prescribed order with proper timing. [8] The mechanism employed by neurons to carry out their quintessential task is electrochemical. Electrical impulses travel along the neuron from the dendrites at one end to the axon at the other as waves of sodium ions crossing the cell membrane; at the gap called the synapse, chemical neurotransmitters relay the signal to the next neuron in the sequence. This is the main reason that electrolytes (ionic fluids) are so important and that hyponatremia (low sodium) can be fatal. It has long been established as most likely that this ionic neural mechanism was a random (Darwinian) mutation that evolved only once, and, owing to its sensory and mobility efficacy, was replicated in every animal ever since. However, it may be much more complex than that, as sea sponges, which have no neurons, and comb jellies, which do, have DNA similarities. [9] The details of evolution are still evolving.

The effects of rhododendron grayanotoxin poisoning are what one might expect considering the disruption of nerve function as its cause. Dizziness, confusion, and blurred vision are sure to follow a diminution of neuron signaling in the brain. Likewise, insidious side effects on autonomic systems take a toll; the heart beats more slowly and blood pressure can drop to induce a loss of consciousness. Since nerves do everything, a panoply of effects has been reported, ranging from numbness around the mouth and excessive salivation to vomiting and diarrhea. Since humans don’t as a rule eat leaves and flowers, most reported human health effects concern the consumption of toxic honey, which is brown and bitter rather than golden and sweet. Since bitterness is a taste-sensor variant that protects against inadvertently consuming poisons, it is unclear why anyone would eat tainted honey in the first place (excepting virility, which trumps reason). However, cattle, sheep, goats, and donkeys do eat rhododendron leaves and consequently fall victim to its poison. The toxic dose for cows is 0.2 percent of body weight (about one kilogram) with symptoms appearing about three hours later that last for several days. Fatalities are not uncommon, in part due to the ruminating mastication of cows; chewing toxic cud can only release more poison. Domestic cats and dogs will on occasion consume the azalea type of rhododendron that is widely planted in gardens; the characteristic symptoms of gastrointestinal distress result. [10]

Plants create toxic chemicals for a reason – usually to deter animal predation. Heath Family plants are no exception. Grayanotoxin likely arose as an evolutionary mutation that keeps herbivorous animals at bay. In some cases, a priori plant chemical defenses can be co-opted by humans to take advantage of their toxicity. This is especially true when a plant (or fungus) has evolved to ward off microbes that equally threaten the health of humans, becoming an antibiotic. In that grayanotoxin acts to disrupt neural activity, it would seem an unlikely candidate for medicinal use owing to its profound, disturbing effects. However, there is ample evidence that it was used by Native Americans for a variety of applications. [11] The Cherokee used it both as an external poultice for rheumatic pain and as a treatment for skin abrasion. This may merely have been a placebo effect: it was thought to work, and so it did. The rhododendron was apparently also used for various purposes having nothing to do with health, such as to “throw clumps of leaves into a fire and dance around it to bring cold weather.” [12] It is also reported that Native Americans made a tea from the leaves that was “taken internally in controlled doses for heart ailments.” The same guide notes “leaves toxic, ingestion may cause convulsions and coma.” [13] There has been some recent research concerning the use of rhododendron compounds for specific ailments. For example, diabetic rats treated with grayanotoxin produced more insulin, presumably due to some form of nerve stimulation. All things considered, it is probably best to avoid it altogether, in spite of any number of herbal remedies containing rhododendron extract that supposedly produce salubrious effects. [14]

Heath Family shrubs (Ericaceae) are masters of their chosen environments, which include the understory of trees at higher elevations and craggy berry bogs. They have help in the form of specialized fungal partners that envelop their roots, providing soil nutrients like phosphorus and nitrogen in exchange for the sugars generated by photosynthesis. This relationship is called mycorrhizal, derived from the Greek words for fungus (mykes) and root (rhiza), literally “fungus root.” While almost all (~90 percent of) plants have mycorrhizal fungal partners, most are either in the form of fungal sheathes surrounding the outside (ecto) of the root (ectomycorrhizal – mostly trees), or fungal branches that penetrate into (endo) root cells (endomycorrhizal – other plants) to form little tree-like structures called arbuscules. Ericoid mycorrhizas combine the two forms in that they both surround the roots and penetrate the cells, making the effect even more efficacious. It is now well established that trees and shrubs (like rhododendron) share and balance nutrients to maintain a healthy ecosystem through their interconnecting fungal-root networks, facetiously called the “wood wide web.” [15] The effectiveness of the outer and inner “ectendomycorrhizas” of heaths in promoting interconnected communities is such that they can and do completely take over a habitat. This can be a problem when rhododendrons are introduced to non-native environments. For example, Rhododendron ponticum was introduced to the UK from Iberia in 1763 and has spread to crowd out native trees, covering over three percent of all woodlands. Once established, it is almost impossible to extirpate. [16]

Rhododendrons, in spite of invasive tendencies in some regions, are among the most popular horticultural plants. Rhododendron is the most diverse genus of the Heath Family, with more than a thousand identified species. There is a Global Conservation Consortium for Rhododendron that seeks to promote and protect all species from extinction. Their ecological relevance is of particular importance to “underpin livelihoods in regions where they protect watersheds and stabilize steep mountain slopes in the areas where some of the most significant river systems in Asia begin.” [17] The rhododendron collection at the renowned Royal Botanic Gardens at Kew is among its most cherished, with over 3,000 species of which 300 are threatened with extinction. They were in many cases discovered, named, bred, and donated by the generation of British plant hunters that plied the globe during the nineteenth century. [18] So far as is known, none of them were affected by mad honey, their virility apparently well established.

A near impenetrable stand of rhododendron crowds out all other vegetation

References:

1. Brown, R. and Brown, M. Woody Plants of Maryland, Port City Press, Baltimore, Maryland, 1999, pp 247-254.

2. Irving, E. and Hebda, R.  “Concerning the Origin and Distribution of Rhododendrons”. Journal of the American Rhododendron Society. 1993 Volume 47 Number 3.

3. Xenophon. “4.8.19–21”. In Brownson CL (ed.). Anabasis. Perseus Hopper. Department of Classics, Tufts University. https://www.perseus.tufts.edu/hopper/text?doc=Xen.%20Anab.%204.8&lang=original

4. Durant, W. Caesar and Christ, The Story of Civilization Volume 3, Simon and Schuster, New York, 1944, pp 516-519.

5. Lane R. and Borzelleca J. “Harming and Helping Through Time: The History of Toxicology”. In Hayes AW (ed.). Principles and methods of toxicology (5th ed.). 2007, Boca Raton: Taylor & Francis.

6. Harris, M. Botanica North America, Harper-Collins, New York, 2003, pp 60-61

7. Demircan A. et al. “Mad honey sex: therapeutic misadventures from an ancient biological weapon”. Annals of Emergency Medicine. 15 August 2009 Volume 54 Number 6 pp 824–829

8. “Grayanotoxins”  Bad Bug Book: Handbook of foodborne pathogenic microorganisms and natural toxins (2nd ed.). Food and Drug Administration. 2012. https://www.fda.gov/media/83271/download   

9. Dunn, C. “Neurons that connect without synapses”. Science, 21 April 2023, Volume 380, Issue 6642, pp 241, 293.

10. Jansen, S. et al. “Grayanotoxin poisoning: ‘mad honey disease’ and beyond”. Cardiovascular Toxicology, 19 April 2012, Volume 12 Number 3, pp 208–215.

11. Popescu, R. and Kopp, B. “The genus Rhododendron: an ethnopharmacological and toxicological review”. Journal of Ethnopharmacology, 2 May 2013, Volume 147 Number 1, pp 42–62.

12. Ethnobotany database at http://naeb.brit.org/uses/search/?string=rhododendron

13. Duke, J. and Foster, S. Medicinal Plants and Herbs, Houghton-Mifflin, Boston 2000, p. 260.  

14. Jansen, op. cit.

15. Kendrick, B. The Fifth Kingdom, Third Edition, Focus Publishing, Newburyport, Massachusetts, 2000, pp 257-278.

16. Simons, P. “A spectacular thug is out of control”. The Guardian. 16 April 2017

17. https://www.globalconservationconsortia.org/gcc/rhododendron/  

18. https://www.kew.org/

Starling

Common Name: Starling, European Starling, Common Starling – The vocal, gregarious songbird extended across broad swaths of Eurasia even as the Indo-European language groups were differentiating. The Old English stærlinc was probably derived from stearn, a type of tern. The similarity to the Old German stara and the Prussian starnite is indicative of a pan-European origin without any meaning beyond that of the well-known bird.

Scientific Name: Sturnus vulgaris – The Latin name for the starling is sturnus, with similar Indo-European origins. Vulgaris means “common” in Latin, as the epithet vulgar suggests.

Potpourri: The European or common starling was intentionally introduced to North America in the nineteenth century as part of a cultural movement that sought to ameliorate habitats from both an aesthetic and a practical perspective. This practice extended to medicinal plants and herbs like coltsfoot and plantain but was expressly focused on birds. The starling, noted for its ravenous consumption of insects, was considered a boon to farmers in the extirpation of crop pests prior to the adoption of chemical pesticides in the middle of the last century. It was also a cultural icon in Europe for its prodigious and varied song, frequently mimicking other birds and, as a pet, human speech. What’s not to like? The starling has thrived to the extent that it has become a problem on a scale comparable to pigeons in the park and Canada geese on the golf course. Bird as pest seems a contradiction in terms, yet while society bemoans the loss of birds to glass buildings and wind farms, urban jurisdictions must manage huge starling flocks with acres of droppings and rural agronomists must account for purloined produce. It is a complicated story that begins in New York City’s Central Park.

The hackneyed version of the starling invasion blames a wealthy patrician from Manhattan who had made his money in drugs, presumably legal, named Eugene Schieffelin. As an amateur ornithologist, he became a member of the American Acclimatization Society with the stated goal of introducing every one of the 600 avian species included in the copious works of William Shakespeare. To that end, Schieffelin released approximately 100 starlings in Central Park between 1890 and 1891. This initial introduction incontrovertibly resulted in the 200 million starlings flocking from coast to coast, wreaking havoc on harvests and despoiling city streets. Accounts typically include a passage from Shakespeare’s Henry IV in which a bothersome rebel named Hotspur proposes to disturb the king’s sleep by teaching a starling to say the name “Mortimer,” an earl Henry distrusted (Henry IV, Part I, act 1, scene 3). [1] The account of Schieffelin’s starlings is usually trundled out to lambast the arrogance and ignorance of the powerful elite of the past in instigating environmental disasters of the present.

Histories that fail to account for the culture and knowledge of their time and place are sophistry. The Schieffelin account is true so far as the act of starling release but widely misses the mark as to motivation and expectation. The exchange of flora and fauna between Eurasia and the Americas had been going on for over four hundred years by 1890, sometimes intentional and beneficial but frequently happenstance and harmful. Horses, wheat, and cattle were introduced by colonists for work, transport, and food. Influenza, smallpox, and diphtheria stealthily disembarked, decimating native populations. In return, turkeys, potatoes, and tobacco offered new and exotic tastes and temptations to the Old World. Syphilis was purportedly carried back to Spain by Columbus’s sailors and spread throughout Europe as the “French Disease.” [2] By the nineteenth century, global integration had run its course with largely benign results.

The acclimatization movement arose in France in the 1850s as an idea proposed by the naturalist Isidore Geoffroy Saint-Hilaire. One of its primary enterprises was the introduction of species from one continent to another in order to better understand their adaptation to new environments. The American Acclimatization Society was organized in New York in the 1860s with a more nuanced goal of improving beauty and diversity with an emphasis on birds. In 1877, a Mr. Conklin of the Central Park Museum reported at a meeting of the society that the commissioners of Central Park had released 50 pairs of English sparrows and that they had “multiplied amazingly.” They also freed some starlings because these birds were “useful to the farmer and contributed to the beauty of the groves and fields.” [3] This was just one of numerous attempts on both coasts to acclimatize the starling to the New World.

Problems with species introduced to a new region absent the checks and balances of native predation and other environmental limits first became manifest in the late nineteenth century. In 1886, Clinton Merriam, the first Chief of the USDA Division of Ornithology and Mammalogy, warned of the damage to grain, seed, and vegetable crops caused by the importation of harmful birds (notably English sparrows) and mammals (notably European rabbits). Ten years later, Theodore Palmer, the Assistant Chief of the USDA Biological Survey, advocated for federal legislation because “the animals and birds which have thus far become most troublesome when introduced into foreign lands are nearly all natives of the Old World,” specifically calling out the European starling for crowding out benign insectivorous native birds in addition to eating farmed crops. The Lacey Act of 1900 was the first major Federal legislation concerning wildlife management, named for its originator, a representative of the farmers of Iowa. Introducing the term “injurious” as a category of wildlife, its intent was to “regulate the introduction of American or foreign birds or animals in localities where they have not heretofore existed.” [4] It is still in force to this day; invasive has supplanted injurious as the pejorative of choice.

What about Shakespeare? Schieffelin’s contribution to starling scatology would have escaped notice altogether had he not been named as perpetrator by Frank Chapman, a preeminent American ornithologist who initiated Audubon Magazine and the annual Christmas bird count. During his long career at New York’s American Museum of Natural History, he came to know Schieffelin, who would periodically stop by to check on the status of starlings. In the seminal 1895 Handbook of Birds of Eastern North America, Chapman attributed responsibility for their introduction to Schieffelin. Fifty years later, the nature writer Edwin Way Teale published an account stating unequivocally that Schieffelin’s “… curious hobby was the introduction into America of all the birds mentioned in the works of William Shakespeare.” This assertion was apparently an extrapolation from the development of a garden in Central Park where plants associated with the bard were planted … starting in 1916, ten years after Schieffelin’s death. [5] The attribution of starling introduction to Henry IV is surely poppycock.

There is an aesthetic aspect of starlings that has been overshadowed by the cacophony of their massive flocks―they are mimics nonpareil. According to the diary of Wolfgang Amadeus Mozart, he purchased a pet starling on 27 May 1784, annotating the entry with a musical transcription of its whistled song. Three years later he led a funeral procession of dirge-singing mourners and eulogized his avian companion’s death at its gravesite with poesy: “A little fool lies here whom I held dear, a starling in the prime of his brief time, not naughty quite, but gay and bright, and under all his brag, a foolish wag.” The starling’s tune as recorded for posterity by Mozart was nearly identical to the final movement of the Piano Concerto in G Major, K. 453, that he composed at about the same time as he adopted the starling. This eerie coincidence can only be explained if the starling had learned the tune from Mozart himself. In all probability, Mozart, who was known for whistling his compositions as they came to his head and was fond of birdsong … he had a canary as a youth … strolled about Vienna and wandered into the pet shop, perhaps more than once. The starling therein learned the tune from him, earning the eternal sobriquet “Mozart’s Starling.”

The vocalization skills of the starling were well known to the Romans and certainly also to the Greeks whose culture they absorbed. The naturalist Gaius Plinius Secundus, known as Pliny the Elder, wrote that starlings “practiced diligently and spoke new phrases every day, in still longer sentences” in both Latin and Greek. Certainly Shakespeare and his sixteenth-century audience were well aware of the tonal dexterity of the mimicking starling that could be taught to invoke the name “Mortimer”―the jest would otherwise fall flat. In a recent quasi-scientific experiment with a group of starlings sharing a house with a small group of bird researchers, their innate audio habits were manifest: various birds repeated phrases including “we’ll see you soon,” “give me a kiss,” and fragments of the Star-Spangled Banner. Mozart composed a piece called A Musical Joke (K. 522) shortly after the death of his pet. It is described as “awkward, unproportioned, and illogical,” going on interminably to end in “a comical deep pizzicato (plucking) note.” This would also be a good description of the starling’s repertoire of screeches, clicks, and whistles from which it concocts a verisimilitude of human speech. Was this Mozart’s epitaph for his pet starling? It is more than a possibility, as he is otherwise known for melodic virtuosity. [6]

The starling of Mozart’s affection and Schieffelin’s obsession morphed into the scurrilous scavenger of the twenty-first century by being too successful a species. In 1915 the USDA launched a comprehensive survey of the effects of the starling in North America that included surveys of farmers and the examination of the stomach contents of thousands of birds. Based on the findings that starlings ate more pests and consumed fewer crops than native birds, the researchers concluded that “the starling possesses an almost unlimited capacity for good.” After over a century of profligacy, the limits of starling goodness have become manifest. According to an updated USDA study, starlings consume or otherwise despoil $800 million worth of agricultural crops every year, spread infectious diseases to both humans and farm animals that cost an additional $800 million, and crowd native birds out of nesting sites. A database of starling migration paths was recommended to track nuisance concentrations to allow for targeting them with “improved baits and baiting strategies,” clearly a euphemism for poisoning. Starlicide is a USDA-approved product to control starlings and blackbirds even though it is “toxic to other types of birds in differing amounts.” But this is supposedly all right because the birds experience a “slow, nonviolent death.” [7] This policy calls for a research project to assess its efficacy; adding poisons to the environment to control highly adaptable birds that will evolve to avoid or tolerate them cannot be good public policy.

A flock of starlings is called a murmuration, not so unusual as bird collectives go―convocations of eagles and parliaments of owls among them. The name is an onomatopoeia for the sound made by careening masses of starlings maneuvering in giant formations, wings flapping and muted calls creating low, indistinct noises. These individual starling murmurs combine to create a murmuration that can comprise well over half a million birds. Rising in the late afternoon, murmurations pulsate in amorphous blobs of organized chaos that have long intrigued ornithologists. The prevalent theory is that the behavior is driven by an instinct for safety in numbers, attracting outliers to join so that all can more safely settle on a place to roost for the night. Using multiple cameras at different angles to track individual birds and combining the images in 3D computer models, researchers found that there is no leader; each bird synchronizes with its seven nearest neighbors. The undulating bulges of birds correlate to perturbations attributed to the “selfish herd effect” as birds on the edges move inward to the safety of the center. After about an hour, they descend en masse. [8]
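The leaderless rule described above―each bird matching its heading to those of its seven nearest neighbors―is enough to generate coordinated, flock-wide motion, as a minimal simulation sketch can illustrate. The flock size, speed, and step count below are arbitrary illustrative assumptions, not parameters from the cited research:

```python
import math
import random

# Minimal sketch of topological flocking: each simulated bird steers toward
# the average heading of its seven nearest neighbors. All numbers are
# illustrative; this is not the model used in the cited studies.
N_BIRDS, N_NEIGHBORS, STEPS = 200, 7, 50
random.seed(1)

# Each bird: [x position, y position, heading in radians]
birds = [[random.uniform(0, 100), random.uniform(0, 100),
          random.uniform(0, 2 * math.pi)] for _ in range(N_BIRDS)]

def order(flock):
    """Alignment of the flock: near 0 for random headings, 1 for consensus."""
    return math.hypot(sum(math.cos(b[2]) for b in flock) / len(flock),
                      sum(math.sin(b[2]) for b in flock) / len(flock))

initial_order = order(birds)

for _ in range(STEPS):
    new_headings = []
    for i, (x, y, h) in enumerate(birds):
        # Topological rule: the 7 closest birds, however far away they are
        nearest = sorted((j for j in range(N_BIRDS) if j != i),
                         key=lambda j: (birds[j][0] - x) ** 2 +
                                       (birds[j][1] - y) ** 2)[:N_NEIGHBORS]
        headings = [birds[j][2] for j in nearest] + [h]
        # Average headings as unit vectors to avoid angle wraparound at 2*pi
        new_headings.append(math.atan2(sum(math.sin(a) for a in headings),
                                       sum(math.cos(a) for a in headings)))
    for bird, h in zip(birds, new_headings):
        bird[2] = h
        bird[0] += math.cos(h)  # advance one unit per step
        bird[1] += math.sin(h)

# With no bird in charge, local alignment alone drives the flock toward
# global consensus: final_order exceeds initial_order.
final_order = order(birds)
```

Real murmurations add noise, predator avoidance, and the selfish-herd drift toward the center; this sketch shows only that the seven-neighbor rule needs no leader to produce order.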

According to the International Union for Conservation of Nature (IUCN), the starling (along with the myna in the Starling family Sturnidae) is among the world’s 100 worst invasive species based on its “serious impacts on biological diversity and/or human activities.” [9] Rome, Italy, is the epicenter of roosting starlings, which have been coming south from all over Europe to overwinter in its balmy Mediterranean climate since the 1920s. Spending days feasting in groves of olive trees and the farmland of the surrounding countryside, starlings congregate in the late afternoon to meet up for the nightly roost. Once situated, they relieve themselves of excrement that coats whatever lies below with a slick mass of olive-oily slime. Street closures must be invoked to prevent motor bike crashes. Parked cars are encased in an implacable sarcophagus of starling scat. Attempts to stem the avian tide, ranging from outright poisoning to the introduction of predatory raptors like hawks, have failed due to the adaptability of starlings, the reason for their ubiquity. The only effective strategy has been relocation. Rome’s environmental department devised a technique employing a recording of a starling screeching in distress (induced in a laboratory) that is broadcast with amplified bullhorns to disrupt the roost. Generally, after the third day of being chased away, starlings opt for a less contested and congested roost as a bird-man compromise. [10]

The starling’s overwhelming success as an individual species is a serendipitous result of natural selection. Other than proliferation and vocalization, they are undistinguished as just one of about 6,500 species of the order Passeriformes that make up about half of all species of the class Aves. Usually called songbirds, they are classified taxonomically according to the configuration of their feet. Three claws forward and one back promote grasping and perching on tree branches―they may best be thought of as perching birds that sing. [11] Like almost all other birds, starlings are monogamous, sharing parental duties in nest building, egg incubating, and chick feeding (up to 20 times per hour). In fact, there is some evidence that the male and female birds divide these activities so that they share equally. [12] Depending on latitude, they produce up to two clutches of six eggs every year with a success rate of up to 80 percent. While this would nominally result in a Malthusian progression of an additional ten birds per couple every year, only about 20 percent of the chicks survive to reproductive age. Two chicks per couple annually is still enough for a population explosion. Starlings are omnivores, with a daily consumption of about 15 grams of mostly insect animal food and 30 grams of plant food. Foraging in locations that range from orchards and feed lots to urban landfills, they can readily provision their nests, typically tucked away in nooks of man-made structures. [13]
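The population arithmetic above can be checked in a few lines, a back-of-envelope sketch using only the figures quoted in the text:

```python
# Back-of-envelope check of the starling population figures quoted above.
clutches_per_year = 2
eggs_per_clutch = 6
nesting_success = 0.80        # up to 80 percent of eggs succeed
survival_to_breeding = 0.20   # only ~20 percent of chicks reach breeding age

fledglings_per_pair = clutches_per_year * eggs_per_clutch * nesting_success
recruits_per_pair = fledglings_per_pair * survival_to_breeding

print(round(fledglings_per_pair))  # ~10 additional birds per couple per year
print(round(recruits_per_pair))    # ~2 survive to breed per couple per year
```

Even two recruits per pair per year means each generation roughly doubles the breeding population, hence the explosion from roughly a hundred released birds to hundreds of millions.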

Starlings have figured out how to make a living in a world otherwise overrun by humans, taking advantage of the terraforming that defines our habitats. While they have become invasive, one might offer the same assessment of Homo sapiens. It should come as no surprise that the class Aves produces individual species that manage to overcome the most challenging environments with unsurpassed survival skills; the penguins of Antarctica and the gooney birds of Midway Island among many others. Avian survival of the meteor impact darkness of the Cretaceous–Paleogene Extinction 66 million years ago as the only representatives of the dinosaurs established a genetic heritage of resilience. According to recent DNA evidence, the starling family emerged about 6 million years ago during a less dramatic but equally challenging global climate transition. Originating in Asia, they spread at about the same time as C3 plants were being replaced by the C4 plants that characterize a drying climate as part of the global carbon cycle. These plants, like corn or maize, sedges, and sugar cane, are more efficient in hot, dry conditions with low levels of carbon dioxide. It is likely that the peculiar starling jaw muscles first evolved to meet the C4 food challenge. Unlike most birds, which have strong muscles to close the bill, starlings have the opposite: protractor muscles to open the bill. This provides the ability and propensity to penetrate narrow slits and pry them open, exposing the plant or animal food otherwise protected. The clever and adaptable starlings radiated westward, becoming the European starling. [14]

Cities are the anthropogenic monuments of civilization. The natural world is buried beneath megatons of concrete interwoven with tunnels for trains, sewers, water, and electricity. The plants and animals that were displaced are banished to waste areas if they survive at all. In the grim and gray concrete canyons, there is no life other than planted trees, manicured lawns, and an occasional park to remind the humans who abide therein that nature really does exist. The few animals like birds and squirrels that have learned to live with the hubris of human occupation are, if anything, a blessing. Aside from providing a reminder that we are not really alone, they offer the beneficent function of clearing the streets of uneaten bread crumbs sourced from food trucks and tossed aside as a measure of disdain for the earth we live on. The stolid starlings do not let it go to waste, true to their exceptional survival skills.

Starlings scramble after breadcrumbs on Pennsylvania Avenue in Washington DC

References:

1. Mirsky, S. “Antigravity: Call of the Reviled.” Scientific American, June 2008

2. Smithsonian History of the World Map by Map, Random House, London, 2018, pp 158-159

3. “American Acclimatization Society” New York Times, 15 November 1877.

4. Jewell, S. “A century of injurious wildlife listing under the Lacey Act: a history”. Management of Biological Invasions, Volume 11 Issue 3, pp 356–371. https://www.reabic.net/journals/mbi/2020/3/MBI_2020_Jewell.pdf

5. Miller, J. “Shakespeare’s Starlings: Literary History and the Fictions of Invasiveness.” Environmental Humanities, 1 November 2021, Volume 13 Number 2, pp 301–322.

6. West, M. and King, A. “Mozart’s Starling”  American Scientist. March–April 1990.  Volume 78 Number 2 pp 106–114.

7. Linz, G. et al. “European starlings: a review of an invasive species with far-reaching impacts”. Managing Vertebrate Invasive Species, USDA, Paper 24, pp 378–386.

8. Langen, T. “Why do flocks of birds swirl in the sky?” Washington Post, 12 April 2022.

9. http://www.iucngisd.org/gisd/search.php  

10. Harlan, C. and Pitrelli, S. “A stunning spectacle – and a huge mess.” Washington Post, 15 January 2023

11. Alderfer, J. ed  Complete Birds of North America, National Geographic Society, Washington, DC, 2006, pp 502-504.

12. Enns, J. “Paying attention but not coordinating: parental care in European starlings, Sturnus vulgaris”. Animal Behaviour, 2022. USDA Agricultural Publication.

13. Linz, op. cit.

14. Zuccon, D. et al. “Phylogenetic relationships among Palearctic – Oriental starlings and mynas”  Zoologica Scripta 10 April 2008 Volume 37 No. 5 pp 469–481.