Timber Rattlesnake

The coiled position is not necessarily for an imminent strike. It is mostly a defensive posture.

Common Name: Timber Rattlesnake, Canebrake Rattlesnake, Banded Rattlesnake, Black Rattlesnake, Eastern Rattlesnake – The name ‘timber’ describes the snake’s preferred habitat of rocky hills and forest uplands. The species is one of several that employ the rattle’s auditory warning.

Scientific Name: Crotalus horridus – A crotalum is one of a pair of small cymbals that were used in antiquity to make a clicking noise; the castanet is a vestige. The generic name derived from it refers to the clicking noise made by the segments of the rattle. The species name horridus, in spite of its seeming etymological association with ‘horrid,’ has nothing to do with either the human perception of the snake or its venom. Horridus is Latin for ‘rough’ or ‘bristly’ and refers to the rough appearance of the scales, each of which has a raised keel-like edge in marked contrast to the smooth skin of many snakes. [1]

As the only large and relatively common venomous snake in the Appalachian Mountains, the timber rattlesnake evokes both existential fear and an abiding respect from all who cross its path. There is certainly justification for these perceptions: its size ranges from 3 to 5 feet (the record is 6 feet); its venom is exuded in copious quantities through long and penetrating fangs; its potentially lethal strike is launched at lightning speed, almost too fast for the eye and certainly too fast for the reflexes; and its bite can be deadly to humans if untreated. [2] However, the incidence of timber rattlesnake strikes on humans is vanishingly small, resulting in one or two fatalities per decade nationwide, the most common cause being the handling of snakes as part of a religious ceremony. [3] The reason for the disparity between the potential for injury and the incidence of injury is that the timber rattlesnake is docile and will strike only if repeatedly provoked and threatened.

Ophidiophobia, the fear of snakes, is the most common type of herpetophobia, the fear of reptiles in general. This fear is innate and almost certainly the result of evolution, perhaps extending back to the earliest mammals huddling in dark recesses to escape predatory dinosaurs. [4] Fear of snakes was reinforced in primates that evolved as tree dwellers, where constricting snakes followed them in search of a meal. Recent research on macaque monkeys in Japan revealed that a region of the brain unique to primates called the pulvinar was especially sensitive to sighting snakes. Furthermore, monkeys that were raised in captivity without prior exposure to snakes displayed fear on first encounter. [5] Culturally, the serpent became a symbol of pernicious influence, chosen by the writer of Genesis as the tempter of Eve. Consumption of the fruit of the tree of knowledge led to expulsion from the Garden of Eden and God’s proclamation that the serpent would always crawl on its belly and eat dust. [6] Fear of snakes is embedded in the brain’s amygdala along with the fight or flight response that triggers panic action. However, cognition based on information stored in memory can override fear, a point enunciated by Roosevelt in his first inaugural address: “The only thing we have to fear is fear itself.”

The timber rattlesnake can be up to 6 feet long.

The timber rattlesnake is a consummate predator, well endowed with both the sensory tools and the physical agility to sustain its wholly carnivorous nutritional needs. As a pit viper, it has the namesake opening, or pit, just below the feliform vertical eye slit; the pit is the primary means of detecting prey. The sensory organ in the pit is a heat receptor, capable of detecting a 1°C difference at a range of about one foot. This is both necessary and sufficient to detect and engage its warm-blooded prey during the preferred nocturnal forays, when the cooler air accentuates the temperature differential. The strike is executed by the reflex-quick straightening of the lateral muscles to transition from either an S-shaped or coiled stance to full-length extension; the fanged triangular head is projected about half of its body length. In other words, a 4-foot snake can strike at 2 feet. Contrary to popular folklore, the coiled position is not a strike prerequisite, though the snake will typically assume this posture in anticipation of a mammal’s traverse. Following a successful strike, the olfactory sensors on the forked tongue locate the victim’s head by the odors emanating from its mouth so that it can be swallowed head-first, a preference based on the way the limbs of the prey fold back against the body. Digestion is almost total; the gastric fluids of the rattlesnake dissolve everything including the bones, adding about 40 percent of the snake’s body weight annually. The prey consists almost entirely of small mammals. A 1939 survey in George Washington National Forest, which included the capture and evisceration of 141 timber rattlesnakes, found that the stomach contents were 38% mice, 25% squirrels and chipmunks, 18% rabbits, 13% birds, and 5% shrews; one had eaten a bat. [7]

Two male snakes “wrestling” to win the heart of a nearby female.

Sex is vitally important for nearly every living thing on planet Earth. While perhaps historically overemphasized and of late deemphasized among humans, it is the essence of evolution. The random genetic mixing that results when male and female gametes unite to form the zygote is how speciation (including our own) occurred. This is equally true for timber rattlesnakes, who take sex pretty seriously. While many snakes spend the cold winter months in a communal burrow called a hibernaculum, they range separately in search of prey for the remainder of the year. About every second year, females over the age of five years will release pheromones in the spring or early summer as they ply the leaf litter pathways of their home turf. The scent is strong enough to attract any and all males that happen by. The stage is then set for one of the most intriguing contests for the right to breed among all animals. Lacking arms, legs, and claws, and having only venomous fangs for teeth, the males engage in what amounts to the timber rattlesnake version of an arm-wrestling contest. Rising upward in intertwined arabesque coils, they try to push each other over. The prize goes to the one with the most stamina as the loser retreats dejected from the field. On first encountering the snakes depicted in the photograph, I was convinced that it was a male-female pre-coital ritual until learning of its even more surprising purpose from an expert. [8] Following insemination with one of the successful male’s two copulatory organs, called hemipenes, the female gives birth to about a dozen young who are immediately on their own to face the world armed only with venom and slithering stealth. Most will not survive.

The signature rattle is a curiosity in its constitution and a conundrum from the standpoint of how it may have evolved; rattlesnakes are found only in the Western Hemisphere. The rattle starts as a bell-shaped horny protuberance called a button at the end of the tail. Every time the snake molts, which ranges from one to five times a year according to age and growth rate, the caudal end remains attached to form a segment of the rattle; the rattle grows in length by one segment for each molt. While it would theoretically be possible to count the number of times that the snake had shed its skin by counting the segments that constitute the rattle and thereby estimate the snake’s age, in actual practice this is fallacious. The rattle is loosely attached at each of the segments so that the assembly is subject to periodic breakage; it is not unusual to find a detached rattle segment on the trail. The conundrum associated with the rattle is that the rattlesnake employs both aposematism and crypsis simultaneously.  The purpose of the rattle is ostensibly to ward off an attack by a potential predator, an aposematic behavior. However, their primary predators – which include hawks, owls, coyotes and foxes – are apparently not put off by the warning of the rattle. King snakes, the preeminent rattlesnake predators, are immune to the toxins of the rattlesnake. The defensive behavior of rattlesnakes in the presence of a king snake does not involve the rattle in any way; the midsection is arched with the extremities held to the ground in an attempt to club the attacker. Experiments have revealed that the smell of the king snake triggers this response. [9]
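
Since the argument turns on simple arithmetic, a brief sketch may make the point concrete. It is illustrative only: the ten-segment rattle and the helper function are hypothetical, and the one-to-five molts per year is the range quoted above.

```python
# A minimal sketch (illustrative numbers only) of why counting rattle
# segments is an unreliable way to age a timber rattlesnake: each molt
# adds one segment, but molt frequency varies from one to five times a
# year, and segments routinely break off.

def age_range_from_segments(segments, molts_per_year=(1, 5)):
    """Return the (youngest, oldest) age in years consistent with an
    intact rattle of the given segment count."""
    slowest, fastest = molts_per_year
    return segments / fastest, segments / slowest

youngest, oldest = age_range_from_segments(10)
print(f"An intact 10-segment rattle could belong to a snake anywhere "
      f"from {youngest:.0f} to {oldest:.0f} years old;")
print("breakage means the snake could be older still.")
```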

Charles Darwin was also perplexed by the peculiar rattle of the American snakes. He wrote that “Having said thus much about snakes, I am tempted to add a few remarks on the means by which the rattle of the rattle-snake (sic) was probably developed. Various animals, including some lizards, either curl or vibrate their tails when excited. This is the case with many kinds of snakes. Now if we suppose that the end of the tail of some ancient American species was enlarged, and was covered by a single large scale, this could hardly have been cast off at the successive molts. In this case it would have been permanently retained, and at each period of growth, as the snake grew larger, a new scale, larger than the last, would have been formed above it, and would likewise have been retained. The foundation for the development of a rattle would thus have been laid; and it would have been habitually used, if the species, like so many others, vibrated its tail whenever it was irritated. That the rattle has since been specially developed to serve as an efficient sound-producing instrument, there can hardly be a doubt; for even the vertebrae included within the extremity of the tail have been altered in shape and cohere. But there is no greater improbability in various structures, such as the rattle of the rattle-snake.” [10] The improbable evolution of the rattle had to have a provenance unique to the Americas; there are no rattlesnakes anywhere else. There must therefore have been a predatory threat to the snakes that created the evolutionary rattle warning behavior. It was not human predation, as humans did not cross the Beringia land bridge from Eurasia until about 10,000 years ago. The only reasonable explanation is that there was a snake predator among the extinct megafauna of the pre-human Tertiary Period and that the rattle developed as an effective tool to ward off that predator, presumably as a warning that the venom was, while perhaps not deadly, certainly unpleasant.

The black variant with keeled scales to prevent reflection and improve stealth.

Timber rattlesnakes, for the most part, are colored with earth-tone banded markings to blend with the browns and blacks of the forest; this is the camouflage of crypsis, which can be employed to deceive prey but is equally useful as concealment from predators. However, there are at least two different cryptic color variants: the canebrake rattlesnake, once considered a separate species, is more brightly colored to match its cane field habitat, while a much darker, predominantly black variant is an adaptation that favors nocturnal hunting. The stealth of coloration is enhanced by the snake’s keeled scales, each having a central ridge that interrupts the scintillating sheen of reflectance seen on snakes with smooth, keel-less scales (the etymology of the species name horridus). The overall effect is that the snake is well concealed from its prey, but also from its predators. The fundamental question remains: why did the rattle evolve?

The venom of the timber rattlesnake poses a different evolutionary question, one that has generated several hypotheses as to its origins. Darwin wrote that “It is admitted that the rattlesnake has a poison fang for its own defense, and for the destruction of its prey” but offered no specifics as to its likely evolutionary origin. [11] Current thinking is that these snakes evolved from large tree-dwelling constrictors some 30 million years ago. When the climate changed so as to promote the grassy savannahs, the snakes became smaller and ground dwelling; some evolved a venomous chemistry in their saliva that promoted hunting and therefore their fitness to survive. Snake venom evolved as a complex mixture of proteins; depending on the species of snake, it may have a predominantly neurological effect or a predominantly vascular effect. Viper venom is of the latter category; its most obvious and potentially fatal effect is the disruption of blood circulation due to coagulation. From the standpoint of its intended small mammal prey, the venom achieves its objective of immobilization prior to consumption. While the venom can be and to some extent is used against predators, it is not very effective for that purpose. The king snake is immune to rattlesnake venom, and other predators are either unaffected or able to avoid its application. One firsthand account reports that a wild turkey used both feet to hold down a timber rattlesnake that was “repeatedly striking at the bird’s long, armored legs and folded-in wings, but to no avail.” The turkey eventually killed the snake by cutting it through at the neck and then ate it. [12] Humans are another matter.

In any given year, approximately 45,000 people are reported to have been bitten by snakes in the United States; 6,000 of these bites are from venomous snakes and fewer than 10 result in fatalities, due almost entirely to the Eastern and Western Diamondback Rattlesnakes. A larger number of domesticated animals are also bitten, though the numbers are of questionable merit as reporting is arbitrary and not required by law. The symptoms of snakebite vary according to the size of the snake and the amount of envenomation; about one fifth of venomous snakebites are inflicted without the transfer of venom. This may be due to a dearth of venom after a recent kill or to an intentional forbearance in order to preserve the venom for a future kill. The immediate symptoms of envenomation by a rattlesnake include intense pain at the point of penetration, edema, and hemorrhaging. As the venom spreads through the body in the first few hours, the swelling and discoloration become more pronounced and systemic cardiovascular distress causes weakness, nausea, and a diminution of the pulse to near imperceptibility. In the worst cases, a comatose state and death can result. In the twelve-to-twenty-four-hour period that follows, the affected limb suppurates and swells enormously, a condition that can also lead to cardiac arrest. In most cases, the symptoms abruptly cease after about three days as the body neutralizes the toxins. [13]
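
Worked through as a back-of-the-envelope calculation, the figures quoted above show just how rare a fatal outcome is. The sketch below simply restates the paragraph’s approximate numbers as ratios; the variable names are arbitrary.

```python
# A minimal sketch using the approximate figures cited in the text
# (not an independent data set) to put snakebite risk in proportion.

reported_bites = 45_000   # reported snakebites per year in the US
venomous_bites = 6_000    # bites attributed to venomous snakes
fatalities = 10           # upper bound on annual fatalities
dry_bite_fraction = 0.20  # about one fifth of venomous bites transfer no venom

print(f"Venomous share of all bites:   {venomous_bites / reported_bites:.0%}")
print(f"Fatal share of venomous bites: under {fatalities / venomous_bites:.2%}")
print(f"Estimated true envenomations:  {venomous_bites * (1 - dry_bite_fraction):,.0f}")
```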

What to do in the case of a venomous snakebite is, and always has been, a matter of considerable conjecture. Traditionally (the cowboy hero western paradigm), a tourniquet is established between the bite and the heart to arrest the flow of blood-borne toxin, the area of fang penetration is cut open to afford better access, and oral suction is applied to extract the venom. Snakebite kits were (and probably still are) sold with a razor blade and a suction cup to carry out this procedure with some efficacy. According to current thinking, the cut-and-suck method does not work very well, though human trial data is probably nonexistent. But the logic against it is compelling. Applying a tourniquet concentrates the venom in a smaller area, where the damage will be more profound; it is actually better to allow the body to dilute the venom and diminish its effects. The location of the penetration is not necessarily where the venom is concentrated, as the snake’s fangs are long and curved; cutting will likely only result in a greater potential for infection. Suction is not a good method for removing the viscous venom, as it will have immediately permeated the tissue to the extent that it cannot be extracted with vacuum pressure. The generally accepted procedure at present tends to a more plausible and less radical approach. After getting the victim clear of the immediate vicinity of the snake, the bite area should be cleaned with antiseptic wipes (if available), any jewelry or tight-fitting clothing should be removed to allow for swelling, and the victim should then be transported immediately to a medical facility for the administration of antivenom, which is now widely available. In the event that the snake bite has occurred in a remote area, the victim should be moved, carried if possible or walking slowly if not, to the closest point of egress where medical attention can be obtained. [14] However, the only certain way to ensure survival from the bite of a timber rattlesnake is to not get bitten in the first place; if you see a timber rattlesnake on the trail, give it a wide berth.

References:

1. Simpson, D. Cassell’s Latin Dictionary, Wiley Publishing, New York, 1968, pp 159,279.

2. Behler, J. and King, F. National Audubon Society Field Guide to North American Reptiles and Amphibians, Alfred A. Knopf, New York, 1979, pp 682-689

3. “Snake-handling W.Va. preacher dies after suffering bite during outdoor service”. The Washington Post. The Associated Press. May 31, 2012.

4. Öhman, A. and Mineka, S. “Fears, Phobias, and Preparedness: Toward an Evolved Module of Fear and Fear Learning” Psychological Review, 2001 Vol. 108 pp 483-522.

5. Hamilton, J. “Eeek, Snake! Your Brain has a Special Corner Just for Them” National Public Radio All Things Considered, 28 October 2013.

6. The Holy Bible, Revised Standard Edition, Thomas Nelson and Sons, Camden, New Jersey, 1952, p 3 Genesis 3:14.

7. Linzey, D. and Clifford, M. Snakes of Virginia, University of Virginia Press, Charlottesville, Virginia, 1981, pp 134-138.

8. Demeter, B.  Herpetology expert for the Smithsonian Museum of Natural History. Private communication.

9. Linzey and Clifford, Op. cit.

10. Darwin, C. The Expression of the Emotions in Man and Animals, D. Appleton & Company, New York, 1872, pp 102-103.

11. Darwin, C. On the Origin of Species, Easton Press special edition reprint, Norwalk, Connecticut, 1976. p 166.

12. Furman, J. Timber Rattlesnakes in Vermont and New York, University Press of New England, Lebanon, New Hampshire, 2007.

13. Linzey and Clifford, pp 124-126

14. American Red Cross First Aid/CPR/AED Participants Manual pp 96-98. Available at https://www.redcross.org/content/dam/redcross/atg/PDFs/Take_a_Class/FA_CPR_AED_PM_sample_chapter.pdf

Celandine, Greater and Lesser

Greater Celandine

Common Name: Celandine or Greater Celandine (above) and Lesser Celandine (below) – Celandine is frequently called greater celandine to distinguish it from its unrelated namesake. The name is derived from the Latin chelidonia, ultimately from the Greek word for the swallow (the bird, not the verb). The purported reason is that celandine flowers bloom in early spring when swallows arrive in its original Mediterranean habitat and wilt when the swallows depart. Celandine is also called swallowwort due to this association, and tetterwort or nipplewort for its medicinal applications. Lesser celandine got its name from its superficial resemblance to the celandine, both having yellow flowers and proliferating in similar wet areas. It is sometimes called fig buttercup, or pilewort for its use in treating piles, another name for hemorrhoids.

Lesser Celandine

Scientific Name: Chelidonium majus – The genus name of greater celandine means swallow, as discussed above. The species name majus is Latin for ‘greater.’ It is in Papaveraceae, the Poppy Family. Lesser celandine is Ficaria verna. The genus is from ficus, the Latin word for fig, a reference to its fig-shaped root tubers. The species name verna accounts for its spring (vernal) blooming. It is a member of Ranunculaceae, the Buttercup Family.

Potpourri: Even though the greater and lesser celandines share the same name, they are not closely related taxonomically. While both are in the Order Ranunculales of flowering plants, they are in two different families: Poppy and Buttercup. There is, however, a good reason for the mistaken identity. Aside from growing in similar wet habitats as weedy plants, they share a long history of similar uses by humans for medicinal applications. It is likely that early herbalists who sought plants for potions and poultices looked for yellow flowers and found one or the other. Since greater celandine was almost certainly the first to be exploited for its chemical compounds, the addition of lesser celandine became a useful mnemonic for herbalists. Because both are overly successful in reproduction, spreading out from a small clump to take over relatively large areas, they are both subject to the universal pejorative for anything that grows where humans don’t want it to. A weed is “a form of vegetable life of exuberant growth and injurious effect” according to Merriam-Webster’s Third New International Dictionary. Lesser celandine is by far the more notorious of the two and is considered an invasive species in some areas.

Another reason for referring to celandine as greater celandine is to distinguish it from the celandine poppy, also known as the wood poppy, a plant indigenous to North America. Celandine poppies are in a different genus (Stylophorum diphyllum) but are otherwise very similar in terms of chemical, and therefore medicinal, properties, common characteristics of many Poppy Family plants. [1] In all likelihood, the original name of this flower was wood poppy, and due to its superficial resemblance to the greater celandine, it was given the alternative name celandine poppy by settlers moving inland from the original colonies. This has some credence as the plants are found mostly in the Midwest, which was subject to waves of migration from the New England states after the passage of the Northwest Ordinance in 1787, an early act of the Confederation Congress. The use of the celandine name for both the lesser celandine and the celandine poppy is almost certainly because it was well known to the many settlers who came to the New World from Europe. Greater celandine was (and is) one of the more common herbal remedies for a wide range of ailments in the Old World.

Greater celandine, like most herbal remedies, was adopted by apothecaries based on the trial-and-error oral tradition that singled out natural plant medicines. Prior to the scientific revolution in chemistry of the nineteenth century that led to pharmaceutical formulations, nature was the only choice. However, even in the modern era of big pharma, many if not most drugs are synthesized based on plant (and fungal) chemistry. Since every plant needs to grow large enough to reproduce, many evolve smells and tastes to ward off predators ranging from larvae to deer. Chemicals that evolved against bacteria and microbes in particular can be good candidates for human medicines with the same effect. Greater celandine exudes a bright yellow-orange liquid from its roots and stem. This likely drew attention, since yellow was one of the colors of the four humors postulated by the Greeks of antiquity to mediate human health, a scheme that dominated Europe in the Middle Ages. Based on the formulation of Galen in the second century CE, red blood, yellow bile, black bile, and white phlegm were associated with sanguine, choleric, melancholic, and phlegmatic attributes. [2] Within the religious construct called the Doctrine of Signatures, a plant that had yellow juice must surely have been put there by God as a natural source of yellow bile. Greater celandine was therefore one of the more important herbals of history.

What was greater celandine used for? John Gerard, one of the earliest and most well-known herbalists in Europe, credits Aristotle with the observation that “the eies (sic) of Swallows that are not fledge, if a man do prick them out, do afterwards grow again and perfectly recover their sight.” What to make of this? Treating baby bird eye disorders in the fourth century BCE is probably not to be taken literally, the original meaning lost over years of translation and interpretation. Gerard continues with “The juice of the herbe is good to sharpen the sight, for it clenseth and consumeth away slimie things that cleave about the ball of the eye and hinder the sight.” [3] The shrine of Saint Frideswide, the patron saint of Oxford, England, who was reputed to be a “benefactress of the blind,” is decorated with a bas-relief of greater celandine, presumably for its curative power, since the flower is a prolific weed in and around Oxford. She supposedly called forth a spring in a village near Oxford whose waters were used as a wash to help restore vision, one basis for her sainthood. The eye cure remedy is unlikely, as the yellow-orange liquid exuded from greater celandine is highly corrosive and can only have blinded those who tried it, swallows and all. [4]

Greater celandine has been used as a folk medicine across Europe and eastward into China for millennia, and in North America after its introduction by advancing settlers in the eighteenth century. The root and stem juices were used topically to treat a variety of skin problems including warts, ringworm, and eczema. In modern medical practice, salicylic acid and/or cryotherapy (freezing) are similarly used, a measure of the strong reactive chemistry of the plant. Taken internally, it was not surprisingly used to treat yellow jaundice, a liver ailment that could suggest a lack of adequate yellow bile in need of augmentation. There has been a neo-renaissance in the use of greater celandine in the treatment of cancer over the last several decades. This takes the form of what amounts to natural chemotherapy, using the chemicals chelerythrine, coptisine, sanguinarine, and citric acid produced by the plant for its own defense to kill tumorous cancer cells. [5] The most well-known greater celandine-based product is Ukrain (named for the country), which was developed in 1978 and successfully tested in several small-sample studies for its effectiveness in treating pancreatic cancer. [6]

As an herbal remedy, greater celandine is not subject to the rigorous testing and certification necessary to qualify as a drug. It can therefore be procured over the counter without a physician’s prescription for use according to alleged and/or perceived (placebo) benefits. It is promoted for intestinal digestive problems, as a mild sedative, to prevent gallstones, and to treat liver disease. This is in addition to its long-standing use to treat skin problems like warts and to reduce eye irritation, despite the inconsistency of these countervailing therapies. However, treatment with greater celandine derived herbals is controversial. There is some indication that it causes hepatitis, a liver disease it is supposed to cure (discovered when patients using it got better when the treatment stopped). It is a known skin hazard, causing rashes and itching, and in some cases, severe allergic reactions. It is poisonous for dogs and some farm animals. [7] It is telling that the European Medicines Agency concludes that “the benefit-risk assessment of oral use of Chelidonium majus must be considered negative.” [8]

Lesser Celandine the beautiful

Lesser celandine is a doppelgänger of its greater cousin. It is a harbinger of spring in two ways. On the positive side, it blooms in profusion with a delicate, yellow-rayed flower arrayed on bright green sculpted leaves that evokes the color and warmth of the sun to erase the drab grays of winter. Since it is a variety of buttercup, the petals have the characteristic glow that is the subject of childhood play in determining preference for butter by its reflection on cheek or chin.  However, lesser celandine doesn’t know when to stop, spreading outward in all directions until it is a green blanket that covers everything. Simply put, it is invasive―an early reminder of the summertime onslaught of plants that range from Japanese stilt grass to dandelions. On its European home turf, it is beloved and eulogized as the very essence of spring. In North America, it is a weed, choking out native flowers and replacing them with a striking, but nonetheless monoculture, greensward. The good Doctor Jekyll and the selfsame but sinister Mister Hyde.

Lesser Celandine the scourge

The US Department of Agriculture defines a noxious weed as “any plant or plant product that can directly or indirectly injure or cause damage to crops (including nursery stock or plant products), livestock, poultry, or other interests of agriculture, irrigation, navigation, the natural resources of the United States, the public health, or the environment.” Just about anyone with a lawn or living near a woodland stream will agree that lesser celandine qualifies. It was introduced into the United States sometime before 1867, when the first documented specimen was recorded in Pennsylvania. It was almost certainly planted as an ornamental; its aesthetic qualities enhance the color and seasonal variety of flower gardens. Like many introduced species, its ability to spread and dominate its new habitat was neither expected nor even realized. And, like most invasive species, it took decades to radiate from its original site, its population growing geometrically. The USDA estimates that 79 percent of the land area of the United States is suitable habitat and that it has an 82.6 percent chance of becoming a “major invader” if introduced. [9]

There are two reasons why introduced plants (and animals) become invasive. The first is that in most cases, new introductions have none of the environmental constraints that were extant on their home turf. It is a tenet of ecology that all living things are constrained from exponential growth by competition for resources. In the real world where resources are limited, population growth is constrained to a finite limit called the carrying capacity, which it approaches by following what is called a logistic curve. Every species occupies a biological niche that includes all of the resources available to it in its ecosystem, a term coined in 1935 to refer to both the physical and biological surroundings. When a species is taken from the ecosystem in which it evolved and placed in a new one, the rules of the game change. The checks imposed at home are removed and growth continues until it is stopped by the ecology of the new habitat. [10]
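
The carrying capacity argument can be made concrete with a short sketch of the logistic model itself. This is a minimal illustration, not a model of lesser celandine: the growth rate, carrying capacities, and starting population are invented values chosen only to contrast a constrained habitat with an effectively unconstrained one.

```python
# A minimal sketch of logistic population growth, dN/dt = r*N*(1 - N/K),
# stepped one year at a time (Euler approximation).  Growth is nearly
# exponential while N is small and levels off as N approaches the
# carrying capacity K.

def logistic_growth(n0, r, k, years):
    """Return the population size for each year, starting from n0."""
    population = [float(n0)]
    for _ in range(years):
        n = population[-1]
        population.append(n + r * n * (1 - n / k))
    return population

# At home the effective carrying capacity is low; freed of its native
# constraints, an introduced population behaves as if K were far larger
# and keeps climbing until the new habitat imposes its own limit.
constrained = logistic_growth(n0=10, r=0.8, k=1_000, years=20)
unconstrained = logistic_growth(n0=10, r=0.8, k=100_000, years=20)

print(f"Constrained population after 20 years:   {constrained[-1]:,.0f}")
print(f"Unconstrained population after 20 years: {unconstrained[-1]:,.0f}")
```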

The tuberous roots of Lesser Celandine

The second factor associated with invasive behavior is the ability of the introduced species to spread and multiply so as to dominate the new environment. Lesser celandine has three methods of propagation that almost guarantee survival and promote spread. In addition to the seeds that define all angiosperms, it has not one but two means of vegetative cloning: the roots form small tubers and the stems form bulbils in the leaf axils. Both become detached and are spread by mowing, digging, and, most importantly, flowing water; the densest patches are found in wet areas for this reason. Once it gets established, it is almost impossible to get rid of. Anything short of digging up the entire plant, roots and all, while being careful not to drop any bulbils, will only result in a brief hiatus of a year or maybe two. Only a powerful herbicide like glyphosate will truly excise it. [11]

In its European homeland, where it is naturally kept in check, lesser celandine is not only tolerated but admired; in the UK, revered might be more appropriate. Described as a “sweet little plant,” it appears at the very beginning of spring (which is how it crowds out the competition) with bright, sun-like flowers; it is sought out by gardeners and bred by horticulturalists. There are over a hundred cultivars that range from “aglow in the dark” to “yaffle” and include dusky maiden, mister brown, and the ghost. [12] The poet William Wordsworth admired the lesser celandine, writing “It is remarkable that this flower, coming out so early in the Spring as it does, and so bright and beautiful, and in such profusion, should not have been noticed earlier in English Verse.” [13] So he proceeded to write a poem that begins with:

                                     There is a Flower, the Lesser Celandine,

                                     That shrinks, like many more, from cold and rain;

                                      And, the first moment that the sun may shine,

                                      Bright as the sun itself, ’tis out again! [14]

Had Wordsworth been an American poet, the leitmotif might have been beauty and the beast instead of sunshine.

References:

1. Niering, W. and Olmstead, N. National Audubon Society Field Guide to North American Wildflowers, Alfred A. Knopf, New York, 1998, pp 670-675

2. Parker, S. Kill or Cure, Illustrated History of Medicine,  DK Publishing, New York, 2013, pp 106-107.

3. Gerard, J. Gerard’s Herball – Or Generall Historie of Plantes, London, 1633, pp 39-41.

4. Mabey, R. Weeds, Harper Collins, New York, 2010, pp 188-194.

5. Foster, S. and Duke, J. Medicinal Plants and Herbs, Houghton-Mifflin, New York, 2000, p 105.

6. Memorial Sloan Kettering Cancer Center. https://www.mskcc.org/cancer-care/integrative-medicine/herbs/ukrain

7. “Celandine”. American Cancer Society. August 2011. https://web.archive.org/web/20150423221233/http://www.cancer.org/treatment/treatmentsandsideeffects/complementaryandalternativemedicine/herbsvitaminsandminerals/celandine

8. “Assessment report on Chelidonium majus”, European Medicines Agency, Committee on Herbal Medicinal Products (HMPC), EMA/HMPC/369801/2009, 13 September 2011.

9. “Weed Risk Assessment for Ficaria verna (Ranunculaceae) – Fig buttercup”, Animal and Plant Health Inspection Service, United States Department of Agriculture, August 12, 2015.

10. Nowicki, S. “Biology: The Science of Life” The Teaching Company, Chantilly, Virginia, 2004.

11. “Lesser celandine, Ficaria verna”. Washington State Noxious Weed Control Board. https://web.archive.org/web/20160324080851/http://www.nwcb.wa.gov/detail.asp?weed=185

12. http://www.johnjearrard.co.uk/plants/ficariaverna/genus.html     

13. Mabey, op cit.

14. https://en.wikisource.org/wiki/Poems_(Wordsworth,_1815)/Volume_2/The_small_Celandine

Catoctin Formation

After about 600 million years, the Catoctin Formation still looks like lava.

Catoctin Formation: A catoctin is defined as “a residual hill or ridge that rises above a peneplain and preserves on its summit a remnant of an older peneplain,” where a peneplain is an erosion surface of considerable area and slight relief. [1] It is derived from Catoctin Mountain in north-central Maryland, where the Catoctin Formation was first noted as consisting of a geologic plain rising above a plain. Some sources contend that a tribe of Native Americans called Kittocton was resident in the general area, and if that is the case, it is almost certain that Catoctin is a toponym. [2] However, the existence of a tribe named Kittocton is doubtful, as it is not listed by the National Geographic Society. [3] Many geographical names came into common parlance without any records―ancient wooded hill, land of many deer, and speckled mountain have also been proffered as the meaning of Catoctin in one of the now lost Native American languages.

Potpourri: The Catoctin Formation is the most recognizable geological feature of the Blue Ridge Province of the Appalachian Mountains. Its origin as lava that flowed out of fissures in the earth’s crust is evident in the sequential cascades that solidified as they spread over the pre-Cambrian landscape about 600 million years ago (mya). Even though it was named for Catoctin Mountain, where it can be seen in only a relatively few and out of the way places, it is the capstone rock assemblage in Shenandoah National Park. The Marshall Mountains dominate the northern section of the park, benches of lava flowing outward to form the roadbed for Skyline Drive. White Oak Canyon, a cynosure of the central section, follows the circuitous lava flow path. The Appalachian Mountains are over a billion years old; in contrast, the 60-million-year-old Rocky Mountains and even younger Himalayas are relative newcomers to terra firma. The Catoctin Formation is the keystone that connects the arc of the ancient past to the ever-evolving present. [4] How could magma from the earth’s mantle flow through and then out over the crust in a place that is now as placid and peaceful as a national park in western Virginia?

Plate tectonics emerged as a coherent and scientifically supported theory of geology in the middle of the last century. It was first postulated by the German meteorologist Alfred Wegener in 1915, based on the conformity of the contours of the western coastline of Africa and the eastern coastline of South America. Supporting observations of geologic and fossil similarities that straddled not only South America and Africa, but also Australia and India, could only be explained if these areas had at one point been connected in a single land mass, eventually named Gondwanaland for a region in India. The idea that massive continent-sized chunks could somehow move around, floating on top of a pool of molten rock agitated by planetary rotation and lunar gravity, and plow through oceanic crust like an icebreaker seemed too fanciful to many geologists until the middle of the century, when further research revealed a viable mechanism. Seafloor spreading was confirmed by the observation of magnetic reversals recorded in solidifying magma at the mid-ocean ridges, providing a source of new crust. The recurrence of earthquakes in known prone zones led to the notion that plate movement was involved. The term subduction was given to the sliding of great arcs of oceanic crust under adjacent, less dense regions of crust to be remelted as magma in the mantle. With a supply of new magma emerging from the ridges and a recycling facility for old magma in subduction zones, there was no need to plow through anything. [5] So what has all this got to do with the Catoctin Formation?

The tectonic plates, many with lower density sections that are the land mass continents contained within their boundaries, have been floating about, driven by the chaotic forces of physics acting on the mantle, for most of Earth’s 4.5-billion-year history. When two plates with less dense continental crust float into each other, subduction is not an option and a headlong crash results. When an irresistible force meets an immovable object, something has to give, and the only option is skyward. The result is an orogeny, from the Greek oros, meaning mountain. The mountain-building orogeny that created the original Appalachian Mountains about 1.2 billion years ago is named Grenville for a small town in Quebec on the Canadian Shield, the central core of the North American plate. Wegener’s preliminary hypothesis of the southern land mass Gondwanaland was later expanded to include a second, northern land mass named Laurasia; the two joined to form the supercontinent Pangaea (Greek for ‘all earth’) about 300 mya. Over the last several decades, geological analysis of bedrock on a global scale has concluded that the movement of plates reassembles at least 75 percent of the jigsaw puzzle of landforms into a supercontinent roughly every 750 million years. Pangaea was preceded by Rodinia, a name derived from the Russian word rodit, meaning to give birth, as it was at first thought that Rodinia was the original supercontinent that gave birth to all others. Further research has posited an additional supercontinent named Columbia that preceded Rodinia, with evidence of additional combinations that extend as far back as the Proterozoic Eon, which started 2.5 billion years ago. [6]

Catoctin Formation dike through older bedrock

The bedrock of the Appalachian Mountains was thus the result of the collision, about 1.2 billion years ago, of Laurentia, the land mass containing North America, with Baltica, the land mass containing northwestern Eurasia, a union that gave rise to what was to become Laurasia (North America and Eurasia). When Rodinia started to break apart about 700 mya, fissures opened, allowing magma in the form of lava to flow upward out of the mantle, through the bedrock of the Grenville orogeny, and spread out over its surface. This is the fons et origo of the Catoctin Formation. Continued expansion, in a manner similar to the opening of the Atlantic Ocean in the present geologic age, resulted in its precursor, named Iapetus for the father of Atlas in Greek mythology (the Atlantic itself being named for Atlas). Initially, the cooled magma was covered by rough gravel at the shallow water’s edge as the mountains were worn away by erosion. As the ocean expanded, the now submerged Appalachian bedrock with its lava coating became covered by smaller particles, and eventually by the fine silted sand of mid ocean. The gradation of sediments from stone to pebble to sand on top of the Catoctin Formation is evident in the present-day Weverton, Harpers, and Antietam formations that make up the Chilhowee Group. [7] Iapetus stopped opening and began to close about 400 mya, creating Pangaea from Laurasia and Gondwanaland with a series of three orogenies named Taconic, Acadian, and finally Alleghany as the various plates collided from north to south. The resultant Appalachian Mountains were probably as high as or higher than the Rockies at their peak uplift. Pangaea’s disassembly started in the Mesozoic Era roughly 200 mya and is still in progress, the once buried lava rocks of the Catoctin Formation now in full view after millennia of erosion of the once majestic mountains that created the coastal plain. [8]

Geology, the science dealing with the physical nature and history of the earth, has evolved extensively through the ages; even the rather obvious origins of lava have been misunderstood. While the Greeks and Romans appreciated the nature of lava and eruptions (the burial of Pompeii by the eruption of Vesuvius in 79 CE could hardly have been misinterpreted), the ensuing Dark Ages of biblical doctrine stifled the study of nature. According to Archbishop James Ussher of Ireland, the earth was created on Sunday, 23 October, 4004 years before Christ, and Noah’s flood was responsible for all current landforms. Even when science rebounded after the Renaissance, geology was especially difficult since its subject is mostly out of sight in tangled knots of rocky confusion. The noted German geologist Abraham Werner conceived that a universal ocean originally covered the earth and that all rock precipitated from it, dismissing the volcanic origins of lava altogether. His adherents, who included most geologists in the eighteenth century, were called Neptunists for Neptune, the Roman god of the oceans. The Vulcanists, named for the Roman god of fire and the forge, restored lava to its true provenance as magma emerging from the fiery mantle. The word lava came into wide use in the 17th century from the Italian dialect around Naples (near Vesuvius) and meant something like ‘falling,’ presumably from Vulcan’s home which had become a volcano. Lava, in current parlance that reflects decades of study, comes in three basic forms: a’a for rough, fragmented blocks; pahoehoe for smooth, undulating flows; and pillow for lava that emerges under water. A’a and pahoehoe are of Hawaiian origin, a reflection of the importance of the islands’ perennial lava flows to early studies in volcanology. The lava of the Catoctin Formation is primarily pahoehoe that flowed over dry land.

The primary constituent of the Catoctin Formation is basalt (from the Greek basanites, a type of slate used to test gold from basanos meaning test).  Basalt is an igneous (ignis is Latin for fire) rock, the generic name for any rock created directly from magma, the liquid rock of the mantle. Because of the low silica content, basalt has a low viscosity, so that the lava flow can move relatively quickly and travel as far as 20 kilometers from the source, which can be either a single vent or a long fissure.  Basalt is erupted at temperatures that range from about 2000 to 2100 °F, to become either a’a or pahoehoe depending on temperature and topography.  Basalt is the most abundant igneous rock in the earth’s crust, comprising almost all of the ocean floor. A rock is defined by the combination of minerals that it contains. A mineral is “a natural substance, generally inorganic, with a characteristic internal arrangement of atoms and a chemical composition and physical properties that are either fixed or that vary within a definite range.” [9] The primary minerals that make up the rock basalt are pyroxene and feldspar.

Pyroxene is from the Greek pyr and xeno, meaning “alien to fire.” The pyroxene of Catoctin Formation basalt is a complex of different minerals that are silicates of magnesium and calcium and which include iron and manganese. The general formula is X(Si,Al)2O6, where X can be calcium, sodium, iron, or magnesium. Magma that contains significant amounts of magnesium (Mg) and iron (Fe) is called mafic as an acronym for these elements. The other major component of magma consists primarily of feldspar and silica; it is called felsic according to the same logic. Feldspar, the other major constituent of Catoctin Formation basalt, is a complex of aluminum silicate minerals, i.e. containing aluminum and silica, combined with potassium (KAlSi3O8), sodium (NaAlSi3O8), or calcium (CaAl2Si2O8). Feldspar is derived from Feldspat, German for ‘field spar,’ referring to common rocks typically strewn about an open area that could readily be cleaved into flakes. Feldspar comprises over fifty percent of the earth’s crust. The similarity between pyroxene and feldspar in terms of elemental composition is due to the dominance of oxygen in chemical combinations. The earth’s crust is about 50 percent oxygen combined with 30 percent silicon and 8 percent aluminum, with iron, calcium, sodium, potassium, and magnesium making up most of the balance at 2 to 5 percent each. [10]

The basaltic lava flows that first emerged from the mantle during the breakup of Rodinia have been subject to 600 million years of change, including some millions of years under the Iapetus Sea and the crushing pressures of the assembly of Pangaea. The pressures and temperatures of deep burial and orogenies change the shape, structure, and properties of existing rocks; the name for the resultant rocks is metamorphic, literally ‘changed form.’ To provide an overarching order to the otherwise intricate complexities of the mineral combinations of individual rocks, they are subdivided into three general types. Igneous rocks of the magma came first, solidifying in the first days of the nascent Earth’s cooling. Water evaporated from the primordial oceans precipitated as rain over the lava lands, and the resulting erosion carried material grain by grain into the ocean to form sediments that gradually compacted under their own weight into sedimentary rocks. As the balancing of forces formed separate plates that drifted over the mantle, the resulting colossal forces changed, or metamorphosed, the igneous and sedimentary rocks. Sedimentary shale became slate and igneous basalt became metabasalt. The Catoctin Formation that remains is the result of an unimaginable journey that took it from the peak of the tallest mountains to the deep sea and back again. While it still retains its basic lava-like appearance in places, it is commingled with many other rock types with their own histories, and it has equally been subjected to differing environs that changed its core composition.

Catoctin Formation bounded by metamorphosed sandstones.

The Catoctin Formation has the colloquial name greenstone due to the gray-green coloration of many outcroppings, a result of its metamorphic journey. The Catoctin basalt is composed of phenocrysts (large crystals) of a plagioclase feldspar named albite in a fine-grained matrix of the minerals chlorite, magnetite, actinolite, pyroxene, and epidote. Epidote is a structurally complex mineral of calcium, aluminum, iron, and silicon [Ca2(Al,Fe)3(SiO4)3(OH)] that has a green color described as pistachio. It is this mineral that, when present, gives the Catoctin Formation its distinctive greenish hue. The sequential lava flows over an extended period are reflected in the diversity of the Catoctin Formation. The boundaries between the lava flows are marked by breccias, metatuffs, and metasandstones. Breccia is a rock comprised of smaller rock fragments cemented together by sand, clay, and/or lime; these rocks identify areas where a crust formed on a lava flow and was disrupted by subsequent flows. A tuff is a porous rock created by the consolidation of volcanic ash; the metamorphosed tuffs, or metatuffs, are attributed to rapidly moving clouds of hot ash. The metamorphosed sandstones, or metasandstones, mark the boundary between one lava flow, a period of erosion and sedimentation, and a second lava flow. [11]

References:

1. Webster’s Third New International Dictionary of the English Language, G. & C. Merriam Co. / Encyclopaedia Britannica, Inc., Chicago, 1971, pp 354, 1669.

2. http://www.npshistory.com/publications/cato/index.htm   

3. “Indians of North America” National Geographic, Volume 142, Number 6, December 1972

4. Gathright, T., Geology of the Shenandoah National Park, Virginia Department of Mineral Resources Bulletin 86, Charlottesville, Virginia, 1976, pp 19-25.

5. Cazeau, C., Hatcher, R., and Siemankowski, F. Physical Geology Harper and Row Publishers, New York, 1976, pp 374-393.

6. Meert, J. “What’s in a name? The Columbia (Paleopangaea/Nuna) supercontinent”. Gondwana Research. 14 December 2011, Volume  21 Number 4 pp 987–993.    https://www.gondwanaresearch.com/hp/name.pdf   

7. James Madison University Geology Notes –  https://csmgeo.csm.jmu.edu/geollab/vageol/

8. Schmidt, M. Maryland’s Geology, Schiffer Publishing, Atglen, Pennsylvania, 2010, pp 88-112.

9. Dietrich, R. Geology and Virginia, The University Press of Virginia, Charlottesville, Virginia, 1970, p 4.

10. Cazeau et al op cit.

11. USGS Geological Survey Bulletin 1265 “Ancient Lavas in Shenandoah National Park Near Luray, Virginia” https://www.nps.gov/parkhistory/online_books/geology/publications/bul/1265/sec2.htm

Cardinal

Male cardinal pausing between assaults on his reflection in window – Photo by A. Kholmatov

Common Name: Cardinal, Northern cardinal, Redbird, Common cardinal, Cardinal grosbeak – The eye-catching red color of the male plumage is almost identical to the color that distinguishes the echelon of ecclesiastical prelates that rank just below the pope in the Roman Catholic Church. While officially named the Northern cardinal to distinguish it from other members of the genus that predominate in Central and South America, its range from Maine to Florida and west to Texas leads to the more common use of cardinal throughout the United States.

Scientific Name: Cardinalis cardinalis – The genus and species names are the original Latin form of the word cardinal, derived from cardo, meaning “hinge.” The implication is that it is something of central importance, like the cardinals of Rome, the cardinal (N, S, E, W) directions, and the cardinal (1, 2, 3 …) numbers. The double genus-species designation connotes that the northern cardinal is the type species for the genus, which in a way does stress centrality.

Potpourri: The male northern cardinal is arguably the most recognizable and popular bird in North America. It was chosen as the official bird by seven states, foregoing uniqueness for panache. It is one of the few team names shared by two major professional sports teams―baseball in Saint Louis and football in Arizona. It is an official color of colleges ranging from MIT in Massachusetts to Stanford in California. The cardinal was chosen for its eye-catching, strident redness and not for any particular avian vitality, ubiquity, or singularity of song. The cardinal is not especially notable, just one of the many so-called songbirds of the order Passeriformes that flit from tree to tree in search of food, nest-building materials, or each other. And all the while, the female cardinal is swathed in brown feathers to match the colors of the trees and soils. [1] Why then is the male cardinal cloaked in cardinal red?

The Sacred College of Cardinals is the source of both the name and the color of the bird. The first use of the term cardinal to indicate a person of pivotal importance (literally one on whom things hinged, from the Latin cardo) was for the deacons who presided over the seven regions of Rome in the 6th century. These prelates eventually became a privileged class as Roman magistrates and adopted the red that had long been used in Roman society to indicate rank and importance. [2] Red has been a key color in almost every society in human history, from the red ochres used in cave drawings to the war paint of Native Americans. The red that later became the robes of royalty throughout Europe was a rare and expensive commodity, ranking just behind royal purple in prominence. The red that was symbolic of power and wealth in the Roman Empire was sourced from minuscule, sap-sucking insects of the genus Kermes that fed on oak trees in the Mediterranean basin; the insects were collected, crushed, and strained, and a great deal of painstaking labor went into making just a few drams of dye. The red bug goo color that passed from Roman centurion to cardinal in antiquity was, and still is, scarlet and not cardinal red.

So why are North American red birds called cardinals and not scarlets? The bird cannot have been seen by Europeans before the end of the 15th century, when they first reached the North American mainland. The striking red bird was almost certainly noticed by the French moving their bateaux up the Saint Lawrence River to lay claim to the region as New France. Suffering a dearth of settlers, the French government, directed by Cardinal Richelieu, chief minister to King Louis XIII, encouraged emigration starting in the first half of the 17th century. The new settlers who expanded along the Saint Lawrence River from Quebec City to Montreal were in a sense his agents, eventually renaming a tributary the Richelieu River. Naming a bird of Richelieu’s signature color cardinal would be equally apt. The cardinal bird name probably carried south with commerce and cultural contact to reach English colonists moving inland from Boston. No friends of persecuting papists, they may have favored the cardinal name in mockery. This is not outside the guardrails of the bawdy humor of the age; when Mark Twain was presented a scarlet robe on his receipt of an honorary doctorate at Oxford, he remarked “There is no such red as outside the arteries of an archangel.” [3] The bird is cardinal in both French and English, with only the pronunciation differing.

Cardinals have some characteristics that distinguish them as unusual when compared with the other perching birds of the Order Passeriformes, more commonly called songbirds. The most obvious is the pronounced color difference between the male and the female, a trait called sexual dimorphism. While there are subtle differences in the hue of plumage between the sexes of many birds, none takes it to the extreme of a scarlet red male and a forest brown female. One hackneyed rationale is that the male would draw predators away from the nest so that the female could remain hidden with the brood; more chicks would then survive to retain the color dichotomy in perpetuity. The female, as procreator, would therefore choose a more cardinal red mate to enhance the survival of her genes. This doesn’t make much sense, since mammal egg snatchers like foxes and ferrets cannot see red. While demonstrably true physiologically and experimentally, the reason mammals cannot see red (including bulls charging at capes) can only be a matter of conjecture. The operative theory is that mammalian origins in the shadows of the dominant dinosaurs were literally devoid of much light, but movement mattered; smell and hearing were paramount. Over evolutionary time, mammals retained only blue and green cones for rudimentary color vision, with a surfeit of rod cells for dim-light peripheral movement perception. (Red cones were regained by primates like us as a consequence of taking to the trees, facilitating the location of the bright-colored fruits that became their mainstay diet.) [4] The consequence is that the red male cardinal might as well be brown, since its movement is all that would matter to a predatory mammal. There are other cardinal predators, such as owls, hawks, and snakes, that do see red, but field studies show no correlation between the degree of male redness, referred to as “ornamentation,” and predator avoidance behavior. In fact, female cardinals have been observed fighting back against predation with no reliance on male participation. [5]

Mate choice is a more compelling reason for cardinal red. The selection of the most desirable male by a female has been well established in some species of birds. In New Guinea, there are male birds of paradise that put on elaborate feathered displays to impress females and male bowerbirds that build extravagant nests with colorful decorations that range from red fruits to green fungi as proffered bridal suites. [6] The elaborate tail of the peacock can have no other function than to impress the peahen. Mate choice, however, is not just for the birds. To a greater or lesser extent, it is pervasive throughout the animal kingdom from fruit flies to fruit bats and especially humans. Our very identity depends on a random sequence of mate choices made by parents and grandparents extending back through hundreds of generations. Mate choice can be defined as “any pattern of behavior, shown by members of one sex, that leads to their being more likely to mate with certain members of the opposite sex than with others.” In biological jargon, these are called the courter and the chooser. While there is no serious scientific disagreement about the existence of mate choice as an essential component of the birds and the bees doctrine, there is neither consensus about its actual mechanisms nor understanding of the way it evolved. [7] It is complex, inclusive of combinations of sight, smell, sound, and perhaps touch (but rarely, if ever, taste). For female chooser cardinals, some combination of sight for color and sound for birdsong is the most likely basis.

The unusual characteristics of birds were not lost on Charles Darwin, whose evolution epiphany was inspired at least in part by the different beak sizes and shapes of Galapagos Island finches. The importance of what have come to be known as Darwin's finches on his ultimate conclusions concerning survival of the fittest has been overstated. In visiting the islands of the archipelago, Darwin was struck by the similarities of a Galapagos mocking-bird to one called Thenca that he had recently seen in South America. On traveling to a second island and finding a third type of mocking-bird and observing that the indigenous giant tortoises were equally varied, he first posited that there must be something about isolated islands that promotes variations. In his field notes, he wrote that “such facts would undermine the stability of species.” It was only on his return to England with his collected finch specimens that an ornithologist named John Gould reached the conclusion that the finches were “so peculiar as to form an entire new group containing twelve new species.” [8] In the seminal work Darwin published about twenty years later, his thoughts on birds were much more nuanced. In a chapter entitled “Difficulties with the Theory,” he observes that “beautiful colours” and “musical sounds” must be due to sexual selection since “natural selection acts by life and death.” He concluded that structures created “for the sake of beauty” would be “absolutely fatal to my theory.” [9]

Darwin's radical theory of evolution was in direct contradiction to the Bible's origin story of the Great Flood and Noah's Ark, an issue that resonates to this day despite overwhelming DNA evidence of evolution's veracity. He purposely excluded any discussion of mankind's origins so as to mitigate shock and backlash from the ecclesiastical establishment of the Victorian Era. A decade later, he elected to take on Adam and Eve directly in a second book, The Descent of Man, with the almost forgotten subtitle and Selection in Relation to Sex. Here then is Darwin's full-blown retraction: “If female birds had been incapable of appreciating the beautiful colors, the ornaments and voices of their male partners, all the labor and anxiety exhibited by the latter in displaying their charms before the females would have been thrown away; and this it is impossible to admit.” He even alludes to the use of bird feathers in women's fashion that was popular at that time to assert that “the beauty of such ornaments cannot be disputed.” [10] There must then exist a sexual selection based on perceived beauty that operates hand in hand with natural selection based on fitness, the two combining to produce the tree of life. The dating game of young adult humans only differs from the pairings of birds such as cardinals in range and scope.

Sexual color dimorphism in cardinals must have something to do with mate choice, but it may not be the only factor. The intricacy, variation, and tonal quality of song is also considered to be one of the primary means by which male courters seek the attention of the female choosers among passerines. In most species, only the male sings, lending some credence to this behavior as mate-related. However, cardinals are unusual in that both the male and the female sing. In fact, the songs are so similar between the two that to the human ear they are indistinguishable. Yet when the male and female cardinal songs are separately analyzed by frequency and amplitude, the two songs are shown to be distinct. [11] Since bird songs are learned and, in some cases, embellished by practice, the question would be whether males learned their version of the song from other males and females likewise learned it from other females. A third intriguing possibility is that the female learned from the male and then modified the sounds ever so slightly as a way to respond. The reverse, with the male learning from the female, is also possible but unlikely. This would suggest that the male and female cardinal share their song culture in a more or less egalitarian fashion.

Female cardinal engaged in nest building.

Cardinals are very aggressive―males and females in almost equal measure. This is especially notable in the late spring and early summer when adequate and suitable territory for nesting is established. Any intruder cardinal that attempts to penetrate the guarded perimeter of a mated pair's domain will be subject to assault by the male, the female, or both. With lowered crest and eyes fixed on the aggressor, defending cardinals have been observed lunging after the intruder, using their feet and beak as weapons to force expulsion. The physical onslaught is often augmented by vocalizations described as chips and pee-toos. Intruder bird chases can go on as long as thirty minutes. This pronounced defensive posture is the cause of one of the more notable cardinal behaviors. Since birds are not self-conscious like humans and a few other animals, they do not recognize themselves in reflective surfaces like window glass. Cardinals are therefore frequently given to aggressively attacking their image in a window or even a shiny car bumper, pecking at the imagined intruder that will never go away until they themselves do. Sapience has its benefits. They eventually cease in fatigue and probably frustration.

Cardinal appearance goes beyond the red color of the male plumage to the broader category of ornamentation, inclusive of the length of the crest, bill coloration, and face mask contrast. Many attempts have been made to correlate variations in cardinal ornamentation to variations in body size and condition, feather growth, parental care, territorial defense, and mating choices. In general, the results have failed to establish any definitive relationship between any ornamentation trait, including male redness, and any other aspect of cardinal behavior or physiology. For example, a trial in a rural area of New York found that brighter male coloration was positively correlated with reproductive success, but a similar trial in an urban area of Ohio found no such correlation. In a more controlled experiment called a captive mate trial, females showed no preference for colorful males. [12] The only variable that can be directly attributed to a cardinal's relative redness is the availability of fruit during the molting period when feathers are renewed. Fruits are colored by chemicals called carotenoids, which are found in many plants where they augment chlorophyll by absorbing light energy from additional frequency bands. When cardinals are fed a diet devoid of carotenoids, they vary in color from pale red to yellow. [13]

Why are male cardinals red and female cardinals brown? There is clearly a mate choice of some sort in operation, but it is not a choice favoring redness. Cardinals have elaborate courting behaviors that demonstrate evolutionary development of sex-related activities. Sex matters. Many if not most birds are monogamous, retaining the same mate for life. Cardinals are a bit less steadfast, changing mates not regularly but on occasion. So there must be some choosing going on, and that would be under the purview of the female chooser. This is an evolutionary result related to the lack of an external male sexual organ in most birds. Sex therefore requires the consent of the female since copulation involves contact of the male and female cloacae, known euphemistically as the cloacal kiss. This could not happen without mutual consent. (Cloaca once meant sewer; it is the name given to the opening in birds, reptiles, amphibians, and fish that serves for both excretion and conception.) One hypothesis is that the female cardinal chooses a male for a mate due to his compatibility. Female and male cardinals have very similar behaviors that range from having almost identical songs to being equally aggressive. The hypothesis is that this similarity was the result of female cardinal mate choice; the complexities of human mate choice are equally qualitative. If this is the case, then the red color of the male cardinal is more likely a genetic coincidence incident to female selection of a companionable mate. This is not without precedent. Dogs bred for friendliness by humans develop rounded snouts and drooping ears.

References:

1. Alderer, J. editor, Complete Birds of North America, National Geographic Society, Washington, DC, pp 597-606.

2. “Cardinal,” Encyclopedia Britannica Micropedia, William Benton, Chicago, Illinois, 1972, Volume 11, p. 560.

3. Rossi, M. The Republic of Color, University of Chicago Press, Chicago, 2019, p 132.

4. Drew, L., I, Mammal, Bloomsbury Sigma, London, 2017, pp 254-256.

5. Jawor, J. and Breitwisch, R. Multiple ornaments in male Northern Cardinals, Cardinalis cardinalis, as indicators of condition. Ethology 2004, Volume 110 Number 2, pp 113–126.

6. Prum, R. The Evolution of Beauty, Doubleday, New York, 2017, pp 184-205

7. Rosenthal, G. Mate Choice, Princeton University Press, Princeton, 2017, pp 3-30.

8. http://darwin-online.org.uk/EditorialIntroductions/Chancellor_Keynes_Galapagos.html  

9. Darwin, C. On the Origin of the Species, The Easton Press, Norwalk, Connecticut, 1976, pp 164-166, 360-366.

10. Darwin, C. The Descent of Man, The Easton Press, Norwalk, Connecticut, 1976, pp 79-80.

11. Yamaguchi, A. “A sexually dimorphic learned birdsong in the northern cardinal”. The Condor. 1 August 1998, Volume 100 Issue 3, pp 504–511.   

12. Cornell Lab of Ornithology. “Cardinalis cardinalis” at https://www.allaboutbirds.org/news/   and https://birdsoftheworld.org/bow/species/norcar/cur/behavior#sex    

13. McGraw, K. et al “The Influence of Carotenoid Acquisition and Utilization on the Maintenance of Species-Typical Plumage Pigmentation in Male American Goldfinches (Carduelis tristis) and Northern Cardinals (Cardinalis cardinalis)”. Physiological and Biochemical Zoology. University of Chicago Press. November, 2001 Volume 74 Number 6 pp 843–852.

Hemlock for a Happy New Year

Hemlocks are among the many pines and fir evergreens that are symbolic of the holiday season. This hemlock is a new generation growing to replace those lost to an invasive species and a devastating hurricane at Limberlost in Shenandoah National Park.

Common Name: Eastern Hemlock, Canada hemlock, Hemlock spruce – Hemlock is the name for the hop plant in both the Germanic (homele) and Finno-Ugric (humala) language groups. The hop plant is the source of “hops” used for centuries across much of northern Europe to impart a bitter flavor to liquors made from malted grain. The small flowers of the hop plant are similar to the flowers of the poison hemlock (Conium maculatum) which shares the same etymology and from which the hemlock tree gets its name (by indirect association). In other words, the poison hemlock looks like and was named for  the hop plant and the hemlock tree shares a number of attributes with poison hemlock. The Carolina hemlock is very similar and difficult to distinguish from its collocated cousin.

Scientific Name: Tsuga canadensis – The generic name is from the Japanese word for the larch tree which, like the hemlock, is a member of the pine family. Most of the other trees in the genus Tsuga are indigenous to east Asia, primarily Japan. The species name is a reference to the first classification of the tree in the Linnaean taxonomic system based on a specimen first sighted and identified in Canada. The Carolina hemlock is Tsuga caroliniana, first distinguished in the Appalachian uplands further south.

Potpourri: Hemlocks are members of the ubiquitous Pinaceae or pine family which consists of conifer or cone-bearing trees that grow throughout the temperate regions of both the Northern and Southern Hemispheres and in mountainous tropical regions. The Pine family includes pines (Pinus), spruce (Picea), firs (Abies), hemlocks (Tsuga), larches (Larix), and Douglas-firs (Pseudotsuga or false hemlock). [1] Since they are large trees that grow in dense clusters, they are among the most important trees of the timber industry, providing 75 percent of all lumber and 90 percent of paper pulp. There are over 200 species worldwide of which about 60 are indigenous to North America. Pine family trees are monoecious, bearing both male and female cones on the same tree so that self-pollination is possible, contributing to their evolutionary success at the expense of genetic diversity. The “naked seeds” that literally define the Gymnosperms (gymno is Greek―gymnasiums were places for naked exercise) are at the base of the female pinecone scales, fertilized by male cone pollen wind-blown from the same tree. The pollen that is deposited on the megasporangium of the female cone in the spring ceases growth through the winter, consummating fertilization the following year. [2] In good time, you get a pine.

Hemlocks can most easily be distinguished by their needles, a term referring to the narrow, pointed leaves that, except for the larch, do not fall off over winter, giving rise to the more general term evergreen. Hemlock needles are short, arrayed in two neat rows, one of nature's better options for higher mountains and boreal forests. However, needles do have a lifespan. Pine trees lose about one fourth of their needles every year, resulting in trails coated with a soft cushion of decaying needles that suppresses almost all other plant growth, one of the best treads for foot travel. The “evergreen” needle as a leaf form is an evolutionary result of several factors involving both latitude and geology. The primary determinant is the length of the growing season, which can vary from as short as 65 days in New England to an average of 250 days in the southeast. All things being equal, a plant will trend toward greater leaf area exposed to as much sunlight as possible. Photosynthesis in the chloroplast cells of the leaves converts sun photon energy to the hydrocarbon molecules of biology. Broadleaf trees grow where they can, and evergreen needle trees grow where they can't.

Hemlock needles (with woolly adelgids)

When the non-growth colder season approaches, broadleaf trees are better off wintering over with bare branches, having adequate time to replenish their foliage the following spring. In northern latitudes, there is simply not enough time to restock the canopy with sun gatherers, so the leaves persist year-round as narrow needles. Temperature is a second factor due primarily to physics; when the freezing point is reached, the uptake of water is squelched and growth is curtailed. Since average temperature drops about 3 degrees F every 1,000 feet, mountainous terrain has the same effect as latitude on the growing season, so evergreens also prevail at higher elevations. Needle trees are also favored in northern latitudes and uplands because they are winterized with wax-coated needles and resin-infused wood and roots. The conical shape of many conifer trees, with their one-dimensional needles, is also better suited to survival in heavy snowpack. It should be noted that the pine barrens of New Jersey and the wide expanses of scrub pines across the south are neither mountainous nor northern. Some species of pine thrive in dry sandy soils where periodic wildfires have historically been the norm. Their cones are serotinous, which means that they evolved to burst open after a fire to spread the seeds of restoration, eventually becoming the dominant species. [3]

That hemlock trees have the same name as the poisonous hemlock plant cannot be a matter of chance etymology. They have some things in common, but not the notorious toxins of the latter. The “drinking of the hemlock” was the standard method of execution in Ancient Greece. One of history's most enduring dramas is the trial of Socrates by the popular court or dikasterion composed of 500 Athenian citizens in 399 BCE. He was prosecuted for undermining religious faith in the “gods that the state recognizes” by introducing new “demonical beings” and for “corrupting the youth” and found guilty by a slight majority. The hemlock execution of Socrates is considered by many historians to mark the end of the Golden Age of Greece. [4] Poison hemlock was thus well known throughout Europe by the Middle Ages both for its toxicity and, in small doses, for treatment of a variety of ailments. There is evidence of its use for the treatment of cancer, as a narcotic or analgesic, and even as an anti-aphrodisiac (perhaps by killing the object of desire). [5] Because of this, many Europeans were familiar with its shape when growing and its smell when ground into powder. However, since there were no hemlock trees in Europe, it took the discovery and exploration of the Americas to associate the poison hemlock plant with its namesake tree.

The hemlocks of North America were almost certainly first sighted along riverbanks by French explorers who penetrated the mainland by sailing up the St. Lawrence from the North Atlantic in the 16th century. Their knowledge of the smell and branching pattern of the poison hemlock led to applying the familiar name to the unfamiliar evergreen tree due to its similar characteristics. This is corroborated by the British Cyclopedia of 1836 in noting that the hemlock tree was “so called from its branches in tenuity and position resembling the foliage of the common hemlock.” Conium, the genus of the poison hemlock, was purposely chosen because the plant looked like a miniature cone-bearing tree. In the New World, where there were so many new and strange plants, any means of distinguishing one species from another by using a mnemonic brought some order to the chaos. To differentiate the evergreen version of hemlock from its doppelgänger, the compound name “hemlock spruce” was applied. [6] Spruce trees of the genus Picea prevail in boreal forests across North America and Eurasia. Spruce is an anglicized version of “from Prussia” due to the prevalence of native spruce trees along the Baltic Sea near present-day Lithuania. Prussia was the ancestral home of the medieval Teutonic Knights and of the later kingdom that grew in prestige and power, uniting the disparate Germanic states to form a unified Germany in the 19th century. The hemlock spruce is called Pruche du Canada in Quebec, further evidence of Prussian origin. The hemlock was later moved out of the spruces and into its own genus within the pine family.

Eastern hemlock or hemlock spruce is the most shade tolerant of all tree species and can survive with as little as 5 percent full sunlight. Since the conversion of solar energy to produce hydrocarbon energy is the foundation of life, its lack can only be compensated for by slow growth. Like Treebeard, the ent of Tolkien’s mythical Fangorn Forest, hemlock growth is slow but inexorable. A one-inch diameter (usually reported as dbh―diameter at breast height―to account for irregularities) hemlock can be over 100 years old. Since hemlocks can grow to over six feet dbh with a height of over 150 feet, it follows that longevity is another characteristic trait. The record age for a hemlock is 988 years, older than Noah’s 969-year-old grandfather Methuselah, the epitome of lifetime endurance. Once established, a hemlock canopy blocks sunlight from penetrating to the understory, snuffing out most arboreal competition. The subsequent microclimate of dense shade with a deep duff layer retains moisture and sustains uniformly reduced ambient temperatures. Not surprisingly, the relatively exacting moisture and temperature requirements for hemlock germination are met by the conditions that they create. [7] But there is more to forest soil management than trees. There are also fungi.

Hemlock polypore growing on dead hemlock.

Pine family trees like hemlock are connected through their root systems with fungi that surround them, an arrangement known as ectomycorrhizal, “outside fungus root” in Greek. About 90 percent of all plants form mutualistic partnerships with fungi to gain access to essential soil nutrients like phosphorus and nitrogen, with the plant providing up to ten percent of its hydrocarbon sugar output to root fungi in return. For most plants, the mycorrhizal relationship is an option that results in more robust growth. For trees of the Pine family like hemlock, the mycorrhizal relationship is universal. Many different species of fungi are involved with the roots of any given tree. While there have been no studies for hemlocks, the closely related Douglas firs (Pseudotsuga menziesii) are estimated to have over 2,000 different species of associated fungi. [8] The kingdom Fungi is not uniformly benign, however, as all living things must find their niche in the tangled web of life as a matter of survival. The subsurface soils kept moist by the hulking hemlocks are an ideal habitat for mold, another broad category of fungi. Seven species of fungi attack the seeds of hemlock resting on the moist soil awaiting the magic of germination. One mold species, Aureobasidium pullulans, was found growing on almost three fourths of all hemlock seeds, impeding germination. Hemlocks, when they eventually keel over, provide sustenance for yet another group of fungi, the saprophytes that feed on the dead. Were it not for the fungi that consume the cellulose and lignin from which tree trunks are made, the world would be covered with tree trunks and none of their carbon would be returned to the atmosphere. Because hemlocks are so pervasive, one species of fungus aptly named Ganoderma tsugae, or hemlock polypore, subsists exclusively on its deadwood. Also called varnish shelf, it is one of the most recognizable of all fungi and is closely related to one of the most important fungi in Asian medicine (see full article for further details).

Hemlock growing adjacent to fallen old growth hemlock trunk in foreground.

The hemlock is listed on the International Union for Conservation of Nature Red List as near threatened. [9] This surprising state of affairs is not the result of clear-cutting and overharvesting, although human impact has surely had deleterious effects. The high point of hemlock harvest was at the turn of the last century when the wood was used primarily for home construction roofs and flooring. As the population surged in the decades that followed and newspapers of the golden age of Hearst and Pulitzer proliferated, hemlocks became one of the primary sources for paper pulp. The effects are exemplified by Michigan's growing stock decreasing by over 70 percent between 1935 and 1955, a result of the slow growth of hemlock relative to its removal. However, the real culprit that threatens hemlocks is a sap-sucking insect closely related to aphids, the bane of gardeners and food for ladybugs. The woolly adelgid was probably introduced from Japan in the early 1950s somewhere in New England and has now spread to 19 states and two Canadian provinces. [10] The larvae of the adelgid suck the body fluids from hemlock needles at their base, covering themselves with a fluffy white layer (hence woolly) to protect against predation (see full article for further details). A death by a literal thousand cuts ensues that can take decades but is in most cases inevitable. The hemlocks of Limberlost were the only old-growth tract in Shenandoah National Park. They had been so weakened by woolly adelgids that they toppled during Hurricane Fran in 1996. The hemlocks are just starting to recover almost thirty years later (note fallen hemlock trunk in foreground in photo).

Unlike its poisonous namesake, hemlock is not only edible but salubrious. It has been attested that the entire Pine family “comprises one of the most vital groups of edibles in the world.” [11] This would mostly apply to northern latitudes where the paucity of winter food could result in starvation absent the resort to eating pine tree inner bark, a thin layer called the cambium. The nutritious cambium is responsible for the formation of the water transport xylem on the inside and the hydrocarbon food transport phloem on the outside; in other words, it makes the tree trunk. For softwood pine trees, stripping off the outer bark layer to gain access to the cambium can be readily accomplished with primitive scraping tools. The native peoples of North America collected cambium, which was cut into strips and eaten either raw, cooked, or dried and ground into flour to make bread, a practice adopted by early colonists. The name of the Adirondack Mountains of New York derives from the Mohawk word haterỏntaks, which means “they eat trees.” The healthful benefits of hemlocks and other pines are further enhanced by high concentrations of anti-inflammatory tannins and antioxidant ascorbic acid/vitamin C in all parts of the tree. The various Indian tribes had diverse uses, extending from pine tea to treat colds to thick pinesap paste applied to wounds as a poultice. [12] One early settler wrote in his diary in the mid-19th century that “I never caught a cold yet. I recommend, from experience, a hemlock-bed, and hemlock-tea, with a dash of whiskey in it merely to assist the flavor, as the best preventive.” [13]

References: 

1. Little, E. The Audubon Field Guide to North American Trees, Eastern Region, Alfred A. Knopf, 1980, pp 276-301.

2. Wilson, C. and Loomis, W. Botany, Holt, Rinehart and Winston, New York, 1967, pp 549-570.

3. Kricher, J. and Morrison, G. A Field Guide to Eastern Forests of North America, Peterson Field Guide Series, Houghton Mifflin Company, Boston. 1988, pp 9-10.

4. Durant, W. The Life of Greece, Simon and Schuster, New York, 1966, pp 452-456.

5. Foster, S. and Duke, J. Medicinal Plants and Herbs of Eastern and Central North America. Peterson Field Guide Series. Houghton Mifflin Company, Boston, 2000, pp 68-69.

6. Earle, C. Tsuga, The Gymnosperm Database, 2018, at https://www.conifers.org/pi/Tsuga.php      

7. Godman, T. and Lancaster, K. “Pinaceae, Pine Family” U.S. Forest Service Report at https://www.srs.fs.usda.gov/pubs/misc/ag_654/volume_1/tsuga/canadensis.htm   

8. Kendrick, B. The Fifth Kingdom, Focus Publishing, Newburyport, Massachusetts, 2000, pp 257-278.

9. https://www.iucnredlist.org/species/42431/2979676    

10. https://explorer.natureserve.org/Taxon/ELEMENT_GLOBAL.2.131718/Tsuga_canadensis  

11. Angier, B. and Foster, K. Edible Wild Plants, Stackpole Books, Mechanicsburg, Pennsylvania, 2008, pp 168-169.

12. Ethnobotany Database at http://naeb.brit.org/uses/search/?string=tsuga+canadensis

13. Harris, M. Botanica, North America, Harper Collins, New York, 2003, pp 44-46.

Wind Energy

Wind Turbines along Allegheny Ridge south of Mount Storm, West Virginia

Wind energy comes from the sun. Counterintuitive but nonetheless true. The transfer of energy from the sun to the earth is fundamental physics, giving rise to weather in the short term and climate when averaged over decades. The sun's energy in the form of radiant solar heating increases the temperature of the land surface of the earth by transferring electromagnetic energy to individual molecules, causing them to vibrate. Temperature is the empirical measure of the kinetic energy of this molecular motion. More vibration, higher temperature. Solar radiation similarly warms the ocean, but mixing water currents mitigate the surface heating effect. The heated land surface warms the air immediately above it. Warmer air is less dense since the mostly nitrogen and oxygen molecules move farther apart as their motion increases. The less dense, warmer air rises to create an area of lower pressure in the heated area relative to surrounding air masses. Similarly, cold air sinks to create areas of higher pressure. The energy of the sun generates low and high pressure areas.

Temperature differences give rise to pressure differences. Wind occurs when air from an area of high pressure moves to an area of low pressure. On a global scale, the equatorial tropic regions are heated by the sun's rays, causing the air to rise and move toward the colder north and south poles. The tilt of the earth on its axis of rotation concentrates heating in the area between the Tropic of Cancer, marking midsummer noon in northern latitudes, and the Tropic of Capricorn, where it is then midwinter. The rotation of the earth causes the global wind movement from equator to pole to shift in the direction of rotation. This gives rise to counterclockwise rotation in the northern hemisphere and clockwise rotation down under, a deflection known as the Coriolis effect. [1] The relatively simple flow of wind curling away from the tropics is complicated regionally by ocean thermal effects and land height differentials. The resultant winds can range from the calms of the doldrums and the horse latitudes to the fury of a cyclone. Capturing the sun's wind energy can be a daunting proposition.

The use of wind by humans extends to the dawn of the historical record. Rock carvings of boats with sails have been found in the Nile Valley at a site named Wadi Hammamat dating from about 3300 BCE, the pre-dynastic period before the union of Upper (southern) and Lower (northern) Egypt. Corroborating evidence in the form of Egyptian vases depicts reed-hulled ships with a single mast holding a square sail probably made from either papyrus or cotton which were likely limited to excursions along and across the Nile. The Phoenicians, the seafaring people of antiquity, ranged throughout the Mediterranean region and possibly passed through the Strait of Gibraltar to reach the British Isles. A rough-hewn terra cotta ship model from about 1500 BCE found near Byblos on the Lebanese coast provides archaeological evidence. Supplemented with oar-wielding human crews, the sail-powered galleys of the Greeks vanquished the Persian fleet at Salamis in 480 BCE and the long boats of the Vikings began their centuries-long raids along the coastlines of Europe at Lindisfarne in Northumbria in 793 CE. [2] The sailing ship bereft of oars became the agent of change during the Age of Discovery that began with Columbus and literally established a New World order.

Wind power was for centuries the driving force for merchant ships seeking global trade in spices and silks and for warships seeking global dominance with cannons. The language and units of wind are thus rooted in nautical applications. The knot or nautical mile per hour for wind speed is a good example. Ships navigate without landmarks in the open ocean, their horizons uniform in all directions. Starting from home port as a known datum, ships proceeded by dead reckoning, a means of determining current position by using only course and speed. The magnetic compass provided a reasonably reliable course but the speed was as variable as the wind. The mile predates the kilometer by centuries, having been introduced by the Romans as the distance travelled by their legions in 1,000 double steps (mille in Latin) or about 5,000 feet, which was standardized by Queen Elizabeth I to 5,280 feet, exactly eight furlongs. The nautical mile has a different provenance as one sixtieth of one degree (one minute) of arc of the earth's circumference at the equator, which works out to 6,080 feet. Since degrees of latitude and longitude along the surface of the earth define geographical position, the nautical mile, marking increments of degree change, is the better measure. An ingenious method was devised to determine ship speed in nautical miles per hour. A weighted sea anchor called a drogue attached to a rope was dropped over the gunwale (ship's sides above the deck used for gun support). The rope had knots every 47 feet 3 inches which were counted as they played out for a period of 28 seconds (measured by sand glass) as the ship moved away from the stationary drogue. Every knot counted meant that another 47.25 feet had been traveled in 28 seconds, which equates to one nautical mile per hour. Since ship speed was measured in knots, the wind that created it was given the same units. [3]
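
To see that the chip-log arithmetic really does yield speed in nautical miles per hour, the short Python sketch below simply redoes the unit conversion with the figures quoted above; it is an illustration only, using the 6,080-foot nautical mile from this paragraph rather than any modern standard.

# Verify that one knot counted in 28 seconds equals about one nautical mile per hour.
KNOT_SPACING_FT = 47.25        # rope knots spaced 47 feet 3 inches apart
SAMPLE_SECONDS = 28            # interval timed by the sand glass
NAUTICAL_MILE_FT = 6080        # one minute of arc at the equator, per the text

ft_per_second = KNOT_SPACING_FT / SAMPLE_SECONDS             # speed per knot counted
nm_per_hour = ft_per_second * 3600 / NAUTICAL_MILE_FT        # convert to nautical miles per hour
print(round(nm_per_hour, 3))                                 # 0.999, i.e. one knot of speed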

The age of sailing ships ended with the advent of steam boats powered mostly by coal, the first of fossil fuels. Before Thomas Newcomen invented the steam engine later improved by James Watt as a practical alternative power source, the only way to do work was with humans or animals, and, much later, flowing water or blowing wind. Manpower, now implausibly mangled as person-power, was paramount, and aggressive dynasties throughout the Old World cast about for humans as slaves to carry out the chores of manufacture. The word slave derives from the capture of Slavs from the southern steppes of Europe by the Tatars, who raided up and down the Dnieper and Don River basins to satisfy the demands of their Ottoman employers whose religion forbade the enslavement of Muslims. The Cossacks originated as a roving band of nomads that fought against the Tatarian slave trade. [4] The heavy stones of the pyramids of Egypt and Mesoamerica were cut, hauled, and hoisted by humans. The Africans kidnapped from their homeland and sold to colonists of the New World perpetuated forced slave labor into the nineteenth century.  Animal power came later.

Dogs were first domesticated from wolves about 10,000 years ago primarily as human hunting companions. Sheep, goats, and pigs followed over the next two thousand years as ready sources of animal protein to augment and eventually replace unreliable hunting for elusive prey. But it was the domestication of the cow/ox from the aurochs 6,000 years ago and the horse two millennia later that transformed human endeavor by incorporating beasts of burden. It has been argued that the prevalence of large domesticable herbivorous mammals in Eurasia (13 out of a total of 14 with only the llama as an American outlier) led to the historical dominance of this area in world history. [5] Workhorse became an idiom for any durable and dependable device as testimony to the centrality of equine employment for everything from chariots to plows. Watt invented the term horsepower to provide an understandable equivalence for his steam engines and convince skeptical buyers of their efficacy. Both units of power are now in use, one mostly for cars and the other for lightbulbs (1 horsepower = 745.7 watts) … and wind turbines.

The first wind machine was the windmill. Now a synonym for rotating, the term windmill originated as a compound word to describe the process of using wind to mill grain. As agriculture supplanted foraging in the Neolithic (New Stone) Age, populations grew as more food became available on a regular, seasonal basis. The need to supply more grain to meet burgeoning demand drove innovation. The small, hand-operated grindstone that sufficed for the individual hearth grew in size and weight to the millstone of mass production. The role of grain mill operator, or miller, evolved as innovators employed first human and eventually animal strength to operate centralized flour processing facilities. Windmills first appear in the historical record in 644 CE operating in the region now called Sistan or Sakastan in eastern Iran near the Afghanistan border, noted for persistent strong winds and lack of flowing water. The Asbads (Persian for windmill) of Sistan, now a UNESCO World Heritage Site, consisted of a vertical axis directly connected to a pair of millstones at the bottom with wind-catching sails mounted horizontally in a stone structure configured with entrance and exit wind portals. [6] The use of vertical-axis mills persisted into the 13th century and spread eastward to China and westward to the Crimean Peninsula that extends into the Black Sea.

The first European windmills repurposed the Persian asbad with the use of the gearing that had been developed independently for the waterwheels of the Roman Empire. The result was the iconic post mill with two to four elongated sails made from canvas ship sailcloth stretched over a wooden frame and held vertically in the direct path of wind by a wooden post. The wind-rotated sails turned a horizontal axle which was connected to the grinding millstones with a 90-degree bevel gear. Post-type windmills first appeared in France in 1180 and were introduced to England a decade later, evolving over the next century to a tower windmill that included a movable roof that could be turned on a track to adjust to changes in wind direction. The windmill as water pump was developed in the Netherlands in the 15th century to drain low-lying areas for cultivation. With the millstone replaced by a bucket wheel, water was elevated by over six feet and deposited in purpose-built drainage ditches. The windmill gained symbolic distinction as the epitome of Holland, complementing the tulips that were planted in the now arable land and the wooden sabots worn by peasants to traverse boggy fields. By the 19th century, the windmill as a generic power source contradicted its grain-grinding etymology. In addition to water pumping, windmills were used to saw wood, polish stone, grind paint, press seed oil, make paper, and perform a variety of other mechanized processes including the traditional grain milling. The Zaan region just north of Amsterdam had more than 900 windmills in the 19th century. [7]

The first wind machine exclusively for power generation was constructed in Cleveland, Ohio by the electrical pioneer Charles F. Brush. After designing and patenting a dynamo for generating electricity for arc lights in 1876, he formed the Brush Electric Company, which sold arc lighting systems across the United States from San Francisco to New York, providing the first lights on Broadway. After selling his company to what was to become General Electric, he retired to his mansion on Euclid Avenue in Cleveland, devoting himself to research and invention. In 1888, he designed and built a massive wind turbine with a 56-foot-diameter rotor and 144 cedar blades to provide power to charge 12 direct current (DC) batteries to power 350 light bulbs in the mansion. An 1890 article noted that “The reader should not think that electric light from energy obtained in this way is cheap because the wind is free … However, there is great satisfaction in making use of one of nature's most unruly forces of motion.” [8] Perhaps that was Brush's motivation. At about the same time, the Danish inventor and physicist Poul la Cour took a different tack, using a small number of rapidly turning blades to generate electricity at the Askov Folk High School, where he taught classes on wind electricity and founded the Society of Wind Electricians. His rather surprising choice for energy storage was hydrogen produced by the electrolysis of water which was used directly for gas lights in the school. Explosions caused by oxygen contamination blew out the windows on several occasions. In 1957, Johannes Juul, one of la Cour's students, pioneered the first wind turbines to generate alternating current (AC) electricity using the now standard three-blade wind turbine. [9]

The latter half of the 20th century was dominated by cheap fossil fuels for conventional power plants and, for a time, the promise of nuclear energy. The Arab oil embargo imposed in reaction to the 1973 Yom Kippur War sent shock waves throughout the industrialized world, eliciting a reassessment of dependence on foreign oil. The resultant impetus for alternative power generation sources led to a renaissance in wind energy research and development. The wind-wise and wind-resourced Danes took matters into their own hands. In 1975, a group of teachers from three schools that shared a large campus on the former Tvind farm in Western Denmark near Ulfborg placed an ad in a major paper “seeking windmill builders.”  The resultant Windmill Team, comprised of an eclectic group of 400 idealists with no prior experience and an average age of twenty-one, set out to build the world’s first megawatt (MW) wind turbine from scratch with funding provided by the teachers. Three years later, the Tvindkraft, with three pitched rotating blades made from fiberglass and a computer-controlled frequency converter to account for variable speed, rose above the Jutland plain. At a height of over 150 feet and a power capacity of 2 MW, the first modern wind turbine was the largest in the world for several decades. It is still in operation, providing electrical power to the three schools and the co-located Tvind Climate Center. Denmark subsequently became the world leader in wind energy, as copies of the design were built throughout the country. The Windmill Team sought no patents in order to promote the shift to wind power and away from fossil fuels, an act of notable altruism. [10]

Altamont Pass wind farm, California

In the United States, federal-level wind turbine research and development sparked by the oil embargo followed the more traditional pathway of public funding to private companies. The federal wind energy program, begun in 1973 and managed at the NASA Lewis Research Center for what would become the Department of Energy (DOE), oversaw demonstration projects selected from proposals submitted. The NASA/DOE MOD-0 was a Lockheed design erected in 1975 in Ohio with two blades producing 100 KW atop a 100-foot tower. Designs progressed over the years to MOD-5B, a Boeing installation on the Hawaiian island of Oahu in 1987 producing 3.2 MW on a 200-foot tower. None of these designs were ever commercialized and the prototypes were all eventually shut down and dismantled. [11] At the state level, rising oil prices coupled with nascent environmental concerns provoked the California Energy Commission to establish the Altamont Pass Wind Resource area in 1980. With favorable tax incentives, conditional use permits were awarded to commercial interests to build wind farms in Alameda and Contra Costa counties just east of San Francisco. This resulted in the world's first modern large-scale wind farm. With an average wind turbine power of only 94 KW, these relatively small turbines were combined in groups of up to 400 to generate city-size megawatts of power. [12] Although interest waned when oil prices dropped in the mid-1980s, the Altamont Pass project was never abandoned and served as the nexus for increasing California wind energy capacity to address the rising temperatures of global warming.

The United Nations established the Intergovernmental Panel on Climate Change (IPCC) in 1988 to provide “an assessment of the understanding of all aspects of climate change, including how human activities can cause such changes and can be impacted by them.” The panel consists of an international team of recognized experts in the interrelated scientific fields that play a role in climatology. The First Assessment Report was issued in 1990 after having reviewed the preceding decades of research with two broad findings: (1) The greenhouse effect is a natural feature of the planet and its fundamental physics is well understood; and (2) The atmospheric abundances of greenhouse gases were increasing largely due to human activities. [13] After almost three centuries, the cost of the fossil-fueled Industrial Revolution had become clear. The environmental free lunch was over. By the turn of the century, wind energy was back on the table and resources were poured into the design and construction of ever larger and more efficient turbines to be placed in dense clusters wherever the winds blew best. But because of wind variability, harnessing it as a reliable and consistent source of electricity posed an engineering challenge.

That wind force can pack a punch is evident in coastal communities hammered by hurricanes and in trailer parks torn apart by tornadoes. Wind derives power from the force it exerts on any surface that is in the path of its movement to equalize pressure. The basic wind power (P) equation is fairly simple:

                                                          P = ½ρAv³

where ρ is air density, A is turbine area, and v is wind speed. Since the density of air is relatively uniform at 1.25 kg/m³, the only way to increase the power of a wind turbine is to make it bigger or to locate it in a windy area. Wind speed is the most important factor due to the cubic function, which means that if you double the wind speed, power goes up by a factor of eight (2x2x2=8). Area is the circle swept by the rotating blades, whose length defines its radius r, the familiar A = πr². It is convenient to use metric units to produce power in watts. For example, a wind turbine with 10-meter blades (a swept area of about 314 square meters, rounded to 320 for simplicity) rotating in wind at 10 meters per second (1000 when cubed) would result in a theoretical maximum power of (0.5)(1.25)(320)(1000) = 200,000 watts or 200 kilowatts (KW). Note that 1 meter per second is about 2 knots, the nautical wind speed unit. A kilowatt is approximately the amount of power required for a mid-sized single-family home. The megawatt (MW) is more useful for the energy needs of a city. Terawatt is global.
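
The worked example can be restated as a few lines of Python; the sketch below simply evaluates the equation, with the blade length, wind speed, and air density taken as the illustrative values from the paragraph above rather than data for any particular turbine.

import math

AIR_DENSITY = 1.25                                   # kg per cubic meter, the value used in the text

def wind_power_watts(blade_length_m, wind_speed_m_s):
    # Theoretical wind power P = 0.5 * rho * A * v^3 with swept area A = pi * r^2
    swept_area = math.pi * blade_length_m ** 2
    return 0.5 * AIR_DENSITY * swept_area * wind_speed_m_s ** 3

print(wind_power_watts(10, 10) / 1000)                        # ~196 kW, rounded to ~200 KW above
print(wind_power_watts(10, 20) / wind_power_watts(10, 10))    # 8.0: doubling the wind gives eight times the power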

In the real world, there are both physical and practical limits on the calculated power of a wind turbine that together reduce the usable power by about half. The physical limitation is based on the fact that if all of the wind passing through a turbine generated power, then the wind would have no more energy. In other words, the wind would stop blowing. Since the wind does not stop, it stands to reason that only a portion of its energy can be extracted. The limit imposed by the physics of fluid flow is known as the Betz Limit for the German physicist Albert Betz, who first proposed it. The Betz Limit is 59 percent. Therefore, the maximum power that could be generated from the 200 KW wind turbine would be about 120 KW. This maximum would only be achieved when the wind maintained an average speed of 20 knots or 10 meters per second and if the blades were effective over the entire area A. That this is not the case is reflected in the use of Cp, the power coefficient or performance coefficient. Cp varies according to the pitch angle of the blades, which is adjustable on all modern wind turbines, and according to the rotational speed at the tip of the blades relative to the upstream wind speed, which varies with blade length. A maximum Cp of 45 percent is the result of a tip speed that is 7 times faster than wind speed with a 0-degree pitch angle. [14] Finally, it is necessary to account for wind variability over time in a given geographic area. The term capacity factor (CF) is used to adjust the wind energy that can be extracted relative to the nameplate or nominal KW or MW capability of the wind turbine. An economically viable wind turbine requires a CF of about 30 percent, with a maximum in near perfect conditions approaching 45 percent. [15] The bottom line is that it takes a lot of wind turbines to capture enough wind energy to power a city. Hence the wind farm.
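
As a rough illustration of how these successive factors whittle down the theoretical figure, the sketch below applies the percentages quoted in this paragraph to the 200 KW example; the Betz limit, power coefficient, and capacity factor values are illustrative assumptions drawn from the text, not measurements of a real machine.

THEORETICAL_KW = 200        # from the P = 1/2 * rho * A * v^3 example above
BETZ_LIMIT = 0.59           # physical ceiling on the extractable fraction of wind energy
CP = 0.45                   # best-case power coefficient for real blades
CAPACITY_FACTOR = 0.30      # fraction of nameplate output actually delivered over a year

betz_kw = THEORETICAL_KW * BETZ_LIMIT          # ~118 KW, the "about 120 KW" ceiling above
realistic_kw = THEORETICAL_KW * CP             # ~90 KW with an achievable power coefficient
average_kw = realistic_kw * CAPACITY_FACTOR    # ~27 KW averaged over a year of windy and calm days
annual_kwh = average_kw * 8760                 # 8,760 hours in a year

print(betz_kw, realistic_kw, average_kw, round(annual_kwh))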

Wind installations are divided into two broad categories according to placement: onshore and offshore. Onshore wind turbines are cheaper to build and maintain, but are limited by the lower average wind speeds over land and by human nuisance factors like noise and landscape aesthetics. Offshore wind turbines take advantage of the more consistent winds over water but can only be sited in countries with suitable littoral areas and adequate capital to finance the higher construction costs. Because many of the industrialized nations of the world abut the oceans and have limited land area available, offshore wind is sometimes the better option. Growth statistics since 2005 have been impressive. According to the Global Wind Energy Council, offshore wind grew by 21 percent annually over the last ten years, bringing the total installed offshore wind power to 64.3 GW. While the United States has only 42 MW of offshore wind installed, the Inflation Reduction Act incentivized offshore wind installations with about 50 GW of added capacity now in early planning phases. However, onshore wind is still by far the most prevalent, comprising over 90 percent of global installed wind power. [16]

Onshore wind towers followed the historical pattern of windmills of past centuries that were installed locally where needed and feasible. The benefits that accrued to populations in areas hosting them offset most pushback complaints about land use and landscape clutter. In Holland they are and were the hallmark of Dutch industry. The only technical constraint for onshore wind is adequate wind. In general, this restricts installations to rows along mountain ridges and in phalanxes on windswept plains arranged to prevent wake interference. Financial constraints depend on the cost of electricity offsetting capital-intensive construction. Political constraints are largely dependent on the local perception of climate change and on financial incentives for community services and local landowners. Onshore wind partnered with solar photovoltaics comprises the lion's share of the renewable energy that has burgeoned over the last decade. In 2023, 440 GW of renewable capacity, enough electricity for Germany and Spain, was added, 107 GW more than in any previous year. The total amount of renewable power globally by the end of 2024 is expected to reach 4,500 GW (4.5 TW), the amount of electricity consumed annually by the United States and China. [17] These rosy projections are certainly good news as testimony to the oft-stated goal of carbon neutrality by mid-century. However, it is not likely that continued growth at this pace will be sustainable.

World population has increased exponentially for centuries. Exponential means that it follows the progression 1-2-4-8-16 ad infinitum, doubling every generation when the base is two. The supporting world market economy has expanded in proportion to the number of people it serves, as substantiated by annual percentage increases in GNP. However, all growth is limited by inherent constraints. Globally, the earth and its resources are finite and there will eventually be a population maximum (estimated at 10 billion in 2050). Technologies like wind energy are also constrained by both physical and geographic limits. Wind turbine power went from a few kilowatts in the 19th century to 100 KW one hundred years later. The climate change impetus to improve wind turbine performance resulted in taller towers, longer blades, better generators, and lighter materials that produced a tenfold increase to the megawatt range by the end of the 20th century. The largest wind turbines now being built are in the 5 MW range. There will be no gigawatt wind turbines. The reason is that there are constraints on the maximum power that wind turbines can produce due to height, weight, and the physics of both wind and electricity. The resultant S-shaped curve accounts for these systematic shortfalls over time. It is a calculated or logistic result, unlike the unconstrained mathematical exponential. It is called logistic growth. [18]
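
The difference between the two growth patterns can be made concrete with a short sketch; the starting size, growth rate, and 5 MW carrying capacity below are arbitrary illustrative assumptions chosen only to show how the S-shaped logistic curve flattens while the exponential runs away.

import math

def exponential_mw(p0, rate, years):
    # Unconstrained growth: P(t) = P0 * e^(rate * t)
    return p0 * math.exp(rate * years)

def logistic_mw(p0, rate, years, capacity):
    # Growth that saturates at a carrying capacity K:
    # P(t) = K / (1 + ((K - P0) / P0) * e^(-rate * t))
    return capacity / (1 + ((capacity - p0) / p0) * math.exp(-rate * years))

# Illustrative only: start at 0.1 MW, 30 percent growth per year, 5 MW practical ceiling.
for year in (0, 10, 20, 30, 40):
    print(year,
          round(exponential_mw(0.1, 0.3, year), 1),
          round(logistic_mw(0.1, 0.3, year, 5.0), 2))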

Growth can be exponential only in the beginning when “low hanging fruit” is harvested. This hackneyed engineering axiom refers to things that are easy to change since they are within arm's reach (low off the ground) and are fully developed (ripe fruit). An example would be changing from steel to aluminum to reduce weight to build a taller tower. Improvements become harder over time until the carrying capacity is reached. This phenomenon is called “decreasing returns to scale,” characterized by an inflection or turning point where exponential growth transforms to an asymptotic approach to the maximum sustainable size (or power). Though the terms are not synonymous, the logistic S shape is the result of logistics. Logistics is defined as managing the details of an undertaking. The aphorism “an army marches on its stomach” refers to the need to have food supplied to it by a logistics chain that is frequently called the supply line. An unfed army cannot continue to march. This phrase is attributed to Napoleon, who ironically lost most of his Grande Armée on the plains of Russia due to not following his own logistical dictum. The design, manufacture, and deployment of complex technologies like wind turbines requires a steady logistical stream of materials to build them and industrial engineers to install them. It may be concluded that wind turbines are at or near their megawatt power limit due to these logistical factors.

Wind turbines are equally, and perhaps more dramatically, limited by geographic and social constraints. Sites with the highest average winds and most favorable demographics were filled with the initial round of wind towers now in operation and generating “current” electrical statistics. The original investors were rewarded with sustainable profit margins necessary for a market economy. Expansion to less desirable sites with less wind will change the equation. At some point, revenue from the sale of electricity is no longer sufficient to finance the capital investment needed to build the wind turbine in the first place. At the same time, higher wind turbine manufacture and installation costs accrue due to the increase in demand for critical materials with a limited supply. Supply/demand mismatch is a harbinger of inflation, the gradual rise of all costs. Financial strain began to appear in 2023. Orsted, the largest energy company in Denmark and world leader in wind energy, cancelled two major wind projects off the coast of New Jersey called Ocean Wind that would have produced 2.2 GW. According to one of the Orsted executives, “macroeconomic factors have changed dramatically over a short period of time, with high inflation, rising interest rates, and supply chain bottlenecks impacting our long term capital investments.” This decision was further justified based on local opposition. Residents of New Jersey's coastal Cape May filed a lawsuit to block a tax break for the wind farm claiming that “offshore wind development could threaten fisheries and marine mammals.” [19] With shortsightedness approaching myopia, the shoreline loss that the rising sea levels of global warming will ultimately induce, no doubt to the benefit of both fish and whales and the detriment of future generations, was not mentioned.

Onshore wind projects have technical and social challenges that are in many cases even more trenchant than the nearly out-of-sight offshore projects. Land-based wind machines straddle ridgelines and dot wind-swept plains in remote places―far from the industries and populations they serve. Transmission and connectivity are a serious problem in continent-sized countries like the United States and Australia. A good example is the plight of the Southwest Power Pool (SPP) that manages the electricity grid across the Great Plains from the Dakotas to the Rio Grande through over 60,000 miles of transmission lines. New wind and solar generation sites seeking to hook up to the grid must wait in what is called the “interconnection queue” until a computer simulation can be run to ensure that the grid remains stable and effective. A wind energy firm in Virginia named Apex drew up plans in 2013 to install 135 wind turbines in New Mexico generating 300 MW of power and applied for connection to SPP in 2017. By the time SPP got around to running the simulation in 2022, there were dozens of projects in the queue totaling over 10 GW. The model showed that a new 100-mile-long high-voltage power transmission line would be necessary to accommodate the disruption at a cost of over $1B that would need to be paid by the projects seeking grid admission. With a bill of over $250M, the Apex project was no longer financially viable and was cancelled. The Federal Energy Regulatory Commission (FERC) that requires the simulation testing is working to ameliorate the situation, but the inherent variability of wind and solar power on an otherwise continuous and necessarily stable power grid must be taken into account lest blackouts prevail. [20]

The 26th United Nations Conference of the Parties, or COP26, held in Glasgow in November of 2021, established a global benchmark of reducing net carbon emissions by 50 percent by 2030. The lion’s share of the emissions (69%-89% depending on the model) are from power generation and transportation. In order to meet the 2030 COP goals, the power sector will need to reduce carbon dioxide emissions by over 50%. Meeting this threshold will require the elimination of all coal power plants and an increase in solar and wind power by about five times the growth levels of the last decade―nothing less than exponential will do. Similarly, the transportation sector will require an increase in electric vehicles (EVs) from 4% in 2021 to an average of 67% in 2030, placing even more strain on the grid. A continuation of current US public policy will produce only a 6% to 28% net carbon emission reduction by 2030. [21] It is not unreasonable to suggest that the goal cannot be reached using only wind and solar power, which must be used when generated or stored in long-term energy storage repositories (like batteries) that do not yet exist at the necessary scale. A stable source of electricity is needed to sustain the grid. Fusion will never be ready in time. “Politicians need to tell voters that their desires for an energy transition that eschews both fossil fuels and nuclear power is a dangerous illusion.” [22] The fate of Earth as human habitat is at stake.
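The claim that “nothing less than exponential will do” can be made concrete with a back-of-the-envelope compound-growth calculation. A minimal sketch follows, assuming (for illustration only) that the target amounts to a fivefold increase achieved over the eight years from 2022 to 2030; the multiplier and the time horizon are assumptions, not figures from the cited study.

```python
# Back-of-the-envelope compound annual growth rate (CAGR) needed to scale
# a quantity by a given factor over a given number of years.
# Illustrative assumptions: a 5x increase achieved over 8 years (2022-2030).

def cagr(multiplier: float, years: int) -> float:
    """Return the constant annual growth rate that yields `multiplier` after `years`."""
    return multiplier ** (1 / years) - 1

factor, horizon = 5.0, 8
rate = cagr(factor, horizon)
print(f"A {factor:.0f}x increase over {horizon} years requires roughly "
      f"{rate:.1%} growth every single year.")  # prints about 22.3%
```

Sustaining a compounding rate in the low twenties of percent per year for nearly a decade is what separates an exponential buildout from the incremental additions of the past.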

References:   

1. Fovell, R. Professor of Atmospheric Science, UCLA, Meteorology: An Introduction to the Wonders of the Weather. The Teaching Company, 2010.  

2. Capper, D. Commander, Royal Navy “Sails and Sailing Ships” Encyclopedia Britannica Macropedia, William Benton, Chicago, 1972, Volume 16, pp 157-163

3. Whitelaw, I, A Measure of All Things, St. Martin’s Press, New York, 2007, pp 30, 101.

4. Plokhy, S. The Gates of Europe, A History of Ukraine, Revised Edition, Basic Books, New York, 2021, pp 74-76.

5. Diamond, J. Guns, Germs, and Steel, W. W. Norton and Company, New York, 1997, pp 157-175, 355.

6. https://whc.unesco.org/en/tentativelists/6192  

7. Wailes, R, “Windmills” Encyclopedia Britannica Macropedia, William Benton, Chicago, 1972, Volume 19, pp 861-862.

8. “Mr. Brush’s Windmill Dynamo”, Scientific American, New York Volume 63 Number 26, December 20, 1890.

9. The History of Modern Wind Power (Danish with English translation)    http://xn--drmstrre-64ad.dk/wp-content/wind/miller/windpower%20web/da/pictures/index.htm

10. https://www.tvindkraft.dk/stories/wind-and-the-environmental-crisis-windmill-denmark/#

11.  https://www.windsofchange.dk/WOC-usastat.php

12. Wind Turbine Projects – Current Development Projects – Policies & Plans Under Consideration – Planning – Community Development Agency – Alameda County (acgov.org)

13. Climate Change 2001 Synthesis Report, Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, UK. 2001.

14.  Aliprantis, D. Fundamentals of Wind Energy Conversion for Electrical Engineers,  Purdue University School of Electrical and Computer Engineering, 2014   https://engineering.purdue.edu/~dionysis/EE452/Lab9/Wind_Energy_Conversion.pdf  

15. Kalmikov, A. Wind Power Fundamentals, Department of Earth, Planetary, and Atmospheric Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, 2013. http://web.mit.edu/wepa/WindPowerFundamentals.A.Kalmikov.2017.pdf

16. Global Wind Energy Council, Global Offshore Wind Report 2023 https://gwec.net/global-wind-report-2022/

17. International Energy Agency (IEA) Renewable Energy Market Update, Outlook for 2023 and 2024. https://www.iea.org/energy-system/renewables/wind

18. Smil, V. Growth, The MIT Press, Cambridge, Massachusetts, 2019, pp 20-21, 181-184.

19. Puko, T. “Demise of N.J. wind projects imperils Biden’s offshore agenda” The Washington Post,  2 November 2023.

20. Charles, D. “Off the Grid” Science, Volume 381, Issue 6662, 8 September 2023, pp 1042-1045

21. Bistline, J. et al “Actions for reducing US emissions at least 50% by 2030” Science, Volume 376, Issue 6596, 27 May 2022, p 922

22. “Power Struggle” The Economist, June 25-July 1, 2022, p 11.

Opossum

Common Name: Opossum, Possum, Common opossum, Virginia opossum – The name is of Native American provenance and is from the Algonquian word apasum, which translates as “white animal.” The coarse fur ranges from white in the northern reaches of its range to almost black in warmer regions.

Scientific Name:  Didelphis virginiana – The generic name is from the Greek di meaning “two” and delphys meaning “womb.”  The opossum is a marsupial with a pouch that is used to nurture the young after gestation; it is metaphorically a “second womb.” An alternative etymology for the genus is the fact that the female opossum has two uteruses. The species name refers to its original identification and description in the colony of Virginia by European naturalists defining the taxonomy of North American fauna.

Potpourri: Opossums are the only marsupial mammals in North America; there are none in Eurasia.  They are in a sense living fossils, transitional forms that link the more primitive egg-laying monotreme mammals to those with a placenta. The global geographic dispersion of marsupials is a biological affirmation of continental drift and its plate tectonic motive force. They predominate in the Australian archipelago with their monotreme brethren and are well established in South America. Opossums arrived in North America as relatively new immigrants across the Panamanian isthmus when the two continents became conjoined about 2.8 million years ago; only yesterday in geological time. That they thrived amid the dominant placental mammals is testimony to their resilience and adaptation. This is especially noteworthy considering that they lack any form of plates or spines to protect their scrawny bodies and possess neither fangs nor claws to repel an attacker. Their most effective defense is to feign death, giving rise to the idiom “playing possum.”

Evolution rarely if ever proceeds in a straight line―it is a record of the past but not a plan for the future.  Random mutations yield periodic variations on the original theme; only a precious few result in an intelligent design that endures. Nature is the judge and every other living thing is the jury as newness strives for an unoccupied niche. The original mammal combined three key attributes that together resulted in permutation and dominance: warm blood; milk glands; and body hair. The last common ancestor of mammals and reptiles lolled about in muddy wetlands about 300 million years ago. It took 100 million years to make a mammal and another 20 million years to make a monotreme, the most primitive of extant mammals. Monotreme means ‘one-hole’ as they have a single cloaca, the external passage for intestinal, reproductive and urinary purposes, a trait shared with birds and reptiles. Cloaca means ‘sewer’ in Latin. The scant fossil record indicates that there was a proto-monotreme named Australosphenida that spread out over the continents that originally formed Gondwana, the southern portion of Pangaea. [1]

Only five monotreme species survived the scourge of extinctions caused by competitors, environmental changes, and cataclysms: the inimitable duck-billed platypus and four echidnas, or spiny anteaters. These animals stymied taxonomists for more than a century due to uncertainty about lactation and reproduction. They lack prominent nipples and they produce offspring infrequently and in seclusion. When the first platypus was sent to England in 1799, it was thought to be a hoax due to the improbable combination of a bird-like beak, the four feet of a quadruped, and a fishy aquatic habitat. The lactation issue was resolved by a British Army Lieutenant in New South Wales named Maule, who skinned his pet female platypus after she was accidentally killed and noted milk emanating directly from her abdomen. When this report reached London, the testimonial of an army officer was considered unimpeachable and the platypus was declared mammalian in 1831. The reproductive question was not resolved until 1884, by which time a British zoologist named Caldwell had killed over 1,400 monotremes in trying to solve the riddle. His brutal quest ended when he came across a female in the process of laying eggs. [2] The monotreme was confirmed as mammalian.

The importance of monotremes was noted by Darwin during his brief sojourn in Australia in 1836 on the seminal circumnavigation of HMS Beagle: “I had the good fortune to see several of the famous Platypus or Ornithorhyncus paradoxicus; certainly, it is a most extraordinary animal … A little time before this, I had been lying on a sunny bank and was reflecting on the strange character of the animals of this country as compared to the rest of the world.” [3] Some years later, he wrote “We here and there see a thin straggling branch springing from a fork low down on a tree, and which by some chance has been favored and is still alive on its summit.” [4] The platypus was pivotal to Darwin’s tree branch conceptualization of evolution. He was, in fact, consistent in his perspicacity throughout, as the dearth of fossils indicates that monotremes are, indeed, a straggling branch. They are the egg-laying mammals that evolved from the reptiles as lactating nurturers, setting the stage for the successor marsupials, the pouched mammals; marsupium is Latin for pocket.

Marsupial mammals are considerably more common than monotremes, with about 330 species almost exclusively in Australia, which has four orders, and South America, which has three. These groupings are sometimes referred to as Australidelphia and Ameridelphia. Marsupials and placental mammals share a single common ancestor that arose after the branching of the monotremes. To emphasize these relationships, the older monotremes are designated Prototheria, the more recent mammals Theria, marsupials are Metatheria, and placentals are Eutheria. The range and diversity of marsupial mammals offers one of the most compelling arguments for the veracity of Darwin’s epiphany about the origin of speciation. Evolution remains “only a theory” according to scientific rules that require experimental validation, since the clock can neither be set backward to precursors nor sped up from geologic to human time to prove it. A great many marsupials independently adapted over time to have the same body forms as placentals, a phenomenon called convergent evolution. The Tasmanian Devil is a pouched carnivore; kangaroos, koalas, and wombats are pouched herbivores. There are marsupial moles and anteaters in addition to Thylacine, an extinct marsupial dog, and Thylacoleo, an extinct marsupial lion. The independent evolution of sophisticated pouched mammals upends the widely held view that placental mammals are superior to marsupials. They are in reality a separate and essentially equal branch of the family tree with all of the necessary attributes to establish them as members in good standing of class Mammalia. [5]

The fossil record of the marsupial Metatheria provides a geographic date stamp for the breakup of Pangaea, punctuated by the Cretaceous-Paleogene (KPg) extinction event 66 million years ago. The oldest marsupial fossil found so far is from northeastern Eurasia in 125 million year old sedimentary rocks. The first Eutherian/placental fossil is “only” 35 million years older. All other known Metatherian fossils from the Mesozoic Era are from the northern continents of Laurasia; there are none in South America or Australia. In that this is their present habitat, mass migration must have occurred. The oldest marsupial mammal fossil in South America dates to 64 million years ago, indicating that it had crossed from North America when the two were connected for some time by a land bridge; the KPg extinction event must therefore have extirpated the northern non-migratory marsupial contingent while the Eutherians prevailed. The current theory is that marsupials from South America crossed to Australia via Antarctica when they were contiguous at temperate latitudes from about 55 to 35 million years ago; 45-million-year-old marsupial fossils have been found off the coast of Antarctica. As the only mammals in Australia, the Metatheria flourished as the climate changed from rainforest to open woodlands during the Miocene Epoch, producing marsupial megafauna in parallel with the placental megafauna of North America. Ten-foot kangaroos bounded about with giant koalas and a huge rhinoceros-like marsupial. The larger marsupials became extinct shortly after the arrival of the Aborigines about 40,000 years ago, the same fate that befell their placental brethren coincident with the arrival of the Native Americans 12,000 years ago. While predation by human hunters played a role in these extinctions, it was mostly a matter of environmental, climate-related habitat changes. The smaller marsupials, like the opossum, survived and thrived, and one species moved north to become the Virginia opossum. [6]

Opossum is from the Powhatan dialect of the Algonquian language group, a variation of oposoum meaning “white animal”; its coarse fur ranges from white in the northern reaches of its range to almost black further south. Captain John Smith, one of the founders of the Virginia colony at Jamestown in 1607, described an animal that “hath an head like a swine … tail like a rat … and the bigness of a cat” in a compiled list of Native American words. [7] It was officially classified as Didelphis virginiana, from the Greek di meaning “two” and delphys meaning “womb,” as the females have two uteri, a trait shared with all marsupials, with Virginia as the locale of its first sighting. While not as chimerical as the platypus, opossums do not lack distinction. Their pointed white faces and piercing, beady black eyes appear ghostly and ghoulish, particularly after dark when such things are imagined. The reverie is not diminished by the demonic prehensile tail that extends for half its body length. The caudal appendage is simian in form and function, adapted to grasping branches for balance and leverage in establishing tree cavity nests. Arboreal acrobatics are further enhanced by opposable thumbs on their hind feet, an attribute they share only with primates and a very few other species, mostly marsupial mammals. The satanic image is completed by the thoroughly fanged jaw that smiles with reptilian menace in a display of fifty teeth, more than any other North American mammal. For the Europeans who first came to the Americas in the sixteenth century, the opossum was sui generis and of consequent great interest.

Vicente Pinzón, the commander of Christopher Columbus’s ship Nina, brought an opossum back to the Spanish regents Ferdinand and Isabella, describing it as a “monster” with the “hinder of a monkey, the feet like a man’s, with ears like an owl; under whose belly hung a great bag, in which it carried the young.” As the Europeans had never seen a marsupial (there are none in Eurasia or Africa), the opossum came to epitomize the exotic fauna of the Americas. [8] The German cartographer Martin Waldseemüller, renowned for assigning the name America on his 1507 world map drawn from information gathered on the voyages of Amerigo Vespucci, included the opossum in a later woodcut as an evocative symbol of the New World. Sixteenth-century engravings of the opossum, such as Étienne Delaune’s “America,” depicted the peculiar animal with sharp fangs and exaggerated claws. Over the course of the next century, the myriad novel plants and animals of the New World inspired the nascent science of comparative anatomy and, ultimately, the evolutionary ideas of Darwin. The opossum was transmogrified from monster to mammal by Edward Tyson of the Royal Society, who described the anatomy of the female opossum in a treatise in 1698. He correctly surmised that the “feet like a man’s” were for grasping and the “great bag for the young” was a manifestation of maternal care. [9] The opossum was redeemed.

That the Virginia opossum is the only marsupial to thrive in North America is testimony to its synanthropic nature, flourishing in and around human habitation. Consequent to their omnivorous adaptability, fecund reproduction, and creative, steadfast defenses against predators, they proliferate. Opossums can and will eat almost anything that is organic, including but not limited to insects, snails, small mammals, fruit, eggs, fledgling birds, and, on occasion, cultivated crops. Those who choose to leave pet food or any other scraps in accessible areas will soon attract the attentions of opossums. They make their home nests as a sequestered sanctuary for their brood in the hollows of trees, taking advantage of prehensile tails and grasping rear feet to navigate the arboreal habitat. The female opossum is fertile at the age of six months and can have two litters every year; the gestation period is only about two weeks. The young are not born in the pouch but instinctively crawl there from the uterus; this is no mean task as they are blind, furless, and bee-sized, weighing 100 milligrams. Those that make it seek out one of 13 nipples (twelve in a circle around one in the center) where they are nurtured for several months. Although senescence is rapid (the average life span of an opossum is only about 3 years), the population is adequately replenished by the number of joeys and the nurturing nature of the species. [10]

Opossums are peerless masters of pantomime, the idiom “playing possum” a linguistic testimonial. They must be, as they lack innate physical defenses like porcupine spines, armadillo shells, or bobcat claws, are not very fast, and don’t burrow in hidden dens. When cornered, an opossum will play dead so realistically as to dissuade even the most determined predators. The mimicry is quite convincing. Stiffened in feigned rigor mortis, teeth bared in the throes of death, with the putrid smell of a cadaver from mephitic fluid emitted from glands near the anus, they look, feel, and smell like a dead animal. They stoically endure bashing, scratching, and biting, remaining mute and motionless until the ruse prevails and the assailant retires. Healed scars are the only evidence of a protracted struggle. A rabid pretense defense is used to deter aggression from less egregious threats. The symptoms of rabies are simulated by secreting excessive saliva at the corners of the mouth with the lips drawn back, baring the full suite of sharp teeth. In addition to overt defenses, opossums have the covert protection of an immune system that evolved resistance to the venomous bite of pit vipers, a property first discovered and patented in 1996. Due to the small number of snakebite victims in the United States and the expense of synthesis, there was no incentive at that time to exploit opossum-based antivenom. With the advent of biomedical engineering, an E. coli bacterium was modified to produce the necessary opossum peptides at a price affordable enough for potential use in India, where there are over 100,000 snakebite deaths every year. [11] Opossums have succeeded against all odds with a combination of chemistry and comic opera, surviving as the fittest.

Like the bison and the turkey, the opossum is an iconic native North American animal. It embodies the spirit of the continent as the home of immigrants, including those of Native American heritage whose Asian origins are only some 12,000 years removed, almost yesterday in the grand swath of earth-time. The opossum is an equally itinerant immigrant as the only marsupial amid a menagerie of competitive placental mammals of equal need to eat and reproduce. The ‘possum’ is central to the cultural cuisine of Appalachian and Ozark hill country, where it is hunted as a game animal and consumed as a choice entree according to recipes that have endured for generations. In January of 1909, President-elect William Howard Taft was served an eighteen-pound possum for dinner while visiting Georgia and was quoted in a New York Times article as having remarked “Well, I like possum, I ate very heartily of it last night.” Numerous live opossums were subsequently sent to the White House by his southern constituents in a show of kinship. A stuffed “Billy Possum” was created as an alternative to Roosevelt’s established “Teddy Bear,” which was also occasioned by a newspaper article. However, cuddly is an oxymoron for the rat-like, sneering possum and sales failed to meet expectations. [12] Walt Kelly’s Pogo is perhaps the most notable cultural testimony to the opossum; the kindly denizen of the Okefenokee Swamp famously said, “we have met the enemy and he is us” in the poster Kelly made for the first Earth Day in April 1970.

References:

  1. Weisbecker, V. and Beck, R. Marsupial and Monotreme Evolution and Biogeography. Nova Science Publishers, 2015
  2. Drew, L. I, Mammal: The Story of What Makes Us Mammals, Bloomsbury Sigma, London, 2017, pp 41-60.
  3. Keynes, R. D. ed. Charles Darwin’s Beagle Diary. Cambridge University Press. 1988.
  4. Darwin, C. Journal of Researches into the Natural History and Geology of the Countries Visited during the Voyage of HMS Beagle Round the World, John Murray, London, 1845.
  5. Weisbecker, V. and Beck, R. Op. cit.
  6. Ibid.
  7. Mithun, M. The Languages of Native North America. Cambridge University Press. 2001 p. 332
  8. https://www.motherearthnews.com/nature-and-environment/opossum-facts-behavior-and-habitat-zmaz03aszgoe
  9. Tyson, E. “Carigueya Seu Marsupiale Americanum, or The Anatomy of an Opossum, Dissected at Gresham-College by Edw. Tyson, M. D., Fellow of the College of Physicians, and of the Royal Society, and Reader of Anatomy at the Chyrurgeons-Hall in London,” Philosophical Transactions of the Royal Society April 1698 no. 239 p 102 
  10. https://opossumsocietyus.org/general-opossum-information/opossum-reproduction-lifecycle/
  11. Davenport, M. “Opossum Compounds Isolated to Help Make Antivenom” Scientific American, March 2015.
  12. Fuller, J. “Possums and Politicians? It’s Complicated.” Washington Post, 24 Sep 2014.

Stinkhorns

Common Name: Stinkhorn, Carrion fungus – Stink can mean either emitting a strong, offensive odor or, ethically, to be offensive to morality or good taste. Both interpretations apply according to the context herein. Horn is a foundational word of the Indo-European languages that refers to the bony protuberances that adorn the heads of many ungulates like deer. In that it also is associated with supernatural beings like the devil, it may be that this connotation was the original intent for its use. Devil’s dipstick is an idiomatic name for some species of stinkhorn that suggest this interpretation.

Scientific Name: Phallaceae – Phallus is the Greek word for penis. There can be no doubt that the family name was selected due to verisimilitude, the remarkable resemblance of the stinkhorn to male, mammalian, and notably human anatomy.

Potpourri:  Stinkhorns are a contradiction in terms. For some they are the most execrable of all fungi and for others they are elegant, one species even being so named (see Mutinus elegans). They range in size and shape from the very embodiment of an erect male canine (M. caninus named for its resemblance to a dog penis) or human penis (like Phallus ravenelii in above photograph) to colorful raylike extensions ranging outward and upward like a beckoning, stinking squid (picture at right). In every case they are testimony to the creativity of the natural forces of evolution, seeking new ways to survive the rigors of competition. Like the orchids that extend in intricate folds and colors of “intelligent design” to attract one particular insect to carry out pollinator duties, stinkhorns have become “endless forms most beautiful and most wonderful” that defy the odds of probability that must therefore lead to evolution as an explanation. [1] The priapic and tentacled extensions can only have been the result of successful propagation for the survival of the species, just like Homo erectus.

The phallic appearance of some stinkhorns is not as outré as it seems at first blush. The priapic shaft elevates spores to promote dissemination. Like a fungal Occam’s razor, stinkhorns evolved the simplest solution―growth straight upward with no side branches, placing the spore gleba at the bulbous apex. The fungus accomplishes this in a manner similar to humans, using water pressure rather than blood pressure to hold the shaft erect; hydrostatic rather than hemodynamic. The phenomenon is part of the fungal life cycle that starts in the mycelium, the underground tangled mass of threadlike hyphae that is the “real fungus.” The stinkhorn starts in the mycelium as an egg-shaped structure called a primordium containing the erectable shaft surrounded by spore-laden gleba held firmly in place with jellied filler cloaked with a cuticle. It is the fruiting body of the fungus. When environmental conditions dictate, the “egg hatches,” and the water-pressurized shaft grows outward and upward, lubricated by the jelly, at a rate of about five inches an hour until it reaches flyover country. Here the biochemistry of smells, including hydrogen sulfide (rotten eggs), mercaptan (rotting cabbage), and some unique compounds aptly named phallic acids, draws flies from near and far. In ideal conditions, the slime and spores will all be gone in a few hours, and the bare-headed implement of reproduction will soon become flaccid.

Stinkhorns belong to a diverse and now obsolescent group of fungi called Gasteromycetes, from gaster, Greek for “belly,” and mykes, Greek for “fungus.” With the translated common name stomach fungi, they are characterized by the enclosure of their spores inside an egg-shaped mass called a gleba (Latin for “clod”). Hymenomycetes alternatively have their spores arrayed along a surface called a hymenium (Greek for “membrane”) and are by far the larger grouping. The hymenium surface can take the form of gills or pores on the underside of mushroom caps or any of a wide range of other shapes ranging from the fingers of coral fungi to the cups of tree ear fungi. The Gasteromycetes include puffballs and bird’s nest fungi. [2] In the former, the ball of the puffball is the gleba. On aging, a hole called an operculum forms at the top so that the spores can be ejected (puffed) by the action of falling raindrops for wind dispersal. Each “egg” in the bird’s nest is a gleba and is also forced out by the action of falling rain. The projectile gleba affixes to an adjacent surface from which spores are then also air dispersed. Stinkhorns evolved to distribute the spores from the gleba along a completely different random evolutionary path: they attract insects to the stink at the top of the horn.

Flowering plants called Angiosperms are ubiquitous, successful in their partnership with many insects to carry out the crucial task of pollination. While this is primarily a matter of attracting bees and bugs with colorful floral displays and tantalizing scents promising nectar rewards, there are odoriferous variants. Skunk cabbage earned its descriptive name from the fetid aroma that attracts pollinating flies to jumpstart spring with winter’s snow still on the ground. Another member of the Arum Family, the cuckoopint, attracts flies with its smell and then entraps them with a slippery, cup-shaped structure embedded with downward pointing spines, releasing them only at night after they are coated with pollen to then transport. Stinkhorns produce an odoriferous gelatinous slime containing their reproductive spores to which some insects, mostly flies, are drawn. It is not clearly established whether the flies eat the goo and later defecate the spores with their frass [3] or whether they are only attracted by the smell, perform a cursory inspection, and then fly off with spores that “adhere to the bodies of the insects and are dispersed by them.” [4] Some insight can be gained from entomology, the study of insects. Do they eat the slime or do they merely wallow in it?

The primary insects attracted to stinkhorn fungi are the blow flies of the Calliphoridae Family and the flesh flies of the Sarcophagidae Family. The term blow fly has an interesting etymology that originates with their characteristic trait of laying eggs on meat that hatch into maggots, the common name for fly larvae. Any piece of meat left uncovered in the open long enough for the fly eggs to hatch was once called flyblown, which gradually took on the general meaning of anything tainted. The reversal of the festering meat term gave rise to the term blow fly for its cause. As a purposeful digression, wounded soldiers in the First World War left unattended for hours on the battlefield were sometimes found to be free of the infections that plagued those treated immediately because the blow fly maggots consumed their necrotic tissue. It is now established that the maggots also secrete a wound healing chemical called allantoin (probably to ward off competing bacteria) and they are sometimes intentionally used to treat difficult infections. Flesh flies, as the family name suggests (Sarcophagidae means flesh eating in Greek), are also drawn to carrion to lay eggs for their larvae to eat. [5] If blow flies and flesh flies are attracted to stinkhorns due to the smell of rotting meat, they would presumably lay eggs. So the conundrum is: what happened to the maggots? While eggs could hatch in a few days and larvae would feed for a week or two, stinkhorns last for only several days, with their slime removed in half that time.

Field experiments have verified that stinkhorn fungal spores are indeed ingested by flies. Drosophila, the celebrated fruit fly of early genetic studies, had over 200,000 stinkhorn spores in their intestines when dissected in an experiment. Given the volume available in a fruit fly gut, this quantity adds some perspective to the vanishingly small size of spores. The larger blow flies were found to contain more than a million and a half spores in a similar field evaluation. It was further demonstrated that spores passing through insects and defecated in their frass were fully functional. [6] This is not too surprising, as spores evolved for survival under hot, cold, or desiccated environmental extremes; the fly gut is relatively benign by comparison. It is true, then, that flies eat spore-bearing material. It is equally evident that there are no maggots in stinkhorn slime, even though laying eggs is what the average blow fly does when offered smelly meat. Diversity provides a reasonable basis for this contradiction. There are over 1,000 species of blow fly, each to some extent seeking survival within a narrowed niche. Flies of the order Diptera are noted for their propensity to mutate and adapt. Some species of blow fly and flesh fly deviated from the norm to consume stinkhorn slime for nutritional energy and lay their eggs elsewhere. The stinkhorn and the flies it attracts are an example of mutualism. Flies are attracted to and gain nutrition from what is essentially a fungal meat substitute and the fungus gains spore dispersion. Many fungi are excellent sources of protein, containing all eight essential amino acids needed by humans. Flies need protein too.
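To put the spore counts in perspective, the back-of-the-envelope sketch below estimates the total volume occupied by a million and a half spores. The spore dimensions used are assumed typical values for Phallus-type spores of a few micrometers, not measurements from the cited study.

```python
# Back-of-the-envelope volume occupied by stinkhorn spores in a blow fly gut.
# The spore dimensions below are assumed typical values, not measurements
# from the cited field study.
import math

def ellipsoid_volume_um3(length_um: float, width_um: float) -> float:
    """Approximate a spore as a prolate ellipsoid and return its volume in cubic micrometers."""
    a = length_um / 2          # semi-axis along the long dimension
    b = c = width_um / 2       # semi-axes across the short dimension
    return (4 / 3) * math.pi * a * b * c

spore_count = 1_500_000                        # figure reported for blow flies [6]
spore_volume = ellipsoid_volume_um3(3.5, 1.5)  # assumed spore of about 3.5 x 1.5 micrometers
total_mm3 = spore_count * spore_volume * 1e-9  # 1 cubic mm = 1e9 cubic micrometers

print(f"One spore is roughly {spore_volume:.1f} cubic micrometers;")
print(f"1.5 million spores occupy only about {total_mm3:.3f} cubic millimeters.")
```

Under these assumptions, a million and a half spores amount to only a few thousandths of a cubic millimeter, which is why a single fly gut can carry so many of them.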

The startling, trompe l’oeil appearance of a penis in the middle of a forest no doubt attracted humans as soon as there were humans to attract. The first written account of stinkhorns is Pliny the Elder’s Natural History, written in the first century CE based on observations made on his military travels throughout the Mediterranean basin. John Gerard’s sixteenth century Herball identifies the stinkhorn as Fungus virilis penis arecti forma, “which wee English call Pricke Mushrum, taken from his forme.” [7] The bawdiness of Shakespeare’s rude mechanicals gave way to Victorian Age corsets and high collars, where there was no place for a “prick mushroom.” Charles Darwin’s daughter is credited with the ultimate act of puritan righteousness. Ranging about the local woods “armed with a basket and a pointed stick,” she sought the stinkhorn, “her nostrils twitching.” On sighting one she would “fall upon her victim, and then poke his putrid carcass into her basket.” The day ended ceremoniously with the day’s catch “brought back and burnt in the deepest secrecy on the drawing room fire with the door locked because of the morals of the maids.” [8] As the modern era loomed and sexuality came out of the bedroom onto the dance floors of the roaring twenties, stinkhorns regained respectability.

The Doctrine of Signatures was the widely held belief that God intentionally marked, or signed, all living things to help humans determine how best to exploit them. To those who subscribed to this philosophy, a penis shape could only mean good for sexuality, which in the rarefied view of the pious could refer only to procreation. Eating stinkhorns undoubtedly arose as either a way to enhance virility or as an aphrodisiac, and probably both. Dr. Krokowski, in Thomas Mann’s The Magic Mountain, lectures about a mushroom “which in its form is suggestive of love, in its odour (sic) of death.” [9] The dichotomy of budding love and the stench of death leaves a lot of room for speculation across the middle ground. Stinkhorn potions have been proffered as a cure for everything from gout to ulcers and proposed as both a cure for cancer and the cause of it. [10] There is insufficient research to conclude that any of this is true.

Stinkhorns as food, from both a nutritional and a gustatory perspective, are at the fringes of the brave new world of mycophagy, fungus eating. Food is a matter of culture that extends from the consumption of frog legs in France to the mephitic surströmming of Sweden. Mushrooms have been on the menu for centuries, from the shiitake logs of Japan in Asia to the champignons of Parisian caverns in Europe, but almost everything else was considered a toadstool. From the strictly aesthetic standpoint, the consumption of the stinkhorn “egg” dug up before it has a chance to become a smelly phallus has some appeal. Charles McIlvaine, the doyen of mycophagists whose goal at the dawn of the last century was to make the public aware of the “lusciousness and food value” of fungi, describes stinkhorn eggs as “bubbles of some thick substance … that are very good when fried.” His conclusion is that “they demand to be eaten at this time, if at any.” [11] Of more recent note, Dr. Elio Schaechter wrote that sautéing stinkhorn eggs in oil resulted in “a flavorful dish with a subtle, radish-like flavor. The part of the egg destined to become the stem was particularly crunchy, resembling pulled rice cakes.” [12] I am reminded of a Monty Python episode in which Terry Jones is upbraided for selling chocolate-covered frogs made from real frogs, the bones necessary to give the confection a proper crunch.

Netted Stinkhorn

Not all members of the Stinkhorn family look like a penis. Some have lacey shrouds that extend downward from the tip like a hoop skirt with a hint of femininity. These scaffolds are not for decoration but for scaling. In that it has been established that each stinkhorn species is in partnership with some form of gleba-eating insect, the rope ladder can only be there to allow crawling bugs like carrion beetles to climb up to the top to access the sporulated slime. The local species is Dictyophora duplicata (net-bearing, growing in pairs), which is commonly known as the netted stinkhorn or wood witch. After the bugs have finished with their slime meal, the result reminds some of a bleached morel. While netted stinkhorns are relatively rare in North America, they are abundant in Asia.

Bamboo Fungus

The netted stinkhorn called Zhu Sun, meaning bamboo fungus for its native habitat, is one of the most sought-after delicacies of Chinese cuisine. It featured prominently in banquets of historical importance, including the 1971 visit of Henry Kissinger, then the U.S. National Security Advisor, that opened the door to renewed diplomatic relations with China during the Nixon administration. Kissinger reputedly praised the meal for its quality, but it was never clear whether this was a matter of diplomacy or taste. Part of the bamboo stinkhorn’s esteem stems from its health benefits according to ancient Chinese medicine. Recent research has confirmed that consumption correlates to lower blood pressure, decreased blood cholesterol, and reduced body fat. In the 1970s the price of bamboo fungus was over $700 per kilogram, but commercial cultivation methods were subsequently developed, driving the price down to less than $20 per kilogram. [13] It can be found in many Asian markets. The back of the package depicted above offers that “bamboo fungus is a magical fungus. It grows in a special environment, free from pollution. Once mature, it emits a notable light fragrance. Its shape is light weight. Its flavor is delicious. Its texture is silky. It is very nutritious. It is an ideal natural food.” Kissinger may or may not agree.

References

1. Darwin, C. On the Origin of Species, Easton Press, Norwalk, Connecticut, 1976 (original London 24 November 1859). P. 45.

2. Kendrick, B. The Fifth Kingdom, 3rd Edition, Focus Publishing, Newburyport, Massachusetts, 2000. pp 98-101.

3. Wickler, W. “Mimicry” Encyclopedia Britannica Macropedia 15th Edition, William Benton Publisher, Chicago, 1974, Volume 12, p 218.

4. Lincoff, G. National Audubon Society Field Guide to North American Mushrooms, Alfred A. Knopf, New York, 1981, pp 831-835.

5. Marshall, S. Insects, Their Natural History and Diversity, Firefly Books, Buffalo, New York, 2006, pp 407-408, 481-484.

6. O’Kennon, B. et al, “Observations of the stinkhorn Lysurus mokusin and other fungi found on the BRIT campus in Texas” Fungi, Volume 13, Number 3, pp 41-48.

7. Money, N. Mr. Bloomfield’s Orchard, Oxford University Press , New York, 2002, pp 1-8.

8. Raverat, G. Period Piece: A Cambridge Childhood, Faber and Faber, London, 1960, p 136.

9. Mann, T. The Magic Mountain, translated by John E. Woods, Alfred A. Knopf, New York, 1927, p 364.

10. Arora, D. Mushrooms Demystified, 2nd Edition, Ten Speed Press, Berkeley, California, 1986, pp 766-778.

11. McIlvaine, C. and Macadam, K. One Thousand American Fungi, Dover Publications, New York, 1973 (originally published in 1900 by Bowen-Merrill Company), pp xiii, 568-576.

12. Schaechter, E. In the Company of Mushrooms, Harvard University Press, Cambridge, Massachusetts, 1997, pp 168-173

13. Chang S. and Miles P. “Dictyophora, formerly for the few”. Mushrooms: Cultivation, Nutritional Value, Medicinal Effect, and Environmental Impact (2nd edition), CRC Press, Boca Raton, Florida, 2004, pp 343-355.

Wineberry

Common Name: Wineberry, Wine raspberry, Japanese wineberry, Purple-leaved blackberry, Hairy bramble – Wine is the color of the tiny hairs that cover the stem and carpels, a dark red similar to that attributed to red/burgundy grapes. Berry is a general term applied to any small fruit. It originally derived from the Gothic word weinabasi, a type of grape, evolving to the Old English berie. Berry is one of only two native words for fruit, referring to anything that was like a grape. The other is apple, given to larger, pome-like fruits. Weinabasi → Wineberry.

Scientific Name: Rubus phoenicolasius – Rubus is Latin for “bramble-bush,” of which the blackberry is the best known of the many types of prickly shrubs that comprise the genus. The species name means “purple-haired,” referring to the reddish-purple hairs that cover the canes. [1] The Greek word for the color purple is phoinik, which was also the origin of Phoenicia, the ancient land on the eastern Mediterranean Sea coast, present day Lebanon. This littoral area was the source of sea snails from which a very valuable purple dye was extracted. Clothing dyed purple was thus a symbol of wealth and prestige, the term “royal purple” a vestige of its importance. Before the advent of synthetic chemical dyes in the second half of the nineteenth century, color could only be naturally sourced, like blue from indigo.

Potpourri: Wineberry would not make a very good wine and it isn’t really a berry. The first wines were naturally fermented thousands of years ago absent any knowledge of the pivotal role of yeast. The sugars in fruit were the food source for natural local yeasts that gave off alcohol as a byproduct of their metabolism. Grapes are the only common and prolific fruits that have enough natural sugar to produce the “weinabasi” libation discovered by fortuitous accident. Wineberries, like all of the other fruits from which wines might be made, must be supplemented with extra sugar (a process called chaptalization) to feed the yeast fungus. Wineberry wine, albeit with a tart berry-like taste, would be a far cry from the rich flavor that the best French terroir can impart. A berry is a fruit with seeds embedded in the pulpy flesh, like grapes, watermelons, and tomatoes. Wineberry, like all brambles that comprise the genus Rubus, notably blackberry and raspberry, is an aggregate fruit with a multitude of tiny, clumped “berries.” One could presumably refer to one wineberry fruit as wineberries. Regardless of its unlikely name, wineberry has spread far and wide, becoming enough of a nuisance to be declared an invasive species in the Appalachian Mountain and coastal regions of the Mid-Atlantic states, including Maryland and Virginia. [2]
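To give a sense of what chaptalization entails in practice, the sketch below uses the common winemaking rule of thumb that roughly 17 grams of fermentable sugar per liter of must yields about 1% alcohol by volume. The rule of thumb, the target alcohol level, and the assumed natural sugar content of wineberry juice are illustrative assumptions, not measured values.

```python
# Rough chaptalization estimate: how much sugar to add per liter of must to
# reach a target alcohol level, using the common rule of thumb that about
# 17 g/L of fermentable sugar yields roughly 1% alcohol by volume (ABV).
# The natural sugar figure for wineberry juice below is an assumption.

SUGAR_PER_PERCENT_ABV = 17.0  # g/L per 1% ABV, approximate rule of thumb

def sugar_to_add(natural_sugar_g_per_l: float, target_abv: float) -> float:
    """Grams of sugar to add per liter to reach target_abv (never negative)."""
    needed = target_abv * SUGAR_PER_PERCENT_ABV
    return max(0.0, needed - natural_sugar_g_per_l)

# Example: juice assumed to hold ~60 g/L natural sugar, aiming for a 12% ABV wine.
print(f"Add about {sugar_to_add(60.0, 12.0):.0f} g of sugar per liter.")
# Ripe wine grapes at roughly 200 g/L or more would need little or no addition.
```

Under these assumptions, a tart, low-sugar fruit like wineberry needs well over 100 grams of added sugar per liter, which is why the result owes as much to the sugar bag as to the berry.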

Wineberries are native to central Asia, extending eastward to the Japanese archipelago. They were intentionally introduced into North America by horticulturalists in the 1890s to hybridize with native Rubus plants. The goal was to improve on nature’s accomplishment by crossing native plants with introduced species to produce new cultivars with a greater yield of bigger berries and/or resistance to plant diseases and pests. [3] The compelling rationale for new edible crops at this point in time was that world population had surpassed one billion, eliciting the global food shortage concerns first raised by Thomas Malthus one hundred years earlier. The eponymous Malthusian principle, that population rose geometrically (1, 2, 4, 8 …) while agriculture rose only arithmetically (1, 2, 3, 4 …), leading to inevitable famine, was the impetus for improvements in agricultural products and methods. The first Agricultural Experimental Station in the United States was inaugurated in New York in 1880 with the express purpose of addressing this challenge. Its director E. Lewis Sturtevant established the precept of conducting experimental agriculture to develop new plant foods. By 1887, with 1,113 cultivated plants and another 4,447 plants with edible parts, research focus shifted to developing fruit varieties. The bramble fruits of the genus Rubus, with about 60 known species and a well-established penchant for hybridization, were considered good candidates for experimentation. Wineberries from Asia became part of the mix. [4] As it turned out, the first green revolution of manufactured fertilizer using the Haber-Bosch process (see Nitrogen Article) and the second green revolution internationalizing Norman Borlaug’s high yield wheat put off the impending Malthusian famine, at least so far. There is every reason for Rubus breeding to continue. [5]

Wineberries nearly ripe beneath sepal husks

Bramble plants of the genus Rubus are so successful at dominating disturbed habitats that bramble has become a byword for any dense tangle of prickliness. Wineberry is only a problem because it is better at “brambling” than many other species, even though the stalks are covered with wine-colored hairs and have no prickles. It spreads both vegetatively with underground roots and with seeds spread in the feces of frugivores, animals that eat fruit. The wineberry plant consists of a rigid stem called a cane that extends upward, unbranching at first, reaching lengths of up to 9 feet. Vegetative spreading is enhanced by tip-rooting, which occurs when the longer canes (> 3 feet) arch over and reach the ground, where adventitious roots form to establish an extension. In dense clusters, tip-rooting predominates. It takes two years to make a wineberry, as the first-year primocanes devote all growth to cane extension and leaf formation for photosynthesis. The second-year floricanes become woody and produce flowers that become fruits if fertilized. Wineberry flowers are hermaphroditic and are therefore less dependent on pollinators, since there is no need to transport male pollen from the stamen of one flower to the female pistils of another. [6] Each wineberry fruit is protected by husks densely covered with the signature wine-colored hairs that are remnants of the sepals that comprise the calyx at the base of the flower. [7]

Wineberry is just one of many invasive species that have come to dominate large swaths of the forest understory in the twenty-first century. Like kudzu planted for soil remediation during the Dust Bowl era and plantain imported as a vital European medicinal, wineberry was introduced with good intention―the improvement of native berry stocks through hybridization. But, as has become increasingly obvious, the complexities of local ecology can result in mountains from molehills as “Frankenplants” take advantage of their reproductive strengths over the competition. There are a number of reasons for the success of wineberry in its unwitting but instinctual quest to become the one and only species wherever it can. It is an aggressive pioneer plant in any disturbed area. One study in Japan found that wineberry covered almost two percent of an extensive ski area after clearcutting, showing high phenotypic plasticity in its adaptations. Its tolerance of the shade cast by tree growth during the old-field succession of open areas promotes the dense wineberry thickets that are the hallmark of its aggression. [9] On the other hand, all Rubus brambles are apt to dominate disturbed areas like roadside cuts, where one typically finds both raspberries and blackberries in addition to wineberries. There is some irony in that recent DNA analysis of the genus indicates that the first Rubus brambles evolved in North America and subsequently invaded Eurasia without any human intervention. They are brambles, after all. [10]

A bramble of wineberry canes

On the positive side, wineberries are tasty and nutritious, providing a snack for the passing hiker and food for the birds and the bees. A popular field guide to edible plants includes wineberries with raspberries and blackberries as uniformly edible, notably “good with cream and sugar, in pancakes, on cereal, and in jams, jellies, or pies.” [11] The consumption of Rubus fruits by humans precedes the historical record. Given that Homo erectus evolved from the fruit-eating great apes, the impetus would be a matter of wired instinct. It is hypothesized that the reason that primates are the only mammals with red color vision is evolutionary pressure to find usually reddish fruit for sustenance and survival in the jungle forest. Historical documentation of the consumption of aggregate fruits was established by Pliny the Elder in 45 CE. He noted in describing raspberries that the people of Asia Minor gathered what he called “Ida fruits” (from Turkey’s Mount Ida). The subgenus of raspberries which includes wineberries is appropriately named Idaeobatus. It is probable that the Romans began to cultivate some form of raspberry as early as 400 CE. [12] Rubus aggregates were also important medicines in addition to their more obvious nutritional attributes. They contain secondary metabolites such as anthocyanins and phenolics which are strong antioxidants, contributing to general good health. Native Americans used them for a variety of ailments ranging from diarrhea to headache, although there is no indication that the effects were anything beyond placebo. [13]

All things considered, it is hard to get worked up over wineberries as pernicious pests. Granted, they tend to spread out and take over, but then again, so do all of the other brambles. In most cases, the area in question falls into the category of a “disturbed” habitat. While this could be due to storm damage, it is almost universally due to human activities. Road cuts through the forest may be necessary for any number of reasons, but they are initially unsightly tracts of rutted mud unsuited for hiking. Once nature takes over, the edges, now in direct sunlight, become festooned with whatever happens to get there first and grows fast. And what could be more appropriate than a bunch of canes covered with wine-colored fuzz bearing sweet fruits?

References: 

1. https://npgsweb.ars-grin.gov/gringlobal/taxon/taxonomydetail?id=32416

2. https://plants.sc.egov.usda.gov/home/plantProfile?symbol=RUPH

3. “Wineberries” Plant Conservation Alliance, Alien Plant Working Group. 20 May 2005.

4. Hedrick, U. “Multiplicity of Crops as a Means of Increasing the Future Food Supply” Science, Volume 40 Number 1035, 30 October 1914, pp 611-620.

5. Foster, T. et al “Genetic and genomic resources for Rubus breeding: a roadmap for the future” Horticulture Research, Volume 116, 15 October 2019 https://www.nature.com/articles/s41438-019-0199-2   

6. Innes, R.  Rubus phoenicolasius. In: Fire Effects Information System, [Online]. U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station, Fire Sciences Laboratory (Producer) 2009. https://www.fs.usda.gov/database/feis/plants/shrub/rubpho/all.html   

7. Swearingen, J., K. Reshetiloff, B. Slattery, and S. Zwicker  “Plant Invaders of Mid-Atlantic Natural Areas”. Invasive Plants of the Eastern United States. 2002 https://www.invasive.org/eastern/midatlantic/ruph.html   

8. Wilson, C. and Loomis, W. Botany, 4th Edition, Holt, Rinehart and Winston, New York, 1967, pp 285-304.

9. Innes op cit.

10. Carter, K. et al. “Target Capture Sequencing Unravels Rubus Evolution” Frontiers in Plant Science, 20 December 2019, Volume 10, page 1615.

11. Elias, T. and Dykeman, P. Edible Wild Plants, A North American Field Guide, Sterling Publishing Company, New York, 1990, pp 178-185.

12. Bushway, L et al Raspberry and Blackberry Production Guide for the Northeast, Midwest, and Eastern Canada, Natural Resource, Agriculture, and Engineering Service (NRAES) Cooperation Extension, Ithaca , NY. May, 2008. https://www.canr.msu.edu/foodsystems/uploads/files/Raspberry-and-Blackberry-Production-Guide.pdf     

13. Native American Ethnobotany http://naeb.brit.org/uses/search/?string=rubus%20&page=1