Chipmunk

The most notable feature of chipmunks is the prominent dorsal striping.

Common Name: Chipmunk, Ground squirrel, Grinny, Hackle, Chippee, Rock squirrel – Chipmunk is an Anglicized version of the name given to the animal by Native Americans. It was originally ajidamoo in the Ojibwa dialect of the Algonquian language group, meaning red squirrel, and is traced to an earlier dialect as acitamo, which literally meant “upside down to eat.” With the removal of the two “a” sounds it became jidmoo or citmo, which evolved to chipmunk, probably owing to its similarity to the existing English words chip and mink. It first appeared in print in The Huron Chief, a poetry collection by Adam Kidd published in 1830. [1]

Scientific Name: Tamias striatus – The generic name is Greek for “steward” or “dispenser,” a reference to the food-hoarding behavior of chipmunks. They are stewards of their cache, keeping it replenished over summer and fall and dispensing from it as needed in winter and spring. The specific epithet striatus is Latin for “striped,” emphasizing the most notable feature of the chipmunk―a striped ground squirrel.

Potpourri: Chipmunks encountered along hiking trails offer only a streak of striped fur crossing a fallen log to a hidden den. While they are fervid foragers, packing their ample cheek pouches with seeds and nuts, they are wary of looming, large animals; rodents are an important source of food for a whole range of carnivorous animals and birds of prey. The brief encounter is sometimes accompanied by a series of sharp sounds, either a high-pitched chirping or a staccato chipping or both. While it is contended by some and believed by many that this is the reason they are called chipmunks [2], linguists trace the name to the Algonquian word acitamo (as per the etymology above). There are no chipmunks in Europe, so the immigrant colonists had never seen one and co-opted the existing Native American name.

Chipmunks are in the Order Rodentia and the Family Sciuridae and are therefore closely related to squirrels and marmots. The rodents are the largest group of mammals, comprising roughly 50 percent of all mammal species, and closer to 70 percent if measured by the number of individual animals, on account of their geometric proliferation. Like all rodents, chipmunks have incisors that grow at a rate of several millimeters a week throughout their lives (less during hibernation), which both promotes and necessitates frequent gnawing on hard objects. [3] Phylogenetic research over the last several decades has revealed that rodents are closely related to primates in the tongue-twisting superorder Euarchontoglires. Euarchonta is a grand order of mammals consisting of primates and, surprisingly, tree shrews and the small gliding mammals called colugos from Southeast Asia. Glires is a genetically related group, or clade, consisting of rabbits and rodents. In spite of our anthropocentric world view (Euarchonta means true rulers), humans share a common ancestor with rodents like chipmunks dating from the late Cretaceous Period at the end of the Mesozoic Era dominated by dinosaurs. [4] The behavioral characteristics of chipmunks therefore offer some insight into the evolutionary foundations of all Euarchontoglires. A 130-million-year-old fossil rodent skull was recently found in Wyoming and nicknamed the mutant ninja chipmunk for its pronounced front teeth and saw-edged rear teeth, “the terror of the underbrush.” Evolution is relentlessly creative. [5]

Chipmunks are hermit hoarders with masterful survival skills. They live alone in underground tunnels about two inches in diameter and up to thirty feet in length. The tunnels are dug with sharply clawed front feet, the excess dirt pushed to the surface and carried away to conceal the openings. Multiple exits afford escape from predators that dig like dogs or slither like snakes. Tunnel depth ranges from two to three feet according to the prevailing winter temperatures. The tunnels are fashioned with several sleeping areas lined with soft leaves and several food storage areas, normally at the lowest point in the tunnel to keep the cache cool and fresh.

Chipmunks are omnivorous. Their primary diet of nuts, seeds, and berries is occasionally augmented by the consumption of fungi, slugs, insects, and even small birds and snakes. As the colder and shorter days of fall herald the coming of winter, chipmunks become consummate foragers, storing nonperishable foodstuffs in their tunneled lairs for midwinter nourishment. The cheek pouches of the chipmunk are capacious, as confirmed by field measurements of 31 corn kernels, 70 sunflower seeds, or 32 beech nuts per load. Foraging experiments revealed that a chipmunk can deliver about 4,600 kilojoules (roughly 1,100 kilocalories, the “calories” of everyday diet parlance) of food energy to the larder every day. The resulting cache can contain over 5,000 nuts. Chipmunk populations rise and fall with the availability of nuts and seeds. Accordingly, when oak and hickory trees produce an abundant crop of nuts every three to five years, a phenomenon known as masting, chipmunks thrive. [6]
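The kilojoule-to-kilocalorie figure above is a simple unit conversion. A minimal sketch, assuming the standard thermochemical factor of 4.184 kJ per kilocalorie (a value not stated in the text):

```python
# Convert the daily foraging delivery from kilojoules to kilocalories,
# the "calories" of everyday diet parlance. The 4.184 kJ/kcal factor is
# an assumption (the standard thermochemical value), not from the text.
KJ_PER_KCAL = 4.184

def kj_to_kcal(kilojoules: float) -> float:
    """Return the kilocalorie equivalent of an energy value in kilojoules."""
    return kilojoules / KJ_PER_KCAL

daily_delivery_kcal = kj_to_kcal(4600)  # roughly 1,100 kcal per day
```

The conversion shows that the quoted 4,600 kJ corresponds to about 1,100 food calories, roughly half the daily energy budget of an adult human, gathered by an animal that fits in a cheek-pouch-sized palm.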

Chipmunks do not hibernate per se in the winter; they enter a torpid state called superficial or shallow hibernation. True hibernation occurs when body temperature is lowered to near that of the environment with concomitant physiological changes, such as a reduction in breathing to about three irregular breaths per minute. Only three orders of mammals display true hibernation: Insectivora, like the hedgehog; Chiroptera, the bats; and some Rodentia, like the marmot and the ground squirrel. Shallow hibernation is a compromise between full activity and true hibernation, the latter being a more precarious mode of survival from which a significant number of animals never awaken. The chipmunk retires to its tunnel nest in the late fall and enters a deep sleep during which its body temperature is lowered and its metabolism slowed. About every two weeks, it awakens, snacks on cached food in a half-somnolent state, and then resumes its long sleep. The average daily energy consumption of an active chipmunk, about 163 kJ, is reduced to about 25 kJ in winter torpor. [7]
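A back-of-the-envelope sketch of the energy budget implied by the figures above (163 kJ per active day, 25 kJ per torpid day, and the roughly 4,600 kJ daily foraging delivery from the previous paragraph); the variable names are mine, not the source's:

```python
# Energy-budget arithmetic from the figures quoted in the text.
ACTIVE_KJ_PER_DAY = 163    # average consumption of an active chipmunk
TORPOR_KJ_PER_DAY = 25     # average consumption in winter torpor
FORAGED_KJ_PER_DAY = 4600  # energy delivered to the larder per fall day

# Torpor cuts energy use by roughly 85 percent...
savings_fraction = 1 - TORPOR_KJ_PER_DAY / ACTIVE_KJ_PER_DAY

# ...so one day of autumn foraging can bank enough cached energy
# for on the order of 184 days of winter torpor.
torpor_days_funded = FORAGED_KJ_PER_DAY / TORPOR_KJ_PER_DAY
```

On these numbers, a few weeks of dedicated fall foraging comfortably covers an entire winter underground, which is presumably why the strategy persists.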

The only time chipmunks associate with one another, apart from confrontations over territory, is during the spring mating season, when female pheromones attract males that aggressively compete for paternity with loud chipping noises, chasing reversals, and even biting. After mating with multiple males during the six-to-seven-hour period of estrus, a female returns to her own burrow to give birth to about five blind and completely helpless young (which could be but are not called chipmunkies) that are nurtured absent any male parental care or provision. After about 40 days, they are weaned and abandoned, their mother establishing a new burrow elsewhere, sometimes to produce a second litter. One in five survives the first year, resulting in a stable population of about three adult chipmunks per acre in suitable habitats.

The coloration of western chipmunks is more muted, although this picture from Wyoming is of the closely related golden-mantled ground squirrel.

For a small and relatively insignificant rodent, the chipmunk's evolution and classification have aroused serious scientific inquiry for more than a century. A USDA survey in 1930 noted that “these animals, so attractive to every lover of nature” had been analyzed using “modern methods.” A total of 14,554 chipmunk specimens were evaluated to conclude that there were 65 species and subspecies in three genera: Eutamias in the eastern regions of Asia; Neotamias in western North America; and Tamias in eastern North America (the prefix eu means true or good, and neo means new). [8] The most notable physical difference was that chipmunks from humid areas were richly colored compared to the pallid hues of those from dry regions. [9] However, there is no universal consensus concerning chipmunk taxonomy, and many references place all 23 species of chipmunk in the single genus Tamias. One respected field guide addresses the similarity in the appearance of the western variants “with the shape of the penis (or baculum) and the call often serving as the basis for identification.” These species are generally confined to small, isolated areas along the west coast and probably evolved in isolation after geographic separation. [10]

Chip ‘n’ Dale

The chipmunk is prominent in the legends of Native Americans. One well-known Iroquois story called “Chipmunk and Bear” anthropomorphizes the former as devil-may-care and the latter as unbearably arrogant. Bear proclaimed he could do anything, so Chipmunk challenged him to stop the sun from rising. Bear took the challenge and proclaimed that the sun would not rise the next morning. When the sun rose, Bear was upset, and Chipmunk laughed so hard that he collapsed from weakness. Bear pinned Chipmunk to the ground with one big paw, proclaiming “your time to walk the Sky Road has come.” Chipmunk asked for one last prayer, beseeching Bear to lift his paw just enough for him to breathe. Bear complied and Chipmunk pulled free, the tips of Bear’s claws scraping his back in the process, leaving three stripes. Chipmunk retains the stripes to this day as a reminder that one animal never makes fun of another. [11] Chipmunks also feature prominently in modern culture, epitomized by Alvin, Simon, and Theodore singing Christmas carols in the 1960s and by Chip ‘n’ Dale, Disney cartoon characters with their own comic books and a television series.

Why do chipmunks have stripes? This may seem like an innocuous question, as many other animals are striped in whole or in part. However, rodents rely on a combination of stealth, speed, and dexterity to survive in a dangerous world of predators. Almost every other rodent has fur that ranges from gray to brown, blending into the background with no contrasting colors. Stripes are noticeable and would only provide camouflage in stripe-like grassy settings such as the savannahs of African zebras or the reeds of Indian tigers. Chipmunks, however, live in the temperate deciduous forests of the Northern Hemisphere, with brown tree trunks and tan leaf litter. Research on the African striped mouse, one of the only other striped rodents, revealed that its striping is caused by a gene that interrupts the development of pigment cells, repurposed from a gene associated with cranial development, a phenomenon called co-option. Chipmunks have the same gene. The hypothesis is that both species independently evolved striping through a mutation that was random but survived in some Darwinian way. [12] Since it is unlikely that the stripes improved survival from predation (or squirrels would have them), there can be but one reason: female chipmunks prefer striped males. Bear claws can have nothing to do with it.

References:

1. Oxford English Dictionary. https://www.oed.com/dictionary/chipmunk_n?tl=true

2. New Hampshire Public Television https://www.nhptv.org/natureworks/chipmunk.htm     

3. Wood, A. “Rodentia,” Encyclopedia Britannica, Macropedia, William and Helen Benton Publishers, University of Chicago, 1974, Volume 15, pp 969-980.

4. Drew, L. I, Mammal, Bloomsbury Publishing, London, England, 2017, pp 193-224.

5. Holden, C. “The Rise of the Ninja Chipmunk”  Science, 18 May 1990, Volume 248 Issue 4957, p. 810

6. Saunders, D.  “Eastern Chipmunk”. Adirondack Mammals. Adirondack Ecological Center. 1988. https://www.esf.edu//aec/adks/mammals/chipmunk.php 

7. Pennsylvania Game Commission. https://www.pgc.pa.gov/Education/WildlifeNotesIndex/Pages/Chipmunk.aspx#

8. Patterson, B. and Norris, R. “Towards a uniform nomenclature for ground squirrels: the status of the Holarctic chipmunks,” Mammalia, 1 May 2016, Volume 80, Number 3, pp 241-251.

9. Cockerell, T. “Book Review of Revision of the American Chipmunks by Arthur Howell, USDA Publication 52, November 1929,” Science, 28 March 1930, Volume 71, Issue 1839, pp 342-343.

10. Whitaker, J. National Audubon Society Field Guide to North American Mammals, Alfred A. Knopf, New York, 1996, pp 408-438.

11. https://www.native-languages.org/legends-chipmunk.htm

12. Reuell, P. “Science of Stripes,” Harvard Gazette, 17 November 2016. https://news.harvard.edu/gazette/story/2016/11/science-of-stripes/

Horsenettle

Horsenettle flowers range from light purple to white, all with tubular yellow stamens to attract pollinators

Common Name:  Horsenettle, Bull nettle, Carolina horse nettle, Apple of Sodom, Devil’s potato, Thorn apple, Wild tomato, Poisonous potato – A nettle is a plant of the genus Urtica noted for stinging hairs. The name has been widely applied to other plants that have prickles like the horsenettle. The horse association is likely due to the fact that horsenettle plants are commonly found in pastures, like those fenced off for horses.

Scientific Name: Solanum carolinense – Solanum is Latin for nightshade. The genus name is attributed to Pliny the Elder (Gaius Plinius Secundus), a Roman military commander and naturalist in the first century AD. The origins of Solanum are unclear, but sol is Latin for sun, and there is a sunberry flower in the nightshade family. The similarity in spelling to the Latin word solamen, which means comfort, suggests another possible etymology. [1] Plants of the Solanum genus have historically been widely used as medicine for a variety of ailments and conditions. The species name references the North American colony of Carolina, where the plant was first noted, probably before the colony's division into north and south.

Potpourri: The horsenettle is a weed according to the standard definition, as it grows where humans don’t want it to grow and crowds out preferred plants. If weediness is a matter of garden aesthetics, however, an argument can be made that the five-petalled white or purplish star with five yellow elongated stamens projecting from the center has some appeal. If weediness is detrimental to food crops like soybeans and wheat awaiting harvest from farm fields, then eradication with herbicides may be justified. Horsenettle is also poisonous, to the extent that it is included in edible wild plant field guides as a cautionary measure to prevent gathering the wrong things when edible plants are sought. [2] But it is also medicinal, having been used by Native Americans and subsequently by colonizing Europeans for centuries. This, too, is not unusual, as horsenettle is a member of the Nightshade family, a rogue’s gallery of deadly plants that also includes potatoes, tomatoes, peppers, and eggplants, mainstay edibles of western cuisines. Horsenettle is a bad weed and a good medicine, and it has ugly prickles.

Another thing that can be said about weeds like horsenettle is that they are successful plants, able to flourish in marginal soils and spread outward in profusion. That is what all living things aspire to do, perpetuating their own kind following the recipe for survival by being fittest. Darwin came to recognize that competition among plants was equal to if not greater than that among animals, even as Galapagos finch beaks became his focus. As a backyard scientist with inimitable curiosity, he conducted a field test by clearing six square feet of his garden down to bare soil to observe the emergence of native weeds. He noted that “out of 357 no less than 295 were destroyed, chiefly by slugs and insects,” the detail a testimony to his thoroughness. As confirmation, he repeated the experiment on a second area of established turf, noting that “out of twenty species … nine species perished” because the “more vigorous plants gradually kill the less vigorous.” [3] It is evident that becoming a successful weed is an evolutionary feat rather than a routine event. It is also apparent that the weeds that persist and become human problems are the cream of the weed crop, exceptionally evolved with propagative efficiency.

Horsenettles are poisonous because they produce an alkaloid named solanine, the name derived from Solanaceae, the Nightshade family of almost 4,000 plant species in nearly 100 genera. Alkaloids are complex organic compounds that in many cases have physiological effects on animals, ranging from the medicinal, like morphine, to the hallucinogenic, like mescaline, to stimulants, like nicotine (the “-ine” suffix is the convention for alkaloid names). The root alkali is derived from the Arabic word for the calcined ashes of the saltwort plant and refers to molecules that are basic (pH > 7), the opposite of acidic. Alkaloids are mostly bitter, which is undoubtedly why bitter is one of the five tastebud types, the others being sweet for sugars, salt for minerals, sour for ripeness, and savory for proteins. Bitterness warns of poison, and most animals avoid bitter plants like horsenettle. The genetic code for bitterness taste sensors was retained by the survivors; individuals that lacked the sensitivity learned about bitter poisons the hard way. Until the nineteenth century, plant compounds were known only through trial and error. The alkaloid associated with poison hemlock (coniine) was the first to be synthesized, in 1886. [4]

The taxonomy of plants is based on familial similarities, and the production of a specific alkaloid is typically a shared characteristic. This is true of the nightshades (Solanaceae) just as it is of the buttercups (Ranunculaceae), poppies (Papaveraceae), and barberries (Berberidaceae). Alkaloid concentrations vary among the species of a family from plentiful to nearly nonexistent; the nightshades range from almost no alkaloid in tomatoes, potatoes, and eggplant to substantial amounts in horsenettle and tobacco. Why plants produce alkaloids is uncertain. Experiments have shown that tomatoes grafted onto tobacco stems produce no solanine; conversely, tobacco grafted onto tomato stocks does. This would indicate that solanine is not involved in growth or metabolism. That is not to say, however, that there is no purpose in a plant making a complex chemical compound, which takes energy and raw materials. There is more to life than growth, and more to genetics than the here and now. Alkaloids may be vestigial remnants that once had a purpose in the evolutionary past but are no longer relevant.

Horsenettle fruits look like small tomatoes

Alkaloids may also have a role in reproduction, as some plants produce high levels during seed and fruit formation that become depleted when the seed is ripe. Horsenettle fruits look like miniature tomatoes. Whether they are toxic or not is an open question: one source says “the berries are the most toxic when they are mature” [5] while another says “all parts of the plants, except the mature fruit, are capable of poisoning livestock.” [6] Since poisoning experiments on humans and livestock are not ethically acceptable, almost all reports of poisoning are anecdotal. It is probable that immature fruits are poisonous and mature, ripe fruits are not. This makes sense, as plants produce fruit to be eaten by animals so that the seeds are distributed in a dollop of fertilizing manure. For example, all parts of the mayapple are poisonous except the ripe fruit. Experiments with livestock that consumed ripe horsenettle fruits have shown that the seeds pass through the gut unharmed, exactly as would be intended and predicted. [7]

The relationships between animals and plants are complex, and this is particularly true when it comes to alkaloids. Ostensibly, plants produce the bitter compounds through random genetic mutation until a formulation eventually occurs that keeps animal predation in check. However, in the niche-centric ecology of survival, the opposite must also occur: animals that evolve some form of immunity to certain alkaloids in certain plants gain the advantage of abundant food avoided by competing herbivores. The example of monarch butterfly caterpillars eating milkweed that is poisonous to nearly all other animals is well known. Experimentation has shown that this is more the rule than the exception. When the Panama Canal was built in the early twentieth century, the flooding of Gatun Lake created Barro Colorado Island, where a Smithsonian field station was opened in 1924 to conduct long-term experiments on evolution in an isolated biosphere. A recent study of the 174 caterpillar species found on the island concluded that they were “picky eaters” in choosing which of over 200 toxic compounds they would consume. This “encourages diversification, as new species with new, temporarily insect-proof toxin profiles emerge.” [8] It is therefore not surprising that a fair number of insects, and some animals, eat horsenettle leaves, stems, and fruit.

The vast majority of twenty-first-century humans have plenty to eat―in many cases too much. There is no cornucopia in the wild, where life is “nasty, brutish, and short” according to Thomas Hobbes. Many insects and a few animals consume not only the horsenettle fruit but also the bitter, normally poisonous leaves and stems. A study conducted in Virginia over a period of six years (1996-2002) revealed that 31 insect species from six different orders ate horsenettle voraciously. In fact, a detailed survey of 960 horsenettle plants found that the plants were severely damaged. And it wasn’t just bugs: meadow voles also consumed horsenettle with no apparent ill effects. The most damaging insect species were those that also fed on other Nightshade family plants, including the eggplant flea beetle and the false potato beetle, in keeping with the evolutionary pathway of alkaloid tolerance. Fruits were assessed separately due to their importance in propagation as the seed-bearing component of the plant. The three species that accounted for 75 percent of fruit damage were false potato beetles, pepper maggots, and meadow voles. [9] This also lends some validity to the overall scheme of life, with plants producing sweet, tasty fruit to attract animals for seed dissemination.

As is the case with many plants listed as poisonous to animals in general and humans in particular, horsenettle has historically been used for medicinal purposes. In the eons that preceded the Renaissance in the arts and sciences, treatment of human and livestock ailments was a matter of local lore and tradition using naturally occurring substances, mostly plants. Essentially, the chemicals created by a plant for its own use and protection provided similar benefits when consumed by an animal. In the case of horsenettle, the Cherokee, indigenous to Virginia and the Carolinas where the plant originated, were its most inventive purveyors. The leaves were used internally to dispel worms (apparently worms don’t like it either) and externally to treat poison ivy (although one would think the Cherokee had figured out the “leaves of three let it be” rule). Fruits were boiled in grease to treat dogs with mange, and the seeds of the fruit were made into a sore throat gargle. [10] The Native American uses of native plants were in many cases adopted by early colonists, so that these “natural remedies” appeared in the early listings of drugs. Horsenettle was listed in the United States Pharmacopeia from 1916 to 1936 as a treatment for epilepsy and, in keeping with the “snake oil” practices of an unregulated past, as both an aphrodisiac and a diuretic. It has long since disappeared from the apothecary’s shelves and is now mostly known for its toxicity. A modern medicinal plant guide concludes with “fatalities reported in children from eating berries.” [11]

References:

1. Simpson, D. Cassell’s Latin Dictionary, Wiley Publishing New York, 1968, pp 560, 772.

2. Elias, T. and Dykeman, P. Edible Wild Plants, Sterling Publication Co., New York, 1990, p 265.

3. Darwin, C. On the Origin of Species, Easton Press, Norwalk, Connecticut, 1976, p.50.

4. Manske, R, “Alkaloids” Encyclopedia Britannica, Micropedia, William Benton Publisher University of Chicago, 1974, Volume 1 pp 595-608.

5. North Carolina State University Agricultural Extension https://plants.ces.ncsu.edu/plants/solanum-carolinense/   

6. Bradley, K. and Hagood, E. “Identification and Control of Horsenettle (Solanum carolinense) in Virginia.” http://www.ppws.vt.edu/scott/weed_id/horsenettle.PDF

7. Illinois Wildflowers. https://www.illinoiswildflowers.info/prairie/plantx/hrs_nettlex.htm

8. “One hundred years of plenitude” The Economist, Science and Technology, 6 July 2024. p 64.

9. Wise, M. “The Herbivores of Solanum carolinense (Horsenettle) in Northern Virginia: Natural History and Damage Assessment,” Southeastern Naturalist, 1 September 2007, Volume 6, Number 3, pp 505-522.

10. Native American Ethnobotany Data Base http://naeb.brit.org/  

11. Duke, J. and Foster, F. Medicinal Plants and Herbs, Peterson Field Guide Series 2nd edition, Houghton Mifflin Company, Boston, 2000, p 206.

Destroying Angel – Amanita bisporigera

The key features of the Destroying Angel are the cup-like volva at the base of the stem, the stark whiteness of the stem, cap, and gills, and the partial veil hanging from the top of the stem just below the gills under the cap.

Common Name: Destroying Angel, Fool’s Mushroom, Death Angel, White Death Cap – The virginal whiteness of all parts of the mushroom is aptly described as angelic: beautiful, good, and innocent. That it is anything but is conveyed by the addition of destroying, with its death-dealing toxicity.

Scientific Name: Amanita bisporigera – The generic name is taken directly from the Greek amanitai, probably from the Amanus Mountains of southern Turkey, where the noted Greek physician Galen may first have identified the archetype Amanita. [1] The specific name indicates that there are only two spores on each of its basidia, in contrast to the four spores typical of other basidiomycete fungi. It is virtually indistinguishable from Amanita virosa and Amanita verna, which both frequently appear as synonyms in mushroom field guides.

Potpourri: The destroying angel is a toadstool nonpareil. While the origin of the term toadstool is obscure, it cannot be a coincidence that Todesstuhl means death chair in German, the language of the Saxons who emigrated to England. Its notoriety stems not only from being one of several mushrooms that contain the deadly poisons called amatoxins, but also from its close resemblance to Agaricus campestris, the edible field mushroom that is cousin to the cultivated white button mushroom of supermarkets and salad bars. Both are white, similar in size and shape, and grow in the same habitat, primarily grass under or near trees. The destroying angel is the most dangerous of the numerous doppelgänger mushrooms: the deadly twin of a well-known and often consumed edible. Misidentification, absent knowledge of the subtle physical differences between the two, can lead to discovering the profound physiological differences, with sometimes deadly results. The field white mushroom is nourishing. The angelic white mushroom is Shiva.

The cup at the bottom of the stem is the volva, the bottom half of the universal veil.

The key features that distinguish the destroying angel from similar mushrooms are straightforward if you know what to look for. First and foremost is the volva (Latin for a covering like a husk or shell), the cuplike structure at the base of and surrounding the stem or stipe. The volva is frequently hypogeal, i.e., underground and out of sight, which means that it can only be positively identified by digging up the soil around the base of the mushroom. [2] However, the standard and preferred practice among mushroom gatherers is to use a knife to cut cleanly through the stem at the base so that the mycelium of the fungus, from which the fruiting-body mushroom grows, is not seriously disturbed. The procedure is analogous to gathering apples from an apple tree: the fungal mycelium and the apple tree survive to produce new mushroom spores and fruit seeds for future generations. Using the standard harvesting technique, it is easy to see how a volva below the cut would go unnoticed. White mushrooms must be dug out to the roots to avoid the dilemma of the death mushroom.

The only way to be certain that you have a puffball and not a Destroying Angel is to cut it in half.

The volva is the bottom part of what is known as a universal veil, a thin membrane that envelops the mushroom during the subterranean growth phase to protect the gills and the spores they hold from damage. The universal veil is a characteristic of all mushrooms in the Amanita family. While a few other mushrooms have a universal veil and its volva (such as the genus Volvariella, named for this characteristic feature), it is a reliable identification feature for the destroying angel. All spore-bearing mushrooms are produced by the underground fungal mycelium as an ovoid called a primordium. Once it matures and environmental conditions are promising (as after rain), the extension of the stem tears the universal veil around its circumference to expose the cap and gills of the fruiting body for spore dispersal. The volva is the lower part of the “eggshell” that remains attached to the bottom of the stem. Prior to upward extension, the destroying angel looks like a white egg, similar in appearance to a puffball, another type of edible fungus with which the destroying angel can be confused. Some field guides include a picture of it in the puffball section to emphasize the danger of mistaken identity. [3] The only way to be absolutely sure is to cut the fungus lengthwise to reveal the cap and gills within.

Many mushrooms have what is known as a partial veil, which also helps prevent damage to the reproductive gill surface. It is partial in that it covers only the underside of the cap, extending from the cap’s edges to the stem. When the cap expands fully, the partial veil also tears, in many cases leaving some remnants around the edges and a ring called an annulus attached to the stem just below the cap. In some cases, the partial veil remnant can be seen hanging like a draped clerical mozetta at the top of the stem. However, this annular ring is not well connected, and in many mushrooms with partial veils there is no remnant at all. Most Amanita family mushrooms, the destroying angel included, have both universal and partial veils, with a volva at the bottom and a ring around the stem. The double protection afforded the gills presumably contributes to propagative success; Amanitas are among the most prolific of all mushroom families. Partial veils and the remnant annulus are also characteristic of the Agaricus family, which includes the edible field mushroom Agaricus campestris. Agaricus mushrooms, however, do not have universal veils with the telltale volva.

The second prominent feature of the destroying angel is the stark whiteness of the cap, stem, and gills, described as having a “strange luminous aura that draws the eye” that is “easily visible from one hundred feet away with its serene, sinister, angelic radiance.” [4] The cap is smooth and usually described as viscid, or tacky when wet. This distinguishes it from most other species in the Amanita genus, which have warty patches on the cap from the dried-out and cracking universal veil, like the white warts dotting the bright red cap of the iconic fly agaric (Amanita muscaria). The glowing purity of the whiteness is a reliable feature for initial field identification. Confirming the find against a picture or drawing of a white mushroom with a volva and annular stem ring in a field guide is another matter. One guide provides only Amanita verna, the fool’s mushroom, prevalent only in spring (vernus in Latin); the common name implies that it fools the observer with its deception. [5] A second field guide provides both A. verna as the spring destroying angel and Amanita virosa (virosus is Latin for poisonous) for mushrooms that appear in the fall, with only a passing reference to A. bisporigera. [6] DNA sequencing of fungi has had a profound impact on the eighteenth-century Linnaean system, which based taxonomy on physical similarity. It has been shown that all destroying angels of North America are A. bisporigera (with one additional species, A. ocreata, in California) and that A. verna and A. virosa are found only in Eurasia. Destroying angel serves as a universal common name for all of the white, volva-bearing species.

The destroying angel is one of the deadliest mushrooms known. According to one account, “misused as a cooking ingredient, its alabaster flesh has wiped out whole families.” [7] The toxic chemicals are called amatoxins (from the generic name Amanita), peptide molecules made up of eight amino acids in a ring called a cyclopeptide, with a molecular weight of about 900. The death-dealing amatoxin variant is alpha-amanitin, which disables RNA polymerase, a crucial metabolic enzyme. RNA polymerase transcribes the DNA blueprint, creating the messenger RNA that transports the codon amino acid recipe used to make the proteins on which all life depends. The ultimate result is rapid cell death. The gastrointestinal mucosa cells of the stomach, the hepatocytes of the liver, and the renal tubular cells of the kidneys are the most severely affected because they have the highest turnover rate and are rapidly depleted. The liver is most at risk because the alpha-amanitins absorbed by the hepatocytes are excreted with the bile and then reabsorbed. The initial stage of amatoxin poisoning starts about ten hours after ingestion; the gastrointestinal mucosa cells are the first to be affected, resulting in forcible eviction (aka vomiting) of the intruding poisons. There follows a period of several days of calm as the stomach cells recover somewhat before the storm of hepatic and renal debilitation. The third and final stage can in severe cases lead to the crescendo of convulsions, coma, and death. The lethal dose for 50 percent of an exposed population, or LD50, is used by toxicologists as a benchmark for relative virulence. The LD50 for alpha-amanitin is 0.1 mg/kg. A 70 kg adult will have a 50-50 chance of survival with a dose of 7 milligrams, the amount of alpha-amanitin in one small destroying angel. [8]
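The LD50 arithmetic above can be sketched in a few lines. This is only a back-of-envelope illustration; the function name is hypothetical, and the 0.1 mg/kg figure is the one cited in the text.

```python
def median_lethal_dose_mg(ld50_mg_per_kg: float, body_mass_kg: float) -> float:
    """LD50 is expressed per kilogram of body mass, so the dose at which
    half of an exposed population would be expected to die scales
    linearly with body mass."""
    return ld50_mg_per_kg * body_mass_kg

# alpha-amanitin LD50 of 0.1 mg/kg for a 70 kg adult yields about 7 mg,
# roughly the amount contained in one small destroying angel.
print(round(median_lethal_dose_mg(0.1, 70), 3))
```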

The North American Mycological Association (NAMA) received a total of 126 reports of amatoxin poisoning over a period of thirty years, about four annually. The fatality rate has historically been on the order of thirty percent, attributed to liver and/or kidney failure; this number has improved over the last several decades to about five percent due to a better understanding of the physiological effects of amatoxins combined with aggressive therapy. The basic tenet of the treatment is to reduce the toxin concentration in the blood serum as rapidly as possible. Gastric lavage is used if the ingestion was recent enough, followed by a thorough purging using emetics to induce vomiting and cathartics to induce evacuation of the bowels (essentially the same expulsion of the poison that the gastrointestinal mucosa cells attempt on their own). Perhaps the most important therapy is the use of activated charcoal, as amatoxins have a high affinity for adsorption on its surface. Although there is no proven antidote, intravenous injections of penicillin have been used with some apparent benefit. A French physician named Bastien developed a three-part procedure using intravenous injections of vitamin C and two types of antibacterial drugs supplemented with penicillin to treat 15 cases successfully. To prove its efficacy unequivocally, he conducted the ultimate experiment, eating 70 grams of Amanita phalloides, the death cap cousin of the destroying angel, and using the protocol on himself. [9] The most promising new treatment is silibinin, an extract of the blessed milk thistle (Silybum marianum), which is sold commercially as Legalon by a German pharmaceutical company. Liver transplant was once considered the last resort for amatoxin poisoning, but that may no longer be necessary. [10]

The destroying angel is not the only mushroom that produces amatoxin, nor is amatoxin the only substance produced by fungi that is inimical to humans. The identification of fungal toxins and the characterization of their imputed symptoms are among the most empirical branches of forensic science; the facts are based almost entirely on anecdote. The identification of the mushroom that caused the condition under evaluation is usually a matter of conjecture, since the victim has eaten the evidence. To add to the confusion, the alleged offending mushroom may have been consumed with a mixture of other wild foods and fungi gathered over a wide area in obscure nooks. The dearth of fungal knowledge in the medical community contributes to the uncertainty. Poison Control Centers (PCC) were established after World War II to deal with the proliferation of chemicals, serving as clearinghouses for information about poisons and their antidotes and treatment protocols. [11] Over the ensuing years, mushroom poisonings accounted for only one half of one percent of all PCC reports (1 in 200). Of those reported, only 10 percent included any information about the mushroom. Faced with such limited data, NAMA established a toxicology committee in 1985 and began to supplement the PCC data with a separate database using the input of experienced mycologists and mushroom aficionados. The result to date is a more comprehensive accounting, with fairly reliable identification of 80 percent of the mushrooms involved in poisonings. [12] This is a good start but has done little to assuage the belief of the general public that most if not all mushrooms are toadstools and that eating wild mushrooms is a fool’s errand, sometimes literally.

One example suffices to point out the irrational fear of amanita mushroom poisoning and the broader category of mycophobia. In 1991, the venerable French reference Petit Larousse Encyclopédie was recalled because the deadly amanita article lacked the appropriate symbol for poison. But this was not enough, since almost 200,000 copies had already been sold. Several hundred students were hired to visit 6,000 stores throughout Europe and Canada to affix stickers with the appropriate symbol for poison on the offending pages and to append a notice on the cover declaring the book a new edition. [13] History has impugned the mushroom as the source of the poison that dispatched any number of notables, among them Claudius, the fourth Roman Emperor. The perpetrator is alleged to have been his fourth wife Agrippina, who wanted her son Nero to succeed to the throne. The death was recounted by the philosopher Seneca the Younger in December 54 CE, only two months after the event occurred. According to his account, it happened quite quickly, the onset of illness and death being separated by only about an hour. [14] The mushroom assassination of Claudius is almost certainly apocryphal, as deadly mushrooms are relatively slow to act; those that act rapidly generally cause gastrointestinal distress that is rarely fatal. Hyperbole is not out of the question. One recent account attributes the disappearance of the Lost Colony of Roanoke to the relocation of the starving colonists to the island of Croatoan, where, gorging themselves on the mushroom bounty that they found there, they died a horrible death of grotesque contortions. [15]

References:

1. McIlvaine, C. One Thousand American Fungi, Dover Publications, New York, 1973 pp 2-5

2. Roody. W. Mushrooms of West Virginia and the Central Appalachians, The University Press of Kentucky, Lexington, Kentucky, 2003, pp 62-63.

3. Lincoff, G. National Audubon Society Field Guide to North American Mushrooms, Alfred A. Knopf, New York, 1981. pp 551-552.

4. Russel, B. Field Guide to Wild Mushrooms of Pennsylvania and the Mid-Atlantic, The Penn State University Press, University Park, Pennsylvania, 1935, pp 67-69.

5. McKnight, K and McKnight, V.  Peterson Field Guide to Mushrooms of North America, Houghton Mifflin Company, Boston, 1987, pp 238-239, Plate 27.

6. Pacioni, G. (Lincoff, G, US editor) Guide to Mushrooms, Simon and Schuster, New York, 1981, pp 76-77.

7. Money, N. Mr. Bloomfield’s Orchard, Oxford University Press, Oxford. 2002 p 151

8. Hallen, H. et al. “Gene family encoding the major toxins of lethal Amanita mushrooms”. Proceedings of the National Academy of Sciences. 27 November 2007 Volume  104  Number 48  pp 19097–19101

9. Kendrick, B. The Fifth Kingdom, Focus Publishing, Newburyport, Massachusetts, 2000, pp 319-321.

10. Beug, M. in Fungi Magazine Volume 1 Number 2 Spring 2008. Beug is a Professor Emeritus at Evergreen State College and a member of the NAMA toxicology committee.

11. Wyckoff, A. “AAP Had First Hand in Poison Control Center” AAP News Sept. 2013 http://www.aappublications.org/content/34/10/45

12. Beug, M., et al. “Thirty-Plus Years of Mushroom Poisoning: Summary of the Approximately 2,000 Reports in the NAMA Case Registry” McIlvainea, Volume 16, Number 2, Fall 2006, pp 47-68.

13. Schaechter, E. In the Company of Mushrooms, Harvard University Press, Cambridge, Massachusetts, 1997, pp 210-211.

14. Marmion, V. and Wiedemann, T. “The Death of Claudius” Journal of the Royal Society of Medicine, Volume 95, May 2002, pp 260-261.

15. Spenser, S. “The First Case of Mass Mushroom Poisoning in the New World” Fungi Magazine, Volume 11, Number 4, Fall 2018, pp 30-33.

Spotted Lanternfly

The adult spotted lanternfly has a head and eyes similar to the closely related cicada

Common Name: Spotted Lanternfly, Chinese blistering cicada, Spot clothing wax cicada – The term lanternfly is generally applied to several families of planthopper insects even though there is no known species that emits light. Most planthoppers are small insects that are colored to blend into the backdrop of greens and browns. This species of lanternfly is distinctive in having prominent spots on its folded forewings.

Scientific Name: Lycorma delicatula – Lyco derives from lykos, the Greek word for wolf, and could possibly be a reference to a color or texture of the body or wings. A more plausible etymology is a derivative of lychnus, Latin for lamp. The species name means dainty or nice. It could thus be construed as “delicate lamp,” consistent with the common name.

Potpourri: The spotted lanternfly is the latest invasive insect to colonize North America. It has taken its place alongside Japanese beetles, gypsy moths (spongy moths since 2022), and woolly adelgids in the rogues’ gallery of unwelcome invertebrates. The invasive species epidemic is the unintended yet almost inevitable result of global trade in shipping containers that pass from continent to continent with almost anything inside, in numbers that preclude anything close to universal screening. The spotted lanternfly has followed the invasive biological playbook by reproducing geometrically, eating everything in sight, and taking advantage of an environment devoid of any serious predation. It is unique among insect pests in having been preceded by its favorite host plant, Ailanthus altissima or tree of heaven, which was imported from Asia and intentionally planted for its robust tenacity and rapid growth. It was the tree of Betty Smith’s iconic “A Tree Grows in Brooklyn.” It became the tree that grows everywhere in North America, a ready source of food for its Asian lanternfly cohort.

The spotted lanternfly is a planthopper, a group of mostly tropical, inconspicuous insects that are easily confused with treehoppers, leafhoppers, and froghoppers in the “endless forms most beautiful” of the class Insecta. Because they extract the liquid nutrients produced by plants through a hollow beak, literally sap-sucking, they are generally placed in the order Hemiptera. These are the true bugs, as opposed to the more common use of the word bug for almost any insect, like the ladybugs that are actually beetles. Hemiptera is Greek for half wing, referring to a forewing that is solid at the base and membranous at the tip. Some entomologists separate those bugs whose wings are membranous from base to tip into a separate suborder, Homoptera, meaning same wing. The homopterans consist of three broad groups: cicadas, aphids, and planthoppers. [1]

As a close cousin of aphids and cicadas, it is easy to understand why there might be a problem with spotted lanternflies. Aphids are perhaps the most economically damaging insects in the global temperate regions that constitute the breadbasket for most of humanity. Cicadas are masters of reproduction, producing millions of offspring in seventeen-, thirteen-, or single-year cycles. The spotted lanternfly reproduces with cicada fecundity and sucks sap with aphid voracity. Since first appearing in Pennsylvania in 2014, they have spread with Malthusian certainty over the mid-Atlantic states, to the extent that there is a hue and cry for some form of countermeasure before epidemic populations ensue. This will be difficult if not impossible, since they feed on a wide variety of plants, are not palatable to most insect predators, deposit mounds of sugary excrement called honeydew that attract other pests and pathogens, and pass from the scene only after having mated and laid massive egg deposits that are well protected and hidden by a waxy overcoat. [2]

The bright orange contrasting wing bars may be an aposematic warning of toxicity to birds.

Even though spotted lanternflies prefer the tree of heaven, they are not finicky. They have been found feasting on over 100 different hosts from 33 plant families that include but are not limited to vines, ornamentals, specialty plants, and fruit trees. The list of plants narrows considerably as they grow and molt. Like all insects, lanternflies have a life cycle based on metamorphosis. They overwinter as eggs that hatch in spring as black, white-spotted nymphs that must extract plant sap to survive. As they grow over the next several months, they literally become too large for the original exoskeleton and must molt several times, each molt yielding a new, somewhat larger body called an instar. The first three instars are similarly diminutive and inconspicuous nymphs that move to ever larger plants to provide the additional nutrients needed for their larger-sized appetites. The fourth instar marks a radical change in appearance, and the adult is metamorphosed by evolution’s magical genetics into a much larger body with moth-like wings that are brightly colored in stark contrast. It is the adult spotted lanternfly that is the nemesis of vineyards and orchards. [3]

Brightly colored defenseless animals seem a contradiction. The goal of every living thing is to reproduce to perpetuate the species; getting eaten before mating and oviposition leads to genetic extinction. Many animals hide from predators by adapting their coloration to match the colors and textures of their environment, a strategy called crypsis. However, if an animal is poisonous to its predators, it is advantageous from the evolutionary perspective to make that clear in advance; it does not help if the poison is only detected after the insect’s body is mangled. Bright coloration that alerts predators to potential toxicity is called aposematism. The monarch butterfly, which consumes the poisonous milkweed plant, is the classic example. And this, apparently, is where the tree of heaven comes in. A simple field test of this predator alert effect on birds was conducted using two different batches of suet, one made with crushed spotted lanternflies that had eaten Ailanthus altissima and one with spotted lanternflies that had not. Birds preferred the latter, demonstrating that consuming tree of heaven was effective in protecting the spotted lanternfly. [4] That they actually evolved their distinctive bright orange wing bars to indicate toxicity is correlated but not proven. It has been suggested that the closed forewings are cryptic so that the spotted lanternfly can hide on tree trunks, while the aposematic flash of orange occurs when they are under attack by a pecking bird.

If the spotted lanternfly ate only A. altissima, that would be a good thing. Were it not for its other inimical activities, it could even be considered a biological control against the tree of heaven, which has invasive problems of its own. This is in part because of its chemistry, producing cytotoxic alkaloids that suppress the growth of other plants. One of its chemicals, named ailanthone for the genus, reduces the growth of other plants by 50 percent at a concentration of only 0.7 ppm. [5] It is not known which of the tree’s secondary metabolites are employed by the spotted lanternfly, but there is some serious chemistry going on. The spotted lanternfly has been used in traditional Chinese medicine since the twelfth century to reduce swelling, presumably due to its tree-derived toxins. Spotted lanternflies have become a biological bane due primarily to their second favorite food, the plant sugars fructose and sucrose that are especially concentrated in the genus Vitis, which includes the various grapes of the global wine industry. [6] In North America, there are 40 other known hosts, including black walnut, tulip tree, oriental bittersweet, multiflora rose, and hops, a key beer flavoring ingredient. [7]

Sap-sucking insects require the same three basic inputs necessary for all plants, animals, and fungi: carbohydrates for energy, lipids for membranes, and amino acids for proteins. Sap is high in carbohydrates but low in protein, so much more sap must be extracted than is needed for carbohydrate energy in order to get enough protein for growth. The result of the extra input of sugar is more output of insect excrement, or frass. The high-sugar frass produced by sap-sucking insects is fittingly called honeydew. Some ant species herd and protect aphids to collect honeydew as food for their larval offspring. The honeydew of spotted lanternflies becomes a social problem due to their numbers and the volume excreted. The sticky goo builds up on whatever is underneath, which may include things like picnic tables and lawn furniture that become stained with mold. Honeydew is also attractive as a food source for stinging insects like yellow jackets that are disruptive to outdoor human activities. [8]

The oothecae are almost impossible to see among the ridges of tree bark

The global spread of spotted lanternflies is mostly due to the coating that they apply to their eggs, which both conceals and protects them. Each gravid female lays about 40 eggs and then secretes a brownish, waxy substance to cover them. The end result is an ootheca, a thick-walled egg case similar to that made by cockroaches. While most insects lay eggs on host plants that will serve as the first meal for the emergent larvae or nymphs, spotted lanternflies will use almost any available surface, with a preference for the vertical. In most cases, the oothecae are further protected by placement in obscure locations that range from tree bark fissures several meters above the ground to stone monoliths and building walls. Once the process is complete, the oothecae are almost impossible to find absent a detailed inspection, which can only be effective if you already have a good idea where to look. As the egg casing ages, it looks more and more like dried mud, making identification even more challenging. It is believed that the first spotted lanternflies arrived in Pennsylvania as an ootheca attached to a shipment of landscaping stones, almost certainly sent in a shipping container from Asia. [9]

Control and containment of the spotted lanternfly is evolving in concert with its radiating spread outward from its point of origin with concomitant economic damage. Estimates at this point are speculative as they are based on extrapolation of local damages in infested areas. Pennsylvania, where the spotted lanternfly first appeared, may see damages of up to $100 million annually due to crop loss. If spotted lanternflies spread to the Pacific Northwest, losses to cherry, wine grape, and hops crops in Washington are estimated at about $4 billion.  Two spotted lanternflies have already been found in Oregon on packing containers and ceramic pots that both came from Pennsylvania. They were dead, but egg cases cannot be too far behind. [10]

The three basic methods of exterminating pest insects are mechanical, chemical, and biological. Mechanical means range from the satisfying but fruitless attempts to find and squash the bugs to affixing some form of baited trap to host trees. In the case of lanternflies, glue-coated sheets, some with attractant pheromones, have been tried with limited success, but with the caveat that bycatch of birds and butterflies is always a concern. As discussed above, finding and scraping egg masses is not feasible, and getting rid of its preferred tree of heaven hosts, which are the dominant trees along many miles of US highways, would be nearly impossible.

This leaves chemical and biological agents as the only two viable means to combat the spotted lanternfly invasion. Pesticides like neonicotinoids (Dinotefuran is prescribed by federal agencies) and organophosphates are effective, but they are general agents that have been implicated in reducing beneficial insect populations like honeybees. The other problem with pesticides is the development of resistance through natural selection. There will almost always be individual insects that are resistant to a pesticide due to the randomness of mutation; the resistant mutants survive the poison to propagate their genes, replacing those that were killed by it. A second issue is the economic cost of using pesticides over large areas, relegating most applications to field-sized plots instead of the county-sized expanses that are necessary for extirpation. Vineyards in Korea that were sprayed with pesticide were rapidly repopulated by spotted lanternflies from nearby forested areas. [11]

Biological controls are more promising. A lot of attention has been paid to the role of birds, which are put off by the toxins that spotted lanternflies extract from the tree of heaven. It was even suggested that if 70 percent of the spotted lanternflies could somehow be kept from their favorite food, then birds would do the rest. This was proffered with the caveat that “we need to do everything we can,” even if that falls short. [12] The best place to look for biological controls is in the country of origin, where the local ecosystem keeps the target species in check. One promising possibility is a wasp native to China that parasitizes up to 80 percent of spotted lanternflies. However, introducing an alien predator species is a cumbersome process due to the need to test both its efficacy on the target species and its possible harmful effects on other species. Nevertheless, biological control is in all likelihood the only way to prevent the dystopia of spotted lanternfly proliferation over the long term.

References:

1. Marshall, S. Insects, Their Natural History and Diversity, Firefly Books, Buffalo, New York, 2006, pp 91-104.

2. “Spotted Lanternfly Pest Alert” (PDF). USDA-APHIS. https://www.aphis.usda.gov/sites/default/files/alert-spotted-lanternfly.pdf

3. Barringer, L. “Lycorma delicatula (spotted lanternfly)”. www.cabi.org. 17 December 2021

4. Kranking, C. “Birds Are One Line of Defense Against Dreaded Spotted Lanternflies” Audubon Magazine, 17 September 2021. https://www.audubon.org/news/birds-are-one-line-defense-against-dreaded-spotted-lanternflies    

5. https://hikersnotebook.blog/flora/deciduous-trees-and-shrubs/ailanthus-tree-of-heaven/

6. Dara, S.; Barringer, L.;  Arthurs, S. (2015). “Lycorma delicatula (Hemiptera: Fulgoridae): A New Invasive Pest in the United States”. Journal of Integrated Pest Management. 20 November 2015 Volume 6 Number 1. pp 1–6. https://academic.oup.com/jipm/article/6/1/20/2936989?login=false

7. Murman, K, et al. “Distribution, Survival, and Development of Spotted Lanternfly on Host Plants Found in North America”. Environmental Entomology. 31 October 2020 Volume 49 Number 6. pp 1270–1281. https://academic.oup.com/ee/article/49/6/1270/5947504?login=false

8. Barringer, op cit.

9. Urban, J.; Leach, H. “Biology and Management of the Spotted Lanternfly, Lycorma delicatula (Hemiptera: Fulgoridae), in the United States”. Annual Review of Entomology. 23 January 2023. Volume 68 Number 1 pp 151–167. https://www.annualreviews.org/content/journals/10.1146/annurev-ento-120220-111140

10. Department of Agriculture. “Pest Alert: Spotted lanternfly Lycorma delicatula”. Oregon Department of Agriculture Fact Sheets and Pest Alerts    https://www.oregon.gov/oda/shared/Documents/Publications/IPPM/SpottedLanternflyPestAlert.pdf              

11. Dara, S., et al., op cit.

12. Grandoni, D. “Squashing lantern flies (sic) isn’t enough; it might be time to send in the birds” Washington Post 7 March 2024.

Timber Rattlesnake

The coiled position is not necessarily for an imminent strike. It is mostly a defensive posture.

Common Name: Timber Rattlesnake, Canebrake Rattlesnake, Banded Rattlesnake, Black Rattlesnake, Eastern Rattlesnake – The name ‘timber’ describes the snake’s preferred habitat of rocky hills and forest uplands. The species is one of several that employ the rattle’s auditory warning.

Scientific Name: Crotalus horridus – A crotalum is one of a pair of small cymbals that were used in antiquity to make a clicking noise; the castanet is a vestige. The generic name derived from it refers to the clicking noise made by the segments of the rattle. The species name horridus, in spite of its seeming etymological association with ‘horrid,’ has nothing to do with either the human perception of the snake or its venom. Horridus is Latin for ‘rough’ or ‘bristly’ and refers to the rough appearance of the scales due to their having a raised keel-like edge, in marked contrast to the smooth skin of many snakes. [1]

Potpourri: As the only large and relatively common venomous snake in the Appalachian Mountains, the timber rattlesnake evokes both existential fear and an abiding respect from all who cross its path. There is certainly justification for these perceptions: its size ranges from 3 to 5 feet (the record is 6 feet); its venom is exuded in copious quantities through long and penetrating fangs; its potentially lethal strike is launched at lightning speed that is almost too fast for the eye and certainly too fast for the reflexes; and its bite can be deadly to humans if untreated. [2] However, the incidence of timber rattlesnake strikes on humans is vanishingly small, resulting in one or two fatalities per decade nationwide; the most common cause is handling snakes as part of a religious ceremony. [3] The reason for the disparity between the potential for injury and the incidence of injury is that the timber rattlesnake is docile and will only strike if repeatedly provoked and threatened.

Ophidiophobia, the fear of snakes, is the most common type of herpetophobia, the fear of reptiles in general. This fear is innate and almost certainly the result of evolution, possibly extending back in time to the earliest mammals, huddling in dark recesses to escape predatory dinosaurs. [4] Fear of snakes was reinforced in primates that evolved as tree dwellers, where constricting snakes followed them in search of a meal. Recent research on macaque monkeys in Japan revealed that a region of the brain unique to primates called the pulvinar was especially sensitive to the sighting of snakes. Furthermore, monkeys that were raised in captivity without prior exposure to snakes displayed fear on first encounter. [5] Culturally, the serpent became a symbol of pernicious influence, chosen by the writer of Genesis as the tempter of Eve. Consumption of the fruit of the tree of knowledge led to expulsion from the Garden of Eden and God’s proclamation that the serpent would always crawl on its belly and eat dust. [6] Fear of snakes is embedded in the brain’s amygdala along with the fight or flight response that triggers panic action. However, cognition based on information stored in memory can override fear, a point enunciated by Franklin Roosevelt in his first inaugural address: “The only thing we have to fear is fear itself.”

The timber rattlesnake can be up to 6 feet long.

The timber rattlesnake is a consummate predator, well endowed with both the sensory tools and the physical agility to sustain its wholly carnivorous nutritional needs. As a pit viper, it has the namesake opening, or pit, just below the feliform vertical eye slit; the pit is the primary means of detecting prey. The sensory organ in the pit is a heat receptor capable of detecting a 1°C difference at a range of about one foot. This is both necessary and sufficient to detect and engage its warm-blooded prey during the preferred nocturnal forays, when the cooler air accentuates the temperature differential. The strike is executed by the reflex-quick straightening of the lateral muscles to transition from either an S-shaped or coiled stance to full-length extension; the fanged triangular head is projected about half of the body length. In other words, a 4-foot snake can strike at 2 feet. Contrary to popular folklore, the coiled position is not a strike prerequisite, though the snake will typically assume this posture in anticipation of a mammal’s traverse. Following a successful strike, the olfactory sensors on the forked tongue are used to locate the head from the emanations of the victim’s mouth so that ingestion can begin there, a preference that allows any attached appendages to fold back along the body during swallowing. Digestion is almost total; the gastric fluids of the rattlesnake dissolve everything including the bones, adding about 40 percent of the snake’s body weight annually. The prey consists almost entirely of small mammals. A 1939 survey in George Washington National Forest, which included the capture and evisceration of 141 timber rattlesnakes, found that the stomach content was 38% mice, 25% squirrels and chipmunks, 18% rabbits, 13% birds, and 5% shrews; one had eaten a bat. [7]

Two male snakes “wrestling” to win the heart of a nearby female.

Sex is vitally important for nearly every living thing on planet Earth. While perhaps historically overemphasized and of late deemphasized among humans, it is the essence of evolution. The random genetic mixing that is the result of the male-female “interaction” to form the gamete is how speciation (including our own) occurred. This is equally true for timber rattlesnakes, who take sex pretty seriously. While many snakes spend the cold winter months in a communal burrow called a hibernaculum, they range separately in search of prey for the remainder of the year. About every second year, females over the age of five years will release pheromones in the spring or early summer as they ply the leaf litter pathways of their home turf. The scent is strong enough to attract any and all males that happen by. The stage is then set for one of the most intriguing contests for the right to breed among all animals. Lacking arms, legs, and claws with only venomous fangs for teeth, what amounts to the timber rattlesnake version of an arm-wrestling contest ensues. Rising upward in intertwined arabesque coils, they try to push each other over. The prize goes to the one with the most stamina as the loser retreats dejected from the field. On first encountering the snakes depicted in the photograph, I was convinced that it was a male-female pre-coital ritual until learning of its even more surprising purpose from an expert. [8] Following insemination with one of the successful male’s two copulatory organs called hemipenes, the female gives birth to about a dozen young who are immediately on their own to face the world armed only with venom and slithering stealth. Most will not survive.

The signature rattle is a curiosity in its constitution and a conundrum from the standpoint of how it may have evolved; rattlesnakes are found only in the Western Hemisphere. The rattle starts as a bell-shaped horny protuberance called a button at the end of the tail. Every time the snake molts, which ranges from one to five times a year according to age and growth rate, the caudal end remains attached to form a segment of the rattle; the rattle grows in length by one segment for each molt. While it would theoretically be possible to count the number of times that the snake had shed its skin by counting the segments that constitute the rattle and thereby estimate the snake’s age, in actual practice this is unreliable. The rattle is loosely attached at each of the segments, so the assembly is subject to periodic breakage; it is not unusual to find a detached rattle segment on the trail. The conundrum associated with the rattle is that the rattlesnake employs both aposematism and crypsis simultaneously. The purpose of the rattle is ostensibly to ward off an attack by a potential predator, an aposematic behavior. However, its primary predators – which include hawks, owls, coyotes, and foxes – are apparently not put off by the warning of the rattle. King snakes, the preeminent rattlesnake predators, are immune to the toxins of the rattlesnake. The defensive behavior of rattlesnakes in the presence of a king snake does not involve the rattle in any way; the midsection is arched with the extremities held to the ground in an attempt to club the attacker. Experiments have revealed that the smell of the king snake triggers this response. [9]

Charles Darwin was also perplexed by the peculiar rattle of the American snakes. He wrote that “Having said thus much about snakes, I am tempted to add a few remarks on the means by which the rattle of the rattle-snake (sic) was probably developed. Various animals, including some lizards, either curl or vibrate their tails when excited. This is the case with many kinds of snakes. Now if we suppose that the end of the tail of some ancient American species was enlarged, and was covered by a single large scale, this could hardly have been cast off at the successive molts. In this case it would have been permanently retained, and at each period of growth, as the snake grew larger, a new scale, larger than the last, would have been formed above it, and would likewise have been retained. The foundation for the development of a rattle would thus have been laid; and it would have been habitually used, if the species, like so many others, vibrated its tail whenever it was irritated. That the rattle has since been specially developed to serve as an efficient sound-producing instrument, there can hardly be a doubt; for even the vertebrae included within the extremity of the tail have been altered in shape and cohere. But there is no greater improbability in various structures, such as the rattle of the rattle-snake.” [10] The improbable evolution of the rattle had to have a provenance unique to the Americas; there are no rattlesnakes anywhere else. There must therefore have been a predatory threat to the snakes that created the evolutionary rattle warning behavior. It was not human predation, as humans did not cross the Beringia land bridge from Eurasia until about 10,000 years ago. The only reasonable explanation is that there was a snake predator among the extinct megafauna of the pre-human Tertiary Period and that the rattle developed as an effective tool to ward off that predator, presumably as a signal that the venom was, while perhaps not deadly, certainly unpleasant.

The black variant with keeled scales to prevent reflection and improve stealth.

Timber rattlesnakes, for the most part, are colored with earth-tone banded markings to blend with the browns and blacks of the forest; this is the camouflage of crypsis, which can be employed to deceive prey but is equally useful as concealment from predators. However, it should be noted that there are at least two different cryptic color variants: the first is the canebrake rattlesnake, once considered a separate species, which is more brightly colored to match its cane-field habitat; the second is a much darker, predominantly black variant, an adaptation to promote nocturnal hunting. The stealth of coloration is enhanced by the snake’s keeled scales, each having a central ridge that interrupts the otherwise scintillating sheen of reflectance seen on snakes with smooth, unkeeled scales (this roughness is the etymology of the species name horridus). The overall effect is that the snake is well concealed from its prey, but also from its predators. The fundamental question remains―why did the rattle evolve?

The venom of the timber rattlesnake poses a different evolutionary question that has prompted some hypotheses as to its origins. Darwin offered “It is admitted that the rattlesnake has a poison fang for its own defense, and for the destruction of its prey” but gave no specifics as to its likely evolutionary origin. [11] Current thinking is that snakes evolved as large tree-dwelling constrictors some 30 million years ago. When the climate changed so as to promote the grassy savannahs, the snakes became smaller and ground-dwelling; some evolved a venomous chemistry for their saliva that promoted hunting and therefore their fitness to survive. Snake venom evolved as a complex chemistry of protein synthesis; depending on the species of snake, it may have a predominantly neurological effect or a predominantly vascular effect. Viper venom is of the latter category; its most obvious and potentially fatal symptom is slowing of blood circulation due to coagulation. From the standpoint of its intended small-mammal prey, the venom achieves its objective of immobilization attendant to consumption. While the venom can be and to some extent is used against predators, it is not very effective. The king snake is immune to rattlesnake venom, and other predators are either unaffected or able to avoid its application. One firsthand account reports that a wild turkey held down with both feet a timber rattlesnake that was “repeatedly striking at the bird’s long, armored legs and folded-in wings, but to no avail.” The turkey eventually killed the snake by cutting it through at the neck and then ate it. [12] Humans are another matter.

In any given year, approximately 45,000 people are reported to have been bitten by snakes in the United States; 6,000 of these bites are from venomous snakes and fewer than 10 result in fatalities – due almost entirely to the eastern and western diamondback rattlesnakes. A larger number of domesticated animals are also bitten, though the numbers are of questionable merit as reporting is arbitrary and not required by law. The symptoms of snakebite vary according to the size of the snake and the amount of envenomation; about one fifth of venomous snakebites are inflicted without the transfer of venom. This may be due to a dearth of venom after a recent kill or to intentional forbearance in order to preserve the venom for a future kill. The immediate symptoms of envenomation by a rattlesnake include intense pain at the point of penetration, edema, and hemorrhaging. As the venom spreads through the body in the first few hours, the swelling and discoloration become more pronounced and systemic cardiovascular distress causes weakness, nausea, and a diminution of the pulse to near imperceptibility. In the worst cases, a comatose state and death can result. In the twelve-to-twenty-four-hour period that follows, the affected limb suppurates and swells enormously, a condition that can also lead to cardiac arrest. In most cases, the symptoms abruptly cease after about three days as the body neutralizes the toxins. [13]

What to do in the case of a venomous snakebite is and always has been a matter of considerable conjecture. Traditionally (the cowboy-hero western paradigm), a tourniquet is applied between the bite and the heart to arrest the flow of blood-borne toxin, the area of fang penetration is cut open to afford better access, and oral suction is applied to extract the venom. Snakebite kits were (and probably still are) sold with a razor blade and a suction cup to carry out this procedure with purported efficacy. According to current thinking, the cut-and-suck method does not work very well, though human trial data is probably nonexistent. But the logic against it is compelling. Applying a tourniquet concentrates the venom in a smaller area, where the damage will be more profound; it is actually better to allow the body to dilute the venom and diminish its effects. The location of the penetration is not necessarily where the venom is concentrated, as the snake’s fangs are long and curved; cutting will likely only result in a greater potential for infection. Suction is not a good way to remove the viscous venom, as it will have immediately permeated the tissue to the extent that it cannot be extracted with vacuum pressure. The generally accepted procedure at present tends to a more plausible and less radical approach. After getting the victim clear of the immediate vicinity of the snake, the bite area should be cleaned with antiseptic wipes (if available), any jewelry or tight-fitting clothing should be removed to allow for swelling, and the victim should then immediately be transported to a medical facility for the administration of antivenom, which is now widely available. In the event that the snakebite has occurred in a remote area, the victim should be transported, either by being carried if possible or by walking slowly if not, to the closest point of egress where medical attention can be obtained. [14] However, the only certain way to ensure survival from the bite of a timber rattlesnake is to not get bitten in the first place; if you see a timber rattlesnake on the trail, give it a wide berth.

References:

1. Simpson, D. Cassell’s Latin Dictionary, Wiley Publishing, New York, 1968, pp 159,279.

2. Behler, J. and King, F. National Audubon Society Field Guide to North American Reptiles and Amphibians, Alfred A. Knopf, New York, 1979, pp 682-689.

3. “Snake-handling W.Va. preacher dies after suffering bite during outdoor service”. The Washington Post. The Associated Press. May 31, 2012.

4. Öhman, A. and Mineka, S. “Fears, Phobias, and Preparedness: Toward an Evolved Module of Fear and Fear Learning” Psychological Review, 2001 Vol. 108 pp 483-522.

5. Hamilton, J. “Eeek, Snake! Your Brain has a Special Corner Just for Them” National Public Radio All Things Considered, 28 October 2013.

6. The Holy Bible, Revised Standard Edition, Thomas Nelson and Sons, Camden, New Jersey, 1952, p 3 Genesis 3:14.

7. Linzey, D. and Clifford, M. Snakes of Virginia, University of Virginia Press, Charlottesville, Virginia, 1981, pp 134-138.

8. Demeter, B.  Herpetology expert for the Smithsonian Museum of Natural History. Private communication.

9. Linzey and Clifford, Op. cit.

10. Darwin, C. The Expression of the Emotions in Man and Animals, D. Appleton & Company, New York, 1872, pp 102-103.

11. Darwin, C. On the Origin of Species, Easton Press special edition reprint, Norwalk, Connecticut, 1976. p 166.

12. Furman, J. Timber Rattlesnakes in Vermont and New York, University Press of New England, Lebanon, New Hampshire, 2007.

13. Linzey and Clifford, pp 124-126

14. American Red Cross First Aid/CPR/AED Participants Manual pp 96-98. Available at https://www.redcross.org/content/dam/redcross/atg/PDFs/Take_a_Class/FA_CPR_AED_PM_sample_chapter.pdf

Celandine, Greater and Lesser

Greater Celandine

Common Name: Celandine or Greater Celandine (above) and Lesser Celandine (below) – Celandine is frequently called greater celandine to distinguish it from its unrelated namesake. It is derived from the Latin word chelidonia, which means swallow (the bird, not the verb) in English. The purported reason is that celandine flowers bloom in early spring when swallows arrive in its original Mediterranean habitat and wilt when the swallows depart. Celandine is also called swallowwort due to this association and tetterwort or nipplewort for its medicinal applications. Lesser celandine got its name due to its superficial resemblance to celandine, both having yellow flowers and proliferating in similar wet areas. It is sometimes called fig buttercup or pilewort for its use in treating piles, another name for hemorrhoids.

Lesser Celandine

Scientific Name: Chelidonium majus – The genus of greater celandine means swallow as per the discussion above. The species name majus is Latin for greater. It is in Papaveraceae, the Poppy Family. Lesser celandine is Ficaria verna. The genus is from ficus, the Latin word for fig, attributed to the two plants having similar root structures. The species name reflects its spring (vernal) blooming. It is a member of Ranunculaceae, the Buttercup Family.

Potpourri: Even though the greater and lesser celandines share the same name, they are not closely related according to taxonomy. While both are in the Order Ranunculales of flowering plants, they are in two different families: Poppy and Buttercup. There is, however, a good reason for mistaken identity. Aside from growing in similar wet habitats as weedy plants, they share a long history of similar uses by humans for medicinal applications. It is likely that early herbalists who sought plants for potions and poultices looked for yellow flowers and found one or the other. Since greater celandine was almost certainly the first to be exploited for its chemical compounds, the addition of lesser celandine became a useful mnemonic for herbalists. Because both are overly successful in reproduction, spreading out from a small clump to take over relatively large areas, they are both subject to the universal pejorative for anything that grows where humans don’t want it to. A weed is “a form of vegetable life of exuberant growth and injurious effect” according to Merriam-Webster’s Third International Dictionary. Lesser celandine is by far the more notorious of the two and is considered an invasive species in some areas.

Another reason for referring to celandine as greater celandine is to distinguish it from the celandine poppy, also known as the wood poppy, a plant indigenous to North America. Celandine poppies are in a different genus (Stylophorum diphyllum) but are otherwise very similar in chemical, and therefore medicinal, properties, common characteristics of many Poppy Family plants. [1] In all likelihood, the original name of this flower was wood poppy, and due to its superficial resemblance to greater celandine, it was given the alternative name celandine poppy by settlers moving inland from the original colonies. This has some credence, as they are found mostly in the Midwest, which was subject to waves of migration from the original New England states after the passage of the Northwest Ordinance in 1787, one of the first acts of the newly established Congress. The use of the celandine name for both the lesser celandine and the celandine poppy is almost certainly because it was well known to many settlers who came to the New World from Europe. Greater celandine was (and is) one of the more common herbal remedies for a wide range of ailments in the Old World.

Greater celandine, like most herbal remedies, was adopted by apothecaries based on the trial-and-error oral tradition that singled out natural plant medicines. Prior to the scientific revolution in chemistry of the nineteenth century that led to pharmaceutical formulations, nature was the only choice. However, even in the modern era of big pharma, many if not most drugs are synthesized based on plant (and fungal) chemistry. Since every plant needs to grow large enough to reproduce, many evolve smells and tastes to ward off predators that may range from larvae to deer. If their primary threats were bacteria and microbes, then these evolved chemicals could be good candidates for human medicines with the same effect. Greater celandine exudes a bright yellow-orange liquid from its roots and stem. This likely drew attention, since yellow was one of the colors of the four humors mediating human health that were postulated by the Greeks of antiquity and dominated European medicine through the Middle Ages. Based on the formulation of Galen in the second century CE, red blood, yellow bile, black bile, and white phlegm were associated with sanguine, choleric, melancholic, and phlegmatic attributes. [2] Within the religious construct called the Doctrine of Signatures, a plant that had yellow juice must surely have been put there by God as a natural source of yellow bile. Greater celandine was therefore one of the more important herbals of history.

What was greater celandine used for? John Gerard, one of the earliest and most well-known herbalists in Europe, credits Aristotle with its use in the treatment of “the eies (sic) of Swallows that are not fledge, if a man do prick them out, do afterwards grow again and perfectly recover their sight.” What to make of this? Treating baby-bird eye disorders in the fourth century BCE is probably not literal, the original meaning having been lost over years of translation and interpretation. Gerard continues with “The juice of the herbe is good to sharpen the sight, for it clenseth and consumeth away slimie things that cleave about the ball of the eye and hinder the sight.” [3] The shrine of Saint Frideswide, the patron saint of Oxford, England, reputed to be a “benefactress of the blind,” is decorated with a bas-relief of greater celandine, presumably for its curative power, since the flower is a prolific weed in and around Oxford. She supposedly called forth a spring in a village near Oxford whose waters were used as a wash to help restore vision, one basis for her sainthood. The eye-cure remedy is unlikely, as the yellow-orange liquid exuded by greater celandine is highly corrosive and can only have blinded those who tried it, swallows and all. [4]

Greater celandine has been used as a folk medicine across Europe eastward into China for millennia and in North America after its introduction by advancing settlers in the eighteenth century. The root and stem juices were used topically to treat a variety of skin problems including warts, ringworm, and eczema. In modern medicinal practice, salicylic acid and/or cryotherapy (freezing) are similarly used, a measure of the strong reactive chemistry of the plant. Taken internally, it was not surprisingly used to treat yellow jaundice, a liver ailment that could suggest a lack of adequate yellow bile that needed augmentation. There has been a neo-renaissance in the use of greater celandine in the treatment of cancer over the last several decades. This takes the form of what amounts to natural chemotherapy, using the chemicals chelerythrine, coptisine, sanguinarine, and citric acid produced by the plant for its own defense to kill tumorous cancer cells. [5] The most well-known greater celandine-based product is Ukrain (named for the country), which was developed in 1978 and successfully tested in several small-sample-size studies for its effectiveness in treating pancreatic cancer. [6]

As an herbal remedy, greater celandine is not subject to the rigorous testing and certification necessary to qualify as a drug. It can therefore be procured over the counter without a physician’s prescription for use according to alleged and/or perceived (placebo) benefits. It is promoted for intestinal digestive problems, as a mild sedative, to prevent gallstones, and to treat liver disease. This is in addition to its long-standing use to treat skin problems like warts and to reduce eye irritation, despite the inconsistency of these countervailing therapies. However, treatment with herbals derived from greater celandine is controversial. There is some indication that it causes hepatitis, a liver disease it is supposed to cure (discovered when patients improved after the treatment was stopped). It is a known skin hazard, causing rashes and itching and, in some cases, severe allergic reactions. It is poisonous to dogs and some farm animals. [7] It is telling that the European Medicines Agency concludes that “the benefit-risk assessment of oral use of Chelidonium majus must be considered negative.” [8]

Lesser Celandine the beautiful

Lesser celandine is a doppelgänger of its greater namesake. It is a harbinger of spring in two ways. On the positive side, it blooms in profusion with delicate, yellow-rayed flowers arrayed on bright green sculpted leaves, evoking the color and warmth of the sun to erase the drab grays of winter. Since it is a variety of buttercup, the petals have the characteristic glow that is the subject of childhood play in determining preference for butter by its reflection on cheek or chin. However, lesser celandine doesn’t know when to stop, spreading outward in all directions until it is a green blanket that covers everything. Simply put, it is invasive―an early reminder of the summertime onslaught of plants that range from Japanese stilt grass to dandelions. On its European home turf, it is beloved and eulogized as the very essence of spring. In North America, it is a weed, choking out native flowers and replacing them with a striking but nonetheless monocultural greensward. The good Doctor Jekyll and the selfsame but sinister Mister Hyde.

Lesser Celandine the scourge

The US Department of Agriculture defines a noxious weed as “any plant or plant product that can directly or indirectly injure or cause damage to crops (including nursery stock or plant products), livestock, poultry, or other interests of agriculture, irrigation, navigation, the natural resources of the United States, the public health, or the environment.” Just about anyone with a lawn or living near a woodland stream will agree that lesser celandine qualifies. It was introduced into the United States sometime before 1867, when the first documented specimen was recorded in Pennsylvania. It was almost certainly planted as an ornamental; its aesthetic qualities enhance the color and seasonal variety of flower gardens. Like many introduced species, its ability to spread and dominate its new habitat was neither expected nor even realized. And, like most invasive species, it took decades to radiate from its original site, its population growing geometrically. The USDA estimates that 79 percent of the land area of the United States is suitable habitat and that it has an 82.6 percent chance of becoming a “major invader” if introduced. [9]

There are two reasons why introduced plants (and animals) become invasive. The first is that in most cases, new introductions have none of the environmental constraints that were extant on their home turf. It is a tenet of ecology that all living things are constrained from exponential growth by competition for resources. In the real world where resources are limited, population growth is constrained to a finite limit called the carrying capacity, which a population approaches by following what is called a logistic curve. Every species occupies a biological niche that includes all of the resources available to it in its ecosystem, a term coined in 1935 to refer to both the physical and biological surroundings. When a species is taken from its evolved ecosystem and placed in a new one, the rules of the game change. The checks imposed at home are removed and growth continues until it is stopped by the ecology of the new habitat. [10]
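The logistic curve behind this description can be sketched numerically. In the snippet below, a small founding population grows almost exponentially and then flattens as it approaches the carrying capacity K; the growth rate r, K, and starting size are arbitrary illustration values, not data for any real species.

```python
# Euler integration of the logistic equation dN/dt = r*N*(1 - N/K):
# growth is nearly exponential while N << K and stalls as N approaches K.

def logistic_step(n, r=0.5, k=1000.0, dt=0.1):
    return n + r * n * (1 - n / k) * dt

n = 10.0                 # small founding population
for _ in range(2000):    # integrate forward 200 time units
    n = logistic_step(n)

print(round(n))  # settles at the carrying capacity K = 1000
```

An invasive introduction is, in effect, a population handed a much larger K than the one it evolved under, so the near-exponential phase runs far longer before any ecological check intervenes.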

The tuberous roots of Lesser Celandine

The second factor associated with invasive behavior is the ability of the introduced species to spread and multiply so as to dominate the new environment. Lesser celandine has three methods of propagation that almost guarantee survival and promote spread. In addition to the seeds that define all angiosperms, it has not one but two means of vegetative cloning: the roots form small tubers and the stems form bulbils in the leaf axils. Both become detached and are spread by mowing, digging, and, most importantly, flowing water; the densest patches are found in wet areas due to the significance of the latter. Once it gets established, it is almost impossible to get rid of. Anything short of digging up the entire plant, roots and all, while being careful not to drop any bulbils, will only result in a brief hiatus of a year or maybe two. Only a powerful herbicide like glyphosate will truly excise it. [11]

In its European homeland, where it is naturally kept in check, lesser celandine is not only tolerated but admired. In the UK, revered might be more appropriate. Described as a “sweet little plant,” it appears at the very beginning of spring (which is how it crowds out the competition) with bright, sun-like flowers; it is sought out by gardeners and bred by horticulturalists. There are over a hundred cultivars that range from “aglow in the dark” to “yaffle” and include dusky maiden, mister brown, and the ghost. [12] The poet William Wordsworth admired the lesser celandine, writing “It is remarkable that this flower, coming out so early in the Spring as it does, and so bright and beautiful, and in such profusion, should not have been noticed earlier in English Verse.” [13] So he proceeded to write a poem that begins with:

                                     There is a Flower, the Lesser Celandine,

                                     That shrinks, like many more, from cold and rain;

                                      And, the first moment that the sun may shine,

                                      Bright as the sun itself, ’tis out again! [14]

Had Wordsworth been an American poet, the leitmotif might have been beauty and the beast instead of sunshine.

References:

1. Niering, W. and Olmstead, N. National Audubon Society Field Guide to North American Wildflowers, Alfred A. Knopf, New York, 1998, pp 670-675

2. Parker, S. Kill or Cure, Illustrated History of Medicine,  DK Publishing, New York, 2013, pp 106-107.

3. Gerard, J. Gerard’s Herball – Or Generall Historie of Plantes, London, 1633, pp 39-41.

4. Mabey, R. Weeds, Harper Collins, New York, 2010, pp 188-194.

5. Foster, S. and Duke, J. Medicinal Plants and Herbs, Houghton-Mifflin, New York, 2000, p 105.

6. Memorial Sloan Kettering Cancer Center. https://www.mskcc.org/cancer-care/integrative-medicine/herbs/ukrain

7. “Celandine”. American Cancer Society. August 2011. https://web.archive.org/web/20150423221233/http://www.cancer.org/treatment/treatmentsandsideeffects/complementaryandalternativemedicine/herbsvitaminsandminerals/celandine

8. “Assessment report on Chelidonium majus” European Medicines Agency, Committee on Herbal Medicinal Products (HMPC), EMA/HMPC/369801/2009, 13 September 2011.

9. “Weed Risk Assessment for Ficaria verna (Ranunculaceae) – Fig buttercup” Animal and Plant Health Inspection Service, United States Department of Agriculture, August 12, 2015.

10. Nowicki, S. “Biology: The Science of Life” The Teaching Company, Chantilly, Virginia, 2004.

11. “Lesser celandine, Ficaria verna”. Washington State Noxious Weed Control Board. https://web.archive.org/web/20160324080851/http://www.nwcb.wa.gov/detail.asp?weed=185

12. http://www.johnjearrard.co.uk/plants/ficariaverna/genus.html     

13. Mabey, op. cit.

14. https://en.wikisource.org/wiki/Poems_(Wordsworth,_1815)/Volume_2/The_small_Celandine

Catoctin Formation

After about 600 million years, the Catoctin Formation still looks like lava.

Catoctin Formation: A catoctin is defined as “a residual hill or ridge that rises above a peneplain and preserves on its summit a remnant of an older peneplain,” where a peneplain is “an erosion surface of considerable area and slight relief.” [1] It is derived from Catoctin Mountain in north-central Maryland, where the Catoctin Formation was first noted as consisting of a geologic plain rising above a plain. Some sources contend that a tribe of Native Americans called the Kittocton was resident in the general area; if that is the case, Catoctin is almost certainly a toponym. [2] However, the existence of a tribe named Kittocton is probably specious, as the tribe is not listed by the National Geographic Society. [3] Many geographical names came into common parlance without any records―ancient wooded hill, land of many deer, and speckled mountain have also been proffered as the meaning of Catoctin in one of the now-lost Native American languages.

Potpourri: The Catoctin Formation is the most recognizable geological feature of the Blue Ridge Province of the Appalachian Mountains. Its origin as lava that flowed out of fissures in the earth’s crust is evident in the sequential cascades that solidified as they spread over the pre-Cambrian landscape about 600 million years ago (mya). Even though it was named for Catoctin Mountain, where it can be seen in only a relatively few, out-of-the-way places, it is the capstone rock assemblage of Shenandoah National Park. The Marshall Mountains dominate the northern section of the park, benches of lava flowing outward to form the roadbed for Skyline Drive. White Oak Canyon, a cynosure of the central section, follows the circuitous lava flow path. The Appalachian Mountains are over a billion years old. In contrast, the 60-million-year-old Rocky Mountains and even younger Himalayas are relative newcomers to terra firma. The Catoctin Formation is the keystone that connects the arc of the ancient past to the ever-evolving present. [4] How could magma from earth’s liquid mantle flow through and then out over the crust in a place that is now as placid and peaceful as a national park in western Virginia?

Plate tectonics emerged as a coherent and scientifically supported theory of geology in the middle of the last century. It was first postulated by the German meteorologist Alfred Wegener in 1915 based on the conformity of the contours of the western coastline of Africa and the eastern coastline of South America. Supporting observations of geologic and fossil similarities that straddled not only South America and Africa, but also Australia and India, could only be explained if these areas had at one point been connected in a single land mass, eventually named Gondwanaland for a region in India. The idea that massive continent-sized chunks could somehow move around, floating on top of a pool of molten rock agitated by planetary rotation and lunar gravity, and plow through oceanic crust like an ice breaker seemed too fanciful to many geologists until the middle of the century, when further research revealed a viable mechanism. Sea floor spreading was confirmed by the observation of magnetic field shifts in solidifying magma flows at the mid-ocean ridges, providing a source of new crust. That earthquakes recurred in known prone zones led to the notion that plate movement was involved. The term subduction was given to the sliding of great arcs of oceanic crust under adjacent, less dense regions of crust to be remelted as magma in the mantle. With a supply of new magma emerging from the ridges and a recycling facility for old magma in subduction, there was no need to plow through anything. [5] So what has all this got to do with the Catoctin Formation?

The tectonic plates, many containing lower-density sections that are the continental land masses, have been floating on the liquid mantle, driven by the chaotic forces of physics, for most of Earth’s 4.5-billion-year history. When two plates with the less dense continental crust float into each other, subduction is not an option and a headlong crash results. When an irresistible force meets an immovable object, something has to give, and the only option is skyward. The result is an orogeny, from the Greek oros, meaning mountain. The mountain-building orogeny that created the original Appalachian Mountains about 1.2 billion years ago is named Grenville for a small town in Quebec on the Canadian Shield, the central core of the North American plate. Wegener’s preliminary hypothesis that there was a contiguous area he called Gondwanaland was later expanded to include a second, northern land mass named Laurasia that joined it to form the supercontinent Pangaea (Greek for all earth) about 300 mya. Over the last several decades, additional geological analysis of bedrock on a global scale has concluded that the movement of plates reassembles at least 75 percent of the jigsaw puzzle of landforms into a supercontinent roughly every 750 million years. Pangaea was preceded by Rodinia, a name derived from the Russian word rodit, meaning to give birth, as it was at first thought that Rodinia was the original supercontinent that gave birth to all others. Further research has posited an additional supercontinent named Columbia that preceded Rodinia, with evidence of additional combinations that extend as far back as the Proterozoic Eon that started 2.5 billion years ago. [6]

Catoctin Formation dike through older bedrock

The bedrock of the Appalachian Mountains was thus the result of the collision of the land mass containing North America, named Laurentia, with the land mass containing northwestern Eurasia, named Baltica, which gave rise to what was to become Laurasia (North America and Eurasia) about 1.2 billion years ago. When Rodinia started to break apart about 700 mya, fissures opened, allowing magma in the form of lava to flow upward out of the mantle, through the bedrock of the Grenville orogeny, and spread out over its surface. This is the fons et origo of the Catoctin Formation. Continued expansion in a manner similar to the opening of the eponymous Atlantic Ocean in the present geologic age resulted in its precursor, named Iapetus for the father of Atlas in Greek mythology. Initially, the cooled magma was covered by rough gravel at the shallow water’s edge as the mountains were worn away by erosion. As the ocean expanded, the now submerged Appalachian bedrock with its lava coating became covered by smaller sized particles, and eventually by the fine silted sand of mid-ocean. The gradation of sediments from stone to pebble to sand on top of the Catoctin Formation is evident in the present-day Weverton, Harpers, and Antietam formations that make up the Chilhowee Group. [7] Iapetus stopped opening and began to close about 400 mya, creating Pangaea from Laurasia and Gondwanaland with a series of three orogenies named Taconic, Acadian, and finally Alleghany as the various plates collided from north to south. The resultant Appalachian Mountains were probably as high as or higher than the Rockies at their peak uplift. Pangaea’s disassembly started in the Mesozoic Era about 200 mya and is still in progress, the once buried lava rocks of the Catoctin Formation now in full view after millennia of erosion of the once majestic mountains to create the coastal plain. [8]

Geology as the science dealing with the physical nature and history of the earth has evolved extensively through the ages; even the rather obvious origins of lava have been misunderstood. While the Greeks and Romans appreciated the nature of lava and eruptions (the burial of Pompeii by the eruption of Vesuvius in 79 CE could hardly have been misinterpreted), the ensuing Dark Ages of biblical doctrine stifled the study of nature. According to Archbishop James Ussher of Ireland, the earth was created on Sunday, 23 October 4004 BCE, and Noah’s flood was responsible for all current landforms. Even when science rebounded after the Renaissance, geology was especially difficult since its subject is mostly out of sight in tangled knots of rocky confusion. The noted German geologist Abraham Werner conceived that a universal ocean originally covered the earth and that all rock precipitated from it, dismissing the volcanic origins of lava altogether. His adherents, who included most geologists in the eighteenth century, were called Neptunists for Neptune, the Roman god of the sea. The Vulcanists, named for the Roman god of fire and the forge, restored lava to its true provenance as magma emerging from the fiery mantle. The word lava came into wide use in the 17th century from the Italian dialect around Naples (near Vesuvius) and meant something like falling―presumably from the home of Vulcan, which had become a volcano. Lava, in current parlance that reflects decades of study, comes in three basic forms: a’a for rough, fragmented blocks; pahoehoe for smooth, undulating flows; and pillow for lava that emerges under water. A’a and pahoehoe are of Hawaiian origin due to the importance of the perennial lava flows that were key to early studies in volcanology. The lava of the Catoctin Formation is primarily dry, flowing pahoehoe.

The primary constituent of the Catoctin Formation is basalt (from the Greek basanites, a type of slate used to test gold, from basanos, meaning test). Basalt is an igneous rock (ignis is Latin for fire), the generic name for any rock created directly from magma, the liquid rock of the mantle. Because of its low silica content, basalt has a low viscosity, so the lava flow can move relatively quickly and travel as far as 20 kilometers from its source, which can be either a single vent or a long fissure. Basalt erupts at temperatures that range from about 2000 to 2100 °F, becoming either a’a or pahoehoe depending on temperature and topography. It is the most abundant igneous rock in the earth’s crust, comprising almost all of the ocean floor. A rock is defined by the combination of minerals that it contains. A mineral is “a natural substance, generally inorganic, with a characteristic internal arrangement of atoms and a chemical composition and physical properties that are either fixed or that vary within a definite range.” [9] The primary minerals that make up basalt are pyroxene and feldspar.

Pyroxene is from the Greek pyr and xenos, meaning “alien to fire.” The pyroxene of Catoctin Formation basalt is a complex of different minerals that are silicates of magnesium and calcium and that include iron and manganese. The general formula is X(Si,Al)2O6, where X can be calcium, sodium, iron, or magnesium. Magma that contains significant amounts of magnesium (Mg) and iron (Fe) is called mafic as an acronym for these elements. The other major type of magma consists primarily of feldspar and silica; it is called felsic according to the same logic. Feldspar, the other major constituent of Catoctin Formation basalt, is a complex of aluminum silicate minerals, i.e. containing aluminum and silica, combined with potassium (KAlSi3O8), sodium (NaAlSi3O8), or calcium (CaAl2Si2O8). Feldspar is derived from the German Feldspat, “field spar,” referring to common rocks typically strewn about an open area that could readily be cleaved into flakes. Feldspar comprises over fifty percent of the earth’s crust. The similarity of these minerals in elemental composition is due to the dominance of oxygen in chemical combinations. The earth’s crust is about 50 percent oxygen combined with 30 percent silicon and 8 percent aluminum, with iron, calcium, sodium, potassium, and magnesium making up most of the balance at 2 to 5 percent each. [10]

The basaltic lava flows that first emerged from the mantle during the breakup of Rodinia have been subject to 600 million years of change, including some millions of years under the Iapetus Sea and the crushing pressures of the assembly of Pangaea. The pressures and temperatures of deep burial and orogenies change the shape, structure, and properties of existing rocks. The name for the resultant rocks is metamorphic, literally “changed form.” To provide an overarching order to the otherwise intricate complexities of the mineral combinations of individual rocks, they are subdivided into three general types. Igneous rocks of the magma came first, solidifying in the first days of the nascent Earth’s cooling. Water evaporated from the primordial oceans precipitated as rain over the lava lands, and erosion transported the rock grain by grain into the ocean to form sediments that gradually compacted under their own weight into sedimentary rocks. As the physics of balancing forces formed separate plates that drifted on the magma ocean, the resulting colossal forces changed, or metamorphosed, the igneous and sedimentary rocks. Sedimentary shale became slate, and igneous basalt became metabasalt. The Catoctin Formation that remains is the result of an unimaginable journey that took it from the peak of the tallest mountains to the deep sea and back again. While it still retains its basic lava-like appearance in places, it is comingled with many other rock types with their own histories, and it has equally been subjected to differing environs that changed its core composition.

Catoctin Formation bounded by metamorphosed sandstones.

The Catoctin Formation has the colloquial name greenstone due to the gray-green coloration of many outcroppings, a result of its metamorphic journey. The Catoctin basalt consists of phenocrysts (large crystals) of a plagioclase feldspar named albite in a fine-grained matrix of the minerals chlorite, magnetite, actinolite, pyroxene, and epidote. Epidote is a structurally complex mineral of calcium, aluminum, iron, and silicon [Ca2(Al,Fe)3(SiO4)3(OH)] that has a green color described as pistachio. It is this mineral that, when present, gives the Catoctin Formation its distinctive greenish hue. The sequential lava flows over an extended period are reflected in the diversity of the Catoctin Formation. The boundaries between the lava flows are marked by breccias, metatuffs, and metasandstones. Breccia is a rock composed of smaller rock fragments cemented together by sand, clay, and/or lime; these rocks identify areas where a crust formed on a lava flow and was disrupted by subsequent flows. A tuff is a porous rock created by the consolidation of volcanic ash; the metamorphosed tuffs, or metatuffs, are attributed to a rapidly moving cloud of molten ash. The metamorphosed sandstones, or metasandstones, mark the boundary between one lava flow, a period of erosion and sedimentation, and a second lava flow. [11]

References:

1. Webster’s Third New International Dictionary of the English Language, G. & C. Merriam Co., Encyclopaedia Britannica, Inc., Chicago, 1971, pp 354, 1669.

2. http://www.npshistory.com/publications/cato/index.htm   

3. “Indians of North America” National Geographic, Volume 142, Number 6, December 1972

4. Gathright, T., Geology of the Shenandoah National Park, Virginia Department of Mineral Resources Bulletin 86, Charlottesville, Virginia, 1976, pp 19-25.

5. Cazeau, C., Hatcher, R., and Siemankowski, F., Physical Geology, Harper and Row Publishers, New York, 1976, pp 374-393.

6. Meert, J. “What’s in a name? The Columbia (Paleopangaea/Nuna) supercontinent”. Gondwana Research, 14 December 2011, Volume 21, Number 4, pp 987–993. https://www.gondwanaresearch.com/hp/name.pdf

7. James Madison University Geology Notes –  https://csmgeo.csm.jmu.edu/geollab/vageol/

8. Schmidt, M. Maryland’s Geology, Schiffer Publishing, Atglen, Pennsylvania, 2010, pp 88-112.

9. Dietrich, R. Geology and Virginia, The University Press of Virginia, Charlottesville, Virginia, 1970, p 4.

10. Cazeau et al., op. cit.

11. U.S. Geological Survey Bulletin 1265, “Ancient Lavas in Shenandoah National Park Near Luray, Virginia”. https://www.nps.gov/parkhistory/online_books/geology/publications/bul/1265/sec2.htm

Cardinal

Male cardinal pausing between assaults on his reflection in window – Photo by A. Kholmatov

Common Name: Cardinal, Northern cardinal, Redbird, Common cardinal, Cardinal grosbeak – The eye-catching red color of the male plumage is almost identical to the color that distinguishes the echelon of ecclesiastical prelates that rank just below the pope in the Roman Catholic Church. While it is officially named the Northern cardinal to distinguish it from other members of the genus that predominate in Central and South America, its range from Maine to Florida and west to Texas leads to the more common use of cardinal throughout the United States.

Scientific Name: Cardinalis cardinalis – The genus and species names are the original Latin form of the word cardinal, derived from cardo, meaning “hinge.” The implication is something of central importance, like the cardinals of Rome, the cardinal (N, S, E, W) directions, and the cardinal (1, 2, 3 …) numbers. The doubled genus-species designation connotes that the northern cardinal is the type species for the genus, which in a way also stresses centrality.

Potpourri: The male northern cardinal is arguably the most recognizable and popular bird in North America. It was chosen as the official bird by seven states, foregoing uniqueness for panache. It is one of the few team names shared by two major professional teams―baseball in Saint Louis and football in Arizona. It is an official color of colleges ranging from MIT in Massachusetts to Stanford in California. The cardinal was chosen for its eye-catching, strident redness and not for any particular avian vitality, ubiquity, or singularity of song. The cardinal is not especially notable, just one of the many so-called songbirds of the order Passeriformes that flit from tree to tree in search of food, nest-building materials, or each other. And all the while, the female cardinal is swathed in brown feathers to match the colors of the trees and soils. [1] Why then is the male cardinal cloaked in cardinal red?

There is also a Sacred College of Cardinals, the source of both the name and the color of the bird. The first use of the term cardinal to indicate a person of pivotal importance (literally one on whom things hinged, from the Latin word cardo) was for the deacons who presided over the seven regions of Rome in the 6th century. These prelates eventually became a privileged class as Roman magistrates and adopted the red that had long been used in Roman society to indicate rank and importance. [2] Red has been a key color in almost every society in human history, from the red ochres used in cave drawings to the war paint of Native Americans. The red that later became the robes of royalty throughout Europe was a rare and expensive commodity, ranking just behind royal purple in prominence. The red that was symbolic of power and wealth in the Roman Empire was sourced from minuscule, sap-sucking insects of the genus Kermes that fed on oak trees in the Mediterranean basin; the insects were collected, crushed, and strained, and a great deal of painstaking labor went into making just a few drams of dye. The red bug goo color that passed from Roman centurion to cardinal in antiquity was and still is scarlet, not cardinal red.

So why are North American red birds called cardinals and not scarlets? The bird cannot have been seen by Europeans before the 16th century, when the mainland of North America was first explored and colonized. The striking red bird was almost certainly noticed by the French moving their bateaux up the Saint Lawrence River to lay claim to the region as New France. Suffering a dearth of settlers, the French government, directed by Cardinal Richelieu, chief minister to King Louis XIII, encouraged emigration starting in the middle of the 17th century. The new settlers who expanded along the Saint Lawrence River from Quebec City to Montreal were in a sense his agents, eventually renaming a tributary the Richelieu River. A bird named cardinal for Richelieu’s signature color would be equally apt. The cardinal bird name probably carried south with commerce and cultural contact to reach English colonists moving inland from Boston. No friends of persecuting papists, they may have favored the cardinal name in mockery; this is not outside the guardrails of the bawdy humor of the age. When Mark Twain was presented a scarlet robe on receipt of an honorary doctorate at Oxford, he remarked, “There is no such red as outside the arteries of an archangel.” [3] The bird is cardinal in both French and English, distinguished only by pronunciation.

Cardinals have some characteristics that distinguish them as unusual when compared to the other perching birds of the Order Passeriformes, more commonly called songbirds. The most obvious is the pronounced color difference between the male and the female, a trait called sexual dimorphism. While there are subtle differences in the hue of plumage between the sexes of many birds, none take it to the extreme of a scarlet red male and a forest brown female. One hackneyed rationale is that the male would draw predators away from the nest so that the female could remain hidden with the brood; more chicks would then survive to retain the color dichotomy in perpetuity. The female, as procreator, would therefore choose a more cardinal red mate to enhance the survival of her genes. This doesn’t make much sense, since mammal egg snatchers like foxes and ferrets cannot see red. While demonstrably true physiologically and experimentally, the reason mammals cannot see red (including bulls charging at capes) can only be a matter of conjecture. The operative theory is that mammalian origins in the shadows of the dominant dinosaurs were literally devoid of much light, but movement mattered; smell and hearing were paramount. Over evolutionary time, mammals retained only blue and green cones for rudimentary color vision, with a surfeit of rod cells for dim-light peripheral movement perception. (Red cones were regained by primates like us as a consequence of taking to the trees, facilitating the location of the bright-colored fruits that became their mainstay diet.) [4] The consequence is that the red male cardinal might as well be brown, since its movement is all that would matter to a predator mammal. There are other cardinal predators such as owls, hawks, and snakes that do see red, but there is no correlation between the degree of male redness, referred to as “ornamentation,” and predator avoidance behavior in field studies.
In fact, female cardinals have been observed fighting back against predation with no reliance on male participation. [5]  

Mate choice is a more compelling reason for cardinal red. The selection of the most desirable male by a female has been well established in some species of birds. In New Guinea, there are male birds of paradise that put on elaborate feathered displays to impress females, and male bowerbirds that build extravagant nests with colorful decorations ranging from red fruits to green fungi as proffered bridal suites. [6] The elaborate tail of the peacock can have no other function than to impress the peahen. Mate choice, however, is not just for the birds. To a greater or lesser extent, it is pervasive throughout the animal kingdom, from fruit flies to fruit bats and especially humans. Our very identity depends on a random sequence of mate choices made by parents and grandparents extending through hundreds of generations. Mate choice can be defined as “any pattern of behavior, shown by members of one sex, that leads to their being more likely to mate with certain members of the opposite sex than with others.” In biological jargon, these are called the courter and the chooser. While there is no serious scientific disagreement about the existence of mate choice as an essential component of the birds-and-the-bees doctrine, there is neither consensus about its actual mechanisms nor understanding of the way it evolved. [7] It is complex, inclusive of combinations of sight, smell, sound, and perhaps touch (but rarely, if ever, taste). For female chooser cardinals, some combination of sight for color and sound for birdsong are the most likely factors.

The unusual characteristics of birds were not lost on Charles Darwin, whose evolution epiphany was inspired at least in part by the different beak sizes and shapes of Galapagos Island finches. The importance of what have come to be known as Darwin’s finches to his ultimate conclusions concerning survival of the fittest has been overstated. In visiting the islands of the archipelago, Darwin was struck by the similarities of a Galapagos mockingbird to one called Thenca that he had recently seen in South America. On traveling to a second island, finding a third type of mockingbird, and observing that the indigenous giant tortoises were equally varied, he first posited that there must be something about isolated islands that promotes variations. In his field notes, he wrote that “such facts would undermine the stability of species.” It was only on his return to England with his collected finch specimens that an ornithologist named John Gould reached the conclusion that the finches were “so peculiar as to form an entire new group containing twelve new species.” [8] In the seminal work Darwin published about twenty years later, his thoughts on birds were much more nuanced. In a chapter entitled “Difficulties on Theory,” he observes that “beautiful colours” and “musical sounds” must be due to sexual selection, since “natural selection acts by life and death.” He concluded that structures created “for the sake of beauty” would be “absolutely fatal to my theory.” [9]

Darwin’s radical theory of evolution was in direct contradiction to the Bible’s origin story of the Great Flood and Noah’s Ark, an issue that resonates to this day despite overwhelming DNA evidence of evolution’s veracity. He purposely excluded any discussion of mankind’s origins so as to mitigate shock and backlash from the ecclesiastical establishment of the Victorian Era. A decade later, he elected to take on Adam and Eve directly in a second book, The Descent of Man, with the almost forgotten subtitle and Selection in Relation to Sex. Here then is Darwin’s full-blown retraction: “If female birds had been incapable of appreciating the beautiful colors, the ornaments and voices of their male partners, all the labor and anxiety exhibited by the latter in displaying their charms before the females would have been thrown away; and this it is impossible to admit.” He even alludes to the use of bird feathers in women’s fashion, popular at that time, to assert that “the beauty of such ornaments cannot be disputed.” [10] There must then exist a sexual selection based on perceived beauty that operates hand in hand with natural selection based on fitness; the two combine to produce the tree of life. The dating game of young adult humans differs from the pairings of birds such as cardinals only in range and scope.

Sexual color dimorphism in cardinals must have something to do with mate choice, but it may not be the only factor. The intricacy, variation, and tonal quality of song is also considered one of the primary means by which male courters seek the attention of female choosers among passerines. In most species, only the male sings, lending some credence to this behavior as mate related. Cardinals are unusual, however, in that both the male and the female sing. In fact, the songs are so similar that to the human ear they are indistinguishable; only when the male and female cardinal songs are separately analyzed by frequency and amplitude are the two shown to be distinct. [11] Since bird songs are learned and, in some cases, embellished by practice, the question is whether males learned their version of the song from other males and females likewise learned it from other females. A third intriguing possibility is that the female learned from the male and then modified the sounds ever so slightly as a way to respond. The reverse, with the male learning from the female, is also possible but unlikely. This would suggest that the male and female cardinal share in a more or less egalitarian fashion.

Female cardinal engaged in nest building.

Cardinals are very aggressive―males and females in almost equal measure. This is especially notable in the late spring and early summer when adequate and suitable territory for nesting is established. Any intruder cardinal that attempts to penetrate the guarded perimeter of a mated pair’s domain will be subject to assault by the male, the female, or both. With lowered crest and eyes fixed on the aggressor, defending cardinals have been observed lunging after the intruder, using their feet and beak as weapons to force expulsion. The physical onslaught is often augmented by vocalizations described as chips and pee-toos, and intruder chases can go on as long as thirty minutes. This pronounced defensive posture is the cause of one of the more notable cardinal behaviors. Since birds are not self-aware like humans and a few other animals, they do not recognize themselves in reflective surfaces like window glass. Cardinals are therefore frequently given to aggressively attacking their image in a window or even a shiny car bumper, pecking at the imagined intruder that will never go away until they themselves do. Sapience has its benefits. They eventually cease in fatigue and probably frustration.

Cardinal appearance goes beyond the red color of the male plumage to the broader category of ornamentation, inclusive of the length of the crest, bill coloration, and face-mask contrast. Many attempts have been made to correlate variations in cardinal ornamentation to variations in body size and condition, feather growth, parental care, territorial defense, and mating choices. In general, the results have failed to establish any definitive relationship between any ornamentation trait, including male redness, and any other aspect of cardinal behavior or physiology. For example, a trial in a rural area of New York found that brighter male color was positively correlated with reproductive success, but a trial in an urban area of Ohio found no such correlation. In a more controlled experiment called a captive mate trial, females showed no preference for colorful males. [12] The only variable that can be directly attributed to a cardinal’s relative redness is the availability of fruit during the molting period when feathers are renewed. Fruits are colored by chemicals called carotenoids, found in many plants to augment chlorophyll by absorbing light energy from additional frequency bands. When cardinals are fed a diet devoid of carotenoids, they vary in color from pale red to yellow. [13]

Why are male cardinals red and female cardinals brown? There is clearly a mate choice of some sort in operation, but it is not a choice favoring redness. Cardinals have elaborate courting behaviors that demonstrate evolutionary development of sex-related activities. Sex matters. Many if not most birds are monogamous, retaining the same mate for life. Cardinals are a bit less steadfast, changing mates not regularly but on occasion. So there must be some choosing going on, and that would be under the purview of the female chooser. This is an evolutionary result related to the lack of an external male sexual organ in most birds. Sex therefore requires the consent of the female, since copulation involves contact of the male and female cloacae, known euphemistically as the cloacal kiss; this could not happen without mutual consent. (Cloaca once meant sewer, the name given to the opening in birds, reptiles, amphibians, and fish that serves for both excretion and conception.) One hypothesis is that the female cardinal chooses a male for a mate due to his compatibility. Female and male cardinals have very similar behaviors, ranging from nearly identical songs to equal aggressiveness, and the hypothesis is that this similarity was the result of female cardinal mate choice; the complexities of human mate choice are equally qualitative. If this is the case, then the red color of the male cardinal is more likely a genetic coincidence incident to female selection of a companionable mate. This is not without precedent: dogs bred for friendliness by humans develop rounded snouts and drooping ears.

References:

1. Alderfer, J., editor, Complete Birds of North America, National Geographic Society, Washington, DC, pp 597-606.

2. “Cardinal” Encyclopaedia Britannica Micropaedia, William Benton, Chicago, Illinois, 1972, Volume 11, p 560.

3. Rossi, M. The Republic of Color, University of Chicago Press, Chicago, 2019, p 132.

4. Drew, L. I, Mammal, Bloomsbury Sigma, London, 2017,  pp 254-256.

5. Jawor, J. and Breitwisch, R. “Multiple ornaments in male Northern Cardinals, Cardinalis cardinalis, as indicators of condition”. Ethology, 2004, Volume 110, Number 2, pp 113–126.

6. Prum, R. The Evolution of Beauty, Doubleday, New York, 2017, pp 184-205

7. Rosenthal, G. Mate Choice, Princeton University Press, Princeton, 2017, pp 3-30.

8. http://darwin-online.org.uk/EditorialIntroductions/Chancellor_Keynes_Galapagos.html  

9. Darwin, C. On the Origin of Species, The Easton Press, Norwalk, Connecticut, 1976, pp 164-166, 360-366.

10. Darwin, C. The Descent of Man, The Easton Press, Norwalk, Connecticut, 1976, pp 79-80.

11. Yamaguchi, A. “A sexually dimorphic learned birdsong in the northern cardinal”. The Condor. 1 August 1998, Volume 100 Issue 3, pp 504–511.   

12. Cornell Lab of Ornithology. “Cardinalis cardinalis” at https://www.allaboutbirds.org/news/   and https://birdsoftheworld.org/bow/species/norcar/cur/behavior#sex    

13. McGraw, K. et al “The Influence of Carotenoid Acquisition and Utilization on the Maintenance of Species-Typical Plumage Pigmentation in Male American Goldfinches (Carduelis tristis) and Northern Cardinals (Cardinalis cardinalis)”. Physiological and Biochemical Zoology. University of Chicago Press. November, 2001 Volume 74 Number 6 pp 843–852.

Hemlock for a Happy New Year

Hemlocks are among the many pines and fir evergreens that are symbolic of the holiday season. This hemlock is a new generation growing to replace those lost to an invasive species and a devastating hurricane at Limberlost in Shenandoah National Park.

Common Name: Eastern Hemlock, Canada hemlock, Hemlock spruce – Hemlock is the name for the hop plant in both the Germanic (homele) and Finno-Ugric (humala) language groups. The hop plant is the source of the “hops” used for centuries across much of northern Europe to impart a bitter flavor to liquors made from malted grain. The small flowers of the hop plant are similar to the flowers of the poison hemlock (Conium maculatum), which shares the same etymology and from which the hemlock tree gets its name by indirect association. In other words, the poison hemlock looks like and was named for the hop plant, and the hemlock tree shares a number of attributes with poison hemlock. The Carolina hemlock is very similar to and difficult to distinguish from its collocated eastern cousin.

Scientific Name: Tsuga canadensis – The generic name is from the Japanese word for the larch tree which, like the hemlock, is a member of the pine family. Most of the other trees in the genus Tsuga are indigenous to East Asia, primarily Japan. The species name is a reference to the first classification of the tree in the Linnaean taxonomic system, based on a specimen first sighted and identified in Canada. The Carolina hemlock is Tsuga caroliniana, first distinguished in the Appalachian uplands farther south.

Potpourri: Hemlocks are members of the ubiquitous Pinaceae or pine family, which consists of conifer or cone-bearing trees that grow throughout the temperate regions of both the Northern and Southern Hemispheres and in mountainous tropical regions. The pine family includes pines (Pinus), spruces (Picea), firs (Abies), hemlocks (Tsuga), larches (Larix), and Douglas-firs (Pseudotsuga, or false hemlock). [1] Since they are large trees that grow in dense clusters, they are among the most important trees of the timber industry, providing 75 percent of all lumber and 90 percent of paper pulp. There are over 200 species worldwide, of which about 60 are indigenous to North America. Pine family trees are monoecious, bearing male and female cones on the same tree and therefore capable of self-pollination, contributing to their evolutionary success at the expense of genetic diversity. The “naked seeds” that literally define the Gymnosperms (gymno is Greek―gymnasiums were places for naked exercise) sit at the base of the female pinecone scales, fertilized by male cone pollen wind-blown from the same tree. The pollen deposited on the megasporangium of the female cone in the spring ceases growth through the winter, consummating fertilization the following year. [2] In good time, you get a pine.

Hemlocks can most easily be distinguished by their needles, a term referring to the narrow, pointed leaves that, except for the larch, do not fall off over winter, giving rise to the more general term evergreen. Hemlock needles are short and arrayed in two neat rows, one of nature’s better options for higher mountains and boreal forests. However, needles do have a lifespan; pine trees lose about one fourth of their needles every year, coating trails with a soft cushion of decay that suppresses almost all other plant growth and makes one of the best treads for foot travel. The “evergreen” needle as a leaf form is an evolutionary result of several factors involving both latitude and geology. The primary determinant is the length of the growing season, which can vary from as short as 65 days in New England to an average of 250 days in the Southeast. All things being equal, a plant will trend toward greater leaf area to expose as much of itself to sunlight as possible; photosynthesis in the chloroplasts of leaf cells converts the energy of solar photons into the hydrocarbon molecules of biology. Broadleaf trees grow where they can, and evergreen needle trees grow where they can’t.

Hemlock needles (with woolly adelgids)

When the non-growth colder season approaches, broadleaf trees are better off wintering over with bare branches, having adequate time to replenish their foliage the following spring. In northern latitudes, there is simply not enough time to restock the canopy with sun gatherers, so the leaves persist year-round in narrow, needle-like form. Temperature is a second factor, due primarily to physics: when the freezing point is reached, the uptake of water is squelched and growth is curtailed. Since average temperature drops about 3 degrees F for every 1,000 feet of elevation, mountainous terrain has the same effect as latitude on the growing season, so evergreens also prevail at higher elevations. Needle trees are further favored in northern latitudes and uplands because they are winterized with wax-coated needles and resin-infused wood and roots. The conical shape of many conifers with their one-dimensional needles is also better suited to survival under heavy snowpack. It should be noted that the pine barrens of New Jersey and the wide expanses of scrub pines across the South are neither mountainous nor northern; some species of pine thrive in dry, sandy soils where periodic wildfires have historically been the norm. Their cones are serotinous, meaning that they evolved to burst open after a fire to spread the seeds of restoration, eventually becoming the dominant species. [3]
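The lapse-rate arithmetic above lends itself to a quick back-of-the-envelope calculation. The sketch below is only an illustration of the essay’s round figure of 3 °F per 1,000 feet (real lapse rates vary with season and humidity), and the function name `temp_at_elevation` is invented for the example:

```python
# Rule of thumb from the text: average temperature drops
# about 3 degrees F for every 1,000 feet of elevation gained.
LAPSE_F_PER_1000_FT = 3.0

def temp_at_elevation(base_temp_f: float, rise_ft: float) -> float:
    """Approximate the temperature after a rise of rise_ft feet
    above a starting point at base_temp_f degrees F."""
    return base_temp_f - LAPSE_F_PER_1000_FT * (rise_ft / 1000.0)

# A 70 F day in the valley is roughly 58 F on a summit
# 4,000 feet higher, one reason evergreens prevail at
# elevation as well as at latitude.
print(temp_at_elevation(70.0, 4000.0))  # -> 58.0
```

By the same rough equivalence, climbing a few thousand feet in the Appalachians shortens the growing season about as much as traveling several hundred miles north.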

That hemlock trees share a name with the poisonous hemlock plant cannot be a matter of chance etymology. They have some things in common, but not the notorious toxins of the latter. “Drinking the hemlock” was the standard method of execution in Ancient Greece. One of history’s most enduring dramas is the trial of Socrates in 399 BCE before the popular court, or dikasterion, composed of 500 Athenian citizens. He was prosecuted for undermining religious faith in the “gods that the state recognizes” by introducing new “demonical beings” and for “corrupting the youth,” and was found guilty by a slight majority. The hemlock execution of Socrates is considered by many historians to mark the end of the Golden Age of Greece. [4] Poison hemlock was thus well known throughout Europe by the Middle Ages, both for its toxicity and, in small doses, for the treatment of a variety of ailments. There is evidence of its use for the treatment of cancer, as a narcotic or analgesic, and even as an anti-aphrodisiac (perhaps by killing the object of desire). [5] Because of this, many Europeans were familiar with its shape when growing and its smell when ground into powder. However, since there were no hemlock trees in Europe, it took the discovery and exploration of the Americas to associate the poison hemlock plant with its namesake tree.

The hemlocks of North America were almost certainly first sighted along riverbanks by French explorers who penetrated the mainland by sailing up the St. Lawrence from the North Atlantic in the 16th century. Their knowledge of the smell and branching pattern of the poison hemlock led them to apply the familiar name to the unfamiliar evergreen tree with similar characteristics. This is corroborated by the British Cyclopedia of 1836, which notes that the hemlock tree was “so called from its branches in tenuity and position resembling the foliage of the common hemlock.” Conium, the genus of the poison hemlock, was purposely chosen because the plant looked like a miniature cone-bearing tree. In the New World, where there were so many new and strange plants, any mnemonic that distinguished one species from another brought some order to the chaos. To differentiate the evergreen version of hemlock from its doppelgänger, the compound name “hemlock spruce” was applied. [6] Spruce trees of the genus Picea prevail in boreal forests across North America and Eurasia. Spruce is an anglicized version of “from Prussia,” owing to the prevalence of native spruce trees along the Baltic Sea near present-day Lithuania. Prussia was the ancestral home of the medieval Teutonic Knights, whose successor state grew in prestige and power, uniting the disparate Germanic states to form a unified Germany in the 19th century. The hemlock spruce is called pruche du Canada in Quebec, further evidence of the Prussian origin. The tree was later moved from the spruce genus to a genus of its own, Tsuga, within the pine family.

Eastern hemlock, or hemlock spruce, is the most shade tolerant of all eastern tree species and can survive with as little as 5 percent of full sunlight. Since the conversion of solar energy into hydrocarbon energy is the foundation of life, its scarcity can only be compensated for by slow growth. Like Treebeard, the Ent of Tolkien’s mythical Fangorn Forest, hemlock growth is slow but inexorable. A one-inch-diameter hemlock (usually reported as dbh, diameter at breast height, to account for irregularities) can be over 100 years old. Since hemlocks can grow to over six feet dbh with a height of over 150 feet, it follows that longevity is another characteristic trait. The record age for a hemlock is 988 years, older than Noah’s 969-year-old grandfather Methuselah, the epitome of lifetime endurance. Once established, a hemlock canopy blocks sunlight from penetrating to the understory, snuffing out most arboreal competition. The resulting microclimate of dense shade with a deep duff layer retains moisture and sustains uniformly reduced ambient temperatures. Not surprisingly, the relatively exacting moisture and temperature requirements for hemlock germination are met by the very conditions that hemlocks create. [7] But there is more to forest soil management than trees. There are also fungi.
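The growth figures above invite a back-of-envelope check. The sketch below, using only the numbers in the text (1-inch dbh at roughly 100 years, a maximum of about 72 inches dbh), shows that the suppressed understory growth rate cannot persist for a tree’s whole life; the function and rate here are illustrative assumptions, since real hemlock growth accelerates once a tree reaches canopy light.

```python
# Back-of-envelope check on the growth figures in the text:
# 1 inch dbh at ~100 years implies an average radial growth of
# 0.5 in / 100 yr = 0.005 inches per year while suppressed in shade.

def years_to_dbh(dbh_inches, radial_rate_in_per_yr):
    """Years to reach a given dbh at a constant radial growth rate."""
    return (dbh_inches / 2.0) / radial_rate_in_per_yr

suppressed_rate = 0.5 / 100.0  # inches of radius per year

# At the suppressed rate, a 72-inch dbh would take 7,200 years,
# far beyond the 988-year record age, so growth must speed up
# considerably once a hemlock reaches the canopy.
print(years_to_dbh(72, suppressed_rate))  # → 7200.0
```

The gap between 7,200 years and the 988-year record is the point: shade tolerance buys survival, not size, until an opening in the canopy appears.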

Hemlock polypore growing on dead hemlock.

Pine family trees like hemlock are connected through their root systems with the fungi that surround them, an arrangement known as ectomycorrhizal, “outside fungus root” in Greek. About 90 percent of all plants form mutualistic partnerships with fungi to gain access to essential soil nutrients like phosphorus and nitrogen, with the plant providing up to ten percent of its hydrocarbon sugar output to the root fungi in return. For most plants, the mycorrhizal relationship is optional, resulting in more robust growth; for trees of the pine family like hemlock, it is universal. Many different species of fungi are involved with the roots of any given tree. While there have been no such studies for hemlocks, the closely related Douglas firs (Pseudotsuga menziesii) are estimated to have over 2,000 different species of associated fungi. [8] The kingdom Fungi is not uniformly benign, however, as all living things must find their niche in the tangled web of life as a matter of survival. The subsurface soils kept moist by the hulking hemlocks are an ideal habitat for mold, another broad category of fungi. Seven species of fungi attack the seeds of hemlock resting on the moist soil awaiting the magic of germination. One mold species, Aureobasidium pullulans, was found growing on almost three fourths of all hemlock seeds, impeding their full function. Hemlocks, when they eventually keel over, sustain yet another group of fungi, the saprophytes that feed on the dead. Were it not for the fungi that consume the cellulose and lignin from which tree trunks are made, the world would be covered with tree trunks and none of their carbon would be returned to the atmosphere. Because hemlocks are so pervasive, one species of fungus, aptly named Ganoderma tsugae or hemlock polypore, subsists exclusively on their deadwood. Also called varnish shelf, it is one of the most recognizable of all fungi and is closely related to one of the most important fungi in Asian medicine (see full article for further details).

Hemlock growing adjacent to fallen old growth hemlock trunk in foreground.

The hemlock is listed on the International Union for Conservation of Nature Red List as near threatened. [9] This surprising state of affairs is not the result of clear-cutting and overharvesting, although human impact has surely had deleterious effects. The high point of hemlock harvest was at the turn of the last century, when the wood was used primarily for roofing and flooring in home construction. As the population surged in the decades that followed and the newspapers of the golden age of Hearst and Pulitzer proliferated, hemlocks became one of the primary sources of paper pulp. The effects are exemplified by Michigan’s growing stock decreasing by over 70 percent between 1935 and 1955, a result of hemlock’s slow growth relative to its rate of removal. However, the real culprit that threatens hemlocks is a sap-sucking insect closely related to aphids, the bane of gardeners and food for ladybugs. The woolly adelgid was probably introduced from Japan in the early 1950s, first reported near Richmond, Virginia, and has now spread to 19 states and two Canadian provinces. [10] The larvae of the adelgid suck the body fluids from hemlock needles at their base, covering themselves with a fluffy white layer (hence woolly) to protect against predation (see full article for further details). A literal death by a thousand cuts ensues that can take decades but is in most cases inevitable. The hemlocks of the Limberlost were the only old-growth tract in Shenandoah National Park. They had been so weakened by woolly adelgids that they toppled during Hurricane Fran in 1996. The hemlocks are just starting to recover almost thirty years later (note the fallen hemlock trunk in the foreground of the photo).

Unlike its poisonous namesake, hemlock is not only edible but salubrious. It has been attested that the entire pine family “comprises one of the most vital groups of edibles in the world.” [11] This applies mostly to northern latitudes, where the paucity of winter food could result in starvation absent resort to eating the inner bark of pine trees, a thin layer called the cambium. The nutritious cambium is responsible for the formation of the water-transport xylem on its inside and the hydrocarbon food-transport phloem on its outside; in other words, it makes the tree trunk. For softwood pine trees, stripping off the outer bark layer to gain access to the cambium can be readily accomplished with primitive scraping tools. The native peoples of North America collected cambium, which was cut into strips and eaten raw, cooked, or dried and ground into flour to make bread, a practice adopted by early colonists. The name of the Adirondack Mountains of New York derives from the Mohawk word haterỏntaks, which means “they eat trees.” The healthful benefits of hemlocks and other pines are further enhanced by high concentrations of anti-inflammatory tannins and the antioxidant ascorbic acid (vitamin C) in all parts of the tree. The various Indian tribes had diverse uses, extending from pine tea to treat colds to thick pine-sap paste applied to wounds as a poultice. [12] One early settler wrote in his diary in the mid-19th century that “I never caught a cold yet. I recommend, from experience, a hemlock-bed, and hemlock-tea, with a dash of whiskey in it merely to assist the flavor, as the best preventive.” [13]

References: 

1. Little, E. The Audubon Field Guide to North American Trees, Eastern Region, Alfred A. Knopf, 1980, pp 276-301.

2. Wilson, C. and Loomis, W. Botany, Holt, Rinehart and Winston, New York, 1967, pp 549-570.

3. Kricher, J. and Morrison, G. A Field Guide to Eastern Forests of North America, Peterson Field Guide Series, Houghton Mifflin Company, Boston. 1988, pp 9-10.

4. Durant, W. The Life of Greece, Simon and Schuster, New York, 1966, pp 452-456.

5. Foster, S. and Duke, J. Medicinal Plants and Herbs of Eastern and Central North America. Peterson Field Guide Series. Houghton Mifflin Company, Boston, 2000, pp 68-69.

6. Earle, C. Tsuga, The Gymnosperm Database, 2018, at https://www.conifers.org/pi/Tsuga.php      

7. Godman, T. and Lancaster, K. “Pinaceae, Pine Family” U.S. Forest Service Report at https://www.srs.fs.usda.gov/pubs/misc/ag_654/volume_1/tsuga/canadensis.htm   

8. Kendrick, B. The Fifth Kingdom, Focus Publishing, Newburyport, Massachusetts, 2000, pp 257-278.

9. https://www.iucnredlist.org/species/42431/2979676    

10. https://explorer.natureserve.org/Taxon/ELEMENT_GLOBAL.2.131718/Tsuga_canadensis  

11. Angier, B. and Foster, K. Edible Wild Plants, Stackpole Books, Mechanicsburg, Pennsylvania, 2008, pp 168-169.

12. Native American Ethnobotany Database at http://naeb.brit.org/uses/search/?string=tsuga+canadensis

13. Harris, M. Botanica, North America, Harper Collins, New York, 2003, pp 44-46.