Wood Frog

The most recognizable feature of the wood frog is the black “robber’s mask” eye stripe.

Common Name: Wood Frog – Frog is among the oldest of Indo-European words, originating as the Sanskrit pravate, meaning “he jumps up.” It evolved into English through Old Norse as frauki. Wood frogs are found around wet areas in woodland habitats, not on wood as the name suggests. The reference may be to its characteristic brownish hues, similar in color to wood bark. However, brown frog would then be a better choice, though no less uncreative.

Scientific Name: Rana sylvatica – Rana is the Latin word for frog, which differs from the Sanskrit origin as an onomatopoeia of their call … like croak or ribbit in English. The Latin word for woodland is silvae. The scientific name is literally “frog wood,” the reverse order of the common wood frog name. Wood frogs have been reclassified by modern DNA taxonomy as Lithobates sylvaticus, from the Greek lith meaning “stone” and bates meaning “one who treads,” which would connote “stone walker.” This could be literal, as is the case for the wood frog depicted above climbing on a lichen-covered rock.

Potpourri: Wood frogs appear in the spring after having endured even the coldest of winters, as if migrating back from remote, warmer habitats like anuran snowbirds. Surely an amphibian noted for its slimy wetness cannot have survived near the frozen-through, skateable ponds that dot the woods they inhabit. But they do. The extraordinary tenacity of life in the savagery of the wild is the result of the survival of mutants. After the basics of what it took to be a frog were successfully worked out in the deep recesses of time, populations of jumping, amphibian carnivores lurking in or near water burgeoned. To escape the crowds competing for the same resources, the more adventurous individuals left for greener, but sometimes colder, pastures. The resulting diaspora to new environments is one driving force for speciation. Wood frogs, like humans among mammals, have managed by sheer luck to evolve in the right direction to become among the most successful of their amphibian cohorts. Not only do they survive arctic winters, but they are the first to emerge in spring to fill any newly formed pool of water with thousands of eggs. It is only a matter of time until a new mutation offers better chances elsewhere.

The first question is how thin-skinned animals survive iced-in ponds without the coat of a beaver or the down of a duck. This conundrum perplexed naturalists whose warm-blooded judgment was skewed toward bears denning in caves and caribou gathering in tightly packed herds to share or conserve body heat. The cold blood of reptiles and amphibians lacks the metabolic wherewithal of thermoregulation. Consensus was that burrowing deep into the ground below the frost line was the only possible palladium; toads had been found buried up to four feet deep during excavations. John Burroughs, an eminent nineteenth century American naturalist, chanced upon a frozen frog under some leaf litter and concluded that “… frogs know no more about the coming winter than we do, and that they do not pass deep into the ground to pass the winter as has been supposed.” [1] Finding an animal frozen and lifeless would lead most to conclude that it died of exposure, having failed to account for weather extremes. The foolish frog theory, which would make a passable subject for Aesop’s fables or a subplot in Disney’s Frozen, is false. Frogs freeze on purpose.

Science entered the picture in the 1980s when a Minnesota-based researcher with some knowledge of frog adaptability took up the subject. The experiment consisted of collecting a number of frog species in the fall and subjecting them to freezing in the laboratory under controlled conditions. After six days at -6°C, the frozen frogs were moved to a refrigerator and thawed at +6°C. Wood frogs began to show vital signs and limb movement after three days, but mink and leopard frogs subjected to the same conditions froze to death and stayed that way. The resulting paper concluded that “an accumulation of glycerol during winter was correlated with frost tolerance, indicating that this compound is associated with natural tolerance to freezing in a vertebrate.” [2] In other words, wood frogs seemed to be making antifreeze. In the four decades since this seminal experiment, further research has revealed the true nature of the wood frog’s magic.

What better place to study frozen wood frogs than Alaska, where arctic winter is the norm and spring thaw the exception? Researchers located frozen frogs in the wild and measured ambient temperatures with sensors placed directly on their skin. After two seasons with temperatures as low as -18°C and a seven-month period of deep-freeze suspended animation, every wood frog came back to life. [3] That the frogs survived the natural habitat test at much lower temperatures and for a much longer period than in the laboratory led to some speculation as to the mechanics of freeze protection. Vertebrate metabolism is based on energy generated primarily from the oxidation of glucose derived from dietary carbohydrates. Excess glucose is stored in the liver and in muscle tissue as glycogen for future energy needs. The key to the deep-freeze conundrum was that in the laboratory, the temperature was lowered below freezing just once and the frogs froze. In the wild, frogs are subjected to multiple freeze/thaw cycles according to weather fluctuations. It was discovered that each cycle ratcheted up the production of glycogen, ultimately increasing its concentration by a factor of five. To accommodate the stockpile, liver size increased by over fifty percent ― one researcher described the wood frog as a “walking liver.” When compared to wood frogs monitored in more moderate Midwest climates, the Alaskan frogs had three times as much glycogen. [4] While Darwin’s Galapagos finches provided a hint of adaptations for survival, Alaskan wood frogs offer a compelling confirming case study.

The actual mechanism employed not only by wood frogs but also by spring peepers, gray tree frogs, and chorus frogs to revive after freezing solid (heart stoppage and breathing cessation) is now understood to involve both glycerol and glucose in addition to some specialized proteins. Glycerol lowers the freezing point of water to protect membranes from freezing, just as it does for automobile cooling systems. Glucose in high concentrations prevents the formation of ice crystals inside cells. Ice crystals are like small daggers, shredding cell membranes and wreaking havoc with organelles. This is why freezing is normally lethal to animals and why frozen vegetables that are not dehydrated turn to mush when defrosted. When a frog senses first frost, adrenaline is released to convert liver glycogen into blood glucose. This is the same mechanism that provides energy for fight or flight (and freeze in frogs); it originates in the amygdala, the brain region that triggers the sympathetic nervous system’s immediate response to emergencies. The difference with wood frogs is magnitude. Human glucose ranges from 90 to 100 milligrams per deciliter with a diabetic threshold at 200 mg/dl. Frogs boost their glucose to as high as 4,500 mg/dl, well above lethal levels for humans and probably for just about every other living thing. The specialized proteins act as ice nucleation sites outside the cells, where about 65 percent of total body water ends up frozen. [5] Cryobiology may well be the next frontier in the quest for life everlasting if the lessons learned from wood frogs can be mastered. [6]
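For readers who prefer arithmetic to analogy, the scaling implied by those figures can be sketched in a few lines. The conversion from milligrams per deciliter to millimoles per liter uses the standard factor of about 18 for glucose, which is textbook chemistry rather than anything drawn from the frog studies cited here.

```python
# Rough scaling of the glucose figures cited above. The ~18 mg/dl per
# mmol/L conversion (glucose molar mass ~180 g/mol) is standard
# chemistry, not taken from the sources.

MG_DL_PER_MMOL_L = 18.0

human_normal = 100        # mg/dl, upper end of the normal range cited above
diabetic_threshold = 200  # mg/dl, the diabetic threshold cited above
frog_peak = 4500          # mg/dl, peak concentration in a freezing wood frog

print(f"Frog peak vs. normal human:       {frog_peak / human_normal:.0f}x")
print(f"Frog peak vs. diabetic threshold: {frog_peak / diabetic_threshold:.1f}x")
print(f"Frog peak in SI units:            {frog_peak / MG_DL_PER_MMOL_L:.0f} mmol/L")
```

The output, roughly 45 times normal human levels and about 250 mmol/L, underscores why such concentrations would be unsurvivable for anything without the frog’s cellular safeguards.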

A male wood frog grips a female in amplexus amid fertilized eggs.

The spring thaw fills vernal pools with the cacophony of male wood frogs courting, a behavior known as explosive breeding. As the amphibian exemplar of the early bird getting the worm, the quest for sex begins in early March, even before wet areas are free of ice. Filling the air with their duck-like quacking, male wood frogs frenetically search for something to mate with, not infrequently grasping other males and even other species, including large salamanders. The tenacious grip is called amplexus, aided and abetted by swollen thumbs and expanded foot webbing that won’t let go. [7] It is necessary because females are generally larger than males and slimy frogs are slippery. Mating success of male wood frogs depends on physical size, one of nature’s enduring correlations. It is also true that larger females are more likely to mate, as size in this case correlates with the number of eggs produced. After an embrace that can last for over an hour for egg fertilization, the female deposits as many as 3,000 eggs in a gelatinous, globular mass about four inches in diameter. After a time, the ball flattens and collects algae for disguise as pond scum. One month after oviposition, the eggs hatch into aquatic tadpoles for the race against the clock to metamorphose into terrestrial wood frogs before the pool, which may be seasonal, dries up and they expire. Wood frogs can freeze, but their young need water. The odds are stacked against survival, but only one tenth of one percent of the eggs in the brood must reach adulthood for survival of the species, at least for those that are fittest. [8]
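A short worked example makes those odds concrete, using only the clutch size and survival fraction cited above; it is an illustration, not a figure from the references.

```python
# The survival odds cited above, worked out per clutch: one tenth of
# one percent of a roughly 3,000-egg mass reaching adulthood.

eggs_per_clutch = 3000
survival_fraction = 0.001   # one tenth of one percent

survivors = eggs_per_clutch * survival_fraction
print(f"Adults per clutch needed to sustain the population: ~{survivors:.0f}")
```

Roughly three froglets per mass of thousands is all that natural selection requires.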

Amphibians first appeared in the Devonian Period about 400 million years ago as something like a walking fish and have never broken free from their aquatic “roots” even as evolution has run its course. True frogs of the family Ranidae, which do not appear in the fossil record until 57 million years ago, are long-legged, narrow-waisted, and web-footed with horizontal pupils; they include wood, green, and bull frogs. Since their origination occurred after the breakup of the supercontinent Pangaea, global dispersal required continent jumping. DNA assessment of 82 Ranidae species revealed that the North American clade of true frogs came from East Asia, hopping across Beringia and spreading across the New World by 34 million years ago. The first genetic split of the true frogs that spread out in North America was the mutation that became the wood frog, suggesting a significant adaptation. [9] Are wood frogs still evolving? The short answer is yes, because everything does, including humans. It is just too slow to notice.

Amphibians are the proverbial “canary in the coal mine” when it comes to planet Earth. They need clean water, because they are aquatic for at least a portion of their life cycle, and clean air, because we all do. Wood frogs offer a case in point. With the climate getting warmer and not colder, ice survival may not have quite the same importance in the future. One study found that pond temperature had a marked effect on wood frog tadpole development time: those in colder ponds grew faster. Conversely, warmer water not only slowed tadpole growth but also evaporated more quickly. Rising ambient temperatures will thus reduce the chances for slower-growing tadpoles to metamorphose into lunged froglets before their pools dry out from accelerated desiccation. [10] On the other side of the survival ledger, empirical data from the beginning and end of the last century revealed that temperatures had risen about 3°F and that male wood frogs were calling for mates about two weeks earlier. Earlier conception would then allow more time to hatch and grow before the pools dry. [11] Given their evolutionary success over the last 34 million years, it is reasonable to conclude that Rana sylvatica is more likely to survive climate change than Homo sapiens, a species that has been around for less than one million.

References:

1. Heinrich, B. Winter World, Harper-Collins, New York, 2003, pp 169-175.

2. Schmid, W. “Survival of Frogs in Low Temperature” Science, 5 February 1982,  Volume 215, Issue 4533  pp. 697-698.

3. Pennisi, E. “How to Freeze and Defrost a Frog”, Science, 8 January 2014.

4. Servick, K. “The Secret of the Frozen Frogs”  Science, 21 August 2013.

5. Heinrich, op cit.

6. Costanzo J et al  “Survival mechanisms of vertebrate ectotherms at subfreezing temperatures: applications in cryomedicine”. The FASEB Journal. 1 March 1995 Volume 9 No. 5 pp 351–358.

7. https://www.nasw.org/users/nbazilchuk/Articles/wdfrog.htm    

8. https://animaldiversity.org/site/accounts/information/Rana_sylvatica.html

9. Yuan, Z. et al. “Spatiotemporal diversification of the true frogs (genus Rana): A historical framework for a widely studied group of model organisms”. Systematic Biology, Volume 65, Issue 5, 10 June 2016, pp 824–842.

10. Renner, R. “Frogs not croaking just yet” Science, 12 May 2004.

11. Wong, K. “Climate Warming Prompts Premature Frog Calls” Scientific American, 25 July 2001.

Greenshield Lichens

Rock Greenshield Lichens decorate the winter snowscape.

Common Name: Rock Greenshield Lichen – The rosette shape is like a rounded shield and is greenish gray in color ― a green shield found almost exclusively on rocks. Lichen has an obscure etymology but may derive from the Greek word leichein which means “to lick” just as it sounds. There is no extant clue for this association as very few lichens are eaten (and thus licked). Some, like this species, have small lobes that could be a metaphor of sorts for little (leichein) tongues. The Common Greenshield Lichen is found mostly on trees.

Scientific Name: Flavoparmelia baltimorensis – Parmelia is Latin for shield, the genus that was used broadly for all shield-shaped lichens until 1974, when it was subdivided. Flavo as a prefix means yellow, distinguishing these lichens from the blue tint of other shield lichens … yellow hues combine with blue so that the overall effect is green. This species was first classified from a Baltimore specimen, giving rise to the familiar epithet.

Potpourri: The rock greenshield lichen and its virtually indistinguishable cousin the common greenshield lichen (F. caperata) are encountered clinging to a substrate of rock or wood along almost any trail. In the winter months, when deciduous trees are devoid of greenery and the mostly annual undergrowth has died back, only the grays and browns of rocks, dirt, leaf litter, and boles remain. The exceptions are the greenshield lichens that spread their leaflike (and tongue-like) lobes outward and onward, oblivious to the reduced light and frigid temperatures by which the rest of the forest is constrained. Their persistence is testimony to the lichen lifestyle, one of the natural world’s wonders. Composed of a fungus that has partnered with one or more organisms from a different kingdom, the 14,000 identified lichens have mastered the art of survival in the most inhospitable of habitats, from hot, dry desert to frozen tundra. They are even found on Mount Everest at elevations exceeding seven kilometers. [1]

According to the International Association for Lichenology, a lichen is “an association of a fungus and a photosynthetic symbiont resulting in a stable vegetative body.” The fungal partner is called the mycobiont and constitutes about 95 percent of the lichen body structure, or thallus. Since fungi are heterotrophs and therefore cannot make their own food, they must rely on autotrophs that photosynthesize the sun’s energy to produce the nutrients necessary for growth and reproduction. Some fungi consume dead plants as saprotrophs, some parasitize living organisms, and some connect to living plant roots in a mutually beneficial association called mycorrhizal (fungus root). Lichenized fungi evolved a relationship with photosynthesizing organisms that falls into the category of symbiosis, defined as an intimate relationship between two living things. The photosynthetic partner of the lichenized fungus is called the photobiont and can be a green, brown, or golden alga or a cyanobacterium, a type of chlorophyll-containing bacterium formerly called blue-green algae. Algae is now a broad, non-technical name for several types of polyphyletic photosynthesizing eukaryotes, which is all that matters to the fungal partner. The photobiont for greenshield lichens is a green alga in the genus Trebouxia, the most common photobiont for all lichens. [2]

The relationship between the fungus and the algae in a lichen is complex. Traditionally the symbiosis of lichens has been characterized as mutualism, in which both partners benefit equally. In reality, the relationship frequently ranges from commensalism, where the fungus benefits but the algae do not, to outright parasitism, where the algae are harmed for the benefit of the fungus. Some insight into the living arrangements is afforded by the observation that the lichen’s fungi need the algae but not vice versa. That is to say that none of the lichen-forming fungi, comprising almost half of the ascomycetes, the largest division of the Fungi Kingdom (mushrooms are in the other large division, the basidiomycetes), exist in nature without algae, whereas the algae can and do lead independent lives on their own. However, having a place to live with enough water and air for photosynthesis to make carbohydrates and respiration to oxidize them for energy (both plants and fungi need to breathe) is certainly an algal advantage. It is at the cellular level that the controlling dominance of the fungus can become sinister. The root-like tendrils of the fungus, called hyphae, surround and penetrate the algal cells, releasing chemicals that weaken the surrounding membrane so that the carbohydrates leak out, feeding the fungus. Weaker algal cells thus violated die, and were it not for periodic reproduction, so too would the lichen. [3] A lichen has been described as a fungus that discovered agriculture, an apt aphorism. The fungus uses the algae for subsistence in like manner to a farmer tending fields to extract their bounty ― it would be nonsensical to assert that farmers and soybeans therefore benefit mutually in symbiosis.

Lichen reproduction is also complicated, as it involves two different species that must reproduce independently and then come into close contact to form a union. While this union must have occurred at least once for any lichen to exist, even a rare event becomes likely over millions of years of geologic time. The mycobiont, in this case Flavoparmelia baltimorensis, produces reproductive spores in a fruiting body called an apothecium in a manner analogous to the gills of mushroom fruiting bodies. The photobiont, in this case Trebouxia, also reproduces using spores when it is independent of the fungus, but reproduces only asexually once lichenized. Apothecia are very rarely seen on greenshield lichens, direct evidence that, like most lichens, they have no pressing need for reproductive spores. Since they are abundantly distributed and can on occasion cover vast swaths of boulder fields (F. baltimorensis) and exposed wood surfaces (F. caperata), it is evident that there is a successful reproductive workaround. In general, this consists of a lichen forming a detachable unit that includes both the fungus and its algal partner for windborne distribution to new locations. These “lichen seed packets” take various forms, including soredia, which are minuscule balls of fungal hyphae surrounding a few algal cells, and schizidia, which are simply flakes of the upper layer of the fungal thallus that also contain the algal layer. One way to tell rock and common greenshield lichens apart is that F. baltimorensis has schizidia and F. caperata has soredia. However, identifying small irregular components on the gnarled surface of a lichen is a challenge even for a lichenologist with a lens. It is much easier to identify the substrate, rock or tree, and look for the corresponding lichen.

Greenshield lichens often cover broad expanses of rock and tree surfaces to the extent that long-term effects come into question. Do lichen-covered rocks disintegrate at an accelerated rate? Do trees weaken due to the amount of bark covered by lichens? For the most part, lichens are self-sustaining in the sense that the heterotrophic fungus is supplied nutrients by the autotrophic algae. While sunlight and water are the essential ingredients for photosynthesis, nitrogen, phosphorus, and potassium are also required for plant growth (the three numbers on a fertilizer bag refer to these elements). It is less well known that fungi need these same nutrients for the same metabolic reasons. [5] In many cases, lichens are able to get all of the nutrients they need from minute amounts dissolved in water. This dependence on the quality of precipitated rainwater is why lichens are useful for environmental monitoring; their growth correlates with air quality. The two main substrate characteristics associated with lichen growth are moisture retention and exposure to sunlight. For lichens growing on exposed tree bark, the degree to which moisture is retained as it flows down the tree is the key factor. While it is true that the lichen will “rob” some of the nutrients that would otherwise go to the tree roots, the amount is negligible. Deciduous trees have more lichens than conifers because their leafless trunks are sunlit for six months of the year whereas evergreens are ever shaded. Rocks are not good at retaining moisture. Consequently, lichen hyphae penetrate rock surfaces to depths of several millimeters seeking water and, depending on the type of rock, minerals as well. This contributes to the long-term weathering of rocks for soil formation, and more broadly to the million-year geologic cycle of mountain building and erosion. The answers to the two questions are yes, lichens do disintegrate rocks at a geologic rate, and no, lichens do not harm trees ― they are sometimes called epiphytes for this reason.

Common Greenshield lichens do not harm the trees they use for support.

Chemistry is another important aspect of lichen physiology. More than 600 unique compounds are concocted by lichens in surprisingly large quantities … up to five percent of total body weight. It is instructive to note that when lichenized fungi are artificially grown without algae in a laboratory, chemical output is negligible. This strongly suggests that specific chemicals promote the associative nature of the individual lichen species. There are any number of hypotheses that might explain this. Bitterness as deterrence to animal browse is certainly one possibility, as lichens grow quite slowly on exposed surfaces and are easy to spot. However, some lichens, notably reindeer moss (Cladonia rangiferina), are a major food source for animals and are quite likely propagated in their droppings. It is also believed that some chemicals coat sections of hyphae to provide the air pockets necessary for photosynthesis by the algae. The chemical footprint of a lichen species is one of the main diagnostic tools used in field identification. Lye, bleach, and several other reagents are dripped onto the surface; a change in color indicates the presence of a specific chemical that is related to a specific lichen. [4] There are many unknown aspects of lichen physiology. This was made manifest recently when it was discovered that many lichens contain a type of basidiomycete yeast (also a fungus), embedded in the body of the ascomycete fungus in varying concentrations that correlate with anatomical differences. Some if not all lichens may actually consist of two fungi and an alga or two, a far cry from simple symbiosis. [6] The function of the yeast is not yet known.

The Flavoparmelia genus was separated from the other Parmelia (shield) lichens in 1986 in part due to their production of the chemical compound usnic acid. [7] It is a large molecule with the formula C18H16O7, which simplifies the recondite but internationally recognized IUPAC name 2,6-Diacetyl-7,9-dihydroxy-8,9b-dimethyldibenzo[b,d]furan-1,3(2H,9bH)-dione. Usnic acid is found primarily in the top layer of the fungus, along with another chemical called atranorin, just above the area where the algal bodies are concentrated. It is surmised that they contribute to shielding the green algae from excessive sunlight, since too much bright sun is inimical to photosynthesis, the source of all lichen energy. Usnic acid is also a potent antibiotic, collected primarily from Usnea or beard lichens due to their higher concentrations, for use as an additive in commercial creams and ointments. Flavoparmelia caperata is one of several lichens that have historically been used by indigenous peoples as a tonic taken internally or as a poultice applied to a wound. [8] The medicinal uses of lichen fungi should come as no surprise, as many polypore-type fungi growing as brackets on tree trunks have been used medicinally for millennia. The abundance of rock and common greenshield lichens is evidence of successful adaptation. In addition to thriving on bountiful rock and wood surfaces, the chemical shield screens sunlight to protect the green algal energy source and guards against assault by microbes and mammals. In other words, they are literally green shields.
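As a small aside for the chemically curious, the molar mass implied by the C18H16O7 formula cited above can be checked with a few lines of arithmetic; the atomic weights used are textbook values, not drawn from the lichen references.

```python
# Quick check on the usnic acid formula cited above (C18H16O7),
# computing the molar mass it implies from standard atomic weights.

atomic_weight = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol
formula = {"C": 18, "H": 16, "O": 7}

molar_mass = sum(atomic_weight[el] * count for el, count in formula.items())
print(f"Implied usnic acid molar mass: ~{molar_mass:.1f} g/mol")  # ~344.3 g/mol
```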

Carl Linnaeus assigned lichens to the class Cryptogamia, meaning “hidden marriage” in reference to their concealed means of reproduction, along with everything else that created spores and not seeds. [9] One of the more enduring lichen secrets is how and when the coalition between fungi and algae began. It is widely accepted that simple replicating organisms started out in aqueous habitats, as water affords bodily support and nutrient transport. The transition from sea to shore would have been nearly impossible for an alga with no structure or a fungus with no food. There is good reason to suppose that some form of union like a lichen may have come about by chance and was then promoted by survival. Scientific research over the last several decades has cast some light into the dark shadows of this distant past. What look like lichen hyphae embedded in the soil around fossils from the pre-Cambrian Ediacaran Period (635-541 million years ago) suggest that lichens may have been the first pioneers on dry land. [10] This is supported by the finding that marine sediments from this same period contain not only the root-like hyphae of fungi but also the rounded shapes of blue-green algae, or cyanobacteria. This suggests that something lichen-like started out in the water and was left high and dry on a tidal flat to make the critical transition. [11] However, recent DNA analysis of primitive ferns and lichenized fungi revealed that lichens evolved 100 million years after vascular plants. [12] Lichenology, like all science, is a continuum that never ceases in its quest for knowledge. Future field tests and experiments are certain to clarify the origin story.

References

1. Kendrick, B. The Fifth Kingdom, 3rd edition, Focus Publishing, Newburyport, Massachusetts, 2000, pp 118-125.

2. Brodo, I., Sharnoff, Steven and Sylvia. Lichens of North America Yale University Press, New Haven, Connecticut, 2001. pp 1-112, 316-317, 479-484.

3. Wilson, C. and Loomis, W. Botany, 4th edition, Holt, Rinehart, and Winston, New York, 1967, pp 451-453.

4. Brodo, op. cit.

5. Kendrick, op. cit. pp 142-158.

6. Spribille, T. et al “Basidiomycete yeasts in the cortex of ascomycete macrolichens” Science Volume 353 Issue 6298, 21 July 2016, pp 488-492.

7. Hale, M. “Flavoparmelia, a new genus in the lichen family Parmeliaceae (Ascomycotina)”. Mycotaxon, Volume 25, No. 2, April-June 1986, pp 603–605.

8. Brodo, op. cit.

9. Linnaeus, C. Species Plantarum. Vol. 2. Stockholm: Impensis Laurentii Salvii.1753. p. 1142

10. Frasier, J. “Were Weirdo Ediacarans Really Lichens, Fungi, and Slime Molds?” Scientific American 13 December 2012.

11. Yuan, X. et al “Lichen-Like Symbiosis 600 Million Years Ago” Science  Volume 308, Issue 5724, 13 May 2005, pp 1017-1020

12. Frederick, E. “Hardy lichens don’t actually predate plants” Science 20 November 2019

White-tailed deer

A white-tailed deer doe evidently not alarmed by a proximate hiker

Common Name: White-tailed deer, Virginia deer, Whitetail – The etymology of deer extends to the origins of Indo-European languages in the Sanskrit dhvamsati, meaning “he falls to dust” (perhaps to indicate mortality). As new languages arose according to human migration and custom, the root was modified … in Old Norse, dyr meant wild animal or beast. The interpretation of deer as any animal, such as in Shakespeare’s King Lear, Act III, Scene IV – “But mice and rats, and such small deer” – dropped out of common usage long ago. A “modern” deer is any ruminant animal of the family Cervidae. The white underside of the tail is exposed by raising it erect as a warning signal to its cohorts as the deer flees from a perceived threat.

Scientific Name: Odocoileus virginianus – The genus name is from the Greek odon meaning tooth and koilos meaning hollow due to the pronounced indentations in the crowns of the molar teeth, prominent in herbivores. The species was first classified in Virginia.

Potpourri: The life and times of the white-tailed deer from bust to boom in the 20th century affords a case study in adaptation to human habitats. It is one of the species that have earned the coined scientific sobriquet synanthrope, applied to any wild species that lives near and benefits from its association with humans ― like pigeons and possums, they are partners of the Anthropocene. For those who have been around for a half century or so, the renaissance of the deer is something of a miracle. About the only way for an east-coaster to see deer around the middle of the last century was to go “deer stalking” in Pennsylvania’s Poconos, driving around the mountains at dusk hoping for the rare treat of seeing even one. The pre-Columbian deer population is estimated to have been about thirty million, held in check by Native American hunters who burned the forest underbrush in part to aid in their quest and by marauding wolfpacks seeking the young and weak. Everything changed with the onslaught of Europeans moving into a wilderness they believed held endless resources, until it was no longer wild and the resources were no longer endless. By 1900, the North American deer population had plunged to about 500,000, and deer had been largely extirpated in many states in New England and the Midwest. The tide turned with the passage of the Lacey Act proscribing the sale of wild animals, ending practices such as using deer hides as currency. [1]

After World War II, GI Bill-educated veterans moved away from farms to work in cities, settling in suburban developments pioneered at Levittown near Philadelphia. The economics of farming was meanwhile transformed in favor of larger, consolidated fields. Many small farms became fallow as forest succession restored the original woodland habitats that once predominated. The combination of suburban housing tracts abutting deep-wood sanctuaries reconstituted the sustaining and nurturing habitats and enabled the renaissance of the white-tailed deer. The absence of wolf predation and the diminution of hunting by humans removed the check that had once balanced burgeoning deer populations. The phenomenal return of the white-tailed deer that was once a symbol of ecological restoration has become dystopian due to their resurgent numbers. The deer-human relationship has come under strain due to crop damage, deer/car collisions, landscape plant damage, Lyme disease epidemiology, and, most importantly, ecosystem imbalance. These five friction factors all arise from one of two innate and irrevocable deer behaviors: eating and roving. Remediation requires reducing the deer population to what is sustainable in a specific area. This can be done locally with barrier fencing, but fencing is impractical for large fields and local roads, and for the most part it only moves the problem to another location, since deer are both mobile and resourceful. Restoring predator populations would also be effective, but the acceptability of wolves lurking in the woods is contrary to an understandable Little Red Riding Hood mentality. The only action that can be controlled and widely implemented is to resort to human predation, euphemistically called culling the herd. [2]

An understanding of deer behaviors associated with browse foraging and procreative roving is necessary to appreciate both the nature of the overpopulation problem and the most effective way to resolve it. White-tailed deer are hooved ruminant animals in the Family Cervidae (Latin for deer), which also includes elk, caribou, and moose. There are 17 recognized geographically dispersed subspecies based on minor differences in physiological traits, including the endangered Key deer of south Florida and the threatened Columbian white-tailed deer of the Pacific Northwest. However, since DNA testing shows no distinctions, there is scant justification for retaining the regional variants. The closely related mule deer (O. hemionus), endemic to western North America, is a separate species that can and does hybridize with white-tailed deer, even though it looks much like its cousin apart from marginally longer “mule-like” ears. Depending on the species, males are called stags, harts, bucks, or bulls, females are called hinds, does, or cows, and the young are called calves or fawns. The most recognizable and unique of cervid features are the osseous antlers that rise from the male (and female caribou) skull like a crown of bony spears, nature’s most magnificent weapons.

The symbolism of antlers permeates the cultural history of the hominids who hunted the animals that bore them. From the cave paintings of Chauvet in France to the heraldic coats of arms of medieval Europe, antlers are a metaphor for strength and courage. Hung over the fireplace of the iconic hunting lodge, they are meant to confer honor on the hunter, even if the hunt was less than honorable. Antlers are the weapon that one buck wields against an opponent in contesting for conjugal rights, a survival-of-the-fittest ordeal of the highest order. Success depends on a combination of bulk strength and antler effectiveness. Size matters, and the winner spreads the genetic heritage that will produce even larger antlers with more projections. Cervid buck antlers can extend up to four feet upward and arch backward halfway across the back. The ramose headgear is an impressive feat of physiology that seems impossible as an annual event, but antlers are shed every winter and regrown the following spring. [3] Growth at a rate of up to one inch per day to produce 20 pounds of bone tissue is necessary to complete the process in time for the rut. Aside from doubling the buck’s energy needs over baseline, antlers require calcium and phosphorus that must be drawn from existing bone, resulting in seasonal osteoporosis. That this is a truly remarkable trait is supported by genomic analysis, which suggests that it arose from a single evolutionary mutation in an ancestral cervid. Antlerogenesis, as the process is now named, offers potential insights into tissue generation that could plausibly be used to produce bone tissue prostheses for amputees. [4]

The deer life cycle starts in early summer, when does give birth to one or two fawns, although quintuplets have occurred. Fawning areas are selected for the safety of the suckling fawns as they gain a half pound a day to triple their weight in a month. During the first few postpartum weeks, the doe keeps the fawns hidden from predators by leaving them separately in dense undergrowth while foraging. The fawns remain immobile, even withholding their feces and urine until the doe returns and ingests the waste to eliminate any vestige of telltale scent. Even with this exceptional maternal care, forty percent of fawns succumb to a variety of mishaps and maladies, the majority to coyote predation. Leaving the palladium of the fawning area, the extended family of does, their fawns, and female offspring from previous years assembles into groups of as many as twelve deer. Moving about together in a home range averaging one square mile, they forage in the crepuscular light of dusk and dawn. Ruminants like deer have a four-chambered stomach that allows them to digest almost anything. Food travels to the rumen, which contains bacteria that break down the vegetation. The reticulum circulates the food back to the mouth as cud to be chewed again, after which the omasum pumps the food to the abomasum to complete the process. [5] The deer diet is therefore diverse, consisting of over 600 plant species broken down according to type: 46% browse (sedges, shrubs, and trees); 24% forbs (herbaceous flowering plants); 11% mast (nuts and berries); 8% grass; 4% agricultural crops; and 7% other (like fungi and lichens). These wide-ranging menu options provide the adaptability to ensure deer species survival during weather and climate vagaries. However, although deer are consummate herbivores, they have preferred foods, as all animals do. Crops and mast top the list whenever available, and forbs trump browse as provender. Soybean scavenging in farmed fields and begonia bodegas in suburban gardens are an integral part of the deer population debate.
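The growth figures invite a quick consistency check; the birth weight below is derived from the numbers above rather than stated in any of the references.

```python
# Consistency check on the fawn growth figures cited above: a gain of
# about half a pound per day while tripling in weight over a month.
# The implied birth weight is an illustration derived from those
# figures, not a value taken from the sources.

gain_per_day_lb = 0.5
days = 30

total_gain = gain_per_day_lb * days   # ~15 lb gained in the first month
# Tripling means final = 3 x birth weight, so the gain equals 2 x birth weight.
implied_birth_weight = total_gain / 2       # ~7.5 lb
implied_month_weight = 3 * implied_birth_weight

print(f"Implied birth weight:     ~{implied_birth_weight:.1f} lb")
print(f"Implied one-month weight: ~{implied_month_weight:.1f} lb")
```

The two figures hang together, implying a newborn of roughly seven to eight pounds growing to over twenty.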

A newborn fawn left by its foraging mother in a secluded area.

Male deer separate from the maternal group when they become yearlings and form loose-knit, unrelated bachelor groups of as many as five individuals. As the summer wanes and autumn colors appear, bucks ready for the annual procreative ritual by prepping antlers. Growing from a skull projection called a pedicel, the antlers are at first covered with a nurturing blanket of nerves and blood vessels with the texture of velvet. At maturity, the covering falls away in shredded tatters that are scraped off by rubbing the antlers against a tree. These “buck rubs” impart a glandular secretion to mark out the buck’s home turf. In the late fall, nominally November, the rut race begins with bucks setting the pecking order by locking antlers and pushing one against the other like furry Sumo wrestlers until brute force prevails; finesse has nothing to do with it. Bucks attract females by creating a scrape, a two-foot-diameter area cleaned of underbrush to expose the bare earth, to which generous amounts of the buck’s urine are applied. When a doe enters one of up to seven estrous periods that last for one to two days every month, pheromones are exuded from glands on the inner side of the back legs at the knee joint. Drawn by the scent of the buck rubs and marked scrapes according to primordial evolutionary attraction, the doe urinates there to provide a beacon for the dominant buck to follow. A successful buck may mate with as many as twenty does, fighting off competitors whenever challenged. The resultant fawns emerge in the spring to complete the cycle. With ample food and moderate weather conditions, deer populations double every two to three years. [6] Because does are driven during the rutting season to roam independently at speeds that can reach 35 mph, most collisions occur in November along rural roads.
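A brief calculation shows what that doubling time implies, reusing the roughly half-million deer of 1900 and the roughly thirty million of today cited elsewhere in this account; actual growth is, of course, checked by habitat, predation, and hunting.

```python
import math

# How quickly an unchecked herd could rebound at the doubling time
# cited above. Start and end figures reuse the ~500,000 deer of 1900
# and the ~30 million of today mentioned elsewhere in this article;
# this is an illustration, not a population model.

start_population = 5e5   # ~500,000 deer around 1900
today_population = 3e7   # ~30 million deer today

doublings = math.log2(today_population / start_population)
print(f"Doublings required: {doublings:.1f}")
print(f"Years at a 2-year doubling time: ~{doublings * 2:.0f}")
print(f"Years at a 3-year doubling time: ~{doublings * 3:.0f}")
```

Roughly six doublings, or only twelve to eighteen years of unchecked growth, would span the entire twentieth century rebound; the actual recovery took decades because the checks were never entirely absent.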

Two bucks browse together until mating season begins.

Returning to the five major points of friction between deer populations and human activity, long-term ecosystem deterioration due to deer overbrowse and traffic fatalities due to deer collisions stand out as exceptional. Tick infestation, garden grazing, and crop consumption are relatively minor issues in almost all cases. Lyme disease is a tick-borne infection with severe medical consequences if left untreated. Deer are one of the hosts of the black-legged tick, which carries the disease’s bacterial agent, Borrelia burgdorferi, a resident parasite of white-footed mice. The rising incidence of Lyme disease over the last several decades correlates with the increasing number of deer. However, correlation is not causation, and numerous field studies have been completed in the last thirty years to separate scientific fact from emotive fiction. In one New Jersey study, the deer population was reduced by half from 45.6 to 24.3 deer per km² over a three-year period without any appreciable change in the tick population, except that it increased in the second year. [7] In Connecticut, the epicenter of Lyme disease named for one of its towns, a more robust venery reduced the deer population to 5.1 deer per km², corresponding to an 80 percent reduction in reported disease incidence. [8] A study in Massachusetts by Harvard researchers was based on data from a deer culling operation carried out from 1983 to 1991 that reduced the deer population from 400 to 100 without any decrease in Lyme disease incidence. The analysis of this data considered the population dynamics of mice, ticks, and deer, concluding that to have an appreciable effect, the deer population would need to be reduced to 0.07 deer per km². [9] It is reasonable to assert from these examples that deer do contribute to the number of diseased ticks and therefore to the frequency with which ticks infect humans. However, effecting a measurable diminution in Lyme disease incidence would necessitate the near extirpation of white-tailed deer, which is not an option.

Deer consumption of plants grown by exurban homeowners to landscape an expanse of manicured grass is a nuisance at worst. Garden center horticulturalists are generally competent in recommending plants that have evolved natural repellents like thorns or bitter leaves. Even deer delicacies like young tree saplings can be deer-proofed with chemical sprays that mimic predator smells, protected by netting and fencing, or surrounded by metallic, wind-powered “scare deer” devices. Deer consumption of plants grown by farmers to feed cattle for beef and of corn for hogs and ethanol is another matter; agroeconomics and food security outweigh lawn aesthetics. The economic loss to farmers due to wildlife in the United States is approximately $4.5 billion annually, about one percent of the annual $450 billion Gross Cash Farm Income (GCFI) tallied by the USDA. While measurable, this is hardly catastrophic. The particular crop eaten and the specific animal snacking on it vary significantly by locality, due primarily to the extent to which fields abut woodland wildlife refuges. Indiana provides a good benchmark, as it is 65 percent farmland with average farm income. An independent study recently conducted by Purdue University provides quantitative data concerning wildlife losses to corn and soybeans, its major crops. White-tailed deer, raccoons, and groundhogs were the main culprits, with rabbits, squirrels, and wild turkeys contributing at the low end of the scale. Raccoons were responsible for 87 percent of corn predation, eight times more than deer. Deer and groundhogs were the primary soybean consumers. Questionnaires filled out by farmers provided the interesting feedback that, while they thought deer were mostly responsible for all damage, only one in five considered deer a nuisance. Economically, most fields surveyed had damage of less than $100, and even the most damaged fields did not exceed $500. Indiana, like most states, strikes a delicate balance by allowing farmers to kill deer only if they submit proof of excessive damage ($500 in this case). [10] This is a reasonable tradeoff between wildlife survival (it is their land too) and some degree of control.

Deer collisions, particularly when fatal, are surely consequential. While official numbers are approximate, insurance claims provide a good surrogate. According to the Insurance Institute for Highway Safety, there were 185 deaths due to deer collisions in 2019, one half of one percent of the 36,120 automobile fatalities reported by the National Highway Traffic Safety Administration. State Farm Insurance reported 2.1 million animal collision insurance claims nationwide from July 2020 to June 2021, of which two thirds involved deer. The highest incidence was in West Virginia and the lowest in Washington, DC. [11] The only three ways to deal with this problem are to change driver behavior, change deer behavior, or reduce the deer population. Deer crossing warning signs are an attempt to address the first, but these are now so common that they are largely ignored. Driving slower at dusk on rural roads from October to December is good practical advice, but so is not driving drunk. Deer behavior controls consist of fencing and wildlife passage corridors along and around major highway routes, but these are impractical on side roads. [12] Evolution may play a role, as deer have been observed stopped at the side of the road waiting for traffic to pass, a trait that would be passed along as those that bound ahead are removed from the gene pool. Reducing the deer population is the only option that remains.

Synanthropic animals impose a burden on ecosystems comparable to that of invasive plants. Both result directly from the gargantuan footprint of almost 8 billion humans. People transport plants (and animals like pythons in the Everglades) both wittingly and unwittingly from places where they evolved to places where they didn’t, which frequently means that there is no natural way to stop their invasion. Synanthropes mostly do what they have always done, taking advantage of those human landscapes and structures to which they are suited. Pigeons nest on tall city buildings as they would on rocky cliffs, Canada geese throng on golf courses as they would in grassy wetlands, and deer browse along housing developments as they would in wooded glades. From the ecological standpoint, deer are by far the most insidious in subverting the entire process of forest succession. All trees eventually die and must therefore be replaced by saplings grown from the seeds that they dropped. Deer browse removes the shoots at ground level so that there are no saplings to succeed … ultimately, there would be no forest. The disruption of natural ecosystem processes has cascading effects due to the complexity of association. One study compared the bird populations on islands with no deer to those with deer and found a 55 to 70 percent reduction (depending on species), with the largest reductions in birds that depend on forest floor plants. [13] To make matters worse, deer removal of native plants from the understory creates disturbed areas that are havens for weedy invasives. Forty areas fenced off from deer for three years in Pennsylvania and New Jersey had fewer invasives than the surrounding woods. [14] Since deer are considered ecologically excessive in 73 percent of their North American range, the implications are clear. Something must be done.

Deer population control has always been a contentious issue. The “Bambi effect” predisposes many to decry killing altogether. Disney’s namesake fawn barely survives his mother’s death at the hands of hunters and the raging fire spread from their untended campfire, eventually succeeding his father as prince of the forest. There are humane methods to reduce the fawn count, but these are labor intensive and therefore expensive. Contraceptives in the form of vaccines that cause antibodies to block pregnancy or inhibit reproductive hormones must be periodically administered, practically impossible for roving herds. Tranquilizing and sterilizing deer is a better option since it is permanent. Doe sterilization has been unsuccessful since bucks continue to seek out and impregnate those still in estrus. A program to sterilize bucks on Staten Island by performing five-minute vasectomies has reduced fawn births by 60 percent and the deer population by one fifth at a cost of $6.6 million. [15] This would be unaffordable at the scale needed to deal with the 30 – 40 million deer wandering around the continent. Most people are now willing to accept the inevitable and condone managed deer hunts. A legitimate rationalization is that humans have preyed on ruminants like deer for at least as long as they have made cave drawings depicting the hunt. The U. S. National Park Service has established a deer culling program to “protect and restore native plants, promote healthy diverse forests, and preserve historic landscapes.” The 6,500 pounds of venison harvested from the parks in 2020 was donated to local food banks. [16] While some may still object to killing deer, it cannot be without a tinge of hypocrisy. The automated killing in slaughterhouses to produce hamburger from the flesh of another ruminant is accepted practice. How can hunting free-range ruminants be worse?

References

1. Jarvis, B. Animal Passions, The New Yorker, 15 November 2021, pp 38-44.

2. Swihart, R. K. and DeNicola, A.  Public involvement, science, management, and the overabundance of deer: Can we avoid a hostage crisis?  Wildlife Society Bulletin 1997 Volume 25  pp 382-387 

3. Emlen, D. Animal Weapons, Henry Holt and Company, New York, 2014 pp 2-4. 

4. Ker, D. and Yang, Y. “Ruminants: Evolutionary past and future impact” Science, Volume 364, Issue 6446, 21 June 2019, pp 1130-1131.

5. https://dnr.maryland.gov/wildlife/Pages/hunt_trap/wtdeerbiology.aspx       

6. https://www.fs.fed.us/database/feis/animals/mammal/odvi/all.html     

7.  Jordan, R. et al “Effects of reduced deer density on the abundance of Ixodes scapularis (Acari: Ixodidae) and Lyme disease incidence in a northern New Jersey endemic area” Journal of Medical Entomology, Volume 44 no. 5 September 2007 pp 752-757.

8. Kilpatrick, H, et al “The relationship between deer density, tick abundance, and human cases of Lyme disease in a residential community” Journal of Medical Entomology, Volume 51 no. 4 July 2014 pp 777-784.

9. https://www.hsph.harvard.edu/news/features/kiling-deer-not-answer-reducing-lyme-disease-html/    

10. McGowan, B. et al. “Corn and Soybean Crop Depredation by Wildlife” FNR-265-W, Department of Forestry and Natural Resources, Purdue University. June 2006. https://www.extension.purdue.edu/extmedia/FNR/FNR-265-W.pdf      

11. https://www.iii.org/fact-statistic/facts-statistics-deer-vehicle-collisions     

12. Hallock, Timothy J. Jr (2016) “The Effect of the Deer Population on the Number of Car Accidents,” Journal of Environmental and Resource Economics at Colby: Vol. 3 : Iss. 1 , Article 14. Available at: https://digitalcommons.colby.edu/jerec/vol3/iss1/14.   

13. Staedter, T. “Deer Decreasing Forest Bird Population” Scientific American, 31 October 2005.

14. Stokstad, E. “Double Trouble for Hemlock Forests” Science 19 December 2008.

15. Jarvis, op. cit.

16. Hedgepeth, D. “Deer cull set for parks in Md., W. Va.” Washington Post, 7 November 2021.

Great Falls at the fall line

The Great Falls of the Potomac River

Fall Line – A fall line is defined as a line of numerous waterfalls, such as at the edge of a plateau, where water flows over hard, erosion-resistant rocks onto a weaker substrate that washes away to create a vertical drop. It is also called a fall zone where it extends over a large geographic area.

Potpourri: Fall line waterfalls occur globally wherever the edge of a continental plate abuts the coastal plain formed on its perimeter from the sediments deposited by upland erosion. Examples include Africa, Brazilian South America, Western Australia, and the Indian subcontinent of Asia, where Jog Falls plunges 830 feet. The world’s most prominent and prevalent fall line is where the crystalline Appalachian Mountains meet the sedimentary coastal areas of eastern North America. From north to south, the major rivers that cross the fall line are the Delaware, Schuylkill, Patapsco, Potomac, James, and Savannah, originating to the west and north and flowing southeastward through the Appalachian Piedmont Province toward the Coastal Plain abutting the Atlantic Ocean. The Susquehanna River is not included because it is older than the mountains through which it passes. Because falling water blocks river traffic from moving between upstream and downstream, such junctures are a natural location for entrepôts where goods passing from the coast to the hinterland are traded for produce and raw materials. The rushing waters became the motive force for the mills and factories that formed the backbone of the nineteenth century Industrial Revolution in the United States. Ultimately, the cities of Trenton, Philadelphia, Baltimore, Washington, D.C., Richmond, and Augusta arose along the Atlantic Seaboard Fall Line; they are connected by US 1, the original north-south route, now paralleled by I-95. There are also many smaller rivers flowing generally eastward with towns of varying size at their fall lines, such as Wilmington, Delaware on the Brandywine River and Fayetteville, North Carolina on the Cape Fear River. [1]

The geography of the fall line is complicated. It has a north-south range of almost a thousand miles of locally diverse terrain encompassing an array of canals, dams, bridges, and harbors engineered over the centuries to move around and over the waterfalls. The colonial settlements of the coastal plain, reaching ever westward to penetrate the upland regions of the Appalachian Piedmont Province, had a major impact on its waterways, notably the Patapsco and Potomac River basins. Two years after King Charles I signed a charter for the heirs of the Catholic Lord Baltimore to establish a colony in honor of Queen Henrietta Maria, the Ark and the Dove anchored off an island in the Potomac to settle at a spot soon to be named St. Mary’s. The success of tobacco plantations over the course of the next fifty years encouraged population growth and attracted investors with venture capital such as Charles Carroll, a physician from Annapolis. In August 1729, the Maryland assembly passed an act providing for a town named Baltimore on land owned by Carroll at the mouth of the Patapsco. The location along the western shore of the Chesapeake Bay was the terminus of the falling waters of a Patapsco River tributary that ended in a sheltered cove ideal for a harbor. With a flour mill for grain brought downriver from the west and an iron mill to take advantage of the iron deposits and the abundant forests for charcoal fuel, Baltimore became established as a fall line city, shipping almost 2,000 tons of pig iron to England between 1734 and 1737. At about the same time, a village in Prince George’s County along the Potomac River became prominent as a mercantile location near what were called the Potomac Rapids. It was eventually named Georgetown. [2,3]

The Potomac River proved more consequential than the Patapsco due to its extensive drainage, emanating from the heart of the Appalachian Plateau in what is now West Virginia. The importance of linking the eastern seaboard with the western hinterlands was one of George Washington’s first and most enduring predilections. Having surveyed the extensive holdings of Lord Fairfax in western Virginia and operated on the frontier during the French and Indian War, Washington gained an appreciation for the geographic constraints of the colony. Emerging victorious and idolized for his pivotal role in the Revolutionary War, he returned directly to his grand design, setting off on 1 September 1784 to take stock of the western lands. [4] Based on his journey, he wrote a letter to Governor Harrison of Virginia arguing that it was necessary “to prevent the trade of the western territory from settling into the hands of the Spaniards or the British” and to “extend the navigation of the eastern waters; communicate them as near as possible with those which run westward.” The first challenge was to enlist the support of the legislatures of both Maryland and Virginia to overcome the jealousies and rivalries between the merchants of Baltimore and Georgetown. Appointed by the Maryland General Assembly in 1784 as chairman of a commission to investigate the feasibility of making the Potomac River navigable, Washington chaired a joint conference with Virginia that issued a resolution in favor of “removing the obstructions in the River Potomac, and the making the same capable of navigation from tide-water as far up the north branch of the said river as may be convenient and practicable.” The Patowmack Company was organized in 1785 to get around the fall line. George Washington was its president until 1792, when he resigned to start a new job with greater responsibilities. [5]

There were five sections of the Potomac River that were impassable to watercraft due to rapid currents, shallow depths, and waterfalls. Great Falls, the only true waterfall, and Little Falls, an area of rapids just upstream of Georgetown, were the only two that would require a series of locks and canal segments to bypass the river altogether. The other three, Seneca Falls on the Virginia side of the Maryland creek of the same name, and House and Shenandoah Falls near Harper’s Ferry, would require only the dredging of canal passages for deep-draft boat access. Even though the Patowmack charter provided “liberal wages” for “any number not exceeding one hundred good hands,” each provided with one pound of salt pork and three jills of rum daily, laborers were hard to find. The task of building multiple canals in five different locations proved to be daunting, with progress plagued by periodic flooding, delays in construction material supplies, and poor management. The five locks for the 80 feet of vertical elevation change on the canal passage around Great Falls were not completed until 1802, seventeen years after the first work started and fourteen years later than predicted at the outset. The Patowmack canal system’s boats and barges carried thousands of barrels of flour and whiskey and bushels of corn and wheat between Georgetown and Cumberland until 1828, when bankruptcy forced divestiture. [6] Its successor as fall line bypass was the Chesapeake and Ohio Canal, initiated on the 4th of July 1828 with President John Quincy Adams presiding at the ground-breaking ceremonies. On that same day in the Patapsco basin near Baltimore, the first rails of the Baltimore and Ohio Railroad were laid, with Charles Carroll presiding as the only surviving signer of the Declaration of Independence. The C&O Canal would eventually meet the same fate as the Patowmack, becoming obsolescent and financially unsound as railroads, which could move heavy loads across the fall line without the need for waterborne carriage, offered a cheaper alternative.

The Patowmack Canal was George Washington’s special interest project

The geology of the fall line is both complicated and simple. Its inception as the edge of a tectonic plate that was once part of a larger continuum, in this case Pangaea, is simple. The complex conglomeration of both igneous and sedimentary rocks subject to the metamorphism of crushing plate forces is complicated. Alfred Wegener is the Charles Darwin of geology, having first explained the jigsaw puzzle fit of South America into the indented west African coast with the theory of continental drift in 1912. Widely dismissed by most geologists at the time, the idea gained traction in the 1960’s when the periodic (about once every 500,000 years) magnetic reversals of the north and south poles were found to be recorded in the bedrock of the Atlantic Ocean, which would be the case if the sea floor was spreading. Subsequent geologic expeditions confirmed similar rock types and ages, leading to the notion that Wegener’s drifting continents were actually plates which at one point had been conjoined as the single global continent Pangaea that formed in the late Paleozoic Era about 300 million years ago. With a motive force thought to be the convective currents of the hot, plastic mantle rock on which the plates “float,” Pangaea broke up between 200 and 65 million years ago. The eastern edge of the North American plate that had been abutted by the African plate to the south and the Eurasian plate to the north became over eons of time the fall line, its erosive rivers filling the coastal plain with clastic sediment along its edge. [7] Current plate directions and drift rates of about 1.5 inches a year are projected to bring the continents back together in what has been called Pangaea Proxima, with Eurasia to the east, North America to the west, and Africa atop South America in the middle, some 250 million years in the future, inhabited by whatever life forms have by then evolved. [8]
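
As a rough sanity check on the drift rate and time scale cited above, the short Python sketch below converts 1.5 inches per year over 250 million years into total plate displacement; the present-day Atlantic width used for comparison is an assumed round number, not a figure from the text.

```python
# Rough sanity check: how far does a plate move at ~1.5 inches per year,
# and is 250 million years enough time to close an ocean basin?
INCHES_PER_MILE = 63_360

drift_in_per_yr = 1.5        # drift rate cited in the text
years = 250_000_000          # time to the postulated Pangaea Proxima

displacement_miles = drift_in_per_yr * years / INCHES_PER_MILE
print(f"Total drift: {displacement_miles:,.0f} miles")  # roughly 5,900 miles

# Assumed for comparison (not from the text): the Atlantic is very roughly
# 3,000 miles wide, so the projected drift is ample to reassemble the continents.
atlantic_width_miles = 3_000
print(f"Drift as a multiple of an assumed Atlantic width: "
      f"{displacement_miles / atlantic_width_miles:.1f}x")
```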

An orogeny is defined as the formation of mountains through structural disturbance of the earth’s crust by folding and faulting. In other words, the drift of one continental plate is arrested by the inertia of its neighbor as the edges of both are crumpled on impact. The Appalachian Mountains are the result of the Eurasian, African, and North American plates coming together in three separate orogenic events called the Taconic, Acadian, and Appalachian that occurred with geologic persistence over millions of years, involving sequential collisions with various land masses now intermingled within the twisted rocks that remain. The Taconic orogeny occurred in the Late Ordovician Period ~450 million years ago when Pangaea first started to form. Island arcs called terranes to the east in the Iapetus Ocean (the predecessor to the Atlantic; Iapetus was the father of Atlas in Greek mythology) were pushed into and upon the North American Plate. These would be similar to the islands of Great Britain and Japan off the western and eastern sides of the Eurasian Plate. The resultant formations are a hodgepodge of igneous blocks and sediments with some sections of oceanic crust material like the serpentinite of Soldier’s Delight in Maryland. The Acadian orogeny followed as the Eurasian plate collided in the north, causing extensive volcanic activity, uplifting, and metamorphic folding in New England and southeastern Canada in the region called Acadia, with less uplift to the south. The collision of the African plate with the North American plate about 250 million years ago is called the Appalachian orogeny, as it was the coup de grâce for its namesake mountains, which were originally as high and jagged as the Rockies. [9] The dynamism of plate tectonics never ceases, and eventually Pangaea broke apart along the seam that is now the Mid-Atlantic Ridge in the middle of the expanse of water that fills it. What remains is the complicated and postulated geology of the Patapsco and Potomac River basins.

The fall line marking the Paleozoic igneous and metamorphic rocks of the Appalachian Piedmont that is the eastern edge of the North American plate is not really a line. The buildup of sediments flowing seaward to form the Mesozoic and Tertiary sedimentary rocks of the Coastal Plain is uneven and irregular, even if inexorable. The falling water gradually moves inland according to flow that varies in both time and place. The original Potomac River fall line was at Theodore Roosevelt Island just downstream from Georgetown. It first appeared about seven million years ago when the Atlantic Ocean subsided from the region. For the first five million years, the Potomac River slowly eroded the bedrock due to the gentle gradient of the run-off. The climate changed about two million years ago due to variations in orbital geometry that reduced the sunlight reaching the Earth (according to the Milankovitch theory, changes in eccentricity or roundness, obliquity or tilt, and precession affect the amount and distribution of solar radiation). The four major glaciations of the Pleistocene Epoch sequestered water in ice caps and glaciers, resulting in lower ocean levels and an increase in the vertical drop and water flow erosion. Consequently, the location of the fall line moved inland at the rate of a half inch per year to Great Falls, fourteen miles upstream, where it is now situated. The erosional progression of flowing water follows the path of least resistance. The original Potomac riverbed on the Virginia side of the river shifted eastward toward Maryland to flow down a less resistant fault line at some point in the last million years. A fault is an area where a fracture has formed due to the opposing movements of two adjacent rock formations. The result is the canyon called the Mather Gorge, with the cliffs of the fault on either side exposed by the water that removed the loose aggregate between them. There are some eight miles of the hard rock yet to be eroded until the softer deposits of upstream sediments are reached; in one million years the falls won’t be great, only a shallow rapid. [10]
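
The retreat rate and distances given above can be cross-checked with simple arithmetic; the minimal Python sketch below uses only the figures in the paragraph and shows that a half inch per year over roughly two million years accounts for the fourteen-odd miles from Theodore Roosevelt Island to Great Falls, and that the remaining eight miles of hard rock corresponds to about one million more years.

```python
# Cross-check of the fall line retreat figures cited in the text.
INCHES_PER_MILE = 63_360

retreat_in_per_yr = 0.5          # half inch per year
years_of_retreat = 2_000_000     # since the Pleistocene cooling began

miles_retreated = retreat_in_per_yr * years_of_retreat / INCHES_PER_MILE
print(f"Retreat to date: ~{miles_retreated:.0f} miles")  # ~16 miles vs. the 14 cited

remaining_miles = 8              # hard rock left upstream of Great Falls
years_remaining = remaining_miles * INCHES_PER_MILE / retreat_in_per_yr
print(f"Time to erode the rest: ~{years_remaining / 1e6:.0f} million years")  # ~1 million
```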

The Mather Gorge is a fault line preferentially eroded by the Potomac River.

The rock formations at Great Falls and the shoreline trails along both the Maryland and Virginia sides of the Potomac basin are the Rosetta Stone of its beleaguered geologic past. At the simplest level, rock types fall into three categories according to their origin. Sedimentary rocks are deposited into oceans by rivers as sediment, igneous rocks solidify from molten magma rising from the mantle, and metamorphic rocks are sedimentary or igneous rocks that have been folded and twisted by pressure and temperature. The geology of the Great Falls basin is mostly metamorphic with an occasional igneous seam of granite called a dike that was intruded during intermittent volcanism; there are no sedimentary rocks. The transformed rocks that form a wide expanse from Virginia to Pennsylvania comprise the Wissahickon Formation, consisting of two main types of metamorphic rock: graywacke and schist. Graywacke, German for “gray, earthy rock,” is an apt name for what is dull, monotonous, and virtually devoid of color or texture. Its origins are equally mundane as mudstone that accumulated in sedimentary layers as runoff into a shallow sea. Schist, which is Greek for “split,” is the companion to graywacke in both origination and placement. After the graywacke mud, thought to have been deposited by undersea “avalanches,” settled to the bottom of the Iapetus Ocean around 650 million years ago, sand deposits formed in layers on top of it. This is the origin of mica schist, sandstone transformed by heat and pressure into mica and quartzite. These rocks typically retain the striations of the original bedding and can therefore be split (schist) into layers like shale. Granitic rocks that are lighter in color penetrate the graywacke and schist in seams called dikes without regard to the twists of metamorphism, indicating that they were inserted or intruded into these formations later. Radiometric dating of the granite establishes a datum of its formation 470 million years ago. The mica schist has pockets of andalusite, a metamorphic mineral (which differs from a rock in having a single chemical formula, in this case Al2SiO5) which provides quantitative detail of the metamorphic conditions of its origin. Laboratory testing demonstrates that this mineral forms at temperatures of 650°C and pressures of 5.5 kilobars, which occur at a depth of about 25 kilometers, or 15 miles, underground. Deciphering the hieroglyphics of the rock formations at Great Falls tells the story that unfolded in its Pangaean assembly. Mud and sand sediments that collected in the proto-Atlantic Ocean were first compressed by succeeding layers and then twisted and bent by the vise of colliding tectonic plates. The flow of magma incident to orogeny ensued, forming granitic intrusions. Everything came to a grinding halt when Pangaea was fully formed, leaving the relict as evidence when the ocean again parted. [11]
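
The link between the 5.5 kilobar metamorphic pressure and a burial depth of roughly 15 miles follows from the lithostatic relation P = ρgh; the sketch below solves it for depth under a few assumed average overburden densities (the densities are illustrative assumptions, not values from the text), giving answers in the same general range as the depth cited above.

```python
# Convert the 5.5 kilobar metamorphic pressure into an approximate burial depth
# using the lithostatic relation P = rho * g * h, i.e. h = P / (rho * g).
P_pa = 5.5e3 * 1e5      # 5.5 kilobars expressed in pascals (1 bar = 1e5 Pa)
g = 9.81                # gravitational acceleration, m/s^2

# Assumed average rock densities in kg/m^3 (illustrative, not from the text).
for rho in (2_200, 2_700, 2_900):
    depth_km = P_pa / (rho * g) / 1_000
    print(f"density {rho} kg/m^3 -> depth ~{depth_km:.0f} km")
# Output spans roughly 19-25 km, bracketing the ~25 km (15 miles) cited above.
```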

The twisted metamorphic rocks of the Wissahickon Formation

On September 30, 1861, the Union army’s 71st Pennsylvania Regiment camped near Great Falls in Maryland, where Private McCleary (or possibly Carey) took note of the quartz rock outcrops that he saw there. Based on his perception that the rock type and disposition were characteristic of gold deposits, he formed a prospecting group after the war that purchased the property from a local farmer and sank a 100-foot exploratory shaft into the underlying bedrock. After arduous hard rock digging for several years that produced only eleven ounces of gold, the Maryland Mine project was abandoned. In 1900, the Maryland Gold Mining Company obtained the property and tunneled extensively until 1908 with similarly limited success; several years later a second mine site was established several miles west in what was called the Ford vein and worked intensively. The final gold rush followed the increase in the price of gold to $35 an ounce in 1934, when a new Maryland Mining Company was formed, extracting 6,000 tons of ore yielding 2,500 ounces of gold before the final shutdown in 1940. Over the seventy years of operation, the total amount of gold produced from Maryland gold mines was 5,000 ounces with a market value of $150,000 … that amount of gold is now worth $9M. The geologic origin of the seams of gold in quartz veins is speculative in timing but settled in sequence. Hot solutions carrying silica and a variety of entrained minerals, including occasional flakes of gold, entered preexisting fault cracks in the original rock formations and solidified there as quartz. The anastomosing or interconnecting veins range in size from a few inches up to twenty feet and extend in a belt roughly one quarter mile in width. In addition to elemental gold, iron sulfide or pyrite (also called fool’s gold) and lead sulfide or galena are the primary minerals. The cracks were formed long after the deposition and uplift of the graywacke and schist, probably during the Triassic Period 200 million years ago based on the existence of fissures in known and dated Triassic rocks in the general region. The faulting and the injection of the hot, quartz-forming solutions were the result of a final stage of uplift in the formation of Pangaea. [12] It took humans only a few decades to dig out what nature had painstakingly assembled over eons. And all that for less than a ton of the entrancing metal called gold.
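
The dollar figures for Maryland’s modest gold output reduce to straightforward arithmetic; the sketch below multiplies the roughly 5,000 ounces produced by the 1934 statutory price and by an assumed recent market price of about $1,800 per troy ounce (the recent price is an assumption implied by the $9M figure, not a value stated in the text).

```python
# Rough value check on Maryland's total gold output.
total_oz = 5_000        # troy ounces produced over roughly seventy years (from the text)

price_1934 = 35         # dollars per troy ounce, the price fixed in 1934
price_recent = 1_800    # assumed recent price per troy ounce, not from the text

print(f"At $35/oz:     ${total_oz * price_1934:,}")    # $175,000; the text cites ~$150,000,
                                                       # since earlier output sold for less
print(f"At ~$1,800/oz: ${total_oz * price_recent:,}")  # $9,000,000, matching the ~$9M figure
```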

References:

1. Lustig, L. K. “Waterfalls,” Encyclopedia Britannica, William and Helen Benton Publisher, Chicago, Illinois, 1974, Volume 19, pp 638-643.

2. Brugger, R. Maryland, A Middle Temperament, Johns Hopkins University Press, Baltimore, Maryland, 1988, pp. 63-70.

3. Patapsco Valley Heritage Greenway, Inc., “History/Culture” https://web.archive.org/web/20100310084026/http://www.patapscoheritagegreenway.org/history/HistPersp.html

4. Hulbert, A. George Washington and the West: Being George Washington’s Diary of September 1784, The New York Century Company, 1905.

5. Pickell, J. A New Chapter in the Early Life of Washington with the Narrative History of the Potomac Company, D. Appleton and Company, New York, 1856. pp. 25-50.

6. https://www.mountvernon.org/george-washington/the-potomac-company/

7. Cazeau, C, Hatcher, R., and Siemankowski, F; Physical Geology, Principles, Processes, and Problems, Harper and Row, New York, 1976, pp 374-393.

8. Romano, M. and Cifelli, R. “100 years of continental drift,” Science, Volume 350, Issue 6263, 20 November 2015.

9. Schmidt, M. Maryland’s Geology, Schiffer Publishing, Atglen, PA, 2010, pp 88-112.

10. http://www.virginiaplaces.org/regions/fallshape.html     

11. Reed, J., Sigafoos, R., and Fisher, J. River and the Rocks, The Geologic Story of the Great Falls and the Potomac River Gorge, Geological Survey Bulletin 1471, United States Government Printing Office, Washington, DC, 1980.

12. Reed, J. Gold Veins near Great Falls, Maryland, Geological Survey Bulletin 1286, United States Government Printing Office, Washington, DC, 1969.

Blueberry

Blueberries are blue because anthocyanin turns blue when the pH is basic – and to attract animals

Common Name: Blueberry, bilberry, whortleberry – Blueberry would be a strong candidate for the most innocuous common name. Unlike many fruits called berry, the blueberry is true to its name. A true blue berry. Bilberry refers to the European blueberry and to a North American variant; whortleberry is of British origin.

Scientific Name: Vaccinium spp – The genus name was assigned by Carolus Linnaeus in 1753 and is of obscure origin, possibly originating as an early Latin name for the bilberry or whortleberry but also, implausibly, from hyacinth, a member of the lily family native to the eastern Mediterranean region. Spp is an abbreviation meaning several species and is used whenever there is considerable variation due to hybridization, as is the case with blueberries. [1]

Potpourri: Blueberries are the most dependable and commendable trailside fruit eaten as browse by hikers on the move.  In the fall as the trees and shrubs wrap up the annual cycle of flowering, growing, and seeding to pass the cold winter months in quiescent repose, the berries await their purposed fate. The upland woods where they mostly grow are awash in the reds of maples and the yellows of hickories in the canopy above the contrasting blues at their base that bring the joys of sweetness and not the doldrums of despair.  The three primary colors represent the fullness of the visual light spectrum and metaphorically the completeness of nature’s cycle. The fruit that bears the seeds of the next generation is the most important part of the plant. But there are many questions that arise: Why do the berries start out red and turn blue? Why do they grow where they do? Why are there so many of them when each seed contains all the DNA instructions necessary and sufficient for a whole new plant?  All in good time. 

There are two general types of blueberry. To be consistent with the hackneyed name of the fruit with a berry that is blue, the shrubs on which the blueberries grow are called highbush and lowbush (HW41 and Dubya43?). Both are grown commercially according to geographic climate preferences. That is far from the whole story of blueberry types, however, as there are myriad hybrids both between them and separately from each. A recent scientific paper described the blueberry tribe Vaccinieae as a “large and morphologically diverse group that is widespread in the temperate and tropical zones of most continents.” DNA analysis was described as difficult because the characters normally used to separate species don’t work well. One of the reasons is that blueberries and their kin are not only diploid with two sets of chromosomes (like humans and many plants and animals), but also triploid, tetraploid, pentaploid … up to six sets or hexaploid, depending on the species. It was concluded that the genus Vaccinium is not monophyletic, which means that its species do not all trace back to a single exclusive common ancestor. For the purpose of a practical understanding of the habitat and range of the blueberry, highbush and lowbush will suffice. [2] The highbush blueberry Vaccinium corymbosum is on the tall side at five to fifteen feet, a tree of a bush. The species name indicates that it grows in corymbs, which are clusters of flowers and then berries more or less in a planar array. The flowers start out with a pinkish-red tinge that matures to fully white, producing a blue berry after pollination, a truly patriotic succession. The highbush blueberry of the dry upland areas in the Appalachian Mountains is the fons et origo of the commercial blueberries cultivated in the United States. [3]

Flowers must attract pollinators to make a fertile seeded blueberry fruit

Those who deprecate the federal government for its intrusion into local affairs should consider Frederick Vernon Coville, a botanist with the USDA who may be considered the progenitor of the commercial blueberry. In the introduction to the USDA bulletin written to document his research, he provides the rationale for his quest. A nine foot high, three inch diameter highbush blueberry transplanted on the grounds of the Smithsonian Institution in Washington DC had been there since before 1871 and was probably fifty years old. That this was not unique was confirmed at the Arnold Arboretum in Boston, which had a number of thirty year old blueberries that had been grown from seed or transplanted prior to 1880. However, it was also true that all attempts to grow blueberry bushes using rich garden soils at agricultural research stations from Maine to Michigan had failed. [4] Why the former and not the latter? What was the missing ingredient? Starting in 1906, Coville planted test plots with different combinations of soil and nutrients to discover four years later that the ingredient was acid … blueberries and a number of related ericaceous plants like cranberry and huckleberry required an acidic soil (pH < 7). In 1911, he began a series of cross pollination experiments to create cultivars with attributes like larger, sweeter, and denser berry clusters. [5] His field notes, which are listed as collection 413 in the USDA archives, consist of daily penciled entries of research field varieties and fruiting quality. [6] This was the way gene modification was accomplished in the pre-CRISPR-Cas9 era. The historic trial and error method relies on random chance while the new method is scientific, putting the chosen gene in the right place. Unfortunately, the use of science to add genes that produce beneficial traits like higher nutrition and better drought tolerance earns the damning epithet GMO (genetically modified organism) and is shunned by some as “Frankenfood.”

The lowbush blueberry Vaccinium angustifolium, whose species name refers to its narrow leaves (angustus is Latin for narrow and folium for leaf), is literally diminutive in size at less than a foot tall. These are the dense berry thickets of New England and the Upper Midwest, notably Maine, where it is the state berry ― blueberry pie is second only to lobster in defining the local cuisine. Called the late sweet blueberry in edible wild plant field guides, it was probably the most important fruit for Native Americans as one of the main ingredients for pemmican, a concoction of meat and berries that was a trail food staple. [7] The importance of the lowbush blueberry extended beyond edibility, as the Chippewa placed dried flowers on hot stones as a treatment for “craziness” and the Iroquois used the berries in ceremony to invoke health and prosperity for the coming season. [8] Even though it readily hybridizes like its highbush cousin, the lowbush blueberry is more generally recognized by researchers as a single species that is highly polymorphic. The dense blueberry patches that occupy significant swaths of otherwise relatively sparse northern habitats are a major source of food for a wide variety of mammals and birds. Of particular note are black bears, whose reproductive success has been correlated to the size of the blueberry crop in a given year, and large ground dwelling birds like wild turkeys and ruffed grouse. [9]

The lowbush blueberry patches dominate the landscape of Acadia in Maine

Why are blueberries blue? Bright red cranberries that are in the same genus grow in similar habitats. Everything in nature has a purpose ― in most cases related to Darwin’s epiphany that survival is the outcome only if fit, meaning adaptable. Since fruits contain the seeds for future generations, fruit color must have evolved over time to attract animals as agents for transport. The animals in question cannot have been hunter-gatherer H. sapiens, since North America was devoid of hominids until about 12,000 years ago. While the determination of the responsible animal is speculative, there are some interesting facts and correlations that provide a basis for hypothesis. The first clue is that non-primate mammals cannot see red. The generally accepted reason is that the smell and hearing senses are much more important for survival in the brushy shadows of their habitats than sight. It is also reasoned that primate red vision was beneficial in locating ripe fruits in the jungle canopies of African forests; more food, more survivors. Birds also see red, presumably for the same reason. It is demonstrably true that most berries are red and that avian dispersal is the primary vector for dissemination of the seeds of red-berried plants. A 2015 study compared the amount of berry seeds from 25 different plants (16 native and 9 invasive) in 450 bird dropping samples and found that birds ate almost exclusively from native plants. This can only make sense if birds evolved eating native berries and continue to do so, eschewing the invasives (which unfortunately does not seem to deter the invasives much). [10] As an interesting correlation, the areas where cranberries predominate are in the same general area as the breeding grounds of the passenger pigeon. Though they are now extinct, having been extirpated by humans in the last century, a reasonable theory is that passenger pigeons ate cranberries and deposited the seeds all over their brooding areas, where the resultant cranberry bushes flourished.

Conversely, blueberries are not for the birds. This is not to say that birds won’t eat them, but only that bird consumption cannot have been the primary evolutionary forcing function … otherwise blueberries would have to be redberries. It is much more likely that blueberries became blue and sweet to attract mammals to propagate the seeds contained therein. A few observations again provide some basis for speculation. The first is that black bear fecundity, as pointed out above, correlates to the quantity of blueberries produced in any given year. Correlation is not causation, but there is more. It is demonstrably true that the area of North America where black bears are indigenous largely coincides with the natural habitat of the lowbush blueberry. It is also true that black bears eat a lot of berries, which can easily be verified by looking at bear scat encountered on the trail ― invariably dotted with seeds. In my view, black bears or perhaps their evolutionary ancestors are the best candidate as the original blueberry propagator. That would also help to explain why there are so many berries … it takes a lot of berries to attract a bear.

One of nature’s more interesting innovations is how the color of a fruit is controlled to attract an animal. Anthocyanins are complex chemical compounds produced by some plants for which they perform no other function but pigmentation. Like litmus paper, they turn red in acidic environments and blue in basic ones. Anthocyanin is what makes roses red, violets blue, and plums purple. It is also the source of the brilliant colors of red and sugar maples in autumn. Here, however, red has a purpose, which is thought to be shielding individual leaves from damaging radiation in early fall so that more nutrients can be retained in the roots over winter. It is interesting to note that blueberries start out pinkish-red and gradually become blue as they ripen and are thereafter ready to eat. Coville demonstrated that blueberry bushes grow in acidic soil, so clearly the chemistry of the plant is on the red or acidic side. This is amply demonstrated in the fall, when blueberry bushes turn brilliant red, indicating that they produce copious amounts of anthocyanin and that they are acidic. [11] Blueberries must therefore be turned blue by the plant, which can only be achieved with high pH or basic chemicals that must be produced for this purpose – to attract bears.
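
The litmus-like behavior described above can be summarized as a simple mapping from pH to color; the toy Python sketch below is purely illustrative, with approximate transition points chosen for the example rather than taken from the text or from measured anthocyanin chemistry.

```python
# Toy illustration of anthocyanin as a pH indicator: red in acid, blue in base.
# The breakpoints are rough illustrative assumptions, not measured values.
def anthocyanin_color(ph: float) -> str:
    if ph < 3.0:        # strongly acidic, as in unripe, pinkish-red fruit and red fall leaves
        return "red"
    elif ph < 7.5:      # weakly acidic to neutral
        return "purple"
    else:               # basic, the condition the text argues produces the ripe blue berry
        return "blue"

for ph in (2.0, 6.0, 8.5):
    print(f"pH {ph}: {anthocyanin_color(ph)}")
```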

Commercially grown blueberries, in the form of both V. angustifolium and the cultivars that originated with V. corymbosum, are second only to strawberries in quantity and value with annual sales of nearly $1B. Cranberries (V. macrocarpon), which are also heath family plants closely related to blueberries, are equally popular but almost wholly as juice and the essential holiday sauce. Raspberries and blackberries are at the back of the pack. But here we are literally talking apples and oranges. The botanical berry has little to do with the vernacular use of the term. A berry is defined as a fleshy fruit with seeds embedded in the pulp, like blueberries, huckleberries, and even tomatoes and watermelons. The strawberry is an accessory fruit with the seeds embedded in the external skin of the fruit, and both raspberries and blackberries are aggregates, consisting of many tiny berries clustered together. The natural foods movement gained ground at the turn of the millennium in reaction to the realization that processed food consumption had resulted in an epidemic of obesity according to both the CDC and NIH. Blueberries were among the more favored foods in the fruit and vegetable category that emerged as the gold standard for a healthy diet. Domestic production consequently rose 284 percent between 2000 and 2019, supplemented by South American imports that rose 1,000 percent over the same period to provide blueberries year round. [12] In that nature operates according to opportunistic species seeking perpetuation and dominance, the large numbers of nutritive blueberries were ripe for exploitation by a fungus appropriately named Monilinia vaccinii-corymbosi for its assault on the species. It can destroy over 50 percent of the crop (85 percent in New Hampshire in 1974). The infected fruit goes by the pejorative name mummy berry for the swollen, wrinkled, gray blobs that result. [13]

Blueberries are good food for all animals, not just bears. High levels of vitamin C, vitamin K, and manganese are conveniently packaged with high fiber, the stuff of proper stools. More importantly from the health standpoint, blueberries have exceptional levels of flavonoids, notably anthocyanins … more than cranberries, strawberries, plums, and most other fruits. Flavonoid compounds are for the most part considered beneficial for health due to antioxidant and anti-inflammatory activity. The errant actor is the free radical, a malevolent name for an insidious chemical. Simply put, it is an atom or molecule with an unpaired electron. Since chemistry abhors unpaired electrons in favor of stable pairs, free radicals react with almost anything available, like body cells and the DNA that they all contain. Disrupted DNA mostly results in cell death, but certain mutations can have more inimical effects, like the uncontrolled growth of a cancerous tumor. Antioxidant chemicals are in their most general form hydrogen donors that combine with and neutralize free radicals. Recent studies have cast some doubt on the role of anthocyanin as an effective antioxidant. However, the benefits of blueberries as part of a healthy diet are indisputable. The only question is why. Affirmed benefits of blueberries for human health include reducing cognitive degeneration, promoting heart health, reducing susceptibility to cancer, and lowering blood pressure, among many others. [14]

So why are blueberries blue? Because they contain anthocyanin in an aqueous environment with a high enough pH that they are basic blue in lieu of acidic red. This requires a certain amount of finesse, since the bush on which they grow is rooted in acidic soil with acidic leaves marked by the red of the same anthocyanin chemistry. But that is not really why blueberries are blue but rather how they became blue. Why they are blue is because that was the best color to attract an animal with enough drawing power and consistency to propagate the species. They are blue because they are but one of the many interconnected parts of the ecosystem.

References:   

1.  https://naturalhistory2.si.edu/botany/ing/genusSearchTextMX.cfm  

2. Kron, K. et al  “Phylogenetic relationships within the blueberry tribe (Vaccinieae, Ericaceae) based on sequence data from MATK and nuclear ribosomal ITS regions, with comments on the placement of Satyria”. American Journal of Botany. February 2002 Volume  89 Number 2 pp 327–336.

3. Niering, W. and Olmstead, N. National Audubon Society Field Guide to North American Wildflowers, Alfred A. Knopf, New York, 1998, pp 508-509

4. Coville, F. Experiments in Blueberry Culture. US Government Printing Office. 1910.

5. https://federallabs.org/successes/success-stories/blueberries-making-a-superb-fruit-even-better  

6. https://www.nal.usda.gov/speccoll/collectionsguide/collection2.php?search=coville

7. Elias T. and Dykeman, P. Edible Wild Plants, A North American Field Guide, Sterling Publishing New York, 1990, pp 164 – 167.

8. Ethnobotany data base http://naeb.brit.org/uses/search/?string=vaccinium

9. https://www.fs.fed.us/database/feis/plants/shrub/vacang/all.html     

10. Runwal, P. “Migratory Birds like Native Berries Best.” Audubon Magazine, 12 June 2020

11. Wilson, C, and Loomis, W. Botany, 4th Edition. Holt, Rinehart, and Winston, New York, 1967,  pp.52-53.

12. https://www.ers.usda.gov/data-products/charts-of-note/charts-of-note/?topicId=14849     

13. Batra, L. “Monilinia vaccinii-corymbosi (Sclerotiniaceae): Its biology on blueberry and other related species,” Mycologia, Volume 75, Number 1, 1985, pp 131-152.

14. https://www.ars.usda.gov/plains-area/gfnd/gfhnrc/docs/news-2014/blueberries-and-health/

Coral Fungi – Clavariaceae

Crown-tipped Coral is one of many fungi that have a branching pattern similar to ocean corals

Common Name: Coral Fungus – The branching of the fungal thallus resembles the calcium carbonate structure of  ocean corals. Other common names are applied to differentiated shapes, such as worm, club, or tube fungi for those lacking side branches and antler fungi for those with wider, flange-like appendages. An extreme is cauliflower fungus which looks nothing like coral but is usually included in the coral-like category in field guides. The common Crown-tipped coral is depicted; the ends of the coral segments have tines like miniature crowns.

Scientific Name: Clavariaceae – The family name for the coral fungi is derived from clava, the Latin word for “club;” the type-genus is Clavaria. The coral fungus above was originally Clavaria pyxidata, became Clavicorona pyxidata, and is now Artomyces pyxidatus. Pyx is from the Greek word pyxos meaning “box tree” from which boxes were made (and the etymology of the word box – a pyx is a container for Eucharist wafers). The implication for its use as a name for this species is “box-like.”

Potpourri: Coral fungi look like coral. The verisimilar likeness can be so convincing that it seems plausible that they were uprooted from a seabed reef and planted in the woods for decoration. The delicate ivory and cream-colored branches rising in dense clusters from a brown-black dead log are one of the wonders of the wooded paths sought by those who wander there. There is an abiding benefit to having some knowledge of the things that nature has created, and coral fungus is a good collective mnemonic to apply to a group that surely must be closely related. And so it is for the traditionalists steeped in the lore of musty mushroom field guides who are referred to collectively as the “lumpers.” The new world order of DNA has taken the science of biology on a wild ride with many hairpin turns and dead ends; in the case of mycology, the train has left the tracks more than once. Convergent evolution … that which created a marsupial mouse in Australia unrelated to the placental house mouse everywhere else … globally demonstrates Darwin’s vision. A branching form is a natural evolutionary path for two individual organisms that started at different places and times. The diaspora of species from one genus to another in search of a home on the genetic tree of life has exploded the coral fungi into fragments. This is the realm of the “splitters,” the subdividers for whom a bar code will become the only true arbiter of species. There is of course a hybrid middle ground, acknowledging the latter but practicing the former, the province of most mushroom hunters.

Like all epigeal fruiting bodies extending upward above the ground from the main body of a fungus, which is hypogeal or below ground, the branching arms of coral fungi function to support and project the spore bearing reproductive components called basidia. Gilled or pored mushrooms maximize the number of spores they can disperse by creating as much surface area as possible in the limited space beneath the cap or pileus. Similarly, coral fungi branch again and again or extend myriad singular shafts to get as many fingers of spore bearing surface into the air as possible. [1] The topology of using multiple extensions into a fluid medium is one of the recurring themes of evolution ― convergent evolution. In this case, it has nothing to do with fungi per se. They look like coral because real coral is doing essentially the same thing; the namesake polyps secrete a type of calcium carbonate called aragonite to form protective exoskeletons in reefs that extend outward into the water where their food floats by. To extend the analogy to the rest of biology is a matter of observation. Trees send branches covered with photosynthesizing leaves toward the sun and roots toward the water and minerals of the earth where they encounter the branching mycelia of fungi.

Fungi have evolved to distribute reproductive spores with different mechanisms that could only have been naturally selected from the variations in form and function of random mutation. Among the more creative methods are the puffing of puffball spores out a hole in the top by the impact force of raindrops, the odorous spore-laden goo of stinkhorns that attracts insects seeking nutrients, and the redolence of truffles sought as food by burrowing or digging animals, which excrete the spores intact after digestion. The coral fungi are among the most primitive of all basidiomycete fungi in having their club-shaped spore bearing basidia positioned along the upper reaches of each prong so that the spores can be carried away by either wind or water. [2] Having more fruiting bodies with more branches creates more spores, which is why coral fungi are frequently found growing saprophytically in dense clusters on dead tree logs or growing in mycorrhizal clusters on the ground. Simply sticking indistinguishable club shapes into the air with a bunch of short rods with spores attached to the end is the most straightforward way to disperse them for germination.

The phylogenetic diversity of the coral fungi belies their similar ramified appearance. Historically, structure was thought to be the basis for taxonomic classification, an assumption that works reasonably well with plants and animals but not with fungi. The delicate and colorful appearance of the coral fungi brought them to the attention of the earliest naturalists, who grouped them according to shapes. Since fungi were then considered members of the Plant Kingdom (Subkingdom Thallophyta), this was consistent with practice. The French botanist Chevallier placed them in the order Clavariées in 1826 with only two genera, Clavaria and Merisma, noting that “se distingue du premier coup d’oeil” – they can be identified with a fleeting glance in having “la forme d’une petit massue” – the form of a little club. [3] The assignment of fungi to families according to form lasted for over a hundred years until the nuances in microstructure and spore appearance initiated cracks in the biological foundation. Toward the end of the last century, the fungi were recast as one of five different kingdoms, and the foundational genus Clavaria was dissected into six genera with derivative names like Clavulina (little club) and Clavariadelphus (brother of Clavaria), which is how they appear in the most popular fungi field guides. [4]

In spite of the distinctive shape that suggests a unique origin, coral fungi are agarics, the historical group name for almost all gilled fungi. What is now the order Agaricales is comprised of over 9,000 species, containing over half of all known mushroom forming macrofungi assigned to one of 26 families with about 350 genera that range from Amanita to Xerula. Carl Linnaeus, who established the first taxonomic structure in biology with the publication of Systema Naturae in the 18th century, placed all gilled mushrooms in a single genus that he named Agaricus. One hundred years later, Elias Fries published Systema Mycologicum, which separated the agarics into twelve genera based on macroscopic features such as the structure of the spore bearing surface or hymenium (e.g. gills, pores, teeth, ridges, vase-shaped) and spore color (white, pink, brown, purple-brown, or black). Six groups of basidiomycetes were recognized based on the shape of the sporocarp or fruiting body ― “coral-like fungi” was one of them. While there was some expansion of genera over the ensuing decades, the so-called Friesian approach to gilled mushroom identification has persisted and is what is still generally in use, spore print color and all. The use of field characteristics is crucial to the practical application of mycology that serves the community of foragers looking for edible species and other aficionados who enjoy their company. [5,6]

Over the last several decades, the use of DNA to map out the true phylogenetic relationships has upended the traditional taxonomy based on macroscopic structure and spore color. Unravelling the complex weave of evolutionary threads from one species to its predecessor is a monumental task that is just now gaining momentum. The goal is to determine the real or cladistic family tree so that a clade, the term adopted to refer to all species with a common ancestor, can be established with certainty. In one analysis, the agarics fell into six major clades, or single-ancestor groupings, named Agaricoid, Tricholomatoid, Marasmioid, Pluteoid, Hygrophoroid, and Plicaturopsidoid. The coral fungi are in the latter, which diverged from all the other agarics at the earliest evolutionary branching in the Cretaceous Period some 125 million years ago. It is not unreasonable to conclude from this analysis that the coral fungi evolved a reliable and efficient method of spore dispersal early on and have thrived ever since, branching out to form new species all using the same technique. It is now equally evident that the shape of a fungus does not necessarily establish its proper branch in the family tree. The agarics, now the Euagarics Clade, contain not only fungi shaped like mushrooms and coral, but also puffballs like Calvatia and Lycoperdon. Likewise, shapes extend across multiple clades. For example, coral-shaped fungi also appear in the Russuloid Clade (russulas) as Artomyces, pictured above, and in the Polyporoid Clade (polypores) as Sparassis, pictured below. This is then the dichotomy between the taxonomists of the old school steeped in the Linnaean traditions of field identification and the DNA systematists of the new school for which only the laboratory will do. [5,6]

The new biological life history of coral fungi is still subject to the findings of the most recent research paper devoted to the group, and it may be decades before a settled taxonomy emerges. As a brief and incomplete history, in 1999 “four lineages containing cantharelloid and clavarioid fungi were identified,” with the clavarioid containing most of the corals, but also noting that “Clavicorona is closely related to Auriscalpium, which is toothed, and Lentinellus, which is gilled.” [7] In 2006, it was acknowledged that coral shaped fungi must have evolved at least five times over the millennia and that the “evolutionary significance of this morphology is difficult to interpret because the phylogenetic positions of many clavarioid fungi are still unknown.” The new genus Alloclavaria was added to accommodate the unique fungus Clavaria purpurea, “not related to Clavaria but derived within the hymenochaetoid clade,” which consists mostly of bracket fungi. [8] Seven years later, the coral fungus family was found to consist of four major clades: Mucronella, Ramariopsis-Clavulinopsis, Hyphodontiella, and Clavaria-Camarophyllopsis-Clavicorona. This thorough phylogenetic analysis of 47 sporocarp sequences merged with 243 environmental sequences concluded that “126 molecular operational taxonomic units can be recognized in the Clavariaceae … an estimate that exceeds the known number of species in the family.” [9] Phylogenetic studies are continuing.

Returning to the more mundane walk through the woods looking for coral fungi, the two most pressing questions concern edibility and toxicity. Neither of these subjects is broached in the scientific literature, and, like most fungi, data points are empirical, relying on random trial and error anecdote. For coral fungi, this is complicated by the fact that most are small and delicate and therefore rarely sampled by those seeking massive brackets of Chicken-of-the-woods and yellow clusters of chanterelles. Edibility has been a question ever since Chevallier first singled them out in 1826, noting that “Presque tout les clavaires fournissent a l’homme une nouriteure saine, on mange ordinarements les plus grosses” – almost all are good to eat but only pick the big ones – and “Elles n’ont aucune qualité vénéneuses; quelques-une ont une saveur amère” – none are poisonous but some are bitter. [10] This sweeping assurance cannot have been the result of a thorough assessment, as there are good and bad corals. Modern guides are more circumspect, offering a range of information about edibility from choice to poisonous with caveats about having a laxative effect on some people and causing gastrointestinal distress in others. Many are of unknown edibility and likely to remain so. There is one standout worth noting that has the hallmarks of broad acceptability. The Cauliflower Mushroom (Sparassis americana – formerly crispa) is large, unusual, and common. It neither tastes nor looks much like a cauliflower. The “Elizabethan ruff of a mushroom” [11] is hard to miss and there is no doppelganger to fool the hapless hunter.

The Cauliflower Mushroom looks nothing like coral, or cauliflower for that matter. More like a neck ruff of Elizabethan England.

References:

1. Arora, D. Mushrooms Demystified, Ten Speed Press, Berkeley, California, 1986, pp 630-658.

2. Schaechter, E. In the Company of Mushrooms, Harvard University Press, Cambridge, Massachusetts, 1997, p.49   

3. Chevallier F. Flore Générale des Environs de Paris, Ferra Jeune, Paris, France, 1826  p. 102.

4. Lincoff, G. National Audubon Society Field Guide to North American Mushrooms, Alfred A. Knopf, New York, 1981, pp 398-414.

5. Matheny P. et al “Major clades of Agaricales: a multilocus phylogenetic overview”  Mycologia August 2006, Volume 98 Number 6 pp 982–995.

6. https://www.mykoweb.com/articles/Homobasidiomycete_clades.html        

7. Pine E. et al “Phylogenetic relationships of cantharelloid and clavarioid Homobasidiomycetes based on mitochondrial and nuclear rDNA sequences”. Mycologia. 1999. Volume 91 Number 6 pp 944–963.

8. Dentinger B and McLaughlin D. “Reconstructing the Clavariaceae using nuclear large subunit rDNA sequences and a new genus segregated from Clavaria”. Mycologia. Volume 98 Number 5 September 2006 pp 746–762.

9. Birkebak J et al. “A systematic, morphological and ecological overview of the Clavariaceae (Agaricales)”  Mycologia. Volume 105 Number 4, February 2013, pp 896–911.

10. Chevallier, op cit. p.104

11. Lincoff, op cit. p. 412.

Red-spotted Purple Butterfly

The Red-spotted Purple is mostly purple and has red spots at the wing tips

Common Name: Red-spotted Purple and White Admiral – Butterfly names are in most cases descriptive, using color and patterns as leitmotif. The mostly dark blue wings, tinged with enough red to produce purple and culminating in red wing spots, provide one of the more mnemonic names. The alternative name White Admiral is the result of one of the more tantalizing tales of the lepidopterans as they change colors and patterns in mimicry, detailed below.

Scientific Name: Limenitis arthemis –  The genus name literally means harbor goddess in Greek. The nautical association is apparently related to or is a result of  the fact that they are called the admiral butterflies, as in White Admiral. The species name is from Artemis (Diana in Roman mythology), the Greek goddess of the hunt and hence the woods. A butterfly as metaphor for a goddess captures the graceful beauty of both.    

The White Admiral has a single broad white stripe – like a US Navy admiral

Potpourri: The Red-spotted Purple and White Admiral are the same species, Limenitis arthemis. Mimicry, the term for an animal mimicking another object in shape and/or color, is an evolutionary and genetic response to the inexorable tug of survival. Although it may seem especially notable in this case because of the striking result afforded by the difference between white striped and stripe-less wings, mimicry in its broadest sense is widespread. Some prey animals change colors according to age and season to provide better camouflage. The spotted fawn turns light tan as a doe or buck in summer and darker in winter to match the scenery. Predators must do the same in order to hide from their quarry long enough to effect the coup de grâce at the last moment. There are no black panthers, only melanistic leopards and jaguars becoming night stalkers (both are in the genus Panthera). Aposematism is similar to mimicry in that coloration is used to ward off predators. But rather than being cryptic, the colors stand out against the background in sharp contrast, alerting the wary predator that poisons there lurk. The juvenile red eft of the red-spotted newt is a good example of aposematism … a defenseless amphibian that protects itself with vivid orange hues similar to those used by hunters to accentuate visibility.

White Admirals, the northern version of the Red-spotted Purple, are named for their prominent white stripes. It is perhaps only coincidence that the progression of officer ranks in the U.S. Navy ranges from an ensign’s single narrow stripe to a single broad stripe for an admiral. The Red Admiral (Vanessa atalanta), which has a similarly placed red stripe, is otherwise unrelated. Boldly contrasting prominent stripes on two unrelated species suggest purpose and convergent evolution. While striping may be related to species or mate recognition, it is more likely a matter of predator avoidance, the moving flashes of colored streaks creating a disorienting stroboscopic effect. [1] Progressing geographically southward, the White Admiral’s broad stripe disappears and the red spots move forward to the edge of the wing tip to become both red-spotted and purple. This rather extraordinary transformation combines the aping of mimicry with the warning colors of aposematism, a hybrid scheme called apatetic in general or Batesian in particular. The wing is now more uniformly dark in color, resembling that of a butterfly of a different genus and species ― a poisonous doppelgänger.

The Red Admiral has a single red stripe

It is widely known that the Monarch butterfly is unpalatable to birds because its caterpillars eat milkweed (Asclepias syriaca), which produces cardiac glycosides that are toxic to most animals. It is mimicked by the Viceroy (Limenitis archippus), a congeneric cousin of the Red-spotted Purple, as a matter of enhanced survival. [2] The Green Swallowtail butterfly (Battus philenor) is better known as the Pipevine Swallowtail because its larvae feed on Dutchman’s Pipe (Aristolochia durior), a vine that produces a toxin called aristolochic acid. Since the range of the toxic Pipevine Swallowtail butterfly extends only as far as its namesake food, it is a southern butterfly because that is where the vines are. [3] The change is not cognitive choice, but rather choice by chance. The White Admirals that ventured south with less prominent stripes survived more frequently since they were more likely to be avoided by predators. Over time and subsequent mating of diminished stripe White Admirals, the stripes disappeared altogether and the Red-spotted Purple became the southern variant, sometimes listed as a separate subspecies, Limenitis arthemis astyanax. The name extends the mythological association to include Astyanax, the son of the Trojan hero Hector, who was hurled from the walls of Troy by the victorious Greeks so he could not avenge the death of his father.

The Pipevine Swallowtail is copied by the Red-spotted Purple to escape predation by birds. It is called Batesian mimicry.

The kaleidoscopic patterns of butterfly wings are among the most artistic creations of nature. Their evolution, which began during the Cretaceous Period more than 100 million years ago, was marked by three random mutation “inventions” that radiated in time and space along the way to produce the 18,000 plus extant named species. [4] The first and defining mutation was wing scales, from which the name of the order Lepidoptera was derived, lepis meaning scale and ptera wing in Greek. Scales are genetically modified sensory bristles that became flat, senseless, and slippery, probably to avoid capture … the survivors passed the scale genes along. The second invention was changing the scale colors, possible because each scale comes from a single cell with control of hue and texture, the combination producing different shadings and sometimes even iridescence. Lastly there was pattern, the genetics of placing colors in ordered arrangement. Spots in general and eyespots in particular start in the caterpillar stage, where an organizer puts them in the right position on the wing, a disc at this point. Colors are added in the chrysalis phase so that the adult butterfly wing emerges after metamorphosis with spots. These are usually at the margins of the wing so that a predator would strike there first, removing only a small portion of the wing as the butterfly flitted to safety. The efficacy of this is demonstrable, as many lepidopterans are found with a bite out of one wing. [5]

Butterflies are among the most studied of all animals, surely more a matter of beauty and ease of net capture than of their scientific import as just another type of insect. Henry Walter Bates spent eleven years in the Amazon rainforest in the mid-nineteenth century, identifying 8,000 species that were then new to science, many of them butterflies. His studies led to the observation that some butterflies had patterns that were quite similar in appearance to unrelated species that were unpalatable to birds. He hypothesized that birds would learn to avoid the unpalatable species after only a few experiences and that this would then perpetuate the verisimilitude of the mimics. When he returned to England in 1859 to recover from his epic jungle ordeal, he presented a paper on his discovery of butterfly mimicry and what he considered to be one of the best examples of the “origin of all species and all adaptations.” [6] The phenomenon, known ever after as Batesian mimicry, became one of Darwin’s favorite examples in support of his epochal Origin of Species, which had just been published. The two developed an enduring friendship, corresponding periodically on the new ideas of evolution. Bates became one of the primary adherents to the nascent theory, writing on one occasion that “I think I have got a glimpse into the laboratory where Nature manufactures her new species.” [7] The headwinds of religious dogma required decades to overcome, but gradually and fitfully the theory has gained near universal acceptance, excepting those who adhere to biblical literalism.

With the advent of DNA as a roadmap of evolutionary change, Darwin’s insight only remains a theory insofar as it cannot be proven according to the scientific method of testing, which would require going back in time to reset the biological clock. The White Admiral conversion to Red-spotted Purple is one of the most documented of butterfly DNA subjects because of the infraspecific Mason-Dixon line that separates them. Proceeding north, the White Admiral prevails, while the far south is dominated by the mimetic Red-spotted Purple. The validity of the Batesian mimicry has been well established. A thirty year data set of Fourth of July Butterfly Counts confirmed that mimicry occurs even when the population of the unpalatable Pipevine Swallowtail species is low and that a sharp phenotypic geographical transition marks the boundary. [8] Between the two extremes, there is a range over which hybridization occurs, affording a singular opportunity to study the interaction between the two variants according to DNA changes. Scientific research has established that the White Admiral variant is monophyletic (single ancestor) and that the mimetic form arose just once. The hybrids that exist in the transition zone are thought to be due to mating between the two, producing on occasion a Red-spotted Purple with faint or partial white stripes. [9] More recently, locating the mutation responsible for Batesian mimicry on the genes of two different types of butterflies (Limenitis and Heliconius) that diverged 65 million years ago demonstrated the shared genetic basis of this important survival trait. [10] Genetic confirmation provides the scientific “how” corresponding to the Batesian “why,” proof for all practical purposes of Darwin’s “theory.”

The employment of the Batesian mimicry of Limenitis arthemis in scientific research on butterfly sex must surely have been considered for the Ig Nobel Prize in biology. One of the more compelling examples of female reproductive choice is sperm retention and storage after mating for fertilization at a later, more auspicious time. In that this would enhance the survival of subsequent generations, it has evolved across the animal kingdom to include some insects, butterflies among them. It is also the case that many animals mate more than once; males with a genetically driven propensity to sire as many offspring as possible and females to ensure successful insemination with the best possible mate characteristics. It is hard to say for sure, but it may also be that both enjoy it. Among the more profound questions facing biology is whether the sperm from a second mating male displaces that of the first or whether the two mix to sire offspring of mixed paternity. Using the wing patterns that resulted as the biological metric, 17 females were mated with 34 males to conduct the experiment (it was not reported if this was consensual). The results were used to determine “insect mate-seeking strategies and individual fitness.” In that it was the first male’s sperm that prevailed, the conclusion was that it was not in the best interests of either the female or the male to mate multiple times. This then led to the conclusion that “virgin females apparently are sought by males and probably are more receptive to courtship and successful mating than are ones which have mated previously.” [11] This, at least, is the same theory espoused by some college fraternities and numerous religious denominations.

References

1. https://entnemdept.ufl.edu/creatures/bfly/red-spotted_purple.htm    

2. Marshall, S. Insects, Their Natural History and Diversity, Firefly Books, Buffalo, New York, 2006, pp 161-167.

3. Milne, L. and M. National Audubon Society Field Guide to Insects and Spiders, Alfred A. Knopf, New York, 1980, pp718-719.

4. Heikkilä, M. et al. “Cretaceous origin and repeated tertiary diversification of the redefined butterflies”. Proceedings. Biological Sciences. 22 March 2012 Volume 279  Number 1731 pp 1093–1099.

5. Brunetti, C. et al. “The generation and diversification of butterfly eyespot color patterns”. Current Biology. 16 October 2001 Volume 11 (20) pp 1578–1585

6. Bates, H. “Contributions to an insect fauna of the Amazon valley. Lepidoptera: Heliconidae”. Transactions of the Linnean Society. 21 November 1861 Volume 23 Number 3. pp 495–566.

7. Carroll, S. Endless Forms Most Beautiful, W.W. Norton, New York, 2005, pp 197-219.

8. Ries, L. and Mullen, S. “A Rare Model Limits the Distribution of Its More Common Mimic: A Twist on Frequency-Dependent Batesian Mimicry”. Evolution. 4 July 2008 Volume 62 (7) pp 1798–1803.

9. Savage, W.; Mullen, S. “A single origin of Batesian mimicry among hybridizing populations of admiral butterflies (Limenitis arthemis) rejects an evolutionary reversion to the ancestral phenotype”. Proceedings of the Royal Society B: Biological Sciences. 15 April 2009 Volume 276  Number 1667  pp 2557–2565 at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2686656/  

10. Gallant, J. et al “Ancient homology underlies adaptive mimetic diversity across butterflies” Nature Communications, 8 September 2014 Volume 5, p 4817.

11. Platt, A. and Allen, J. “Sperm Precedence and Competition in Doubly-Mated Limenitis arthemis-astyanax Butterflies (Rhopalocera: Nymphalidae)”. Annals of the Entomological Society of America. 1 September 2001 Volume 94 (5) pp 654–663.

Brown Thrasher

The Brown Thrasher is a hiker’s bird, searching for food in the wooded thickets we share

Common Name: Brown Thrasher – There is some conjecture as to the origin of the word thrasher, which could derive from a noun or a verb. The similarity between thrash and thrush, another common bird whose name sometimes becomes thrusher in English country dialect, implies a nominal origin; they are erroneously called brown thrushes on occasion. The predicate interpretation calls attention to the long tail that flails about as if thrashing. The reddish-brown upperparts and brown-black stripes across the front both evoke a consummate brownness.

Scientific Name: Toxostoma rufum – The genus is Latin for arched or bowed (toxon) combined with mouth (stoma). This  refers to the long, curved beak that is a common thrasher characteristic. Rufum is Latin for red from which rufous or reddish-brown is derived.    

Potpourri: The Brown Thrasher has a hiker’s perspective, eschewing the cityscapes of pigeons and sparrows and the bird feeders suspended over the manicured lawns of suburbia. They are found mostly strutting through brushy, unkempt woodland thickets, pausing only to probe for insects with a long piercing bill. Dressed for the part, the muted brown hues of the protective top feathers match the surrounding tree bark, and the vertical streaks that extend over the breast from head to toe are like the reedy grasses through which they pass, rendering them almost invisible. The cryptic colors are hardly arbitrary, as they share the same wooded areas with scarlet tanagers and goldfinches whose vibrant reds and yellows are unmistakable even at a distance. Each species follows its own blueprint, hammered out by the evolutionary pressures of survival. All are songbirds, and the brown thrasher is the most gifted of the lot.

Thrashers, catbirds, and mockingbirds comprise the family Mimidae, the name originating from the Greek word mimos meaning to imitate or represent, a word applied equally to mimes and mimics; as a group they are even called mimids. Most but not all of the birds in the family are noted for the complexity, variability, and length of their songs, which are not infrequently taken from other birds. The catbird is the least loquacious, preferring a harsh, “downslurred mew,” one of the tortured onomatopoeias favored by birders to render in human phonetics what emanates from bird beaks. This of course sounds like a cat’s meow, and what better name for a bird with uniform dark gray plumage which, admittedly, is not unusual as a cat color. The Northern Mockingbird is incongruously more common in the southeastern United States. It bears the sobriquet of copying and repeating as if taunting or mocking the sounds made by other birds, the ultimate mimic. [1] The adage that it is a sin to kill a mockingbird has no roots in folklore, but was rather chosen by novelist Harper Lee as a literary metaphor for innocence destroyed by evil.

Mockingbirds are mimics, but they are bested by Brown Thrashers when it comes to repertoire

The Brown Thrasher is overshadowed by its mockingbird cousin, a result of familiarity, mistaken identity, and cultural lore. Mockingbirds congregate at relatively high densities near human habitation and are, as mimids, accomplished virtuosos in their own right. Many songs of the hidden and unappreciated brown thrasher are therefore mistakenly attributed to the better-known mockingbirds. Attention to auditory detail provides a clue to the true troubadour. Mockingbirds normally repeat their purloined song phrases three times in rapid succession. This distinguishes them from both the terse catbird, with its singular tone, and the brown thrasher, which repeats each sound just twice. [2] These double-tone repetitions follow one set after another in lengthy and involved serenades that are not only melodically captivating but independently conceived without mimicry. Henry David Thoreau found them boon companions during his philosophical sabbatical at Walden Pond, recording their song as “drop it, cover it up, cover it up – pull it up, pull it up, pull it up.” This was refined to doublets by Cornell University’s Laboratory of Ornithology as “plant a seed, plant a seed, bury it, bury it, cover it up, cover it up, let it grow, let it grow, pull it up, pull it up, eat it, eat it.” [3] None of which makes any sense, but why should it?

Measuring bird song tonality is complex, requiring not only frequency band measurements but also statistical calculations. While this may raise doubts about absolute numbers for the recorded sounds that comprise a song, comparisons between species using the same methodology are surely valid. Mockingbirds have been recorded and analyzed with a repertoire ranging from 66 to 244 song types, the variation a matter of the individuality of each bird in its selection of songs to replicate and its predilection for singing in general. The songs of the brown thrasher number in the thousands, and there is some evidence that improvisation occurs spontaneously in the course of a single burst. One brown thrasher was recorded singing in staccato bursts for 113 minutes while stationary at a single perch, using a mixed repertoire of 4,654 separate doublet units. Further evaluation with a spectrum analyzer revealed 45 song segments, of which 20 were never repeated and two were repeated seven times. The entire sonata was statistically analyzed and found to consist of 1,805 separate sounds. It is widely held among ornithologists that brown thrashers are the most accomplished vocalists among the thousands of species of songbirds. But there remains a fundamental question: what is all of the singing about? It has been observed that male brown thrashers are mostly vocal in early spring as a part of territorial reckoning. However, once they are selected by a female and begin nesting, the singing stops abruptly like an avian version of musical chairs (nests in this case). While this would suggest that the intricate songfest achieves its intent (i.e., intimidating rivals and attracting mates), why the complexity? Most birds suffice with a note or two even when there is more competition and closer quarters. There must be another reason. [4]
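Taken at face value, the reported figures imply an extraordinary delivery rate. The snippet below is a minimal back-of-envelope sketch in Python, assuming only the numbers quoted above (113 minutes, 4,654 doublet units, 1,805 distinct sounds); it is illustrative arithmetic, not part of the cited analysis.

```python
# Illustrative arithmetic using only the figures quoted above for the
# single recorded brown thrasher performance.

recording_minutes = 113      # duration of the continuous recording
doublet_units = 4_654        # two-note (doublet) units delivered
distinct_sounds = 1_805      # statistically distinct sounds identified

doublets_per_minute = doublet_units / recording_minutes       # ~41 per minute
seconds_per_doublet = 60 / doublets_per_minute                # ~1.5 seconds each
distinct_per_minute = distinct_sounds / recording_minutes     # ~16 distinct sounds per minute

print(f"~{doublets_per_minute:.0f} doublets per minute, one every ~{seconds_per_doublet:.1f} s")
print(f"~{distinct_per_minute:.0f} distinct sounds per minute on average")
```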

Why birds sing at all is a matter of conjecture. There are some sounds with obvious purpose and some that can only be for amusement akin to human whistling. Non-songbirds produce mostly instinctual noises that are known as calls, like the quack of a duck or the honk of a goose. Songbirds also use calls for functional purposes like location and warning, but the song is mostly a whimsical trill. From the physiological perspective, songbirds are unique in having a “voice box” called a syrinx, which has two sides that can be independently controlled to produce two different tones at the same time. There are many unexplained sonorous behaviors, including why males are the primary singers in temperate climates (as females are in the tropics) and why there is a dawn chorus. Songbirds learn songs from their parents as nestlings in what is called the critical period and practice them after fledging, shortening the years-long human process to weeks. Absent the intricate syntax necessary for human speech, there can be no practical reason for a string of arbitrary sounds extending to the thousands employed by songbirds like brown thrashers. [5] The dawn’s early light sing-along may provide a clue ― that it has no practical function whatsoever, but is rather an expression of exuberance in the ineffable beauty of musical tones ― the sine qua non of feeling alive even for creatures that soar over treetops and dart acrobatically from limb to limb. [6]

Brown thrashers pair bond to procreate, incubate, and subsequently feed the nestlings that result. Unlike many birds, however, they are not monogamous, sometimes changing mates even in the middle of a single summer, but only after the teenagers have left the nest for good. The survival of avian species, the only extant descendants of the dinosaurs, through the Cretaceous-Paleogene extinction 65 million years ago is testimony to the evolutionary resilience of strict behavioral protocols where caring for progeny is concerned. Once the eggs are laid by the female, they must be kept warm until they hatch two weeks later as helpless altricial chicks that then have to be fed until fledged. The duties of this months-long endurance test would be impossible without the dedicated support of a mate. In the case of brown thrashers, this does not mean just finding food and guarding against predators, but sharing in chick care as well. While the numbers vary to some extent, one field observation documented that during one 14-hour period the female sat on the nest for a little under 9 hours and the male just shy of 4 hours, about thirty percent of the time. Once the eggs hatch after about two weeks of brooding time-share, the feeding frenzy begins. During one particularly long day, food deliveries began at 3:30 AM and did not end until 9:00 PM, during which time the female made 186 sorties and the male 98 for a total of 284, equating to a rate of about one meal every 4 minutes. Since what goes in must come out, the nest would become fouled to overflowing at this rate if the droppings of the sequestered chicks were not removed. Fouling of the nest is a serious taboo among almost all animals. Both male and female adults inspect the nest scrupulously on a regular basis, particularly after feeding, to collect the accumulated excrement, which the nestlings conveniently package in a mucous-membrane fecal sac that the adults carry away. [7] It is probably not coincidental that birds and mammals are the only two major groups of animals that are warm blooded and that invest long hours of diligent care in overseeing the growth and instruction of their progeny. Both attributes require a lot of time and energy and yield an impressive adaptive result.
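The attendance and feeding figures reduce to simple arithmetic. The sketch below, assuming only the numbers from the field observation reported above (a 14-hour incubation watch with roughly 9 female and 4 male hours, and 186 plus 98 food deliveries between 3:30 AM and 9:00 PM), reproduces the “about thirty percent” and “one meal every 4 minutes” estimates.

```python
# Simple checks of the nest-attendance and feeding-rate figures cited above.

# Incubation time-share during one observed 14-hour period (hours rounded)
watch_hours, female_hours, male_hours = 14, 9, 4
print(f"Male share of incubation: {male_hours / watch_hours:.0%}")   # ~29%, "about thirty percent"

# Food deliveries between 3:30 AM and 9:00 PM
feeding_minutes = 21 * 60 - (3 * 60 + 30)                            # 1,050 minutes
female_sorties, male_sorties = 186, 98
total_sorties = female_sorties + male_sorties                        # 284 deliveries
print(f"{total_sorties} deliveries, about one every "
      f"{feeding_minutes / total_sorties:.1f} minutes")              # ~3.7, "one meal every 4 minutes"
```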

Maintaining a constant body temperature inside a thin skin covered in feathers against the onslaught of environmental extremes requires a self-regulating heat engine. Heat requires the oxidation of food, the essence of metabolism. Getting more oxygen to the cells of the body requires a plumbing system designed for inhalation and transport ― the four-chambered heart of birds (and mammals) evolved as a result from its three-chambered reptilian predecessor. The extra chamber allows for a separate pulmonary loop on one side of the heart to operate at roughly one eighth of normal blood pressure (15 mm Hg as opposed to 110 mm Hg systolic in humans) so that the hemoglobin of the red blood cells can absorb oxygen in the pressure-limited lungs. [8] To make this work, a steady input of nutrient-laden food is needed. Brown thrashers, like all birds, are adept omnivores. In the warmer months, they stalk resolutely through underbrush, thrashing aside the detritus to reveal the hidden arthropod smorgasbord below. Since the etymology of thrasher is a matter more of lore than fact, there is no reason to reject the notion that the name arose from this thrashing of the underbrush, akin to threshing grain. In the less constrained and sometimes exploitive science of the last century, 266 brown thrashers were eviscerated to provide incontrovertible evidence of what they ate. Animal foods comprised 62.22 percent, consisting mostly of insects (18.14 percent beetles and 5.95 percent caterpillars), a major portion of which was used to feed the gaping maws of the brood. Vegetable foods accounted for 37.38 percent, mostly wild berries (19.95 percent), constituting the main food source during the winter months (45 percent in January and February). [9] The berry-bird connection is not coincidental, but rather an evolutionary advance of flowering plants, or angiosperms, allowing them to spread their seeds by embedding them in a nutritive dollop frequently colored red with provocative purpose.

Birds are intelligent and bird brains are advanced, contrary to the pejorative epithet. Corvids are considered the most intelligent, but mimids cannot be too far behind. Acorns, the mast of oak trees, are a popular food for brown thrashers throughout the year, but especially in November when the nuts fall to the ground. Acorns are round with hard casings to protect the oak tree seed within from being disturbed. Squirrels pick them up in their hand-like paws and gnaw through the shell with incising teeth, but birds must make do with claws and a beak. Brown thrashers have been observed excavating a shallow hole in the ground as a tool to hold an acorn firmly in place while they hammer away with repeated piercing blows to breach the protective casing. Moving the acorn from one side to the other in the depression for a better angle of attack, the bird fully removed and ate the inner nutmeat. Tool-making such as this requires the cause-and-effect reasoning that defines intelligence. [10] There is also evidence of the employment of props as coaxing tools in the training of fledglings to take the first flying leap. A parent bird was observed holding a bit of paper folded to resemble a morsel of food over the heads of the craning chicks and repeatedly pulling it away. Having inspired rapt attention, it made a swift move to a nearby branch, leaving lunging as the only option. [11] Taken together, the syntax in song, the resourcefulness in repast, and the complexity in care require at least a modicum of sagacity. The brown thrasher is a very smart bird … avian sapiens.

References:

1. Rosenberg, G. “Mockingbirds and Thrashers” National Geographic Complete Birds of North America, Jonathan Alderfer, editor, National Geographic, Washington DC, pp 495-502.

2. Robbins, C. Bruun, B., and Zim, H. Birds of North America, Western Publishing Company, Racine, Wisconsin, 1983, pp 240-241.

3. Johnson, T. “Out My Backdoor: Brown Thrashers, a Special Songster” available at https://georgiawildlife.com/out-my-backdoor-brown-thrashers-special-songster

4. Kroodsma, D. and Parker, L. “Vocal virtuosity in the Brown Thrasher”. The Auk. Volume 94 Number 4, 1977, pp 783–785.

5. https://academy.allaboutbirds.org/birdsong/       

6. Hartshorne, C. “The Monotony Threshold in Singing Birds”. The Auk. Volume 77 Number 2. 1956 pp 176–192.

7. Bent, A. C. Life Histories of North American Nuthatches, Wrens, Thrashers and Their Allies. Smithsonian Institution United States National Museum Bulletin 195, United States Government Printing Office, Washington, DC, 1948, pp 351–374.

8. Needham, W. The Compleat Ambler, Outskirts Press, p. 336.

9. Beal, F.  et al. “Common birds of southeastern United States in relation to agriculture.” U.S. Department of  Agriculture (USDA) Farmer’s Bulletin 755, 1916,  p. 11. available at https://www.biodiversitylibrary.org/page/56848073#page/13/mode/1up      

10. Hilton, B. Jr. “Tool-making and tool-using by a Brown Thrasher (Toxostoma rufum)”. The Chat. Volume 56, 1992.

11. Bent, op. cit.

Columbine

The Columbine flower has five unusually long tubular channels that lead to the nectar at the very top, an arrangement best suited for its primary pollinator, the ruby-throated hummingbird.

Common Name: Columbine, Rock bells, Meeting houses, Cluckies, Rock-lily, Honeysuckle, Jack-in-trousers – Columba is Latin for “dove” with the implication that dovelike is the intended metaphor for the flower that resembles five doves with uplifted tails and descending wings.

Scientific Name: Aquilegia canadensisAquila is Latin for eagle as the name of the genus. An avian appearance is again the likely etymology, the eagle of war supplanting the dove of peace. The species name attests to its first being identified and classified in Canada.

Potpourri: Like a five-pointed crimson crown for elven kings, the columbine evokes the magical spirits of rocky uplands. It appears as the summer sequel to the spring ephemeral flowers that emerge before the trees leaf out to attract early pollinators, persisting well into the year to entice more persistent nectar collectors. A splash of red dangling from the end of a stout, two-foot-long stem called a caudex cannot be overlooked even by the most myopic and oblivious of passers-by. It looks like different things to different people according to culture and custom. The original name columbine was assigned to the European variant (A. vulgaris) using the prevailing Romance languages to refer to a cluster of little columbae, doves in Latin. The American variant (A. canadensis) was afforded the rich diversity of colloquialisms that arose locally as pockets of immigrants settled in new communities and independently gave it their own apt mnemonic. Rock bells is perhaps the most obvious, as the yellow stamens dangling underneath like clappers must be intended for ringing. Since they grow in rocky areas, rock lily is also apt, since lilies are bell-shaped. Meeting houses is a bit more imaginative, and may be a translation from a Native American name evoking a colorful five-poled tepee for tribal gatherings. Honeysuckle is a compound word for any flower with a long shaft that may be plucked to extract the honey-like nectar. While Jack-in-trousers does have lascivious implications, it is likely innocuous, like Jack-in-the-pulpit. Jack was a common English term for an unkempt young man, and donning red trousers to stand out in the crowd with insouciance would make sense. [1] It is not possible to rule out, however, that something Jack concealed underneath might have been the original intent.

Like all things in nature, the peculiar shape of the columbine is not without reason. The only function of a flower is to attract a mobile pollinator to a sessile plant to carry male anther pollen from one flower to the female pistil ovary of another. The genetic variation that this imbues is why sex evolved; fun has nothing to do with it. Cursory inspection of the wild columbine blossom reveals its most telling feature: the sweet honeypot of nectar at the very top of the “dove tails” is almost impossible to reach. It is apparent that the intent of the gradual natural selection that created this cul-de-sac was to favor a specific pollinator. That this occurs has been demonstrated irrefutably many times. The most famous example involves none other than Charles Darwin, who had received a shipment of orchids from Madagascar which included one with an exceptionally long nectary; writing to a friend at the Kew Botanical Gardens, he noted that “in Madagascar there must be moths with proboscises capable of extension to a length of between ten and eleven inches.” A moth with the necessary tongue length was discovered in 1907. [2] The exotic shapes of many orchids not only have guided passageways through which only the desired pollinator can fit, but even go so far as to imitate the female of an insect to incite the male to a lustful assault. Sex is everywhere.

Color is an equally important attribute for selective pollination. It is the evolutionary result not only of the flower as attractant, but also of the visual color perception of the targeted animal population. Color vision arose about 450 million years ago (mya) with the emergence of the agnathan or jawless fish vertebrates, whose only modern survivors are the lampreys and hagfish. Color is a matter of wavelength, measured in nanometers (nm) and ranging from long-wavelength red to short-wavelength violet. The ancestral color scheme was tetrachromatic, having four cone types categorized as LWS (longwave sensitive), MWS (middlewave sensitive), and SWS2 and SWS1 (both shortwave sensitive), in addition to rods for black and white. The first three cones correspond to the same general red-green-blue spectra that comprise human color perception, with the addition of SWS1 that extends well into the ultraviolet range. The “four color” physiology has been retained by the vast majority of animals, including most fish, reptiles, amphibians, birds, and insects, which means, counterintuitively, that they “see” more than we do, up to and including ultraviolet light. Almost all land mammals are dichromatic, having lost two of their four cones. The primates evolved from their two-color-cone mammalian ancestors about 50 mya, adding a third (red) cone for reasons that are and always will be subject to conjecture. Bees, unlike the majority of tetrachromatic insects, have only three photoreceptors, with spectral peaks at 340 nm, 440 nm, and 540 nm, which means that they can see ultraviolet light quite well but are deficient at the red end of the spectrum. [3]
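To make the wavelength comparison concrete, the sketch below classifies the three bee photoreceptor peaks quoted above against approximate human color bands; the band boundaries (visible light from roughly 380 to 750 nm, with red beginning near 620 nm) are conventional approximations introduced here for illustration, not values from the cited source.

```python
# Rough classification of the bee photoreceptor peaks cited above against
# approximate human color bands; band edges are conventional approximations.

BANDS = [                       # (upper wavelength limit in nm, label)
    (380, "ultraviolet (invisible to humans)"),
    (450, "violet-blue"),
    (495, "blue"),
    (570, "green"),
    (590, "yellow"),
    (620, "orange"),
    (750, "red"),
]

def human_band(nm: float) -> str:
    """Return the approximate human color band for a wavelength in nm."""
    for upper, label in BANDS:
        if nm < upper:
            return label
    return "infrared (invisible to humans)"

# The three bee photoreceptor peaks quoted in the text
for peak in (340, 440, 540):
    print(f"{peak} nm -> {human_band(peak)}")
# 340 nm lies below the human visible range (ultraviolet), and the longest
# bee peak (540 nm) sits in the green, well short of the red band near
# 620-750 nm; hence bees see ultraviolet well but are deficient in red.
```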

The wild columbine is scarlet red and has a deep nectary to attract a specific pollinator – the ruby-throated hummingbird. It is certain that the flower and its ecological bird partner coevolved, the columbine becoming redder and longer and the hummingbirds seeking columbines more exclusively as they were much more likely to find nutrition that was, to all intents and purposes, saved for them. The columbine species that is indigenous to Europe is blue in color, has shorter nectar spurs, and is not as tubular in shape. It is pollinated largely by ultraviolet-seeking bees, which would miss it if it were red; there are no hummingbirds in Europe. In the western half of North America, there are about twenty species of columbine that range from white and yellow to blue, with broad, open petals that extend horizontally as a landing pad for flies and bees that alight to crawl down to the nectar. The blue Rocky Mountain Columbine (A. caerulea) was selected as Colorado’s state flower in 1891 based on a vote by school children; its name is now inextricably linked to the 1999 Columbine High School massacre, an inaugural event in an era of gun violence in schools that has yet to abate. [4]

There are consequences to hiding nectar at the end of a long and delicate tube for the exclusive use of a chosen species. The struggle for survival is inexorable, with hunger as its gnawing impetus. It is not uncommon to find a columbine with bore holes at the top of the dove tails where an enterprising insect has made its way to the ambrosia within. There are ways to stop marauders, and the chemical factory of plant physiology solves these types of problems by evolving countermeasures in the form of repellents that may also be toxic. [5] The compounds that emerged from the defensive measures of flora against fauna are nature’s pharmacopeia, co-opted by humans over time. Herbal medicines were the province of medieval apothecaries, dispensing curative roots and fruits based on the collective wisdom of millennia of trial and error. The European columbine was apparently not one of them, however, as John Gerard, one of the most notable of the early herbalists, does not prescribe it. Rather, he extols its uniqueness as “five little hollow horns … of the shape of little birds” and notes that “they are set and sown in gardens for the beauty and variable color of the flowers.” [6] The wild columbine of North America was another flower altogether.

A. canadensis was widely used as an herbal remedy by Native Americans, who were as adept in the use of plants for medicinal purposes as their counterparts in Europe. The difference between the two columbines may stem from the need for chemical protection in the parlous American wilderness, or perhaps from the welcoming of insect pollinators by A. vulgaris that its American cousin restricts in favor of the hummingbird. Whatever the reason, wild columbine has a rich history of practical applications that differ according to tribal custom, as there was limited cultural interchange. Among the more interesting formulations were a wash made from columbine leaves to treat poison ivy itch by the Iroquois (a confederation of six tribes), an infusion for “heart troubles” by the Cherokee, and a perfume for smoking tobacco by the Meskwaki. Young bachelors of the Omaha tribe chewed columbine seeds into a paste to spread on their blankets and bodies as perfume for prospective mates. [7] It would follow that the suggestive red flowers would be used as an adornment to further the intent of the tryst, although there is no record of this (there being no medicinal purpose). There is no evidence that any of these uses resulted in actual medical benefit other than what might have been conveyed by the placebo effect. A popular field guide to medicinal plants ascribes vague astringent, diuretic, and anodyne uses to the wild columbine with the provocative warning that it is “potentially poisonous.” [8] Since all medicine is governed by dose, this could be said of nearly anything used to excess.

European naturalists followed in the footsteps of early explorers and colonists to study and eventually classify and catalogue the cornucopia of the New World. The variety of new plants and animals, each carefully described in Latin, overwhelmed established European biology, such as it was. To restore order out of the ensuing chaos, Carolus Linnaeus, a Swedish botanist and physician, devised a taxonomic system based on genus and species, published as Systema Naturae in 1735, that is still in use today. The red columbine from the Americas was one of the earliest plants to be identified and exported, probably due in part to its physical resemblance to the European flower of the same name and only accentuated by its flame-red florescence. The Canadian species name provides its provenance … it was first sent by French explorers to the noted Parisian botanist Jacques-Philippe Cornut, who wrote a treatise on Canadian plants in 1635 without ever travelling there to see them for himself. Cornut sent a specimen to John Tradescant in Lambeth, London, who renamed it Virginia columbine in 1656. [9] The botanical dispute was a microcosm of the global struggle between France and England that culminated in the French and Indian or Seven Years’ War one hundred years later. Tradescant and his son were the first English horticulturalists to collect extensively in the Jamestown colony as members of the Worshipful Company of Gardeners. During one of the three trips that the younger Tradescant made to Virginia between 1637 and 1654, he allegedly fell in love with a Powhatan girl and promised to marry her on his return. The Smith-Pocahontas affair was evidently no mere fluke. However, when he sailed back to England, he was introduced to the wife that his father had chosen. Although reportedly devastated, he catalogued the contents of what became known as the Lambeth Ark in 1657. It became the British Museum of Garden History in 1981. [10]

The columbine has not escaped the notice of poets, its beauty the inspiration for lofty verse and mellifluous metaphor. John Burroughs, the notable literary naturalist of the nineteenth century, was especially fond of it. In his poem Columbine, he extolled its many virtues:

            I strolled along the beaten way,
            Where hoary cliffs uprear their heads,
            And all the firstlings of the May
            Were peeping from their leafy beds,
            When, dancing in its rocky frame,
            I saw th’ columbine’s flower of flame.

            Above a lichened niche it clung,
            Or did it leap from out a seam?
            Some hidden fire had found a tongue
            And burst to light with vivid gleam.
            It thrilled the eye, it cheered the place,
            And gave the ledge a living grace.

There is indeed something almost surreal about an encounter with the crimson columbine along a rocky trail where few things grow and the palette of green, gray, and brown prevails. It is nature’s way of reminding awkward bipeds stumbling along uneven trails that we are also its product ―  there is room for both beauty and the beasts.

References

1.  https://www.fs.fed.us/database/feis/plants/forb/aqucan/all.html   

2. Needham, W. The Compleat Ambler, Outskirts Press, pp 80-81.

3. https://hikersnotebook.blog/other-articles/geology-and-earth-science/colors-of-nature/

4. Sanders, J. Hedgemaids and Fairy Candles, Ragged Mountain Press, Camden, Maine, 1995. pp 23-25.

5. Adkins, L. Wildflowers of the Appalachian Trail, Menasha Ridge Press, Birmingham, Alabama, 1999, pp 138-139

6. Gerard, J. Gerard’s Herball – Or, Generall Historie of Plantes, John Norton, London, 1597 pp 69-70.

7. Ethnobotanical database http://naeb.brit.org/uses/search/?string=columbine

8. Duke, J. and Foster, S. Peterson Field Guide to Medicinal Plants and Herbs, Houghton Mifflin Company, Boston, 2000, p 153.

9. Drake, J. “Growing from Seed” The Seed Raising Journal from Thompson & Morgan, Winter 1987-88 Vol. 2 Number 1. https://www.thompson-morgan.com/aquilegias-article     

10. https://people.elmbridgehundred.org.uk/biographies/john-tradescant/   

11. https://poems.one/poem/john-burroughs-columbine