Eastern redcedar

Eastern redcedar is rarely seen standing alone in a field, as it spreads readily

Common Name: Eastern redcedar, Red cedar, Red juniper, Cedar apple, Virginia red cedar  – Cedar is from kedros, the name for the tree in Greek, which probably is derived from kadru, Sanskrit for tawny. Cedar trees were well known in antiquity and their aromatic wood was renowned throughout the Mediterranean region. The bark is red-tinted, the “red” distinguishing it from the lighter colored white cedar. The tree is indigenous to the eastern half of North America as the counterpart to the western redcedar.

Scientific Name: Juniperus virginiana – The generic name is the Latin word for juniper, a shrubby evergreen of the northern latitudes, also well known in antiquity. The etymology is unclear but it may originally have come from a word for reed or stem to describe the twiggy leaf structure. The tree was first encountered by European naturalists in the colony of Virginia.

Potpourri: The Eastern redcedar is the most widely distributed conifer in the Eastern United States. It has two starkly contrasting reputations. On the positive side, it has historically been considered one of the most important indigenous trees in North America, with multiple uses among native peoples. As a curative agent, a ready source of wood, and an insect repellent, it permeated Indian culture, which was largely adopted by the pioneering Europeans as they moved inland and learned to endure the same hardships. Red cedar was therefore an equally important mainstay of early settler homesteads east of the Mississippi River, sought after as a valuable commodity. In the modern era that spans the last century, it has taken up a more sinister role as a scruffy roadside eyesore, rising to near-invasive status in many disturbed areas. It occupies monoculture stands along major highways, challenged only by the equally prolific Ailanthus/tree of heaven. However, there is a qualitative difference between an invasive alien and a widespread native. Introduced plants like Ailanthus invade new habitats free of local predators and devoid of competition to dominate the sun’s energy and the soil’s nutrients. Indigenous plants like Eastern redcedar that have evolved to thrive on meager resources in hardscrabble environments are honest competitive pioneers.

Eastern redcedar also has a split personality by name ― it is a juniper and not a cedar. However, it is literally a family affair since both cedars and junipers belong to Cupressaceae, the resinous evergreen family of trees and shrubs commonly called cypress. With about 130 species worldwide, cypresses are characterized by scalelike leaves on flattened twigs, unlike the needles of pines, hemlocks, and firs. [1] The cedar name preference was likely a result of English colonists whose religious affiliations would have favored a common tree name with more biblical resonance. While the juniper is mentioned in the Bible, the cedar is literally foundational; King Solomon built the temple in Jerusalem from the Cedars of Lebanon (Cedrus libani). A cursory inspection at the fruiting time of year would have revealed the error; junipers have “berries” and cedars do not. In reality they are not berries but small, round, dark blue cones ― verisimilitude by design. Since both berries and cones are the fruiting bodies of their respective plants and carry the all-important seeds, convergent evolution favoring propagation is evident.

The juniper berries of red cedar are small, blue cones.

Conifers have cones and evergreens are always green. However, not all conifers are evergreen; the larch and the bald cypress are deciduous, losing all their needles annually. Conifers are gymnosperms of the order Pinales. Gymnosperm literally means “naked seed,” distinguishing them from the angiosperms with seeds encapsulated in a developed ovary ― fruit like an apple or peas of a pod. The angiosperms are the most advanced land plants, their survival enhanced by sweet-tasting fruit attracting animal herbivores that spread reproductive seeds to germinate in nutrient-laden fecal droppings. Naked seed plants, lacking fruited ovaries, usually produce robust cones. Cones come in two types: male pollen cones and female seed or ovulate cones. Most conifer trees are monoecious, with a single tree producing both male and female cones. [2] The Eastern redcedar is dioecious, with separately sexed trees. Male tree pollen cones are ephemeral, forming upright structures called staminate strobili or conelets that release clouds of wind-borne pollen in spring, of which a vanishingly small percentage will land on the ovulate cones of downwind female trees. The tough, woody ovulate cones of most conifers are a palladium for the development of the fertilized seed on which the future of the plant ultimately depends, their naked seeds dispersed and carried away by the wind or dropped to the ground below. However, in the case of the Eastern redcedar, green scales form as an outer protective coating over an unusual non-rigid, berry-like cone. As the season progresses, the color of the “berry-cone” changes from greenish white to a distinctive blue when mature, mimicking the progression of flowering plant berries from green to red or blue-black. [3]

Not all evergreens have cones either. Holly, mountain laurel, and rhododendron retain non-needle leaves perennially. The cedar cum juniper is a full-fledged evergreen conifer … both attributes have purpose. The environmental factors that favor evergreen over deciduous growth are related to sun and soil nutrient resources. At high elevations and northern latitudes, growing season insolation is insufficient for annually regrown leaves to absorb enough sun energy for sustainment. Equally, poor soils with diminished nutrients cannot support annual leaf regeneration. Nature’s answer to not being able to grow a new set of leaves every year is to keep them so that the tree is always, that is ever, green. The preponderance of narrow leaf shapes like pine needles and cedar scales is related to both water retention, as reduced surface area equates to less evaporation, and resistance to storm and snow damage in winter when other trees are bare. Evergreen trees do replenish their greenery like their broadleaf cousins, but they do so incrementally rather than all at once. The needles of pine trees are replaced about every four years while those of cedars and junipers are on a roughly ten-year cycle. [4]

Eastern redcedar thrives in marginal soil habitats, encroaching on grasslands that are essential to livestock operations. For example, it is estimated that over seven hundred acres of rangeland are lost every day in Oklahoma due to Eastern redcedar. [5] While this has understandably raised a hue and cry from the cattle-beef industry, anything that cuts back on the contribution of methane-belching cows to climate change is at worst equivocal. Long term research studies have revealed that “encroachment by J. virginiana into grasslands results in rapid accretion of ecosystem C and N in plant and soil pools.” [6] In other words, “invasive” red cedars not only sequester carbon but crowd out cows, doubling their greenhouse gas reduction efficacy.
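
To put the cited Oklahoma encroachment rate in perspective, a rough annualization (a back-of-envelope illustration, not a figure from the source) is:

    700 acres/day × 365 days/year ≈ 255,000 acres/year ≈ 400 square miles/year (at 640 acres per square mile)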

Cedar apple rust fungus emits spores that infect apple trees.

There is one characteristic of red cedar range expansion that weighs against its otherwise positive environmental credentials. As an integral partner in a ménage à trois with a fungus that involves apple trees, cedars are complicit in crop damage. Fungi developed some peculiar relationships with plants as they evolved in the dark backwaters of the ecological web. One of the most interesting is heteroecism, or two-host parasitization. Eastern redcedar trees are linked to apple trees by a fungus aptly named Gymnosporangium juniperi-virginianae, commonly called cedar apple rust. The biennial cycle starts as the fungus forms mycelial galls on the tips of red cedar branches; the mycelium is the main body of the fungus. In spring, hornlike projections grow outward from the gall, bearing billions of red-orange spores. The windborne spores are carried aloft and afield ― a minuscule percentage will land on apple (and crabapple) trees, where they germinate. Yellow spots appear on the apple tree’s fruits and leaves the following spring, from which a second set of spore-bearing, tube-like structures release spores that germinate only on red cedar leaves. That one fungal species requires two alternating hosts to survive is not unique; barberries and wheat are conjoined in a similar arrangement with wheat rust. [7] Why and how this duality evolved is a matter of some conjecture, but the fungus can be eradicated by getting rid of one of the two hosts, as was done with barberry. Due to the expansive nature of red cedar, it is impractical to remove trees; fungicides and planting rust-resistant apple varieties are the primary remediation practices.

The seeds of Eastern redcedar enclosed in “angiosperm-like” cones mimic the berries that attract animals, especially birds. The cedar waxwing is named for its preference for these “juniper berry” fruits. Field studies have found that it takes about twelve minutes for the juniper berry and its seeds to pass through the avian digestive tract and that the seed thus “processed” has a germination probability three times that of a berry-cone seed that simply falls to the ground (as all uneaten cones eventually will). It is in this manner that Eastern redcedar extends along rural fence lines, which serve as roosts for engorged cedar waxwings. Juniper berries are also a popular food source for many other birds, including robins, ruffed grouse, and turkeys, as well as small mammals like raccoons, skunks, and opossums. [8] The evident desirability of Eastern redcedar cones to diverse animal populations is indicative of some evolution of the former to suit the latter. With high concentrations of fat and fiber, moderate levels of calcium, and, most importantly, substantial carbohydrates for metabolic energy, they are excellent sources of nutrition. However, just because animals eat them does not mean that humans can. Juniper berry edibility is tenuously acceptable depending on which of the thirteen different species of juniper is ingested. In any case, juniper berries are mostly too resinous for human tastes, suitable only for seasoning or perhaps for tea when roasted and ground. The flavor of the berries is distinctive; the French name for juniper is genièvre, from which gin, a liquor flavored with juniper berries, derives. [9]

Eastern redcedar was widely used by Native Americans according to region and tribal customs. The leaves and twigs of the tree were steeped in water to extract chemical constituents as a ptisan, a natural tea administered orally for the treatment of respiratory ailments like colds and coughs by the Cherokee, Cheyenne, Flathead, Nez Perce, Sioux, and the Haudenosaunee or Iroquois Confederacy. The aromatic properties of cedar were volatilized by burning, the incense an important part of Kiowa prayer meetings, Lakota funerals, and a treatment to reduce Seminole anxiety. The wood, with its inherent resistance to decay and insect infestation, was used by the Ojibwa for wigwam construction, the Navajo for a war dance ceremony wand, and among various tribes for everything from musical flutes to canoes. [10] The use of red cedar by American colonists based on Indian antecedents was well established by the 18th century, as noted by the Swedish botanist Peter Kalm during his North American travels in 1749. He chronicled that it was among the most durable of all woods used in home and boat construction, and that “some people put the shavings and chips of it among their linen to secure it against being worm eaten,” the origin of the cedar chest. [11] Institutional medicinal applications of cedar were well established by the 19th century; it was listed as a diuretic in the U.S. Pharmacopeia from 1820 to 1894. Oil of cedar, sometimes called cedarwood oil, has been included as a reagent in the Pharmacopeia since 1916 and is used in aromatherapy and as an insect repellent. [12]

Juniperus virginiana is notably resilient, producing chemicals that protect it from everything from microbe attack to insect maceration. These properties, moderated by limited dosage, are the basis for its use in human health to arrest bodily access by biotic invaders. One of its constituents is podophyllotoxin, named for Podophyllum peltatum, commonly called mayapple, from which it was first isolated. It is currently prescribed as an antiviral topical treatment for genital warts, with many emerging applications ranging from cancer and multiple sclerosis to arthritis and psoriasis. [13] There is also something to the documented use of the soothing smell of cedar as a treatment for anxiety by Seminole herbalists. Testing with laboratory mice has shown that cedrol, one of the constituents of cedarwood oil, produces anti-anxiety or anxiolytic effects measured by performance in maze behaviors. Physiologically, it increases the amount of dopamine, a neurotransmitter known for its positive behavioral effects. [14] Cedar wood and juniper berries as insect repellents are equally valid according to recent research. The resinous exudate of its berries has antiparasitic and nematicidal (worm-killing) properties and its wood resin is antibacterial. [15] Eastern redcedar/juniper is a tree for all seasons, but especially Christmas, with cedar apple rust ornaments to boot.

References:

1. Little, E. The Audubon Field Guide to North American Trees, Alfred A Knopf, New York, 1986, pp 305-315.

2. Wilson, C. and Loomis, W. Botany, 4th Edition, Holt, Rinehart and Winston, New York, 1967. pp 549-570.

3. United States Forest Service database https://www.srs.fs.usda.gov/pubs/misc/ag_654/volume_1/juniperus/virginiana.htm   

4. Kricher, J. and Morrison, G. Peterson Field Guide  to Eastern Forests, Houghton Mifflin Company, Boston, 1988, pp 8-9, 279.

5. https://www.noble.org/news/releases/oklahoma-must-address-cedar-encroachment/ 

6.  McKinley, D.; Blair, J. “Woody Plant Encroachment by Juniperus virginiana in a Mesic Native Grassland Promotes Rapid Carbon and Nitrogen Accrual”. Ecosystems. 1 April 2008 Vol. 11 No. 3: pp 454–468. 

7. Stephenson, S. The Kingdom Fungi, Timber Press, Portland, Oregon, 2010 p.182.     

8.   Barlow, V. “Eastern Redcedar, Juniperus virginiana”, Northern Woodlands, Winter 2004 https://northernwoodlands.org/articles/article/eastern_redcedar_juniperus_virginiana/   

9. Angier, B. Field Guide to Edible Wild Plants, Stackpole Books, Mechanicsburg, Pennsylvania, 2008, pp 110-111.    

10. Native American Ethnobotany Data Base. http://naeb.brit.org/uses/search/?string=juniperus%20virginiana&page=1

11. Kalm, P. Travels into North America; Containing Its Natural History, and a Circumstantial Account of Its Plantations and Agriculture in General, with the Civil, Ecclesiastical and Commercial State of the Country, the Manners of the Inhabitants, and Several Curious and Important Remarks on Various Subjects. 1772. Translated into English by John Reinhold Forster. Vol. 1 (2nd ed.). London: Printed for T. Lowndes, No. 77, in Fleet-street.

12. USDA Plants Data base. https://plants.sc.egov.usda.gov/home/plantProfile?symbol=JUVI  

13.  Cushman, K. et al “Variation of Podophyllotoxin in Leaves of Eastern Red Cedar (Juniperus virginiana)”. Planta Medica May 2003. Vol. 69 No. 5 pp  477–478. 

14. Zhang, Kai; Yao, Lei “The anxiolytic effect of Juniperus virginiana essential oil and determination of its active constituents”. Physiology & Behavior May 2018  Vol. 189  pp 50–58.

15. Samoylenko, V. et al “Antiparasitic, nematicidal and antifouling constituents from Juniperus berries”. Phytotherapy Research. December 2008.  Vol. 22  No. 12 pp 1570–1576.

Great Lobelia

The Great Lobelia is an imposing flower on a tall stem, ideal for pollinators.

Common Name: Great Lobelia, Gagroot, Asthma weed – Use of the scientific genus for a common name is not unheard of in botany, but it is unusual. It would be logical but wrong to associate the name with “lobe,” a round projecting part like those of the multi-petal blossom. Since common names arise randomly as mnemonics, it is probable that the “lobe-like” name was good enough. The true etymology of the common/genus name honors Matthias de l’Obel, the sixteenth-century Flemish physician to both Prince William of Orange and King James I of England. Two other notable flowers in the genus Lobelia have more descriptive common names: Cardinal Flower and Indian Tobacco.

Scientific Name: Lobelia siphilitica – The species name is recognizable as a Latinized version of syphilis, the (mostly) sexually transmitted disease (STD) that was the scourge of Europe in the sixteenth and seventeenth centuries. The plant was at one time considered a curative.

Potpourri: The “Great” Lobelia deserves the honorific reserved for historically important leaders like Alexander and imposing pyramids like Khufu by virtue of its superlative floral attributes. Up to four feet tall, it is liberally spangled with clusters of irregular blue flowers that extend outward in five pointed lobes like grasping, gloved fingers. The name lobelia evidently stuck as it is overendowed with lobes, even though that has nothing whatever to do with the name, which honors the Flemish physician Matthias de l’Obel (“de l’” is “of the” in French), who wrote several books on botany with emphasis on medicinal properties. This is apropos, as the other common names gagroot and asthma weed are evidence of a deep relationship that people have historically had with this plant and its several cousins. The lobelias produce some potent chemicals that have historically been used in diverse herbal remedies for maladies both real and imagined; syphilis is just one of them.

The lobelias are in the Bellflower Family Campanulaceae (campana means bell in Latin), named for the prevalence of radially or bilaterally symmetrical tubular bell-shaped flowers. With the exception of the bright red Cardinal flower (L. cardinalis), most range in color from lilac to blue. [1] Since the function of flowers is to attract mobile animals to sessile plants to transport male pollen from one to the female stigma in another, different colors and shapes can only have evolved for that purpose. Differences in color are especially noteworthy when two species in the same genus have a common origin and physiology but differ markedly in hue. A vivid, eye-catching red was chosen for the color of the robes of the highest officials in the Roman Catholic Church below the pope, who were cardinalis, Latin for principal. The name as color carried over to the bird and flower without ecclesiastical implications. The cardinal flower can only have evolved to attract a specific type of pollinator … perhaps a single species. Lobelias with their drooping tubular bluish flowers are well suited for bee pollination; bee vision extends through the blues into the ultraviolet range, but bees cannot see red. Cardinal flowers grow in boggy habitats in dense stands, evidence of multiple germinations at the same location. They are frequently attended by butterflies flitting from flower to flower, especially spicebush swallowtails. It is quite probable that this has something to do with the color, which is attractive to butterflies while unseen by bees.

Cardinal Flowers attract Spicebush Swallowtail butterflies

The purpose of sex is genetic diversity, a simple fact often misconstrued in the neo-culture of gender preference, both real and perceived. Nature goes to great lengths to ensure that genetic DNA is mixed and matched to produce the variation on which adaptability depends. The fossil record is a road map of what has and has not been able to change to meet new environmental challenges such as that induced by a meteor crashing into the earth near Chicxulub, Mexico 65 million years ago. Floral diversity can only be achieved via transport of male gametes from one plant to the female gamete of another. Dioecious species like maple and holly trees have separate male and female plants to promote diversity. More common are species like the lobelias that have both sexes in the same flower. Since the stamens of a flower that contain the male pollen are situated adjacent to or directly over the female pistil, self-pollination instead of the preferred cross-pollination would be the more likely outcome absent some evolutionary legerdemain. The term proterandrous, literally “before-male,” applies to lobelias. It means that a flower’s male pollen reaches sexual maturity before the female stigma is receptive. A pollinator would then be more likely to carry pollen from one flower whose stigma is not yet receptive to another whose stigma is. Several days after opening, the stigma curls backward to come in contact with pollen dropped into the base of the flower from its own anthers. This functions as a backup, promoting self-pollination should pollinators fail to deliver. This then would assure survival, although lacking the genetic diversification of DNA contributions from two different plants. [2]

In order to mature to reproductive age and produce seed for future generations, plants must survive to sexual maturity. Chewing insects and browsing herbivores must therefore be held at bay. This is the basis for spines and thorns that keep animals away, but also for the chemistry of taste and aroma as deterrents. Plants evolve random mutations that create compounds against specific threats. The toxic “milk” in milkweed is exuded when the stem is punctured to keep sap-sucking bugs away. Some animals adapt to tolerate these toxins and employ them as their own deterrents. Monarch butterfly larvae that eat milkweed are a good example, as both the caterpillar and the adult gain the advantage of the plant’s poisons to escape predation. In a similar vein, herbal medicine is the use of plant-produced chemicals to promote human health. The difference is sentience … herbs are chosen for specific conditions based on human knowledge. Since the Great Lobelia is named L. siphilitica, syphilis provides a good case in point. The disease first appeared in Italy in 1494, infecting many of the French soldiers besieging Naples, who spread it throughout Europe as “the French Disease.” The name syphilis comes from an epic poem attributing the disease to a shepherd named Syphilus who had offended the god Apollo. As punishment, “A hideous leprosy covers his body; fearful pains torture his limbs and banish sleep from his eyes.” [3] It remained a scourge of Europe into the 18th and 19th centuries, infecting many notable composers, philosophers, and musicians including Mozart and Beethoven. [4]

Syphilis is caused by Treponema pallidum, a spirochete related to the microbe that causes Lyme disease, which has similar symptoms. It is spread mostly through “intimate sexual contact,” the coarse vernacular now only too common. It was almost certainly imported from North America by lascivious Columbian mariners debarking in Italy. Bone samples that pre-date any association with Europeans confirm that the disease occurred in Native American populations. The debilitating consequence of syphilis as it spread unchecked through the upper echelons of European society (one need not wonder why) precipitated a pressing need for treatment. Before the age of pharmaceutical drug trials, the only option was to identify a naturally occurring compound by trial and error; searching in North America where it started was the logical thing to do. Herbalists who had begun to study the unique flora of the New World in the eighteenth century, working on occasion with Native Americans, learned of a treatment for syphilis using lobelia plants. [5] By the time that Carolus Linnaeus started cataloguing plants by genus and species in about 1735, the use of lobelia plants for treating syphilis was well enough established to warrant assigning the scientific name Lobelia siphilitica. The fact that the treatment ultimately failed to cure the disease in Europe was attributed to deterioration of the relevant lobelia compounds on the long sea voyage. [6] It is much more likely that it didn’t do much good for the Native Americans either.

The several species of flowers in the genus Lobelia were used for a wide variety of treatments by different tribal groups as a matter of local lore and cultural practice. The Iroquois, a confederation of six tribes in the northeast, used parts of the roots and stems of the cardinal flower for just about everything, considering it a panacea by itself but also a complementary adjuvant when mixed with other herbs. It was even taken for sickness (presumably depression) caused by grieving. The Cherokee, native to a vast territory comprising a large portion of the southeast, were more selective, using lobelia compounds for specific ailments like fever, rheumatism, and stomach problems. That they diagnosed and treated the disease called syphilis with lobelia is likely the fons et origo of its purported curative power. The Cherokee are also closely associated with a third species of lobelia called Indian Tobacco (L. inflata). The common name implies that it was used as a substitute for tobacco (genus Nicotiana, from which nicotine is derived), which was widely used by native peoples throughout the Americas. According to the historical record, however, it was used to break the nicotine habit rather than as an alternative smoke. [7] There would have been no need for a tobacco substitute, as tobacco itself was quite common. Indian tobacco was also used medicinally as a strong emetic, which is appropriate, since it has the highest concentration of the “medicinal” compound shared by all lobelias.

Indian Tobacco is medicinally the most potent of the Lobelias.

The efficacy of historic herbal remedies such as lobelia extracts can only be determined using science-based methods to distinguish snake oil from bona fide medicine. However, it is almost never cost effective to do so, since human trials are exorbitantly expensive and wild plants are free. It is well established that the effective chemical in lobelia plants is an alkaloid named lobeline according to generic custom. It is similar in structure to nicotine, producing commensurate physiological effects. Lobelia extracts have been used in a variety of products like chewing gum and patches marketed to break the tobacco habit, emulating the Cherokee practice. In nineteenth century America, when treatment options were primarily limited to extracts from plants and animals, lobeline was one of the most popular. The common names gagroot and pukeweed suggest that it was often used as an emetic. This should not come as much of a surprise, because ingesting a poison is almost certain to induce the stomach to eject it along with everything else. There are anecdotal suggestions that death may have resulted from using lobelia as a home remedy. [8] Recent research has shown that herbal supplements, lobeline among them, can have adverse cardiovascular effects, particularly when used in combination with other drugs. [9] While there has also been some research with animals to attempt to validate lobeline as a viable drug, the current consensus in the medical community is that “lobelia is not effective for smoking cessation, asthma, or any other medical condition.” [10] However, the jury is still out on lobeline, which has been shown to improve patient response to multi-drug resistance, a problem in chemotherapy. [11] It is fair to conclude that Native Americans were onto something whose full potential has yet to be realized. Perhaps one might look to the Meskwaki Indians for inspiration. They used chopped up lobelia sprinkled in the food or on the beds of quarrelling couples as a means of easing marital discord. [12] It would likely not be too difficult to recruit drug trial participants for a love potion.

References:     

1. Niering, W. and Olmstead, N. National Audubon Society Field Guide to North American Flowers, Alfred A. Knopf, New York, 1998 pp 438-442

2. Gadella, T. W. “Campanulales” Encyclopedia Britannica, Macropedia William Benton Publisher, Chicago, 1974, Volume 3 pp 704-708.

3. Fracastor, Hieronymus, Syphilis, The Philmar Company, St. Louis, Missouri 1911. pp 1-58.

4. Franzen, C. “Syphilis in composers and musicians – Mozart, Beethoven, Paganini, Schubert, Schumann, Smetana”. European Journal of Clinical Microbiology & Infectious Diseases. 1 July 2008 Volume 27 No. 12 pp 1151–57.

5. Foster, S. and Duke, J. A Field Guide to Medicinal Plants and Herbs, Houghton Mifflin Company, Boston, 2000. pp 163-164.

6. http://naturalmedicinalherbs.net/herbs/l/lobelia-siphilitica=great-blue-lobelia.php  

7. Native American Ethnobotany Database. http://naeb.brit.org/

8. Foster, op cit.

9. Cohen, P. A.; Ernst, E. “Safety of herbal supplements: A guide for cardiologists”. Cardiovascular Therapeutics. August 2010 Volume 28 Number 4 pp  246 – 253.

10. Memorial Sloan Kettering Cancer Center. https://www.mskcc.org/cancer-care/integrative-medicine/herbs/lobelia   

11. Ma Y. “Lobeline, a piperidine alkaloid from Lobelia can reverse P-gp dependent multidrug resistance in tumor cells”. Phytomedicine. 15 September 2008 Volume 15 No. 9 pp 754–758.

12. Harris, M. Botanica, North America, Harper Collins, New York, 2003, p 89.

Wood Frog

The most recognizable feature of the wood frog is the black “robber’s mask” eye stripe.

Common Name: Wood Frog – Frog is among the oldest of Indo-European words originating as the Sanskrit pravate, meaning “he jumps up.” It evolved to English through Old Norse as frauki. Wood frogs are found around wet areas in woodland habitats, but not on wood as the name suggests. The reference may be to its characteristic brownish hues which are similar in color to wood bark. However, brown frog would then be a better choice and no less uncreative.

Scientific Name: Rana sylvatica – Rana is the Latin word for frog, which differs from the Sanskrit origin as an onomatopoeia of their call … like croak or ribbit in English. The Latin word for woodland is silvae. The scientific name is literally “frog of the wood,” the common name in reverse order. Wood frogs have been reclassified by modern DNA taxonomy to Lithobates sylvaticus, from the Greek lith meaning “stone” and bates meaning “one who treads,” which would connote “stone walker.” This could be literal, as is the case for the wood frog depicted above climbing on a lichen-covered rock.

Potpourri:  Wood frogs appear in the spring after having endured even the coldest of winters as if immigrating from remote, warmer habitats, like anuran snowbirds. Surely an amphibian noted for its slimy wetness cannot have survived near frozen-through skateable ponds that dot the woods they inhabit. But they do. The extraordinary tenacity of life in the savagery of the wild is the result of the survival of mutants.  After the basics of what it took to be a frog were successfully worked out in the deep recesses of time, populations of jumping, amphibian carnivores lurking in or near water burgeoned. To escape the crowds competing for the same resources, the more adventurous individuals left for greener, but sometimes colder, pastures. The resulting diaspora to new environments is one driving force for speciation. Wood frogs, like humans among mammals, have managed by sheer luck to evolve in the right direction to become among the most successful of their amphibian cohorts. Not only do they survive arctic winters, but they are first to emerge in spring to fill any emergent pool of water with thousands of eggs. It is only a matter of time until a new mutation will offer better chances elsewhere.

The first question is how do thin-skinned animals survive iced-in ponds without the coat of a beaver or the down of a duck? This conundrum perplexed naturalists whose warm-blooded judgment was skewed toward bears denning in caves and caribou gathering in tightly packed herds to share or conserve body heat. The cold blood of reptiles and amphibians lacks the metabolic wherewithal of thermoregulation. Consensus was that burrowing deep into the ground below the frost line was the only possible palladium; toads had been found buried up to four feet deep during excavations. John Burroughs, an eminent nineteenth century American naturalist, chanced upon a frozen frog under some leaf litter and concluded that “… frogs know no more about the coming winter than we do, and that they do not pass deep into the ground to pass the winter as has been supposed.” [1] Finding an animal frozen and lifeless would lead most to conclude that it died of exposure, having failed to account for weather extremes. The foolish frog theory, which would make a passable subject for Aesop’s fables or a subplot in Disney’s Frozen, is false. Frogs freeze on purpose.

Science entered the picture in the 1980s when a Minnesota-based researcher with some knowledge of frog adaptability took up the subject. The experiment consisted of collecting a number of frog species in the fall and subjecting them to freezing in the laboratory under controlled conditions. After six days at -6°C, the frozen frogs were moved to a refrigerator and thawed at +6°C. Wood frogs began to show vital signs and limb movement after three days, but mink and leopard frogs subjected to the same conditions froze to death and stayed that way. The resulting paper concluded that “an accumulation of glycerol during winter was correlated with frost tolerance, indicating that this compound is associated with natural tolerance to freezing in a vertebrate.” [2] In other words, wood frogs seemed to be making antifreeze. In the four decades that have followed this seminal experiment, further research has revealed the true nature of the wood frog’s magic.

What better place to study frozen wood frogs than Alaska, where arctic winter is the norm and spring thaw the exception? Researchers located frozen frogs in the wild and measured ambient temperatures with sensors placed directly on their skin. After two seasons with temperatures as low as -18°C and a seven-month-long period of deep-freeze suspended animation, every wood frog came back to life. [3] That the frogs survived the natural habitat test at much lower temperatures for a much longer time period than in the laboratory test led to some speculation as to the mechanics of freeze protection. Vertebrate metabolism is based on energy generated primarily from the oxidation of glucose derived from dietary carbohydrates. Excess glucose is stored in the liver and in muscle tissue as glycogen for future energy needs. The key to the deep-freeze conundrum was that in the laboratory, the temperature was lowered to below freezing just once and the frogs froze. In the wild, frogs are subjected to multiple freeze/thaw cycles according to weather fluctuations. It was discovered that each cycle ratcheted up the production of glycogen, ultimately increasing its concentration by a factor of five. To accommodate the stockpile, liver size increased by over fifty percent ― one researcher described the wood frog as a “walking liver.” When compared to wood frogs monitored in more moderate Midwest climates, the Alaskan frogs had three times as much glycogen. [4] While Darwin’s Galapagos finches provided a hint of adaptations for survival, Alaskan wood frogs are a compelling case study in affirmation.

The actual mechanism employed not only by wood frogs, but also by spring peepers, gray tree frogs, and chorus frogs to revive after freezing to death (heart stoppage and breathing cessation) is now understood to involve both glycerol and glucose in addition to some specialized proteins. Glycerol lowers the freezing point of water to protect membranes from freezing just as it does for automobile cooling systems. Glucose in high concentrations prevents the formation of ice crystals inside cells. Ice crystals are like small daggers, shredding cell membranes and wreaking havoc with organelles. This is why freezing is normally lethal to animals and why frozen vegetables that are not dehydrated turn to mush when defrosted. When a frog senses first frost, adrenaline is released to convert liver glycogen into blood glucose. This is the same mechanism that provides energy for fight or flight (and, in frogs, freeze); it is triggered by the amygdala, the brain region that activates the sympathetic nervous system for immediate action in emergencies. The difference with wood frogs is magnitude. Human blood glucose ranges from 90 to 100 milligrams per deciliter, with a diabetic threshold at 200 mg/dl. Frogs boost their glucose to as high as 4,500 mg/dl, well above the lethal level for humans, and probably for just about every other living thing. The specialized proteins act as ice nucleation sites outside the cells, where about 65 percent of total body water ends up frozen. [5] Cryobiology may well be the next frontier in the quest for life everlasting if the lessons learned from wood frogs can be mastered. [6]
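
The magnitude of the sugar surge is apparent from the figures above (an illustrative comparison, not from the source):

    4,500 mg/dl ÷ 100 mg/dl ≈ 45 times the normal human blood glucose level, and 4,500 ÷ 200 ≈ 22 times the human diabetic threshold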

Male wood frog in amplexus grip of female amid fertilized eggs.

The spring thaw fills vernal pools with the cacophony of male wood frogs courting, a behavior known as explosive breeding. As the amphibian exemplar of the early bird getting the worm, the quest for sex begins in early March, even before wet areas are free of ice. Filling the air with their duck-like quacking, male wood frogs frenetically search for something to mate with, not infrequently grasping other males and even other species, including large salamanders. The tenacious grip is called amplexus, aided and abetted by swollen thumbs and expanded foot webbing that won’t let go. [7] It is necessary because females are generally larger than males and slimy frogs are slippery. Mating success of male wood frogs is dependent on physical size, one of nature’s enduring correlations. It is also true that larger females are more likely to mate, as size in this case correlates to the number of eggs produced. After an embrace that can last for over an hour for egg fertilization, the female deposits as many as 3,000 eggs in a gelatinous, globular mass about four inches in diameter. After a time, the ball flattens and collects algae as a disguise of pond scum. One month after oviposition, the eggs hatch into aquatic tadpoles for the race against the clock to metamorphose into terrestrial wood frogs before the pool, which may be seasonal, dries up and they expire. Wood frogs can freeze but their young need water. The odds are stacked against survival, but only one tenth of one percent of the eggs in the brood must reach adulthood for survival of the species, at least for those that are fittest. [8]
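
The cited survival threshold is easier to grasp as a head count (a rough illustration using the figures above):

    3,000 eggs × 0.1% ≈ 3 frogs per clutch reaching adulthood, just enough to replace the two parents and sustain the population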

Amphibians first appeared in the Devonian Period about 400 million years ago as something like a walking fish and have never broken free from their aquatic “roots” even as evolution has run its course. True frogs of the family Ranidae, which don’t appear in the fossil record until 57 million years ago, are long-legged, narrow-waisted, and web-footed with horizontal pupils; they include wood, green, and bull frogs. Since their origination occurred after the breakup of the supercontinent Pangaea, global dispersal required continent jumping. DNA assessment of 82 Ranidae species revealed that the North American clade of true frogs came from East Asia, hopping across Beringia and spreading across the New World by 34 million years ago. The first genetic split of the true frogs that spread out in North America was the mutation that became the wood frog, suggesting a significant adaptation. [9] Are wood frogs still evolving? The short answer is yes, because everything does, including humans. It is just too slow to notice.

Amphibians are the proverbial “canary in the coal mine” when it comes to planet Earth. They need both clean water, because they are aquatic for at least a portion of their life cycle, and clean air, because we all do. Wood frogs offer a case in point. With climate getting warmer and not colder, ice survival may not have quite the same importance in the future. One study found that pond temperature had a marked effect on wood frog tadpole development time; those in colder ponds grew faster. Conversely, warmer water not only slowed tadpole growth but also evaporated more quickly. Rising ambient temperatures will thus reduce the chances for slower-growing tadpoles to metamorphose into lunged froglets before the water evaporates due to accelerated desiccation. [10] On the other side of the survival ledger, empirical data from the beginning and end of the last century revealed that temperatures had risen about 3°F and that male wood frogs were calling for mates about two weeks earlier. This would then move conception time up to allow more time to gestate and grow. [11] Given their historical evolutionary success over the last 34 million years, it is reasonable to conclude that Rana sylvatica is more likely to survive climate change than Homo sapiens, who have been around for less than one million years.
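
Combining the two cited figures yields a crude sensitivity estimate (an illustrative extrapolation, not a claim from the source):

    14 days earlier calling ÷ 3°F of warming ≈ 4 to 5 days per degree Fahrenheit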

References:

1. Heinrich, B. Winter World, Harper-Collins, New York, 2003, pp 169-175

2. Schmid, W. “Survival of Frogs in Low Temperature” Science, 5 February 1982,  Volume 215, Issue 4533  pp. 697-698.

3. Pennisi, E. “How to Freeze and Defrost a Frog”, Science, 8 January 2014.

4. Servick, K. “The Secret of the Frozen Frogs”  Science, 21 August 2013.

5. Heinrich, op cit.

6. Costanzo J et al  “Survival mechanisms of vertebrate ectotherms at subfreezing temperatures: applications in cryomedicine”. The FASEB Journal. 1 March 1995 Volume 9 No. 5 pp 351–358.

7. https://www.nasw.org/users/nbazilchuk/Articles/wdfrog.htm    

8. https://animaldiversity.org/site/accounts/information/Rana_sylvatica.html

9. Yuan, Z. et al. “Spatiotemporal diversification of the true frogs (genus Rana): A historical framework for a widely studied group of model organisms”. Systematic Biology. 10 June 2016 Volume 65 No. 5 pp 824-842.

10. Renner, R. “Frogs not croaking just yet” Science, 12 May 2004.

11. Wong, K. “Climate Warming Prompts Premature Frog Calls” Scientific American 25 July 2001.

Greenshield Lichens

Rock Greenshield Lichens decorate the winter snowscape.

Common Name: Rock Greenshield Lichen – The rosette shape is like a rounded shield and is greenish gray in color ― a green shield found almost exclusively on rocks. Lichen has an obscure etymology but may derive from the Greek word leichein which means “to lick” just as it sounds. There is no extant clue for this association as very few lichens are eaten (and thus licked). Some, like this species, have small lobes that could be a metaphor of sorts for little (leichein) tongues. The Common Greenshield Lichen is found mostly on trees.

Scientific Name: Flavoparmelia baltimorensis – Parmelia is Latin for shield, the genus that was used broadly for all shield-shaped lichens until 1974, when it was subdivided. Flavo as a prefix means yellow, distinguishing these lichens from the blue tint of other shield lichens … yellow hues combine with blue so that the overall effect is green. This species was first classified from a Baltimore specimen, giving rise to the familiar nomenclature.

Potpourri: The rock greenshield lichen and its virtually indistinguishable cousin the common greenshield lichen (F. caperata) are encountered clinging to a substrate of rock or wood while traipsing along almost any trail. In the winter months, when deciduous trees are devoid of greenery and the mostly annual undergrowth has died back, only the grays and browns of rocks, dirt, leaf litter, and boles remain. The exceptions are the greenshield lichens that spread their leaflike (and tongue-like) lobes outward and onward, oblivious to the reduced light and frigid temperatures by which the rest of the forest is constrained. Their persistence is testimony to the lichen lifestyle, one of the natural world’s wonders. Comprised of a fungus that has partnered with one or more organisms from a different kingdom, the 14,000 identified lichens have mastered the art of survival in the most inhospitable of habitats, from hot, dry desert to frozen tundra. They are even found on Mount Everest at elevations exceeding seven kilometers. [1]

According to the International Association for Lichenology, a lichen is “an association of a fungus and a photosynthetic symbiont resulting in a stable vegetative body.” The fungal partner is called the mycobiont and constitutes about 95 percent of the lichen body structure or thallus. Since fungi are heterotrophs and therefore cannot make their own food, they must rely on autotrophs that photosynthesize the sun’s energy to produce the nutrients necessary for growth and reproduction. Some fungi consume dead plants as saprotrophs, some parasitize living organisms, and some connect to living plant roots in a mutually beneficial association called mycorrhizal (fungus root). Lichenized fungi evolved a relationship with photosynthesizing organisms that falls into the category of symbiosis, which is defined as an intimate relationship between two living things. The photosynthetic partner of the lichenized fungus is called the photobiont and can be green, brown, or golden algae or cyanobacteria, a type of chlorophyll-containing bacteria formerly called blue-green algae. Algae is now a broad, non-technical name for several types of polyphyletic eukaryotes that photosynthesize, which is all that matters to the fungal partner. The photobiont for greenshield lichens is a green alga species in the genus Trebouxia, which is the most common photobiont of all lichens. [2]

The relationship between the fungus and the algae in a lichen is complex. Traditionally, the symbiosis of lichens has been characterized as mutualism, in which both partners benefit equally. In reality, the relationship frequently ranges from commensalism, where the fungus benefits but the algae do not, to outright parasitism, where the algae are harmed for the benefit of the fungus. Some insight into the living arrangements is afforded by the observation that the lichen’s fungi need the algae but not vice versa. That is to say that none of the lichen-forming fungi, comprising almost half of the ascomycetes, the largest division of the Fungi Kingdom (mushrooms are in the other large division, the basidiomycetes), exist in nature without algae, whereas the algae can and do lead independent lives on their own. However, having a place to live with enough water and air for photosynthesis to make carbohydrates and respiration to oxidize them for energy (both plants and fungi need to breathe) is certainly an algal advantage. It is at the cellular level that the controlling dominance of the fungus can become sinister. The root-like tendrils of the fungus called hyphae surround and penetrate the algal cells, releasing chemicals that weaken the surrounding membrane so that the carbohydrates leak out, feeding the fungus. Weaker algal cells thus violated die, and were it not for periodic reproduction, so too would the lichen. [3] A lichen has been described as a fungus that discovered agriculture, an apt aphorism. The fungus uses the algae for subsistence in like manner to a farmer tending fields to extract their bounty ― it would be nonsensical to assert that farmers and soybeans therefore benefit mutually in symbiosis.

Lichen reproduction is also complicated, as it involves two different species that must reproduce independently and then come into close contact to form a union. While this union must have occurred at least once for any lichen to exist, a singularly rare event is not improbable over millions of years of geologic time. The mycobiont, in this case Flavoparmelia baltimorensis, produces reproductive spores in a fruiting body called an apothecium, in a manner analogous to the gills of mushroom fruiting bodies. The photobiont, in this case Trebouxia, also reproduces using spores when it is independent of the fungus, but only reproduces asexually once lichenized. Apothecia are very rarely seen on greenshield lichens, direct evidence that, like most lichens, they have no pressing need for reproductive spores. Since they are abundantly distributed and can on occasion cover vast swaths of boulder fields (F. baltimorensis) and exposed wood surfaces (F. caperata), it is evident that there is a successful reproductive workaround. In general, this consists of a lichen forming a detachable unit that includes both the fungus and its algal partner for windborne distribution to new locations. These “lichen seed packets” take various forms, including soredia, which are minuscule balls of fungal hyphae surrounding a few algal cells, and schizidia, which are simply flakes of the upper layer of the fungal thallus that also contain the algal layer. One of the ways to tell rock and common greenshield lichens apart is that F. baltimorensis has schizidia and F. caperata has soredia. However, identifying small irregular components on the gnarled surface of a lichen is a challenge even for a lichenologist with a lens. It is much easier to identify the substrate ― rock or tree ― and look for the lichens found there.

Greenshield lichens often cover broad expanses of rock and tree surfaces to the extent that long-term effects come into question. Do lichen-covered rocks disintegrate at an accelerated rate? Do trees weaken due to the amount of bark covered by lichens? For the most part, lichens are self-sustaining in the sense that the heterotrophic fungus is supplied nutrients by the autotrophic algae. While sunlight and water are the essential ingredients for photosynthesis, nitrogen, phosphorous, and potassium are also required for plant growth (the three numbers on a fertilizer bag refer to these elements). It is less well known that fungi need these same nutrients for the same metabolic reasons. [5] In many cases, lichens are able to get all of the nutrients they need from the minute amounts dissolved in water. This dependence on the quality of precipitated rainwater is why lichens are useful for environmental monitoring, as their growth correlates to air quality. The two main substrate characteristics associated with lichen growth are moisture retention and exposure to sunlight. For lichens growing on exposed tree bark, the degree to which moisture is retained as it flows down the tree is the key factor. While it is true that the lichen will “rob” some of the nutrients that would otherwise go to the tree roots, the amount is negligible. Deciduous trees have more lichens than conifers because their leafless trunks are sunlit for six months of the year whereas evergreens are ever shaded. Rocks are not good at retaining moisture. Consequently, lichen hyphae penetrate rock surfaces to depths of several millimeters seeking water and, depending on the type of rock, minerals as well. This contributes to the long-term weathering of rocks for soil formation, and more broadly to the million-year geologic cycle of mountain building and erosion. The answers to the two questions are yes, lichens do disintegrate rocks at a geologic rate, and no, lichens do not harm trees ― they are sometimes called epiphytes for this reason.

Common Greenshield lichens do not harm the trees they use for support.

Chemistry is another important aspect of lichen physiology. More than 600 unique compounds are concocted by lichens in surprisingly large quantities … up to five percent of total body weight. It is instructive to note that when lichenized fungi are artificially grown without algae in a laboratory, chemical output is negligible. This can only mean that specific chemicals promote the associative nature of the individual lichen species. There are any number of hypotheses that might explain this. Bitterness as deterrence to animal browse is certainly one possibility, as lichens grow quite slowly on exposed surfaces and are easy to spot. However, some lichens, notably reindeer moss (Cladonia rangiferina), are a major food source for animals and are quite likely propagated in their droppings. It is also believed that some chemicals act to coat sections of hyphae to provide the air pockets necessary for photosynthesis by the algae. The chemical footprint of a lichen species is one of the main diagnostic tools used in field identification. Lye, bleach, and several other reagents are dripped onto the surface; a change in color indicates the presence of a specific chemical that is related to a specific lichen. [4] There are many unknown aspects of lichen physiology. This was made manifest recently when it was discovered that many lichens contain a type of basidiomycete yeast (also a fungus), which is embedded in the body of the ascomycete fungus in varying concentrations that correlate to anatomical differences. Some if not all lichens may actually consist of two fungi and an alga or two, a far cry from simple symbiosis. [6] The function of the yeast is not yet known.

The Flavoparmelia genus was separated from the other Parmelia (shield) lichens in 1986 in part due to their production of the chemical compound usnic acid. [7] It is a large molecule with the formula C18H16O7, a simplification of the recondite but internationally recognized IUPAC name 2,6-Diacetyl-7,9-dihydroxy-8,9b-dimethyldibenzo[b,d]furan-1,3(2H,9bH)-dione. Usnic acid is found primarily in the top layer of the fungus, along with another chemical called atranorin, just above the area where the algal bodies are concentrated. It is surmised that they contribute to shielding the green algae from excessive sunlight exposure, since too much bright sun is inimical to photosynthesis, the source of all lichen energy. Usnic acid is also a potent antibiotic, collected primarily from Usnea or beard lichens due to their higher concentrations, for use as an additive in commercial creams and ointments. Flavoparmelia caperata is one of several lichens that have historically been used by indigenous peoples as a tonic taken internally or as a poultice applied to a wound. [8] The medicinal uses of lichen fungi should come as no surprise, as many polypore-type fungi growing as brackets on tree trunks have been used medicinally for millennia. The abundance of rock and common greenshield lichens is evidence of successful adaptation. In addition to thriving on bountiful rock and wood surfaces, the chemical shield screens sunlight to protect the green algal energy source and guards against assault by microbes and mammals. In other words, they are literally green shields.
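
As a check on the formula, the molar mass can be tallied from standard atomic weights (a back-of-envelope verification, not from the source):

    18 × 12.011 + 16 × 1.008 + 7 × 15.999 ≈ 216.2 + 16.1 + 112.0 ≈ 344.3 grams per mole

which matches the accepted molecular weight of usnic acid.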

Carl Linnaeus assigned lichens to the class Cryptogamia, literally “hidden marriage” for their concealed reproduction, along with everything else that created spores and not seeds. [9] One of the more enduring lichen secrets is how and when the coalition between fungi and algae began. It is widely accepted that simple replicating organisms started out in aqueous habitats, as water affords bodily support and nutrient transport. The transition from sea to shore would have been nearly impossible for an alga with no structure or a fungus with no food. There is good reason to suppose that some form of union like a lichen may have come about by chance and was then promoted by survival. Scientific research over the last several decades has cast some light into the dark shadows of this distant past. What look like lichen hyphae embedded in the soil around fossils from the pre-Cambrian or Ediacaran Period (635-541 million years ago) suggest that lichens may have been the first pioneers on dry land. [10] This is supported by the finding that marine sediments from this same period contain not only the root-like hyphae of fungi but also the rounded shapes of blue-green algae or cyanobacteria. This suggests that something lichen-like that started out in the water was left high and dry in a tidal flat to make the critical transition. [11] However, recent DNA analysis of primitive ferns and lichenized fungi revealed that the lichens evolved 100 million years after vascular plants. [12] Lichenology, like all science, is a continuum that never ceases in its quest for knowledge. Future field tests and experiments are certain to clarify the origin story.

References

1. Kendrick, B. The Fifth Kingdom, 3rd edition, Focus Publishing, Newburyport, Massachusetts, 2000, pp 118-125.

2. Brodo, I., Sharnoff, Steven and Sylvia. Lichens of North America Yale University Press, New Haven, Connecticut, 2001. pp 1-112, 316-317, 479-484.

3. Wilson, C. and Loomis, W. Botany, 4th edition, Holt, Rinehart, and Winston, New York, 1967, pp 451-453.

4. Brodo, op. cit.

5. Kendrick, op. cit. pp 142-158.

6. Spribille, T. et al “Basidiomycete yeasts in the cortex of ascomycete macrolichens” Science Volume 353 Issue 6298, 21 July 2016, pp 488-492.

7. Hale, M. “Flavoparmelia, a new genus in the lichen family Parmeliaceae (Ascomycotina)” Mycotaxon, Volume 25, Number 2, April-June 1986, pp 603–605.

8. Brodo, op. cit.

9. Linnaeus, C. Species Plantarum, Volume 2, Impensis Laurentii Salvii, Stockholm, 1753, p. 1142.

10. Frasier, J. “Were Weirdo Ediacarans Really Lichens, Fungi, and Slime Molds?” Scientific American 13 December 2012.

11. Yuan, X. et al “Lichen-Like Symbiosis 600 Million Years Ago” Science  Volume 308, Issue 5724, 13 May 2005, pp 1017-1020

12. Frederick, E. “Hardy lichens don’t actually predate plants” Science 20 November 2019

White-tailed deer

A white-tailed deer doe evidently not alarmed by a proximate hiker

Common Name: White-tailed deer, Virginia deer, Whitetail – The etymology of deer extends to the origins of Indo-European languages in Sanskrit as dhvamsati meaning “he falls to dust” (perhaps to indicate mortality). As new languages arose according to human migration and custom, the root was modified … in Old Norse dyr meant wild animal or beast. The interpretation of deer as any animal, as in Shakespeare’s King Lear, Act III, Scene IV – “But mice and rats, and such small deer” – dropped out of common usage long ago. A “modern” deer is any ruminant animal of the family Cervidae. The white underside of the tail is exposed by raising it erect as a warning signal to its cohorts as the deer flees from a perceived threat.

Scientific Name: Odocoileus virginianus – The genus name is from the Greek odon meaning tooth and koilos meaning hollow due to the pronounced indentations in the crowns of the molar teeth, prominent in herbivores. The species was first classified in Virginia.

Potpourri: The life and times of the white-tailed deer from bust to boom in the 20th century affords a case study in adaptation to human habitats. It is one of the species that have earned the coined scientific sobriquet synanthrope, applied to any wild species that lives near and benefits from its association with humans ― like pigeons and possums, they are partners of the Anthropocene. For those who have been around for a half century or so, the renaissance of the deer is something of a miracle. About the only way for an east-coaster to see deer around the middle of the last century was to go “deer stalking” in Pennsylvania’s Poconos, driving around the mountains at dusk hoping for the rare treat of seeing even one. The pre-Columbian deer population is estimated to have been about thirty million, held in check by Native American hunters who burned the forest underbrush in part to aid in their quest and by marauding wolfpacks seeking the young and weak. Everything changed with the onslaught of the Europeans moving into the wilderness believing that its resources were endless until it was no longer wild and they weren’t. By 1900, the North American deer population had plunged to about 500,000, and deer had been largely extirpated in many states in New England and the Midwest. The tide turned with the passage of the Lacey Act proscribing the sale of wild animals, which ended practices such as using deer hides as currency. [1]

After World War II, GI Bill-educated veterans moved away from farms to work in cities, settling in suburban developments like the pioneering Levittowns of Long Island and Pennsylvania. The economics of farming was meanwhile transformed in favor of larger, consolidated fields. Many small farms became fallow as forest succession restored the original woodland habitats that once predominated. The combination of suburban housing tracts abutting deep wood sanctuaries reconstituted the sustaining and nurturing habitats and led to the renaissance of the white-tailed deer. The absence of wolf predation and the diminution of hunting by humans removed the check that had once balanced burgeoning deer populations. The phenomenal return of the white-tailed deer that was once a symbol of ecological restoration has become dystopian due to their resurgent numbers. The deer-human relationship has come under strain due to crop damage, deer/car collisions, landscape plant damage, Lyme disease epidemiology, and, most importantly, ecosystem imbalance. These five friction factors all arise from one of two innate and irrevocable deer behaviors: eating and roving. Remediation requires reduction of the deer population to that which is sustainable in a specific area. Barrier fencing can accomplish this locally, but it is impractical for large fields and local roads and for the most part only moves the problem to another location, since deer are both mobile and resourceful. Restoring predator populations would also be effective, but the acceptability of wolves lurking in the woods is contrary to an understandable Little Red Riding Hood mentality. The only action that can be controlled and widely implemented is to resort to human predation, euphemistically called culling the herd. [2]

An understanding of deer behaviors associated with browse foraging and procreation roving is necessary to appreciate both the nature of the overpopulation problem and the most effective way to resolve it. White-tailed deer are hooved ruminant animals in the Family Cervidae (Latin for deer), which also includes elk, caribou, and moose. There are 17 recognized geographically dispersed subspecies based on minor differences in physiological traits, including the endangered Key deer of south Florida and the threatened Columbian white-tailed deer of the Pacific Northwest. However, since DNA testing shows no clear distinctions, there is scant justification for retaining the regional variants. The closely related mule deer endemic to western North America, which can and do hybridize with white-tailed deer, are a separate species (O. hemionus), even though they look just like their cousins apart from marginally longer “mule-like” ears. Depending on the species, males are called stags, harts, bucks or bulls, females are called hinds, does or cows, and the young are called calves or fawns. The most recognizable and unique of cervid features are the osseous antlers that rise from the male (and female caribou) skull like a crown of bony spears, nature’s most magnificent weapons.

The symbolism of antlers permeates the cultural history of the hominids who hunted the animals that bore them. From the cave paintings of Chauvet in France to the heraldic coats of arms of medieval Europe, antlers are a metaphor for strength and courage. Hung over the fireplace of the iconic hunting lodge, they are meant to confer honor on the hunter, even if the hunt was less honorable. Antlers are the weapon that one buck wields against an opponent in contesting for conjugal rights, a survival-of-the-fittest ordeal of the highest order. Success depends on a combination of bulk strength and antler effectiveness. Size matters, and the winner spreads the genetic heritage that will produce even larger antlers with more projections. Cervid buck antlers can extend up to four feet upward and arch backward halfway across the back. The ramose headgear is an impressive feat of physiology that seems impossible as an annual event, but antlers are shed every winter and regrown the following spring. [3] Growth at a rate of up to one inch per day to produce 20 pounds of bone tissue is necessary to complete the process in time for the rut. Aside from doubling the buck’s energy input needed over baseline, antlers require calcium and phosphorous that must be drawn from existing bone, resulting in seasonal osteoporosis. That this is a truly remarkable trait is supported by genomic analysis which suggests that it arose from a single evolutionary mutation of an ancestral cervid. Antlerogenesis, as the process is now named, offers potential insights into tissue generation that could plausibly be used to produce bone tissue prostheses for amputees. [4]

The deer life cycle starts in early summer, when does give birth to one or two fawns, although quintuplets have occurred. Fawning areas are selected for the safety of the suckling fawns as they gain a half pound a day to triple their weight in a month. During the first few postpartum weeks, the doe keeps the fawns hidden from predators by leaving them separately in dense undergrowth while foraging. The fawns remain immobile, even withholding their feces and urine until the doe returns to ingest the waste and eliminate any vestige of telltale scent. Even with this exceptional maternal care, forty percent of fawns succumb to a variety of mishaps and maladies, the majority to coyote predation. Leaving the palladium of the fawning area, the extended family of does, their fawns, and female offspring from previous years assembles into groups of as many as twelve deer. Moving about together in a home range averaging one square mile, they forage in the crepuscular light of dusk and dawn. Ruminants like deer have a four-chambered stomach that allows them to digest almost anything. Food travels to the rumen, which contains bacteria that break down the vegetation. The reticulum circulates the food back to the mouth as cud to be chewed again, whence the omasum pumps the food to the abomasum to complete the process. [5] The deer diet is therefore diverse, consisting of over 600 plant species broken down according to type: 46% browse (sedges, shrubs, and trees); 24% forbs (herbaceous flowering plants); 11% mast (nuts and berries); 8% grass; 4% agricultural crops; and 7% other (like fungi and lichens). These wide-ranging menu options provide the adaptability to ensure deer species survival during weather and climate vagaries. However, although deer are consummate herbivores, they have preferred foods as all animals must. Crops and mast top the list whenever available, and forbs trump browse as provender. Soybean scavenging in farmed fields and begonia bodegas in suburban gardens are an integral part of the deer population debate.

A newborn fawn left by its foraging mother in a secluded area.

Male deer separate from the maternal group when they become yearlings and form loose-knit, unrelated bachelor groups of as many as five individuals. As the summer wanes and autumn colors appear, bucks ready themselves for the annual procreative ritual by prepping antlers. Growing from a skull projection called a pedicel, the antlers are at first covered with a nurturing blanket of nerves and blood vessels with the texture of velvet. At maturity, the covering falls away in shredded tatters that are scraped off by rubbing the antlers against a tree. These “buck rubs” impart a glandular secretion to mark out the buck’s home turf. In the late fall, nominally November, the rut race begins with bucks setting the pecking order by locking horns and pushing one against the other like furry Sumo wrestlers until brute force prevails; finesse has nothing to do with it. Bucks attract females by creating a scrape, a two-foot diameter area cleared of underbrush to expose the bare earth, to which generous amounts of the buck’s urine are applied. When a doe enters one of up to seven estrous periods that last for one to two days every month, pheromones are exuded from glands on the inner side of the back legs at the knee joint. Drawn by the scent of the buck rubs and marked scrapes according to primordial evolutionary attraction, the doe urinates there to provide a beacon for the dominant buck to follow. A successful buck may mate with as many as twenty does, fighting off competitors whenever challenged. The resultant fawns emerge in the spring to complete the cycle. With ample food and moderate weather conditions, deer populations double every two to three years. [6] Because deer are motivated during the rutting season to move independently at speeds that can reach 35 mph, most collisions occur in November along rural roads.
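
The doubling claim translates directly into an exponential growth model, P(t) = P0 · 2^(t/Td). The short Python sketch below illustrates the arithmetic under the admittedly idealized assumption that the doubling time stays fixed and nothing checks the growth.

    # Exponential herd growth: P(t) = P0 * 2**(t / doubling_time)
    def herd_size(p0: float, years: float, doubling_time: float) -> float:
        return p0 * 2 ** (years / doubling_time)

    for td in (2, 3):  # doubling times in years, per the range quoted above
        print(f"Doubling every {td} yr: 20 deer become {herd_size(20, 10, td):.0f} in a decade")
    # Doubling every 2 yr: 20 deer become 640 in a decade
    # Doubling every 3 yr: 20 deer become 202 in a decade

At either rate, a handful of protected deer becomes an overpopulation problem within a single human generation.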

Two bucks browse together until mating season begins.

Returning to the five major points of friction between deer populations and human activity, long-term ecosystem deterioration due to deer overbrowse and traffic fatalities due to deer collisions stand out as exceptional. Tick infestation, garden grazing, and crop consumption are relatively minor issues in almost all cases. Lyme disease is a tick-borne infection that has severe medical consequences if left untreated. Deer are one of the hosts of the black-legged tick, which carries the disease’s bacterial agent Borrelia burgdorferi, a resident parasite of white-footed mice. The rising incidence of Lyme disease over the last several decades correlates with the increasing number of deer. However, correlation is not causation, and numerous field studies have been completed in the last thirty years to separate scientific fact from emotive fiction. In one New Jersey study, the deer population was reduced by half from 45.6 to 24.3 deer per km2 over a three-year period without any appreciable change in the tick population except that it increased in the second year. [7] In Connecticut, the epicenter of Lyme disease named for one of its towns, a more robust venery reduced the deer population to 5.1 deer per km2, corresponding to an 80 percent reduction in reported disease incidence. [8] A study in Massachusetts by Harvard researchers was based on data from a deer culling operation carried out from 1983 to 1991 that reduced the deer population from 400 to 100 without any decrease in Lyme disease incidence. The analysis of this data considered the population dynamics of mice, ticks, and deer, concluding that to have an appreciable effect, the deer population would need to be reduced to 0.07 deer per km2. [9] It is reasonable to assert from these examples that deer do contribute to the number of diseased ticks and therefore to the frequency with which ticks infect humans. However, effecting a measurable diminution in Lyme disease incidence would necessitate near extirpation of white-tailed deer, which is not an option.
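
Putting the reported numbers side by side makes the pattern easier to see. The densities below are taken from the studies as cited; the percentage arithmetic is a sketch added for illustration.

    # Percent reductions in the deer studies cited above [7-9]
    nj_before, nj_after = 45.6, 24.3   # deer per km2, New Jersey
    ma_before, ma_after = 400, 100     # herd count, Massachusetts cull
    print(f"NJ: {100 * (1 - nj_after / nj_before):.0f}% fewer deer, no tick decline")
    print(f"MA: {100 * (1 - ma_after / ma_before):.0f}% fewer deer, no disease decline")
    # NJ: 47% fewer deer ... MA: 75% fewer deer

Halving or even quartering a herd evidently leaves more than enough deer to sustain the tick population; only the drastic Connecticut-style reduction moved the needle.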

Deer consumption of plants grown by exurban homeowners to landscape an expanse of manicured grass is a nuisance at worst. Garden center horticulturalists are generally competent in recommending plants that have evolved natural repellents like thorns or bitter leaves. Even deer delicacies like young tree saplings can be deer-proofed with chemical sprays that mimic predator smells, protected by netting and fencing, or surrounded by metallic, wind-powered “scare deer” devices. Deer consumption of plants grown by farmers to feed cattle for beef and corn for hogs and ethanol is another matter; agroeconomics and food security outscore lawn aesthetics. The economic loss to farmers due to wildlife in the United States is approximately $4.5 billion annually, about one percent of the annual $450 billion Gross Cash Farm Income (GCFI) tallied by the USDA. While measurable, this is hardly catastrophic. The particular crop eaten and the specific animal snacking on it vary significantly by locality, due primarily to the extent to which fields abut woodland wildlife refuges. Indiana provides a good benchmark as it is 65 percent farmland with average farm income. An independent study conducted by Purdue University provides quantitative data concerning wildlife crop losses to corn and soybeans, its major crops. White-tailed deer, raccoons, and groundhogs were the main culprits, with rabbits, squirrels, and wild turkeys contributing at the low end of the scale. Raccoons were responsible for 87 percent of corn predation, eight times more than deer. Deer and groundhogs were the primary soybean consumers. Questionnaires filled out by farmers provided the interesting feedback that, while they thought that deer were mostly responsible for all damage, only one in five considered deer a nuisance. Economically, most fields surveyed had damage of less than $100, and even the most damaged fields did not exceed $500. Indiana, like most states, strikes a delicate balance by allowing farmers to kill deer only if they submit proof of excessive damage ($500 in this case). [10] This is a reasonable tradeoff, balancing wildlife survival (it is their land too) with some degree of control.

Deer collisions, particularly when fatal, are surely consequential. While official numbers are approximate, insurance claims provide a good surrogate. According to the Insurance Institute for Highway Safety, there were 185 deaths due to deer collisions in 2019, one half of one percent of the 36,120 automobile fatalities reported by the National Highway Traffic Safety Administration. State Farm Insurance reported 2.1 million animal collision insurance claims nationwide from July 2020 to June 2021, of which two thirds were deer. The highest incidence was in West Virginia and the lowest in Washington DC. [11] There are only three ways to deal with this problem: change driver behavior, change deer behavior, or reduce the deer population. Deer crossing warning signs are an attempt to address the first, but these are now so common that they are largely ignored. Driving slower at dusk on rural roads from October to December is good practical advice, but so is don’t drive drunk. Deer behavior controls consist of fencing and wildlife passage corridors along and around major highway routes, but these are impractical on side roads. [12] Evolution may play a role, as deer have been observed stopped at the side of the road waiting for traffic to pass, a trait that would be passed along as those that bound ahead are removed from the gene pool. Reducing the deer population is the only option that remains.

Synanthropic animals impose a burden on ecosystems that is comparable to that of invasive plants. Both result directly from the gargantuan footprint of almost 8 billion humans. People transport plants (and animals like pythons in the Everglades) both wittingly and unwittingly from places where they evolved to places where they didn’t, which frequently means that there is no natural way to stop their invasion. Synanthropes mostly do what they have always done, taking advantage of those human landscapes and structures to which they are suited. Pigeons nest on rocky cliffs like tall city buildings, Canada geese throng in grassy wetlands like golf courses, and deer browse along wooded glades like housing developments. From the ecological standpoint, deer are by far the most insidious in subverting the entire process of forest succession. All trees eventually die and must therefore be replaced by saplings that grow from the seeds that they dropped. Deer browse removes the shoots at ground level so that there are no saplings to succeed … ultimately, there would be no forest. The disruption of natural ecosystem processes has cascading effects due to the complexity of association. One study compared the bird populations on islands with no deer to those with deer and found a 55 to 70 percent reduction (depending on species), with the largest reductions in those birds that depend on forest floor plants. [13] To make matters worse, deer removal of native plants from the understory creates disturbed areas which are havens for weedy invasives. Forty areas that were fenced off from deer for three years in Pennsylvania and New Jersey had fewer invasives than the surrounding woods. [14] Since deer are considered ecologically excessive in 73 percent of their North American range, the implications are clear. Something must be done.

Deer population control has always been a contentious issue. The “Bambi effect” predisposes many to decry killing altogether. Disney’s namesake fawn barely survives his mother’s death at the hands of hunters and the raging fire that spread from their untended campfire to succeed his father as prince of the forest. There are humane methods to reduce the fawn count, but these are labor intensive and therefore expensive. Contraceptives in the form of vaccines that cause antibodies to block pregnancy or inhibit reproductive hormones must be periodically administered, practically impossible for roving herds. Tranquilizing and sterilizing deer is a better option since it is permanent. Doe sterilization has been unsuccessful since bucks continue to seek out and impregnate those still in estrus. A program to sterilize bucks on Staten Island by performing five-minute vasectomies has reduced fawn births by 60 percent and the deer population by one fifth at a cost of $6.6 million. [15] This would be unaffordable on the scale needed to deal with the 30 – 40 million deer wandering around the continent. Most people are now willing to accept the inevitable and condone managed deer hunts. A legitimate rationalization is that humans have preyed on ruminants like deer for at least as long as they have made cave drawings depicting the hunt. The U. S. National Park Service has established a deer culling program to “protect and restore native plants, promote healthy diverse forests, and preserve historic landscapes.” The 6,500 pounds of venison harvested from the parks in 2020 was donated to local food banks. [16] While some may still object to killing deer, it cannot be without a tinge of hypocrisy. The automated killing in slaughterhouses to produce hamburger made from the flesh of another ruminant is accepted practice. How can hunting free-range ruminants be worse?

References

1. Jarvis, B. Animal Passions, The New Yorker, 15 November 2021, pp 38-44.

2. Swihart, R. K. and DeNicola, A.  Public involvement, science, management, and the overabundance of deer: Can we avoid a hostage crisis?  Wildlife Society Bulletin 1997 Volume 25  pp 382-387 

3. Emlen, D. Animal Weapons, Henry Holt and Company, New York, 2014 pp 2-4. 

4. Ker, D. and Yang, Y. “Ruminants: Evolutionary past and future impact” Science, Volume 364, Issue 6446, 21 June 2019, pp 1130-1131.

5. https://dnr.maryland.gov/wildlife/Pages/hunt_trap/wtdeerbiology.aspx       

6. https://www.fs.fed.us/database/feis/animals/mammal/odvi/all.html     

7.  Jordan, R. et al “Effects of reduced deer density on the abundance of Ixodes scapularis (Acari: Ixodidae) and Lyme disease incidence in a northern New Jersey endemic area” Journal of Medical Entomology, Volume 44 no. 5 September 2007 pp 752-757.

8. Kilpatrick, H, et al “The relationship between deer density, tick abundance, and human cases of Lyme disease in a residential community” Journal of Medical Entomology, Volume 51 no. 4 July 2014 pp 777-784.

9. https://www.hsph.harvard.edu/news/features/kiling-deer-not-answer-reducing-lyme-disease-html/    

10. McGowan, B. et al. “Corn and Soybean Crop Depredation by Wildlife” FNR-265-W, Department of Forestry and Natural Resources, Purdue University. June 2006. https://www.extension.purdue.edu/extmedia/FNR/FNR-265-W.pdf      

11. https://www.iii.org/fact-statistic/facts-statistics-deer-vehicle-collisions     

12. Hallock, Timothy J. Jr (2016) “The Effect of the Deer Population on the Number of Car Accidents,” Journal of Environmental and Resource Economics at Colby: Vol. 3 : Iss. 1 , Article 14. Available at: https://digitalcommons.colby.edu/jerec/vol3/iss1/14.   

13. Staedter, T. “Deer Decreasing Forest Bird Population” Scientific American, 31 October 2005.

14. Stokstad, E. “Double Trouble for Hemlock Forests” Science 19 December 2008.

15. Jarvis, op. cit.

16. Hedgepeth, D. “Deer cull set for parks in Md., W. Va.” Washington Post, 7 November 2021.

Great Falls at the fall line

The Great Falls of the Potomac River

Fall Line: A fall line is defined as a line of numerous waterfalls, such as at the edge of a plateau, where water flows over hard, erosion-resistant rocks onto a weaker substrate that washes away to create a vertical drop. It is also called a fall zone where it extends over a large geographic area.

Potpourri: Fall line waterfalls occur globally wherever the edge of a continental plate abuts the coastal plain formed on its perimeter from the sediments deposited by upland erosion. Examples include Africa, Brazilian South America, Western Australia, and the Indian subcontinent of Asia, where the Jog Falls plunges 830 feet. The world’s most prominent and prevalent fall line is where the crystalline Appalachian Mountains meet the sedimentary coastal areas of eastern North America. From north to south, the major rivers that cross the fall line are the Delaware, Schuylkill, Patapsco, Potomac, James, and Savannah, originating to the west and north and flowing southeastward through the Appalachian Piedmont Province toward the Coastal Plain abutting the Atlantic Ocean. The Susquehanna River is not included because it is older than the mountains through which it passes. Because falling water junctures block river traffic upstream from moving downstream and vice versa, they are a natural location for entrepôts of goods passing from the coast to the hinterland to be traded for produce and raw materials. The rushing waters became the motive force for the mills and factories that formed the backbone of the nineteenth century Industrial Revolution in the United States. Ultimately, the cities of Trenton, Philadelphia, Baltimore, Washington D. C., Richmond, and Augusta arose along the Atlantic Seaboard Fall Line, connected by US 1, the original north-south route now paralleled by I-95. There are also many smaller rivers flowing generally eastward with towns of varying size at their fall lines, such as Wilmington, Delaware on the Brandywine River and Fayetteville, North Carolina on the Cape Fear River. [1]

The geography of the fall line is complicated. It has a north-south range of almost a thousand miles of locally diverse terrain encompassing an array of canals, dams, bridges, and harbors engineered over the centuries to move around and over the waterfalls. The colonial settlements of the coastal plain, reaching ever westward to penetrate the upland regions of the Appalachian Piedmont Province, had a major impact on its waterways, notably the Patapsco and the Potomac River basins. Two years after King Charles I signed a charter for the heirs of the Catholic Lord Baltimore to establish a colony in honor of Queen Henrietta Maria, the Ark and the Dove anchored off an island in the Potomac to settle at a spot soon to be named St. Mary’s. The success of tobacco plantations over the course of the next fifty years encouraged population growth and attracted investors with venture capital such as Charles Carroll, a physician from Annapolis. In August 1729, the Maryland assembly passed an act providing for a town named Baltimore on land owned by Carroll at the mouth of the Patapsco. The location along the western shore of the Chesapeake Bay was the terminus of the falling waters of a Patapsco River tributary that ended in a sheltered cove ideal for a harbor. With a flour mill for grain brought downriver from the west and an iron mill to take advantage of the iron deposits and the abundant forests for charcoal fuel, Baltimore became established as a fall line city, shipping almost 2,000 tons of pig iron to England between 1734 and 1737. At about the same time, a village in Prince George’s County along the Potomac River became prominent as a mercantile location near what were called the Potomac Rapids. It was eventually named Georgetown. [2,3]

The Potomac River proved more consequential than the Patapsco due to its extensive drainage, emanating from the heart of the Appalachian Plateau in what is now West Virginia. The importance of linking the eastern seaboard with the western hinterlands was one of George Washington’s first and most enduring predilections. Having surveyed the extensive holdings of Lord Fairfax in western Virginia and operated on the frontier during the French and Indian War, Washington gained an appreciation for the geographic constraints of the colony. Emerging victorious and idolized for his pivotal role in the Revolutionary War, he returned directly to his grand design, setting off on 1 September 1784 to take stock of the western lands. [4] Based on his journey, he wrote a letter to Governor Harrison of Virginia arguing that it was necessary “to prevent the trade of the western territory from settling into the hands of the Spaniards or the British” and to “extend the navigation of the eastern waters; communicate them as near as possible with those which run westward.” The first challenge was to enlist the support of the legislatures of both Maryland and Virginia to overcome the jealousies and rivalries between the merchants of Baltimore and Georgetown. Appointed by the Maryland General Assembly in 1784 as chairman of a commission to investigate the feasibility of making the Potomac River navigable, Washington chaired a joint conference with Virginia that issued a resolution in favor of “removing the obstructions in the River Potomac, and the making the same capable of navigation from tide-water as far up the north branch of the said river as may be convenient and practicable.” The Patowmack Company was organized in 1785 to get around the fall line. George Washington was its president until 1792, when he resigned to start a new job with greater responsibilities. [5]

There were five sections of the Potomac River that were impassable to watercraft due to rapid currents, shallow depths, and waterfalls. Great Falls, the only true waterfall, and Little Falls, an area of rapids just upstream of Georgetown, were the only two that would require a series of locks and canal segments to bypass the river altogether. The other three, Seneca Falls on the Virginia side across from the Maryland creek of the same name, and House and Shenandoah Falls near Harpers Ferry, would require only dredging of canal passages for deep draft boat access. Even though the Patowmack charter provided “liberal wages” for “any number not exceeding one hundred good hands,” each provided with one pound of salt pork and three jills of rum daily, laborers were hard to find. The task of building multiple canals in five different locations proved to be daunting, with progress plagued by periodic flooding, delays in construction material supplies, and poor management. The five locks for the 80 feet of vertical elevation change for the canal passage around Great Falls were not completed until 1802, seventeen years after the first work started and fourteen years later than predicted at the outset. The Patowmack canal system boats and barges carried thousands of barrels of flour and whiskey and bushels of corn and wheat between Georgetown and Cumberland until 1828, when bankruptcy forced divestiture. [6] Its successor as fall line bypass was the Chesapeake and Ohio Canal, initiated on the 4th of July 1828 with President John Quincy Adams presiding at the ground-breaking ceremonies. On that same day in the Patapsco basin near Baltimore, the first rails of the Baltimore and Ohio Railroad were laid with Charles Carroll of Carrollton presiding as the only surviving signer of the Declaration of Independence. The C&O Canal would eventually meet the same fate as the Patowmack, becoming obsolescent and financially unsound as the railroads, which could move heavy loads across the fall line without the need for waterborne carriage, offered a cheaper alternative.

The Patowmack Canal was George Washington’s special interest project

The geology of the fall line is both complicated and simple. Its inception as the edge of a tectonic plate that was once part of a larger continuum, in this case Pangaea, is simple. The complex conglomeration of both igneous and sedimentary rocks subject to the metamorphism of crushing plate forces is complicated. Alfred Wegener is the Charles Darwin of geology, having first explained the jigsaw puzzle fit of South America into the indented west African coast with the theory of continental drift in 1912. Widely dismissed by most geologists at the time, the idea gained traction in the 1960’s when the periodic (about once every 500,000 years) magnetic reversals of the north and south poles were detected in the bedrock of the Atlantic Ocean, which would be the case if the sea floor were spreading. Subsequent geologic expeditions confirmed similar rock types and ages, leading to the notion that Wegener’s drifting continents were actually plates which at one point had been conjoined as the single global continent Pangaea that formed in the late Paleozoic Era about 300 million years ago. With a motive force thought to be the convective upwelling currents of the molten magma in which the plates “float,” Pangaea broke up between 200 and 65 million years ago. The eastern edge of the North American plate, once abutted by the African plate to the south and the Eurasian plate to the north, became over eons of time the fall line, its erosive rivers filling the coastal plain with clastic sediment along its edge. [7] Tectonic plate direction and a drift rate of about 1.5 inches a year will move the continents back together some 250 million years in the future into what has been called Pangaea Proxima, with Eurasia to the east, North America to the west, and Africa above South America in the middle, inhabited by whatever life forms have by then evolved. [8]
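
The Pangaea Proxima timetable is simple arithmetic, as the sketch below shows; the only inputs are the drift rate quoted above and the assumption that it holds steady for 250 million years.

    # Distance covered at ~1.5 inches of plate drift per year
    INCHES_PER_MILE = 63360
    miles = 1.5 * 250e6 / INCHES_PER_MILE
    print(f"{miles:,.0f} miles")  # ~5,919 miles, ample to close the Atlantic basin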

An orogeny is defined as the formation of mountains through structural disturbance of the earth’s crust by folding and faulting. In other words, the drift of one continental plate is arrested by the inertia of its neighbor as the edges of both are crumpled on impact. The Appalachian Mountains are the result of the Eurasian, African, and North American plates coming together in three separate orogenic events called the Taconic, Acadian, and Appalachian that occurred with geologic persistence over millions of years, involving sequential collisions with various land masses now intermingled within the twisted rocks that remain. The Taconic orogeny occurred in the later Ordovician Period ~ 450 million years ago when Pangaea first started to form. Island arcs called terranes to the east in the Iapetus Ocean (the predecessor to the Atlantic ― Iapetus was the father of Atlas in Greek mythology) were pushed into and upon the North American Plate. These would be similar to the islands of Great Britain and Japan off the western and eastern sides of the Eurasian Plate. The resultant formations are a hodgepodge of igneous blocks and sediments with some sections of oceanic crust material like the serpentinite of Soldiers Delight in Maryland. The Acadian orogeny followed as the Eurasian plate collided in the north, causing extensive volcanic activity, uplifting, and metamorphic folding in New England and southeastern Canada in the region called Acadia, with less uplift to the south. The collision of the African plate with the North American plate about 250 million years ago is called the Appalachian orogeny, as it was the coup de grâce for its namesake mountains, originally as high and jagged as the Rockies. [9] The dynamism of plate tectonics never ceases, and eventually Pangaea broke apart along the seam that is now the Atlantic ridge in the middle of the expanse of water that fills it. What remains is the complicated and partly conjectural geology of the Patapsco and Potomac River basins.

The fall line marking the Paleozoic igneous and metamorphic rocks of the Appalachian Piedmont at the eastern edge of the North American plate is not really a line. The buildup of sediments flowing seaward to form the Mesozoic and Tertiary sedimentary rocks of the Coastal Plain is uneven and irregular, even if inexorable. The falling water gradually moves inland according to flow that varies in both time and place. The original Potomac River fall line was at Theodore Roosevelt Island just downstream from Georgetown. It first appeared about seven million years ago when the Atlantic Ocean subsided from the region. For the first five million years, the Potomac River slowly eroded the bedrock due to the gentle gradient of the run-off. The climate changed about two million years ago due to variables in orbital geometry that carried the Earth more distant from the Sun (according to the Milankovitch theory, changes in eccentricity or roundness, obliquity or tilt, and precession affect the amount of sunlight reaching the earth). The four major ice ages of the Pleistocene Epoch sequestered water in ice caps and glaciers, resulting in lower ocean levels and an increase in the vertical drop and water flow erosion. Consequently, the location of the fall line moved inland at the rate of a half inch per year to Great Falls, fourteen miles upstream, where it is now situated. The erosional progression of flowing water follows the path of least resistance. The original Potomac riverbed on the Virginia side of the river shifted eastward toward Maryland to flow down a less resistant fault line at some point in the last million years. A fault is an area where a fracture has formed due to the opposing movements of two adjacent rock formations. The result is the canyon called Mather Gorge, with the cliffs of the fault on either side exposed by the water that removed the loose aggregate between them. There are some eight miles of hard rock yet to be eroded until the softer deposits of upstream sediments are reached ― in one million years the falls won’t be great, only a shallow rapid. [10]
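
The quoted rates and distances are broadly self-consistent, which a few lines of arithmetic confirm. The sketch below is illustrative only; real knickpoint erosion is episodic rather than uniform.

    # Fall line retreat at a half inch per year
    INCHES_PER_MILE = 63360
    rate = 0.5  # inches per year
    retreat_miles = rate * 2e6 / INCHES_PER_MILE   # over two million years
    years_remaining = 8 * INCHES_PER_MILE / rate   # to erode the last 8 miles
    print(f"Retreat: {retreat_miles:.0f} miles; remaining: {years_remaining:,.0f} years")
    # ~16 miles retreated (vs. the 14 cited) and ~1 million years to go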

The Mather Gorge is a fault line preferentially eroded by the Potomac River.

The rock formations at Great Falls and the shoreline trails along both the Maryland and Virginia sides of the Potomac basin are the Rosetta Stone of its beleaguered geologic past. At the simplest level, rock types fall into three categories according to their origin. Sedimentary rocks are deposited into oceans by rivers as sediment, igneous rocks originate from the molten lava of the mantle, and metamorphic rocks are sedimentary or igneous rocks that have been folded and twisted by pressure and temperature. The geology of the Great Falls basin is mostly metamorphic with an occasional igneous seam of granite called a dike that was intruded during intermittent volcanism; there are no sedimentary rocks. The transformed rocks that form a wide expanse from Virginia to Pennsylvania comprise the Wissahickon Formation, consisting of two main types of metamorphic rock: graywacke and schist. Graywacke, German for “gray, earthy rock,” is an apt name for what is dull, monotonous, and virtually devoid of color or texture. Its origins are equally mundane, as mudstone that accumulated in sedimentary layers as runoff into a shallow sea. Schist, which is Greek for “split,” is the companion to graywacke in both origination and placement. After the graywacke mud, thought to have been deposited by undersea “avalanches,” settled to the bottom of the Iapetus Ocean around 650 million years ago, sand deposits formed in layers on top of it. This is the origin of mica schist, sandstone transformed by heat and pressure into mica and quartzite. These rocks typically retain the striations of the original bedding and can therefore be split (schist) into layers like shale. Granitic rocks that are lighter in color penetrate the graywacke and schist in seams called dikes without regard to the twists of metamorphism, indicating that they were inserted or intruded into these formations later. Radiometric dating of the granite establishes a datum of its formation 470 million years ago. The mica schist has pockets of andalusite, a metamorphic mineral (which differs from a rock in having a single chemical formula ― in this case Al2SiO5) that provides quantitative detail of the metamorphic conditions of its origin. Laboratory testing demonstrates that this mineral forms at temperatures of 650 °C and pressures of 5.5 kilobars, which occur at a depth of 25 kilometers, or about 15 miles underground. Deciphering the hieroglyphics of the rock formations at Great Falls tells the story that unfolded in its Pangaean assembly. Mud and sand sediments that collected in the proto-Atlantic Ocean were first compressed by succeeding layers and then twisted and bent in the vise of colliding tectonic plates. The flow of lava incident to orogeny ensued, forming granitic intrusions. Everything came to a grinding halt when Pangaea was fully formed, leaving the relict as evidence when the ocean again parted. [11]
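
The 25 kilometer depth figure follows from the lithostatic pressure relation P = ρgh, rearranged to h = P/(ρg). The sketch below is an added illustration, not taken from the cited survey; the crustal densities are assumed values that bracket the quoted depth.

    # Burial depth implied by the 5.5 kilobar andalusite formation pressure
    P = 5.5e8   # pascals (5.5 kbar)
    g = 9.81    # m/s^2
    for rho in (2200, 2800):  # assumed crustal densities, kg/m^3
        print(f"rho = {rho} kg/m3: depth ~ {P / (rho * g) / 1000:.0f} km")
    # rho = 2200: ~25 km; rho = 2800: ~20 km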

The twisted metamorphic rocks of the Wissahickon Formation

On September 30, 1861, the Union army’s 71st Pennsylvania Regiment camped near Great Falls in Maryland, where Private McCleary (or possibly Carey) took note of the quartz rock outcrops that he saw there. Based on his perception that the rock type and disposition were characteristic of gold deposits, he formed a prospecting group after the war that purchased the property from a local farmer and sank a 100-foot exploratory shaft into the underlying bedrock. After arduous hard rock digging for several years that produced only eleven ounces of gold, the Maryland mine project was abandoned. In 1900, the Maryland Gold Mining Company obtained the property and tunneled extensively until 1908 with similar limited success; several years later a second mine site several miles west into what was called the Ford vein was established and prospected extensively. The final gold rush followed the increase in the price of gold to $35 an ounce in 1934, when a new Maryland Mining Company was formed, extracting 6,000 tons of ore yielding 2,500 ounces of gold before the final shutdown in 1940. Over the seventy years of operation, the total amount of gold produced from Maryland gold mines was 5,000 ounces with a market value of $150,000 … that amount of gold is now worth $9M. The geologic origin of the seams of gold in quartz veins is speculative in timing but settled in sequence. The quartz entered preexisting fault cracks in the original rock formations as a hot solution with a variety of minerals entrained, including occasional flakes of gold. The anastomosing or interconnecting veins range in size from a few inches up to twenty feet and extend in a belt roughly one quarter mile in width. In addition to elemental gold, iron sulfide or pyrite (also called fool’s gold) and lead sulfide or galena are the primary minerals. The cracks were formed long after the deposition and uplift of the graywacke and schist, probably during the Triassic Period 200 million years ago based on the existence of fissures in known and dated Triassic rocks in the general region. The faulting and hot solution insertion were the result of a final stage of uplift in the formation of Pangaea. [12] It took humans only a few years to dig out what nature had painstakingly assembled over eons. And all that for less than a ton of the entrancing metal called gold.
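
Both the valuation and the closing weight estimate check out with a little arithmetic. The sketch below treats the roughly $30 per ounce implied by the $150,000 figure and a modern price near $1,800 per ounce as assumptions, since gold prices drift.

    # Value and weight of the 5,000 troy ounces of Maryland gold
    TROY_OZ_TO_LB = 31.1035 / 453.592   # one troy ounce in avoirdupois pounds
    ounces = 5000
    print(f"Historic value: ${ounces * 30:,}")    # $150,000 at ~$30/oz
    print(f"Modern value:   ${ounces * 1800:,}")  # $9,000,000 at $1,800/oz
    print(f"Total weight:   {ounces * TROY_OZ_TO_LB:.0f} lb")  # ~343 lb, well under a ton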

References:

 1.  Lustig, L. K. “Waterfalls” Encyclopedia Britannica, William and Helen Benton Publisher, Chicago, Illinois, 1974, Volume 19, pp 638-643

2. Brugger, R. Maryland, A Middle Temperament, Johns Hopkins University Press, Baltimore, Maryland, 1988, pp. 63-70.

3. Patapsco Valley Heritage Greenway, Inc., “History/Culture” https://web.archive.org/web/20100310084026/http://www.patapscoheritagegreenway.org/history/HistPersp.html

4. Hulbert, A. George Washington and the West: Being George Washington’s Diary of September 1784, The New York  Century Company, 1905

5. Pickell, J. A New Chapter in the Early Life of Washington with the Narrative History of the Potomac Company, D. Appleton and Company, New York, 1856. pp. 25-50.

6. https://www.mountvernon.org/george-washington/the-potomac-company/

7. Cazeau, C, Hatcher, R., and Siemankowski, F; Physical Geology, Principles, Processes, and Problems, Harper and Row, New York, 1976, pp 374-393.

8. Romano, M. and Cifelli, R. “100 years of continental drift,” Science, Volume 350, Issue 6263, 20 November 2015.

9. Schmidt, M. Maryland’s Geology, Schiffer Publishing, Atglen, PA, 2010, pp 88-112.

10. http://www.virginiaplaces.org/regions/fallshape.html     

11. Reed, J., Sigafoos, R., and Fisher J. River and the Rocks, The Geologic Story of the Great Falls and the Potomac River Gorge Geological Survey Bulletin 1471, United States Government Printing Office, Washington DC 1980.

12. Reed, J. Gold Veins near Great Falls, Maryland, Geological Survey Bulletin 1286, United States Government Printing Office, Washington, DC, 1969.

Blueberry

Blueberries are blue because anthocyanin turns blue when the pH is basic – and to attract animals

Common Name: Blueberry, bilberry, whortleberry – Blueberry would be a strong candidate for the most innocuous common name. Unlike many fruits called berry, the blueberry is true to its name. A true blue berry. Bilberry is a European blueberry with a North American variant; whortleberry is of British origin.

Scientific Name: Vaccinium spp – The genus name was assigned by Carolus Linnaeus in 1753 and is of obscure origin, possibly originating as an early Latin name for the bilberry or whortleberry or, less plausibly, from hyacinth, a member of the lily family native to the eastern Mediterranean region. Spp. is an abbreviation meaning several species and is used whenever there is considerable variation due to hybridization, as is the case with blueberries. [1]

Potpourri: Blueberries are the most dependable and commendable trailside fruit eaten as browse by hikers on the move.  In the fall as the trees and shrubs wrap up the annual cycle of flowering, growing, and seeding to pass the cold winter months in quiescent repose, the berries await their purposed fate. The upland woods where they mostly grow are awash in the reds of maples and the yellows of hickories in the canopy above the contrasting blues at their base that bring the joys of sweetness and not the doldrums of despair.  The three primary colors represent the fullness of the visual light spectrum and metaphorically the completeness of nature’s cycle. The fruit that bears the seeds of the next generation is the most important part of the plant. But there are many questions that arise: Why do the berries start out red and turn blue? Why do they grow where they do? Why are there so many of them when each seed contains all the DNA instructions necessary and sufficient for a whole new plant?  All in good time. 

There are two general types of blueberry. To be consistent with the hackneyed name of the fruit with a berry that is blue, the shrubs on which the blueberries grow are called highbush and lowbush (HW41 and Dubya43?). Both are grown commercially according to geographic climate preferences. That is far from the whole story of blueberry types, however, as there are myriad hybrids both between them and separately from each. A recent scientific paper described the blueberry tribe Vaccinieae as a “large and morphologically diverse group that is widespread in the temperate and tropical zones of most continents.” DNA analysis was described as difficult because the characters normally used don’t work well. One of the reasons is that blueberries and their kin are not only diploid with two sets of chromosomes (like humans and many plants and animals), but also triploid, tetraploid, pentaploid … up to six sets, or hexaploid, depending on the species. It was concluded that the genus Vaccinium is not monophyletic, which means that its species do not all descend from a single common ancestor. For the purpose of a practical understanding of the habitat and range of the blueberry, highbush and lowbush will suffice. [2] The highbush blueberry Vaccinium corymbosum is on the tall side at five to fifteen feet, a tree of a bush. The species name indicates that it grows in corymbs, which are clusters of flowers and then berries in a more or less planar array. They start out with a pinkish-red tinge that matures to fully white, producing a blue berry after pollination, a truly patriotic succession. The highbush blueberry of the dry upland areas in the Appalachian Mountains is the fons et origo of the commercial blueberries cultivated in the United States. [3]

Flowers must attract pollinators to make a fertile seeded blueberry fruit

Those who deprecate the federal government for its intrusion into local affairs should consider Frederick Vernon Coville, a botanist with the USDA who may be considered the progenitor of the commercial blueberry. In the introduction to the USDA bulletin written to document his research, he provides the rationale for his quest. A nine foot high, three inch diameter highbush blueberry transplanted on the grounds of the Smithsonian Institution in Washington DC had been there since before 1871 and was probably fifty years old. That this was not unique was confirmed at the Arnold Arboretum in Boston, which had a number of thirty-year-old blueberries that had been grown from seed or transplanted prior to 1880. However, it was also true that all attempts to grow blueberry bushes using rich garden soils at agricultural research stations from Maine to Michigan had failed. [4] Why the former and not the latter? What was the missing ingredient? Starting in 1906, Coville planted test plots with different combinations of soil and nutrients to discover four years later that the ingredient was acid … blueberries and a number of related ericaceous plants like cranberry and huckleberry required an acidic soil (pH < 7). In 1911, he began a series of cross-pollination experiments to create cultivars with attributes like larger, sweeter, and denser berry clusters. [5] His field notes, which are listed as collection 413 in the USDA archives, consist of daily penciled entries of research field varieties and fruiting quality. [6] This was the way gene modification was accomplished in the pre-CRISPR-Cas9 era. The historic trial and error method relies on random chance while the new method is scientific, putting the chosen gene in the right place. Unfortunately, the use of science to add genes to produce beneficial cultivars with higher nutrition and better drought tolerance earns the damning epithet GMO (genetically modified organism) and is shunned by some as “Frankenfood.”

The lowbush blueberry Vaccinium angustifolium, named for its narrow leaves (Latin angustus, narrow, and folium, leaf), is literally diminutive in size at less than a foot. These are the dense berry thickets of New England and the Upper Midwest, notably Maine, where it is the state berry ― blueberry pie is second only to lobster in defining the local cuisine. Called the late sweet blueberry in edible wild plant field guides, it was probably the most important fruit for Native Americans as one of the main ingredients for pemmican, a concoction of meat and berries that was a trail food staple. [7] The importance of the lowbush blueberry extended beyond edibility, as the Chippewa placed dried flowers on hot stones as a treatment for “craziness” and the Iroquois used the berries in ceremony to invoke health and prosperity for the coming season. [8] Even though it readily hybridizes like its highbush cousin, the lowbush blueberry is more generally recognized by researchers as a single species that is highly polymorphic. The dense blueberry patches that occupy significant swaths of otherwise relatively sparse northern habitats are a major source of food for a wide variety of mammals and birds. Of particular note are black bears, whose reproductive success has been correlated to the size of the blueberry crop in a given year, and large ground dwelling birds like wild turkeys and ruffed grouse. [9]

The lowbush blueberry patches dominate the landscape of Acadia in Maine

Why are blueberries blue? Bright red cranberries in the same genus grow in similar habitats. Everything in nature has a purpose ― in most cases related to Darwin’s epiphany that survival is the outcome only for the fit, meaning the adaptable. Since fruits contain the seeds for future generations, fruit color must have evolved over time to attract animals as agents for transport. The animals in question cannot have been hunter-gatherer H. sapiens, since North America was devoid of hominids until about 12,000 years ago. While the determination of the responsible animal is speculative, there are some interesting facts and correlations that provide a basis for hypothesis. The first clue is that non-primate mammals cannot distinguish red. The generally accepted reason is that the smell and hearing senses are much more important for survival in the brushy shadows of their habitats than sight. It is also reasoned that primate red vision was beneficial in locating ripe fruits in the jungle canopies of African forests; more food, more survivors. Birds also see red, presumably for the same reason. It is demonstrably true that most berries are red and that avian dispersal is the primary vector for dissemination of the seeds of red-berried plants. A 2015 study compared the amount of berry seeds from 25 different plants (16 native and 9 invasive) in 450 bird dropping samples and found that birds ate almost exclusively from native plants. This makes sense if birds evolved eating native berries and continue to do so, eschewing the invasives (which unfortunately does not seem to slow the invasives much). [10] As an interesting correlation, the range where cranberries predominate coincides with the breeding grounds of the passenger pigeon. Though passenger pigeons are now extinct, having been extirpated by humans in the last century, a reasonable theory is that they ate cranberries and deposited the seeds all over their brooding areas, where the resultant cranberry bushes flourish.

Conversely, blueberries are not for the birds. This is not to say that birds won’t eat them, but only that bird consumption cannot have been the primary evolutionary forcing function … blueberries would have to be redberries. It is much more likely that blueberries became blue and sweet to attract mammals to propagate the seeds contained therein. A few observations again provide some basis for speculation. The first is that black bear fecundity, as pointed out above, correlates to the quantity of blueberries produced in any given year. Correlation is not causation, but there is more. It is demonstrably true that the area of North America where black bears are indigenous largely converges with the natural habitat of the lowbush blueberry. It is also true that black bears eat a lot of berries, which can easily be verified by looking at bear scat encountered on the trail ― invariably dotted with seeds. In my view, black bears or perhaps their evolutionary ancestors are the best candidates as the original blueberry propagators. That would also help to explain why there are so many berries … it takes a lot of berries to attract a bear.

One of nature’s more interesting innovations is how the color of a fruit is controlled to attract an animal. Anthocyanins are complex chemical compounds produced by some plants for which pigmentation is the principal function. Like litmus, they turn red in acidic environments and blue in basic ones. Anthocyanin is what makes roses red, violets blue, and plums purple. It is also the source of the brilliant colors of red and sugar maples in autumn. Here the red has a purpose, thought to be shielding individual leaves from damaging solar radiation in early fall so that more nutrients can be retained in the roots over winter. It is interesting to note that blueberries start out pinkish-red and gradually become blue as they ripen, whereupon they are ready to eat. Coville demonstrated that blueberry bushes grow in acidic soil, so the chemistry of the plant is clearly on the red or acidic side. This is amply demonstrated in the fall, when blueberry bushes turn brilliant red, indicating that they produce copious amounts of anthocyanin and that they are acidic. [11] The berries must therefore be turned blue by the plant, which can only be achieved with high pH or basic chemicals produced for this purpose – to attract bears.
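
The litmus analogy can be captured in a toy model. The function below is a deliberate simplification added for illustration (real anthocyanin color also depends on the particular compound, co-pigments, and metal ions), but it maps the rough color bands described here.

    # Toy model of anthocyanin as a pH indicator
    def anthocyanin_color(ph: float) -> str:
        if ph < 6.0:
            return "red"     # acidic: red roses, autumn blueberry leaves
        if ph < 7.5:
            return "purple"  # near neutral: purple plums
        return "blue"        # basic: ripe blueberries

    for ph in (3.0, 7.0, 8.5):
        print(f"pH {ph}: {anthocyanin_color(ph)}")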

Commercially grown blueberries, in the form of both V. angustifolium and the cultivars that originated with V. corymbosum, are second only to strawberries in quantity and value with annual sales of nearly $1B. Cranberries (V. macrocarpon), which are also heath family plants closely related to blueberries, are equally popular but almost wholly as juice and the essential holiday sauce. Raspberries and blackberries are at the back of the pack. But here we are literally talking apples and oranges. The botanical berry has little to do with the vernacular use of the term. A berry is defined as a fleshy fruit with seeds embedded in the pulp, like blueberries, huckleberries, and even tomatoes and watermelons. The strawberry is an accessory fruit with the seeds embedded in the external skin of the fruit, and both raspberries and blackberries are aggregates, consisting of many tiny berries clustered together. The natural foods movement gained ground at the turn of the millennium in reaction to the realization that processed food consumption had resulted in an epidemic of obesity according to both the CDC and NIH. Blueberries were among the more favored foods in the fruit and vegetable category that emerged as the gold standard for a healthy diet. Domestic production consequently rose 284 percent between 2000 and 2019, supplemented by South American imports that rose 1000 percent during that same period to supply blueberries year round. [12] Since nature operates according to opportunistic species seeking perpetuation and dominance, the large numbers of nutritive blueberries were ripe for exploitation by a fungus appropriately named Monilinia vaccinii-corymbosi for its assault on the species. It can destroy over 50 percent of the crop (85% in New Hampshire in 1974). The infected fruit goes by the pejorative name mummy berry for the swollen, wrinkled, gray blobs that result. [13]
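
Read as a 284 percent increase, the two-decade rise in domestic production corresponds to a steady compound rate, as the sketch below shows; the conversion, not the data, is the addition here.

    # Convert the 284% rise in production (2000-2019) to an annual growth rate
    growth_factor = 1 + 2.84   # a 284% increase means 3.84x the 2000 output
    years = 19
    annual_rate = growth_factor ** (1 / years) - 1
    print(f"~{100 * annual_rate:.1f}% per year")  # roughly 7.3% compounded annually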

Blueberries are good food for all animals, not just bears. High levels of vitamin C, vitamin K, and manganese are conveniently packaged with high fiber, the stuff of proper stools. More importantly from the health standpoint, blueberries have exceptional levels of flavonoids, notably anthocyanins … more than cranberries, strawberries, plums, and most other fruits. Flavonoid compounds are for the most part considered beneficial for health due to antioxidant and anti-inflammatory activity. The errant actor is the free radical, a malevolent name for an insidious chemical. Simply put, it is any atom or molecule with an unpaired electron. Since chemistry abhors the unpaired state in favor of stable pairs, free radicals react with anything available, like body cells and the DNA that they all contain. Disrupted DNA mostly results in cell death, but certain mutations can have more inimical effects, like the uncontrolled growth of a cancerous tumor. Antioxidant chemicals are in their most general form hydrogen donors that combine with and neutralize free radicals. Recent studies have cast some doubt on the role of anthocyanin as an effective antioxidant. However, the benefits of blueberries as part of a healthy diet are indisputable; the only question is why. Affirmed benefits of blueberries for human health include reducing cognitive degeneration, promoting heart health, reducing susceptibility to cancer, and lowering blood pressure, among many others. [14]

So why are blueberries blue? Because they contain anthocyanin in an aqueous environment with a pH high enough to render it basic blue in lieu of acidic red. This requires a certain amount of finesse, since the bush on which they grow is rooted in acidic soil and bears acidic leaves marked by the red of the same anthocyanin chemistry. But that is not really why blueberries are blue, only how they became blue. Why they are blue is that blue was the best color to attract an animal with enough drawing power and consistency to propagate the species. They are blue because of the ecosystem in which they are but one of many interconnected parts.

References:   

1.  https://naturalhistory2.si.edu/botany/ing/genusSearchTextMX.cfm  

2. Kron, K. et al. “Phylogenetic relationships within the blueberry tribe (Vaccinieae, Ericaceae) based on sequence data from matK and nuclear ribosomal ITS regions, with comments on the placement of Satyria”. American Journal of Botany. February 2002 Volume 89 Number 2 pp 327–336.

3. Niering, W. and Olmstead, N. National Audubon Society Field Guide to North American Wildflowers, Alfred A. Knopf, New York, 1998, pp 508-509

4. Coville, F. Experiments in Blueberry Culture. US Government Printing Office. 1910.

5. https://federallabs.org/successes/success-stories/blueberries-making-a-superb-fruit-even-better  

6. https://www.nal.usda.gov/speccoll/collectionsguide/collection2.php?search=coville

7. Elias, T. and Dykeman, P. Edible Wild Plants, A North American Field Guide, Sterling Publishing, New York, 1990, pp 164-167.

8. Ethnobotany database: http://naeb.brit.org/uses/search/?string=vaccinium

9. https://www.fs.fed.us/database/feis/plants/shrub/vacang/all.html     

10. Runwal, P. “Migratory Birds like Native Berries Best.” Audubon Magazine, 12 June 2020

11. Wilson, C. and Loomis, W. Botany, 4th Edition. Holt, Rinehart, and Winston, New York, 1967, pp 52-53.

12. https://www.ers.usda.gov/data-products/charts-of-note/charts-of-note/?topicId=14849     

13. Batra, L. “Monilinia vaccinii-corymbosi (Sclerotiniaceae): Its biology on blueberry and other related species”. Mycologia. Volume 75 Number 1 1983 pp 131-152.

14. https://www.ars.usda.gov/plains-area/gfnd/gfhnrc/docs/news-2014/blueberries-and-health/

Coral Fungi – Clavariaceae

Crown-tipped Coral is one of many fungi that have a branching pattern similar to ocean corals

Common Name: Coral Fungus – The branching of the fungal thallus resembles the calcium carbonate structure of ocean corals. Other common names are applied to differentiated shapes, such as worm, club, or tube fungi for those lacking side branches and antler fungi for those with wider, flange-like appendages. An extreme is the cauliflower fungus, which looks nothing like coral but is usually included in the coral-like category in field guides. The common Crown-tipped coral is depicted; the ends of its segments bear tines like miniature crowns.

Scientific Name: Clavariaceae – The family name for the coral fungi is derived from clava, the Latin word for “club;” the type-genus is Clavaria. The coral fungus above was originally Clavaria pyxidata, became Clavicorona pyxidata, and is now Artomyces pyxidatus. Pyx is from the Greek word pyxos meaning “box tree” from which boxes were made (and the etymology of the word box – a pyx is a container for Eucharist wafers). The implication for its use as a name for this species is “box-like.”

Potpourri:  Coral fungi look like coral. The verisimilar likeness can be so convincing that it seems plausible that they were uprooted from a seabed reef and planted in the woods for decoration. The delicate ivory and cream-colored branches rising in dense clusters from a brown-black dead log are one of the wonders of the wooded paths sought by those who wander there. There is an abiding benefit to having some knowledge of the things that nature has created, and coral fungi is a good collective mnemonic to apply to a group that surely must be closely related. And so it is for the traditionalists steeped in the lore of musty mushroom field guides, who are referred to collectively as the “lumpers.” The new world order of DNA has taken the science of biology on a wild ride with many hairpin turns and dead ends; in the case of mycology, the train has left the tracks more than once. Convergent evolution … that which created a marsupial mouse in Australia unrelated to the placental house mouse everywhere else … globally demonstrates Darwin’s vision. A branching form is a natural evolutionary path for separate organisms that started at different places and times. The diaspora of species from one genus to another in search of a home on the genetic tree of life has exploded the coral fungi into fragments. This is the realm of the “splitters,” the subdividers for whom a bar code will become the only true arbiter of species. There is of course a hybrid middle ground, acknowledging the latter but practicing the former, the province of most mushroom hunters.

Like all epigeal fruiting bodies extending upward above the ground from the main body of a fungus, which is hypogeal or below ground, the branching arms of coral fungi function to support and project the spore-bearing reproductive components called basidia. Gilled or pored mushrooms maximize the number of spores they can disperse by creating as much surface area as possible in the limited space beneath the cap or pileus. Similarly, coral fungi branch again and again, or extend myriad singular shafts, to get as many fingers of spore-bearing surface into the air as possible. [1] The topology of multiple extensions into a fluid medium is one of the recurring themes of convergent evolution. In this case, it has nothing to do with fungi per se. They look like coral because real coral is doing essentially the same thing; the namesake polyps secrete a type of calcium carbonate called aragonite to form protective exoskeletons in reefs that extend outward into the water where their food floats by. To extend the analogy to the rest of biology is a matter of observation. Trees send branches covered with photosynthesizing leaves toward the sun and roots toward the water and minerals of the earth, where they encounter the branching mycelia of fungi.
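The geometric payoff of branching is easy to put numbers on. The Python sketch below compares the lateral, spore-bearing surface of one stout club with that of many slender prongs built from the same volume of tissue; the dimensions are invented for the illustration.

# A back-of-the-envelope sketch of why branching pays: total lateral
# (spore-bearing) surface of many fine prongs versus one stout club.
# All dimensions here are illustrative assumptions.
import math

def lateral_area(radius_mm: float, height_mm: float, count: int = 1) -> float:
    """Lateral surface area in mm^2 of `count` cylindrical prongs."""
    return count * 2 * math.pi * radius_mm * height_mm

# One club of radius 8 mm has the same tissue volume as 64 prongs of
# radius 1 mm (volume scales with radius squared), but far less surface:
club  = lateral_area(radius_mm=8, height_mm=60)
coral = lateral_area(radius_mm=1, height_mm=60, count=64)
print(f"club:  {club:7.0f} mm^2")
print(f"coral: {coral:7.0f} mm^2 ({coral / club:.0f}x the spore-bearing surface)")

Same tissue, eight times the surface in this caricature: the coral habit earns its keep.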

Fungi have evolved to distribute reproductive spores with different mechanisms that could only have been naturally selected from the variations in form and function of random mutation. Among the more creative methods are the puffing of puffball spores out a hole in the top by the impact force of raindrops, the odorous spore-laden goo of stinkhorns that attracts insects seeking nutrients, and the redolence of truffles sought as food by burrowing or digging animals, their spores excreted intact after digestion. The coral fungi are among the most primitive of all basidiomycete fungi in having their club-shaped spore-bearing basidia positioned along the upper reaches of each prong so that the spores can be carried away by either wind or water. [2] Having more fruiting bodies with more branches creates more spores, which is why coral fungi are frequently found growing saprophytically in dense clusters on dead tree logs or in mycorrhizal clusters on the ground. Simply sticking club shapes into the air with spores attached near the ends is the most straightforward way to disperse them for germination.

The phylogenetic diversity of the coral fungi belies their similar ramified appearance. Historically, structure was thought to be the basis for taxonomic classification, an assumption that works reasonably well with plants and animals but not with fungi. The delicate and colorful appearance of the coral fungi brought them to the attention of the earliest naturalists, who grouped them according to shape. Since fungi were then considered members of the Plant Kingdom (Subkingdom Thallophyta), this was consistent with practice. The French botanist Chevallier placed them in the order Clavariées in 1826 with only two genera, Clavaria and Merisma, noting that “se distingue du premier coup d’oeil” – they can be identified at a glance in having “la forme d’une petite massue” – the form of a little club. [3] The assignment of fungi to families according to form lasted for over a hundred years until the nuances of microstructure and spore appearance initiated cracks in the biological foundation. Toward the end of the last century the fungi were recast as one of five kingdoms, and the foundational genus Clavaria was dissected into six genera with derivative names like Clavulina (little club) and Clavariadelphus (brother of Clavaria), which is how they appear in the most popular fungi field guides. [4]

In spite of the distinctive shape that suggests a unique origin, coral fungi are agarics, the historical group name for almost all gilled fungi. What is now the order Agaricales comprises over 9,000 species, more than half of all known mushroom-forming macrofungi, assigned to one of 26 families with about 350 genera that range from Amanita to Xerula. Carl Linnaeus, who established the first taxonomic structure in biology with the publication of Systema Naturae in the 18th century, placed all gilled mushrooms in a single genus that he named Agaricus. One hundred years later, Elias Fries published Systema Mycologicum, which separated the agarics into twelve genera based on macroscopic features such as the structure of the spore-bearing surface or hymenium (e.g. gills, pores, teeth, ridges, vase-shaped) and spore color (white, pink, brown, purple-brown, or black). Six groups of basidiomycetes were recognized based on the shape of the sporocarp or fruiting body ― “coral-like fungi” was one of them. While there was some expansion of genera over the ensuing decades, the so-called Friesian approach to gilled mushroom identification has persisted and is still generally in use, spore print color and all. The use of field characteristics is crucial to the practical application of mycology that serves the community of foragers looking for edible species and the other aficionados who enjoy their company. [5,6]
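Part of the Friesian method’s durability is that it reduces identification to a lookup on a pair of field characters. A toy version in Python follows; the table is an illustrative subset of what a complete field guide key contains, not an authoritative classification.

# A minimal sketch of a Friesian-style field key: hymenium type plus
# spore print color narrows a find to a classical group. The table is
# an illustrative subset, not a complete or authoritative key.
FIELD_KEY = {
    ("gills", "white"):        "Amanita, Tricholoma, and allies",
    ("gills", "pink"):         "Pluteus, Entoloma, and allies",
    ("gills", "purple-brown"): "Agaricus and allies",
    ("coral-like", "white"):   "clavarioid (coral) fungi",
}

def identify(hymenium: str, spore_print: str) -> str:
    """Look up the classical group for two macroscopic characters."""
    return FIELD_KEY.get((hymenium, spore_print), "not in this toy key")

print(identify("coral-like", "white"))  # -> clavarioid (coral) fungi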

Over the last several decades, the use of DNA to map out true phylogenetic relationships has upended the traditional taxonomy based on macroscopic structure and spore color. Unravelling the complex weave of evolutionary threads from one species to its predecessor is a monumental task that is just now gaining momentum. The goal is to determine the real or cladistic family tree so that a clade, the term adopted to refer to all species with a common ancestor, can be established with certainty. In one analysis, the agarics fell into six major clades, or single-ancestor groupings, named Agaricoid, Tricholomatoid, Marasmioid, Pluteoid, Hygrophoroid, and Plicaturopsidoid. The coral fungi are in the last of these, which diverged from all the other agarics at the earliest evolutionary branching in the Cretaceous Period some 125 million years ago. It is not unreasonable to conclude from this analysis that the coral fungi evolved a reliable and efficient method of spore dispersal early on and have thrived ever since, branching out to form new species all using the same technique. It is now equally evident that the shape of a fungus does not necessarily establish its proper branch in the family tree. The agarics, now the euagarics clade, contain not only fungi shaped like mushrooms and coral but also puffballs like Calvatia and Lycoperdon. Likewise, shapes extend across multiple clades. For example, coral-shaped fungi also appear as Artomyces, pictured above, in the Russuloid clade (russulas) and as Sparassis, pictured below, in the Polyporoid clade (polypores). This then is the dichotomy between the taxonomists of the old school, steeped in the Linnaean traditions of field identification, and the DNA systematists of the new school, for whom only the laboratory will do. [5,6]
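The lesson that shape no longer predicts ancestry can be made concrete with a small data structure. In the Python sketch below, the clade assignments follow the text above, while the representation itself is merely illustrative.

# Coral morphology recurs in separate single-ancestor groups, so shape
# alone cannot place a fungus on the family tree. Clade memberships
# follow the text; the structure is an illustrative sketch.
CLADES = {
    "Plicaturopsidoid (euagarics)": {"Clavaria"},
    "Russuloid":                    {"Artomyces", "Russula"},
    "Polyporoid":                   {"Sparassis"},
}
CORAL_SHAPED = {"Clavaria", "Artomyces", "Sparassis"}

# Three different clades each contain a coral-shaped genus:
for clade, genera in CLADES.items():
    corals = genera & CORAL_SHAPED
    if corals:
        print(f"{clade}: {sorted(corals)}")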

The new biological life history of coral fungi is still subject to the findings of the most recent research paper devoted to the group, and it may be decades before a settled taxonomy emerges. As a brief and incomplete history: in 1999, “four lineages containing cantharelloid and clavarioid fungi were identified,” with the clavarioid containing most of the corals, but also noting that “Clavicorona is closely related to Auriscalpium, which is toothed, and Lentinellus, which is gilled.” [7] In 2006, it was acknowledged that coral-shaped fungi must have evolved at least five times over the millennia and that the “evolutionary significance of this morphology is difficult to interpret because the phylogenetic positions of many clavarioid fungi are still unknown.” The new genus Alloclavaria was added to accommodate the unique fungus Clavaria purpurea, “not related to Clavaria but derived within the hymenochaetoid clade,” which consists mostly of bracket fungi. [8] Seven years later, the coral fungus family was found to consist of four major clades: Mucronella, Ramariopsis-Clavulinopsis, Hyphodontiella, and Clavaria-Camarophyllopsis-Clavicorona. This thorough phylogenetic analysis of 47 sporocarp sequences merged with 243 environmental sequences concluded that “126 molecular operational taxonomic units can be recognized in the Clavariaceae … an estimate that exceeds the known number of species in the family.” [9] Phylogenetic studies are continuing.
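The “molecular operational taxonomic unit” of that study is itself a computational construct: marker sequences are clustered, and each cluster above some identity threshold is counted as a putative species. The Python sketch below is a greedy toy version; difflib stands in for a real alignment tool, and both the sequences and the 97 percent threshold are illustrative assumptions.

# A minimal sketch of greedy OTU (operational taxonomic unit) clustering.
# The sequences and the 97% identity threshold are illustrative.
from difflib import SequenceMatcher

def identity(a: str, b: str) -> float:
    """Rough pairwise sequence identity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def cluster_otus(seqs, threshold=0.97):
    """Greedily assign each sequence to the first OTU whose
    representative it matches at or above the threshold."""
    otus = []  # list of (representative, members) pairs
    for s in seqs:
        for rep, members in otus:
            if identity(s, rep) >= threshold:
                members.append(s)
                break
        else:
            otus.append((s, [s]))  # s founds a new OTU
    return otus

# Toy marker fragments; real studies use full ITS or rDNA sequences.
reads = [
    "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGT",  # 40 bases
    "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGA",  # 1 mismatch: same OTU
    "GGCCTTAAGGCCTTAAGGCCTTAAGGCCTTAAGGCCTTAA",  # distinct: new OTU
]
print(len(cluster_otus(reads)), "OTUs")  # -> 2 OTUs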

Returning to the more mundane walk through the woods looking for coral fungi, the two most pressing questions concern edibility and toxicity. Neither subject is broached in the scientific literature, and, as with most fungi, the data points are empirical, relying on anecdote and random trial and error. For coral fungi, this is complicated by the fact that most are small and delicate and therefore rarely sampled by those seeking massive brackets of Chicken-of-the-woods and yellow clusters of chanterelles. Edibility has been a question ever since Chevallier first singled them out in 1826, noting that “Presque toutes les clavaires fournissent à l’homme une nourriture saine, on mange ordinairement les plus grosses” – almost all are good to eat but only pick the big ones – and “Elles n’ont aucune qualité vénéneuse; quelques-unes ont une saveur amère” – none are poisonous but some are bitter. [10] This sweeping assurance cannot have been the result of a thorough assessment, as there are good and bad corals. Modern guides are more circumspect, offering a range of information about edibility from choice to poisonous, with caveats about a laxative effect on some people and gastrointestinal distress in others. Many are of unknown edibility and likely to remain so. There is one standout worth noting that has the hallmarks of broad acceptability. The Cauliflower Mushroom (Sparassis americana – formerly crispa) is large, unusual, and common. It neither tastes nor looks much like a cauliflower. The “Elizabethan ruff of a mushroom” [11] is hard to miss, and there is no doppelganger to fool the hapless hunter.

The Cauliflower Mushroom looks nothing like coral, or cauliflower for that matter; it is more like a neck ruff of Elizabethan England.

References:

1. Arora, D. Mushrooms Demystified, Ten Speed Press, Berkeley, California, 1986, pp 630-658.

2. Schaechter, E. In the Company of Mushrooms, Harvard University Press, Cambridge, Massachusetts, 1997, p.49   

3. Chevallier, F. Flore Générale des Environs de Paris, Ferra Jeune, Paris, France, 1826, p. 102.

4. Lincoff, G. National Audubon Society Field Guide to North American Mushrooms, Alfred A. Knopf, New York, 1981, pp 398-414.

5. Matheny, P. et al. “Major clades of Agaricales: a multilocus phylogenetic overview”. Mycologia. August 2006 Volume 98 Number 6 pp 982–995.

6. https://www.mykoweb.com/articles/Homobasidiomycete_clades.html        

7. Pine E. et al “Phylogenetic relationships of cantharelloid and clavarioid Homobasidiomycetes based on mitochondrial and nuclear rDNA sequences”. Mycologia. 1999. Volume 91 Number 6 pp 944–963.

8. Dentinger B and McLaughlin D. “Reconstructing the Clavariaceae using nuclear large subunit rDNA sequences and a new genus segregated from Clavaria”. Mycologia. Volume 98 Number 5 September 2006 pp 746–762.

9. Birkebak J et al. “A systematic, morphological and ecological overview of the Clavariaceae (Agaricales)”  Mycologia. Volume 105 Number 4, February 2013, pp 896–911.

10. Chevallier, op. cit., p. 104.

11. Lincoff, op. cit., p. 412.

Red-spotted Purple Butterfly

The Red-spotted Purple is mostly purple and has red spots at the wing tips

Common Name: Red-spotted Purple and White Admiral – Butterfly names are in most cases descriptive, using color and pattern as leitmotif. The mostly dark blue wings, tinged with enough red to produce purple and culminating in red wing spots, provide one of the more mnemonic names. The alternative name White Admiral is the result of one of the more tantalizing tales of the lepidopterans as they change colors and patterns in mimicry, detailed below.

Scientific Name: Limenitis arthemis – The genus name literally means harbor goddess in Greek. The nautical association is apparently connected to their being called the admiral butterflies, as in White Admiral. The species name is from Artemis (Diana in Roman mythology), the Greek goddess of the hunt and hence the woods. A butterfly as metaphor for a goddess captures the graceful beauty of both.

The White Admiral has a single broad white stripe – like a US Navy admiral

Potpourri: The Red-spotted Purple and White Admiral are the same species, Limenitis arthemis. Mimicry, the term for an animal imitating another organism or object in shape and/or color, is an evolutionary and genetic response to the inexorable tug of survival. Although it may seem especially notable in this case because of the striking difference between white-striped and stripe-less wings, mimicry in its broadest sense is widespread. Some prey animals change colors according to age and season to provide better camouflage. The spotted fawn turns light tan as a doe or buck in summer and darker in winter to match the scenery. Predators must do the same in order to hide from their quarry long enough to effect the coup de grâce at the last moment. There are no black panthers, only melanistic leopards and jaguars turned night stalkers (both are in the genus Panthera). Aposematism is similar to mimicry in that coloration is used to ward off predators. But rather than being cryptic, the colors stand out against the background in sharp contrast, alerting the wary predator that poisons there lurk. The juvenile red eft of the red-spotted newt is a good example of aposematism … a defenseless amphibian that protects itself with vivid orange hues similar to those used by hunters to accentuate visibility.

White Admirals, the northern version of the Red-spotted Purple, are named for their prominent white stripes. It is perhaps only coincidence that U.S. Navy officer stripes progress from the ensign’s single narrow one to the admiral’s single broad one. The Red Admiral (Vanessa atalanta), which has a similarly placed red stripe, is otherwise unrelated. Boldly contrasting stripes on two unrelated species suggest purpose and convergence. While striping may be related to species or mate recognition, it is more likely a matter of predator avoidance, the moving flashes of colored streaks creating a disorienting stroboscopic effect. [1] Progressing geographically southward, the White Admiral’s broad stripe disappears and the red spots move forward to the edge of the wing tip to become both red-spotted and purple. This rather extraordinary transformation combines the aping of mimicry with the warning of aposematism, a hybrid scheme called apatetic in general or Batesian in particular. The wing is now more uniformly dark in color, resembling that of a butterfly of a different genus and species ― a poisonous doppelgänger.

The Red Admiral has a single red stripe

It is widely known that the Monarch butterfly is unpalatable to birds because its caterpillars eat milkweed (Asclepias syriaca), which produces cardiac glycosides toxic to most animals. It is mimicked by the Viceroy (Limenitis archippus), a generic cousin of the Red-spotted Purple, as a matter of enhanced survival. [2] The Green Swallowtail butterfly (Battus philenor) is better known as the Pipevine Swallowtail because its larvae feed on Dutchman’s Pipe (Aristolochia durior), a vine that produces a toxin called aristolochic acid. Since the range of the toxic Pipevine Swallowtail extends only as far as its namesake food plant, it is a southern butterfly because that is where the vines are. [3] The change is not cognitive choice, but rather choice by chance. The White Admirals that ventured south with less prominent stripes survived more frequently, since they were more likely to be avoided by predators. Over time and subsequent matings of diminished-stripe White Admirals, the stripes disappeared altogether and the Red-spotted Purple became the southern variant, sometimes listed as a separate subspecies, Limenitis arthemis astyanax. The name extends the mythological association to include Astyanax, the son of the Trojan hero Hector, hurled from the walls of Troy by the Greeks so he could not avenge the death of his father.

The Pipevine Swallowtail is copied by the Red-spotted Purple to escape predation by birds. It is called Batesian mimicry.

The kaleidoscopic patterns of butterfly wings are among the most artistic creations of nature. Their evolution, which began some 150 million years ago, was marked by three random mutation “inventions” that radiated in time and space along the way to produce the more than 18,000 extant named species. [4] The first and defining mutation was wing scales, from which the name of the order Lepidoptera was derived, lepis meaning scale and ptera wing in Greek. Scales are genetically modified sensory bristles that became flat, senseless, and slippery, probably to avoid capture … the survivors passed the scale genes along. The second invention was changing the scale colors, possible because each scale arises from a single cell with control of hue and texture, the combination producing different shadings and sometimes even iridescence. Lastly there was pattern, the genetics of placing colors in ordered arrangement. Spots in general and eyespots in particular start in the caterpillar stage, where an organizer puts them in the right position on the wing, a disc at this point. Colors are added in the chrysalis phase so that the adult butterfly wing emerges from metamorphosis with spots. These are usually at the margins of the wing so that a predator would strike there first, removing only a small portion of the wing as the butterfly flitted to safety. The efficacy of this is demonstrable, as many lepidopterans are found with a bite out of one wing. [5]

Butterflies are among the most studied of all animals, surely more a matter of their beauty and ease of net capture than of their scientific import as just another type of insect. Henry Walter Bates spent eleven years in the Amazon rainforest in the mid-nineteenth century, identifying 8,000 species then new to science, many of them butterflies. His studies led to the observation that some butterflies had patterns quite similar in appearance to those of unrelated species that were unpalatable to birds. He hypothesized that birds would learn to avoid them after only a few experiences and that this would then perpetuate the verisimilitude. When he returned to England in 1859 to recover from his epic jungle ordeal, he presented a paper on his discovery of butterfly mutations and what he considered to be one of the best examples of the “origin of all species and all adaptations.” [6] The phenomenon, known ever after as Batesian mimicry, became one of Darwin’s favorite examples in support of his epochal Origin of Species, which had just been published. The two developed an enduring friendship, corresponding periodically on the new ideas of evolution. Bates became one of the primary adherents of the nascent theory, writing on one occasion that “I think I have got a glimpse into the laboratory where Nature manufactures her new species.” [7] The headwinds of religious dogma required decades to overcome, but gradually and fitfully the theory has gained near universal acceptance, excepting those who adhere to biblical literalism.

With the advent of DNA as a roadmap of evolutionary change, Darwin’s insight remains a theory only insofar as it cannot be proven according to the scientific method of testing, which would require going back in time to reset the biological clock. The White Admiral conversion to Red-spotted Purple is one of the most documented of butterfly DNA subjects because of the infraspecific Mason-Dixon line that separates them. Proceeding north, the White Admiral prevails, while the far south is dominated by the mimetic Red-spotted Purple. The validity of Batesian mimicry has been well established. A thirty-year data set of Fourth of July Butterfly Counts confirmed that mimicry occurs even when the population of the unpalatable Pipevine Swallowtail is low and that a sharp phenotypic geographical transition marks the boundary. [8] Between the two extremes, there is a range over which hybridization occurs, affording a singular opportunity to study the interaction between the two variants according to DNA changes. Scientific research has established that the mimetic variant is monophyletic (descended from a single ancestor) and that the mimicry arose just once. The hybrids that exist in the transition zone are thought to be due to mating between the two, producing on occasion a Red-spotted Purple with faint or partial white stripes. [9] More recently, locating the mutation responsible for Batesian mimicry on the genes of two butterfly genera (Limenitis and Heliconius) that diverged 65 million years ago demonstrated the ancient shared genetic basis of this important survival trait. [10] Genetic confirmation provides the scientific “how” corresponding to the Batesian “why,” proof for all practical purposes of Darwin’s “theory.”
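The frequency dependence reported in the count data can be reproduced qualitatively with a toy predator-learning model. In the Python sketch below, a predator that tastes a toxic model avoids the shared wing pattern for a stretch of subsequent encounters; the population sizes, memory span, and trial count are invented parameters rather than field measurements.

# A minimal sketch of frequency-dependent Batesian mimicry: a predator
# that tastes a toxic model avoids the shared pattern for a while.
# Population sizes and the memory span are illustrative assumptions.
import random

def simulate(models: int, mimics: int, trials: int = 10_000,
             memory: int = 20, seed: int = 1) -> float:
    """Return the fraction of mimic encounters that end in an attack."""
    rng = random.Random(seed)
    prey = ["model"] * models + ["mimic"] * mimics
    avoid = 0                    # encounters of avoidance remaining
    attacks = encounters = 0
    for _ in range(trials):
        target = rng.choice(prey)
        if target == "mimic":
            encounters += 1
        if avoid > 0:
            avoid -= 1           # pattern still avoided; prey escapes
            continue
        if target == "model":
            avoid = memory       # unpalatable meal: predator learns
        else:
            attacks += 1         # palatable mimic is eaten
    return attacks / max(encounters, 1)

# Even a rare toxic model confers protection on a common mimic:
for m in (0, 5, 50):
    print(f"{m:2d} models per 100 mimics: attack rate {simulate(m, 100):.2f}")

Even five models per hundred mimics roughly halves the attack rate in this caricature, echoing the finding that a rare model still limits predation on its far more common mimic.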

The employment of the Batesian mimicry of Limenitis arthemis in scientific research on butterfly sex must surely have been considered for the Ig Nobel Prize in biology. One of the more compelling examples of female reproductive choice is sperm retention and storage after mating for fertilization at a later, more auspicious time. In that this would enhance the survival of subsequent generations, it has evolved independently across the animal kingdom to include some insects, butterflies among them. It is also the case that many animals mate more than once; males with a genetically driven propensity to sire as many offspring as possible and females to ensure successful insemination with the best possible mate characteristics. It is hard to say for sure, but it may also be that both enjoy it. Among the more profound questions facing biology is whether the sperm from a second mating displaces that of the first or whether the two mix together to produce hybrids. Using the wing patterns that resulted as the biological metric, 17 females were mated with 34 males to conduct the experiment (it was not reported if this was consensual). The results were used to determine “insect mate-seeking strategies and individual fitness.” In that it was the first male’s sperm that prevailed, the conclusion was that it was not in the best interests of either the female or the male to mate multiple times. This then led to the conclusion that “virgin females apparently are sought by males and probably are more receptive to courtship and successful mating than are ones which have mated previously.” [11] This, at least, is the same theory espoused by some college fraternities and numerous religious denominations.

References

1. https://entnemdept.ufl.edu/creatures/bfly/red-spotted_purple.htm    

2. Marshall, S. Insects, Their Natural History and Diversity, Firefly Books, Buffalo, New York, 2006, pp 161-167.

3. Milne, L. and M. National Audubon Society Field Guide to Insects and Spiders, Alfred A. Knopf, New York, 1980, pp 718-719.

4. Heikkilä, M. et al. “Cretaceous origin and repeated tertiary diversification of the redefined butterflies”. Proceedings. Biological Sciences. 22 March 2012 Volume 279  Number 1731 pp 1093–1099.

5. Brunetti, C. et al. “The generation and diversification of butterfly eyespot color patterns”. Current Biology. 16 October 2001 Volume 11 (20) pp 1578–1585

6. Bates, H. “Contributions to an insect fauna of the Amazon valley. Lepidoptera: Heliconidae”. Transactions of the Linnean Society. 21 November 1861 Volume 23 Number 3. pp 495–566.

7. Carroll, S. Endless Forms Most Beautiful, W.W. Norton, New York, 2005, pp 197-219.

8. Ries, L. and Mullen, S. “A Rare Model Limits the Distribution of Its More Common Mimic: A Twist on Frequency-Dependent Batesian Mimicry”. Evolution. 4 July 2008 Volume 62 (7) pp 1798–1803.

9. Savage, W.; Mullen, S. “A single origin of Batesian mimicry among hybridizing populations of admiral butterflies (Limenitis arthemis) rejects an evolutionary reversion to the ancestral phenotype”. Proceedings of the Royal Society B: Biological Sciences. 15 April 2009 Volume 276  Number 1667  pp 2557–2565 at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2686656/  

10. Gallant, J. et al “Ancient homology underlies adaptive mimetic diversity across butterflies” Nature Communications, 8 September 2014 Volume 5, p 4817.

11. Platt, A. and Allen, J. “Sperm Precedence and Competition in Doubly-Mated Limenitis arthemis-astyanax Butterflies (Rhopalocera: Nymphalidae)”. Annals of the Entomological Society of America. 1 September 2001 Volume 94 (5) pp 654–663.