Hydropower

The Conowingo Hydroelectric Dam on the Susquehanna River is a two-mile hike north of Susquehanna State Park along the railroad right-of-way that was installed to transport building materials for the dam, which opened in 1928.

Hydroelectricity is one of the three sources of renewable energy capable of providing electricity on a global scale. All three ultimately derive their energy from the sun. Photovoltaic (PV) panels convert the sun’s photon energy directly into electricity. Wind turbines collect the energy of pressure differences caused by the sun’s uneven heating of the earth’s surface. Hydro energy is more nuanced. The sun evaporates water from the oceans. As water vapor rises and moves with winds swirled by the Coriolis effect, clouds form and the vapor condenses, falling as rain or snow. Water falling on land finds its way back to the ocean by forming rivers that flow from higher to lower elevation. The potential energy of the water at higher elevation is converted to kinetic energy as it moves downhill. Turbines placed in the flow and connected to electrical generators convert that energy to hydroelectric power. Dams regulate the flow so that the power is constant and continuous rather than being subject to the whims of weather in flood and drought. Hydro has recently taken on a new role. Since wind and sun are intermittent but hydro is constant, water has gained importance as the link between the two. Power supplied by wind and solar in excess of demand runs pumps to move water uphill to an elevated reservoir. The energy thus stored is reclaimed when the water flows back downhill, now directed through a hydroelectric generator. Pumped storage hydropower (PSH), as this arrangement is called, thus provides energy storage, a mandatory capability for a future global electrical system dominated by renewables.
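
The energy bookkeeping of PSH is simply gravitational potential energy, E = mgh, discounted by pumping and generating losses. A minimal sketch, with the reservoir volume, head, and round-trip efficiency as illustrative assumptions rather than figures for any actual facility:

```python
# Gravitational potential energy stored by pumping water uphill: E = m * g * h.
# Volume, head, and efficiency below are illustrative assumptions.

G = 9.81            # gravitational acceleration, m/s^2
RHO_WATER = 1000.0  # density of water, kg/m^3

def stored_energy_mwh(volume_m3: float, head_m: float, round_trip_eff: float = 0.8) -> float:
    """Recoverable energy (MWh) for water pumped to a reservoir head_m above
    the turbines, discounted by the round-trip efficiency of pump plus generator."""
    joules = RHO_WATER * volume_m3 * G * head_m * round_trip_eff
    return joules / 3.6e9  # 1 MWh = 3.6e9 J

# Example: 4 million cubic meters lifted 300 m at ~80% round-trip efficiency
print(f"{stored_energy_mwh(4e6, 300):.0f} MWh")  # ~2616 MWh
```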

Hydropower is a broad and underutilized word that applies to any means by which water in its liquid state is used as a source of energy for human endeavor. It is useful as an inclusive term that applies throughout the course of human history. Water for mills and the machinery of early, mainly textile, factories was generally called water power. Starting in the late 19th century, water became one of the main sources for generating electricity, mostly through the construction of dams, giving rise to the term hydroelectric power. Hydropower does not refer to the use of hydrogen gas as a source of energy (power is the rate of using energy), although this may one day become a neologism if hydrogen power proliferates. The similarity and possible confusion arise from the fact that Antoine Lavoisier, the father of chemistry, concocted the word hydrogen from Greek words meaning “I beget water” for his seminal work of chemistry, published in English as Elements of Chemistry in 1790. The rationale for the name was that oxygen, meaning “I beget acid,” had previously been named based on the observation that sulfur, phosphorus, and carbon produced acidic solutions when burned. Hydrogen mixed with oxygen “begot” water (2H2 + O2 = 2H2O), which was at the center of scientific inquiry from its inception. Thales, a Greek philosopher of the 6th century BCE, held that water was the primary substance from which all other matter was formed. [1] While water is crucial to the evolution of life, to the erosion of uplifted land back to the sea, and to the swirling chaos of weather, it falls short of being the elemental element. But it could be considered the elemental compound.

The use of water to do work dates from the dawn of prehistory as agriculture radiated outward from Mesopotamia and the first cities became food production and distribution centers. More mouths to feed with larger harvests gave rise to a better and more efficient way to make flour from threshed seeds in the Neolithic, literally the New Stone Age. The first written account of water turning a millstone to grind grain appears in the writings of Antipater of Thessalonica in the 1st century BCE: “Demeter (Greek goddess of agriculture) has reassigned to the water nymphs the chores your hands performed.” A horizontal wheel in a flowing stream was connected by an axle to a large circular stone that rotated against a second stationary stone with the force otherwise provided by man or perhaps donkey power. The much more efficient vertical wheel, which could use both the weight and the velocity of water, required gearing to convey its rotary motion to the horizontal stone; it was first described by the Roman engineer Vitruvius as hydraletae, Latin for water mills, in 27 BCE. The watermill became the cynosure of the hamlets of England, where mills grew in number from 6,000 according to the Domesday survey in 1086 to 30,000 by 1850. As the American colonies were settled and farmers migrated inland, watermills followed to make flour for daily bread to feed the burgeoning nation. Their remnants, long abandoned, abound.

The mill at Nethers, Virginia, is just down the road from the Old Rag Trailhead.

The use of water advanced from foundational milling of flour to powering industry as an integral part of the nascent Industrial Revolution in the early 19th century. It was a matter of economics: waterwheels were the most efficient sources of energy available. Two laborers could manually grind 15 pounds of flour per hour (200 watts), a mule-driven mill could double that, but a waterwheel produced about 200 pounds with a power of 2,000 watts or 2 kW―enough to feed a village of 3,500 inhabitants. Water power could be scaled up by enlarging the wheel and/or by employing multiple wheels. To supply the 1,400 fountains and waterfalls at King Louis XIV’s magnificent palace at Versailles, near Paris, France, fourteen 30-foot wheels were installed on the Seine River between 1680 and 1688 to drive 200 pumps. Glasgow, Scotland became a manufacturing entrepôt in the 1830s in part because of the massive water works on the Clyde River at Greenock, whose 20 waterwheels could provide about 2,000 kW or 2 megawatts (MW). The full potential of water as a viable power source to meet the demands of expanding industry was realized with the water turbine, invented by the French engineer Benoit Fourneyron in 1827. A turbine, named from the Latin word for a whirling vortex, uses curved blades with radial outward flow, increasing the efficiency of the traditional flat-board waterwheel substantially. Water turbines were the primary source of power along the Merrimack River in Massachusetts, supplying 60 MW to hundreds of textile mills in 1875, 80 percent of all power. [2]
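
These power figures all follow from the same hydraulic relation: power equals density times gravity times flow times head times efficiency, P = ρgQhη. A sketch in which the stream flow, head, and efficiencies are assumed values chosen to reproduce the order of magnitude quoted above, not historical measurements:

```python
# Hydraulic power: P = rho * g * Q * h * eta, where Q is volumetric flow and
# h is the working head (vertical drop). Flow, head, and efficiencies are
# illustrative assumptions.

G = 9.81            # m/s^2
RHO_WATER = 1000.0  # kg/m^3

def hydraulic_power_watts(flow_m3_s: float, head_m: float, efficiency: float) -> float:
    """Mechanical power delivered by water falling through head_m meters."""
    return RHO_WATER * G * flow_m3_s * head_m * efficiency

# A modest mill stream: 0.5 m^3/s over a 1 m drop at 40% wheel efficiency
print(f"{hydraulic_power_watts(0.5, 1.0, 0.40):.0f} W")  # ~1962 W, about 2 kW

# A turbine at ~80% efficiency doubles the output of the same stream
print(f"{hydraulic_power_watts(0.5, 1.0, 0.80):.0f} W")  # ~3924 W
```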

Waterwheels powered factories with rotary motion to turn pulleys linked by gears and belt drives to spindles for weaving fabric, to operate saw blades for cutting lumber, and for many other similar mechanical operations. Hydroelectricity became possible only after the invention of a device that could use the rotary motion imparted by a water turbine to generate current. The dynamo was invented by Michael Faraday in 1831 as a practical application of his discovery of electromagnetic induction: any conductive material moving through a magnetic field has a current of moving electrons induced in it. The rotation of an iron rotor through a magnetic stator converted mechanical to electrical energy, giving rise to the induced-current generator or dynamo. Any rotational device, such as a steam engine, could provide the motive force. Initial development of the dynamo generator as a practical device was one of the many inventions of the inimitable Thomas Edison at Menlo Park. The impetus was to provide the constant voltage source necessary to power the incandescent lights that he was developing concurrently, with a stated goal of creating a central station for lighting all of New York City. The resultant dynamo, nicknamed “long-waisted Mary Ann” for its unusual two upright columns, was found not only to produce a nearly constant 110-volt direct current (DC) output but to do so at 90 percent efficiency, twice that of its variable-voltage predecessors. At 3 P.M. on Monday, September 4, 1882, Edison gave the order to start up four boilers to make steam for nine steam engines connected to Edison dynamos at Pearl Street Station in New York City to provide electricity to 400 incandescent lamps. [3] While momentous as a practical demonstration of the use of electricity, the DC generated by Edison’s dynamos was limited in range to about one mile. The subsequent development of alternating current (AC) would solve that problem.
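
The induction at the heart of any dynamo is Faraday’s law: the voltage equals the rate of change of magnetic flux through the coil, which for a spinning coil peaks at N·B·A·ω. A minimal sketch with illustrative coil parameters (none of them Edison’s actual specifications):

```python
# Faraday's law for a coil rotating in a uniform magnetic field:
# flux Phi(t) = B * A * cos(omega * t), so EMF = -N * dPhi/dt
# = N * B * A * omega * sin(omega * t), with peak value N * B * A * omega.
# All parameter values below are illustrative assumptions.

import math

def peak_emf_volts(turns: int, field_tesla: float, area_m2: float, rpm: float) -> float:
    """Peak voltage induced in a coil of the given number of turns spinning at rpm."""
    omega = rpm * 2 * math.pi / 60  # angular speed, rad/s
    return turns * field_tesla * area_m2 * omega

# 200 turns, 0.1 T field, 0.05 m^2 loop area, spinning at 1200 rpm
print(f"{peak_emf_volts(200, 0.1, 0.05, 1200):.0f} V")  # ~126 V peak
```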

Niagara Falls played a seminal role in the development of electricity as the backbone of modern industry. The prodigious flow of the Niagara River that carries the waters of the Great Lakes to the Atlantic Ocean over a 167-foot cliff has attracted tourists for centuries (Edison spent his honeymoon there) and inevitably those interested in harnessing its water power. A small waterwheel-driven sawmill constructed in 1759 was succeeded 80 years later by a generating station that provided a small amount of electricity from a DC generator for mills in what had by then become a namesake village. The Cataract Construction Company was organized in the 1890s with the express purpose of building a water tunnel to supply a large-scale hydroelectric power plant and sell electricity to customers at a profit. However, unlike New York City with its closely clustered businesses, upstate New York was remotely situated and would require long-distance transmission. [4] The technology of electrical generation underwent a sea change when George Westinghouse bought the patents of Nikola Tesla (who originally worked for Edison at Menlo Park) to design an alternating current (AC) generator. As AC could be transformed to higher voltages for efficient transmission over long distances, the decision, considered radical and risky at the time, was to install AC generators in the Niagara Falls hydroelectric plant. It began transmitting power twenty miles to Buffalo, New York, in 1896, helping to make that city the first modern industrial mecca. Thereafter, 80 percent of all new generating capacity was AC as the nation’s electrical grid took shape with remote power stations, like hydroelectric dams, feeding a network of long-distance power lines. Niagara Falls was a watershed reservoir for the watershed technology of AC power. [5] It now boasts 60 generators producing 5,000,000 kW, or 5 gigawatts (GW).
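
The arithmetic behind that radical decision is short: for a fixed power delivery P = V·I, resistive line loss is I²R, so raising the voltage tenfold cuts the current tenfold and the loss a hundredfold. The line resistance and load below are illustrative assumptions, not figures for the Buffalo line:

```python
# Why AC transmission won: line loss is I^2 * R, and for fixed delivered power
# P = V * I, higher voltage means proportionally lower current. The 1 MW load
# and 5-ohm line resistance are assumed values for illustration.

def line_loss_fraction(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Fraction of transmitted power dissipated as heat in the line."""
    current = power_w / voltage_v
    return current**2 * resistance_ohm / power_w

for volts in (1_000, 10_000, 100_000):
    print(f"{volts:>7} V: {line_loss_fraction(1e6, volts, 5.0):.2%} lost")
# 1,000 V: 500.00% (a loss exceeding 100% means the line cannot deliver at all);
# 10,000 V: 5.00%; 100,000 V: 0.05%
```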

The nameplate from the Westinghouse AC generators installed at Niagara Falls is on display at the Smithsonian Museum of American History as one of the significant objects relating to power and industry.

Construction of hydroelectric dams by the United States government resulted from congressional measures to address the international threat posed by Germany and its allies during the First World War after the sinking of the Lusitania in 1915. The National Defense Act of 1916 doubled the size of the Regular Army and the National Guard and authorized the construction and operation of a nitrate plant for munitions “at a cost not more than 20 million.” President Woodrow Wilson chose Muscle Shoals on the Tennessee River in Alabama as the site for a hydroelectric dam to provide power for the nitrate plant. This was at least in part a matter of investing in the impoverished South, which had yet to fully recover from the devastation and disruption of the Civil War; Wilson was from Virginia. [6] The eponymous Wilson Dam, still in operation producing 663 MW of electricity, was not completed until 1924 and had no impact on the war, but a tremendous impact on the region. The optimism of the Roaring Twenties gave rise to stock speculation culminating in the market’s precipitous plunge on Black Thursday, 24 October 1929. President Herbert Hoover, as an engineer and businessman, held to the belief that cajoling industrial and financial leaders to increase spending would stanch the downward spiral. Hoover vetoed a bill that would have converted the Muscle Shoals nitrate plant and hydroelectric dam into a government-run operation with the express purpose of providing electricity to the Tennessee region, preferring to have it run by private enterprise. By 1930, the Hoover Administration reluctantly concluded that some economic stimulus was warranted. Half a billion dollars was authorized for public works, including 65 million to construct Boulder Dam (later renamed Hoover Dam) on the Colorado River at the border between Arizona and Nevada. The economy continued to founder. Franklin Delano Roosevelt pledged in 1932 to restore economic prosperity through federal government action and was elected in a landslide with 472 of the 531 electoral college votes. [7]

Roosevelt’s New Deal ushered in the age of massive government programs, instituting a comprehensive plan to put the nation back to work. Building hydroelectric dams to bring power to the people was a core precept. With the Wilson Dam at Muscle Shoals as a model, the US Congress passed the Tennessee Valley Authority (TVA) Bill in May 1933. It was one of the most important and far-reaching initiatives in the history of the country. It gave the federal government the authority to construct and operate hydroelectric dams in a seven-state region encompassing 40,000 square miles to “generate and sell electric power particularly with a view to rural electrification … and to advance the economic and social well-being of the people living in said river basin.” To accomplish this lofty goal, the government erected over 4,000 miles of transmission lines and subsidized rural electrification to quadruple the number of customers connected. The TVA is currently the largest public power provider in the United States and the fourth largest electric power provider, with 29 hydroelectric sites employing over 300,000 people. The TVA model of government-owned and -operated hydroelectric dams to “use the facilities of a controlled river to release the energies of the people” was replicated in the Columbia River region of the Pacific Northwest: construction of the Grand Coulee Dam in Washington State started in 1933, followed by the Bonneville Dam in Oregon in 1934. [8] By 1940, 40 percent of the electricity in the United States was provided by hydroelectric dams. The Depression Era ended with the full employment necessary to build the arsenal of democracy that won World War II. When it was over, the nuclear energy of the atomic age replaced hydro as the energy of the future.

Hydroelectric power has declined in importance in the United States over time, now providing only 6.2 percent of electricity overall and 28.7 percent of that which is renewable. This is the consequence of growing demand for electricity in an increasingly industrialized society relative to the static availability of appropriate locations for dam construction. Almost all of the good spots are taken, and the older, larger-capacity dams are over 80 years old. The United States is currently home to over 2,200 hydropower units that produce 80 gigawatts of electricity. Between 2010 and 2022, hydropower in the United States grew by only 2.1 GW, almost entirely due to upgrades to existing hydroelectric dams (1.4 GW) and to additions of generators at 32 non-powered dams (550 MW). During the same period, 68 hydropower licenses were terminated for a total of 330 MW. By contrast, in 2024 alone the United States added 30 GW of solar power. [9] Hydropower dams are in decline in the United States for several reasons. One is the droughts that are increasingly dire as global temperatures rise and more water evaporates. When the 2.1 GW Hoover Dam was completed in 1936, Lake Mead was the largest reservoir in the United States. Due to a series of droughts in the 21st century, it has been reduced to as low as one quarter capacity, with the plant’s four intake towers above the water line. [10] The second reason for the declining interest in hydropower is environmental. Dams disrupt the natural flow of sediments and impede the movement of fish upriver. The removal of the dams on the Elwha River on the Olympic Peninsula in Washington State in 2014 was the largest intentional dam removal in the world until the removal of the Klamath River dams in California in 2024. A total of 1,951 dams were removed in the United States between 1912 and 2021. [11] Hydroelectric energy, while on the decline in the United States, is expanding in some areas of the world, notably China.

Unlike the network of hydroelectric facilities in the United States that were erected decades ago, global hydroelectric power was delayed until the broad industrialization that took hold in the second half of the 20th century. Since the beginning of the 21st century, global hydropower has grown 70 percent and now provides one sixth of the world’s electricity. Hydroelectric power trails only coal and gas as third in overall capacity, is larger than all other renewable sources combined, and provides the lion’s share of electricity in 28 emerging and developing countries. Between 2021 and 2030, hydroelectric capacity is projected to rise by 230 GW, an additional 17 percent. However, that marks a 23 percent reduction from the preceding decade of 2011 to 2020, indicating that the availability of sites with sufficient water flow and favorable geology is now reaching saturation worldwide. China is a case in point. Between 2001 and 2010, it became the world leader in hydroelectric power, accounting for nearly 60 percent of global capacity growth. [12] The Three Gorges Dam on the Yangtze River, at 22.5 GW the largest-capacity hydroelectric plant in the world, was completed in 2003. During its nine-year construction, the overall cost was over $30 billion, some of which was to resettle the 1.3 million people displaced by the resultant reservoir. China’s growth has slowed since, so that in 2025 it is still in first place, but with only 30 percent of global capacity. In a renewed bid to regain momentum, China recently approved the construction of an even larger dam on the lower reaches of the Yarlung Tsangpo River, which flows from Tibet to become the Brahmaputra in India and Bangladesh, where it empties into the Bay of Bengal. It is expected to be three times the size of the Three Gorges Dam and to cost $100 billion more. Aside from international protests from the downstream countries, local Tibetans were involved in a protest in February of 2024 at the site of another dam. [13] [14]

Pumped Storage Hydro (PSH) facility encountered on a hike in central Germany.

While new hydroelectric projects are in decline, the use of water for energy storage, known as pumped storage hydro or PSH, is undergoing a renaissance. There has always been a need for large-scale electricity storage. This is because electricity supply depends on the number and size of generators, which are traditionally either on or off, while electricity demand varies with both diurnal workday activities and the seasons. To allow for some flexibility in balancing supply and demand, PSH was pioneered in the Swiss Alps in the early 1900s. The basic idea is to use the excess supply during periods of low demand to operate pumps that move water from a low point to a higher point. The stored energy is used to augment supply when demand increases by allowing the water to flow back downhill through turbine generators. When the specter of climate change energized the mandate for renewable energy sources, the storage problem got worse: wind and solar are as variable on the supply side as the load is on the demand side. While large-scale rechargeable battery arrays can be and have been used, they are not usually economically viable. There are 43 PSH units in the US with a storage capacity of 553 GWh, providing over 90 percent of all large-scale electricity backup power. While 14,000 sites have been identified for possible PSH installations, the $2B price tag for a large unit is likely prohibitive. [15] While hydropower is a reliable and proven source of renewable energy with some limited storage capacity as PSH, it will not close the gap necessary to reduce carbon dioxide emissions on its own.
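
A rough sense of what 553 GWh means: stored energy divided by discharge power gives hours of supply. The fleet output used below (about 22 GW) is an assumed round figure for the combined US PSH fleet, not a number from the source:

```python
# Storage duration: energy capacity (GWh) divided by discharge power (GW)
# gives hours of full-power supply. The 553 GWh figure is from the text;
# the ~22 GW fleet output is an assumption for illustration.

def hours_of_supply(stored_gwh: float, output_gw: float) -> float:
    """Hours the stored energy can sustain the given discharge power."""
    return stored_gwh / output_gw

print(f"{hours_of_supply(553, 22):.0f} hours")  # ~25 hours at full output
```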

References:

1. Wothers, P. Antimony, Gold, and Jupiter’s Wolf: How the Elements were Named, Oxford University Press, Oxford, England, 2019, pp 110-118. The fourth chapter of the book is entitled ‘H two O to O two H’ and is devoted to unravelling the chemical nature of water.

2. Smil, V. Energy in Nature and Society, General Energetics of Complex Systems, The MIT Press, Cambridge, Massachusetts, 2008, pp 180-184. Vaclav Smil is one of the world’s most respected authorities on power and energy. The book is encyclopedic.

3. Josephson, M. Edison, The Easton Press, Norwalk, Connecticut, 1986, pp 175-208.

4. https://www.niagarafrontier.com/power.html 

5. Needham, W. The Green Nuclear Option, Outskirts Press, Denver, Colorado, 2022, pp 113-115.

6. Link, A. Woodrow Wilson and the Progressive Era 1910-1917, Easton Press, Norwalk, Connecticut, 1982, pp 174-196

7. Wecter, D. Age of the Great Depression, The Macmillan Company, New York, 1948, pp 44, 70.

8. Morison, S. and Commager, H. The Growth of the American Republic, Volume II, Oxford University Press, New York, 1950, pp 603-606.

9. Uria-Martinez, R. and Johnson, M. US Hydropower Market Report, US Department of Energy, Office of Scientific and Technical Management, Washington, DC, 2023.

10. Kolbert, E. “A Vast Experiment, The Climate Crisis from A to Z” The New Yorker, 28 November 2022, p 47

11. Gleick, P. The Three Ages of Water, Public Affairs, Hachette Book Group, New York, 2023, pp 245-247

12. Hydropower Special Market Report to 2030, International Energy Agency, 2020. https://iea.blob.core.windows.net/assets/4d2d4365-08c6-4171-9ea2-8549fabd1c8d/HydropowerSpecialMarketReport_corr.pdf

13. “Dam!” The Economist, 4 January 2025, p 28.

14. Shepherd, C. “China pushes ahead with huge, and controversial, dam in Tibet” Washington Post, 27 December 2024.

15. Kunzig, R. “Water Batteries” Science, Volume 383, Issue 6681, 26 January 2024, pp 359-363.

Groundhog/Woodchuck

Groundhog foraging for food along the edge of a field not far from one of the entrances to its den refuge.

Common Name: Groundhog, woodchuck, forest marmot, whistle pig, marmotte commune (French), waldmurmeltier (German), Marmota canadiense (Spanish) – Groundhog is thought to derive from a translation of the Afrikaans aardvark; aarde means “earth” and vark means “pig”. This may have come to North America with the Dutch settlers of New Amsterdam. Earth pig and ground hog are synonymous.

Scientific Name: Marmota monax – The generic name comes from the French marmotte, a shortened form of the Old French marmontaine, ultimately from the Latin mus montis, “mountain mouse,” which is metaphorically similar to ground hog. The specific name is from the Greek monos, meaning single or alone, referring to the animal’s characteristically solitary, asocial behavior.

Potpourri: The groundhog is also known colloquially as woodchuck, a name with a disparate Native American etymology. The various tribes of the Northeast were familiar with the indigenous mammal, as it ventures abroad openly yet furtively in search of food during daylight hours. On being startled by a relatively large and surprisingly fast woodchuck inadvertently encountered alongside a hiking trail, “big – brown – fluffy” was the descriptive name blurted out by one hiker. Perhaps due to similar and more frequent run-ins by different members of different tribes with different languages, a variety of names arose over thousands of years of encounters: ockqutchaun in Narragansett; otchig in Ojibwa; otcheck or wuchak in Cree. [1] It is not clear that the last was the name given to the groundhog, as one translation of the Cree name is “he who fishes,” which was given to any of various fishing animals, and groundhogs are not noted for catching or eating aquatic animals. Regardless of the precise etymology, which is rarely a matter of certainty, the name wuchak was adopted by colonists. Many plants and animals of the New World had no European equivalents and were similarly christened. When words are taken from one language and used in another, modifications to suit familiarity are the norm. Thus, wu became “wood” to account for the animal’s habitat and chak became “chuck,” perhaps as an onomatopoeia for the clucking noises that it made. The folk-etymology word woodchuck was the result. The reversal of the two syllables led to the tongue twister “how much wood could a woodchuck chuck if a woodchuck could chuck wood.” It was never clear what chucking wood was supposed to mean, but it suggests gnawing.

Groundhogs/woodchucks are in the Order Rodentia and the Family Sciuridae and are therefore closely related to squirrels and chipmunks, collectively the sciurids. The rodents are the largest group of mammals, comprising roughly 50 percent of all species, closer to 70 percent if based on the number of individual animals due to their geometric population growth and proliferation. Like those of all rodents, groundhog incisors grow at a rate of several millimeters a week throughout their lives (less during hibernation), which promotes and necessitates frequent gnawing of hard objects. [2] While woodchucks may not chuck wood the way beavers do, they readily gnaw on it. If there is nothing available to grind the teeth, malocclusion can proceed with potentially fatal results. Woodchucks are herbivores, as are most rodents; foraging for food is the primary daily activity. While they favor grasses and herbs, they also regularly eat the leaves and twigs of dogwood, black cherry, and sassafras trees. Groundhogs are synanthropes, thriving in habitats planted and maintained in support of human enterprise. They are notorious for damaging consumption of farm crops such as corn, vegetables, and fruit trees, eating over a pound a day on average to maintain a body weight of 10 pounds. [3]

Groundhogs have strong, clawed forelimbs to dig elaborate dens consisting of an underground tunnel system with over 45 feet of tunnels extending to a depth of 5 feet. The effort necessary to excavate a maze of interconnected tunnels is near herculean, transporting about 100 cubic feet of soil weighing more than three tons. The tunneling process almost always includes cutting through plant and tree roots, providing the tooth grinding necessary for survival. The den is accessed by a number of entrances, one of which is a plunge hole that extends vertically to the main tunnel for rapid ingress to escape predation. Occupied dens have a characteristic pile of fresh dirt at the entrances as a result of frequent cleaning. The den is arranged with a special chamber for excrement and a chamber for sleeping/hibernation that is a cozy 15-inch-diameter padded nest. The dens are both a boon and a bane as far as humans are concerned. Their aeration and fecal fertilization of the subsoil transforms it into topsoil, estimated by the state of New York to amount to 1.6 million tons per year. On the other hand, the burrows can damage building foundations and are a hazard to horses, which have been known to break a leg on penetrating a hidden tunnel. [4]

Groundhogs have traditionally been characterized as solitary, agonistic animals, meeting only for the conjugal act necessary for survival of the species. Mating occurs soon after emergence from hibernation in early spring, the males on occasion fighting for the rights to reproductive activities with local females where geographic ranges overlap. The pugilistic ritual brings out the full range of noises that make up the vocabulary of the animal: barking, squealing, chattering, and whistling; the name whistle pig is attributable to the cacophony. Female woodchucks have about three to five young called kits, which they raise for the most part on their own. The kits are naked, blind, and helpless and do not even open their eyes until the fourth week. At six weeks, they are expelled from the den and forced to disperse. Not many survive the first summer. The widely held belief that groundhogs are loners has been challenged by field studies. Recent research with modern radio tracking equipment has established that some if not most groundhogs belong to small groups consisting of one male and two or more kin groups of females, each comprising an adult and a juvenile from the previous mating. “Interactions within the kin group and with the adult male were relatively frequent and generally amicable.” [5] Or maybe groundhogs are evolving so that the genetic traits that foster cooperation in raising kits result in increased survival of those who practice it.

Groundhogs are true hibernators in that they enter a state of torpor over extended periods during the colder months of winter. Hibernation is an evolutionary trait necessary and sufficient for survival (of the fitter) during periods when there is limited food available. It was most likely an adaptive genetic mutation that occurred soon after animals emerged from the oceans, where food is floating or swimming around at all times, to face the challenges of seasonal terrestrial food availability. According to this theory, hibernation emerged during the transition from amphibians to reptiles and was retained in the mammalian diaspora during the Eocene Epoch. Human mammals would then have retained its genes, making the study of groundhog hibernation relevant to human treatments involving methods to slow metabolism. During torpor, groundhog body temperature drops almost fifty degrees, from 95 °F to 46 °F, and heart rate slows from 100 to 15 beats per minute. In the mid-Atlantic, groundhog hibernation begins in October and does not end until March or early April, lasting about 100 days. Research over the last twenty years has revealed that groundhogs do not stay in the lower metabolic, energy-preserving state continuously, but rather reheat periodically to arouse and move about. It is hypothesized that arousal cycles may be needed to limit the physiological harm caused by long-term shutdowns and contribute to readiness for spring mating. Arousals occur throughout winter, becoming more extensive toward spring, when they may include short forays above ground, where the animals can be spotted by superstitious humans and named Punxsutawney Phil. [6]

Groundhog Day (February 2) is based on sound practical science even if its modern interpretation is fraught with the holiday hype of the social media age. When growing food became the norm during the Neolithic Age, knowing when to plant in spring for the fall harvest was a matter of life and death. The decision is essentially the same as that made by a hibernating animal, which must decide based on environmental clues that it is safe to wake up and expend energy in search of food (and a mate). So looking for a hibernating animal out and about would provide a reliable prediction of the last frost and signal the start of preparatory measures to plow the fallow fields and sow the seeds of spring. Where and how this started is not known, but the Romans purportedly celebrated hedgehog day in a similar manner, the indigenous hedgehog providing the shadowy omen. This practice spread across and was retained in medieval Europe. Since there are no hedgehogs in the New World, the colonists who followed the Old World predictive prescription eventually settled on groundhogs. While there are other animals that hibernate, including bears, skunks, and snakes, the groundhog was common, easy to spot, and benign.

February 2 has a celestial significance that was important to early humans governed by the seasons as measured by the movement of the sun, the moon, and the visible stars. The Celtic tradition, which was incorporated into the cultures that succeeded it in Britain and Ireland, is notable. The winter and summer solstices, when the sun stood still, and the spring and fall equinoxes, with equal night and day, were evident by careful observation. To provide for some transition between these four “quarter” points, the day midway between two of them was known as a “cross-quarter” day. February 2, Groundhog Day, is the cross-quarter point between the winter solstice and the spring equinox. In the Celtic tradition, it was called Imbolc, meaning lamb’s milk. A cloudy day was considered a harbinger of warm spring rains to prepare the ground for planting. Imbolc was symbolized by Brigantia, the goddess of light. When the Christian faith penetrated the Celtic lands, the holiday became Candlemas, when the candles of the church were blessed in celebration of the presentation of the Christ Child at the temple in Jerusalem. The other three cross-quarter points are May 1, Beltane, generally the rite of spring and now May Day; August 1, Lammas, from “loaf mass,” to celebrate the wheat harvest; and October 31, Samhain, meaning “summer’s end” and the end of the old year, a time of the spirits of the dead. This became All Hallows’ Eve, now Halloween, returning to religiosity on All Saints’ Day on November 1. [7]

Hoary Marmot on Highline Trail in Glacier Park

The groundhog is the most solitary of the marmots, which are large ground squirrels that live in burrows and subsist on vegetative matter that can include grasses, berries, lichens, mosses, roots, and flowers. The marmot appellation is more commonly applied to the species that live in mountainous areas, such as the Hoary Marmot (M. caligata) of the North American northwest (right). The Yellow-bellied Marmot (M. flaviventris) is also indigenous to the northwest and is noted for being a host for the tick that carries Rocky Mountain spotted fever. The Alpine Marmot (M. marmota) of Europe is thought by some historians to be the primary carrier of the Bubonic Plague, otherwise attributed to rats, which are also rodents. [8] It is not all bad: groundhogs are the best non-human models for studying Hepatitis B, since they suffer from a similar ailment, and are also useful in studies of obesity, metabolism, and endocrinology. [9]

References:

1. Benton, H., publisher, Webster’s Third New International Dictionary of the English Language Unabridged, Encyclopaedia Britannica, Inc., Chicago, Illinois, 1971, p 2630.

2. Wood, A. “Rodentia” Encyclopaedia Britannica, Macropaedia, William and Helen Benton Publishers, University of Chicago, 1974, Volume 15, pp 969-980.

3. Light, J. University of Michigan Museum of Zoology Animal Diversity Web. https://animaldiversity.org/accounts/Marmota_monax/

4. Kerwin, K. and Maslo, B. Ecology and Management of the Groundhog (Marmota monax)  Rutgers School of Environmental and Biological Sciences    https://njaes.rutgers.edu/e361/ 

5. Meier, P. “Social organization of woodchucks (Marmota monax)” Behavioral Ecology and Sociobiology, Volume 31, Number 6, December 1, 1992, pp 393–400.

6. Zervanos, S. “Professor sheds light on groundhog’s shadowy behavior” Penn State University Newsletter, January 2014.

https://berks.psu.edu/story/2398/2014/01/23/professor-sheds-light-groundhogs-shadowy-behavior

7. Rothovius, A. “Ancient Celtic Calendar: Quarter Days and Cross-Quarter Days”            https://www.almanac.com/quarter-days-and-cross-quarter-days       

8. Whitaker, J. National Audubon Society Field Guide to North American Mammals, Alfred A. Knopf, New York, 1996, pp 438-445.

9. Kerwin and Maslo, op. cit.

Needle Ice

Needle ice extends upward into the frigid air at night at the rate of about a centimeter a day.

Common Name: Needle Ice, Ice flowers, Frost flowers, Ice fringes, Ice filaments, Rabbit ice, Ice castles, Ice leaf – The various descriptive terms are applied to an ice formation depending on the configuration of its components that can range from narrow needles to blocky castles.

Scientific Name: Segregated Periglacial Ice – Ice that forms in areas subject to intense freeze-thaw conditions (periglacial) and that extends outward (segregates) from a frozen substrate, which may be ground soil or a plant stem. The name Crystallofolia has been proposed as a Latinized version of ice flower.

Potpourri: Hiking in winter is a challenge as air temperature often drops below the freezing point of water. Water is wet and sometimes slick, but ice is slipperier and potentially dangerous. Mountainous regions are particularly susceptible to icy conditions for two reasons. The first is that ground water accumulates in the large volume of elevated terrain much more than in level areas. It is drawn by gravity downslope, forming rivulets in ravines that combine to become the headwaters of rivers flowing eventually to the sea. Freezing is also a matter of height, since temperature decreases between 3 and 5 degrees Fahrenheit (depending on how dry the air is) with every 1,000 feet of elevation gain (roughly 5.5 to 9 degrees Celsius per kilometer). Sometimes physics is not intuitive. Lower temperatures occur when hiking upward in elevation because there is less atmosphere above and therefore less pressure; air expands as the pressure drops, and the energy spent in expansion slows the movement of its molecules (mostly nitrogen and oxygen). Since temperature is a measure of molecular movement, it is lower when going higher. More elevation, more ice. Because of this effect, a gently meandering downward trail can turn into an icy toboggan run without a sled. Ice can be beautiful just as it is oftentimes treacherous. Under certain conditions, it forms ice sculptures with a variety of shapes and sizes. The most common form is needle ice.
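
The conversion between the two sets of units is plain arithmetic; a short sketch, where the 3-5 °F range comes from the text and the conversion factors are exact:

```python
# Lapse rate unit conversion: degrees F per 1,000 ft of elevation gain
# to degrees C per km. 5/9 converts F-degrees to C-degrees; 1 km is
# about 3.281 thousand feet.

KFT_PER_KM = 1 / 0.3048  # thousand feet per kilometer (~3.281)

def f_per_kft_to_c_per_km(lapse_f: float) -> float:
    """Convert a lapse rate in F per 1,000 ft to C per km."""
    return lapse_f * (5 / 9) * KFT_PER_KM

for lapse in (3.0, 5.0):
    print(f"{lapse} F/1000 ft = {f_per_kft_to_c_per_km(lapse):.1f} C/km")
# 3.0 F/1000 ft = 5.5 C/km; 5.0 F/1000 ft = 9.1 C/km
```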

The formation of needle ice structures is a well-recognized phenomenon in areas with the necessary and sufficient environmental conditions; it is called kammeis in Germany, pipkrake in Sweden, and shimobashira in Japan. The German name kammeis translates to “comb ice,” as the structure suggests the teeth of a hair comb. It occurs on sloped regions to the extent that a special name, kammeissolifluktion, is given to the process of movement of soil down the face of a slope due to comb ice. The Swedish name pipkrake is used largely in reference to sub-arctic needle ice. Pipkrake formation results in frost creep, one of the primary geomorphologic processes associated with the shift of temperature across the freezing point of water. Frost creep occurs in permafrost regions due to the action of the individual pipkraken (needles) that rise beneath individual sediment particles. The net movement of soil due to needle ice/pipkrake is up to one meter per year; laboratory demonstrations have shown that pipkrake can lift ten-pound rocks. The Japanese word for needle ice, shimobashira, translates as “columns of frost.”

Needle ice can be defined as “the accumulation of ice crystal growths in the direction of heat loss at, or directly beneath, the ground surface.” There are some complexities in this definition that relate to thermodynamics, the branch of physics that deals with the relationship between heat and energy. [1] However, the mechanism of extending ice can be understood from observing the conditions under which it occurs. The fundamental requirement is a diurnal freeze-thaw cycle, which is nothing more than a 24-hour period during which freezing occurs at night followed by thawing with the radiant heat of the sun starting at dawn’s early light. In mountains, where this will occur depends on elevation, due to its effect on temperature, and on the degree to which the sun is shaded by adjacent slopes. [2] Soil composition is also important in channeling the extending ice crystals into parallel columns. Soil is classified according to the relative amounts of three basic particle sizes that arise depending on the degree of erosion of weathered rocks. The largest particles are sand, ranging in diameter from 0.05 mm to 2 mm. Silt is smaller, down to about 0.002 mm. Clay particles are an order of magnitude smaller still, in the micron range, imparting a slippery feel to soil. A soil that has an even mix of sand, silt, and clay is called loam. Needle ice is most prevalent in soils that are made up of small sand particles with about 10 percent silt or clay. [3] It may be concluded that needle ice forms in columns separated by soil particles as water is pushed upward into the frigid air.

The thermodynamics of water is the essence of meteorology and oceanography (and therefore weather and climate) at the macro scale, just as it is of needle ice at the micro scale. Radiant heat from the sun in the form of photons is the font of all energy. Plants use sunlight energy to produce carbohydrates and exhale oxygen. Oxidation of carbohydrates in the mitochondria of all cells is the energy of growth and movement. With a higher intensity along equatorial latitudes, the sun’s radiant photons interact with either solid ground, causing concrete hot spots in cities, or water, mostly ocean, causing evaporation. Water molecules thus vaporized rise from the oceans and cool as they travel skyward to condense as clouds. The energy of evaporation, called the latent heat of evaporation, is returned when water vapor becomes liquid, falling as rain, snow, and sleet. This returned energy is what powers the weather, manifest in the extremes with thunderbolts of lightning and tornado whirlwinds. The rotation of the earth swirls the rising tropical vaporous clouds as they move away from the equator toward the poles to create weather. At the other end of the temperature spectrum is the latent heat of fusion, the amount of energy needed to melt solid ice to yield liquid water. Since it takes energy to melt ice, energy must be released when ice forms. This is what is meant by the definitive statement that needle ice grows in the direction of heat loss. Energy released by liquid water freezing is what forms the vertical needle column and moves it upward. [4]
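
The quantities involved are worth working out: the latent heat of fusion of water is about 334 joules per gram, which dwarfs its specific heat. A minimal sketch using these standard constants:

```python
# Heat released at the freezing front. Freezing one gram of water releases
# 334 J, enough to warm the same gram of liquid water by 80 degrees C.
# Both constants are standard textbook values for pure water.

LATENT_FUSION = 334.0   # J/g released when water freezes
SPECIFIC_HEAT = 4.18    # J/(g*C) for liquid water

grams_frozen = 1.0
heat_released = grams_frozen * LATENT_FUSION
equivalent_warming = heat_released / (grams_frozen * SPECIFIC_HEAT)
print(f"{heat_released:.0f} J released, equal to warming that water {equivalent_warming:.0f} C")
# 334 J released, equal to warming that water 80 C
```

All of that heat must be conducted away through the growing needle into the cold night air, which is why the crystals extend in the direction of heat loss.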

Each water molecule is attracted to four adjacent water molecules with hydrogen or polar bonds.

The growth of needle ice is also affected by the increase in volume that occurs when liquid water solidifies. Solid water ice is 9 percent larger in volume than the liquid water from which it arose. This very unusual behavior for a substance occurs due to the nature of the bonding between the two hydrogen atoms and one oxygen atom that make up the water molecule, the familiar H2O. The attractive bond between water molecules, called a hydrogen bond, results from polarity, the familiar positive (+) and negative (-) of electric battery terminals. Hydrogen bond sites occur due to the way water molecules are put together. Chemical bonds between atoms form by sharing electrons so as to achieve a stable number of electrons, the same as in the inert gases at the far right side of the periodic table, which react with almost nothing (inert is the adjective form of inertia, to remain at rest). Oxygen needs two extra electrons, which it shares with two hydrogens, each with a single electron. Oxygen bonded to hydrogen in water is like the inert gas neon in stability. The result of the covalent water bonds is the creation of a positive charge on the hydrogen atom side of the water molecule and a negative charge on the oxygen atom side. These are called dipoles due to having two poles, one positive and one negative. The hydrogen or dipole-to-dipole bond occurs because opposite charges attract each other with an electrostatic force. Each water molecule is hydrogen bonded to four adjacent water molecules. [5]

The weak electrostatic attraction of hydrogen bonds is what makes water fluid. It is also what makes liquid water more dense than frozen water. The freedom of liquid hydrogen bonds to attach to alternative and closer molecules draws them more tightly together. When crystallized as ice, molecules are rigidly set in space further apart, which is why ice occupies a larger volume than the liquid it formed from. Since ice is less dense than water, icebergs float and ponds freeze from the top down and not the bottom up. This fact is enormously important to life on earth. If ice sank, the oceans would fill with ice and only a thin surface layer would be melted by the sun. Earth would essentially be an ice-covered ball. Further, since life (apparently) arose in aqueous (watery) saline (salty) conditions that we call oceans, there would almost certainly be no life on a frozen earth. When organisms eventually ventured out of the oceans onto dry land, they could continue to operate only by taking the ocean with them, which is why humans and all other mammals are about 60 percent salt water. The hydrogen bond of water molecules in an aqueous environment is what makes life work. “The structures of the molecules on which life is based, proteins, nucleic acids, lipid membranes, and complex carbohydrates result directly from their interactions with their aqueous environment. The combination of solvent properties responsible for the intramolecular and intermolecular associations of these substances is peculiar to water (italics in original).” [6] Something to think about when you look down at the needle ice on the trail.
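
The 9 percent expansion fixes the density of ice and, by Archimedes’ principle, how deep an iceberg floats. A quick check with pure-water densities (seawater, being denser, would shift the result slightly):

```python
# Density of ice from the 9% expansion on freezing, and the submerged
# fraction of floating ice by Archimedes' principle (density ratio).
# Pure-water values; seawater is slightly denser, floating ice higher.

RHO_WATER = 1.000  # g/cm^3, liquid water
EXPANSION = 1.09   # ice volume relative to the water it froze from

rho_ice = RHO_WATER / EXPANSION           # ~0.917 g/cm^3
submerged_fraction = rho_ice / RHO_WATER  # fraction of an iceberg underwater
print(f"ice density {rho_ice:.3f} g/cm^3, {submerged_fraction:.0%} submerged")
# ice density 0.917 g/cm^3, 92% submerged: the proverbial tip of the iceberg
```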

Needle ice causes damage to plants by pushing up the soil around the roots.

Aside from aesthetics, needle ice formation is of scientific interest due to the plant damage that is often its result. In order to establish the key variables in the formation and growth of needle ice, a montane area near Vancouver, British Columbia was instrumented and monitored in the late 1960s. Weather conditions consisted of a prolonged anticyclonic period with clear, cold, and dry air. An anticyclone is the clockwise (CW) circulation (in the northern hemisphere) of air around an area of high (H) pressure noted for cloudless blue brilliance. Cyclones are the opposite, turning counter-clockwise (CCW) around low (L) pressure areas which, under extreme conditions, result in hurricanes. Over the course of eleven sequential 24-hour noon-to-noon periods, parametric data were collected to evaluate the effects of temperature and time on needle formation, and mean values were calculated from the eleven data sets. Starting with the nucleation of ice at the bottom of the needle 9.9 hours after noon (about 2200), the nominal ice needle grew for 7.3 hours with an elongation of 9 millimeters (about 1 centimeter or 1/3 inch). As the sun rose the next day, the maximum surface temperature reached 12.8 °C (55 °F) at about 1330, resulting in some melting and, more importantly, evaporation of the soil water into the desiccated air. Depending on the balance between freezing at night and melting during the day, needle ice formation is either homogeneous (top photo), with continuous upward growth, or heterogeneous (right photo), with repetitive cycles of soil upheaval and subsidence, the latter resulting in greater damage. [7]

Ice flowers result from longitudinal splits on the stem of some plants.

The formation of ice structures that resemble flowers or ribbons is due to a freezing phenomenon closely related to that which causes needle ice. The fundamental difference is that ice flowers exude from the stems of certain plants whereas needle ice exudes from ground water without any botanical conduit. The geometry of certain plants and rotting wood is such that a passage for supercooled water is created. When the temperature drops, longitudinal cracks form along the axis of the stem and allow the liquid to ooze out into sub-zero air to be almost instantaneously frozen into a ribbon-like crispation. The overpressure that pushes the extruded ribbon out is thought to be the result of the gradual freezing of the water in the stem. [8] Ice flower formation is often erroneously attributed to frozen sap, which may contribute to the cracking of the stem wall. However, there is not enough sap in the plant to create the “remarkable accumulations of voluminous friable masses of semi-pellucid ice around the footstalks of the Pluchea (fleabane) which grow along the road-side ditches” as described by Dr. LeConte of the University of Georgia in 1850. The plants that exhibit ice flower formation have been identified by observation, as the relevant dimensions of the plant structures that produce the phenomenon have not yet been determined. The list, largely anecdotal, includes dittany (Cunila origanoides), frostweed (Helianthemum canadense), yellow ironweed (Verbesina alternifolia), and white crownbeard (Verbesina virginica). [9]

Ice portal on AT in Shenandoah National Park near Matthews Arm

Spring is for flowers, summer is for fauna, fall is for fruit and fungi. Winter is for ice. Solid water frozen into structures sculpted by the physical properties of soil and air is quest-worthy. While ice needles may be admired for incongruous symmetry and ice flowers marveled at for their Möbius-strip curvature, wind and water in frigid air work equal wonder. The trinity of water as vapor, liquid, and ice is as profound and deeply rooted in relevance to humans as the spiritual trinity that guides the lives of many. Cathedral portals through the iced boughs and branches offer solitude and purpose just as those of churches do. Here one may find the animist gods of the aborigine, the lares and penates of the wooded home, and the solitude of the soul, reduced to the raw elements that are ultimately its origin.

References:

1. Grab, S. “Needle-Ice”. In Goudie, Andrew (ed.). Encyclopedia of Geomorphology. Routledge. p. 709.

2. Pidwirny, M.: Fundamentals of Physical Geography, 2nd ed., section 10(ag), Periglacial Processes and Landforms

3. Nardi, J. Life in the Soil, University of Chicago Press, Chicago, 2007, pp 1-6.

4. Petrucci, R. General Chemistry, 4th Edition, Macmillan Publishing Company, New York, 1985, pp 140-151, 285-298.

5. Ibid. pp 305-307.

6. Voet, D. and Voet, J. Biochemistry, John Wiley and Sons, New York, 1990, pp 29-34.

7. Outcalt, S. “A Study of Time Dependence During Serial Needle Ice Events” Department of Environmental Sciences, University of Virginia, Charlottesville, Virginia, 23 April 1970. Water Resources Journal, Volume 7, pp 394-400.

8. Carter, J. “Flowers and Ribbons of Ice” American Scientist. Sep-Oct 2013, Volume 101, Number 5. p. 360.

9. Carter, J.  “Needle Ice” Geography-Geology Department Illinois State University, Normal IL https://www.jrcarter.net/ice/needle/

Deer Truffle

Deer truffles look like small clods of dirt; sectioning reveals spores. Note acorns for size.

Common Name: Deer Truffle, Deer balls, Hart’s balls, Warty deer truffle, Fungus cervinus (cervus is Latin for deer), Lycoperdon nuts (Lycoperdon is a genus of puffball fungi) –  Truffle is a French variant of the Latin word tuber meaning lump or knob. Both truffles and tubers (like potatoes) are generally globular in shape. The association with deer is attributed to finding them in locations frequented by stags during mating season. This gave rise to the belief that truffles are aphrodisiacs.

Scientific Name: Elaphomyces granulatus – The generic name is a literal translation of Greek, deer (elaphos) fungus (mykes). The Latin granulum is used directly in English as granule, referring here to the protuberances on an otherwise smooth surface. A loose translation of the scientific name would be “warty deer fungus,” one of the common names.

Potpourri: The deer truffle genus Elaphomyces is one of the most important mycorrhizal genera in temperate and subarctic forests, establishing and maintaining the ecosystem balance between plants and fungi. Deer truffles are also an important source of food for small mammals like mice and voles on every continent except Antarctica. They are equally favored by their namesake, notably the red, roe, and fallow deer species of Europe. Of the 49 species of deer truffle so far recognized worldwide, 20 are European. E. granulatus is one of the most important North American species. A related species, E. muricatus, has been used in Mexico, both as “a stimulant, for remaining young and treating serious wounds” and “in shamanic practices in association with psychoactive Psilocybe species.” [1] Limited research in the 21st century has revealed that E. granulatus has enzymes that are known to reduce inflammation in addition to a variety of anti-oxidants with potential medicinal applications for humans.

Deer truffles are among the most common of underground fungi globally, and equally one of the least documented. The lack of scientific research on deer truffles is due partly to their sub rosa, subterranean obscurity and partly to ignorance of their ecological importance. Even when uncovered, they look like lumpy balls of dirt. However, unlike the more famous black and white truffles of Europe, they are neither redolent with beguiling aromas nor palatable. Taste testers report a main body like “thick cream that tastes like nothing,” a rind that is “rubbery but can be chewed quickly,” and “a taste that goes in the direction of earthy forest floor.” They are, nonetheless, relished by rodents. [2]

Truffle is defined as “any of an order (Tuberales) of fleshy, edible, potato-shaped ascomycetous fungi that grow underground.” However, truffle is broadly applied to any hypogeous (below ground) fungus that is shaped like a tuber, which is the thickened part of an underground plant stem like those of the yam, cassava, and potato. According to historical etymology, any roundish globule dug out of the ground was a tuber and/or truffle. [3] The distinction between the plant and fungi kingdoms was not established until the 20th century, so it would have made no difference whether the earthen globule was a plant tuber or a fungal truffle. The lumpiness meaning is inherent in the chocolate truffle, a confection shaped like a truffle but having no fungal ingredients. The terms edible and ascomycetous in the definition require some elucidation.

Edible does not necessarily mean by humans, but merely that it is or can be eaten for nutrition by an animal. Being edible is also a matter of importance, as fungal truffles reproduce by spores that must be transported for propagation. Above-ground or epigeous fungi/mushrooms accomplish this with airborne wind dispersion, a mechanism not available to truffles buried several centimeters deep. Truffles must usually be consumed by an animal to transport the spores to new fertile ground and must thus be at least palatable. The need to attract animals is key to the inimitable smell and taste of certain species of truffles. It is probable that some truffles are unearthed and broken open without being eaten, releasing spores, so consumption is not absolutely mandatory although certainly the norm. Insects and worms that tend to feed on fungi may also play a role. Edible is a broad term in this context.

The term ascomycetous is a bit more complicated. The vast majority of fungi typically called mushrooms are in the subkingdom Dikarya, which means “two nuclei” in Greek. Dikaryotic cells replicate as they grow with the division of one nucleus from each “parent,” so that every new cell has two nuclei, until reproductive spores are created through meiosis to pass the combined DNA to future generations. The way in which spores are produced divides Dikarya into two phyla, Basidiomycota and Ascomycota. Basidiomycetes produce four spores at the end of a club-shaped structure called a basidium. Most of the fungi that look like a mushroom, with a cap or pileus at the top of a stem or stipe, fall into this category, in addition to the various bracket fungi, puffballs, and stinkhorns. Ascomycetes produce eight spores inside a sac-like structure called an ascus, Greek for wineskin or bladder. The asci are typically arrayed on a concave surface, giving rise to the more common name “cup fungi” for ascomycetes. A majority of described fungi, including the yeasts, most lichens, and, notably, the truffles, are ascomycetes. [4] False truffles are basidiomycetes that look like truffles―ball-shaped structures that grow below ground.

Both truffles and false truffles followed different ancestral trajectories to become nearly identical in size, shape, and disposition due to similar environmental factors, a process called convergent evolution. Richard Dawkins offers that this is because “however many ways there may be of being alive, it is certain that there are vastly more ways of being dead.” Organisms tend to come up with similar ways to survive in the unforgiving environments of nature. Life above ground can be dangerous due to predatory and environmental challenges, making it advantageous to seek refuge in the soil; many animals also do this. It is hypothesized that truffles evolved from cup fungi and false truffles evolved from mushrooms like agarics and boletes as a matter of random mutation resulting in improved survival. However, it could equally be the other way around, i.e. fungi may have originally been underground “truffles” that evolved mushroom stems and gills for wind dispersion of spores. DNA sequencing of the world-renowned Périgord black truffle corroborated the estimate that Pezizomycetes, the largest group of Ascomycota, which includes the truffles, separated from other fungal lineages 450 million years ago, just as the first plants advanced onto land from the sea. [5]

Deer truffles from Germany. Note root-like attachment to the mycelium.

Most fungi start as a root-like structure called a hypha emanating from one spore and joining up with another hypha from another spore to form a mycelium, the tangled mass of hyphae that defines the fungus. Since no species can survive without reproducing at some point, the mycelium must somehow send spores somewhere to start anew. Just as plants have devised ingenious ways to spread seeds, so have fungi to spread spores. Mushrooms start as underground bodies called primordia that are formed by the mycelium. They erupt upward on a stem into open air when the time is right to expose the spore-bearing gill or pore surface to transporting winds. In the case of truffles and false truffles, the spores are contained in the tuber-like body that is attached to and grows from the mycelium but remains underground. The evolutionary pathway for truffles and false truffles was to attract animals with enticing smells, not all that different from plants producing flowers with complex chemical scents to attract pollinators. Notably, truffle smell signaling starts only when the spores are fully mature and ready for transport. Animals drawn by the smell to eat them carry truffle spores unwittingly wherever and whenever they “go.” [6]

Animals attracted to truffles and false truffles are globally diverse, including deer, bears, and rabbits in the Northern Hemisphere and armadillos, baboons, and wallabies in the Southern Hemisphere. [7] Underground fungi offer a food source that is relatively independent of surface conditions, making them especially important to cohabiting animals. While most if not all forest-dwelling mammals consume truffles on occasion, it is the burrowing squirrels and voles that are best equipped to use them as a major food source. With keen noses and claws suited to digging up buried acorns, squirrels can hardly help being truffle aficionados. One well-studied example is the California red-backed vole of the Pacific Northwest, which subsists almost entirely on truffles. A study in the Oregon Coast Range involving vole capture and evisceration found that truffles made up 85 percent of consumed food; the balance was mostly lichens, which are also predominantly fungal. The northern flying squirrel, with a range from Alaska to North Carolina, is a nationwide spreader of truffle spores. [8] The extent of the role that truffles play in forest ecology, both as providers of key soil nutrients like phosphorus and nitrogen to trees and as food for foragers, is not well studied and therefore mostly unknown. The nutrient-sharing relationship is called mycorrhizal (meaning fungus root in Greek) and was first described by a biologist named Albert Frank in 1885 while employed by the King of Prussia in an attempt to cultivate truffles. [9] Since there is no above-ground evidence and animals need to be literally caught in the act, data are mostly anecdotal. However, one can gather some insight into the range, diversity, and importance of truffles from the aptly named “desert truffles.”

A desert is a dry, barren place that supports little plant or animal life. And yet truffles thrive across North Africa and the Middle East all the way to China. Eking out a tenuous existence with the shrubby plants with which they are mycorrhizal, they are surprisingly ubiquitous. They are sold in many local markets and consumed as an important food source over a vast region, noted for a taste characterized as “delicate, not pungent.” They are reportedly relatively easy to find, as they grow close to the surface and make the ground harder, a property that can be discerned with experience by rubbing a bare toe over the area. [10] As Mesopotamia was the cradle of western civilization, the long history of truffles there as both food and medicine is telling. Truffles have historically been a substitute for meat throughout the Arabian peninsula. Truffles (kama’ah in Arabic) appear in the Koran as preventive medicine, used as promoters of longevity and good health much as many other fungi are in Asia. This is a measure of their reputed anti-oxidant, anti-inflammatory, and immune-modulating activity. [11] The cultural importance and extensive range of desert truffles across a broad swath of Eurasia is a strong indicator that they are key components of the global plant-fungi ecological partnership. While truffles are surely common and keystone in many regions, almost all of what is known about the nature and nurture of truffles derives from detailed study of the few species granted the rubric “true truffles.”

True truffles are the epitome of European gastronomy. The black truffle of the Périgord region in southern France (Tuber melanosporum) is surpassed only by the white truffle of the Piedmont region of northern Italy (Tuber magnatum) in desirability and exorbitant cost. The reason for the difference is supply and demand, the universal economic law. White truffles are rarer because, unlike their cultivated French cousins, they grow only under naturally appropriate conditions and require specialized skills to locate. Consequently, in a local Italian trattoria, one can purchase risotto with black truffles for about 20 euros, but risotto with white truffles will run over five times as much. [12] The reason for truffle demand is the redolence they impart to food, beguiling gourmands in their search for epicurean nirvana. It is telling that truffles were originally hunted with domesticated female pigs attracted by their aroma, which includes the steroid alpha-androstenol, also found in the saliva (and breath) of rutting boars. The same chemical is found in the underarm emanations of men and in the urine of women, and, while its role in human sexuality has not been proven, it has been demonstrated: men rating photographs of (clothed) women for sexual attractiveness gave higher marks when smelling alpha-androstenol. [13] Since smell is intertwined with taste in the neural networks of the brain, the irresistible allure of truffles to humans probably has deeper meaning, possibly including subliminal sexual arousal. It is no wonder that they are considered aphrodisiacs. Perhaps at least mentally they are.

It is almost certain that the boars that have roamed wild across Europe for millennia were the coevolutionary partners of white and black truffles, spreading their spores far and wide. It is probable that humans first became aware of truffles in association with hunting wild boars. Thus began the long partnership between domesticated pigs and people in the pursuit of pleasure. Dogs have now mostly replaced pigs as the truffle hunter’s sensory companion. Heavy, sedentary pigs required carting to truffle forest habitats and had to be forcibly prevented from eating their quarry; many a truffle hunter lost a finger to an overzealous pig. Dogs are not sexually attracted to truffles and must therefore receive olfactory training, much like the drug-sniffing dogs of the DEA. This takes a great deal of time and effort, which must of necessity include the use of valuable, short-lived truffles. Trained truffle dogs are dear, commanding prices of over 15,000 euros, though they are rarely sold. They can traverse and search woodlands with ease and are not overwhelmed by the lust for consumption. In fact, most truffle dogs don’t even like the taste, though some do; dogs have different taste preferences, as do their best friends. But not pigs, apparently. [14] The wild boar fungus story has a recently discovered twist. Of 48 boars killed in hunts in Bavaria, Germany, 88 percent had radioactive cesium levels (from Chernobyl) exceeding safety standards. It is considered likely that fungi, which tend to bioaccumulate heavy metals, were the source, truffles especially. [15]

References:

1. Paz, A. et al . “The genus Elaphomyces (Ascomycota, Eurotiales): a ribosomal DNA-based phylogeny and revised systematics of European ‘deer truffles'”. Persoonia. 30 June 2017. Volume 38 Number 1 pp 197–239.

2. “Deer Truffles – biology, ecology, distribution and occurrence of Elaphomyces or False truffle” https://www.umweltanalysen.com/en/elaphomyces-deer-truffles/  

3. Neufeldt, V. ed. Webster’s New World Dictionary of American English, Third College Edition, Simon and Schuster, New York, 1988, pp 1435, 1438.

4. Lincoff, G. National Audubon Society Field Guide to North American Mushrooms, Alfred A. Knopf, New York, 1981, pp 323, 377.

5. Martin, F. et al “Périgord black truffle genome uncovers evolutionary origins and mechanisms of symbiosis” Nature, 28 March 2010, Volume 464 pp 1033-1038.  https://www.nature.com/articles/nature08867 

6. Arora, D. Mushrooms Demystified, Second Edition, Ten Speed Press, Berkeley, California, 1986 pp 739-741, 841-865.

7. Trappe, J. and Claridge, A. “The Hidden Life of Truffles” Scientific American, April 2010.

8. Stephenson, S. The Kingdom Fungi, Timber Press, Portland, Oregon, 2010 pp 200-205.

9. Frank, A.B. “Über die auf Wurzelsymbiose beruhende Ernährung gewisser Bäume durch unterirdische Pilze” [On the nourishing, via root symbiosis, of certain trees by underground fungi]. Berichte der Deutschen Botanischen Gesellschaft. 1885 Volume 3: pp 128–145.

10. Schaechter. E. In the Company of Mushrooms, Harvard University Press, Cambridge, Massachusetts, 1997, pp 161-167.

11. Khalifa, S. et al “Truffles: From Islamic culture to chemistry, pharmacology, and food trends in recent times”  Trends in Science and Food Technology, Volume 91, September 2019, pp 193-218. https://www.sciencedirect.com/science/article/abs/pii/S0924224418303406 

12. Goldhor, S. “Hunting the White Truffle” Fungi. Volume 8 Number 3, Fall 2015, pp 18-23.

13. Kendrick, B. The Fifth Kingdom, Third Edition, Focus Publishing, Newburyport, Massachusetts, 2000 pp 281-283.

14. Campbell, D. “Sketches from the Italian Truffle Hunt.” Fungi, Volume 11 Number 1, Spring 2018, pp 20-25.

15. Rains, M. “Germany’s radioactive boars are a bristly reminder of nuclear fallout” Science, 30 August 2023.

Raspberry

Black raspberries turn from red to black when fully ripe

Common Name: Raspberry, Black raspberry (photo above; note that black raspberries are initially red), Blackcap, Thimbleberry, Framboisier noir (Quebec) – The etymology of raspberry is uncertain. One hypothesis is that it is simply rasp in the sense of being rough or harsh, from the French rasper, to scrape. An alternative origin is raspis, a sweet red wine popular in Europe in the Middle Ages. Berry is from beri, a Germanic word for grape. The red raspberry is also known as the European red raspberry.

Scientific Name: Rubus occidentalis (black raspberry) and Rubus idaeus (red raspberry) – Rubus is Latin for bramble-bush and, by extension, blackberry. The species name occidentalis is from the Latin occidens, meaning to go down or set, a reference to the setting sun and hence to the western hemisphere; the black raspberry is indigenous to eastern North America, where it was first classified. The species name idaeus refers to Mount Ida in Asia Minor, where red raspberries originated.

Potpourri: The ubiquitous raspberry was indisputably one of the first plants to be recognized as a source of food by many animals, especially the naked apes that eventually evolved into Homo sapiens. The prominently colorful berries, raspberry red in Eurasia where they originated, stood out from the verdant foliage, a distinction unseen by other mammals lacking the red vision of primates. Raspberries were almost certainly spread globally by migrating birds, with new species arising in evolutionarily diverse habitats. In their current bramble form, raspberries resist consumption of their growing plant parts with conspicuous thorny outgrowths characteristic of their Rose Family taxonomy. Growing in dense thickets from root extensions called rhizomes, brambles like raspberry produce copious quantities of enticing berries to perpetuate their dominance in open, sunny areas. Black raspberries form impenetrable hedges along many trails, offering a succulent snack and an occasional prick to passing hikers.

Raspberries are aggregate fruits with prickles

Raspberries are not berries, and they do not have thorns. The fruit is an aggregate, and the sharp-pointed protuberance is a prickle. The use of berry for any small, roundish fruit is as fraught in common parlance as the distinction between fruits and vegetables. A fruit is “a ripened ovary and its contents together with any adjacent parts that may be fused to it.” Fruits are the seed carriers of propagation. Grains like barley, vegetables like peas, and nuts like acorns are fruits. The botanist’s berry is a fruit “in which the entire ovary ripens into a fleshy, often juicy and edible” whole, inclusive of tomatoes, eggplants, red peppers, and watermelons. A drupe is different, having a layered ovary that gives rise to a central stone or pit that encloses the seed, as in plums and peaches. Raspberries arise from flowers with multiple pistils (the central organs of a flower), each producing a small drupe, sometimes called a drupelet. The multiple small fruits cling together, separating as a single unit called an aggregate. Rasp-aggregate would be a more correct name, but hardly useful. Spines, thorns, and prickles are all sharp-pointed outgrowths from a plant surface that evolved to repel herbivores. Spines, like those on barberries, originated from leaves. Thorns, like those on Osage orange, arose from branches. Prickles, like those on raspberries, are the real stickers, emerging directly from stem tissues. [1]
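
The botanical classification above reduces to a small lookup table. The following Python sketch is purely illustrative; the categories and examples are those given in the text, not a standard botanical library:

```python
# Botanical fruit types as described in the text, with their examples.
FRUIT_TYPES = {
    "berry": ("entire ovary ripens into a fleshy, juicy whole",
              ["tomato", "eggplant", "red pepper", "watermelon"]),
    "drupe": ("layered ovary forms a central stone enclosing the seed",
              ["plum", "peach"]),
    "aggregate": ("multiple drupelets from one flower cling together",
                  ["raspberry"]),
}

for name, (definition, examples) in FRUIT_TYPES.items():
    print(f"{name}: {definition} (e.g., {', '.join(examples)})")
```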

Raspberries are classified in Rosaceae, the rose family, within the genus Rubus, known colloquially as the brambles. With about 3,000 species, the rose family is not that large compared to the 20,000 species of the orchid family and the 19,000 species of the composite family, which includes asters, daisies, and sunflowers. [2] However, the rose family is arguably the most renowned of all plant families from the human perspective. Its prominent floral and fruit products, which proliferate in the temperate zones of primary human habitation, are without equal in the kingdom Plantae. In addition to the many cultivars of roses that dominate the floral trade, roses are the sine qua non for spectacles like the annual Pasadena parade and its namesake bowl game and Kentucky’s “run for the roses,” the first of the three horseracing crowns. English wars have been named for them. The red rose Lancasters and white rose Yorks have nothing to do with the Lannisters and Starks, even though both were involved in throne games of a sort. However, the fruits of the rose family that dominate at the supermarket are its most enduring legacy. Life would be lessened absent apples, pears, cherries, plums, and the various bramble berries. The success of the rose family is a result of the evolution of a number of traits that promote reproduction and dispersion. Having a diversity of fruits with the color, shape, and taste that appeal to birds and mammals spreads seeds far and wide in a dollop of fertilizer. More important, however, is asexual reproduction: apomixis, by which seeds form without pollination, and the vegetative spread of brambles like raspberry by horizontal, leafless stem structures called rhizomes. [3]

Rose family fruits in general and raspberries in particular have spread far and wide, part of Darwin’s “endless forms most beautiful and most wonderful,” due to hybridization that results from a combination of seed dispersion and asexual apomixis. Apples abound in variety and raspberries are not far behind; there were already 41 varieties of raspberry in the United States in 1866. [4] Both apples and raspberries tend toward polyploidy, having multiples of the basic number of chromosomes, which can result in the reestablishment of sexuality to create new hybrids. This is further complicated in raspberries, which can start with any one of three different basic chromosome numbers (7, 8, or 9). This means that from their inception in Asia Minor, raspberries have spread across the globe in many hybrid forms, drawing the attention of the hominids doing the same thing. That they were well known by the time of the Roman Empire is well established. Pliny the Elder (aka Gaius Plinius Secundus), the noted Roman military leader and naturalist author, wrote in his magnum opus Naturalis Historia that the raspberry was “known to the Greeks as the Idæan bramble, from the place where it grows.” Mount Ida is in northwestern Turkey near the site of Troy, providing the species name idaeus of the red raspberry. It was even then regarded as a medicinal plant, for Pliny notes that “Its flower, mixed with honey, is employed as an ointment for sore eyes and erysipelas, and an infusion of it in water is used for diseases of the stomach.” [5] As raspberry seeds have been found at Roman forts on the British Isles, it is considered likely that the Romans spread the raspberry from its Asian origins throughout their vast empire into Europe and Africa. [6]
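
The polyploidy arithmetic is easy to illustrate. A minimal Python sketch, using the three base chromosome numbers given above and a few even ploidy levels chosen purely for illustration, shows how many distinct somatic chromosome counts hybridizing raspberries could present:

```python
# Base (haploid) chromosome numbers for raspberries, per the text.
base_numbers = [7, 8, 9]
# Even ploidy levels chosen for illustration (diploid through octoploid).
ploidy_levels = [2, 4, 6, 8]

counts = sorted({n * p for n in base_numbers for p in ploidy_levels})
print(counts)  # [14, 16, 18, 28, 32, 36, 42, 48, 54, 56, 64, 72]
```

A dozen distinct chromosome counts from just three base numbers hints at why the genus hybridizes so freely.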

Raspberries have served as a wellspring of both nutritious food and medicinal remedy for the millennial span of western civilization. They found their way into the various herbals that appeared in Europe in the late 16th century. John Gerard, calling it the Raspis, Hinde-berry, or Framboise (French), notes that “the floures (sic), the leaves, and the unripe fruit, being chewed, stay all manner of bleedings. They heal the eies (sic) that hang out.” The ripe fruit is described as sweet and “not unpleasant to be eaten.” [7] As the modern era erupted from the rediscovery of Greco-Roman writings in the Renaissance, the expansion of raspberries as one of the first fruits followed. By the 17th century, white and red cultivar raspberries were recognized in Great Britain that differed only in the color and taste of the fruit, the “white raspis a little more pleasant than the red.” Red wines were available at the “vintners made from the berries of Raspis that grow in colder countries.” The medicinal uses had also expanded, extending to the use of leaves “in gargles and other decoctions that are cooling and drying, but not fully to that effect,” whatever that means. A syrup made from the berries “is effectual to cool a hot stomach, helping to refresh and quicken up those that are overcome with faintness.” And of course the berries were eaten “to please the taste of the sick as well as the sound.” [8] As the consumer era took off in the middle of the last century, the raspberry became a mass market food and one of the myriad herbal remedies employed to assuage modern melancholia.

Raspberries are nutritious, contributing to a healthy diet. They offer one of the highest ratios of dietary fiber (6.5 grams per 100 grams of fruit) to energy provided (roughly 50 kilocalories per 100 grams). In addition, they are high in vitamins C and K and in the minerals calcium, magnesium, potassium, and iron. Raspberries also contain a unique set of phytochemicals, secondary substances not involved in basic plant metabolism, that are likely the basis for the many historic folk medicinal uses. Anthocyanins, which are what make berries (and fall leaves) red, are noted for their antioxidant and anti-inflammatory activity, deactivating the free radicals (molecules with unpaired electrons) that tend to disrupt cellular activity. In vivo animal studies have found that consuming raspberries resulted in “reduced blood pressure, improved lipid profiles, decreased atherosclerotic development, improved vascular function, stabilization of uncontrolled diabetic symptoms (e.g., glycemia), and improved functional recovery in brain injury models.” [9] It may be concluded that the use of raspberries in the treatment of a variety of ailments has at least some rational basis due to actual chemical interactions operating above and beyond the placebo effect.
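
As a back-of-envelope check on the fiber-to-energy claim, here is a short Python sketch using the figures quoted above; the energy value is the approximate USDA figure for raw raspberries:

```python
# Fiber density of raspberries, per 100 g of fresh fruit.
fiber_g = 6.5        # dietary fiber, from the text
energy_kcal = 52     # approximate USDA value for raw raspberries

print(f"{fiber_g / energy_kcal:.3f} g of fiber per kcal")  # ~0.125 g/kcal
```

By this measure, an eater gets roughly an eighth of a gram of fiber for every kilocalorie consumed, an unusually high ratio among fruits.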

Native American herbal remedies provide one of the best examples of genuine folk medicine unadulterated by marketing hucksterism. The Iroquois Confederacy of the northeast had many uses for raspberry leaves and roots, including treating bloody diarrhea, as an emetic, to remove bile, to treat children with whooping cough, and, perhaps with some hyperbole, as a “decoction taken by a hunter and his wife to prevent her from fooling around.” Raspberries were also important as food, especially in winter when dried fruits were combined with hominy. Further south, the Cherokee also used raspberry plants for digestive problems and as food, but in the form of pies and jellies suitable for the milder climate. On the more practical side, the prickly stems were used for scratching itchy, hard-to-reach places. In addition, raspberry was used for coughs, boils, and, most significantly for current usage, to treat postpartum pain. [10] Current herbal usage follows that of the Native Americans, with an emphasis on pregnancy issues.

The most prevalent use of raspberry over the last century has been during the last trimester of pregnancy to “relax the uterine muscles and facilitate birth.” [11] However, in Germany this use is proscribed “because of lack of scientific support of claimed activities as a uterine tonic.” [12] The widespread use of raspberry leaves in herbal preparations for pregnancy has resulted in some serious scientific assessment. Approximately 50 percent of all pregnant women use some form of herbal treatment during pregnancy, and the use of raspberry extracts as tea, tablets, or tincture ranges from 7 to 56 percent depending on the country. The claim made by the herbal industry is a “positive effect on childbirth through the induction of uterine contractions, acceleration of the cervical ripening, and shortening of childbirth.” No studies demonstrate that products derived from raspberries have a clear effect on the biochemical pathways of pregnancy. A recent review concludes that “the consumption of raspberry extracts could translate into decreased dynamics, or even the inhibition of the cervical ripening process, which could undoubtedly translate into a more tumultuous and traumatic childbirth course.” [13] It is increasingly clear in the medical community that the best way to stay healthy is to eat a balanced diet, exercise regularly, and avoid stress. Hiking along trails in the quiet of the forest and eating raspberries is a good place to start.

References:

1. Wilson, C. and Looms, W. Botany, 4th Edition, Holt, Rinehart and Winston, New York, 1967, pp 30-31, 285-304.

2. Niering, W. and Olmstead, N. National Audubon Society Field Guide to North American Wildflowers. Alfred A. Knopf, New York, 1998, p.354, 646, 746.

3. Cowan, R. “Rosales” Encyclopedia Britannica Macropedia, William and Helen Benton, Publishers, Chicago, 1972 Volume 15, pp 1150-1154.

4. Stuart, M. ed The Encyclopedia of Herbs and Herbalism, MacDonald and Company Publishers, London, 1987, p 255.

5. Pliny the Elder,  The Natural History – John Bostock, M.D., F.R.S. H.T. Riley, Esq., B.A. London. Taylor and Francis, Red Lion Court, Fleet Street. 1855. Book 16 Chapter 71 – https://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.02.0137:book=16:chapter=71&highlight=raspberry    

6. Burton-Freeman, B. et al “Red Raspberries and Their Bioactive Polyphenols: Cardiometabolic and Neuronal Health Links” Advanced Nutrition, Volume 7, Number 1 January 2016 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4717884/

7. Gerard, J. Herball – Or, Generall Historie of Plantes, John Norton, London, 1597. Pp 260-261

8. Parkinson, J. Paradisi in Sole, Paradisus Terrestris, 1629. Reprinted by Methuen & Company, London, 1904, pp 557-558. https://www.gutenberg.org/files/69425/69425-h/69425-h.htm#Page_557

9. Burton-Freeman et al, op. cit.

10. Native American Ethnobotany Data Base. http://naeb.brit.org/  

11. Polunin, M. and Robbins, C. The Natural Pharmacy, Collier Books, New York, 1992, p 122.

12. Foster, S. and Duke, J. Medicinal Plants and Herbs, Peterson Field Guide Series, Houghton Mifflin Company, New York, 2000, pp 264-265.

13. Socha, M. et al “Raspberry Leaves and Extracts-Molecular Mechanism of Action and Its Effectiveness on Human Cervical Ripening and the Induction of Labor” Nutrients, Volume 15 Number 14, 19 July 2023. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10383074/

Lone Star Tick

Common Name: Lone Star tick, Turkey tick – The prominent single white spot that marks the center of the carapace of females gave rise to the “lone star” metaphor. Although the tick is found in Texas, there is no evidence that it got its name from the state.

Scientific Name: Amblyomma americanum – Ambly is a combining form directly from Greek that means blunt or obtuse. Omma is similarly derived, meaning eye. The species was first described by Carolus Linnaeus in 1758 and serves as the type species of the genus. It is probable that “blunt eye” refers to the white dot. The species name reflects its North American origin.

Potpourri: Until about ten years ago, the lone star tick was dismissed as an interesting but esoteric member of Ixodidae, the tick family; it is not even included in popular field guides. [1] It was overshadowed by the black-legged tick, a cynosure due to the prevalence of Lyme Disease as a human pathogen, while the wood tick, almost identical to the lone star tick except for the white dot, is much more common in the mid-Atlantic region. That all changed when people began reporting severe allergic reactions after eating red meat. This inexplicable and widespread medical issue, which often led to hospital emergency rooms, drove a quest for causation. Epidemiological research that spanned a decade eventually correlated the times and locations of the severe and tellingly delayed anaphylactic shock of the red meat allergy with having been bitten by the lone star tick. Its spread eastward and northward over time has made it a subject of continued study and research as a public health issue.

The lone star story (not the one about the Alamo) started as an attempt to find the root cause of a disturbing sequence of unusual and serious medical anomalies. In this it resembles most complex interactions between human health and nature, exemplified by the English physician John Snow, who traced the cause of cholera to a contaminated Broad Street water pump in London in 1854, and true also of Lyme Disease. The first recorded correlation between lone star ticks and allergic reactions due to the consumption of mammalian meat occurred in Athens, Georgia in 1989. Dr. Antony Deutsch and Mrs. Sandra Latimer noted that there had been ten cases in which delayed anaphylaxis and/or hives occurred in patients who had consumed red meat and who had also been bitten by ticks some months before. Blood tests revealed that all of those affected had elevated blood levels of Immunoglobulin E (IgE), a naturally occurring antibody that is an integral part of the immune system of all mammals, supposedly stimulated by something contained in red meat. Although reported to the Centers for Disease Control (CDC) in 1991, there was no follow-up. [2]

Fifteen years later, a University of Virginia allergy researcher named Thomas Platts-Mills was trying to understand why some cancer patients had experienced severe allergic reactions when given the drug cetuximab. The key to the puzzle was a chemical bearing the tongue-twisting name of organic chemistry, galactose-alpha-1,3-galactose. This is a type of sugar found in red meat (including pork, “the other white meat”) and also one of the ingredients of cetuximab. While the nickname alpha-gal seems to have been an innocuous shorthand for the longer name, it does invite some speculation. The similarity to the sexist archetype alpha-male of boardroom and bedroom notoriety can hardly be ignored; that alpha-gal is a component of red meat, the consummate breakfast of champions, borders on the sardonic. The final piece of the puzzle was geographic. The cancer patients who experienced anaphylaxis when given cetuximab were from the so-called tick belt of the southeastern states, some with “massive tick bites on their ankles.” Dr. Platts-Mills confirmed his hypothesis after a hiking trip in the Appalachian Mountains near UVA in 2007. He returned to discover his ankles covered with lone star larvae, and a self-administered blood test confirmed high IgE levels. After a lamb dinner several months later, he awoke in the middle of the night covered with hives. [3]

In the fifteen years since, the link between lone star tick bites and red meat allergy has been elevated from correlation to causation. A report issued by the Centers for Disease Control in 2023 found that there were 110,000 confirmed cases of what has come to be known as the alpha-gal syndrome (Electra complex was ruled out) in the United States between 2010 and 2022. The number was based on data from laboratory testing incident to patients seeking medical care for life-threatening respiratory distress. The CDC concluded that underreporting of the condition was likely and that there were approximately four actual cases for every one reported and analyzed. With an estimated 450,000 cases during the twelve-year period of evaluation, tick-induced red meat allergy is in the top ten food allergies in the United States. The reasons for the dearth of reported cases are uncertainty of symptoms and lack of awareness. The anaphylactic reaction usually occurs two to six hours after a meaty meal and months after a tick bite, and can vary in severity across populations. A survey of 1,500 clinicians revealed that almost half had never heard of the condition, precluding a diagnosis altogether. That lone star ticks are spreading north and east is reflected in the data. The two states with the highest reported red meat allergy cases were Virginia and New York, where Suffolk County, the eastern two thirds of Long Island, accounted for four percent of all cases nationwide. [4]
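
The CDC's extrapolation from confirmed to estimated cases is simple arithmetic, sketched below in Python with the figures quoted above:

```python
# Scale laboratory-confirmed alpha-gal cases (2010-2022) by the CDC's
# estimated underreporting factor of about four actual cases per
# confirmed case.
confirmed = 110_000
underreporting_factor = 4

print(f"estimated cases: ~{confirmed * underreporting_factor:,}")
# estimated cases: ~440,000 (i.e., the "estimated 450,000" cited
# above, allowing for rounding)
```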

While much has been learned about the lone star tick to date, research continues. The first order of business is to determine the ultimate source of the chemical that induces the IgE immune spike. There are three hypotheses. The first is that the IgE stimulant is contained in the lone star tick’s saliva, which, like that of all ticks, is a complex chemical mix containing anticoagulants to ensure blood flow from the host, anesthetics to dull the host’s senses so as to avoid detection, and immunosuppressive agents to blunt host rejection. The second is that mammal blood from a previous meal contained the IgE antigens, a not unreasonable hypothesis given its provenance. The third is that the lone star tick is a vector that transfers the inflicting agent from one host to another, similar to the transmission of Borrelia burgdorferi, the cause of Lyme Disease, from the white-footed mouse to humans by the black-legged tick. Based on the results of these investigations, some form of antidote could presumably be devised. At present there is no cure beyond avoiding future tick bites and hoping that the immune response will abate over time. [5] However, the anxiety of a potential midnight ambulance ride to the emergency room after having Hamburger Helper for dinner causes most victims to avoid meat altogether, which is not necessarily a bad thing when health and the environment are taken into account. But for those who cannot imagine a life without pork chops and bacon, a company in Blacksburg, Virginia has developed an FDA-approved genetically engineered pig that lacks the gene responsible for alpha-gal, called (what else?) GalSafe. While ostensibly developed to produce organs for use in humans that might otherwise be rejected, the company offers GalSafe meat to alpha-gal patients. [6]

A second line of research seeks to understand the cause or causes of the tick diaspora over the last several decades. The spread of the lone star tick outside its historical range is not unique, but a pattern now typical of most if not all ticks. One possibility would be the movement of the winter freeze line northward due to now perceptibly higher temperatures, which opens new habitats to invertebrates like insects and ticks, the bark beetle’s devastation of western pine forests being a case in point. However, ticks don’t fly, have minuscule legs, and can only crawl a short distance during their brief lives. The commonly accepted reason for the spread of ticks is the burgeoning number of white-tailed deer across the eastern half of North America, long a matter of public concern due to environmental damage and deadly vehicular collisions―and now lone star tick vectors to boot. Deer feeding habits take them through ideal tick habitats, where they stand and graze placidly while the ticks climb aboard for a meal. Deer provide ticks geographic mobility. The lone star tick is also commonly found on wild turkeys, so much so that in the Midwestern states it is called the turkey tick (just as the black-legged tick is called the bear tick in northern states). [7] The upshot of more ticks is more tick-borne diseases, now more common and more varied. Two additional diseases associated with the lone star tick have become matters of public health concern (over and above red meat allergy) in the last ten years.

Southern tick-associated rash illness (STARI) emerged from the fog of tick-borne disease data only gradually. Metabolic measurements of blood from individuals with Lyme Disease and from those with STARI revealed different biosignatures, confirming an alternative etiology. [8] The CDC recognizes STARI as a new disease but considers it to be of unknown origin, noting only that it is correlated with being bitten by lone star ticks. The symptom is a red circular rash, generally described as a “bull’s eye,” identical in appearance to the one that accompanies Lyme Disease contracted from a black-legged tick. [9] To complicate matters, a spirochete similar to the one that causes Lyme Disease, tentatively named Borrelia lonestari, has been found in at least one patient but does not seem to be routinely present in other victims; the search continues.

A second emerging disease attributed to lone star tick bites is Heartland virus, named for the Heartland Regional Medical Center in St. Joseph, Missouri, where the first two patients were treated in 2009. When seven additional cases, including two fatalities, occurred several years later, the quest for the source of the virus began in earnest. In 2013, 50,000 ticks were collected in Missouri and surrounding states and tested for Heartland virus; only lone star ticks were found to carry it. In parallel, between 2009 and 2014, blood samples from 1,428 animals including white-tailed deer, raccoons, moose, and coyotes were analyzed, and 103 tested positive. The tentative theory is that the lone star tick nymph takes a blood meal from an infected animal, most likely deer, and then transmits the virus to a human as an adult. [10] And that is not all: the lone star tick has also been implicated in the transmission of many of the known tick-borne diseases, including ehrlichiosis, rickettsiosis, tularemia, and protozoan infections. The spread of tick-borne diseases is a concern for anyone who spends time outdoors in grassy areas―like hikers.

A working knowledge of cradle-to-grave tick behavior helps establish the actions needed to minimize the chances of getting bitten and contracting a tick-borne disease. Each lone star tick hatches as a six-legged larva, one of about 5,000 eggs laid by a gravid female in a location chosen for likely success (females have been tracked moving over half a meter to find a suitable, moist location). The first of three blood meals is necessary to proceed to the eight-legged nymph stage (ticks are arachnids akin to spiders). The second blood meal provides the energy and nutrients to molt into an adult. Both larval and nymph stage ticks can survive up to six months without eating. A last supper, enabling males to produce spermatophores and females to produce eggs and the pheromones that attract a mate, completes the life cycle with oviposition. While larvae and nymphs parasitize birds and small mammals, adults literally quest for larger prey like deer and humans. [11] The evolution of ticks to survive on three blood meals for the entirety of their short lives is one of the wonders of the natural world. Only three sensory capabilities are needed to achieve this end. Photosensitivity draws the tick upward on a blade of grass until it is at or near the apex. Once in position, the tick extends its front legs to attach to a passing animal in a stance appropriately called questing. Ticks, the epitome of persistence, can remain ready at the quest for up to three years. The second is sensitivity to butyric acid, an exudation of all mammals, necessary to consummate the quest on a suitable host. The third is sensitivity to surface warmth, which triggers proboscis insertion to initiate feeding. It doesn’t matter what is on the other side of the warm surface; a tick will suck from a balloon filled with warm water. [12]
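
The three-blood-meal sequence can be summarized as a toy state machine in Python; the stage names and meal count come from the paragraph above, while the code itself is only an illustration:

```python
# Toy model of the lone star tick life cycle: each blood meal advances
# the tick one stage, and the third enables reproduction (oviposition).
STAGES = ["larva (6 legs)", "nymph (8 legs)", "adult", "oviposition"]

stage = 0  # a six-legged larva hatches from the egg
for meal in range(1, 4):
    stage += 1
    print(f"blood meal {meal} -> {STAGES[stage]}")
# blood meal 1 -> nymph (8 legs)
# blood meal 2 -> adult
# blood meal 3 -> oviposition
```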

So how can you protect yourself from tick-borne disease? Three methods have been prescribed. The first line of defense is to use a tick repellent like DEET; an acaricide (Acarina is the order for ticks and mites in the class Arachnida) like permethrin that kills ticks outright is preferable. However, nothing is absolute, and ticks are insidious, penetrating the shield at an untreated spot inevitably missed during application. The second line of defense is habitat removal, getting rid of the grasses that offer knee-high access for successful attachment. Mowing may work around a farm, but it is not practical in open woods with passing trails. While it is good practice to hike in the middle of a trail where the grass is beaten flat, a trip to the bushes will at some point be required. The third method is host removal. Studies of the number of deer that would have to be removed to reduce tick populations conclude that it would only work with near annihilation, neither feasible nor desirable. The one guaranteed method of preventing tick-borne diseases is vigilance. A complete whole-body inspection down to bare flesh must be conducted after every hike in potential tick habitats between March and November, the active tick season. If a tick is found and removed within 24 hours, the chances of contracting a disease are negligible. This is because it takes that long for the tick to drill a hole to reach blood and then inject the chemical saliva that keeps the well open and flowing; the injected saliva carries the disease pathogens. There will still likely be a maddeningly itchy red spot to deal with, but topical creams and patience will suffice in remediation. However, an ambulance ride after a steak dinner can be averted.

References:

1. Milne, L. and Milne, M. National Audubon Society Field Guide to North American Insects and Spiders, Alfred A. Knopf, New York, 1980, pp. 423-430

2. Steinke, J. et al. “The alpha gal story: Lessons learned from connecting the dots”. The Journal of Allergy and Clinical Immunology. March 2015. Volume 135 Number 3, pp 589–597.

3. Beck, M. “Ticks that spread Red-Meat Allergy” The Wall Street Journal, 11 June 2013

4. Sun, L. “Tick-linked meat allergy may not be so rare, researchers say,” Washington Post, 28 July 2023. (The meat need not be rare either)

5. Steinke, op. cit.

6. Lewis, T. “Red Meat Allergy Caused by Tick Bite Is Spreading—And Nearly Half of Doctors Don’t Know about It” Scientific American, 7 August 2023.

7. University of Florida Entomology and Nematology Department https://entnemdept.ufl.edu/creatures/urban/medical/lone_star_tick.htm    

8. Molins, C. et al “Metabolic differentiation of early Lyme disease from southern tick-associated rash illness (STARI)” Science Translational Medicine, 16 August 2017, Volume 9 Issue 403.

9. https://www.cdc.gov/lyme/about/about-southern-tick-associated-rash-illness.html       

10. Enserink, M. “The Heartland virus may occur across eastern U.S.” Science, 18 September 2015.

11. University of Florida, op cit.

12. Wilson, N. “Acarina” Encyclopedia Britannica Macropedia, Volume 1 pp 19-23 and Volume 2 p 805, William and Helen Benton Publishers, University of Chicago, Chicago, Illinois, 1972.

Chipmunk

The most notable feature of chipmunks is the prominent dorsal striping.

Common Name: Chipmunk, Ground squirrel, Grinny, Hackle, Chippee, Rock squirrel – Chipmunk is an Anglicized version of the name given to the animal by Native Americans: originally ajidamoo in the Ojibwa dialect of the Algonquian language group, meaning red squirrel, and traced to acitamo in an earlier dialect, which literally meant “upside down to eat.” With the removal of the two “a” sounds it became jidmoo or citmo, which evolved to chipmunk, probably due to the similarity to the existing English words chip and mink. It first appeared in print in Huron Chief, a poetry collection by Adam Kidd published in 1830. [1]

Scientific Name: Tamias striatus – The generic name is Greek meaning “steward” or “dispenser,” for the food hoarding behavior of chipmunks: they are stewards of their cache, keeping it replenished over summer and fall and dispensing from it in winter and spring. The species name striatus is Latin for striped, emphasizing the most notable feature of the chipmunk―a striped ground squirrel.

Potpourri: Chipmunks encountered along hiking trails offer only a streak of striped fur crossing a fallen log to a hidden den. While they are fervid foragers, packing their ample cheek pouches with seeds and nuts, they are fearful of looming, large animals; rodents are an important source of food for a whole range of carnivorous animals and birds of prey. The brief encounter is sometimes accompanied by a series of sharp sounds consisting of a high-pitched chirping, a staccato chipping, or both. While it is contended by some and believed by many that this is the reason they are called chipmunks [2], linguists trace the name to the Algonquian word acitamo (as per the etymology above). There are no chipmunks in Europe, so the immigrant colonists had never seen one and co-opted the existing Native American name.

Chipmunks are in the order Rodentia and the family Sciuridae and are therefore closely related to squirrels and marmots. The rodents are the largest group of mammals, comprising roughly 50 percent of all species, and closer to 70 percent if individual animals are counted, on account of their geometric proliferation. Like all rodents, chipmunk incisors grow at a rate of several millimeters a week throughout their lives (less during hibernation), which both promotes and necessitates the frequent gnawing of hard objects. [3] Phylogenetic research over the last several decades has revealed that rodents are closely related to primates in the tongue-twisting superorder Euarchontoglires. Euarchonta is a grand order of mammals consisting of primates and, surprisingly, tree shrews and the small gliding mammals of South Asia called colugos. Glires is a genetically related group, or clade, consisting of rabbits and rodents. In spite of our anthropocentric world view (Euarchonta means true rulers), humans share a common ancestor with rodents like chipmunks dating from the late Cretaceous Period at the end of the Mesozoic Era dominated by dinosaurs. [4] The behavioral characteristics of chipmunks therefore offer some insight into the evolutionary foundations of all Euarchontoglires. A 130-million-year-old fossil skull of a rodent-like mammal recently found in Wyoming was nicknamed the mutant ninja chipmunk for its pronounced front teeth and saw-edged rear teeth, “the terror of the underbrush.” Evolution is relentlessly creative. [5]

Chipmunks are hermit hoarders with masterful survival skills. They live alone in underground tunnels about two inches in diameter and up to thirty feet in length. The tunnels are dug with sharply clawed front feet, the excess dirt pushed to the surface and carried away to conceal the openings. Multiple exits afford escape from predators that dig like dogs or slither like snakes. Tunnel depth ranges from two to three feet according to the prevailing winter temperatures. The tunnels are fashioned with several sleeping areas lined with soft leaves and several food storage areas, normally at the lowest point in the tunnel to keep the cache cool and fresh.

Chipmunks are omnivorous. Their primary diet of nuts, seeds, and berries is occasionally augmented by the consumption of fungi, slugs, insects, and even small birds and snakes. As the colder and shorter days of fall herald the coming of winter, chipmunks become consummate foragers, storing nonperishable foodstuffs in their tunneled lairs for midwinter nourishment. The cheek pouches of the chipmunk are capacious, as confirmed by field measurements of 31 corn kernels, 70 sunflower seeds, or 32 beechnuts per load. Foraging experiments revealed that a chipmunk can deliver about 4,600 kilojoules of food energy to the larder every day (roughly 1,100 kilocalories, the dietary “calories” of food labels). The resulting cache can contain over 5,000 nuts. Chipmunk populations rise and fall with the availability of nuts and seeds. Accordingly, when oak and hickory trees produce an abundant crop of nuts every three to five years, a phenomenon known as masting, chipmunks thrive. [6]
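
The unit conversion is worth making explicit; a one-line Python check, using the standard factor of 4.184 kilojoules per kilocalorie:

```python
# Convert the chipmunk's daily foraging haul from kilojoules to
# kilocalories (the dietary "calories" of food labels).
daily_haul_kj = 4600
print(f"~{daily_haul_kj / 4.184:.0f} kcal per day")  # ~1100 kcal
```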

Chipmunks do not hibernate per se in the winter; they enter a torpid state called superficial or shallow hibernation. True hibernation occurs when body temperature is lowered to near that of the environment with concomitant physiological changes, like a reduction in breathing to about three irregular breaths per minute. Only three orders of mammals display true hibernation: Insectivora like the hedgehog; Chiroptera, the bats; and some Rodentia, like the marmot and the ground squirrel. Shallow hibernation is a compromise between full activity and true hibernation, the latter being a more precarious mode of survival from which a significant number of animals never awaken. The chipmunk retires to its tunnel nest in the late fall and enters a deep sleep during which its body temperature is lowered and its metabolism slowed. About every two weeks, it awakens and snacks on cached food in a somnambulant state and then resumes its long sleep. The average 163 kJ energy consumption of an active chipmunk is reduced to 25 kJ in winter torpor. [7]
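
The energy savings of torpor follow directly from those two figures, as a short Python sketch shows:

```python
# Percent reduction in energy use from active state to winter torpor,
# using the average consumption figures quoted in the text.
active_kj, torpor_kj = 163, 25
print(f"torpor cuts energy use by ~{1 - torpor_kj / active_kj:.0%}")
# torpor cuts energy use by ~85%
```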

The only time chipmunks associate with one another, except in confrontation over territory, is during the spring mating season, when female pheromones attract males who aggressively compete for paternity rights using loud chipping noises, chase reversals, and even biting. After mating with multiple males during the 6- to 7-hour period of estrus, females return to their own burrows to give birth to about five blind and completely helpless young (which could be, but are not, called chipmunkies) that are nurtured absent any male parental care or provision. After about 40 days, they are weaned and abandoned, their mother establishing a new burrow elsewhere, sometimes to produce a second litter. One in five survives the first year, resulting in a stable population of about three adult chipmunks per acre in suitable habitats.
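
The demographic figures in the paragraph above roughly cohere, as a small Python check illustrates; the numbers come from the text, while the interpretation in the closing comment is an assumption:

```python
# About five young per litter, one in five surviving the first year.
litter_size = 5
first_year_survival = 1 / 5

survivors = litter_size * first_year_survival
print(f"~{survivors:.0f} surviving offspring per litter")  # ~1
# One recruit per litter, plus an occasional second litter, is
# plausibly enough to offset adult mortality and hold the density
# near the stable three adults per acre cited above.
```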

The coloration of western chipmunks is more muted, although this picture from Wyoming is of the closely related golden-mantled ground squirrel.

For a small and relatively insignificant rodent, chipmunk evolution and classification have aroused serious scientific inquiry for more than a century. A USDA survey in 1930 noted that “these animals, so attractive to every lover of nature” had been analyzed using “modern methods.” A total of 14,554 chipmunk specimens were evaluated, leading to the conclusion that there were 65 species and subspecies in three genera: Eutamias in the eastern regions of Asia; Neotamias in western North America; and Tamias in eastern North America (the prefix eu means true; neo means new). [8] The most notable physical difference was that chipmunks from humid areas were richly colored compared to the pallid hues of those in dry regions. [9] However, there is no universal consensus concerning chipmunk taxonomy, and many references place all 23 species of chipmunk in the single genus Tamias. One respected field guide addresses the similarity in appearance of the western variants “with the shape of the penis (or baculum) and the call often serving as the basis for identification.” These species are generally confined to small, isolated areas along the west coast and probably evolved in isolation after their populations separated. [10]

Chip ‘n’ Dale

The chipmunk is prominent in the legends of Native Americans. One well-known Iroquois story called “Chipmunk and Bear” anthropomorphizes the former as devil-may-care and the latter as unbearably arrogant. Bear proclaimed he could do anything, so Chipmunk challenged him to stop the sun from rising. Bear took the challenge and proclaimed that the sun would not rise the next morning. When the sun rose, Bear was upset, and Chipmunk laughed so hard that he collapsed from weakness. Bear pinned Chipmunk to the ground with one big paw, proclaiming “your time to walk the Sky Road has come.” Chipmunk asked for one last prayer, beseeching Bear to lift his paw just enough for him to breathe. Bear complied and Chipmunk pulled free, the tips of Bear’s claws scraping his back in the process and leaving three stripes. Chipmunk retains the stripes to this day as a reminder that one animal should never make fun of another. [11] Chipmunks also feature prominently in modern culture, from Alvin, Simon, and Theodore singing Christmas carols in the 1960s to Chip ‘n’ Dale, the Disney cartoon chipmunks with their own comic books and a television series.

Why do chipmunks have stripes? This may seem like an innocuous question, as many other animals are striped in whole or in part. However, rodents rely on a combination of stealth, speed, and dexterity to survive in a dangerous world of predators. Almost every other rodent has fur that ranges from gray to brown, blending into the background with no contrasting colors. Stripes are noticeable and would only provide camouflage in stripe-like grassy areas such as the savannahs of African zebras or the reeds of Indian tigers; chipmunks live in temperate deciduous forests of the Northern Hemisphere with brown tree trunks and tan leaf litter. Research on the African striped mouse, one of the only other striped rodents, revealed that the striping was caused by a gene that interrupts the development of pigment cells, repurposed from a gene associated with cranial development, a phenomenon called co-option. Chipmunks have the same gene. The hypothesis is that both species independently evolved striping through mutations that were random but survived in some Darwinian way. [12] Since it is unlikely that the stripes improved survival from predation (or squirrels would have them), there can be but one reason: female chipmunks prefer striped males. Bear claws can have nothing to do with it.

References:

1. Oxford English Dictionary https://www.oed.com/dictionary/chipmunk_n?tl=true

2. New Hampshire Public Television https://www.nhptv.org/natureworks/chipmunk.htm     

3. Wood, A. “Rodentia” Encyclopedia Britannica Macropedia, William and Helen Benton Publishers, University of Chicago, 1974, Volume 15 pp 969-980.

4. Drew, L. I, Mammal, Bloomsbury Publishing, London, England, 2017, pp 193-224.

5. Holden, C. “The Rise of the Ninja Chipmunk”  Science, 18 May 1990, Volume 248 Issue 4957, p. 810

6. Saunders, D.  “Eastern Chipmunk”. Adirondack Mammals. Adirondack Ecological Center. 1988. https://www.esf.edu//aec/adks/mammals/chipmunk.php 

7. Pennsylvania Game Commission https://www.pgc.pa.gov/Education/WildlifeNotesIndex/Pages/Chipmunk.aspx#

8. Patterson, B. and Norris, R. “Towards a uniform nomenclature for ground squirrels: the status of the Holarctic chipmunks” Mammalia, 1 May 2016, Volume 80 Number 3, pp 241–251.

9. Cockerell, T. “Book Review of Revision of the American Chipmunks by Arthur Howell, USDA Publication 52, November 1929” 28 March 1930 Science Volume 71 Issue 1839, pp 342-343.

10. Whitaker, J. National Audubon Society Field Guide to North American Mammals, Alfred A. Knopf, New York,1996, pp 408-438

11. https://www.native-languages.org/legends-chipmunk.htm#google_vignette  

12. Reuell, P. “Science of Stripes” Harvard Gazette, 17 November 2016 https://news.harvard.edu/gazette/story/2016/11/science-of-stripes/

Horsenettle

Horsenettle flowers range from light purple to white, all with tubular yellow stamens to attract pollinators

Common Name:  Horsenettle, Bull nettle, Carolina horse nettle, Apple of Sodom, Devil’s potato, Thorn apple, Wild tomato, Poisonous potato – A nettle is a plant of the genus Urtica noted for stinging hairs. The name has been widely applied to other plants that have prickles like the horsenettle. The horse association is likely due to the fact that horsenettle plants are commonly found in pastures, like those fenced off for horses.

Scientific Name: Solanum carolinense – Solanum is Latin for nightshade. The genus name is attributed to Pliny the Elder (Gaius Plinius Secundus), a Roman military commander and naturalist of the first century AD. The origins of Solanum are unclear, but sol is Latin for sun, and there is a sunberry flower in the nightshade family; the similarity in spelling to the Latin word solamen, which means comfort, is another possible etymology. [1] Plants of the Solanum genus have historically been widely used as medicine for a variety of ailments and conditions. The species name is a reference to the North American colony Carolina, where the plant was first noted, probably before the colony's division between north and south.

Potpourri: The horsenettle is a weed according to the standard definition, as it grows where humans don’t want it to grow and crowds out preferred plants. If weediness is a matter of garden aesthetics, however, an argument can be made that the five-petalled white or purplish star with five elongated yellow stamens projecting from the center has some appeal. If weediness is detrimental to food crops like soybeans and wheat awaiting harvest from farm fields, then eradication with herbicides may be justified. Horsenettle is also poisonous, to the extent that it is included in edible wild plant field guides as a cautionary measure to prevent gathering the wrong things when edible plants are sought. [2] But it is also medicinal, having been used by Native Americans and subsequently by colonizing Europeans for centuries. This, too, is not unusual, as horsenettle is a member of the Nightshade family, a rogue’s gallery of deadly plants that also includes potatoes, tomatoes, peppers, and eggplants, mainstay edibles of western cuisines. Horsenettle is a bad weed and a good medicine, and it has ugly prickles.

Another thing that can be said about weeds like horsenettle is that they are successful plants, able to flourish in marginal soils and spread outward in profusion. That is what all living things aspire to do, perpetuating their own kind according to the survival-of-the-fittest recipe. Darwin came to recognize that competition among plants was equal to if not greater than that among animals, even as Galapagos finch beaks became his focus. As a backyard scientist with inimitable curiosity, he conducted a field test by clearing six square feet of his garden down to bare soil to observe the emergence of native weeds. He noted that “out of 357 no less than 295 were destroyed, chiefly by slugs and insects,” the detail a testimony to his thoroughness. As confirmation, he repeated the experiment on a second area of established turf, noting that “out of twenty species … nine species perished” because the “more vigorous plants gradually kill the less vigorous.” [3] It is evident that becoming a successful weed is an evolutionary feat rather than a routine event. It is also apparent that the weeds that persist and become human problems are the cream of the weed crop, exceptionally evolved for propagative efficiency.

Horsenettles are poisonous because they produce an alkaloid chemical named solanine, the name derived from Solanaceae, the Nightshade family of almost 4,000 plant species in nearly 100 genera. Alkaloids are complex organic compounds that in many cases have physiological effects on animals, ranging from medicinals like morphine to hallucinogens like mescaline and stimulants like nicotine (the “ine” suffix is prescribed). The root alkali is derived from the Arabic word for the calcined ashes of the saltwort plant and refers to molecules that are basic (pH > 7), the opposite of acidic. Alkaloids are mostly bitter, which is undoubtedly the reason why bitter is one of the five tastebud types, the others being sweet for sugars, salt for minerals, sour for ripeness, and savory for proteins. Bitterness warns of poison, and most animals avoid bitter plants like horsenettle. The genetic code for bitterness taste sensors was retained by the survivors; individuals that lacked the sensitivity learned about bitter poisons the hard way. Up until the nineteenth century, plant compounds were known only through trial and error; the alkaloid associated with poison hemlock (coniine) was the first to be synthesized, in 1886. [4]

The taxonomy of plants is based on familial similarities, and the production of a specific alkaloid is typically a shared characteristic. This is true of the nightshades (Solanaceae) just as it is of the buttercups (Ranunculaceae), poppies (Papaveraceae), and barberries (Berberidaceae). Alkaloid concentrations vary among the species of a family from plentiful to nearly nonexistent; the nightshades range from almost no alkaloid in tomatoes, potatoes, and eggplant to substantial amounts in horsenettle and tobacco. Why plants produce alkaloids is uncertain. Experiments have shown that tomatoes grafted onto tobacco stems produce no solanine, while tobacco grafted onto tomato rootstocks does, which would indicate that solanine is not involved in growth or metabolism. That is not to say that there is no purpose in a plant making a complex chemical compound, which takes energy and raw materials. There is more to life than growth, and there is more to genetics than the here and now. Alkaloids may be vestigial remnants that served a purpose in the evolutionary past but are no longer relevant.

Horsenettle fruits look like small tomatoes

Alkaloids may also have a role in reproduction, as some plants produce high levels during seed and fruit formation that become depleted when the seed is ripe. Horsenettle fruits look like miniature tomatoes. Whether they are toxic or not is an open question; one source says that “the berries are the most toxic when they are mature” [5] while another says that “all parts of the plants, except the mature fruit, are capable of poisoning livestock.” [6] Since poisoning experiments on humans and livestock are not ethically acceptable, almost all reports of poisoning are anecdotal. It is probable that immature fruits are poisonous and mature, ripe fruits are not. This makes sense, as plants produce fruit to be eaten by animals so that the seeds are distributed in a dollop of fertilizing manure; all parts of the mayapple, for example, are poisonous except the ripe fruit. Experiments with livestock that consumed ripe horsenettle fruits have shown that the seeds pass through the gut unharmed, exactly as would be intended and predicted. [7]

The relationships between animals and plants are complex, particularly when it comes to alkaloids. Ostensibly, plants produce the bitter compounds through random genetic mutation until a formulation occurs that keeps animal predation in check. However, in the niche-centric ecology of survival, the opposite must also occur: animals that evolve some form of immunity to certain alkaloids in certain plants gain the advantage of abundant food avoided by competing herbivores. The example of monarch butterfly caterpillars eating milkweed that is poisonous to nearly all other animals is well known. Experimentation has shown that this is more the rule than the exception. When the Panama Canal was built in the early twentieth century, the flooding of Gatun Lake created Barro Colorado Island, where a Smithsonian field station was opened in 1924 to conduct long-term experiments on evolution in an isolated biosphere. A recent study of the 174 caterpillars found on the island revealed that they were “picky eaters” in choosing which of the more than 200 toxic compounds they would consume. This “encourages diversification, as new species with new, temporarily insect-proof toxin profiles emerge.” [8] It is therefore not surprising that a fair number of insects, and some animals, eat horsenettle leaves, stems, and fruit.

The vast majority of twenty-first-century humans have plenty to eat, in many cases too much. There is no such cornucopia in the wild, where life is “nasty, brutish, and short” according to Thomas Hobbes. Many insects and a few animals consume not only the horsenettle fruit but also the bitter, normally poisonous leaves and stems. A study conducted in Virginia over six years (1996-2002) revealed that 31 insect species from six different orders ate horsenettle voraciously; a detailed survey of 960 horsenettle plants found that the plants were severely damaged. And it wasn’t just bugs, as meadow voles also consumed horsenettle with no apparent ill effects. The most damaging insect species were those that also fed on other Nightshade family plants, including the eggplant flea beetle and the false potato beetle, in keeping with the evolutionary pathway of alkaloid tolerance. Fruits were assessed separately due to their importance in propagation as the seed-bearing component of the plant. The three species that accounted for 75 percent of fruit damage were false potato beetles, pepper maggots, and meadow voles. [9] This also lends some validity to the overall scheme of life, with plants producing sweet, tasty fruit to attract animals for seed dissemination.

As is the case with many plants listed as poisonous to animals in general and humans in particular, horsenettle has historically been used for medicinal purposes. In the eons that preceded the Renaissance in the arts and sciences, treatment of human and livestock ailments was a matter of local lore and tradition using naturally occurring substances, mostly plants. Essentially, the chemicals created by a plant for its own use and protection provided similar benefits when consumed by an animal. In the case of horsenettle, the Cherokee, who were indigenous to Virginia and the Carolinas where it originated, were its most inventive purveyors. The leaves were used internally to dispel worms (apparently worms don’t like it either) and externally to treat poison ivy (although one would think that the Cherokee had figured out the “leaves of three, let it be” rule). Fruits were boiled in grease to treat dogs with mange, and the seeds of the fruit were made into a sore throat gargle. [10] The Native American uses of native plants were in many cases adopted by early colonists, so that these “natural remedies” appeared in the early listings of drugs. Horsenettle was listed in the United States Pharmacopeia from 1916 to 1936 as a treatment for epilepsy and, in keeping with the “snake oil” practices of an unregulated past, as both an aphrodisiac and a diuretic. It has long since disappeared from the apothecary’s shelves and is now known mostly for its toxicity. A modern medicinal plant guide concludes with “fatalities reported in children from eating berries.” [11]

References:

1. Simpson, D. Cassell’s Latin Dictionary, Wiley Publishing, New York, 1968, pp 560, 772.

2. Elias, T. and Dykeman, P. Edible Wild Plants, Sterling Publishing Co., New York, 1990, p 265.

3. Darwin, C. On the Origin of Species, Easton Press, Norwalk, Connecticut, 1976, p 50.

4. Manske, R. “Alkaloids” Encyclopedia Britannica, Micropedia, William Benton Publisher, University of Chicago, 1974, Volume 1, pp 595-608.

5. North Carolina State University Agricultural Extension https://plants.ces.ncsu.edu/plants/solanum-carolinense/

6. Bradley, K. and Hagood, E. “Identification and Control of Horsenettle (Solanum carolinense) in Virginia” http://www.ppws.vt.edu/scott/weed_id/horsenettle.PDF

7. Illinois Wildflowers https://www.illinoiswildflowers.info/prairie/plantx/hrs_nettlex.htm

8. “One hundred years of plenitude” The Economist, Science and Technology, 6 July 2024, p 64.

9. Wise, M. “The Herbivores of Solanum carolinense (Horsenettle) in Northern Virginia: Natural History and Damage Assessment” Southeastern Naturalist, 1 September 2007, Volume 6, Number 3, pp 505-522.

10. Native American Ethnobotany Data Base http://naeb.brit.org/

11. Duke, J. and Foster, S. Medicinal Plants and Herbs, Peterson Field Guide Series, 2nd edition, Houghton Mifflin Company, Boston, 2000, p 206.

Destroying Angel – Amanita bisporigera

The key features of the Destroying Angel are the cup-like volva at the base of the stem, the stark whiteness of the stem, cap, and gills, and the partial veil hanging from the top of the stem just below the gills under the cap.

Common Name: Destroying Angel, Fool’s Mushroom, Death Angel, White Death Cap – The virginal whiteness of all parts of the mushroom is aptly described as angelic – beautiful, good, and innocent. That it is anything but is conveyed by the addition of destroying for its death-dealing toxicity.

Scientific Name: Amanita bisporigera – The generic name is taken directly from the Greek word amanitai, probably a reference to the Amanus Mountains of southern Turkey, where the noted Greek physician Galen may have first identified the archetype Amanita. [1] The specific name indicates that there are only two spores on each of its basidia, in contrast to the four spores of most other basidiomycete fungi. It is virtually indistinguishable from Amanita virosa and Amanita verna, which both frequently appear as synonyms in mushroom field guides.

Potpourri: The destroying angel is a toadstool nonpareil. While the origin of the term toadstool is obscure, it cannot be a coincidence that Todesstuhl means “death’s chair” in German, the language of the Saxons who emigrated to England. Its notoriety is due not only to its being one of several mushrooms that contain deadly poisons called amatoxins, but also to its close resemblance to Agaricus campestris, the edible field mushroom that is the cousin of the cultivated white button mushroom of supermarkets and salad bars. Both are white, similar in size and shape, and grow in the same habitat, primarily grass under or near trees. The destroying angel is the most dangerous of the numerous doppelgänger mushrooms, the deadly twin of a well-known and often consumed edible. Misidentification, absent knowledge of the subtle physical differences between the two, can result in discovering the profound physiological differences, sometimes with deadly results. The field white mushroom is nourishing. The angelic white mushroom is Shiva.

The cup at the bottom of the stem is the volva, the bottom half of the universal veil.

The key features that distinguish the destroying angel from similar mushrooms are straightforward if you know what to look for. First and foremost is the volva (Latin for a covering like a husk or shell), the cup-like structure at the base of and surrounding the stem or stipe. The volva is frequently hypogeal, i.e., underground and out of sight, which means that it can only be positively identified by digging up the soil around the base of the mushroom. [2] However, it is the standard and preferred practice among mushroom gatherers to use a knife to cut cleanly through the stem at the base so that the mycelium of the fungus from which the fruiting body grows is not seriously disturbed. The procedure is analogous to gathering apples from an apple tree: the fungal mycelium and the apple tree survive to produce new mushroom spores and fruit seeds for future generations. Using the standard harvesting technique, it is easy to see how a volva below the cut would go unnoticed. White mushrooms must be dug out whole to avoid the dilemma of the death mushroom.

The only way to be certain that you have a puffball and not a Destroying Angel is to cut it in half.

The volva is the bottom part of what is known as a universal veil, a thin membrane that envelops the mushroom during the subterranean growth phase to protect the gills and the spores they hold from damage. The universal veil is a characteristic of all mushrooms in the Amanita family. While a few other mushrooms also have a universal veil and its volva (such as the genus Volvariella, named for this characteristic feature), it is a reliable identification feature for the destroying angel. Spore-bearing mushrooms are produced by the fungal mycelium underground as an ovoid called a primordium. Once they mature and environmental conditions are promising (like after rain), the extension of the stem causes the universal veil to tear around its circumference to expose the cap and gills of the fruiting body for spore dispersal. The volva is the lower part of the “eggshell” that remains attached to the bottom of the stem. Prior to upward extension, the destroying angel looks like a white egg, similar in appearance to a puffball, another type of edible fungus with which it can be confused. Some field guides include a picture of it in the puffball section to emphasize the danger of mistaken identity. [3] The only way to be absolutely sure is to cut the fungus lengthwise; a destroying angel will reveal a cap and gills within.

Many mushrooms have what is known as a partial veil, which also helps prevent damage to the reproductive gill surface. It is partial in that it covers only the underside of the cap, extending from the edges of the cap to the stem. When the mushroom cap expands fully, the partial veil also tears, in many cases leaving remnants around the edges and a ring called an annulus attached to the stem just below the cap. In some cases, the partial veil remnant can be seen hanging like a draped clerical mozetta at the top of the stem. However, the annular ring is not well connected, and many mushrooms with partial veils retain no remnant at all. Most Amanita family mushrooms have both universal and partial veils, with both a volva at the bottom and a ring around the stem, as is the case with the destroying angel. The double protection afforded to the gills must have contributed to the propagative success of the species; Amanitas are among the most prolific of all mushroom families. Partial veils and the remnant annulus are also characteristic of the Agaricus family, which includes the edible field mushroom Agaricus campestris. Agaricus mushrooms, however, do not have universal veils with the telltale volva.

The second prominent feature of the destroying angel is the stark whiteness of the cap, stem, and gills, which has been described as having a “strange luminous aura that draws the eye” that is “easily visible from one hundred feet away with its serene, sinister, angelic radiance.” [4] The cap is smooth and usually described as viscid or tacky when wet. This distinguishes it from most of the other species in the Amanita genus, which have warty patches on the cap from the dried-out and cracking universal veil, like the white warts dotting the bright red cap of the iconic fly agaric (Amanita muscaria). The glowing purity of the whiteness is a reliable feature for initial field identification. Confirmation by looking for a picture or drawing of a white mushroom with a volva and annular stem ring in a field guide is another matter. One provides only Amanita verna, the fool’s mushroom, prevalent only in spring (vernus in Latin); the common name implies that its deception fools the observer. [5] A second field guide provides both A. verna as the spring destroying angel and Amanita virosa (virosus is Latin for poisonous) for mushrooms that appear in the fall, with only a passing reference to A. bisporigera. [6] DNA sequencing of fungi has had a profound impact on the eighteenth-century Linnaean system of basing taxonomy on physical similarity. It has been shown that all destroying angels of North America are A. bisporigera (with one additional species, A. ocreata, in California) and that A. verna and A. virosa are found only in Eurasia. Destroying angel is a universal common name for all of these white mushrooms with a volva.

The destroying angel is one of the deadliest mushrooms known. According to one account, “misused as a cooking ingredient, its alabaster flesh has wiped out whole families.” [7] The toxic chemicals are called amatoxins (from the generic name Amanita), peptide molecules made up of eight amino acids in a ring called a cyclopeptide with a molecular weight of about 900. The death-dealing amatoxin variant is alpha-amanitin, which disables RNA polymerase, a crucial metabolic enzyme. RNA polymerase transcribes the DNA blueprint, creating the messenger RNA that transports the codon amino acid recipe used to make the proteins on which all life depends. The ultimate result is rapid cell death. The gastrointestinal mucosa cells of the stomach, the hepatocytes of the liver, and the renal tubular cells of the kidneys are the most severely affected because they have the highest turnover rate and are rapidly depleted. The liver is most at risk because the alpha-amanitin absorbed by the hepatocytes is excreted with the bile and then reabsorbed, cycling through the liver repeatedly. The initial stage of amatoxin poisoning starts about ten hours after ingestion; the gastrointestinal mucosa cells are the first to be affected, resulting in forcible eviction (aka vomiting) of the intruding poisons. There follows a period of several days of calm as the stomach cells recover somewhat before the storm of hepatic and renal debilitation. The third and final stage can in severe cases lead to a crescendo of convulsions, coma, and death. The lethal dose for 50 percent of the population, or LD50, is used by toxicologists as a benchmark for relative virulence. The LD50 for alpha-amanitin is 0.1 mg/kg, so a 70 kg adult will have a 50-50 chance of survival after a dose of 7 milligrams, the amount of alpha-amanitin in one small destroying angel. [8]
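The arithmetic behind that last figure is simply the LD50 scaled by body mass, using the 0.1 mg/kg and 70 kg values above:

lethal dose = LD50 × body mass = 0.1 mg/kg × 70 kg = 7 mg

The same linear scaling implies that a 35 kg child reaches the 50-50 threshold at about 3.5 mg, roughly half of one small destroying angel by these figures, which is one reason an identical meal is far more perilous for a child than for an adult.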

The North American Mycological Association (NAMA) received a total of 126 reports of amatoxin poisoning over a period of thirty years, about four annually. The fatality rate has historically been on the order of thirty percent, attributed to liver and/or kidney failure; this number has improved over the last several decades to about five percent due to a better understanding of the physiological effects of amatoxins combined with aggressive therapy. The basic tenet of the treatment is to reduce the toxin concentration in the blood serum as rapidly as possible. Gastric lavage is used if the ingestion was recent enough, followed by a thorough purging using emetics to induce vomiting and cathartics to induce evacuation of the bowels (essentially duplicating the expelling effect of the poison on the gastrointestinal mucosa cells). Perhaps the most important therapy is the use of activated charcoal, as amatoxins have a high affinity for adsorption on its surface. Although there is no proven antidote, intravenous injections of penicillin have been used with some apparent benefit. A French physician named Bastien developed a three-part procedure using intravenous injections of vitamin C and two types of antibacterial drugs supplemented with penicillin to successfully treat 15 cases. To prove its efficacy unequivocally, he conducted the ultimate experiment, eating 70 grams of Amanita phalloides, the death cap cousin of the destroying angel, and using the protocol on himself. [9] The most promising new treatment is silibinin, an extract of the blessed milk thistle (Silybum marianum), which is sold commercially as Legalon by a German pharmaceutical company. Liver transplant was once considered the last resort for amatoxin poisoning, but that may no longer be necessary. [10]

The destroying angel is not the only mushroom that produces amatoxin, nor is amatoxin the only substance produced by fungi that is inimical to humans. The identification of fungal toxins and the characterization of their imputed symptoms are among the most empirical of the forensic sciences; the facts are based almost entirely on anecdote. The identification of the mushroom that caused the condition under evaluation is usually a matter of conjecture, since the victim has eaten the evidence. To add to the confusion, the alleged offending mushroom may have been consumed with a mixture of other wild foods and fungi gathered over a wide area in obscure nooks. The dearth of fungal knowledge in the medical community contributes to the uncertainty. Poison Control Centers (PCC) were established after World War II as clearinghouses for information about poisons, their antidotes, and treatment protocols in response to the proliferation of chemicals. [11] Over the ensuing years, mushroom poisonings accounted for only one half of one percent of all PCC reports (1 in 200), and of those reported, only 10 percent included any information about the mushroom. Prompted by the limited data, NAMA established a toxicology committee in 1985 and began to supplement the PCC data with a separate database using the input of experienced mycologists and mushroom aficionados. The result to date is a more comprehensive accounting, with fairly reliable identification of 80 percent of the mushrooms involved in poisonings. [12] This is a good start, but it has done little to assuage the belief of the general public that most if not all mushrooms are toadstools and that eating wild mushrooms is a fool’s errand, sometimes literally.

One example suffices to illustrate the irrational fear of amanita mushroom poisoning and the broader phenomenon of mycophobia. In 1991, the venerable French reference Petit Larousse Encyclopédie was recalled because the deadly amanita article lacked the appropriate symbol for poison. But this was not enough, since almost 200,000 copies had already been sold. Several hundred students were hired to visit 6,000 stores throughout Europe and Canada to affix stickers with the appropriate symbol on the offending pages and to append a notice on the cover that the book was a new edition. [13] History has impugned the mushroom as the source of the poison that dispatched any number of notables, among them Claudius, the fourth Roman Emperor. The perpetrator is alleged to have been his fourth wife Agrippina, who wanted her son Nero to succeed to the throne. The death is recounted by the philosopher Seneca the Younger in December 54 CE, only two months after the event. According to his account, it happened quite quickly, the onset of illness and death being separated by only about an hour. [14] The mushroom assassination of Claudius is almost certainly apocryphal, as deadly mushrooms are relatively slow to act; those that act rapidly generally cause gastrointestinal distress that is rarely fatal. Hyperbole is not out of the question either. One recent account attributes the disappearance of the Lost Colony of Roanoke to the relocation of the starving colonists to the island of Croatoan, where, gorging themselves on the mushroom bounty they found there, they died horrible deaths of grotesque contortions. [15]

References:

1. McIlvaine, C. One Thousand American Fungi, Dover Publications, New York, 1973, pp 2-5.

2. Roody, W. Mushrooms of West Virginia and the Central Appalachians, The University Press of Kentucky, Lexington, Kentucky, 2003, pp 62-63.

3. Lincoff, G. National Audubon Society Field Guide to North American Mushrooms, Alfred A. Knopf, New York, 1981, pp 551-552.

4. Russell, B. Field Guide to Wild Mushrooms of Pennsylvania and the Mid-Atlantic, The Penn State University Press, University Park, Pennsylvania, 2006, pp 67-69.

5. McKnight, K. and McKnight, V. Peterson Field Guide to Mushrooms of North America, Houghton Mifflin Company, Boston, 1987, pp 238-239, Plate 27.

6. Pacioni, G. (Lincoff, G., US editor) Guide to Mushrooms, Simon and Schuster, New York, 1981, pp 76-77.

7. Money, N. Mr. Bloomfield’s Orchard, Oxford University Press, Oxford, 2002, p 151.

8. Hallen, H. et al. “Gene family encoding the major toxins of lethal Amanita mushrooms” Proceedings of the National Academy of Sciences, 27 November 2007, Volume 104, Number 48, pp 19097-19101.

9. Kendrick, B. The Fifth Kingdom, Focus Publishing, Newburyport, Massachusetts, 2000, pp 319-321.

10. Beug, M. in Fungi Magazine, Volume 1, Number 2, Spring 2008. Beug is a Professor Emeritus at Evergreen State College and a member of the NAMA toxicology committee.

11. Wyckoff, A. “AAP Had First Hand in Poison Control Center” AAP News, September 2013 http://www.aappublications.org/content/34/10/45

12. Beug, M. et al. “Thirty-Plus Years of Mushroom Poisoning: Summary of the Approximately 2,000 Reports in the NAMA Case Registry” McIlvainea, Volume 16, Number 2, Fall 2006, pp 47-68.

13. Schaechter, E. In the Company of Mushrooms, Harvard University Press, Cambridge, Massachusetts, 1997, pp 210-211.

14. Marmion, V. and Wiedemann, T. “The Death of Claudius” Journal of the Royal Society of Medicine, Volume 95, May 2002, pp 260-261.

15. Spenser, S. “The First Case of Mass Mushroom Poisoning in the New World” Fungi Magazine, Volume 11, Number 4, Fall 2018, pp 30-33.