Thursday, November 23, 2017

Ascorbic acid

From Wikipedia, the free encyclopedia
L-Ascorbic acid
Other names: Vitamin C
EC Number: 200-066-2
Molar mass: 176.12 g·mol−1
Appearance: white or light yellow solid
Density: 1.65 g/cm³
Melting point: 190 to 192 °C (374 to 378 °F; 463 to 465 K), decomposes
Solubility in water: 330 g/L
Solubility in ethanol: 20 g/L
Solubility in glycerol: 10 g/L
Solubility in propylene glycol: 50 g/L
Solubility in other solvents: insoluble in diethyl ether, chloroform, benzene, petroleum ether, oils, and fats
Acidity (pKa): 4.10 (first), 11.6 (second)
ATC codes: A11GA01, G01AD03, S01XA15 (WHO)
Safety data sheet: JT Baker
NFPA 704: flammability 1 (must be pre-heated before ignition can occur; flash point over 93 °C), health 1 (exposure would cause irritation but only minor residual injury), reactivity 0 (normally stable, even under fire exposure, and not reactive with water), no special hazard
LD50 (median lethal dose): 11.9 g/kg (oral, rat)[1]
Except where otherwise noted, data are given for materials in their standard state (at 25 °C [77 °F], 100 kPa).
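As a quick sanity check, the molar mass listed above can be recomputed from ascorbic acid's molecular formula, C6H8O6, using standard atomic weights (the weights below are rounded IUPAC values):

```python
# Check the listed molar mass of ascorbic acid (C6H8O6)
# against standard atomic weights (rounded IUPAC values).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}
FORMULA = {"C": 6, "H": 8, "O": 6}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")  # ~176.12, matching the value above
```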

Ascorbic acid is a naturally occurring organic compound with antioxidant properties. It is a white solid, but impure samples can appear yellowish. It dissolves well in water to give mildly acidic solutions. Ascorbic acid is one form ("vitamer") of vitamin C. It was originally called L-hexuronic acid, but, when it was found to have vitamin C activity in animals ("vitamin C" being defined as a vitamin activity, not then a specific substance), the suggestion was made to rename it. The new name, ascorbic acid, is derived from a- (meaning "no") and scorbutus (scurvy), the disease caused by a deficiency of vitamin C. Because it is derived from glucose, many animals are able to produce it, but humans require it as part of their nutrition. Other vertebrates which lack the ability to produce ascorbic acid include some primates, guinea pigs, teleost fishes, bats, and some birds, all of which require it as a dietary micronutrient (that is, in vitamin form).[2]


From the middle of the 18th century, it was noted that lemon and lime juice could help prevent sailors from getting scurvy. At first, it was supposed that the acid properties were responsible for this benefit; however, it soon became clear that other dietary acids, such as vinegar, had no such benefits. In 1907, two Norwegian physicians reported an essential disease-preventing compound in foods that was distinct from the one that prevented beriberi. These physicians were investigating dietary-deficiency diseases using the new animal model of guinea pigs, which are susceptible to scurvy. The newly discovered food-factor was eventually called vitamin C.

From 1928 to 1932, the Hungarian research team led by Albert Szent-Györgyi, as well as that of the American researcher Charles Glen King, identified the antiscorbutic factor as a particular single chemical substance. Szent-Györgyi isolated the chemical hexuronic acid first from plants and later from animal adrenal glands. He suspected it to be the antiscorbutic factor but could not prove it without a biological assay. This assay was finally conducted at the University of Pittsburgh in the laboratory of King, which had been working on the problem for years, using guinea pigs. In late 1931, King's lab obtained adrenal hexuronic acid indirectly from Szent-Györgyi and, using their animal model, proved that it is vitamin C, by early 1932.

By then the supply of the compound from animal sources had been exhausted, but, later that year, Szent-Györgyi's group discovered that paprika pepper, a common spice in the Hungarian diet, is a rich source of hexuronic acid. He sent some of the now more readily available chemical to Walter Norman Haworth, a British sugar chemist.[3] In 1933, working with the then-Assistant Director of Research (later Sir) Edmund Hirst and their research teams, Haworth deduced the correct structure and optical-isomeric nature of vitamin C, and in 1934 reported the first synthesis of the vitamin.[4] In honor of the compound's antiscorbutic properties, Haworth and Szent-Györgyi proposed the new name "a-scorbic acid" for the compound. It was named L-ascorbic acid by Haworth and Szent-Györgyi when its structure was finally proven by synthesis.[5]

In 1937, the Nobel Prize in Chemistry was awarded to Haworth for his work in determining the structure of ascorbic acid (shared with Paul Karrer, who received his award for work on vitamins), and the Prize in Physiology or Medicine that year went to Albert Szent-Györgyi for his studies of the biological functions of L-ascorbic acid.

In the 1950s, the American physician Fred R. Klenner promoted vitamin C as a cure for many diseases, giving doses as high as tens of grams daily, orally and by injection. From 1967 on, Nobel laureate Linus Pauling recommended high doses of ascorbic acid to prevent colds and cancer. However, modern evidence does not support a role for high-dose vitamin C in the treatment of cancer or the prevention of the common cold in the general population.[6][7]


Canonical structures for the ascorbate anion
Ascorbic acid is classed as a reductone. The ascorbate anion is stabilized by electron delocalization, as shown above in terms of resonance between two canonical forms. For this reason, ascorbic acid is much more acidic than would be expected if the compound contained only isolated hydroxyl groups.
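A short speciation sketch makes this acidity concrete. Using the pKa values listed in the infobox (4.10 and 11.6), the fractions of the fully protonated acid, the monoanion, and the dianion follow from the standard diprotic acid equilibrium expressions; the pH of 7.4 below is simply an illustrative physiological value:

```python
# Fraction of each protonation state of ascorbic acid (H2A) at a given pH,
# using pKa1 = 4.10 and pKa2 = 11.6 from the infobox.
def speciation(pH, pKa1=4.10, pKa2=11.6):
    h = 10.0 ** -pH
    ka1, ka2 = 10.0 ** -pKa1, 10.0 ** -pKa2
    denom = h * h + ka1 * h + ka1 * ka2
    return {
        "H2A": h * h / denom,      # fully protonated ascorbic acid
        "HA-": ka1 * h / denom,    # ascorbate monoanion
        "A2-": ka1 * ka2 / denom,  # ascorbate dianion
    }

print(speciation(7.4))  # the monoanion dominates at physiological pH
```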

Antioxidant mechanism

Semidehydroascorbic acid radical

The ascorbate ion is the predominant species at typical biological pH values. It is a mild reducing agent and antioxidant. It is oxidized with loss of one electron to form a radical, and then with loss of a second electron to form dehydroascorbic acid. It typically reacts with oxidants of the reactive oxygen species, such as the hydroxyl radical. Such radicals are damaging to animals and plants at the molecular level because of their possible interaction with nucleic acids, proteins, and lipids. Sometimes these radicals initiate chain reactions. Ascorbate can terminate these chain radical reactions by electron transfer. Ascorbic acid is special because it can transfer a single electron, owing to the resonance-stabilized nature of its own radical ion, called semidehydroascorbate. The net reaction is:
RO• + C6H7O6− → RO− + C6H7O6• → ROH + C6H6O6[8]
The oxidized forms of ascorbate are relatively unreactive and do not cause cellular damage.
However, being a good electron donor, excess ascorbate in the presence of free metal ions can not only promote but also initiate free radical reactions, thus making it a potentially dangerous pro-oxidative compound in certain metabolic contexts.

On exposure to oxygen, ascorbic acid will undergo further oxidative decomposition to various products including diketogulonic acid, xylonic acid, threonic acid and oxalic acid.[9]


Nucleophilic attack of ascorbic enol on proton to give 1,3-diketone

Food chemistry

Ascorbic acid and its sodium, potassium, and calcium salts are commonly used as antioxidant food additives. These compounds are water-soluble and thus cannot protect fats from oxidation; for this purpose, the fat-soluble esters of ascorbic acid with long-chain fatty acids (ascorbyl palmitate or ascorbyl stearate) can be used as food antioxidants. Eighty percent of the world's supply of ascorbic acid is produced in China.[10]

The relevant European food additive E numbers are:
  1. E300 ascorbic acid (approved for use as a food additive in the EU[11] USA[12] and Australia and New Zealand)[13]
  2. E301 sodium ascorbate (approved for use as a food additive in the EU[11] USA[14] and Australia and New Zealand)[13]
  3. E302 calcium ascorbate (approved for use as a food additive in the EU[11] USA[12] and Australia and New Zealand)[13]
  4. E303 potassium ascorbate
  5. E304 fatty acid esters of ascorbic acid (i) ascorbyl palmitate (ii) ascorbyl stearate.
Ascorbic acid forms volatile compounds when mixed with glucose and amino acids at 90 °C.[15]

Ascorbic acid is a cofactor in tyrosine oxidation.[16]

Niche, non-food uses

  • Ascorbic acid is easily oxidized and so is used as a reductant in photographic developer solutions (among others) and as a preservative.
  • In fluorescence microscopy and related fluorescence-based techniques, ascorbic acid can be used as an antioxidant to increase fluorescent signal and chemically retard dye photobleaching.[17]
  • It is also commonly used to remove dissolved metal stains, such as iron, from fiberglass swimming pool surfaces.
  • In plastic manufacturing, ascorbic acid can be used to assemble molecular chains more quickly and with less waste than traditional synthesis methods.[18]
  • Heroin users are known to use ascorbic acid as a means to convert heroin base to a water-soluble salt so that it can be injected.[19]
  • Because it reacts with iodine, it is used to neutralize iodine tablets in water purification: it reacts with the iodine remaining in the treated water, removing its taste, color, and smell. For this reason it is often sold in sporting goods stores as a second set of tablets, Portable Aqua-Neutralizing Tablets, alongside the potassium iodide tablets.
  • Intravenous high-dose ascorbate is being used as a chemotherapeutic and biological-response-modifying agent,[20] and currently remains in clinical trials.[21]


Ascorbic acid is found in plants and animals, where it is produced from glucose.[22] Animals must either synthesize it or obtain it from the diet; otherwise, a lack of vitamin C may cause scurvy, which may eventually lead to death. Reptiles and older orders of birds make ascorbic acid in their kidneys. Recent orders of birds and most mammals make ascorbic acid in their liver, where the enzyme L-gulonolactone oxidase converts glucose to ascorbic acid.[22] Humans, other higher primates, guinea pigs, and most bats cannot make ascorbic acid and require it in their diet, because their gene for L-gulonolactone oxidase, the enzyme catalysing the last step of the biosynthesis, is highly mutated and non-functional. The synthesis and signalling properties of ascorbate are still under investigation.[23]

Animal ascorbic acid biosynthesis pathway

The biosynthesis of ascorbic acid starts with the formation of UDP-glucuronic acid. UDP-glucuronic acid is formed when UDP-glucose undergoes two oxidations catalyzed by the enzyme UDP-glucose 6-dehydrogenase. UDP-glucose 6-dehydrogenase uses the co-factor NAD+ as the electron acceptor. The transferase UDP-glucuronate pyrophosphorylase removes a UMP and glucuronokinase, with the cofactor ADP, removes the final phosphate leading to D-glucuronic acid. The aldehyde group of this is reduced to a primary alcohol using the enzyme glucuronate reductase and the cofactor NADPH, yielding L-gulonic acid. This is followed by lactone formation with the hydrolase gluconolactonase between the carbonyl on C1 and hydroxyl group on C4. L-Gulonolactone then reacts with oxygen, catalyzed by the enzyme L-gulonolactone oxidase (which is nonfunctional in humans and other Haplorrhini primates) and the cofactor FAD+. This reaction produces 2-oxogulonolactone, which spontaneously undergoes enolization to form ascorbic acid.[24]
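The steps above can be laid out as a simple substrate–enzyme–product table; the sketch below only transcribes the pathway as described in the text and checks that the steps chain together:

```python
# The animal ascorbic acid biosynthesis pathway, step by step, as
# (substrate, enzyme, product) tuples transcribed from the text above.
PATHWAY = [
    ("UDP-glucose", "UDP-glucose 6-dehydrogenase", "UDP-glucuronic acid"),
    ("UDP-glucuronic acid",
     "UDP-glucuronate pyrophosphorylase / glucuronokinase", "D-glucuronic acid"),
    ("D-glucuronic acid", "glucuronate reductase", "L-gulonic acid"),
    ("L-gulonic acid", "gluconolactonase", "L-gulonolactone"),
    ("L-gulonolactone", "L-gulonolactone oxidase", "2-oxogulonolactone"),
    ("2-oxogulonolactone", "(spontaneous enolization)", "ascorbic acid"),
]

# Print each step and confirm that every product feeds the next step.
for substrate, enzyme, product in PATHWAY:
    print(f"{substrate} --[{enzyme}]--> {product}")
```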

Plant ascorbic acid biosynthesis pathway

There are many different biosynthesis pathways for ascorbic acid in plants. Most of these pathways are derived from products of glycolysis and other pathways; for example, one pathway proceeds via the plant cell wall polymers.[23] The principal plant pathway appears to proceed via L-galactose. L-Galactose reacts with the enzyme L-galactose dehydrogenase, whereby the lactone ring opens and forms again, this time between the carbonyl on C1 and the hydroxyl group on C4, resulting in L-galactonolactone.[24] L-Galactonolactone then reacts with the mitochondrial flavoenzyme L-galactonolactone dehydrogenase[25] to produce ascorbic acid.[24] L-Ascorbic acid exerts negative feedback on L-galactose dehydrogenase in spinach.[26] Ascorbic acid efflux by the embryos of dicot plants is a well-established mechanism of iron reduction, and a step obligatory for iron uptake.[27]

Yeasts do not make L-ascorbic acid but rather its stereoisomer, erythorbic acid.[28]

Industrial preparation

Ascorbic acid is prepared in industry from glucose in a method based on the historical Reichstein process. In the first step of a five-step process, glucose is catalytically hydrogenated to sorbitol, which is then oxidized to sorbose by the microorganism Acetobacter suboxydans. Only one of the six hydroxy groups is oxidized by this enzymatic reaction. From this point, two routes are available. Treatment of the product with acetone in the presence of an acid catalyst converts four of the remaining hydroxyl groups to acetals. The unprotected hydroxyl group is oxidized to the carboxylic acid by reaction with the catalytic oxidant TEMPO (regenerated by sodium hypochlorite bleaching solution). Historically, industrial preparation via the Reichstein process used potassium permanganate as the bleaching solution. Acid-catalyzed hydrolysis of this product performs the dual function of removing the two acetal groups and ring-closing lactonization. This step yields ascorbic acid. Each of the five steps has a yield larger than 90%.[29]
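The closing claim, that each of the five steps yields more than 90%, bounds the overall yield from below, since yields of sequential steps multiply:

```python
# Lower bound on the overall yield when each of the 5 sequential steps
# exceeds 90% (per-step yields multiply across a linear synthesis).
per_step = 0.90
overall = per_step ** 5
print(f"overall yield > {overall:.1%}")  # > 59.0%
```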

A more biotechnological process, first developed in China in the 1960s, but further developed in the 1990s, bypasses the use of acetone-protecting groups. A second genetically modified microbe species, such as mutant Erwinia, among others, oxidises sorbose into 2-ketogluconic acid (2-KGA), which can then undergo ring-closing lactonization via dehydration. This method is used in the predominant process used by the ascorbic acid industry in China, which supplies 80% of world's ascorbic acid.[30] American and Chinese researchers are competing to engineer a mutant that can carry out a one-pot fermentation directly from glucose to 2-KGA, bypassing both the need for a second fermentation and the need to reduce glucose to sorbitol.[31]

There exists a D-ascorbic acid, which does not occur in nature but can be synthesized artificially. To be specific, L-ascorbate is known to participate in many specific enzyme reactions that require the correct enantiomer (L-ascorbate and not D-ascorbate). L-Ascorbic acid has a specific rotation of [α]D20 = +23°.[32]
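Specific rotation relates the observed polarimeter reading to path length and concentration via α_obs = [α] × l × c (l in decimetres, c in g/mL, the usual polarimetry convention). A small sketch, in which the cell length and concentration are made-up illustrative values:

```python
# Observed optical rotation from a specific rotation: alpha = [a] * l * c,
# with path length l in decimetres and concentration c in g/mL.
def observed_rotation(specific_rotation, path_dm, conc_g_per_ml):
    return specific_rotation * path_dm * conc_g_per_ml

# [a]D20 = +23 deg for L-ascorbic acid (from the text); the 1 dm cell and
# 0.10 g/mL solution are illustrative assumptions.
print(observed_rotation(23.0, 1.0, 0.10), "degrees")
```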

The outdated but historically important industrial synthesis of ascorbic acid from glucose via the Reichstein process.


The traditional way to analyze the ascorbic acid content is the process of titration with an oxidizing agent, and several procedures have been developed, mainly relying on iodometry. Iodine is used in the presence of a starch indicator. Iodine is reduced by ascorbic acid, and, when all the ascorbic acid has reacted, the iodine is then in excess, forming a blue-black complex with the starch indicator. This indicates the end-point of the titration. As an alternative, ascorbic acid can be treated with iodine in excess, followed by back titration with sodium thiosulfate using starch as an indicator.[33] The preceding iodometric method has been revised to exploit the reaction of ascorbic acid with iodate and iodide in acid solution. Electrolyzing the solution of potassium iodide produces iodine, which reacts with ascorbic acid. The end of the process is determined by potentiometric titration in a manner similar to Karl Fischer titration. The amount of ascorbic acid can be calculated by Faraday's law.
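The direct iodometric titration reduces to 1:1 stoichiometry between I2 and ascorbic acid, and the coulometric variant to Faraday's law with a two-electron oxidation. A minimal sketch of both calculations; the titrant concentration, volume, current, and time used are illustrative assumptions, not values from the text:

```python
# Iodometric titration: I2 reacts 1:1 with ascorbic acid, so moles of
# vitamin C equal moles of I2 consumed at the end-point.
M_ASCORBIC = 176.12  # g/mol, molar mass from the infobox

def vitamin_c_mg_from_titration(titrant_molarity, titrant_volume_ml):
    """mg of ascorbic acid from an I2 titration end-point (1:1 ratio)."""
    moles_i2 = titrant_molarity * titrant_volume_ml / 1000.0
    return moles_i2 * M_ASCORBIC * 1000.0

# Coulometric variant (Faraday's law): ascorbic acid is oxidized by
# two electrons, so n = Q / (2 F).
FARADAY = 96485.0  # C/mol

def vitamin_c_mg_from_charge(current_a, time_s):
    """mg of ascorbic acid from the charge passed (2-electron oxidation)."""
    moles = (current_a * time_s) / (2.0 * FARADAY)
    return moles * M_ASCORBIC * 1000.0

# Illustrative numbers only: 12.5 mL of 0.005 M iodine at the end-point.
print(f"{vitamin_c_mg_from_titration(0.005, 12.5):.2f} mg")  # 11.01 mg
```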

An uncommon oxidising agent is N-bromosuccinimide (NBS). In this titration, the NBS oxidizes the ascorbic acid in the presence of potassium iodide and starch. When the NBS is in excess (i.e., the reaction is complete), the NBS liberates the iodine from the potassium iodide, which then forms the blue-black complex with starch, indicating the end-point of the titration.

Friday, November 17, 2017

Ethnobiology


From Wikipedia, the free encyclopedia

Ethnobiology is the scientific study of the way living things are treated or used by different human cultures. It studies the dynamic relationships between people, biota, and environments, from the distant past to the immediate present.[1]

"People-biota-environment" interactions around the world are documented and studied through time, across cultures, and across disciplines in a search for valid, reliable answers to two 'defining' questions: "How and in what ways do human societies use nature, and how and in what ways do human societies view nature?"[2]


Beginnings (15th century–19th century)

16th-century English map of the world showing extent of western geographic knowledge in 1599

Naturalists have been interested in local biological knowledge since the time Europeans started colonising the world, from the 15th century onwards. Paul Sillitoe wrote that:[3]
Europeans not only sought to understand the new regions they intruded into but also were on the look-out for resources that they might profitably exploit, engaging in practices that today we should consider tantamount to biopiracy. Many new crops .. entered into Europe during this period, such as the potato, tomato, pumpkin, maize, and tobacco.[3] (Page 121)
Local biological knowledge, collected and sampled over these early centuries, significantly informed the early development of modern biology.[3]

Phase I (1900s–1940s)

Ethnobiology itself, as a distinctive practice, only emerged during the 20th century as part of the records then being made about other peoples, and other cultures. As a practice, it was nearly always ancillary to other pursuits when documenting others' languages, folklore, and natural resource use. Roy Ellen commented that:
At its earliest and most rudimentary, this comprised listing the names and uses of plants and animals in native non-Western or 'traditional' populations often in the context of salvage ethnography ..[ie] ethno-biology as the descriptive biological knowledge of 'primitive' peoples.[4]
This 'first phase' in the development of ethnobiology as a practice has been described as still having an essentially utilitarian purpose, often focusing on identifying those 'native' plants, animals, and technologies of some potential use and value within increasingly dominant western economic systems.[4][5]

Phase II (1950s–1970s)

Arising out of practices in Phase I (above) came a 'second phase' in the development of ethnobiology, with researchers now striving to better document and better understand how other peoples themselves "conceptualise and categorise" the natural world around them.[4] In Sillitoe's words:
By the mid-20th century .. utilitarian-focussed studies started to give way to more cognitively framed ones, notably studies that centred on elucidating classificatory schemes.[3] (Page 122)
Some Mangyan (who count the Hanunóo among their members) men, on Mindoro island, Philippines, where Harold Conklin did his ethnobiological work
This 'second' phase is marked:[4]

Present (1980s–2000s)

By the turn of the 21st century ethnobiological practices, research, and findings have had a significant impact and influence across a number of fields of biological inquiry including ecology,[10] conservation biology,[11][12] development studies,[13] and political ecology.[14]

The Society of Ethnobiology advises on its web page:
Ethnobiology is a rapidly growing field of research, gaining professional, student, and public interest .. internationally
Ethnobiology has emerged from its place as an ancillary practice in the shadows of other core pursuits to become a field of inquiry and research in its own right: taught within many tertiary institutions and educational programmes around the world,[4] with its own methods manuals,[15] its own readers,[16] and its own textbooks.[17]

Subjects of inquiry


All societies make use of the biological world in which they are situated, but there are wide differences in use, informed by perceived need, available technology, and the culture's sense of morality and sustainability.[citation needed] Ethnobiologists investigate what lifeforms are used for what purposes, the particular techniques of use, the reasons for these choices, and symbolic and spiritual implications of them.


Different societies divide the living world up in different ways. Ethnobiologists attempt to record the words used in particular cultures for living things, from the most specific terms (analogous to species names in Linnean biology) to more general terms (such as 'tree' and, even more generally, 'plant'). They also try to understand the overall structure or hierarchy of the classification system (if there is one; there is ongoing debate as to whether there must always be an implied hierarchy).[18]

Cosmological, moral and spiritual significance

Societies invest themselves and their world with meaning partly through their answers to questions like "how did the world happen?", "how and why did people come to be?", "what are proper practices, and why?", and "what realities exist beyond or behind our physical experience?" Understanding these elements of a society's perspective is important to cultural research in general, and ethnobiologists investigate how a society's view of the natural world informs and is informed by them.

Traditional ecological knowledge

In order to live effectively in a given place, a people needs to understand the particulars of their environment, and many traditional societies have complex and subtle understandings of the places in which they live.[citation needed] Ethnobiologists seek to share in these understandings, subject to ethical concerns regarding intellectual property and cultural appropriation.

Cross-cultural ethnobiology

In cross-cultural ethnobiology research, two or more communities participate simultaneously. This enables the researcher to compare how a bio-resource is used by different communities.[19]



Ethnobotany investigates the relationship between human societies and plants: how humans use plants – as food, technology, medicine, and in ritual contexts; how they view and understand them; and their symbolic and spiritual role in a culture.


The subfield ethnozoology focuses on the relationship between animals and humans throughout human history. It studies human practices such as hunting, fishing and animal husbandry in space and time, and human perspectives about animals such as their place in the moral and spiritual realms.[citation needed]


Ethnoecology refers to an increasingly dominant 'ethnobiological' research paradigm focused, primarily, on documenting, describing, and understanding how other peoples perceive, manage, and use whole ecosystems.

Other disciplines

Studies and writings within ethnobiology draw upon research from fields including archaeology, geography, linguistics, systematics, population biology, ecology, cultural anthropology, ethnography, pharmacology, nutrition, conservation, and sustainable development.[1]


Through much of the history of ethnobiology, its practitioners were primarily from dominant cultures, and the benefit of their work often accrued to the dominant culture, with little control or benefit invested in the indigenous peoples whose practice and knowledge they recorded.

Just as many of those indigenous societies work to assert legitimate control over physical resources such as traditional lands or artistic and ritual objects, many work to assert legitimate control over their intellectual property.

In an age when the potential exists for large profits from the discovery of, for example, new food crops or medicinal plants, modern ethnobiologists must consider intellectual property rights, the need for informed consent, the potential for harm to informants, and their "debt to the societies in which they work".[20]

Furthermore, these questions must be considered not only in light of western industrialized nations' common understanding of ethics and law, but also in light of the ethical and legal standards of the societies from which the ethnobiologist draws information.[21]

Thursday, November 16, 2017

Holocene extinction

From Wikipedia, the free encyclopedia
Marine extinction intensity during the Phanerozoic, in millions of years ago: the percentage of marine animal extinction at the genus level through the five mass extinctions

The Holocene extinction, otherwise referred to as the sixth extinction or Anthropocene extinction, is the ongoing extinction event of species during the present Holocene epoch, mainly due to human activity. The large number of extinctions spans numerous families of plants and animals, including mammals, birds, amphibians, reptiles and arthropods. With widespread degradation of highly biodiverse habitats such as coral reefs and rainforest, as well as other areas, the vast majority of these extinctions are thought to be undocumented. The current rate of extinction of species is estimated at 100 to 1,000 times higher than natural background rates.

The Holocene extinction includes the disappearance of large land animals known as megafauna, starting at the end of the last Ice Age. Megafauna outside of the African continent, which did not evolve alongside humans, proved highly sensitive to the introduction of new predation, and many died out shortly after early humans began spreading and hunting across the Earth (additionally, many African species have also gone extinct in the Holocene). These extinctions, occurring near the Pleistocene–Holocene boundary, are sometimes referred to as the Quaternary extinction event.

The arrival of humans on different continents coincides with megafaunal extinction. The most popular theory is that human overhunting of species added to existing stress conditions. Although there is debate regarding how much human predation affected their decline, certain population declines have been directly correlated with human activity, such as the extinction events of New Zealand and Hawaii. Aside from humans, climate change may have been a driving factor in the megafaunal extinctions, especially at the end of the Pleistocene.

The ecology of humanity has been noted as being that of an unprecedented "global superpredator" that regularly preys on the adults of other apex predators and has worldwide effects on food webs. Extinctions of species have occurred on every land mass and ocean, with many famous examples within Africa, Asia, Europe, Australia, North and South America, and on smaller islands. Overall, the Holocene extinction can be characterized by the human impact on the environment. The Holocene extinction continues into the 21st century, with meat consumption, overfishing, ocean acidification and the amphibian crisis being a few broader examples of an almost universal, cosmopolitan decline in biodiversity. Human overpopulation (and continued population growth) along with profligate consumption are considered to be the primary drivers of this rapid decline.[1][2]


The Holocene extinction is also known as the "sixth extinction", due to its possibly being the sixth mass extinction event, after the Ordovician–Silurian extinction events, the Late Devonian extinction, the Permian–Triassic extinction event, the Triassic–Jurassic extinction event, and the Cretaceous–Paleogene extinction event.[3][4][5][6][1] There is no general agreement on where the Holocene, or anthropogenic, extinction begins and the Quaternary extinction event, which includes climate change resulting in the end of the last ice age, ends, or whether they should be considered separate events at all.[7][8] Some have suggested that anthropogenic extinctions may have begun as early as when the first modern humans spread out of Africa between 100,000 and 200,000 years ago, which is supported by rapid megafaunal extinction following recent human colonisation in Australia, New Zealand and Madagascar,[3] in a similar way that any large, adaptable predator moving into a new ecosystem would. In many cases, it is suggested even minimal hunting pressure was enough to wipe out large fauna, particularly on geographically isolated islands.[9][10] Only during the most recent parts of the extinction have plants also suffered large losses.[11]

In The Future of Life (2002), E.O. Wilson of Harvard calculated that, if the current rate of human disruption of the biosphere continues, one-half of Earth's higher lifeforms will be extinct by 2100. A 1998 poll conducted by the American Museum of Natural History found that seventy percent of biologists acknowledged the existence of the anthropogenic extinction.[12] Numerous scientific studies, such as a 2004 report published in Nature[13] and the IUCN's annual Red List of threatened species, have since reinforced this conviction. At present, the rate of extinction of species is estimated at 100 to 1,000 times higher than the "base" or historically typical rate of extinction (in terms of the natural evolution of the planet),[14][15][16] and therefore 10 to 100 times higher than in any of the previous mass extinctions in the history of Earth. It is also the only known mass extinction of plants.[citation needed] One scientist estimates the current extinction rate may be 10,000 times the background extinction rate, though most scientists predict a much lower rate than this outlying estimate.[17] Stuart Pimm stated that "the current rate of species extinction is about 100 times the natural rate" for plants.[18] Mass extinctions are characterized by the loss of at least 75% of species within a geologically short period of time.[19][20][21]
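To see the scale such multipliers imply, one can plug the quoted "100 to 1,000 times" figure into an illustrative background rate. Both inputs below, a background rate of 1 extinction per million species-years and 10 million extant species, are common illustrative assumptions, not figures from the text:

```python
# Rough scale implied by a "100 to 1,000 times background" extinction rate.
# Assumptions (illustrative only): background rate of 1 extinction per
# million species-years (E/MSY) and 10 million extant species.
BACKGROUND_E_PER_MSY = 1.0
N_SPECIES_MILLIONS = 10.0

def extinctions_per_century(multiplier):
    # background * multiplier gives extinctions per million species-years;
    # scale by species count (in millions) and by 100 years.
    per_year = BACKGROUND_E_PER_MSY * multiplier * N_SPECIES_MILLIONS
    return per_year * 100

print(extinctions_per_century(100), extinctions_per_century(1000))
```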

In a pair of studies published in 2015, extrapolation from observed extinction of Hawaiian snails led to the conclusion that 7% of all species on Earth may have been lost already.[22][23]

While there is widespread consensus in the scientific community that human activity is accelerating the extinction of many animal species through the destruction of wild lands, the consumption of animals as resources or luxuries, and the persecution of species that humans view as threats or competitors,[24] some contend that this biotic destruction has yet to rise to the level of the previous five mass extinctions. Stuart Pimm, for example, asserts that the sixth mass extinction "is something that hasn’t happened yet – we are on the edge of it."[25]


A diagram showing the ecological processes of coral reefs before and after the Anthropocene

The abundance of species extinctions considered anthropogenic, or due to human activity, has sometimes (especially when referring to hypothesized future events) been collectively called the "Anthropocene extinction".[26][27][24] "Anthropocene" is a term introduced in 2000. It is now posited by some that a new geological epoch has begun, characterised by the most abrupt and widespread extinction of species since the Cretaceous–Paleogene extinction event 66 million years ago.[3]

The term "anthropocene" is being used more frequently by scientists, and some commentators may refer to the current and projected future extinctions as part of a longer Holocene extinction.[28][29] The Holocene–Anthropocene boundary is contested, with some commentators asserting significant human influence on climate for much of what is normally regarded as the Holocene Epoch.[30] Other commentators place the Holocene–Anthropocene boundary at the industrial revolution while also saying that, "[f]ormal adoption of this term in the near future will largely depend on its utility, particularly to earth scientists working on late Holocene successions."

It has been suggested that human activity has made the period following the mid-20th century different enough from the rest of the Holocene to constitute a new geological epoch, known as the Anthropocene,[31] which was considered for incorporation into the timeline of Earth's history by the International Commission on Stratigraphy in 2016.[32][33] To treat the Holocene as an extinction event, scientists must determine exactly when anthropogenic greenhouse gas emissions began to measurably alter natural atmospheric levels on a global scale, and when these alterations caused changes to the global climate. Employing chemical proxies from Antarctic ice cores, researchers have estimated the fluctuations of carbon dioxide (CO2) and methane (CH4) in the Earth's atmosphere during the late Pleistocene and Holocene epochs.[30] Based on these estimates, the peak of the Anthropocene is generally placed within the previous two centuries, typically beginning with the Industrial Revolution, when greenhouse gas levels recorded by contemporary methods were at their highest.[34][35]


Competition by humans

The percent of megafauna on different land masses over time, with the arrival of humans indicated.

The Holocene extinction is mainly caused by human activity.[4][5][24][6][1] Extinction of animals, plants, and other organisms caused by human actions may go as far back as the late Pleistocene, over 12,000 years ago.[24] There is a correlation between megafaunal extinction and the arrival of humans; human overpopulation and population growth, along with overconsumption and consumption growth, most prominently in the past two centuries, are regarded as among the underlying causes of extinction.[4][36][1][2][37]

Megafauna were once found on every continent of the world and on large islands such as New Zealand and Madagascar, but are now almost exclusively found on the continent of Africa; in notable contrast, Australia and the islands previously mentioned experienced population crashes and trophic cascades shortly after the earliest human settlers arrived.[9][10] It has been suggested that the African megafauna survived because they evolved alongside humans.[3] The timing of South American megafaunal extinction appears to precede human arrival, although it has been suggested that human activity at the time impacted the global climate enough to cause such an extinction.[3]

It has been noted, in the face of such evidence, that humans are unique in ecology as an unprecedented 'global superpredator', regularly preying on large numbers of fully grown terrestrial and marine apex predators, and with a great deal of influence over food webs and climatic systems worldwide.[38] Although significant debate exists as to how much human predation and indirect effects contributed to prehistoric extinctions, certain population crashes have been directly correlated with human arrival.[8][3][24]


Human civilization flourished in accordance with the efficiency and intensification of prevailing subsistence systems.[39] Local communities that adopted more subsistence strategies grew in number to meet the competitive pressures of land utilization.[30][39] Competition during the Holocene therefore developed on the basis of agriculture, and the growth of agriculture in turn introduced new drivers of climate change, pollution, and ecological change.[40]
The dodo, a flightless bird native to Mauritius, became extinct during the mid- to late 17th century due to habitat destruction and predation by introduced mammals.[41]

Habitat destruction by humans includes oceanic devastation, such as through overfishing and contamination, and the modification and destruction of vast tracts of land and river systems around the world to meet solely human-centered ends, replacing the original local ecosystems: 13 percent of Earth's ice-free land surface is now used as row-crop agricultural sites, 26 percent as pastures, and 4 percent as urban-industrial areas.[42][43] Other, related human causes of the extinction event include deforestation, hunting, pollution,[44] the introduction in various regions of non-native species, and the widespread transmission of infectious diseases spread through livestock and crops.[15]

Recent investigations of hunter-gatherer landscape burning have major implications for the current debate about the timing of the Anthropocene and the role that humans may have played in the production of greenhouse gases prior to the Industrial Revolution.[39] Studies of early hunter-gatherers raise questions about the current use of population size or density as a proxy for the amount of land clearance and anthropogenic burning that took place in pre-industrial times.[45][46] Scientists have questioned the correlation between population size and early territorial alterations.[46] Ruddiman and Ellis' 2009 research paper makes the case that early farmers used more land per capita than growers later in the Holocene, who intensified their labor to produce more food per unit of area (and thus per laborer), arguing that rice agriculture practiced thousands of years ago by relatively small populations created significant environmental impacts through large-scale deforestation.[39]

While a number of human-derived factors are recognized as potentially contributing to rising atmospheric concentrations of CH4 and CO2, deforestation and territorial clearance practices associated with agricultural development may be contributing most to these concentrations globally.[34][47][39] Scientists employing a variety of archaeological and paleoecological data argue that the processes contributing to substantial human modification of the environment began many thousands of years ago on a global scale, and thus did not originate as recently as the Industrial Revolution. In a hypothesis that has gained popularity, palaeoclimatologist William Ruddiman proposed in 2003 that, in the early Holocene 11,000 years ago, atmospheric carbon dioxide and methane levels fluctuated in a pattern different from that of the Pleistocene epoch before it.[30][45][47] He argued that the significant decline of CO2 levels during the last ice age of the Pleistocene inversely correlates with the Holocene, in which there were dramatic increases of CO2 around 8,000 years ago and of CH4 levels 3,000 years after that.[47] This correlation implies that the cause of this rise of greenhouse gases in the atmosphere was the growth of human agriculture during the Holocene, such as the anthropogenic expansion of land use and irrigation.[30][47]


Human arrival in the Caribbean around 6,000 years ago is correlated with the extinction of many species.[48] Examples include many genera of ground and arboreal sloths across all the islands, which were generally smaller than those found on the South American continent: Megalocnus, the largest genus, at up to 90 kilograms (200 lb); Acratocnus, medium-sized relatives of modern two-toed sloths endemic to Cuba; Imagocnus, also of Cuba; Neocnus; and many others.[49]

Recent research, based on archaeological and paleontological digs on 70 different Pacific islands, has shown that numerous species became extinct as people moved across the Pacific, starting 30,000 years ago in the Bismarck Archipelago and Solomon Islands.[50] It is currently estimated that, among the bird species of the Pacific, some 2,000 species have gone extinct since the arrival of humans, representing a 20% drop in the biodiversity of birds worldwide.[51]

Hawaii
The first settlers are thought to have arrived in the islands between 300 and 800 CE, with European arrival in the 16th century. Hawaii is notable for its endemism of plants, birds, insects, mollusks and fish; 30% of its organisms are endemic. Many of its species are endangered or have gone extinct, primarily due to accidentally introduced species and livestock grazing. Over 40% of its bird species have gone extinct, and it is the location of 75% of extinctions in the United States.[52] Extinction has increased in Hawaii over the last 200 years and is relatively well documented, with extinctions among native snails used as estimates for global extinction rates.[22]
Genyornis newtoni, a 2-metre (7 ft) tall flightless bird. Evidence of egg cooking in this species is the first evidence of megafaunal hunting by humans in Australia.[53]

Australia was once home to a large assemblage of megafauna, with many parallels to those found on the African continent today. Australia's fauna is characterised primarily by marsupial mammals, and many reptiles and birds, all of which existed as giant forms until recently. Humans arrived on the continent very early, about 50,000 years ago.[3] The extent to which human arrival contributed is controversial; climatic drying of Australia 40,000–60,000 years ago was an unlikely cause, as it was less severe in speed or magnitude than previous regional climate change that failed to kill off megafauna. Extinctions in Australia have continued from original settlement until today in both plants and animals, and many more animals and plants have declined or are endangered.[54]

Due to the older timeframe and the soil chemistry on the continent, very little subfossil preservation evidence exists relative to elsewhere.[55] However, the continent-wide extinction of all genera weighing over 100 kilograms, and of six of the seven genera weighing between 45 and 100 kilograms, around 46,400 years ago (4,000 years after human arrival),[56] together with the fact that megafauna survived until a later date on the island of Tasmania following the establishment of a land bridge,[57] suggests direct hunting or anthropogenic ecosystem disruption such as fire-stick farming as likely causes. The first evidence of direct human predation leading to extinction in Australia was published in 2016.[53]
Radiocarbon dating of multiple subfossil specimens shows that now extinct giant lemurs were present in Madagascar until after human arrival.

Within 500 years of the arrival of humans between 2,500 and 2,000 years ago, nearly all of Madagascar's distinct, endemic and geographically isolated megafauna became extinct.[58] The largest animals, of more than 150 kilograms (330 lb), went extinct very shortly after the first human arrival, while large and medium-sized species died out after prolonged hunting pressure from an expanding human population that moved into more remote regions of the island around 1,000 years ago. Smaller fauna experienced initial increases due to decreased competition, and then subsequent declines over the last 500 years.[10] All fauna weighing over 10 kilograms (22 lb) died out. The primary reasons for this are human hunting and habitat loss from early aridification, both of which persist and threaten Madagascar's remaining taxa today.[citation needed]

The eight or more species of elephant birds, giant flightless ratites in the genera Aepyornis and Mullerornis, went extinct from over-hunting,[59] as did 17 species of lemur, known as giant subfossil lemurs. Some of these lemurs typically weighed over 150 kilograms (330 lb), and fossils have provided evidence of human butchery on many species.[60]
New Zealand
New Zealand is characterised by its geographic isolation and island biogeography, having been separated from mainland Australia for 80 million years. It was the last large land mass to be colonised by humans. The arrival of Polynesian settlers circa the 12th century resulted in the extinction of all of the islands' megafaunal birds within several hundred years.[61] The last moa, large flightless ratites, became extinct within 200 years of the arrival of human settlers.[9] The Polynesians also introduced the Polynesian rat. This may have put some pressure on other birds, but at the time of early European contact (18th century) and colonisation (19th century) the bird life was prolific. With them, the Europeans brought ship rats, possums, cats and mustelids, which decimated native bird life, some of which had evolved flightlessness and ground-nesting habits, while others had no defensive behavior as a result of having no extant endemic mammalian predators. The kakapo, the world's biggest parrot, which is flightless, now exists only in managed breeding sanctuaries, and New Zealand's national emblem, the kiwi, is on the endangered bird list.[61]


Reconstructed woolly mammoth bone hut, based on finds in Mezhyrich.
The passenger pigeon was a species of pigeon endemic to North America. It experienced a rapid decline in the late 1800s due to intense hunting after the arrival of Europeans. The last wild bird is thought to have been shot in 1901.

There has been a debate as to the extent to which the disappearance of megafauna at the end of the last glacial period can be attributed to human activities by hunting, or even by slaughter[62] of prey populations. Discoveries at Monte Verde in South America and at Meadowcroft Rock Shelter in Pennsylvania have caused a controversy[63] regarding the Clovis culture. There likely would have been human settlements prior to the Clovis culture, and the history of humans in the Americas may extend back many thousands of years before it.[63] The amount of correlation between human arrival and megafauna extinction is still being debated: for example, on Wrangel Island in Siberia the extinction of dwarf woolly mammoths (approximately 2000 BCE)[64] did not coincide with the arrival of humans, nor did megafaunal mass extinction on the South American continent, although it has been suggested that climate changes induced by anthropogenic effects elsewhere in the world may have contributed.[3]

Comparisons are sometimes made between recent extinctions (approximately since the industrial revolution) and the Pleistocene extinction near the end of the last glacial period. The latter is exemplified by the extinction of large herbivores such as the woolly mammoth and the carnivores that preyed on them. Humans of this era actively hunted the mammoth and the mastodon[65] but it is not known if this hunting was the cause of the subsequent massive ecological changes, widespread extinctions and climate changes.[7][8]

The ecosystems encountered by the first Americans had not been exposed to human interaction, and may have been far less resilient to human-made changes than the ecosystems encountered by industrial-era humans. Therefore, the actions of the Clovis people, despite seeming insignificant by today's standards, could indeed have had a profound effect on ecosystems and wildlife that were entirely unused to human influence.[3]


Africa experienced the smallest decline in megafauna compared to the other continents, presumably because Afroeurasian megafauna evolved alongside humans and thus developed a healthy fear of them, unlike the comparatively tame animals of other continents.[66] Unlike on other continents, the megafauna of Eurasia went extinct over a relatively long period of time, possibly due to climate fluctuations fragmenting and decreasing populations, leaving them vulnerable to over-exploitation, as with the steppe bison (Bison priscus).[67] The warming of the Arctic region caused the rapid decline of grasslands, which had a negative effect on the grazing megafauna of Eurasia. Most of what was once mammoth steppe has been converted to mire, rendering the environment incapable of supporting megafauna, notably the woolly mammoth.[68]

Climate change

Top: arid ice age climate. Middle: Atlantic period, warm and wet. Bottom: potential vegetation in the current climate, absent human effects such as agriculture.[69]

One of the main theories for the extinction is climate change, which suggests that a change in climate near the end of the late Pleistocene stressed the megafauna to the point of extinction.[28][70] Some scientists favor abrupt climate change as the catalyst for the extinction of the megafauna at the end of the Pleistocene, but many believe increased hunting by early modern humans also played a part, and others suggest that the two interacted.[3][71][72] However, the annual mean temperature of the current interglacial period for the last 10,000 years is no higher than that of previous interglacial periods, yet some of the same megafauna survived similar temperature increases.[73][74][75][76][77][78] In the Americas, a controversial explanation for the shift in climate is presented under the Younger Dryas impact hypothesis, which states that the impact of comets cooled global temperatures.[79][80][81]

Megafaunal extinction

Megafauna play a significant role in the lateral transport of mineral nutrients in an ecosystem, tending to translocate them from areas of high to those of lower abundance. They do so by their movement between the time they consume the nutrient and the time they release it through elimination (or, to a much lesser extent, through decomposition after death).[82] In South America's Amazon Basin, it is estimated that such lateral diffusion was reduced over 98% following the megafaunal extinctions that occurred roughly 12,500 years ago.[83][84] Given that phosphorus availability is thought to limit productivity in much of the region, the decrease in its transport from the western part of the basin and from floodplains (both of which derive their supply from the uplift of the Andes) to other areas is thought to have significantly impacted the region's ecology, and the effects may not yet have reached their limits.[84] The extinction of the mammoths allowed grasslands they had maintained through grazing habits to become birch forests.[7] The new forest and the resulting forest fires may have induced climate change.[7] Such disappearances might be the result of the proliferation of modern humans.[24]

Large populations of megaherbivores have the potential to contribute greatly to the atmospheric concentration of methane, which is an important greenhouse gas. Modern ruminant herbivores produce methane as a byproduct of foregut fermentation in digestion, and release it through belching or flatulence. Today, around 20% of annual methane emissions come from livestock methane release. In the Mesozoic, it has been estimated that sauropods could have emitted 520 million tons of methane to the atmosphere annually,[85] contributing to the warmer climate of the time (up to 10 °C warmer than at present).[85][86] This large emission follows from the enormous estimated biomass of sauropods, and because methane production of individual herbivores is believed to be almost proportional to their mass.[85]
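The claim above, that an individual herbivore's methane production is almost proportional to its body mass, means aggregate emissions can be sketched from total biomass alone. A minimal back-of-envelope illustration follows; the emission factor and the population figures are hypothetical placeholders chosen for illustration, not values taken from the cited studies.

```python
def herd_methane_megatons_per_year(individual_mass_kg, population,
                                   kg_ch4_per_kg_mass_per_year=0.1):
    """Rough annual CH4 output (megatons) for a herbivore population,
    assuming emissions scale linearly with body mass.
    The default emission factor is a hypothetical placeholder."""
    total_biomass_kg = individual_mass_kg * population
    kg_ch4_per_year = total_biomass_kg * kg_ch4_per_kg_mass_per_year
    return kg_ch4_per_year / 1e9  # 1 megaton = 1e9 kg

# e.g. an assumed 30 million bison at ~600 kg each:
print(herd_methane_megatons_per_year(600, 30_000_000))  # 1.8 Mt/year
```

Under these assumed numbers the result lands at the same order of magnitude as the Great Plains bison estimate discussed in this section, which is why the loss of whole megafaunal populations is a plausible lever on atmospheric methane.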

Recent studies have indicated that the extinction of megafaunal herbivores may have caused a reduction in atmospheric methane. This hypothesis is relatively new.[87] One study examined the methane emissions from the bison that occupied the Great Plains of North America before contact with European settlers. The study estimated that the removal of the bison caused a decrease of as much as 2.2 million tons per year.[88] Another study examined the change in the methane concentration in the atmosphere at the end of the Pleistocene epoch after the extinction of megafauna in the Americas. After early humans migrated to the Americas about 13,000 BP, their hunting and other associated ecological impacts led to the extinction of many megafaunal species there. Calculations suggest that this extinction decreased methane production by about 9.6 million tons per year. This suggests that the absence of megafaunal methane emissions may have contributed to the abrupt climatic cooling at the onset of the Younger Dryas.[87] The decrease in atmospheric methane that occurred at that time, as recorded in ice cores, was 2–4 times more rapid than any other decrease in the last half million years, suggesting that an unusual mechanism was at work.[87]


The hyperdisease hypothesis, proposed by Ross MacPhee in 1997, states that the megafaunal die-off was due to an indirect transmission of diseases by newly arriving aboriginal humans.[89][90][91] According to MacPhee, aboriginals, or animals travelling with them such as domestic dogs or livestock, introduced one or more highly virulent diseases into new environments whose native populations had no immunity to them, eventually leading to their extinction. K-selected animals, such as the now-extinct megafauna, are especially vulnerable to diseases, as opposed to r-selected animals, which have a shorter gestation period and a higher population size. Humans are thought to be the sole cause because other, earlier migrations of animals into North America from Eurasia did not cause extinctions.[89]

The scientific community sees many problems with this theory, as such a disease would have to meet several criteria: it would have to be able to sustain itself in an environment with no hosts; it would have to have a high infection rate; and it would have to be extremely lethal, with a mortality rate of 50–75%. A disease must be very virulent to kill off all the individuals in a genus or species, and even a disease as virulent as West Nile virus is unlikely to have caused extinction.[92]

However, diseases have been the cause for some extinctions. The introduction of avian malaria and avipoxvirus, for example, have had a negative impact on the endemic birds of Hawaii.[93]


The golden toad of Costa Rica, extinct since around 1989. Its disappearance has been attributed to a confluence of several factors, including El Niño warming, fungus, habitat loss and the introduction of invasive species.[94]
There are roughly 880 mountain gorillas remaining in existence. 60% of primate species face an anthropogenically driven extinction crisis and 75% have declining populations.[95]
Angalifu, a male northern white rhinoceros at the San Diego Zoo Safari Park (died December 2014).[96] Currently there are only three remaining alive on Earth.[97]

The loss of species from ecological communities, known as defaunation, is primarily driven by human activity.[5] This has resulted in empty forests, ecological communities depleted of large vertebrates.[98][24] Defaunation is not to be confused with extinction, as it includes both the disappearance of species and declines in abundance.[99] Defaunation effects were first implied at the Symposium of Plant-Animal Interactions at the University of Campinas, Brazil, in 1988, in the context of neotropical forests.[100] Since then, the term has gained broader usage in conservation biology as a global phenomenon.[101][100]

Big cat populations have severely declined over the last half-century and could face extinction in the following decades. According to IUCN estimates: lions are down to 25,000, from 450,000; leopards are down to 50,000, from 750,000; cheetahs are down to 12,000, from 45,000; tigers are down to 3,000 in the wild, from 50,000.[102] A December 2016 study by the Zoological Society of London, Panthera Corporation and Wildlife Conservation Society showed that cheetahs are far closer to extinction than previously thought, with only 7,100 remaining in the wild, and crammed within only 9% of their historic range.[103] Human pressures are to blame for the cheetah population crash, including prey loss due to overhunting by people, retaliatory killing from farmers, habitat loss and the illegal wildlife trade.[104]

The term pollinator decline refers to the reduction in abundance of insect and other animal pollinators in many ecosystems worldwide beginning at the end of the twentieth century, and continuing into the present day.[105] Pollinators, which are necessary for 75% of food crops, are declining globally in both abundance and diversity.[106] A 2017 study led by Radboud University's Hans de Kroon indicated that the biomass of insect life in Germany had declined by three-quarters in the previous 25 years. Participating researcher Dave Goulson of Sussex University stated that their study suggested that humans are making large parts of the planet uninhabitable for wildlife. Goulson characterized the situation as an approaching "ecological Armageddon", adding that "if we lose the insects then everything is going to collapse."[107]

Various species are predicted to become extinct in the near future,[109] among them the rhinoceros,[110] primates,[95] pangolins,[111] and giraffes.[112][113] Hunting alone threatens bird and mammalian populations around the world.[114][115][116] Some scientists and academics assert that industrial agriculture and the growing demand for meat are contributing to significant global biodiversity loss, as these are significant drivers of deforestation and habitat destruction; species-rich habitats, such as significant portions of the Amazon region, are being converted to agriculture for meat production.[6][117][118][119] A 2017 study by the World Wildlife Fund (WWF) found that 60% of biodiversity loss can be attributed to the vast scale of feed crop cultivation required to rear tens of billions of farm animals.[120] Moreover, a 2006 report by the Food and Agriculture Organization (FAO) of the United Nations, Livestock's Long Shadow, also found that the livestock sector is a "leading player" in biodiversity loss.[121] According to the WWF's 2016 Living Planet Index, global wildlife populations have declined 58% since 1970, primarily due to habitat destruction, over-hunting and pollution; the WWF projects that if current trends continue, 67% of wildlife could disappear by 2020.[122][123] The 189 countries that are signatories to the Convention on Biological Diversity (Rio Accord)[124] have committed to preparing a Biodiversity Action Plan, a first step toward identifying specific endangered species and habitats, country by country.[125]


Recent extinction

Recent extinctions are more directly attributable to human influences, whereas prehistoric extinctions can be attributed to other factors, such as global climate change.[4][5] The International Union for Conservation of Nature (IUCN) characterises 'recent' extinctions as those that have occurred past the cut-off point of 1500,[127] and at least 875 species went extinct between that time and 2012.[128] Some species, such as the Père David's deer[129] and the Hawaiian crow,[130] are extinct in the wild, surviving solely in captive populations. Other species, such as the Florida panther, are ecologically extinct, surviving in such low numbers that they essentially have no impact on the ecosystem.[131]:318 Other populations are only locally extinct (extirpated), still existing elsewhere but reduced in distribution,[131]:75–77 as with the extinction of gray whales in the Atlantic[132] and of the leatherback sea turtle in Malaysia.[133]

Habitat destruction

Bramble Cay melomys were declared extinct in June 2016. This is likely the first mammalian extinction due to anthropogenic climate change.[134]

Global warming is widely accepted as a contributor to extinction worldwide, much as previous extinction events generally included a rapid change in global climate and meteorology. It is also expected to disrupt sex ratios in many reptiles that have temperature-dependent sex determination.
Satellite image of rainforest converted to oil palm plantations.[135]

The clearing of land for palm oil plantations releases carbon emissions held in the peatlands of Indonesia.[136][137] Palm oil mainly serves as a cheap cooking oil,[138] and also as a (controversial) biofuel. Damage to peatland contributes 4% of global greenhouse gas emissions, and 8% of those caused by burning fossil fuels.[139] Palm oil cultivation has also been criticized for other environmental impacts,[140][141] including deforestation,[142] which has threatened critically endangered species such as the orangutan.[143][144] The IUCN stated in 2016 that the species could go extinct within a decade if measures are not taken to preserve the rainforests in which they live.[145] Tree-kangaroos are also threatened with extinction as a result of palm oil deforestation.[146]

Rising levels of carbon dioxide are driving an influx of this gas into the ocean, increasing its acidity. Marine organisms that possess calcium carbonate shells or exoskeletons experience physiological stress as the carbonate reacts with acid. This is already contributing to coral bleaching on various coral reefs worldwide, which provide valuable habitat for very high biodiversity. Marine gastropods, bivalves and other invertebrates are also affected, as are the organisms that feed on them.

Some researchers suggest that by 2050 there could be more plastic than fish in the oceans by weight.[37]


The Vaquita, the world's most endangered marine mammal, has been reduced to only 30 individuals as of February 2017. They are often killed by commercial fishing nets.[147] The species could be extinct by autumn of 2017, according to WWF conservationists.[148]

Overhunting can reduce the local population of game animals by more than half, as well as reducing population density, and may lead to extinction for some species.[149] Populations located nearer to villages are significantly more at risk of depletion.[150][151]

The surge in the mass killings by poachers involved in the illegal ivory trade along with habitat loss is threatening African elephant populations.[152][153] In 1979, their populations stood at 1.7 million; at present there are fewer than 400,000 remaining.[154] Prior to European colonization, scientists believe Africa was home to roughly 20 million elephants.[155] According to the Great Elephant Census, 30% of African elephants (or 144,000 individuals) disappeared over a seven-year period, 2007 to 2014.[153][156] African elephants could become extinct by 2035 if poaching rates continue.[113]
The collapse of Atlantic cod off the coast of Newfoundland in 1992 as a result of overfishing. The population never recovered, completely altering the ecosystem and rendering the species locally extinct.

Fishing has had a devastating effect on marine organism populations for several centuries, even before the explosion of destructive and highly effective fishing practices like trawling.[157] Humans are unique among predators in that they regularly prey on other adult apex predators, particularly in marine environments;[38] bluefin tuna, blue whales, and various sharks are particularly vulnerable to predation pressure from human fishing. A 2016 study published in Science concluded that humans tend to hunt larger species, and that this could disrupt ocean ecosystems for millions of years.[158]


Toughie, the last Rabbs' fringe-limbed treefrog, died in September 2016.[160] The species was killed off from the chytrid fungus Batrachochytrium dendrobatidis[161]

The decline of amphibian populations has also been identified as an indicator of environmental degradation. In addition to habitat loss, introduced predators and pollution, chytridiomycosis, a fungal infection thought to have been accidentally spread by human travel,[3] has caused severe population drops in several species of frogs, including (among many others) the extinction of the golden toad in Costa Rica and the gastric-brooding frog in Australia. Many other amphibian species now face extinction, including the reduction of the Rabbs' fringe-limbed treefrog to an endling and the extinction of the Panamanian golden frog in the wild. Chytrid fungus has spread across Australia, New Zealand, Central America and Africa, including countries with high amphibian diversity such as the cloud forests of Honduras and Madagascar. Batrachochytrium salamandrivorans is a similar infection currently threatening salamanders. Amphibians are now the most endangered vertebrate group, having existed for more than 300 million years through three other mass extinctions.[3]:17

Millions of bats in the US have been dying off since 2012 due to a fungal infection spread from European bats, which appear to be immune. Population drops have been as great as 90% within five years, and extinction of at least one bat species is predicted. There is currently no form of treatment, and such declines have been described as "unprecedented" in bat evolutionary history by Alan Hicks of the New York State Department of Environmental Conservation.[citation needed]